Thursday, August 29, 2013

My Wallpaper Collection

I have amassed a fairly large collection of wallpapers over the years.  At one point, I decided to start over since I was tired of all the wallpapers I had.  My new wallpaper collection only uses high resolution photos from themes or other sources that look really good on high resolution monitors.

If you want to have a copy of my wallpaper collection, you can get it using BTSync (more info here), which will also allow you to get any new wallpapers I add to my collection (I usually add 5-8 per week).  If you haven't already, install the app, then add a new sync folder.  Choose where you want my wallpapers to be stored, then put the following in for the secret:

BFVR62FRM2AOZT3TLBISA2MLNZJ4XO5CA

This should get you started downloading.  Whenever I add new wallpapers, you'll get them.  If there are any you don't want, just delete them.

If you need help setting up your wallpaper to cycle through these images, go here.  The only instruction I would add is in step 3, browse to the folder where the pictures are being downloaded.

Wednesday, August 21, 2013

Dropping Dropbox and Google Drive for BTSync & a Raspberry Pi

I thought this article deserved republication, so I decided to take the easy route with today's post and just link to Jack's blog post.  In his post, Jack describes how he decided to drop his cloud file syncing app in favor of something run in-house.  The best part about it is that it works just as well as Dropbox or Google Drive, but without putting your files on any device other than your own.  The advantage of this, as I've posted before, is that you can sync an unlimited number and size of files.

I've done something similar to this using Western Digital's My Book Live Duo.  Since the MBLD runs on Linux (Lenny Squeeze), it was pretty easy to load BTSync and fire it up.  So, my NAS participates in the synchronization of my folders from my desktop to my laptop.  Even if my desktop is offline (or in standby), I can still sync between my laptop and my NAS (over the internet too), and when my desktop comes online it'll sync up with the NAS and laptop.
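For anyone trying the same thing on their own Linux box or NAS, the headless btsync daemon is driven by a JSON config file.  This is only a sketch of the general shape (the device name, paths, and secret here are placeholders, not my actual values); running btsync --dump-sample-config prints the full annotated template:

```json
{
  "device_name": "MyNAS",
  "listening_port": 0,
  "storage_path": "/usr/local/btsync/.sync",
  "shared_folders": [
    {
      "secret": "PUT_YOUR_FOLDER_SECRET_HERE",
      "dir": "/shares/Public/Sync",
      "use_relay_server": true,
      "use_tracker": true,
      "search_lan": true
    }
  ]
}
```

Point the daemon at it with btsync --config and the folder will stay in sync with every other device holding the same secret.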

Friday, August 16, 2013

Device level context switching

UPDATE: I've developed some code to make this a regular view that can be dragged onto the page without any configuration required.  The following code will create the standalone view and add it to all four device context pages:


UPDATE: I've rewritten this widget to make it easier to implement.  Now instead of having to specify the {Item.ItemID} variable in the browser view URL, the widget just grabs the information from the parent URL.  This is also better because any additional arguments you had in the URL will continue through to the other context pages.  Here's the updated code:
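The updated code was embedded separately; here's a sketch of how it can work (my reconstruction under the description above, not necessarily the exact original).  It would live inside the same kind of html/script wrapper as the first version.  The swapPage helper rewrites the pg argument in the parent page's query string while leaving every other argument intact, which is why extra URL arguments carry through:

```javascript
// Hypothetical helper: swap the pg argument in a query string for the
// target context page, preserving all other arguments.
function swapPage(query, pg) {
  return query.replace(/pg=[a-z]+/i, 'pg=' + pg);
}

// Browser-only portion: read the parent frame's query string (same
// origin, since the widget is hosted on NPC's own web server) and
// write out one link per context page.
if (typeof document !== 'undefined') {
  var parentQuery = parent.location.search.replace(/^\?/, '');
  var pages = [['r', 'Router'], ['sw', 'Switch'], ['d', 'Device'], ['s', 'Server']];
  for (var i = 0; i < pages.length; i++) {
    document.write('<a target="_top" href="/npc/Default.aspx?' +
      swapPage(parentQuery, pages[i][0]) + '">' + pages[i][1] + '</a> ');
  }
}
```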


Now all you have to do is point to this widget in your custom content directory.

Enjoy!

You may not know about it, but NPC classifies every device as either a router, switch, server, or device.  The device category is a catch-all for every type of device that isn't a router, switch, or server.  This is too bad because NetVoyant actually has an extendable list of device classifications; you can make as many as you want.  However, any additional classes will show up in NPC as 'devices' because NPC doesn't understand them.  This is fine in most cases, but certain cases will cause problems.

For example, if I have an F5 load balancer and I'm monitoring the device in SuperAgent as well as NetVoyant, NPC has to choose whether to classify the device as a server (as SuperAgent reports it) or as a device (since NetVoyant either classifies it as 'other' or 'load balancers' if you've classified it).  Turns out the NV classification is last on the list.  If a device is monitored by RA or SA, NPC will classify it as a router or server, respectively, regardless of what classification exists in NV.

In this case, what I usually do is instruct customers how to switch from one context page to another after drilling in.  For example, after I drill into the F5 and get to the server page, I would update the URL to read pg=d instead of pg=s.  This loads the device page for the F5 instead of the server page.  This can be handy since the device page may have specific F5 views on it that don't appear on the server page.
In order to make this easier, I built a simple html page that can be loaded into a browser view that will allow quick switching between all four context view types.  Here's the page:

<html>
<script type="text/javascript">
var url1='<a target="_top" href="/npc/Default.aspx?pg=';
var str=location.search;
str=str.replace("?ItemID=","");
document.write(url1 + 'r' + '&DeviceID=' + str + '">Router</a> ');
document.write(url1 + 'sw' + '&DeviceID=' + str + '">Switch</a> ');
document.write(url1 + 'd' + '&DeviceID=' + str + '">Device</a> ');
document.write(url1 + 's' + '&DeviceID=' + str + '">Server</a> ');
document.write('&nbsp;<a target="_blank" href="http://stuart.weenig.com/2012/08/device-level-context-switching.html"><img src="/npc/images/DialogQuestion.gif" border=0></a>');
</script>
</html>

Link to this page from a browser view with a title like 'View this device as a...' and a URL like this:
/content/viewdeviceas.html?ItemID={Item.ItemID}
As long as this page is named 'viewdeviceas.html' and it's hosted under a virtual directory on NPC's IIS web server with an alias of 'content' it should load just fine.  Give it a height of 33, turn off the border and hide the scroll bars.  This makes an excellent small browser view that can go right at the top of the page, displayed right under the page tabs.

Thursday, August 15, 2013

Using Distributions to show Performance of Multiple Objects on a Time Scale

Many people building custom views in NV will no doubt build one of two types of views: Details Trend or Management TopN.  Unfortunately, this bypasses some of the cooler views like the distribution views.  Consider this scenario: I have multiple third party devices and the manufacturer has provided a special MIB to monitor CPU utilization (instead of doing the smart thing like publishing their CPU statistics into the hrProcessor or UC Davis MIB OIDs).  So, I now have the opportunity to build a custom dataset to pull in the CPU utilization for these devices.  (Side note: I should probably republish my instructions on how to build a custom dataset.)
After I build the dataset, I'll start building my views.  Let's suppose that the vendor has only provided the CPU utilization as an average of all the CPUs on the device or that the device will only ever have one CPU.  The end result is that there is only one poll instance per device for that dataset.  This means that I'll only really build views on the device level and configure the views to drill down to the device page instead of the poll instance page.  After building the appropriate trends on the device page, I'd go to an overview page and build a table or bar chart to show the devices with the highest CPU utilization.  All of this is great and normal and is what most people do when building views for this kind of data.
The problem with stopping here is that there is no way to look at multiple devices over a period of time and see how the devices were performing within that timeframe.  The reason for this is that a TopN table or bar chart will display the rollup (usually the average) of the metric within the timeframe.  In the case of my custom dataset, I'd see the average CPU utilization over the last hour, last day, last week, etc.  This is ok as long as I pick one of the standard timeframes.  Notice what happens when you pick last 4 hours in NPC.  A table or bar chart will only do last hour.  That's because NV hasn't pre-calculated rollups on a 4-hour basis.  So, it becomes important to show the performance of the metric over time showing the values within the timeframe, be it a standard rollup period or not.
That's where distribution views can help.  While they don't necessarily show the value of each one of the poll instances analyzed, they do categorize the metric into groups.  For example, I could build a distribution view to group the metrics like: 0-25%, 25-50%, 50-75%, 75-95%, and over 95%.  In this case, NPC would look at all the data during the timeframe (if last hour with 5 minute polling, it will look at 12 data points for each poll instance included in the context) and categorize each data point into one of the buckets I've defined.  The end result is a trend plot over time showing how many devices are in which buckets at each point in time.
Users need to be instructed in the proper way to interpret the view.  If the view is set up properly, the undesirable buckets will have more extreme colors (reds and oranges).  When a user sees a time period in which a larger number of devices are in the undesirable buckets, they should understand that a large number of devices have experienced higher CPU utilization.  If 10 devices' CPU utilization goes from 20% to 60%, the bars before the increase will show 10 devices in the 0-25% bucket while the bars after the increase will show 10 devices in the 50-75% bucket.  NPC also calculates the percentage of total devices in each bucket.  So, if half of my devices are in the 50-75% range, a mouseover will reveal 50% in that bucket.
This visualization can be equated to creating a pie chart for each poll cycle.  If you look at one poll cycle for all the devices and created a pie chart with 5 slices, it would be easy to understand how many devices need attention.  Imagine taking the crust off the pie, stretching it out flat and stacking it next to the pie crusts for the other poll cycles in the period.
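As an illustrative sketch (my own code, not NPC's), here's how one poll cycle's data points land in those buckets:

```javascript
// Bucket boundaries matching the example above; values at a boundary
// fall into the higher bucket.
var buckets = [
  { label: '0-25%',    min: 0,  max: 25 },
  { label: '25-50%',   min: 25, max: 50 },
  { label: '50-75%',   min: 50, max: 75 },
  { label: '75-95%',   min: 75, max: 95 },
  { label: 'over 95%', min: 95, max: 101 }
];

// Count how many data points (one per device) fall into each bucket.
function bucketize(values) {
  var counts = buckets.map(function () { return 0; });
  values.forEach(function (v) {
    for (var i = 0; i < buckets.length; i++) {
      if (v >= buckets[i].min && v < buckets[i].max) { counts[i]++; break; }
    }
  });
  return counts;
}

// Ten devices jump from ~20% to ~60% CPU between two poll cycles:
var before = bucketize([20, 21, 19, 22, 20, 18, 23, 20, 21, 22]);
var after  = bucketize([60, 61, 59, 62, 60, 58, 63, 60, 61, 62]);
// before puts all ten devices in the 0-25% bucket; after puts all
// ten in the 50-75% bucket, which is exactly what the stacked bars show.
```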
One disadvantage to the distribution charts is that they lack drill down.  So, while a distribution is good for a summary page, a table showing the rollups over the same timeframe will be helpful to identify which devices are experiencing the higher CPU utilization.  This table would allow drill down to the device page where the individual trend plot could be analyzed individually.  It could also be compared to the rest of the data being gathered by NV for the device.

Cycling DIVs on a web page

Just before my boys were born, I installed a Foscam IP Camera on the ceiling of their room.  I have enjoyed being able to check on the boys as they're napping or sleeping in their cribs.  The model I got has pan/tilt capabilities and a microphone built in.  So, I can hear them breathing or crying and focus on one or the other.  I can do all this without opening their door.  Also, since it's mounted on the ceiling, they don't ever notice it.  It's equipped with a bank of IR LEDs around the lens that turn on whenever the room is too dark, so I can even look in on them when their room is completely dark.  I found a decent iPhone app that allows me to connect to the camera.  I could even hook up some speakers to the camera and use a feature of the app to talk to the boys in their room.  Not much different than a baby monitor.

Since then, a three pack of outdoor cameras went on sale, so I went ahead and purchased them.  I mounted them on my front porch and the back corner of the house, overlooking the back door and the side yard approaching the gate.  The cameras have motion detection and automatic file uploading, so I get a picture every time the lawn guys approach the gate and any time a solicitor approaches the front door.  I also built a simple html page displaying the feed from all four cameras on one page.  Unfortunately, at the highest resolution (and why wouldn't I want the highest resolution?) the four feeds don't fit on a single page unless I scale the page to 75%.  This is easy to do, and Chrome even remembers to automatically scale that page down to 75% whenever I look at it.  This has worked well, but I've always wanted a better way.

Yesterday, I finally got the tricky parts of what I really wanted to do worked out.  The goal was to have a web page that would show all four feeds but only show one at a time.  The page should cycle through each video feed and stay on it for a few seconds before moving on to the next feed.  You wouldn't expect this to be too difficult and in the end, it really wasn't.  This is the first version and the intent of this post is not to show the finished code for my page but to show how DIV elements on a web page can be cycled.

To start with, here is the html page with the DIV elements:


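A minimal sketch of such a page (the camera URLs are placeholders for your own feeds, and the hidden 'nextdiv' element is the scratch space the javascript uses to remember which feed comes next):

```html
<html>
<body>
  <!-- Container holding one DIV per camera feed; the first starts
       visible and the rest hidden (the script resets this anyway). -->
  <div id="container">
    <div style="display:block"><img src="http://camera1.example/videostream.cgi"></div>
    <div style="display:none"><img src="http://camera2.example/videostream.cgi"></div>
    <div style="display:none"><img src="http://camera3.example/videostream.cgi"></div>
    <div style="display:none"><img src="http://camera4.example/videostream.cgi"></div>
  </div>
  <!-- Hidden scratch DIV storing the index of the next feed to show -->
  <div id="nextdiv" style="display:none">0</div>
</body>
</html>
```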
In my situation, each DIV contains the code to display the video feed from a single camera. The first DIV starts out visible while the rest are hidden. This doesn't really matter since the first time the javascript runs it will reset everything anyway. Technically, all four DIVs could start out hidden or displayed.

Here is the javascript code itself.  The comments should make it pretty self explanatory.  The very bottom starts the function running.  At the end of the function, the setTimeout command instructs the browser to call the function again after 1000 milliseconds (one second).  This could be modified to read from an input box so that the user can adjust the cycle speed.  I guess I could look for some gauges out there to make it pretty.  I had to get a little sloppy by storing the next div index in a hidden div.  This was the easiest way that I, as a non-programmer, could avoid problems with global vs. local variables in the function.  Theoretically, this could be used for any number of DIVs within the container DIV.  There are some fun transition effects available in WebKit and through CSS3, but like I said, that's not what this post is about.
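Here's a sketch of that javascript (a reconstruction along the lines described above, not necessarily the exact original):

```javascript
// Pure helper: index of the next DIV to show, wrapping back to 0.
function nextIndex(current, total) {
  return (current + 1) % total;
}

// Browser-only portion, guarded so the helper can be exercised on its own.
if (typeof document !== 'undefined') {
  function cycleDivs() {
    // All camera DIVs live directly inside the container DIV.
    var divs = document.getElementById('container').getElementsByTagName('div');
    // The index of the next DIV to show is kept in a hidden DIV to
    // avoid juggling global vs. local variables.
    var holder = document.getElementById('nextdiv');
    var i = parseInt(holder.innerHTML, 10) || 0;
    // Show the current DIV, hide the rest.
    for (var j = 0; j < divs.length; j++) {
      divs[j].style.display = (j === i) ? 'block' : 'none';
    }
    holder.innerHTML = nextIndex(i, divs.length);
    // 1000 ms = 1 second per feed; raise this to linger longer.
    setTimeout(cycleDivs, 1000);
  }
  cycleDivs();  // start the cycle
}
```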

Thanks to SirLagz for helping me debug my code and check my syntax (it's amazing what a missing double quote will do)!

Tuesday, August 13, 2013

Ripping Movies to iTunes

I recently acquired an Apple TV.  No, it wasn't a Christmas gift, it was a result of winning a $50 Apple store gift card.  I had originally thought it was an iTunes gift card, but apparently, they're not the same thing and can't be used interchangeably.  Since I had been thinking about getting an Apple TV for over a year now, I figured this was the best opportunity.  I figured now that I have an XBOX, adding an Apple TV would eliminate the need of having a PC connected to my TV.  Up 'til now, I've been using a PC connected to the TV for Windows Media Center (through which I get TV DVR functionality).  That also afforded me a couple extra features: Hulu Desktop, Netflix (through WMC), streaming from the iTunes library on my office PC, and any movies I'd ripped to my PC.  By adding an Apple TV to the mix, Netflix, iTunes streaming, and movies could be eliminated.  By using the XBOX as a media center extender, the only thing that I really needed the PC for was Hulu desktop.  Look for my next post to see how I fixed that problem.

So, at any rate, I didn't need the PC connected to the TV anymore, which was a relief.  I had been worried about overheating the PC (when the cabinet was closed) and since I had moved my main router to the same UPS as that PC, the UPS had complained of overload.  As a result, I shut down the PC and decided to use only the Apple TV and the XBOX for my entertainment needs.

All of that verbosity was to explain why I went through the process of learning how to rip DVDs to iTunes.  I already have a fairly extensive library of music, audiobooks, podcasts, ebooks, and apps in my iTunes on my office PC.  I decided that I would try to rip my DVD collection to iTunes so that I could just stream them from my office PC to my Apple TV.  This would also make it possible to take some movies/TV shows with me on my iPhone/iPad.

So, now, how to do it:

The first thing you have to do is understand the legality of this process.  This can only be done with DVDs you currently own.  You cannot do this with rental DVDs since the rental agreement does not include a fair use clause.  However, if you own the DVD, encoding it to a different format is the same as burning music you have bought from iTunes to a CD.  It all falls under the classification of 'fair use'.

The next thing you need to do is get prepared by installing two pieces of software: Handbrake and DVD Decrypter.  UPDATE: I've started using MakeMKV Beta instead of DVD Decrypter.  It supports ripping Blu-ray discs as well as DVDs.  See here for more information and another guide specifically tailored to Blu-ray discs.  You should probably be able to do this all using just Handbrake, but I've found that ripping the DVD to your hard drive first allows for more efficient encoding using a queue.

Once you've got those two installed, open DVD Decrypter.  You'll need to make a couple option changes in order to make this quicker and easier.  First, you need a folder on your hard drive where you'll store the raw DVD files.  I use C:\RIP\.  On the General tab, set this as the default destination folder and set the folder option to 'Semi Automatic'.  This should allow DVD Decrypter to automatically place each new DVD's files into its own folder.  Because I've been going through my entire DVD collection encoding movies off and on for a few weeks, I also like to tell DVD Decrypter to eject the DVD when finished.  On the Device tab, check the box that says 'Eject Tray after...Read'.  I also turn off the success noise: on the Sounds tab, uncheck the success sound.

Now pop in a DVD and click the big button in the bottom left corner of DVD Decrypter.  That should get things going.  You can now sit back for 15-20 minutes while the files are being ripped to your hard drive in raw format.

Once that is done, open Handbrake.  You need to tell Handbrake to automatically output the files to the automatic input folder for iTunes.  This makes it so that when encoding is finished, iTunes will automatically import the file.  Open up Tools>>Options and on the General tab check the 'Automatically name output files' checkbox and set the default path to "C:\Users\YourUserName\Music\Automatically Add to iTunes".  This may be slightly different on your system, but you should be able to find it.  Also, change the Format box to only read {source}.  You also might find it helpful to enable the 'Remove Underscores from Name' and 'Change case to Title Case' options.
Back in the main Handbrake window, click the Source drop down and select "Folder".  Browse to the folder where DVD Decrypter output the raw DVD files.  Handbrake will scan through the files and should select the longest title.  (FYI, in DVD nomenclature, a title is essentially a single video on the DVD.  You'll find that DVDs will usually contain one title for the main feature, one for each of the special features, and usually a couple small ones used as transition videos in the DVD menu.)
You'll only want to encode the main title (unless the DVD is a TV show in which case you'll want to use something like VLC media player to play the raw DVD files and determine which titles to encode).  Since I'd like the best quality video, I choose the 'High Profile' in Handbrake.  This sets all the settings up for the best possible quality.  It also results in a large file, so if you're low on hard drive space, this might not be an option.  I set this profile as the default so I don't have to pick it every time, and I also check the 'Large file size' option.  Check that the name of the output file is how you want it, then click the 'Add to Queue' button.

At this point, you can pop in the next DVD and repeat the process for the next movie.  I usually queue up 10-15 movies a day and let the encode run at night.  Encoding is a CPU intensive process and will usually max out your processor, making your computer fairly slow for anything else.  Run it at night when the encode is the only thing going on.
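For what it's worth, Handbrake also ships a command-line version (HandBrakeCLI) that can do the queueing as a loop over the ripped folders.  This is only a hypothetical sketch, not part of the workflow above: the paths are placeholders, the script creates a sample folder just so the loop has something to print, and the echo keeps it a dry run (remove it to actually encode):

```shell
# Dry-run sketch: print one HandBrakeCLI command per ripped DVD folder.
# --main-feature picks the longest title (like the GUI default) and
# "High Profile" is the built-in preset mentioned above.
RIP_DIR="${RIP_DIR:-/tmp/rip_demo}"
OUT_DIR="${OUT_DIR:-$HOME/Music/Automatically Add to iTunes}"
mkdir -p "$RIP_DIR/Sample_Movie"   # demo folder so the loop prints something
for d in "$RIP_DIR"/*/ ; do
  name=$(basename "$d")
  echo HandBrakeCLI -i "$d" --main-feature --preset "High Profile" \
       -o "$OUT_DIR/$name.m4v"
done
```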

Wednesday, August 7, 2013

Checking if a TCP Port is Open on a Remote Server using Powershell

This tip is related to a previous post about using a utility called TCPing.exe to check to see if a remote port is open (i.e. the service is listening and accepting connections).  Since Windows 2008 doesn't come with the telnet client installed and since some circumstances will mean that access to the internet isn't available (i.e. a server in a hardened data center), an alternative method is to use Powershell.

This can be wrapped into a script pretty easily, or if you just memorize it, you can impress whoever is looking over your shoulder.

So, memorize this bit (this will ping 192.168.0.1:80):


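A one-liner fitting that description uses the .NET TcpClient class (the two-argument constructor connects immediately, and .Connected reports the result):

```powershell
# Attempt a TCP connection to 192.168.0.1 on port 80.  On success this
# prints True; on failure the constructor throws a very visible error.
(New-Object Net.Sockets.TcpClient("192.168.0.1", 80)).Connected
```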
If the output is "True", then the tcping was successful.  If the connection is unsuccessful, it will be completely obvious in the output.

To wrap it into a script, create a text file like this (this will ping proxyserver.domain.com:8080 by default; pass the server name and port to override the default):
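Here's a sketch of such a script (the file name Test-Port.ps1 is my own choice; run it as .\Test-Port.ps1 servername port):

```powershell
param(
    [string]$Server = "proxyserver.domain.com",
    [int]$Port = 8080
)
try {
    # Constructor connects immediately or throws on failure.
    $client = New-Object Net.Sockets.TcpClient($Server, $Port)
    Write-Output $client.Connected   # True when the port accepted the connection
    $client.Close()
}
catch {
    Write-Output "False: could not connect to ${Server}:${Port}"
}
```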