Tuesday, June 26, 2012

Using Profile Pics Elsewhere

UPDATE: Added section on Google+ profile pics

Just discovered a couple of useful tips for embedding your Twitter or Facebook profile picture elsewhere. Apparently, you can use the APIs to pull the images out.

For Facebook: http://graph.facebook.com/<username>/picture (where <username> is the username of the Facebook user whose profile picture you want to display) will display that user's current profile picture.  That URL doesn't even have to be updated when the user changes his/her profile picture.
For example:
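Something along these lines should do it (the username "someuser" here is just a placeholder):

```html
<!-- "someuser" is a placeholder; substitute the actual Facebook username -->
<img src="http://graph.facebook.com/someuser/picture" alt="Facebook profile picture">
```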

For Twitter: http://api.twitter.com/1/users/profile_image/<username> (where <username> is the username of the Twitter user whose profile picture you want to display) will display that user's current profile picture.
For example:
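With a hypothetical Twitter username of "someuser":

```html
<!-- "someuser" is a placeholder; substitute the actual Twitter username -->
<img src="http://api.twitter.com/1/users/profile_image/someuser" alt="Twitter profile picture">
```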

Turns out Google+ also has a way of doing it: https://s2.googleusercontent.com/s2/photos/profile/{id} (where {id} is the big long ID number that G+ assigns to each user).  Don't forget to add some height and width attributes to your img tag as the profile pic from G+ can be fairly large.
For example:
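With a made-up G+ ID, and the height and width attributes mentioned above:

```html
<!-- 1234567890 is a placeholder; substitute the actual Google+ profile ID -->
<img src="https://s2.googleusercontent.com/s2/photos/profile/1234567890"
     width="48" height="48" alt="Google+ profile picture">
```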

I know, my profile pictures aren't that interesting, but that's only because we recently had professional photos done and it made sense to use one of them as both my Facebook and Twitter profile pictures (notice they're actually different, given the size restrictions on each service).  As far as I know, LinkedIn doesn't have anything quite as easy as this.  You have to make two calls to get the data: one using your own LinkedIn credentials to make sure you have permission to view the photo, then another to actually retrieve the photo.  Kinda sucks if you ask me.  Maybe they'll wise up.

Tuesday, June 19, 2012

NetVoyant Rollups: Sums, Maximum, Percentiles, etc.

For most situations out there, the default rollup is perfectly fine.  What I mean is that when you add an expression to a dataset, the default rollup (which is an average) is exactly what someone would be looking for in a rollup.  If I show top interfaces for an hour, I'd like to sort those interfaces by the highest average utilization, which means I want NV to take an average of the utilization data points during that hour.

However, in some situations, it may be more useful to calculate a different rollup.  For example, I could have NV calculate both the average of all the data points collected in the last hour and the standard deviation, so that I know how consistent my utilization is.  A higher standard deviation means at least some points are far away from the average.  I could also have NV calculate the maximum or a percentile of all the points from the last hour.  By adding max and percentile to a view, I can see more clearly what is happening on an interface.

One other situation is volume.  If you're polling some OID for some kind of volume (KB or MB), the first thing you should do in your expression is put it in bytes.  This allows you to take advantage of the auto-scaling feature in the views, which means that instead of showing numbers like 12000000 along the dependent axis, NV can display something like 12.  You'd then put {Scale} in the axis label so that KB, MB, GB, etc. is displayed, indicating the unit.
The next thing you'd do for volume is change the rollup.  Obviously, if you're tracking volume, an average of all the points collected in the last hour is useless.  What you really want is a sum of the volume in the last hour.  To do this, remove all rollup types.

Did I mention how to do that?  I guess I didn't.  Edit the expression and click the Advanced button, then uncheck all the rollup checkboxes so that the rollup is a sum instead of an average.

Another trick about rates:
If you're polling an OID and want to convert it to a rate, create a new expression and divide the expression by the variable 'duration'.  Duration is always equal to the number of seconds in a poll cycle.  Technically, it's the number of seconds since the last poll, so you do have to be a little careful about that.
Again, if your OID is in some unit like KB, convert it to bits (KB*1024*8).  Then when you divide by duration, you get bits per second.  By setting the view auto-scale to rate, NV will automatically convert it to the needed value (Kbps, Mbps, Gbps, etc.).
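To make the arithmetic concrete, here's a quick sketch in Perl (the 1500 KB delta and 300-second poll cycle are made-up numbers, just to illustrate the calculation NV performs):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Made-up sample values: a counter delta reported in KB and a 5-minute poll cycle
my $kb       = 1500;    # raw delta from the OID, in KB
my $duration = 300;     # seconds since the last poll

# KB -> bits, then divide by the poll duration to get a rate
my $bps = ($kb * 1024 * 8) / $duration;

print "$bps bits per second\n";    # prints 40960; NV's rate auto-scale would show ~41 Kbps
```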

Thursday, June 7, 2012

The most awesome new feature of Office 2010

Alright, I found my new favorite feature of Office 2010. I've had Office 2010 for a while, but everybody knows we all only use the features we used to use. Well, I have a new feature that I've added to my Quick Launch bar: Insert>>Screenshot>>Screenshot Clipping. The whole insert-screenshot feature is pretty cool and is available in Word, PowerPoint, Excel, and Outlook (it's probably everywhere, but I haven't found a good use for it in all of them).

When you first hit the Screenshot button in the ribbon, you get a drop-down containing thumbnails of all the windows you currently have open and not minimized. Clicking one of these thumbnails inserts a screenshot of that window at the cursor. While this is great by itself, perhaps the more useful feature (and the one I've pinned to my Quick Launch bar) is Screenshot Clipping. When you click on this, the current window is minimized and the whole screen goes grey. The mouse turns into a + cursor. Draw a box around any portion of the screen and, as soon as you let up on the mouse button, a picture of that portion of the screen is inserted at the cursor! It's completely awesome.

The reason it's completely awesome is the ease with which it accomplishes a task that would otherwise require multiple keystrokes/clicks (doing it the previously easiest way) or even a third program (if you did it the ancient way). The previously easiest way was to activate the window you wanted a screenshot of, press Alt+PrtScn, paste back into the Word doc or email, and use Word's image-cropping tool to crop out the parts not needed. This was pretty good, and I was always surprised at the number of people that did it the hard way.

The hard way involved using Windows 7's Snipping Tool. Launch it and (depending on the mode) you can capture the full screen, a window, a rectangle, or a freeform shape. Once you do this, the picture shows up in the Snipping Tool. If you've got it set up to, it also copies the capture to the clipboard so you can paste it wherever you want. While this works and gives flexibility to the whole process, I always found it tedious.

Anyway, I was so excited about this feature, I had to put up a blog post about it. Now if only Google would put something like that into the Blogger post-editing toolbar.

Friday, June 1, 2012

NetVoyant Duplicates Finder

UPDATE: This method has been replaced by the ODBC method.

I've been working for a while now on a good way to find and remove duplicates from NetVoyant.  Luckily, there is a web service that can delete devices (more on NetQoS web services).  All you need is the device IP address and the poller (to build the web service URL).  I played around for a while trying to build something in a Windows batch file and couldn't get it to do what I wanted.  So, I reverted to Perl (which I probably should have done from the beginning).  Anyway, the result is a script that can be run regularly on the NetVoyant master console.  The output is a CSV file and an HTML file.  The CSV file contains the output from the brains of the duplicate-finder script, namely a list of every device that exists more than once in the NV system, along with the device properties, including the poller.  The CSV file is output to the script directory.  The script can be configured to output the HTML file wherever you want.

After that, the script uses Perl to wrap the data in the CSV into an HTML widget.  The widget shows the same data as the CSV, as well as a link on every line to delete the device.  As long as the NV pollers resolve by name, the link should work to delete the device and its corresponding scope.  If you only want the CSV, edit the batch file and comment out the call to the Perl script (i.e. put 'rem' in front of the line that starts with the word 'perl').
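For the curious, the wrapping step looks something like this minimal sketch.  The CSV file name, the column order, and especially the delete URL here are placeholders, not the real script's values:

```perl
#!/usr/bin/perl
# Minimal sketch only: read a duplicates CSV and emit an HTML table with a
# delete link per row.  Column order and the delete URL are assumptions.
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ binary => 1 }) or die Text::CSV->error_diag;
open my $in, '<', 'duplicates.csv' or die "duplicates.csv: $!";

print "<table>\n";
while (my $row = $csv->getline($in)) {
    my ($ip, $poller) = @$row[0, 1];    # assumed: IP in column 1, poller in column 2
    # <web-service-path> is a placeholder for the actual NetQoS delete-device URL
    print "<tr><td>$ip</td><td>$poller</td>",
          "<td><a href=\"http://$poller/<web-service-path>?device=$ip\">delete</a></td></tr>\n";
}
close $in;
print "</table>\n";
```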

If you do want the HTML, you'll need to install Strawberry Perl and download a couple of modules.  Installing Strawberry Perl on the NetQoS boxes isn't a new thing.  Most of the developers and support guys have Perl installed on their test boxes and I've had it installed on many customers' boxes.  The install doesn't require a reboot and you can take all the defaults.  After doing the install, open a command prompt and type the following:
D:\>cpan Text::CSV
D:\>cpan Text::CSV_XS
Perl will download and install the necessary modules and return you to the command prompt when it's done.

After that, all you need to do is download the zip and extract the files to somewhere on your NVMC.  Set up a scheduled task to run the batch file every so often.  The web page doesn't update unless the script runs (it doesn't refresh the list of duplicate devices simply by refreshing the page).

To get the script to output the html file to somewhere other than the script directory, go to the makehtml.pl file and modify the line that starts with 'my $outputfile = ' and update the output file path and name.  For example:
my $outputfile = 'D:\\NetVoyant\\Portal\\WebSite\\dupslist.html';
Perl requires a double backslash since a single backslash is the escape character.

That's it.  You're done.  You can use the browser view to put the resulting html file on an NPC page if you've designated a destination that is served up by the NVMC's IIS web service.

Enjoy!  If you have improvements, please let me know so I can update the source.

P.S. If you don't have internet access on the box, you won't be able to install the Text::CSV modules (since they come from the internet).  The solution is to download the Text::CSV and Text::CSV_XS tarballs and extract them using WinZip, WinRAR, or 7-Zip.  You might need to extract several times until you get just the folder with the files in it.  Then copy them to the NVMC.  Open a command prompt and cd to the directory containing Makefile.PL (you'll have to do this for each one).  Then execute the following:
perl Makefile.PL && dmake && dmake test && dmake install
Do Text::CSV first, then Text::CSV_XS.