Thursday, September 20, 2012

IM 2.0 on Linux vs. Windows

With the release of IM 2.0, I've been testing the installation of the various components on CentOS because my lab's investor (my wife) doesn't see the need to purchase a Red Hat license.  All the better anyway, since others might want to know whether CentOS is a viable option for installation to save on adoption costs.  Frankly, I'm not sure why CA decided to go with RHEL.
While it is probably the most popular Linux server operating system, all (I repeat, ALL) of the previous NetQoS software ran on Windows.  I'm not counting MTP since it's sold as an appliance not software.  The target audience for the original NetQoS products was the network engineer.  It has since bloomed to include application owners and server administrators.  However, if you look at the role of the person who normally administers and champions the NetQoS products, it's still a network engineer.
It is my opinion that network engineers are most familiar with two operating systems: Cisco IOS and Windows.  There will always be the case where the network engineer used to be on the server team but is now working on the network side.  While this obviously happens, I think there are just as many server-turned-network engineers who come from Linux/mixed environments (Windows & Linux; come on, even Linux-only environments have Exchange) as come from Windows-only environments.  So, my conclusion is that most network engineers will be most familiar with Cisco IOS and Windows (both from server OS and desktop OS experience).  IM 2.0 should have been released on Windows.
There is another possible reason to use Linux over Windows: speed.  I agree with this argument.  Even with CentOS, I can turn off the GUI and save the resources that would otherwise be dedicated to displaying a locked screen 99.999% of the time.  However, the minimum RAM requirement for IM 2.0 is 4GB.  What!? I thought Linux was a better performer and could get away with not having as much RAM.  Well, it turns out that even in a lab environment monitoring a very small infrastructure, 3GB isn't always enough.  The fact that I installed DA/DR on a box with only 1GB was pointed to as a possible reason why I was seeing problems on my installation.  Wait guys, if I have to dedicate a ton of resources anyway, why don't we just run it on Windows?
Wasn't IM 2.0 supposed to be developed in Java?  If that's the case, why does the OS even matter?  Shouldn't it be a fairly trivial matter to build installers for all of the major operating systems?

I'm not a developer, so you really shouldn't be reading any of this without your tongue firmly in your cheek.  But still.

Really?
I have to learn Linux?
Really?
I have to purchase RHEL?
Really?
I have to dedicate at least 4GB of RAM in a lab environment?
Really?

Wednesday, August 15, 2012

Deleting Unused Views from NPC

I posted this on the community, but I've had to look it up a couple times and always came back to my blog to look for it before I went to the community.  That told me I should have it posted here.

If you've used the custom view wizard to create views in NPC, you may have noticed that there's no way to delete views.  Also, if you've ever installed an ePack or used the nqwebtool (not recommended, doesn't work with all versions), NPC views will get created.  There may come a time when you want to delete those views; for example, if you've deleted the dataset the view is tied to or if you recreated a view that already exists and you don't need the duplicate.  Like I said, there's no way to do this in the GUI, but there is a way to do it through the database.

Standard warnings apply, don't do this, it will break your stuff, you're on your own, back up your stuff, don't cry to me, don't tell support I told you this would work without a problem, you're on your own, etc. etc. etc.

You should be able to delete views from NPC by creating a page (preferably not in your 'my pages' menu) and putting the views you want to delete onto that page. Then execute a couple of queries that delete the information from the database for all the views on that page. You'll need the page id of the page you created; just look in the URL for the pageid. It should be a number, usually 5 digits, like 36598.

In the following example, that's the pageid I'll use. These queries will show you what will be deleted:
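Roughly speaking (the control_properties table is the same one used later in this post; controls and page_controls are my shorthand here, so verify the actual table names against your own NPC database before running anything), the SELECTs look something like this:

-- Sketch only: controls and page_controls are assumed table names
-- List the views sitting on the scratch page (36598) and their properties
select * from controls
 where controlid in (select controlid from page_controls where pageid=36598);
select * from control_properties
 where controlid in (select controlid from page_controls where pageid=36598);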

These queries will execute the deletion:
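The deletes follow the same pattern (again, controls and page_controls are assumed names; back up the NPC database before running any of this):

-- Sketch only: same assumed schema as the SELECTs above
delete from control_properties
 where controlid in (select controlid from page_controls where pageid=36598);
delete from controls
 where controlid in (select controlid from page_controls where pageid=36598);
delete from page_controls where pageid=36598;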

This doesn't delete views created on the device, router, server, switch, or any of the poll instance context pages.  Those would require a bit more work.  The following query should return the ID numbers of any views in NPC that no longer have the specified dataset:
select distinct controlid from control_properties where propertyname='DataSetName' and propertyvalue='<dataset_short_name>';

Where <dataset_short_name> is the short name of the dataset (e.g. avail as opposed to 'Device Availability').

If we combine that query with a query to NV to get a list of the datasets, we can easily make sure to catch all views for obsolete datasets.  The problem is that this is a bit more complex than it sounds, because NV has some contexts that aren't in the dataset list.  So, get the current dataset list with this query on the NVMC:
select dataset_name from datasets;

Then change the results from something like this:
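The raw output is just one dataset name per line (abbreviated here):

aimdc
aimhost
aimvm
avail
ciscoMemPool
...
rttstats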

Into something that can be used in the query, like this:
'aimdc','aimhost','aimvm','avail','ciscoMemPool','ciscoSwitch','ciscoSystem','dsx1near','dsx3near','etherlikepaus','etherstats','frcircuit','hrdevice','hrprocessor','hrstorage','hrswrun','ifstats','nbarstats','protodist','qosclass','qoscolor','qosiphc','qosmatch','qospolice','qosqueue','qosred','qosset','qosts','reach','rtthttp','rttjitter','rttstats'

Then add the following to the end of that list:
'rttstats,rttjitter,rtthttp','event_log','event_list','rttstatscap:operations','ciscoProcess'

So it reads like this:
'aimdc','aimhost','aimvm','avail','ciscoMemPool','ciscoSwitch','ciscoSystem','dsx1near','dsx3near','etherlikepaus','etherstats','frcircuit','hrdevice','hrprocessor','hrstorage','hrswrun','ifstats','nbarstats','protodist','qosclass','qoscolor','qosiphc','qosmatch','qospolice','qosqueue','qosred','qosset','qosts','reach','rtthttp','rttjitter','rttstats','rttstats,rttjitter,rtthttp','event_log','event_list','rttstatscap:operations','ciscoProcess'


So, to identify all the views in NPC that aren't tied to one of these datasets, do this query on NPC:
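Something along these lines should do it (the IN list is the dataset list built above; remember, it's specific to my installation):

select distinct controlid from control_properties
 where propertyname='DataSetName'
   and propertyvalue not in ('aimdc','aimhost','aimvm','avail','ciscoMemPool','ciscoSwitch',
       'ciscoSystem','dsx1near','dsx3near','etherlikepaus','etherstats','frcircuit','hrdevice',
       'hrprocessor','hrstorage','hrswrun','ifstats','nbarstats','protodist','qosclass','qoscolor',
       'qosiphc','qosmatch','qospolice','qosqueue','qosred','qosset','qosts','reach','rtthttp',
       'rttjitter','rttstats','rttstats,rttjitter,rtthttp','event_log','event_list',
       'rttstatscap:operations','ciscoProcess');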

To delete all these views, their properties, and remove them from any pages they may be on, use these:
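Here's a sketch of those, using a temporary table so we aren't deleting from control_properties while selecting from it in the same statement (controls and page_controls are assumed table names; verify them against your schema and back up first):

-- Sketch only: verify table names against your NPC schema before running
create temporary table orphan_views
  select distinct controlid from control_properties
  where propertyname='DataSetName'
    and propertyvalue not in ('aimdc','aimhost','aimvm','avail','ciscoMemPool','ciscoSwitch',
        'ciscoSystem','dsx1near','dsx3near','etherlikepaus','etherstats','frcircuit','hrdevice',
        'hrprocessor','hrstorage','hrswrun','ifstats','nbarstats','protodist','qosclass','qoscolor',
        'qosiphc','qosmatch','qospolice','qosqueue','qosred','qosset','qosts','reach','rtthttp',
        'rttjitter','rttstats','rttstats,rttjitter,rtthttp','event_log','event_list',
        'rttstatscap:operations','ciscoProcess');
delete from page_controls      where controlid in (select controlid from orphan_views);
delete from control_properties where controlid in (select controlid from orphan_views);
delete from controls           where controlid in (select controlid from orphan_views);
drop temporary table orphan_views;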

Note: don't just copy and paste these queries; they are specific to my installation.  You may have built other datasets and views that should be included in this process.  If you copy and paste these queries as-is, those views will be deleted and you'll have to rebuild them.

Again, this isn't recommended or endorsed by CA. You should not try this.

Wednesday, August 8, 2012

Running commands on a remote computer

In a Windows environment, running commands on a remote computer isn't as easy as it should be.  There are third party tools out there that basically install an agent on the remote computer that allows you to push commands through the agent to the remote computer's command interpreter.  While this may be fine and probably works, it's not the only option.  In fact, there is a built in method of executing commands on a remote computer.  It involves the scheduled tasks feature of Windows.

Most people use scheduled tasks to run a program on a schedule.  In fact, most people don't even use scheduled tasks.  If they do, they use it to run defrag (if they're using an older version of Windows).  However, scheduled tasks can be very powerful if used properly because programs can be run locally on the box using either system credentials or a specific user's credentials.  While scheduling a task might not seem to be the best way to run code remotely, there's a little-known feature that actually makes this work wonderfully: schtasks.  This command line utility allows for programmatic manipulation of scheduled tasks.  The kicker is that it can be used to manipulate scheduled tasks on a remote machine!

Therein lies the entire strategy of running code remotely.  First of all, Microsoft has put together a surprisingly helpful set of examples.  Check it out to familiarize yourself with the commands.

So, the strategy is this:
  1. Package the code to be run on the remote computer into something that can be run silently (i.e. does not require any input from the user).  This may mean writing a batch file or Perl script.
  2. Copy the package to the remote computer.  Obviously, you'll need RW access to put the script on the remote computer.  This can be done remotely (and even recursively) by mapping a drive to the destination and copying the files to the mapped drive.
  3. Use schtasks to create a new task scheduled to run once at a time in the past (so it never fires on its own).
  4. Use schtasks to run the new task immediately.
  5. Use schtasks to delete the task (optional).
To illustrate this, I'll show how I deploy a certain script and run it immediately.  This script also has to be run nightly, but after any update to the script, I have to run it immediately on all the servers.  The script is here and might be of interest to any NV users out there.
I use this script:
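A minimal sketch of the idea looks like this (the task name, admin share path, and remote script name are placeholders; adjust them for your environment):

@echo off
rem deploy.bat - sketch only; %1 is a text file listing target servers, one per line
rem Copy the updated script out, then create and immediately kick off a run-once task
for /f %%S in (%1) do (
 echo Deploying to %%S
 copy /y updateproperties.bat \\%%S\D$\scripts\
 schtasks /create /s %%S /tn UpdateProps /tr D:\scripts\updateproperties.bat /sc once /st 00:00 /f
 schtasks /run /s %%S /tn UpdateProps
)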

I call the update script like this:

>deploy.bat myservers.txt

The argument is the name of a text file containing the names of the servers I want to push the updated file to.

If you want to run the batch file once and then remove all traces, use the following script.  The only problem with this one is that you have to wait for your script to finish before you can delete the scheduled task and the script.  This script lets you indicate when it's safe to go ahead with the deletion by querying the scheduled tasks list on the remote computer(s).  The task will show 'Running' in the status column while the script is running and 'Ready' when it has finished.
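Something like this (same placeholder names as the deploy sketch above):

@echo off
rem runandclean.bat - sketch only; %1 is the server list file
for /f %%S in (%1) do schtasks /run /s %%S /tn UpdateProps
:check
rem The task shows 'Running' while the script is still working and 'Ready' once it's done
for /f %%S in (%1) do schtasks /query /s %%S | findstr UpdateProps
set /p ok=Type y when everything shows Ready to delete the task and script: 
if /i not "%ok%"=="y" goto check
for /f %%S in (%1) do (
 schtasks /delete /s %%S /tn UpdateProps /f
 del \\%%S\D$\scripts\updateproperties.bat
)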

That's about it.  Happy hacking!

Tuesday, August 7, 2012

Creating Properties for sysName, sysDescr, and sysObjectID

UPDATE: I've combined this tool with the tool I built that allows administrators to add/delete/rediscover devices without logging into the console.  The two are combined simply because they both need to run right after discovery.  The result has 3 parts: a widget, a JavaScript file, and the batch file that runs every night.  First the batch file, which is an expansion of the properties creator.  In addition to the normal task of adding properties for new devices, it also adds a pair of properties that, when rendered on the NPC device details page, present the administrator with a rediscover button and a delete button.  By default, a password is required.  The password is set in the external JavaScript file (below) on lines 2 & 16 for rediscovery and deletion, respectively:


Next is the JavaScript file, which must be in the custom virtual directory (with alias 'custom'):


Lastly the widget.  The widget is only for adding new devices.


If you want to add a delete button, remove the text 'style="display:none;"' from the widget source.



Occasionally, I find it necessary to build auto-enable rules in NetVoyant based on SNMP properties like sysName, sysDescr, and sysObjectID.  Unfortunately, these aren't available for every dataset as SNMP parameters that can be used in rules.  However, custom properties are always available in auto-enable rules (not discovery rules, since property comparison happens after initial discovery).  What this means is that rules can be built to automatically disable poll instances according to model, OS version, or software version (as obtained via the sysDescr).
In order for this to work, however, each device needs custom properties.  Setting these manually is a pain and would take forever for anything other than a lab system.  To combat this, I've built this script (thanks Wade for the query help) that creates custom properties for every SNMP-capable device containing the sysDescr, sysObjectID, sysName, sysContact, and sysLocation.

@echo off
set sqlcommand=mysql nms2 --skip-column-names -e "select count(*) from devices where snmp_capable=2 and dev_properties
set propertieslist=(select property_set_id from properties where property_name=
set logfile=D:\updateproperties.log
echo %date% - %time% - Script Started >> %logfile%
for %%A in (sysDescr,sysObjectID,sysName,sysContact,sysLocation) do (
 echo Devices with %%A property: >> %logfile%
 %sqlcommand% in %propertieslist%'%%A')" >> %logfile%
 echo Devices without %%A property: >> %logfile%
 %sqlcommand% not in %propertieslist%'%%A')" >> %logfile%
)
echo Running query  >> %logfile%
set inspropsql=mysql nms2 -e "replace into properties (select dev_properties,
set inspropsql2=0, 0 from devices where snmp_capable=2)"
%inspropsql% 'sysDescr', 18, sys_descr, %inspropsql2%
%inspropsql% 'sysObjectID',18, sys_objectid, %inspropsql2%
%inspropsql% 'sysName', 18, sys_name, %inspropsql2%
%inspropsql% 'sysContact', 18, sys_contact, %inspropsql2%
%inspropsql% 'sysLocation', 18,sys_location, %inspropsql2%
for %%A in (sysDescr,sysObjectID,sysName,sysContact,sysLocation) do (
 echo Devices with %%A property: >> %logfile%
 %sqlcommand% in %propertieslist%'%%A')" >> %logfile%
 echo Devices without %%A property: >> %logfile%
 %sqlcommand% not in %propertieslist%'%%A')" >> %logfile%
)
echo %date% - %time% - Script Ended ----------------------------------->> %logfile%

This script has to be run on the poller(s).  A new device will not get the properties until the script is run again, so, it is probably best to set it to run as a scheduled task every night right after discovery.
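Something like this works (the time and path are assumptions; line the time up with the end of your nightly discovery):

>schtasks /create /tn UpdateNVProperties /tr D:\scripts\updateproperties.bat /sc daily /st 02:30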

Once you've got the properties in place, you can create auto-enable rules using these properties by referencing them with a $ prefix.  So, for example, if I wanted to disable ifstats monitoring on all devices that have a sysLocation like 'France', I would add the following to the Property Rule in the add auto-enable rule dialog box:
$sysLocation like '%France%'
Save the rule, apply it to the dataset, then rediscover the device.  Voilà!

Monday, August 6, 2012

Data source rights and new data sources

When you add a new data source in NPC, by default, only the nqadmin and nquser accounts get access to the new data source.  The nqadmin is made an administrator and the nquser is made a user.  If you have 200 other users (including your normal user account), you won't be given administrative rights to this new data source even if you're an admin on all the other data sources.  In order to fix this the proper way, you have to edit all the users and grant them access to the new data source.  While this can be done in bulk, what happens most often is that the admin doesn't take the time to distinguish between administrators and users and either grants everyone user access, grants everyone admin access, or doesn't grant anyone access.  They may not notice this until someone can't do something they think they're supposed to be able to do.

It actually used to be better than this.  NPC had a concept of a permission set.  A permission set was a group of data source access rights that could be applied to users.  So, I could create one permission set called administrators and give it admin access to everything, and another permission set called users and give it user access to everything.  I would then assign the admin permission set to all the admins and the user permission set to all the users.  If I added a new data source, all I would have to do is go to the permission set and update the permissions for that new data source, and it would be applied to everyone with that permission set.  However, for reasons as yet unexplained, the guys building NPC decided that was too efficient and decided on a per-user definition.  I guess since they already had a role component they didn't want to make role-based permissions as well.  Why they didn't just roll permission sets into the roles is beyond me (I can understand technically why they didn't: because it's more versatile that way.  But really, who knows enough about NPC to really use it that way?).

Anyway, if you find yourself in this situation and you don't want to have to do it manually, you can always do it in the database.  Standard disclaimer: don't do this, it will break your stuff, I won't help you fix it, back up your stuff, you're on your own, don't tell support I told you to do this when you call them to have them help you fix it, etc., etc., etc.

Run the following query to turn all NPC admins into admins on all the data sources:
replace into data_source_rights 
     (select a.UserID, b.SourceID, 1 as UserLevel
     from user_definitions a, data_sources2 b 
     where a.userlevel=1 and sourceid<>0)
;

Run the following query to turn all NPC users into users on all the data sources:
replace into data_source_rights 
     (select a.UserID, b.SourceID, 4 as UserLevel
     from user_definitions a, data_sources2 b 
     where a.userlevel=4 and sourceid<>0)
;

Monday, July 23, 2012

Migrating My Documents to Google Drive or Dropbox

Well, the new buzzword is 'Cloud' and it seems everyone is getting on the bandwagon and offering services through the 'Cloud'.  First of all, most of these services either aren't actually based on cloud technology but are rather the same web services that have always been available, now riding new marketing hype to generate business, OR the service is based on cloud technology and has been for a long time.  Either way, to the end user there really isn't much effect except that companies are clawing their way over each other to make sure they get your business in their 'cloud' as opposed to their competitors' clouds.  End result: we get cool new products.

One of the relatively new services getting a lot of buzz is Google Drive, which competes with Dropbox, Skydrive, Cubby, ownCloud, etc.  The purpose of this blog post is not to debate the benefits of one service over the other.  I recently migrated my 'My Documents' folder to my Google Drive in order to be able to access my documents everywhere, have a backup in case my PC exploded, and all the other reasons these services exist.  This blog post will explain how I did it and what to look out for.  I will use Google Drive as the example, but any other service should be interchangeable with it.

The first thing to do is obviously get an account with Google Drive.  If you already have a Google account, you can use that.  If not, get a Gmail account, then go to Google Drive to set up your drive.

Second: Install the Google Drive desktop application.  This little app creates a new folder under your profile folder called (surprisingly) 'Google Drive'.  Once that folder is created, anything you had in your Google Drive in the cloud will get synchronized to this folder on your desktop.  The same works in the other direction.  Any files and folders placed into your PC's Google Drive folder will get synchronized up to your drive on the internet.

The next thing you need to do is determine if you'll be able to fit your documents within the space allocated on your drive.  Since Google hands out 5GB for free, you should be able to synchronize your documents to the cloud if your 'My Documents' folder is less than 5GB in size.  To find this out, go to your My Documents folder, select everything (Ctrl+A), and open the properties dialog (Alt+Enter).

Ideally, it would be nice to put most of the files in your profile up there.  Luckily, Windows makes it easy to change the location of most of the folders under your profile.  For example, you could have your 'My Documents' folder actually stored on a secondary hard drive or a flash drive.  If you're planning on storing more than just your documents on your Google Drive, first go to your PC's Google Drive folder and create a folder for each profile sub-folder you want to include.  Like this:

  • C:\Users\sweenig\Google Drive\Documents
  • C:\Users\sweenig\Google Drive\Desktop
  • C:\Users\sweenig\Google Drive\Downloads
  • C:\Users\sweenig\Google Drive\Favorites
  • C:\Users\sweenig\Google Drive\Links
  • C:\Users\sweenig\Google Drive\Music
  • C:\Users\sweenig\Google Drive\Pictures
  • C:\Users\sweenig\Google Drive\Videos
You may not want to include all these folders.  Pick and choose the ones that you want and that can fit (don't forget to consider that some of these folders may increase in size significantly).  

Now that you've got new locations for all your profile sub-folders, go to the existing folders and change their location.  Open up your profile folder (Start >> Run >> %USERPROFILE%).  Right click on the folder you want to move to your Google Drive, open the properties dialog box, and go to the Location tab.  Click the 'Move...' button and browse to the new folder under the Google Drive folder that corresponds to this folder.  Hit OK and Windows will ask you if you want to move the old files to the new location.  Say yes.  This next part may take a while depending on the size of the folder.  This is not copying your files to the Google Drive on the internet; it's copying the files from the old location on your computer to the new location on your computer (that happens to be synchronized with the internet).  As this move process proceeds, you should see some activity on the Google Drive app icon.  It's synchronizing your files from the local Google Drive\Documents folder up to the internet.  Pause this as necessary if it slows down your internet too much.

Repeat this process for the remaining folders (if you have space).  When it's all over (which may take a while if you have a lot of files) you should be able to access all your folders the same way you did before.  Changing the 'location' of the profile sub-folders instructed Windows to use the new location but make it look like the old location.  

If you need help visualizing the size of the various folders in your profile, use a tool like WinDirStat.  Point it at your profile directory and it will show you how big each folder is using a pretty cool graphic.

Thursday, July 12, 2012

GXMLG 2.0

I finally broke down and rebuilt my GXMLG tool.  Given the complexity of the task, I originally wrote the applet using MS Access.  However, to make things easier to distribute and easier to troubleshoot and use, version 2.0 uses Perl and is run from the command line.  To illustrate the difference, the old utility was 6.5MB.  The new script is 11KB.  That's what a graphical interface costs you.



You'll have to install Perl (I use Strawberry Perl on Windows boxes) and run the script like this:
>perl gxmlg.pl
Running it without any arguments shows you the help file:
This script outputs any combination of configuration files for the NetQoS
suite of products. You must install strawberry perl and Text::CSV,
Text::CSV_XS, and Getopt::Long. To install CPAN modules, run cpan [module name]
from the command prompt.

Example: cpan Text::CSV

This script was created by Stuart Weenig (C)2012.  For more information visit
http://stuart.weenig.com. This script may be redistributed as long as all the
files are kept in their original state.


Current Version: 2.0

Usage: PERL gxmlg.pl [-outnpcxml] [-outsacsv] [-outnvcsv] [-outucmcsv]
                     [-infile NAMEOFINFILE] [-npcinspath INSERTPATH]
                     [-npcxmlname NPCXMLFILE] [-sacsvname SACSVFILE]
                     [-nvcsvname NVCSVFILE] [-ucmcsvname UCMCSVFILE]

    -infile NAMEOFINFILE        Specifies the name of the Sites file to be
                                imported. (If omitted: sites.csv)
    -outnpcxml                  Output an NPC XML groups definition file.
    -npcxmlname NPCXMLFILE      Name of the NPC file to be output. (If
                                omitted: NPCGroups.xml)
    -npcinspath INSERTPATH      The path to the group that will serve as
                                the insertion point.  Required if using
                                -outnpcxml option.
    -outsacsv                   Output a SA networks CSV file.
    -sacsvname SACSVFILE        Name of the SA file to be output. (If omitted:
                                SANetworks.csv)
    -outnvcsv                   Output a NV discovery scopes file.
    -nvcsvname NVCSVFILE        Name of the NV file to be output. (If omitted:
                                NVScopes.csv)
    -outucmcsv                  Output a UCMonitor locations file.
    -ucmcsvname UCMCSVFILE      Name of the UCMonitor locations file to be output.
                                (If omitted: UCMLocations.csv)

You must install strawberry perl and Text::CSV, Text::CSV_XS, and Getopt::Long.
To install CPAN modules, run cpan [module name] from the command prompt
Example: cpan Text::CSV

If the server you will be running this on doesn't have access to the internet, you
won't be able to install the modules automatically (since they have to be downloaded
from the internet).  The solution is to download the tarballs from www.cpan.org and
extract them using winzip or winrar or 7z.  You might need to extract several times
until you get just the folder with the files in them.  Then copy them to your server.
Open a command prompt and cd to the directory containing Makefile.PL (you'll have to
do this for each module).  Then execute the following:

     perl Makefile.PL && dmake && dmake test && dmake install

For the text modules, do Text::CSV first, then Text::CSV_XS.

The script itself is pretty simple.  Specify an input file.  This input file is the same sites/networks file referenced in my previous blog post.  Here's a sample sites list to get you started.  Then just decide which output files you want.  If you specify the npc output file, you'll also need to specify the insertion point (more information about the insertion point).
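For example, to build both the NPC groups XML and the NV scopes CSV from a sites file, the command line would look something like this (the insertion path here is just an illustration):

>perl gxmlg.pl -infile sites.csv -outnpcxml -npcinspath "/All Groups/Sites" -outnvcsv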

If you don't have internet access on the box, you won't be able to install the Text::CSV modules automatically (since they have to be downloaded from the internet).  The solution is to download the Text::CSV and Text::CSV_XS tarballs and extract them using winzip or winrar or 7z.  You might need to extract several times until you get just the folder with the files in them.  Then copy them to the NVMC.  Open a command prompt and cd to the directory containing Makefile.PL (you'll have to do this for each one).  Then execute the following:

perl Makefile.PL && dmake && dmake test && dmake install

Do Text::CSV first, then Text::CSV_XS.

Monday, July 9, 2012

AT&T vs. Comcast

I've had AT&T dry loop DSL for several years.  Dry loop is the term for DSL without a phone line.  Anyway, it's been giving me problems lately, and since I work from home, my work is affected any time I have an internet performance degradation.  Not to mention, if my internet is slow, Christy can't watch Netflix while I'm working.  So, I've recently decided to switch to cable internet from Comcast (Xfinity Performance).  With my AT&T internet, I'm supposed to get 6Mbps download and 700Kbps upload.  Since it's DSL, it's supposed to be a guaranteed rate.  However, this is the speed I got today just before the Comcast guy came to set up my new internet.
This is actually better than normal.  My normal download speed is just under 3Mbps with uploads somewhere around 150Kbps.  Needless to say, this is why I'm switching to Comcast.  Their plan is $20 cheaper and boasts speeds up to 20Mbps.  Cable internet isn't a guaranteed rate, but even if they live up to half the promise and the average rate is 10Mbps, that will be a huge improvement over my old internet connection.

Here are the results after the upgrade to Comcast internet.  Needless to say, I'm happier than I was.

CA Network Flow Analyzer

Back in May, I gave a presentation for the CA Infrastructure Management Global User Community.  At the time, it was recorded using Microsoft's LiveMeeting recording feature.  For some reason, PowerPoint animations don't get recorded correctly and the recording of my presentation wasn't that great.  So, I've re-recorded my presentation and uploaded it to YouTube.  It also turns out I can now upload videos longer than the default 15 minute limit.  Cool!
Anyway, here's the video:

Sunday, July 1, 2012

Manually Configuring Applications in SuperAgent

Over the years, I've configured thousands of SuperAgent applications.  I've refined the process, which includes a YouTube video (shown below) and an application flow diagram (detailed below) that I give to application owners to fill out.  Usually they give me back their own version of the application infrastructure, which has both more and less than what is needed for SuperAgent.  That usually results in a meeting where I've taken their data and plugged it into my AFD and I solicit the missing information.  So, what I've decided to put in this blog post should be everything needed to get started configuring applications in SuperAgent.  Obviously, the SuperAgent administrator will need to know how to properly administer SuperAgent, but this primer is meant more for the application owners than the SuperAgent administrator.

First, the video.  This video is based on a bounce diagram presentation that I've given countless times and explains how SuperAgent works on a fundamental level.  This is important information to present to the application owners so they know why we're asking for the application flow information.



Beside the video, I also present the Application Flow Diagram (AFD). This is a Visio diagram that shows the information needed in order to configure an application in SuperAgent. I've also written a document to explain the application detailed in the example and how to fill it all out. Here it is:


Introduction

The purpose of this document is to describe a low complexity application and detail the parameters that must be obtained about that application in order to correctly configure the application for monitoring within CA Application Delivery Analysis (SuperAgent). This document also attempts to identify the types of people responsible for obtaining/providing that information about the application and infrastructure to the NetQoS administrator. Given the diversity of modern organizations, the recommended roles may not have the information required.

Tuesday, June 26, 2012

Using Profile Pics Elsewhere

UPDATE: Added section on Google+ profile pics

Just discovered a couple useful tips for embedding your twitter or Facebook profile pictures elsewhere.  Apparently, you can use the APIs to pull the images out.

For Facebook: http://graph.facebook.com/<username>/picture (where <username> is the username of the Facebook user whose profile picture you want to display) will display that user's current profile picture.  That URL doesn't even have to be updated when the user changes his/her profile picture.
For example:


For twitter: http://api.twitter.com/1/users/profile_image/<username> (where <username> is the username of the twitter user whose profile picture you want to display) will display that user's current profile picture.
For example:


Turns out Google+ also has a way of doing it: https://s2.googleusercontent.com/s2/photos/profile/{id} (where {id} is the big long ID number that G+ assigns to each user).  Don't forget to add some height and width attributes to your img tag as the profile pic from G+ can be fairly large.
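Dropped into an img tag (with a made-up ID standing in for the real one), it looks something like this:

<img src="https://s2.googleusercontent.com/s2/photos/profile/123456789012345678901" width="100" height="100" alt="Google+ profile picture">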
For example:


I know, my profile pictures aren't that interesting, but that's only because we recently had professional photos done and it made sense to use one of them as both my Facebook and twitter profile pictures (notice they are actually different given the size restrictions on each service).  As far as I know, LinkedIn doesn't have anything quite as easy as this.  You have to make two calls to get the data: one using your own LinkedIn credentials to make sure you have permission to view the photo, then another to actually call the photo.  Kinda sucks if you ask me.  Maybe they'll wise up.

Tuesday, June 19, 2012

NetVoyant Rollups: Sums, Maximum, Percentiles, etc.


For most situations out there, the default rollup is perfectly fine.  What I mean is that when you add an expression to a dataset, the default rollup (which is an average) is exactly what someone would be looking for in a rollup.  If I show top interfaces for an hour, I'd like to sort those interfaces by the highest average utilization, which means I want NV to take an average of the utilization data points during that hour.

However, in some situations, it may be more accurate to calculate a different rollup.  For example, if I wanted to, I could have NV calculate both the average value of all the data points collected in the last hour and also the standard deviation, so that I know how consistent my utilization is.  A higher standard deviation means there are at least some points that are far away from the average.  I could also have NV calculate the maximum or a percentile of all the points from the last hour.  By adding max and percentile to a view, I can see much more clearly what is happening on an interface.

One other situation is volume.  If you're polling some OID for some kind of volume (KB or MB), the first thing you should do in your expression is put it in bytes.  This allows you to take advantage of the auto scaling feature in the views.  This means that instead of showing numbers like 12000000 along the dependent axis, NV can display something like 12.  You'd then put {Scale} in the axis label so that KB, MB, GB, etc. is displayed indicating the unit.
The next thing you'd do for volume is change the rollup.  Obviously if you're tracking volume, having an average of all the points collected in the last hour is useless.  What you really want is a sum of the volume in the last hour.  To do this, remove all rollup types.

Did I mention how to do that?  I guess I didn't.  Edit the expression and click the Advanced button.  Uncheck all the checkboxes so that the rollup is a sum instead of an average.

Another trick about rates:
If you're polling an OID and want to convert it to rate, create a new expression and divide the expression by the variable 'duration'.  Duration is always equal to the number of seconds in a poll cycle.  Technically it's the number of seconds since the last poll, so you do have to be a little careful about that.
Again, if your OID is in some unit like KB, convert it to bits (KB*1024*8).  Then when you divide by duration, you get bits per second.  By setting the view auto-scale to rate, NV will automatically convert it to the needed value (Kbps, Mbps, Gbps, etc.).
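So, assuming a hypothetical polled value called cacheKBytes, the expression would look something like:

(cacheKBytes * 1024 * 8) / duration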

Thursday, June 7, 2012

The most awesome new feature of Office 2010

Alright, I found my new favorite feature of Office 2010.  I've had Office 2010 for a while, but everybody knows we all only use the features we used to use.  Well, I have a new feature that I have added to my quick launch bar: Insert>>Screenshot>>Screenshot Clipping.  The whole insert screenshot feature is pretty cool and is available in Word, PowerPoint, Excel, and Outlook (it's probably everywhere, but I haven't found a good use for it in all of them).

When you first hit the Screenshot button in the ribbon bar, you get a drop down containing thumbnails of all the windows you have currently open and not minimized. Clicking one of these thumbnails inserts a screen shot of that window at the cursor. While this is great by itself, perhaps the more useful feature (and the one I've pinned to my quicklaunch bar) is Screenshot Clipping. When you click on this, the current window is minimized and the whole screen goes grey. The mouse turns into a + cursor. Draw a box around any portion of the screen and as soon as you let up on the mouse button, a picture of that portion of the screen is inserted at the cursor! It's completely awesome.

The reason it's completely awesome is because of the ease with which it accomplishes a task which would require multiple keystrokes/clicks (doing it the previously easiest way) or even a third program (if you did it the ancient way). The previously easiest way was to activate the window of which you wanted a screenshot, pressing Alt+PrtScn, pasting back into the Word doc or email, and using Word's image cropping tool to crop out the parts not needed. This was pretty good and I was always surprised at the number of people that did it the hard way.

The hard way involved using Windows7's snipping tool. Launch it and (depending on the mode) you can get a capture of the full screen, a window, a rectangle, or a freeform shape. Once you do this, the picture shows up in the snipping tool. If you've got it setup, it also copies it to the clipboard so you can paste it wherever you want. While this works and gives flexibility into the whole process, I always found it tedious.

Anyway, I was so excited about this feature, I had to put a blog post about it. Now if only Google would put something like that into the blogger post editing toolbar.

Friday, June 1, 2012

NetVoyant Duplicates Finder

UPDATE: This method has been replaced by the ODBC method.

I've been working for a while now on a good way to find and remove duplicates from NetVoyant.  Luckily, there is a web service that can delete devices (more on NetQoS web services).  All you need is the device IP address and the poller (to build the web service URL).  I played around for a while trying to build something in a Windows batch file and couldn't get it to do what I wanted to do.  So, I reverted to Perl (which I probably should have done from the beginning).  Anyway, the result is a script that can be run regularly on the NetVoyant master console.  The output is a CSV file and an html file.  The CSV file contains the output from the brains of the duplicate finder script, namely: a list of every device that exists more than once in the NV system, along with the device properties including the poller.  The CSV file is output to the script directory.  The script can be configured to output the html file wherever you want.

After that, the script uses Perl to wrap the data in the CSV into an html widget.  The widget shows the same data as the CSV as well as a link on every line to delete the device.  As long as the NV pollers resolve by name, the link should work to delete the device and its corresponding scope.  If you only want the CSV, edit the batch file and comment out the call to the Perl script (i.e. put 'rem' in front of the line that starts with the word 'perl').

If you do want the HTML, you'll need to install Strawberry Perl and download a couple of modules.  Installing Strawberry Perl on the NetQoS boxes isn't a new thing.  Most of the developers and support guys have Perl installed on their test boxes and I've had it installed on many customers' boxes.  The install doesn't require a reboot and you can take all the defaults.  After doing the install, open a command prompt and type the following:
D:\>cpan Text::CSV
D:\>cpan Text::CSV_XS
Perl will download and install the necessary modules and return you to the command prompt when it's done.

After that, all you need to do is download the zip and extract the files to somewhere on your NVMC.  Setup a scheduled task to run the batch file every so often.  The web page doesn't update unless the script runs (it doesn't refresh the list of duplicate devices simply by refreshing the page).

To get the script to output the html file to somewhere other than the script directory, go to the makehtml.pl file and modify the line that starts with 'my $outputfile = ' and update the output file path and name.  For example:
my $outputfile = 'D:\\NetVoyant\\Portal\\WebSite\\dupslist.html'
Perl requires a double backslash since a single backslash is the escape character.

That's it.  You're done.  You can use the browser view to put the resulting html file on an NPC page if you've designated a destination that is served up by the NVMC's IIS web service.

Enjoy!  If you have improvements, please let me know so I can update the source.

P.S. If you don't have internet access on the box, you won't be able to install the Text::CSV modules automatically (since they have to be downloaded from the internet).  The solution is to download the Text::CSV and Text::CSV_XS tarballs and extract them using winzip or winrar or 7z.  You might need to extract several times until you get just the folder with the files in them.  Then copy them to the NVMC.  Open a command prompt and cd to the directory containing Makefile.PL (you'll have to do this for each one).  Then execute the following:
perl Makefile.PL && dmake && dmake test && dmake install
Do Text::CSV first, then Text::CSV_XS.

Thursday, May 31, 2012

Windows 7 Desktop Gadget for displaying NPC Views

At some point last year, I built a Windows 7 desktop gadget for displaying NPC views on the desktop.  It's a fairly simple gadget.  You set the URL you want to display (which could actually be any URL, but use the 'Generate URL' view menu option in NPC to get an NPC view URL) and the size.  The gadget will auto refresh every 60 seconds.  The settings dialog also allows simple scaling of the gadget.  So, if the aspect ratio is fine but it needs to be bigger, just put a bigger number in that field.  It's just a scalar, and the height and width will be multiplied by that number.

You can download the gadget here.

Enjoy!

Thursday, May 24, 2012

NPC and NetVoyant Web Services Gadgets

In combination with my method of inserting custom content into NPC, I've created a couple of gadgets that can be added to NPC.  These gadgets give the viewer access to the NPC and NetVoyant web services that can be used to import and export group definitions, add devices to NetVoyant for polling, delete devices from NetVoyant, and delete discovery scopes from NetVoyant.

Installation Instructions

  • You can download them here [link removed, see the Tools page].
  • Extract the zip contents to the custom content directory described here.  
  • Edit the html files to update the URL.
    • Look for the <form> tag and update the action string to point to your NPC or NV server (depending on the html file).
  • Put three browser views on a page that only NPC administrators can access.  
  • Edit the three browser views to point to the following URLs (you should probably update the view names as well and I like to hide the view border)
You should now have a page showing all the controls.  Here's what they do:

NPCGroupExport.html

This gadget exports XML for any group in NPC.  This XML describes the group structure and defines any rules for the groups.  This can be used in conjunction with NPCGroupImport.html to modify or populate NPC groups programmatically.

NPCGroupImport.html

This gadget imports XML into NPC to redefine groups.  This can be used to redefine groups, rules, or membership.

NVDeviceMgmt.html [this widget has been moved here]

This gadget facilitates adding and removing devices from NetVoyant.

I hope you enjoy them.  If you have any improvements, please let me know so I can update the source.

Monday, May 21, 2012

Analyzing TCP Application Performance

Well, I've been talking about doing it for a long time and I finally got an excuse. One of my customers wanted me to prepare my SuperAgent bounce diagram presentation for recording. Instead, I opted to make a YouTube video. I've given this presentation countless times and now that I've got a video, I can spend more time answering advanced questions than going over fundamentals. Without further ado, here it is.





I've got plans for more videos covering the things I cover most often.  My next project is the video of the presentation I did for the CAIM Community's monthly webcast.

Thursday, May 10, 2012

Recent Political Revelations

Don't worry, this isn't a post about gay marriage and I don't intend on starting an argument.  Let me be clear, I am going to try to ensure that this post isn't about either side of the argument.

That said, I would like to let people know the rules that I play by when engaging in passionate debate with others on political or religious topics.

How to Argue


  • I'm not going to argue with my good friends trying to convince them that they should think or choose one way or the other.  I'll let our democratic republic system of government determine that.  
  • I don't feel a particular need to broadcast the side of the issue on which I stand.  
  • I will not berate anyone for announcing their own position, nor will I berate anyone for trying to convince others of the reasons behind their position.  
  • I will play fair and make statements like 'I believe...' and 'I think...' instead of statements like 'It is a fact that...'.  I will not say, 'You are wrong' nor 'I am right'.
  • I will probably choose to not engage with people who try to get me to contravene these rules.  If someone can't have an argument within these constraints, I don't think arguing will produce any meaningful results.
  • I will be concise and logical in my arguments.
  • I will admit I am wrong when someone points out a flaw in the logic of my arguments.
  • I will allow others time to rephrase or explain their arguments if I point out a flaw in their reasoning.

So, where on the moral-issues-that-are-laws scale does gay marriage fall?  Let me give the boundaries of my moral scale: murder and coffee.  I morally believe murder is wrong and it should be outlawed.  I morally believe drinking coffee is wrong, but it should not be outlawed.  My question is not whether or not gay marriage actually is morally wrong.  My question is not really whether or not it should be outlawed because everyone is free to choose and that free will engenders disagreement.  My question is whether or not the majority of the people think it should be outlawed.  

I stand on one side of the gay marriage issue, and my conviction is such that no amount of argument will sway me.  I am open to any discussion about the issue, pending my availability after work and family responsibilities.

Wednesday, May 2, 2012

SNMP Counter32 vs. Gauge32

I ran into a problem recently where a manufacturer had built a MIB that contained an OID for 'objects in cache' with syntax Counter32.  However, when polling the value of that OID, it was discovered that the OID didn't behave like a Counter32; it went up and down (Counter32 is supposed to only go up; a lower value than the previous poll indicates a roll-over).  It occurred to me that the manufacturer probably meant to indicate the current number of objects in cache and mistakenly set the syntax to Counter32.  Since the actual number of objects in cache can rise or fall, a Counter32 wouldn't accurately represent this.  Instead, a Counter32 would indicate how many items had been added/removed from the cache since the previous poll (since most NMS systems would take the delta between the previous counter value and the current counter value).  While knowing how many items had been added or removed from the cache since the previous poll might be useful, what is probably more useful would be the total count.  The difference actually has nothing to do with the value returned by the device.  The problem is that since the MIB indicated that the OID is a Counter32, most NMS systems interpret that type of object differently, performing a delta instead of reporting the actual number returned by the device.

The fix for this is to change the way the NMS system interprets the returned value by changing the MIB.  In this case, the syntax needs to be changed from Counter32 to Gauge32.

Here is what the MIB contained originally:
 proxyNumObjects OBJECT-TYPE
  SYNTAX Counter32
  MAX-ACCESS read-only
  STATUS current
  DESCRIPTION
   "The number of objects currently held by the proxy."
 ::= { proxySysPerf 2 }

Here is what the MIB needs to be changed to:
 proxyNumObjects OBJECT-TYPE
  SYNTAX Gauge32
  MAX-ACCESS read-only
  STATUS current
  DESCRIPTION
   "The number of objects currently held by the proxy."
 ::= { proxySysPerf 2 }

Changing this syntax in the MIB and recompiling into the NMS system instructs the NMS to use the raw value returned instead of performing a delta with the previously obtained value.

In NetVoyant, recompiling the newly edited MIB will be sufficient to correct this problem.  However, a restart of the Mibs service is required before the newly compiled syntax gets used.  Since everything depends on the Mibs service, everything will get restarted.

Monday, April 30, 2012

How to use a CD/DVD on a Computer That Doesn't Have a CD/DVD Drive

A friend of mine asked me today how to use a CD on a computer that doesn't have a CD drive.  Luckily, this is an easy, albeit technical, one.  The overall strategy goes like this: create an ISO file of the CD, get the ISO file onto the CD-drive-less computer, open the ISO file with an emulator.


Creating an ISO file of the disc

This is where half the magic happens.  An ISO file is basically a file that exists on your hard drive that contains everything about an optical disc.  It's just like a Word document or an Excel workbook.  Except instead of opening in Word or Excel, you have to open it in a special program (don't worry, it's as easy as double clicking the ISO file).
The first thing to do is download and install ISO Recorder by Alex Feinman.  This will allow you to copy the CD to your computer's hard drive in the form of an ISO file.  After you've installed it, put the disc in your drive and look in 'My Computer'.  You should see the CD-drive icon change to the icon of the disc inserted.  Right click on that drive and click "Create Image from CD/DVD".
ISO Recorder will pop up asking where you want to save the new ISO file.  Pick a good place for it and hit next.  Wait for it to finish and you're ready for the next step.


Moving the ISO file to the CD-drive-less computer

This part can be accomplished via whatever method you choose.  The easiest (and least technical) is to just copy the ISO to a USB flash drive.  Then copy the ISO from the flash drive to the new computer.  Other options are to copy via the network or via torrent (depending on the size).


Opening the ISO File with an Emulator

This is the fun part.  Download and install Daemon Tools Lite on the CD-drive-less computer.  You may need to reboot after the installation; do that before continuing.  Once that's finished, find the ISO file and double click it.  You should get a message saying "Mounting Image to Virtual Drive".  After your mouse stops showing the hour glass and/or the message goes away, look in your 'My Computer'.  You should see a CD-drive.  WHAT?!  This is the virtual drive.  It should have your virtual CD in it.  You can now use the CD as if it were installed in a real drive.  And guess what, this CD can't get scratches on it.

ISO versions of discs are a very handy way of keeping backups of discs, especially if you are worried that kids may destroy the originals.  Of course, DVDs manufactured by the movie industry usually have copy protection on them, so you might not be able to do this for just any disc.

Thursday, April 19, 2012

Managing NetVoyant through Web Services

Web services can be used to manage NetVoyant devices.  I've had at least one customer who built auto-provisioning of NV monitoring through web services.  It can be a headache, but it is possible.  I won't go into details about how to actually automate the use of the web services; go take a class for that.  This practice actually goes against my preferred method of adding all devices to NV and using class designations, auto-discovery rules, and auto-enable rules to manage what gets monitored.  However, in some cases, that can't be done.  Here are the details on the web services.

In any distributed system, there are two types of systems: NV Master Console and NV Poller.  NV uses scopes to manage the devices to be monitored.  Each scope details an IP address to be monitored, a subnet mask, and the poller responsible for monitoring it.  The process of adding and removing devices from NV involves manipulating the scope that corresponds to the device to be added or removed.


Adding a device to NetVoyant

If you are using web services to add devices to a distributed NetVoyant system, you'll either want to add the device to the poller responsible for polling all the devices in that region OR you'll want to add it to the poller that is least loaded (in terms of total number of devices).  It's also possible that you may have multiple pollers covering a single region and need to determine which poller is least loaded.  In order to find out the number of devices on each poller, the GetDeviceCount operation of the NetVoyantService should be invoked on each poller (and not the master console).
The NetVoyantService can be found at /pollerwebservice/NetVoyantService.asmx?WSDL on each poller.  By invoking this operation without any arguments, the result will be XML indicating how many devices total are on that particular poller.  Unfortunately, this must be run against each poller to get that poller's count.  A sample output is shown below.

<?xml version="1.0" encoding="utf-8" ?>
<int xmlns="http://netqos.com/NetVoyantWS/">41</int>

Wednesday, April 11, 2012

Using the browser view to add custom content to NPC

A while ago I started making flash videos using Camstudio as a way of teaching people how to use NPC. Camstudio outputs a swf with accompanying html to make it easy to post the video to a website. I wanted this to be added to NPC, so I started using the browser view. I needed a place to post the html and swf files so that the browser view could access the files through a URL. So, I went into IIS on the NPC system and added a virtual directory pointing to a folder on the D: drive. I put the swf and html files into that folder and pointed the browser view to the URL. It worked pretty well. Given a little skill with html, anyone could insert anything into NPC pages.
One handy way to use this would be to insert small snippets that help people understand what specific views mean, or how to use/interpret specific pages.

Friday, March 30, 2012

Understanding SuperAgent Network Regions

I've found that many people don't understand the concept of regions in a network definition in SuperAgent.  Given the power of a region to make defining networks easier and give more granular reports, I'm actually quite surprised that it hasn't been evangelized a bit more.  So, here's my explanation:

SuperAgent organizes data into buckets.  SA could store the analysis data for every single client IP address in its own bucket in the database, but that's kind of the point of MTP.  Also, reports that granular are only helpful if you already know where the problem is.  In addition, if you think about it, storing the analysis of two client IP addresses in two individual buckets in the database doesn't make sense if those two client IP addresses are connected to the same switch, which is using the same router to get to the WAN, which is coming into the same network hardware in the datacenter.  If the two clients are using all the same network hardware, measuring two different network round trip times for those two clients is virtually impossible.  Think about it: the only thing that is different is the client's NIC, which doesn't really affect SA metrics, thanks to modern technologies like the TCP offload engine (TOE), which brings the ACK turn-around time on the NIC down to sub-millisecond levels.

Ok, so there's the reason to summarize networks according to the network path.  If a bunch of IP addresses use the same network path to get back to the servers monitored by SA, there's not much value in storing the analysis on a per-IP basis.

However, for groups of IP addresses that do use different network infrastructure, it is imperative to separate them so that the differentiating network hardware can be isolated and therefore identified and troubleshot (troubleshooted?).

Therefore SA provides the ability to define client networks.  Each client network instructs SA how to group IP address blocks together and treat them as one unit for analysis and storage.  Each network definition should only contain the IP addresses that share all of their network infrastructure.

This is nice because it cuts down on the amount of configuration required in SA.  To illustrate, let me give an example.  A US company has decided that its IP address scheme is to allocate an entire /10 block of IP addresses to each time zone (e.g. 10.0.0.0/10 for Eastern, 10.64.0.0/10 for Central, 10.128.0.0/10 for Mountain, and 10.192.0.0/10 for Pacific).  It then decides to allocate an entire /19 block of IP addresses to each site within that time zone (e.g. 10.30.0.0/19 for NYC, 10.74.32.0/19 for Chicago, & 10.200.128.0/19 for LAX, among others).  This is actually really easy to configure in SA.  The networks would be defined in SA as such:
Network Name    Network         Mask
EST             10.0.0.0        /10
CST             10.64.0.0       /10
MST             10.128.0.0      /10
PST             10.192.0.0      /10
NYC             10.30.0.0       /19
...             ...             /19
CHI             10.74.32.0      /19
...             ...             /19
LAX             10.200.128.0    /19
...             ...             /19
The time zone IP address blocks should be configured so that any clients in the time zone that don't match a site definition still get categorized somewhere.  Any traffic showing up in those networks is an indicator that a site is missing.  On a side note, the time zone networks could be given their own network type, and special, tighter thresholds could be applied so that incidents trip immediately for any amount of NRTT.  A special network incident response could be set up to send an email to the SA admin notifying him/her that traffic has been seen on a time zone network (indicating a site network definition that is either missing or incomplete).

While this is great, the network administrators at the US-based company decided that a standard of 32 VLANs should be implemented at every site.  Each VLAN should be a /24 subnet, and each VLAN has a standard use (floor 1, floor 2, floor 3, printers, servers, wireless, etc.).  With only the networks above defined in SA, the network administrator won't be able to differentiate between bad performance on a wireless VLAN and bad performance on a wired VLAN.  At this point the administrator has two options: 1) rebuild all the network definitions, defining every single /24 subnet, or 2) define 32 regions in each of the site network definitions.  The better option is #2.  Here's why:

Defining 32 regions on a /19 network definition in SA is equivalent to defining all 32 /24 sub-subnets within that /19 network.  It's shorthand.  Once defined, the /19 network definition will have a plus sign (+) next to it.  When clicked, the admin can see that SA actually has 32 networks defined within that /19.  The nice thing is that they are all grouped together according to site (/19 network).
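To make the shorthand concrete, take the NYC block from the example above: 10.30.0.0/19 covers 10.30.0.0 through 10.30.31.255, so defining /24 regions on it is equivalent to defining 32 individual networks (10.30.0.0/24, 10.30.1.0/24, 10.30.2.0/24, and so on up to 10.30.31.0/24), all grouped under the NYC definition.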

One disadvantage is that the name originally assigned to the /19 network is also the name initially assigned to all of the sub-subnets (regions).  This can be overcome by expanding the /19 (hitting the plus sign) and renaming the VLANs as necessary; each region can be named individually.  If renaming every region by hand is too tedious, the alternative is to fall back to option 1 and create a CSV containing all the /24 networks, each with a site name prefix and a VLAN designator (name and/or VLAN number).

Thursday, March 29, 2012

Understanding SA Discovery and Pruning/Grooming

First of all, a little conceptual history around SuperAgent:
SuperAgent was meant to automate the task of analyzing packet captures for essential metrics indicating server or network latency.  An engineer wanted a better way to do it than manually, and SA was born.  Since its inception, it has grown by leaps and bounds, increasing its capabilities.  Despite the growth, one major concept has remained: SA is meant to automate a manual process for your top applications.  This is not a scalability issue; it's something fundamental to the thought process behind every revision of the product.  SA is meant to analyze the transactions of applications of interest to determine where latency lies.

With the most recent version, SA added a feature that automatically discovers and configures applications.  This opened up a whole new area of SA since admins no longer had to manually configure the applications they were interested in.  All they had to do was identify the servers that might be involved and SA did the rest.  Expectations began to rise since admins could now easily increase the bounds of what was considered an 'application of interest'.

In order to prevent performance problems that might arise in very complex environments, the developers imposed a limit on the discovery process.  When the discovery process has discovered and configured 1000 servers or 1000 applications (whichever comes first), a pruning process begins.  This algorithm reevaluates the active combinations every 5 minutes to determine which 1000 servers and which 1000 applications will remain in the configuration.  This doesn't affect any applications configured by the administrator and shouldn't affect the largest, most active applications.  Administrators have to understand that this is by design and that the applications configured in SA don't necessarily represent all the applications hosted by a server.

Luckily, the server and application limits can be raised with a simple query in the database.  To view the current limits, execute the following query:
select * from parameter_descriptions where parameter like 'maxNumAuto%';
Updating those values will change the limits.  Remember, those limits were put into place to prevent performance problems.  Also, SA hasn't been tested by CA's QA department with any limit other than 1000, so if you run into any problems after changing those limits, you'll get pushback from support because of it.  This is one of the things included in the CIG, which is basically required for every case, so support will know that you did it.
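For reference, here's a minimal sketch of what the update might look like, assuming the limits live in the DefaultValue column of parameter_descriptions like the other parameters on this page do (check the exact parameter names returned by the select above before running anything, and back up the database first):
update parameter_descriptions set DefaultValue = '1500' where parameter like 'maxNumAuto%';
That would raise both the server and application limits by 500, which is about as far as I've pushed it.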

I have increased the limits by 500 in some cases, just to push the envelope a little, and I didn't experience severe, immediate problems.  If you need much more than that, consider more infrastructure (read: more SA master consoles).

Wednesday, March 21, 2012

Disabling SuperAgent Relationship Groups

When an application is configured in SuperAgent, the application actually consists of two parts: the application configuration item itself and the servers assigned to the application.  In order to grant permissions to specific applications, administrators would need to create a group in NPC containing both the application configuration item and the servers.  The problem with this is that the group that would be created would be static; any time the application configuration changed in SuperAgent, the group would have to be updated.

The answer to this problem is a pair of group sets created by SuperAgent that contain these items.  One set contains a group for every application in SuperAgent containing the application CI and the servers assigned to it.  The other set contains a group for every server containing the server and any application CIs the server belongs to.

This worked well in the past when applications were manually configured; it only resulted in twice as many groups as the number of applications/servers that the admin was willing to configure.  However, with the advent of automatic application discovery and configuration in SuperAgent, the number of these groups can skyrocket, and that high number of groups can degrade the performance of the NPC sync.  As a result, there may be a need to disable them.  The only downside to disabling them is that you can no longer take advantage of the dynamic groups for permissions purposes.

In order to disable these groups, you have to go to SuperAgent and execute a couple of queries.  Before executing any queries you find on the internet, you should back up your database.  There.  You have been warned.
REPLACE INTO parameter_descriptions (Parameter, Level, Type, DefaultValue, Description) VALUES
  ('SyncRelationshipGroups',   'ProductSync', 'boolean', 'false', 'Sends App/Svr relationship groups to the performance center'),
  ('SyncRelationshipsEnabled', 'ProductSync', 'boolean', 'true',  'Sends App/Svr relationship combinations to the performance center');
UPDATE parameter_descriptions SET DefaultValue = '0' WHERE Parameter IN ('pullLastFullSyncTime', 'pushLastFullSyncTime', 'pullLastIncrSyncTime', 'pushLastIncrSyncTime');
UPDATE parameter_descriptions SET DefaultValue = 'true' WHERE Parameter = 'pullForceFullSync';
After running these queries, kick off a full resync of NPC.  It may take some time for all those groups to get deleted from NPC.  I don't suggest doing this on multiple SA MCs at once.  Do them one at a time and let NPC sync with one before moving on to the next.
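If you ever want the relationship groups back, I haven't laid out a tested procedure here, but a reasonable sketch (assuming the same parameters control the behavior in reverse) would be to flip the flag back to true and force another full sync:
UPDATE parameter_descriptions SET DefaultValue = 'true' WHERE Parameter = 'SyncRelationshipGroups';
UPDATE parameter_descriptions SET DefaultValue = '0' WHERE Parameter IN ('pullLastFullSyncTime', 'pushLastFullSyncTime', 'pullLastIncrSyncTime', 'pushLastIncrSyncTime');
UPDATE parameter_descriptions SET DefaultValue = 'true' WHERE Parameter = 'pullForceFullSync';
Again, do one SA MC at a time and let NPC finish syncing before moving on to the next.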

Friday, March 16, 2012

How to Build or Modify Groups in NPC by using XML

In a previous post, I discussed how to build application reporting groups using dynamic system groups.  While this strategy is the recommended way of building application reporting groups, it can become tedious to actually copy and paste all the system groups into your application group, especially if you have the most recent version of SuperAgent and it has discovered a ton of applications and networks.  Luckily, there is an unpublished, easier way: XML.  NPC has the ability to use XML to modify the group structure, including the ability to put referential copies of system groups into custom application reporting groups.

The following is an excerpt from some documentation I wrote about how to use the web service to modify various portions of the NetQoS systems:

Group management in NPC is performed through the AdminCommand web service (found at PortalWebService/AdminCommandWS.asmx?WSDL).  Adding or removing groups and adding or removing members from groups is accomplished by invoking the UpdateGroups operation (found at PortalWebService/AdminCommandWS.asmx?op=UpdateGroups).  Three main parameters are required: (1) useIDs, (2) allowDeletes, and (3) the XML defining the groups to be added/removed/updated.  Groups themselves are defined explicitly in the XML, while membership of groups is managed by a set of rules applied to each group.  Managed objects that match the rules are included as members of the group; objects that don't match are excluded.
The first two parameters are global and don’t change often.  The parameter useIDs instructs the operation whether or not to use the group ID values from the XML definition when identifying the group to be updated.  The useIDs parameter will need to be set to ‘true’ to ensure the correct groups are updated.  The only time a value of false would be used is when a single XML file is being used to import a group structure from one NPC system to another.

Thursday, January 19, 2012

Fighting SOPA and PIPA

SOPA and PIPA are two pieces of legislation aimed at decreasing the amount of illegal piracy that happens on the internet.  They could, if passed in their current version, grant the government powers that could cause problems for legitimate web sites.  I can't say it much better than Wikipedia and Google.  Read the information available before making any decision.

One thing I can do is help spread the message.  If you have a Twitter account and live in Texas, the following links will tweet a message to the Texas congressmen and women asking them to represent their constituents by opposing SOPA/PIPA.

Brought the bill to the house:
Lamar Smith

Texan representatives on Twitter:
Joe Barton
Kevin Brady
Michael Burgess
John Culberson
John Carter
John Cornyn
Henry Cuellar
Quico Canseco
Lloyd Doggett
Bill Flores
Kay Granger
Gene Green
Charlie Gonzalez
Louie Gohmert
Al Green
Ruben Hinojosa
Kay Hutchison
Jeb Hensarling
Sheila Jackson Lee
Eddie Johnson
Michael McCaul
Kenny Marchant
Randy Neugebauer
Pete Olson
Ron Paul
Ted Poe
Silver Reyes
Pete Sessions

Friday, January 13, 2012

How to Use Poll Instance Properties in a View


If you aren't completely, 100% comfortable with manipulating the database manually, stop reading this post now.  If you continue reading this post, you do so at your own risk.

In NV (and NPC if you've jailbroken it), you can create views using the NV Custom View Wizard.  One of the things the developers did but didn't really publicize is that you can pull poll instance properties into those views.  This includes any custom properties you may have added to the poll instance.  By default, all poll instances have at least two properties: 'Name' and 'Description'.  Built-in dataset poll instances (like ifstats) will have other properties which you can also use.  The trick is that you have to manipulate the database slightly in order to instruct the view to even look at the properties.  It isn't too bad, but like I said before, if you don't know how to do this with your eyes closed, don't try it.
  1. First you have to get the control id of the view (technically called a control) you have just created and will be modifying.  This is pretty easy to get: just open the view in the wizard and look in the URL for the controlid parameter.  In this example, the id is 1200031.
  2. Next you need to get into the database and take a look at the control properties for your control.  Something like this: 'select * from control_properties where controlid=1200031 order by pageid, propertiesid, userid, propertyname;'
That should result in something like the following.

ControlID | PageID | PropertiesID | UserID | PropertyName | PropertyType | PropertyValue | Editable | Enabled
1200031 | 0 | 0 | 0 | ColumnNames | string | latencymin,latencyaverage,latencymax | Y | Y
1200031 | 0 | 0 | 0 | ControlType | string | PollInstance | Y | Y
1200031 | 0 | 0 | 0 | Create.Date | string | Added at 1/13/2012 9:05:07 AM by nqadmin | Y | Y
1200031 | 0 | 0 | 0 | data.chartType | string | Table | Y | Y
1200031 | 0 | 0 | 0 | DataSetName | string | ISILONPerf | Y | Y
1200031 | 0 | 0 | 0 | Description | string | This report focuses on the worst values for the specified parameter and therefore may be more prone to problems or failure. | Y | Y
1200031 | 0 | 0 | 0 | DisplayFormats | string | ms|ms|ms | Y | Y
1200031 | 0 | 0 | 0 | DisplayNames | string | Minimum,Average,Maximum | Y | Y
1200031 | 0 | 0 | 0 | drillDown.target | string |  | Y | Y
1200031 | 0 | 0 | 0 | FieldNames | string | latencymin;latencyaverage;latencymax | Y | Y
1200031 | 0 | 0 | 0 | FieldNames2 | string |  | Y | Y
1200031 | 0 | 0 | 0 | FieldNames3 | string |  | Y | Y
1200031 | 0 | 0 | 0 | FieldNames4 | string |  | Y | Y
1200031 | 0 | 0 | 0 | FieldNames5 | string |  | Y | Y
1200031 | 0 | 0 | 0 | Filepath | string | /nqWidgets/Poller/wptTopN.ascx | Y | Y
1200031 | 0 | 0 | 0 | footer.text | string |  | Y | Y
1200031 | 0 | 0 | 0 | Limit | string | 10 | Y | Y
1200031 | 0 | 0 | 0 | MibTables | string | ISILON_MIB.nodeProtocolPerfEntry | Y | Y
1200031 | 0 | 0 | 0 | OrderBy | string | latencyaverage DESC | Y | Y
1200031 | 0 | 0 | 0 | PropertyNames | string | Description,Name | Y | Y
1200031 | 0 | 0 | 0 | RedThreshold | string |  | Y | Y
1200031 | 0 | 0 | 0 | Title | string | Isilon Node Performance Table by Protocol | Y | Y
1200031 | 0 | 0 | 0 | Wizard.Action | string | ReportWizard('/npc/ReportWizard.aspx?PageID={PageID}&CtrlID={CtrlID}&PropertiesID={PropertiesID}'); | Y | Y
1200031 | 0 | 0 | 0 | yaxis2.ColumnNames | string |  | Y | Y
1200031 | 0 | 0 | 0 | yaxis2.DisplayColors | string |  | Y | Y
1200031 | 0 | 0 | 0 | yaxis2.DisplayFormats | string |  | Y | Y
1200031 | 0 | 0 | 0 | yaxis2.DisplayNames | string |  | Y | Y
1200031 | 0 | 0 | 0 | YellowThreshold | string |  | Y | Y

  3. You'll notice that I have a property called PropertyNames.  This is the entry you need to add to this table.  The propertyname will be 'PropertyNames' as shown.  The propertyvalue will be a comma-separated list of the properties you would like to be available to this view.  For example, since I want both the description and the name properties available to this view, I added them both in there.
    You would normally add this record using an insert SQL statement (see the example query after these steps).  A 0 in the pageid, propertiesid, and/or userid fields represents a wildcard.  Technically, a view can exist multiple times on a page.  If it does, the pageid will contain the id of the page on which the control instance(s) exist(s).  The propertiesid refers to which instance of the view on the page.  The userid refers to any properties that the user may have modified and saved to only their account.  If you want this change to apply to all instances of the view, the easiest approach is to make the change to the default record (pageid=0, propertiesid=0, userid=0) and then re-customize any views that have been customized.
  4. Once you've got that done, you can go back to the view wizard.  Go to the 4th step.  You can now use the property values within expressions or the 'Where' field.  The syntax is as follows: p_PropertyName.property_value, where PropertyName is the name from the comma-separated list you entered into the database.  In my case, if I wanted to display the description, I would create an expression called 'Description' and the expression formula would be 'p_Description.property_value'.
One of the nice things about exposing properties is that you can use the properties in the 'Where' field of the view.  The where field allows you to pre-filter the view so that only certain objects show up.  For example, if I knew that the description contained an enumerated set of values (cifs, nfs, http, other), I could create a view that only shows the cifs objects by putting "p_Description.property_value = 'cifs'" in the Where field.
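For reference, here's a minimal sketch of the insert from step 3, using my control id (1200031) and the column layout shown in the table above; adjust the controlid and the property list for your own view, and back up the database before touching it:
insert into control_properties (ControlID, PageID, PropertiesID, UserID, PropertyName, PropertyType, PropertyValue, Editable, Enabled)
values (1200031, 0, 0, 0, 'PropertyNames', 'string', 'Description,Name', 'Y', 'Y');
With that record in place, p_Description.property_value and p_Name.property_value become available in the wizard expressions and in the 'Where' field.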

Unfortunately, this doesn't work for device properties like sysLocation, sysContact, sysName, sysObject, etc.  However, there is a script floating around that will take those parameters and store them in custom poll instance properties, which could then be exposed using this method.

Monday, January 9, 2012

How to convert VHS to DVD

We don't have that many VHS tapes lying around our house.  However, our wedding video is one important exception.  As a present for my wife on our 10 year anniversary, I decided to see if I had enough junk in my garage to do a conversion to a digital format so we could watch it.  Turns out I do.

The first thing I would obviously need is a VHS player, also known as a VCR.  Fairly simple, nothing special.  I might have gotten a little better quality if the VCR had an S-video output instead of RCA composite video.  Maybe.

Friday, January 6, 2012

How to Wipe a Computer

I find myself leaving another job in favor of a better job.  That's happened quite a bit recently.  I wonder what that says about the economy?

Anyway, I've got another laptop I need to wipe clean before I can turn it back in to my previous employer.  I figure, while I'm doing it, I might as well write down the various techniques.  You may want to do this any time you're turning in an old computer, recycling it, or giving it away.  Who wants to leave all that personal information on a computer!?

There are basically two options; the preference will be up to you.

Thursday, January 5, 2012

How to get Free HD TV

I hinted in my previous post that I made Hulu obsolete at our house at the time that I installed my Apple TV.  I didn't perform any real magic, but I do want to explain how I did it in case anyone out there is looking to do the magic that I do.

I normally only watch Hulu for ABC shows since I don't get good reception on our TV for the ABC affiliate here in Houston.  Let me explain.  Before I added the Apple TV to the mix, I used an antenna mounted in my attic which fed a signal to a pair of USB HDTV tuners connected to a PC connected to my TV.  I used Windows Media Center (WMC: a free piece of software included in most editions of Windows Vista and Windows 7) to watch and record TV.  WMC has a free guide built in, and since I used two tuners, I could record/watch up to two shows at one time.  It was great.  I could pause live TV, I could schedule my favorite shows to record, and since it was a full-blown PC, I could watch Netflix and use Hulu Desktop for online content.  I could even watch general conference via the internet browser.

All of this was great, except that the position of the antenna didn't give me great reception on ABC, and I only got fair reception on some channels, missing out on the secondary channels altogether in some cases.  So, at the same time that I installed the Apple TV, I chose to get one other piece of hardware to eliminate the need to have a PC connected to the TV.  After all, I had to have the PC turned on all the time in order to ensure that all my shows would get recorded.  That, combined with a big external hard drive that I had hooked up for TV show storage, added to the power and heat inside my little entertainment cabinet.  I knew that I could use my XBOX as a media center extender, so I knew I could shift the load from the TV PC to my office PC.  The only problem was that I would have to run coax cable from the antenna down to my office and then plug in the two USB HDTV tuners.  I didn't want to have to run more cable through the wall.

The solution was to get a SiliconDust HD HomeRun Dual.  I found it used on the internet (thank you Amazon) for around $75.  The advantage of this device is that it combines the two tuners into one device and it uses Ethernet as opposed to USB for connectivity to the Windows Media Center PC.  This means that instead of running a coax cable from the antenna through the walls to my office PC, I could just run an Ethernet cable from the antenna/HomeRun Dual to my home router.  Since this is much easier due to the placement of my router, this became the optimum solution.

So, I installed the HomeRun Dual in the attic and connected it to the antenna.  I ran an Ethernet cable from there down to my punch-down and from there patched it into my router.  I installed the little utility on my office PC and fired up WMC.  WMC found the tuner on the network without any real work and before I knew it, I was watching TV on my office PC.  As it turns out, any other PC on my network can use one or both of the tuners as long as another PC isn't using them.  They are a pool of tuners available to everyone on the network.  Awesome!

The only problem was that the reception had changed.  I wasn't getting all the channels I was getting before, or I was getting them but not well enough to watch.  This wasn't unexpected, since any time you change the wiring attached to an antenna, things can change.  I decided I needed to get my antenna higher so it would get better reception on those channels.  I mounted the antenna on the chimney and checked things out.  Lo and behold, I got wonderful reception on all the channels I used to get, and I also now get very good reception on ABC and the affiliates.  I even now get 3 sub-channels of PBS!

Now that I get ABC, I don't really need Hulu.  If there is something I want to watch that I don't get through Netflix, over the air HDTV, or isn't in my collection of DVDs, I can always check Hulu.  If all else fails, Amazon and gohastings.com are always there to help with a used DVD.