Wednesday, January 16, 2013

Automating NFA Parser Reports

UPDATE: CA Support has endorsed the NAST tool as the replacement for the NFAParser.  I haven't tested it, but if it's like the other updated tools, it will run faster.  The nice thing is that the syntax for running NAST silently is the same as for the NFAParser, so it doesn't take much to update these scripts to use the new tool.

A while back I was tasked with making it possible to view NFA Parser output inside NPC.  It was actually easier than I thought.  I came up with something that isn't as optimal as I'd like (I'll explain why later), but it works for now.

The first thing you have to do is download the NFA Parser, which is part of CA's Support Tools 6, and copy it to each harvester; if you don't want all the tools, you can download just the parser.  The output of the parser is an HTML file that's ready to be published by a web server so you can link to it from NPC.  The easiest way to do this is to call the parser with a working directory of C:\inetpub\wwwroot on the harvester; that way the output lands in that directory, ready to be viewed in a browser.  However, every time you run the parser, the output file's name contains a date/time stamp, which makes it difficult to link to.  The solution is to wrap it all in a batch file that clears the old output, runs the parser to generate new output, then renames the new output to a static name.  Here's what that batch script would look like:
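A minimal sketch of that batch file — the parser's install path, its silent-run arguments, and the time-stamped output filename pattern are all assumptions you'll need to adjust for your environment:

```batch
@echo off
REM Wrapper for the NFA Parser. Assumed install path and output filename
REM pattern; fill in your version's silent-run arguments.
cd /d C:\inetpub\wwwroot

REM Clear the old static output
if exist nfaout.htm del /q nfaout.htm

REM Run the parser silently; it writes a date/time-stamped .htm file here
"C:\NFAParser\NFAParser.exe" [silent-run arguments for your version]

REM Rename the newest time-stamped report to a static name we can link to,
REM then stop after the first (newest-first) match
for /f "delims=" %%f in ('dir /b /o-d NFAParser*.htm') do (
    ren "%%f" nfaout.htm
    goto :eof
)
```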


This could be tweaked a bit to keep the last X files using the following batch script:
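A sketch of that version, taking the timespan in minutes as the first argument and the number of reports to keep as the second (again, the parser path, arguments, and output filename pattern are assumptions):

```batch
@echo off
REM Usage: nfa.bat <minutes> <keep>  e.g. "nfa.bat 1 5" keeps the last
REM five one-minute reports as nfaout1.htm (newest) .. nfaout5.htm (oldest).
setlocal enabledelayedexpansion
cd /d C:\inetpub\wwwroot
set KEEP=%2

REM Delete the oldest report, then shift the rest up one slot
if exist nfaout%KEEP%.htm del /q nfaout%KEEP%.htm
for /l %%i in (%KEEP%,-1,2) do (
    set /a PREV=%%i-1
    if exist nfaout!PREV!.htm ren nfaout!PREV!.htm nfaout%%i.htm
)

REM Run the parser silently over a %1-minute timespan (placeholder arguments)
"C:\NFAParser\NFAParser.exe" [silent-run arguments, timespan of %1 minutes]

REM Rename the newest time-stamped report into slot 1
for /f "delims=" %%f in ('dir /b /o-d NFAParser*.htm') do (
    ren "%%f" nfaout1.htm
    goto :eof
)
```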


This second option moves the existing files up in a queue by renaming each one with the next higher number, except for the highest-numbered one, which gets deleted.  So, if I created a scheduled task like this: C:\inetpub\wwwroot\nfa.bat 1 5, I would eventually end up with 5 files, each representing one of the last 5 runs, each spanning 1 minute, with nfaout1.htm being the most recent.  This second method is the option I'm using in production, and it seems to work just fine.  To give easy access to the files, I create an HTML table with a column for the servers and a column for each of the retained reports, with a row for each harvester.  I put that HTML in my custom content directory and load it into a browser view.
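That table might look something like this, assuming five retained reports and a harvester reachable at a hypothetical hostname harvester1 (add a row per harvester):

```html
<table border="1">
  <tr>
    <th>Harvester</th><th>Latest</th><th>-1</th><th>-2</th><th>-3</th><th>-4</th>
  </tr>
  <tr>
    <td>harvester1</td>
    <td><a href="http://harvester1/nfaout1.htm">Report</a></td>
    <td><a href="http://harvester1/nfaout2.htm">Report</a></td>
    <td><a href="http://harvester1/nfaout3.htm">Report</a></td>
    <td><a href="http://harvester1/nfaout4.htm">Report</a></td>
    <td><a href="http://harvester1/nfaout5.htm">Report</a></td>
  </tr>
</table>
```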


Obviously, running the report more frequently and with a longer timespan will increase load on the harvester, so don't set it up to run a 1-minute report every minute.