Saturday, December 15, 2012
iOS App Review: Google Latitude
Google Latitude - Google, Inc.
Summary - This handy little app scares people who are paranoid about Google being their big brother. I look at Google in this way: "If Google wanted any of my information, they'd get it. So why go to the effort of hiding it?" Latitude tracks my current GPS position and uploads it regularly to the Google cloud. Within the app, I have a few defined friends that also use Latitude. I choose to share my current location with them. This means I can always check where my wife, my Dad, my brother, and my best friend are. Hopefully the service won't be turned off without the functionality being moved into Google+.
Pros - Easy to set up, easy to add friends, quick to launch, and I always know if my parents are on their way over for a surprise visit to see the grandkids. I can also set an option (disabled by default) to keep a log of my locations (instead of just my current location). This could be useful for parents trying to keep an eye on their kids or for catching the crooks that steal your phone.
Cons - Google knows where you've been. Not many people use it. It's not yet integrated with Google+ (maybe someday).
Friday, December 14, 2012
iOS App Review: Grabatron
Grabatron - Future Games of London
Summary - This game has kept itself on my first page of apps for several months now. It's a simple game where you fly a UFO by tilting the iPhone. Tap the screen and a giant claw descends. The goal is to grab stuff with the claw and destroy it. Simple gamification techniques make the game easy to put down and resume later while still feeling like I've made progress. The whole thing has a fun 1950s Aliens Attack! feel to the graphics and theme.
Pros
- Easy to learn the mechanics of the game
- In-app upgrades can be bought with real money or with crystals earned in the game
- Crystals can also be earned by watching neatly tucked away ads
- The object of the game is destruction!
Cons
- Loads slowly on my iPhone 4. Once loaded, performance is fine
- Music and sound effects were annoying at first. Easy to disable, though
Thursday, December 13, 2012
iOS App Review: Tower Madness
TowerMadness - Limbic Software
Summary - One of the most fun games I've played. It's not that much different than other tower defense games, but consistent new maps and tower upgrades keep me coming back and playing the same parts of the game over and over.
Pros
- New maps released somehow right after I get tired of the existing maps
- New weapon upgrades released somehow right after I get tired of the existing weapon upgrades
- Good graphics
- Easy to learn
- Endless mode allows for longer play with more money and more aliens to kill
- Pinch/spread zoom in/out seamlessly integrates with 3D environment
- 1x/2x/4x play modes so you can play faster
Cons
- No 'upgrade all the way' or 'upgrade as much as I can afford' option (instead you have to go through each upgrade manually)
- No option in endless mode to send 10/50/100 waves at once (instead you have to tap the single button 10/50/100 times)
- More maps can be bought as in-app purchases - that's how they make money
- More weapons can be bought as in-app purchases
- 8x play mode would be nice
- My iPhone 4 slows down when more than 40 waves are present on the playing field and I'm in 4x mode. It doesn't seem to affect the score
- One tower has an upgrade that turns it into a Tesla Coil. While it's a great weapon, it doesn't work against airborne enemies. Since it's all science fiction anyway, I would prefer that this upgrade gave the tower the ability to engage airborne enemies, much like the famed Tesla Death Ray
Wednesday, December 12, 2012
SA Collector Feed List through ODBC
Continuing my effort to document the various ways I've used the ODBC connector for the NetQoS products, I'm going to post a couple of the queries and controls I've built and use in production. Today's query comes from a need to review all the collector feeds for all the collectors on two different master consoles. Collector feeds are configured for each collection NIC on standard collectors. I haven't run this query on an MC with an MTP connected, but it should return the same thing.
The query pulls from three different tables and results in a list of feeds and options to edit each collector. Unfortunately, the functionality to directly modify the feed properties within the GUI isn't present yet, but something like it should be coming soon. Here's the SelectCommand and OdbcConnectionString to put in the configuration.xml:
To create the view, run the following SQL commands against the NPC server:
Tuesday, December 11, 2012
ODBC Connector for NPC
Most of the built-in reporting for NPC is great and can be modified by the administrator to get things to look exactly the way they need to be. However, sometimes the built-in reports just don't do it.
Specifically, the data that we need to pull is not necessarily available, either because the views themselves are limited or because the queries are not the kind of reports that are normally run in NPC or even in the data source. In this case, we have to turn to the ODBC Connector. Not much is known about it, but I was recently introduced to it as a way of bringing external data into NPC. Turns out it's very flexible and fairly easy to work with.
Side note: Software should come out of the box with the best settings enabled but be customizable to the point that it can field even the strangest requests. NPC is close to that. CAPC promises to lock down the customizability in an effort to ensure that people are looking at things the right way. The problem is that people don't usually want to change the way they look at things. CAPC will fail at this unless it becomes much more flexible. OK, stepping off my soap box and returning to topic.
The ODBC connector allows you to source a table view in NPC with any MySQL query on any server your connector server has MySQL access to. So, if you can run the query manually, you can use the query as the source of a view on an NPC page. Pretty cool huh? I've built a couple of queries and views into my own NPC. Now instead of having scheduled tasks that run queries nightly and display that (sometimes stale) output in a browser view, we have dynamic live data in paginated form. Here's how you do it.
Standard warnings apply: you'll break your stuff, I won't help you fix it, neither will CA support. Don't do this. For more details look at the footer of this page. Make a backup first.
Install the ODBC connector. Usually this is done on the NPC server, but there's no reason it couldn't be done on one of the data sources or even on a standalone server. Warning: all NetQoS services on the target server will be stopped as part of the installation, so you'll need to do a reboot at the end of the installation. Don't worry, you're prompted to do it.
After you get it installed, you'll need to hack NPC a little so you can add it as a data source. Run the following command on the NPC server.
Now add the ODBC connector as a data source in NPC just like you'd add any other data source. So far, so good. Now we need to design our query, build the view that will show the data, and put that view on the page.
I use HeidiSQL because it's easy to look around at things and involves less typing than the command line. Use whatever you want. Either way, design your query so that you can run it and the output looks the way you want it to look in NPC. Don't use spaces in the field names; we'll replace the field names later with more readable display names. For example, here's an easy query: select 'Hello World!' as title, 'Nice' as Column2, '1' as Column3;. This results in one row with three columns. This query needs to be added to the configuration XML in D:\netqos\portal\integration\ODBC\configuration.xml. You add it between the <SelectCommands> tags using a <SelectCommand> tag (the tags are probably case sensitive, so match the ones already there):
Once that's in there, you'll also need to specify the portion of the configuration that tells the ODBC connector which server to connect to and run the query against. You add it between the <OdbcConnectionStrings> tags with an <OdbcConnectionString> tag like this:
This should all be self-explanatory. Save the XML configuration and we should be ready to move on to the next step: building the view.
Even though we might be pulling data from the NetVoyant database, we can't use the NetVoyant view wizard to create views for the ODBC connector. Luckily, it only takes a few database inserts to create a new view. Be careful, as removing a view is a bit more complicated than adding one. To add a view, simply use the following SQL script. The comments in the script should be descriptive enough; each comment describes the line that follows it. The key is to make sure the Integration.Parameter.SelectCommandName matches the name specified in the ODBC configuration.xml (in this case 'HelloWorld') and that the Integration.Parameter.OdbcConnectionStringName is set to the name specified in the config.xml (in this case 'NetVoyantMC').
INSERT INTO controls
SELECT
    -- sets the controlid to the highest unused id below 30000
    if(min(id)-1 > 0, min(id)-1, 29999),
    -- name of the view
    'Hello World',
    'Graph',
    -- category of the view
    'ODBC Connector Views',
    '/nqWidgets/Integration/wicTable.ascx',
    1, 0x200000, 0, 0, 0, 1, '|nc|g|', '', 'Y'
FROM controls
WHERE datasourcetype = 0x200000;

INSERT INTO control_properties (ControlID, Editable, PropertyName, PropertyValue) VALUES
    -- name of the view
    (-1, 'N', 'Title', 'Hello World'),
    -- sql column names separated by semicolon
    (-1, 'N', 'DisplayColumns', 'title;column2;column3'),
    -- format of the columns (normally string)
    (-1, 'N', 'DisplayFormats', 'string,string,string'),
    -- nice column names
    (-1, 'N', 'DisplayNames', 'Message;Column 1;Column 2'),
    -- column widths (not sure on units)
    (-1, 'N', 'ColumnWidths', ',,'),
    -- alignment of columns
    (-1, 'N', 'HorizontalAlignments', 'Left,Center,Right'),
    -- sql column name to sort by
    (-1, 'N', 'SortField', 'title'),
    -- desc or asc
    (-1, 'N', 'SortDirection', 'Desc'),
    -- name of selectcommand in config.xml (which query to run)
    (-1, 'N', 'Integration.Parameter.SelectCommandName', 'HelloWorld'),
    -- don't change this
    (-1, 'N', 'Integration.DataType', 'SelectCommand'),
    -- don't change this
    (-1, 'N', 'Integration.Parameter.DateTimeFormat', 'yyyy-MM-dd HH:mm'),
    -- max 200
    (-1, 'N', 'Integration.Parameter.TopN', '200'),
    -- name of odbcconnectionstring (which server to query)
    (-1, 'N', 'Integration.Parameter.OdbcConnectionStringName', 'NetVoyantMC');

UPDATE control_properties
SET ControlID = (select min(id) from controls where datasourcetype=0x200000)
WHERE ControlID = -1;
You can have SelectCommands and OdbcConnectionStrings in your configuration.xml even if they're not used.
After using the SQL commands above to create the view, simply edit a page and look in the corresponding view category for your new view. Add it to the page and it should run the query when the page is refreshed, showing updated data.
If you've added the view but later change the select query in the config.xml, you'll also need to update the corresponding display properties (DisplayColumns, DisplayNames, and so on) so the new columns show up. Also, you can use the concat() and convert(... using utf8) functions in a query to put advanced HTML into a particular column (like a link to a certain item's details page). Something like this would give you a row for each collector with links to edit each collector (I'll post the full configuration in my next post):
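The real collector query depends on the NetQoS tables involved and is part of the full configuration mentioned above, but as a contrived illustration of the concat()/convert() trick, a query like this returns one row whose second column renders as a clickable link. The collector name and URL below are made-up placeholders, not real NPC or collector addresses.

SELECT
    'collector01' AS Collector,
    -- build the anchor tag with CONCAT and run it through CONVERT(... USING utf8), as described above
    CONVERT(CONCAT('<a href="http://collector01.example.com/feeds" target="_blank">Edit</a>') USING utf8) AS EditLink;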
In the next few posts, I'll post some queries (like the one above) and views that I've built to do some basic reporting in NPC from SuperAgent and NetVoyant.
Monday, December 10, 2012
iOS App Review: Flashlight by Rik
I've been asked before what apps I have installed on my iPhone. I figured I could detail which ones I use and why through my blog. This will give me room to put my reasons for/against a certain app. I'll try to include my overview, pros, and cons. Here goes nothing...
Flashlight by Rik - Henri Asseily
Summary - This turns the iPhone's camera's flash into a flashlight. Instead of flashing on when taking a picture, this app turns on the flash continuously.
Pros - Tons of options including an option to darken the screen while in use. When in darkened mode it shows your battery usage so you can keep an eye on things. It includes standard flashlight functions like auto SOS message through Morse code, manual Morse code, and strobe (more just for fun). There's also an option to control the intensity, so it can be turned down to make your battery last longer. It's also free.
Cons - Like any other flashlight app, it will consume battery while it's on. You can't use other apps and keep the flashlight on. This is not unique to this flashlight app and has to do more with iOS' so called 'multitasking'.
Monday, December 3, 2012
Automating iTunes
In order to train my kids to go to sleep both with ambient noise and without it, I wanted to set up a way to have music play in their room on some nights and not on others. Given that the holiday season is here, I decided to go ahead and document how I did this.
First of all, I put a small form factor PC in their room on one of the bookshelves, behind the books. I hooked up some cheap speakers to it and installed iTunes. This was enough to get started: since iTunes on that PC was joined to my Home Sharing library, it was easy to control the music from my iPhone. There were several problems with this: 1) I had to remember to turn the music off or it would play all night, 2) I had to carve out a section of my iTunes library that would not wake the boys up, and 3) I sometimes had to turn on the music while holding a sleeping baby (not easy). So, I decided to automate things. A quick Google search led me to Maximize Software's collection of Windows iTunes scripts. After unpacking the scripts and familiarizing myself with the various options, I started building a batch file to tie it all together.
Let's define the objectives:
- I want the music to come on and turn off by itself - this should be pretty easy to do using a scheduled task.
- I don't want the music to start off at full blast in case we've put the boys down early.
- I want the music to play whenever we put the boys to sleep, which could be as early as 7pm but as late as 9pm.
- I want to be able to rotate playlists so it's not the same thing every night.
- I want to be able to kick off the music any time.
- I want to be able to manually override either the current song or the current volume level.
So, the first thing I did was copy all the music I wanted to play into that computer's iTunes library. I wanted to be able to use my own computer independently of the nursery music, so all the music needed to be local. The next thing I did was set up a couple of playlists. The first is called AllMusic and is a smart playlist with the following criteria: Media Kind = Music & Playlist is not 'Waves' & Playlist is not 'Christmas' & Location = 'on this computer'. Waves and Christmas are the other two playlists. Christmas is also a smart playlist, with the following criteria: Media Kind = Music & Genre = Christmas & Location = 'on this computer'. Waves is a manual list that contains a single 1 hour 8 minute MP3 of waves crashing on a beach. The idea behind the playlists is that each playlist gets a night and there would also be one quiet night, setting up a rotation of 4 nights.
The next thing to do was to setup the batch file to play any given playlist, ramp the music up, and ramp the music down. Here's the batch file:
@echo off SetVolume.vbs 0 PlayPlaylist.vbs "%1" for /L %%A IN (0,1,100) DO ( echo.Increasing volume by 1/sec. call:WAITER 1 SetVolume.vbs +1 ) call:WAITER 7200 for /L %%A IN (100,-1,0) DO ( echo.Decreasing volume by 1/10 sec. call:WAITER 10 SetVolume.vbs -1 ) Stop.vbs SetVolume.vbs 100 GOTO:EOF :WAITER echo.Waiting for %~1 second(s). choice /C x /N /T %~1 /D x > NUL goto:eof
@echo off SetVolume.vbs 0 PlayPlaylist.vbs "%1"This section starts things off by turning off screen echo, setting the starting volume to 0, then starting to play the playlist provided as the first parameter when calling the batch file.
for /L %%A IN (0,1,100) DO (
    echo.Increasing volume by 1/sec.
    call:WAITER 1
    SetVolume.vbs +1
)

This section ramps up the volume over 100 seconds, increasing it by 1% every second, and makes the first use of the WAITER function shown next. I chose to increment the volume instead of setting it to a specific level so that I could manually override it; this means I could use my phone to push the volume to 100 immediately instead of waiting 100 seconds. If the script tries to increase the volume past 100, the volume just stays at 100. If the loop instead set the volume to a specific level each time and were in its 20th iteration, then pushing it up to 100 by hand would only last until the next second pushed it back down to 21. Not desirable, especially when ramping down the volume.
:WAITER
echo.Waiting for %~1 second(s).
choice /C x /N /T %~1 /D x > NUL
goto:eof

The WAITER function simply waits a predetermined number of seconds. So, if I wanted to wait X seconds, I would just call it like this: call:WAITER X.
call:WAITER 7200

This line waits for 7200 seconds, which is 2 hours. This is my 2-hour window; if I start the music at 6:45, it'll still be playing even if we put the boys to bed late.
for /L %%A IN (100,-1,0) DO (
    echo.Decreasing volume by 1/10 sec.
    call:WAITER 10
    SetVolume.vbs -1
)

This section uses the WAITER function again to ramp down the volume over 1000 seconds (or 16 minutes and 40 seconds).
Stop.vbs
SetVolume.vbs 100
GOTO:EOF

This section cleans up after everything is over, stopping playback and resetting the volume so the whole thing is ready to play music manually during the day if I want to.
I put this batch file into the same directory as the scripts and set up three scheduled tasks, each starting on a different day and repeating every 4 days (a sample command for one of them follows the list):
- sleepitunes.bat AllMusic
- sleepitunes.bat Christmas
- sleepitunes.bat Waves
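For reference, here's roughly what one of those tasks looks like when created from the command line instead of the Task Scheduler GUI. The task name, script path, start time, date, and account below are placeholders (and the date format follows your system locale), so adjust them for your own machine; the account should be the one that owns the iTunes library.

schtasks /Create /TN "Nursery Music - AllMusic" /TR "D:\iTunesScripts\sleepitunes.bat AllMusic" /SC DAILY /MO 4 /ST 18:45 /SD 12/01/2012 /RU MEDIAPC\mediauser /RP password

The other two tasks are identical except for the playlist argument, the task name, and a start date offset by one and two days, which is what staggers the four-night rotation (the fourth night has no task, so it stays quiet).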
This has worked pretty well for us. I've toyed around with doing something like this as an alarm clock, but since the boys were born, I haven't really needed an alarm clock. However, it wouldn't be very difficult. The ramp up rate could be changed simply by changing the first call to the waiter function from 'call:WAITER 1' to 'call:WAITER 10'.
Monday, October 22, 2012
NPC and the footer.text Property
Often when displaying report data, it is handy to have a little explanation about the data being presented. In the case of NPC, this is accomplished by means of the footer.text property. As it turns out, this footer.text is HTML based and can use some fairly advanced HTML features.
I say this can be done in NPC, but it can't - at least not unless you jailbreak NPC. At this point, CA has decided that CAPC will replace NPC. Currently, there's no upgrade path, and many of the features that some customers rely upon haven't been rebuilt into CAPC (remember, CAPC is a rebuild from the ground up, so just because it was in NPC doesn't mean it will be in CAPC). I've already posted on the community about some of the features that I require before I will upgrade myself or anyone else to CAPC (the browser view, for example).
However, since NPC now has one foot in the grave (CA is pushing IM 2.0, which requires CAPC, harder than Jim Jones pushed his Kool-Aid), I see no reason why anyone wouldn't want to start experimenting with some of the hidden features in NPC. So, if you haven't jailbroken NPC yet, you might consider it. If you don't have NV as a data source, though, don't worry about it: the NV view wizard is all you really get, and while that opens tons of doors to customization you never thought possible, it doesn't do anything for non-NV customers. If you're interested, email me. Once CA puts the last nail in NPC's coffin, I'll publish the steps here. Until then, I need deniability.
So, on to the topic of this post. When using the view wizard to create/edit NV views in NV (or in NPC, if you've jailbroken it), it turns out you can put some fairly complex HTML in the footer field. Obviously, you can use {Resolution} and {Samples} in there to pull the current resolution and data point count (whether it's sampled data or not). However, just today I tried and was able to put a fully formed HTML table in the footer. I then tried to take it a step further and embed a YouTube video. That worked fine as well.
In case the usefulness isn't obvious: this can help when explaining to new users what certain views mean. For example, I could put together a complete page with live data and YouTube videos or text explaining what each view means and how to use it. I could even put some text or an image in the footer that expands a DIV section containing a YouTube video explaining how to interpret the data. The nice part is that the footer is tied to the view. Previously, I would add a separate browser view embedding the help information, which is fine when you want the data on the left and the explanation on the right. Putting the explanation in the footer, though, ensures that it stays with the data even through moves and copies.
So in order to put a hidden YouTube video in the footer of a view:
- First check to make sure the view supports a footer (most do, this is just a sanity check).
- Install HeidiSQL and connect to your NPC server.
- Go to the control_properties table and filter it to only show your view. Get the controlid from the NPC web gui by looking at the status bar when you open the view menu and mouseover the edit option. Then set a filter for controlid=X, pageid in (0, Y), and propertiesid in (0, Z).
- Find the footer.text property.
- Edit the propertyvalue column and insert your html.
- Save the record and you're good to go.
If you change the footer.text where pageid=0, propertiesid=0, and userid=0, you are modifying the default definition of the view and any existing views tied to the default settings and any new views will have this modified setting.
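If you'd rather script the change than edit the row by hand in HeidiSQL, the same edit is a single UPDATE. This is only a sketch: the controlid (12345) is a placeholder from the steps above, the HTML is an arbitrary example, and you should match the propertyname spelling and the pageid/propertiesid/userid values to the exact row you found.

UPDATE control_properties
SET propertyvalue = '<b>Averages are calculated over the selected time range.</b>'
WHERE controlid = 12345
  AND propertyname = 'footer.text'
  AND pageid = 0
  AND propertiesid = 0
  AND userid = 0;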
Here's the code you would insert if you wanted to embed my SA video. You'll need to take out the carriage returns to make it one long line:
Tuesday, October 9, 2012
Windows 8 Highlights
I've been watching (mostly passively) the development and announcements around Windows 8. Microsoft's new flagship desktop operating system will be unleashed sometime later this month. I say unleashed, but I really should say released again, since there were already a developer preview, a consumer preview, and a release preview of the operating system. In my case, the multiple previews actually cemented a prejudice I have against Microsoft's radical new ideas in their first iteration. I downloaded and installed the developer preview, and it told me everything I needed to know about Windows 8: it's the next Windows ME or Windows Vista, and I won't be rushing to it any time soon. Microsoft has tried to do something radically different, but they never get it right on the first try; it's always the iteration afterward that hits pay dirt (right after ME was XP, and right after Vista was 7).
Either way, Michael Mace put together a great introductory video explaining what changes with this version of the operating system. Here it is.
Wednesday, October 3, 2012
NQSync
Here's another one of my tools. I found myself constantly needing to make sure the supportcig.exe is up to date on all my servers. I also spend a lot of time tweaking my nqbackup.bat script, and every time there's a change, I need to copy it out to all my servers. I also ended up trying to organize the mess that is the huge quantity of supportcig files distributed across all my servers. So, of course, I scripted it.
NQSync.bat does several things I'm proud of. First of all, it finds two unused drive letters on the system where it's run, starting with Z. It does this so the script can temporarily map network drives to all the target systems and to the master system. I should explain: the script can be run from anywhere, as long as the credentials used to run it have admin rights on all the servers. First it maps a drive to the master server's D: drive. Then it maps another network drive to each of the target servers in turn (this is why we need two unused drive letters). Then, depending on the options, it will do up to three things:
- Update the supportCIG.exe file on the remote server - the script will copy the supportCIG.exe file from the master's D:\SupportCIG\ directory to the target server(s)' D:\SupportCIG\ directory. To enable this option, include the switch /u anywhere on the line when calling the script.
- Gather the supportCIG output files onto the master server - the script will copy any zip files found in D:\SupportCIG\ on the target server(s) to the master server's D:\SupportCIG\[Server]\ directory, where [Server] is the server's name or IP address. To enable this option, include the switch /g anywhere on the line when calling the script.
- Synchronize tools to the remote server - the script will copy the entire contents of the D:\DBTools3 and D:\SupportTools6 directories out to the remote server in the same locations. This is how I get a copy of my updated nqbackup.bat file out to all the servers. To enable this option, include the switch /s anywhere on the line when calling the script.
The script itself will need to be modified to identify the master server. Just go to line 2 and put the name or IP address of the master server. Also, you'll need to provide a text file containing the name or IP address of each target server (one per line). Obviously, it doesn't need to contain the master server's IP address. Make the path to this text file one of the arguments. For example:
nqsync.bat /u /g D:\lists\allservers.txt
This would update the CIG and gather all the outputs for all the servers in the text file. Obviously, if the text file is in the same directory as nqsync.bat, you can exclude the full path and just put the name of the file.
NQSync.bat uses some fun code to find the first and second available drive letters. I've tested this by using this script. It identifies which drive letters are in use and which are available.
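If you're curious, the basic idea can be reproduced in a few lines of batch. This is a rough sketch of the approach rather than the exact code in NQSync.bat: it walks backward from Z and remembers the first two letters that don't answer a vol query (an empty optical drive will also look free, so treat this as a starting point).

@echo off
setlocal
set "FIRST="
set "SECOND="
rem Walk backward from Z and record the first two drive letters with no mounted volume
for %%D in (Z Y X W V U T S R Q P O N M L K J I H G F E) do (
    vol %%D: >nul 2>&1
    if errorlevel 1 (
        if not defined FIRST (
            set "FIRST=%%D:"
        ) else if not defined SECOND (
            set "SECOND=%%D:"
        )
    )
)
echo First available letter:  %FIRST%
echo Second available letter: %SECOND%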
Wednesday, September 26, 2012
Installing IM 2.0 on Vanilla CentOS 6.3
Following up on an earlier post, I've decided to go ahead and post the current versions of all my scripts. I should explain a little about my environment. My VMs are cloned from a vanilla* CentOS 6.3 64-bit installation. They each use DHCP to get their IP addresses from a DHCP server with reservations, so they always get the same addresses†. On the same network I have a web server that hosts the installers for CAPC, IMDA, and IMDR. So, I've written scripts that download the installers, run all the prerequisite steps and checks, and run the installers. The scripts (DLAll.sh, InstallDR1.sh, InstallDR2.sh, InstallDA.sh, InstallDC.sh, and InstallPC.sh) are described below. They are not up to date with the current installer and will need to be modified to work properly with it.
Here's how I use them:
DLAll.sh - this is not really a script, just a text file containing the bootstrap commands to kick off all the other scripts. I copy and paste from this script into the ssh session. Each section should be clearly commented as to what it does. The password to get into my server is masked, but it wouldn't work on your network unless your server were a clone of mine anyway. Modify as you wish. I couldn't get the PC to talk to the DA without disabling the firewalls, so that's at the top. I usually disable the firewalls on all of them and will probably build that configuration into my base image in the future.
InstallDR1.sh - this script gets everything ready for the Data Repository installation. These are basically all the commands that are run as root.
InstallDR2.sh - this script contains the commands that the dradmin user needs to execute. This one requires manual intervention at 4 places, so I've documented what I do at each stop.
InstallDA.sh - this one installs the Data Aggregator
InstallDC.sh - this one installs the Data Collector
InstallPC.sh - this one installs Performance Center
I think that might be all, so I'm going to leave it at that for now. If you've got any suggestions on how the scripts could be improved, let me know.
*Vanilla=updates applied and startup mode changed to console only
†This may be the source of other problems, since the Linux hosts file doesn't actually have the IP address in it.
Thursday, September 20, 2012
Installing CAPC on vanilla CentOS 6.3
Since IM 2.0 came out, I've been testing the installation of the various components on CentOS 6.3. (No, you're not seeing things; my last post was going to be this post, but I went on a rant instead.) I've come up with a few scripts that I use to get everything configured. The first script was actually to install the data aggregator and repository. It's not really a script, since some parts can't be scripted; however, the parts that can be scripted have been. I just copy and paste the commands into a PuTTY window.
Anyway, the CAPC installation can be completely automated. This is on a vanilla CentOS 6.3 64-bit installation. The only change I made was to edit /etc/inittab to boot to the console instead of the GUI (change the 5 to a 3) and reboot. I don't bother setting static IP addresses in my lab; I use DHCP reservations, which makes configuration a snap. I connect to the server via PuTTY and paste in the following commands:
There you go. I'll post the other scripts in separate posts since they're a bit more involved.
IM 2.0 on Linux vs. Windows
With the release of IM 2.0, I've been testing the installation of the various components on CentOS, because my lab's investor (my wife) doesn't see the need to purchase a Red Hat license. All the better anyway, since others might want to know whether CentOS is an option to save on adoption costs. Frankly, I'm not sure why CA decided to go with RHEL.
While it is probably the most popular Linux server operating system, all (I repeat, ALL) of the previous NetQoS software ran on Windows. I'm not counting MTP, since it's sold as an appliance, not software. The target audience for the original NetQoS products was the network engineer. It has since bloomed to include application owners and server administrators. However, if you look at the role of the person who normally administers and champions the NetQoS products, it's still a network engineer.
It is my opinion that network engineers are most familiar with two operating systems: Cisco IOS and Windows. There will be the occasional case where the network engineer used to be on the server team but is now working on the network side. While this obviously happens, I think there are just as many server-turned-network engineers who come from Linux or mixed environments (Windows and Linux; come on, even Linux-only environments have Exchange) as come from Windows-only environments. So, my conclusion is that most network engineers will be most familiar with Cisco IOS and Windows (from both server OS and desktop OS experience). IM 2.0 should have been released on Windows.
There is another possible reason to use Linux over Windows: speed. I agree with this argument. Even with CentOS, I can turn off the GUI and save the resources that would otherwise be dedicated to displaying a locked screen 99.999% of the time. However, the minimum RAM requirement for IM 2.0 is 4GB. What!? I thought Linux was the better performer and could get away with less RAM. Well, it turns out that even in a lab environment monitoring a very small infrastructure, 3GB isn't always enough. The fact that I installed DA/DR on a box with only 1GB was pointed to as a possible reason why I was seeing problems with my installation. Wait guys, if I have to dedicate a ton of resources anyway, why don't we just run it on Windows?
Wasn't IM 2.0 supposed to be developed in Java? If that's the case, why does the OS even matter? Shouldn't it be fairly trivial to compile installers for all of the major operating systems?
I'm not a developer, so you really shouldn't be reading any of this without your tongue planted firmly in your cheek. But still.
Really?
I have to learn Linux?
Really?
I have to purchase RHEL?
Really?
I have to dedicate at least 4GB of RAM in a lab environment?
Really?
Wednesday, August 15, 2012
Deleting Unused Views from NPC
I posted this on the community, but I've had to look it up a couple times and always came back to my blog to look for it before I went to the community. That told me I should have it posted here.
If you've used the custom view wizard to create views in NPC, you may have noticed that there's no way to delete views. Also, if you've ever installed an ePack or used the nqwebtool (not recommended, doesn't work with all versions), NPC views will get created. There may come a time when you want to delete those views; for example, if you've deleted the dataset the view is tied to or if you recreated a view that already exists and you don't need the duplicate. Like I said, there's no way to do this in the GUI, but there is a way to do it through the database.
Standard warnings apply, don't do this, it will break your stuff, you're on your own, back up your stuff, don't cry to me, don't tell support I told you this would work without a problem, you're on your own, etc. etc. etc.
You should be able to delete views from NPC by creating a page (preferably not in your 'my pages' menu) and putting the views you want to delete onto that page. Then execute a couple of queries that delete the information from the database for all the views on that page. You'll need the page id of the page you created; just look in the URL for the pageid. It should be a number, usually 5 digits, like 36598.
In the following example, that's the pageid I'll use. These queries will show you what will be deleted:
These queries will execute the deletion:
This doesn't delete views created on the device, router, server, switch, or any of the poll instance context pages. Those would require a bit more work. The following query should result in the ID numbers of any views in NPC that no longer have the specified dataset:
select distinct controlid from control_properties where propertyname='DataSetName' and propertyvalue='<dataset_short_name>';
Where <dataset_short_name> is the short name of the dataset (i.e. avail as opposed to 'Device Availability').
If we combine that query with a query to NV that lists the datasets, we can easily make sure to catch all views for obsolete datasets. The problem is that this is a bit more complex, because NetVoyant has some datasets that aren't in the dataset list. Actually, NV has some contexts that aren't in the dataset list. So, get the current dataset list with this query on the NVMC:
select dataset_name from datasets;
Then change the results, which come back as one dataset name per line, into something that can be used in the query, like this:
'aimdc','aimhost','aimvm','avail','ciscoMemPool','ciscoSwitch','ciscoSystem','dsx1near','dsx3near','etherlikepaus','etherstats','frcircuit','hrdevice','hrprocessor','hrstorage','hrswrun','ifstats','nbarstats','protodist','qosclass','qoscolor','qosiphc','qosmatch','qospolice','qosqueue','qosred','qosset','qosts','reach','rtthttp','rttjitter','rttstats'
Then add the following to the end of that list:
'rttstats,rttjitter,rtthttp','event_log','event_list','rttstatscap:operations','ciscoProcess'
So it reads like this:
'aimdc','aimhost','aimvm','avail','ciscoMemPool','ciscoSwitch','ciscoSystem','dsx1near','dsx3near','etherlikepaus','etherstats','frcircuit','hrdevice','hrprocessor','hrstorage','hrswrun','ifstats','nbarstats','protodist','qosclass','qoscolor','qosiphc','qosmatch','qospolice','qosqueue','qosred','qosset','qosts','reach','rtthttp','rttjitter','rttstats','rttstats,rttjitter,rtthttp','event_log','event_list','rttstatscap:operations','ciscoProcess'
So, to identify all the views in NPC that aren't tied to one of these datasets, do this query on NPC:
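Based on the single-dataset query above, that check is the same statement with the list negated. A sketch (the list is abbreviated here; paste in the full list built above):

select distinct controlid
from control_properties
where propertyname = 'DataSetName'
  and propertyvalue not in (
    'aimdc', 'aimhost', 'aimvm', 'avail', /* ...the rest of the list above... */
    'rttstats,rttjitter,rtthttp', 'event_log', 'event_list', 'rttstatscap:operations', 'ciscoProcess'
  );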
To delete all these views, their properties, and remove them from any pages they may be on, use these:
Note: don't copy and paste these queries as they are specific to my installation. You may have built other datasets and views that should be included in this process. If you copy and paste these queries, those views will be deleted and you'll have to rebuild them.
Again, this isn't recommended or endorsed by CA. You should not try this.
Wednesday, August 8, 2012
Running commands on a remote computer
In a Windows environment, running commands on a remote computer isn't as easy as it should be. There are third party tools out there that basically install an agent on the remote computer that allows you to push commands through the agent to the remote computer's command interpreter. While this may be fine and probably works, it's not the only option. In fact, there is a built in method of executing commands on a remote computer. It involves the scheduled tasks feature of Windows.
Most people use scheduled tasks to run a program on a schedule. In fact, most people don't even use scheduled tasks; if they do, it's to run defrag (on an older version of Windows). However, scheduled tasks can be very powerful if used properly, because programs can be run locally on the box using either system credentials or a specific user's credentials. While scheduling a task might not seem like the best way to run code remotely, there's a little-known feature that actually makes this work wonderfully: schtasks. This command-line utility allows for programmatic manipulation of scheduled tasks. The kicker is that it can be used to manipulate scheduled tasks on a remote machine!
Therein lies the entire strategy of running code remotely. First of all, Microsoft has put together a surprisingly helpful set of examples. Check it out to familiarize yourself with the commands.
So, the strategy is this:
- Package the code to be run on the remote computer into something that can be run silently (i.e. it does not require any input from the user). This may mean writing a batch file or Perl script.
- Copy the package to the remote computer. Obviously, you'll need RW access to put the script on the remote computer. This can be done remotely (and even recursively) by mapping a drive to the destination and copying the files to the mapped drive.
- Use schtasks to create a new task on the remote computer, scheduled to run once at a time in the past (so it never fires on its own).
- Use schtasks to run the new task immediately.
- Use schtasks to delete the task (optional).
I use this script:
I call the update script like this:
>deploy.bat myservers.txt
The argument is the name of a text file containing the names of the servers I want to push the updated file to.
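The original deploy.bat isn't reproduced here, but as a rough sketch (not the author's script), something along these lines would implement the strategy above. The payload name update.bat, the remote temp path, and the task name RemotePush are all placeholders, and schtasks may warn that the start time is in the past, which is fine for this purpose (you may also need /u and /p if your current credentials don't have rights on the remote machines):
@echo off
rem Sketch only: push a payload to each server listed in the file passed as %1 and run it immediately.
rem Usage: deploy.bat myservers.txt   (one server name per line; update.bat is a hypothetical payload)
for /f %%S in (%1) do (
    echo Deploying to %%S
    rem Make sure the temp folder exists, then copy the payload over the admin share
    md \\%%S\c$\temp 2>nul
    copy /y update.bat \\%%S\c$\temp\update.bat
    rem Create a one-time task with a start time in the past so it never fires on its own
    schtasks /create /s %%S /tn RemotePush /tr "C:\temp\update.bat" /sc once /st 00:00 /ru SYSTEM /f
    rem Kick it off now
    schtasks /run /s %%S /tn RemotePush
)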
If you want to run the batch file once and then remove all traces, use the following script. The only catch is that you have to wait for your script to finish before you can delete the scheduled task and the script itself. This script lets you decide when it's safe to go ahead with the deletion by querying the scheduled tasks list on the remote computer(s): the task will show 'Running' in the status column while the script is running and 'Ready' when it has finished.
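That cleanup script isn't shown either; its check-and-remove portion boils down to something like this (SERVERNAME and the task/payload names are the same placeholders used in the sketch above):
rem Check the Status column: 'Running' while the script is still going, 'Ready' once it has finished
schtasks /query /s SERVERNAME /tn RemotePush
rem When it's finished, remove the task and the payload
schtasks /delete /s SERVERNAME /tn RemotePush /f
del \\SERVERNAME\c$\temp\update.bat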
That's about it. Happy hacking!
Tuesday, August 7, 2012
Creating Properties for sysName, sysDescr, and sysObjectID
UPDATE: I've combined this tool with another tool I built that lets administrators add, delete, and rediscover devices without logging into the console. The two are combined simply because they both need to run right after discovery. It has three parts: a widget, a JavaScript file, and a batch file that runs every night. First, the batch file, which is an expansion of the properties creator. In addition to the normal task of adding properties for new devices, it also adds a pair of properties that, when rendered on the NPC device details page, present the administrator with a rediscover button and a delete button. By default, a password is required; the password is set in the external JavaScript file (below) on lines 2 & 16 for rediscovery and deletion, respectively:
Next is the JavaScript file, which must be in the custom virtual directory (with alias 'custom'):
Lastly, the widget. Out of the box, the widget only adds new devices.
If you want to add a delete button, remove the text 'style="display:none;"' from the widget source.
Occasionally, I find it necessary to build auto-enable rules in NetVoyant based on SNMP properties like sysName, sysDescr, and sysObjectID. Unfortunately, these are not all available for every dataset as SNMP parameters that could be used in rules. However, custom properties are always available in auto-enable rules (not discovery rules since property comparison happens after initial discovery). What this means is that rules can be built to automatically disable poll instances according to model, OS version, or software version (as obtained via the sysDescr).
In order for this to work however, each device needs custom properties. Setting these manually is a pain and would take forever for anything other than a lab system. To combat this, I've built this script (thanks Wade for the query help) that creates custom properties for every SNMP capable device containing the sysDescr, sysObjectID, sysName, sysContact, and sysLocation.
@echo off
rem Creates custom properties (sysDescr, sysObjectID, sysName, sysContact, sysLocation)
rem for every SNMP-capable device. Run on each poller, ideally nightly right after discovery.
set sqlcommand=mysql nms2 --skip-column-names -e "select count(*) from devices where snmp_capable=2 and dev_properties
set propertieslist=(select property_set_id from properties where property_name=
set logfile=D:\updateproperties.log
echo %date% - %time% - Script Started >> %logfile%
rem Log how many devices already have, and are still missing, each property before the update
for %%A in (sysDescr,sysObjectID,sysName,sysContact,sysLocation) do (
echo Devices with %%A property: >> %logfile%
%sqlcommand% in %propertieslist%'%%A')" >> %logfile%
echo Devices without %%A property: >> %logfile%
%sqlcommand% not in %propertieslist%'%%A')" >> %logfile%
)
echo Running query >> %logfile%
rem Insert or refresh one property row per device per attribute, copying the value from the devices table
set inspropsql=mysql nms2 -e "replace into properties (select dev_properties,
set inspropsql2=0, 0 from devices where snmp_capable=2)"
%inspropsql% 'sysDescr', 18, sys_descr, %inspropsql2%
%inspropsql% 'sysObjectID', 18, sys_objectid, %inspropsql2%
%inspropsql% 'sysName', 18, sys_name, %inspropsql2%
%inspropsql% 'sysContact', 18, sys_contact, %inspropsql2%
%inspropsql% 'sysLocation', 18, sys_location, %inspropsql2%
rem Log the counts again so the before/after difference is visible in the log
for %%A in (sysDescr,sysObjectID,sysName,sysContact,sysLocation) do (
echo Devices with %%A property: >> %logfile%
%sqlcommand% in %propertieslist%'%%A')" >> %logfile%
echo Devices without %%A property: >> %logfile%
%sqlcommand% not in %propertieslist%'%%A')" >> %logfile%
)
echo %date% - %time% - Script Ended ----------------------------------->> %logfile%
This script has to be run on the poller(s). A new device won't get the properties until the script runs again, so it's probably best to set it up as a scheduled task that runs every night right after discovery.
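For example, a nightly run could be scheduled with something like the following; the script path and the 2:00 AM start time are placeholders, so pick a time that lands just after your discovery window:
schtasks /create /tn "UpdateProperties" /tr "D:\scripts\updateproperties.bat" /sc daily /st 02:00 /ru SYSTEM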
Once you've got the properties in place, you can create auto-enable rules that reference them with a $ prefix. For example, if I wanted to disable ifstats monitoring on all devices whose sysLocation contains 'France', I would add the following to the Property Rule in the add auto-enable rule dialog box:
$sysLocation like '%France%'
Save the rule, apply it to the dataset, then rediscover the device. Voilà!
Monday, August 6, 2012
Data source rights and new data sources
When you add a new data source in NPC, by default only the nqadmin and nquser accounts get access to it: nqadmin is made an administrator and nquser is made a user. If you have 200 other users (including your normal user account), you won't be given administrative rights to the new data source even if you're an admin on all the other data sources. To fix this the proper way, you have to edit every user and grant them access to the new data source. While this can be done in bulk, what happens most often is that the admin doesn't take the time to distinguish between administrators and users and either grants everyone user access, grants everyone admin access, or doesn't grant anyone access at all. Nobody may notice until someone can't do something they think they're supposed to be able to do.
It actually used to be better than this. NPC had the concept of a permission set: a group of data source access rights that could be applied to users. I could create one permission set called 'administrators' with admin access to everything and another called 'users' with user access to everything, then assign the admin permission set to all the admins and the user permission set to all the users. If I added a new data source, all I would have to do is update that one permission set for the new data source and the change would apply to everyone holding it. However, for reasons as yet unexplained, the guys building NPC decided that was too efficient and went with per-user definitions. I guess since they already had a role component they didn't want to build role-based permissions as well. Why they didn't just roll permission sets into the roles is beyond me (I can understand technically why they didn't: it's more versatile this way. But really, who knows enough about NPC to really use it that way?).
Anyway, if you find yourself in this situation and don't want to fix it manually, you can always do it in the database. Standard disclaimer: don't do this, it will break your stuff, I won't help you fix it, back up your stuff, you're on your own, don't tell support I told you to do this when you call them to have them help you fix it, etc., etc., etc.
Run the following query to turn all NPC admins into admins on all the data sources:
replace into data_source_rights (select a.UserID, b.SourceID, 1 as UserLevel from user_definitions a, data_sources2 b where a.userlevel=1 and sourceid<>0) ;
Run the following query to turn all NPC users into users on all the data sources:
replace into data_source_rights (select a.UserID, b.SourceID, 4 as UserLevel from user_definitions a, data_sources2 b where a.userlevel=4 and sourceid<>0) ;
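If you prefer to run these from a command prompt (the same way the other scripts on this blog call MySQL), the invocations would look roughly like this. Note that the database name netqosportal is my assumption for the NPC database; verify it against your install before running anything:
mysql netqosportal -e "replace into data_source_rights (select a.UserID, b.SourceID, 1 as UserLevel from user_definitions a, data_sources2 b where a.userlevel=1 and sourceid<>0)"
mysql netqosportal -e "replace into data_source_rights (select a.UserID, b.SourceID, 4 as UserLevel from user_definitions a, data_sources2 b where a.userlevel=4 and sourceid<>0)"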
Monday, July 23, 2012
Migrating My Documents to Google Drive or Dropbox
Well, the new buzzword is 'Cloud' and it seems everyone is getting on the bandwagon and offering services through the 'Cloud'. Most of these services either aren't actually based on cloud technology at all (they're the same web services that were always available, now wrapped in new marketing hype), or they are based on cloud technology and have been for a long time. Either way, to the end user there isn't much difference, except that companies are clawing their way over each other to get your business into their 'cloud' instead of their competitors' clouds. End result: we get cool new products.
One of the relatively new services getting a lot of buzz is Google Drive, which competes with Dropbox, Skydrive, Cubby, ownCloud, etc. The purpose of this blog post is not to debate the benefits of one service over the other. I recently migrated my 'My Documents' folder to my Google Drive in order to be able to access my documents everywhere, have a backup in case my PC exploded, and all the other reasons these services exist. This blog post will explain how I did it and what to look out for. I will use Google Drive as the example, but any other service should be interchangeable with it.
The first thing to do is obviously get an account with Google Drive. If you already have a Google account, you can use that. If not, get a Gmail account, then go to Google Drive to set up your drive.
Second: Install the Google Drive desktop application. This little app creates a new folder under your profile folder called (surprisingly) 'Google Drive'. Once that folder is created, anything you had in your Google Drive in the cloud will get synchronized to this folder on your desktop. The same works in the other direction. Any files and folders placed into your PC's Google Drive folder will get synchronized up to your drive on the internet.
The next thing you need to do is determine whether your documents will fit within the space allocated on your drive. Since Google hands out 5GB for free, you should be able to synchronize your documents to the cloud if your 'My Documents' folder is less than 5GB in size. To find out, go to your My Documents folder, select everything (Ctrl+A), and open the properties dialog (Alt+Enter).
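If you'd rather check from a command prompt than the properties dialog, a quick (if crude) alternative is to let dir total it up; the 'Total Files Listed' summary at the bottom of the output shows the combined size in bytes:
dir /s "%USERPROFILE%\Documents"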
Ideally, it would be nice to put most of the files in your profile up there. Luckily, Windows makes it easy to change the location of most of the folders under your profile. For example, you could have your 'My Documents' folder actually stored on a secondary hard drive or a flash drive. If you're planning on storing more than just your documents on your Google Drive, first go to your PC's Google Drive folder and create a folder for each profile sub-folder you want to include. Like this:
- C:\Users\sweenig\Google Drive\Documents
- C:\Users\sweenig\Google Drive\Desktop
- C:\Users\sweenig\Google Drive\Downloads
- C:\Users\sweenig\Google Drive\Favorites
- C:\Users\sweenig\Google Drive\Links
- C:\Users\sweenig\Google Drive\Music
- C:\Users\sweenig\Google Drive\Pictures
- C:\Users\sweenig\Google Drive\Videos
You may not want to include all these folders. Pick and choose the ones that you want and that can fit (don't forget to consider that some of these folders may increase in size significantly).
Now that you've got new locations for all your profile sub-folders, go to the existing folders and change their location. Open up your profile folder (Start >> Run >> %USERPROFILE%). Right click the folder you want to move to your Google Drive, open the properties dialog box, and go to the Location tab. Click the 'Move...' button and browse to the new folder under the Google Drive folder that corresponds to this folder. Hit OK and Windows will ask you if you want to move the old files to the new location. Say yes. This next part may take a while depending on the size of the folder. This is not copying your files to the Google Drive on the internet; it's copying the files from the old location on your computer to the new location on your computer (which happens to be synchronized with the internet). As the move proceeds, you should see some activity on the Google Drive app icon: it's synchronizing your files from the local Google Drive\Documents folder up to the internet. Pause this as necessary if it slows down your internet too much.
Repeat this process for the remaining folders (if you have space). When it's all over (which may take a while if you have a lot of files) you should be able to access all your folders the same way you did before. Changing the 'location' of the profile sub-folders instructed Windows to use the new location but make it look like the old location.
If you need help visualizing the size of the various folders in your profile, use a tool like Windirstat. Point it at your profile directory and it will show you how big each folder is using a pretty cool graphic.
Thursday, July 12, 2012
GXMLG 2.0
I finally broke down and rebuilt my GXMLG tool. Given the complexity of the task, I originally wrote the applet using MS Access. However, to make things easier to distribute and easier to troubleshoot and use, version 2.0 uses perl and is run from the command line. To illustrate the difference, the old utility was 6.5MB. The new script is 11KB. That's what a graphical interface gives you.
You'll have to install Perl (I use Strawberry Perl on Windows boxes) and run the script like this:
>perl gxmlg.pl
Running it without any arguments shows you the help file:
This script outputs any combination of configuration files for the NetQoS suite of products.
You must install strawberry perl and Text::CSV, Text::CSV_XS, and Getopt::Long.
To install CPAN modules, run cpan [module name] from the command prompt. Example: cpan Text::CSV
This script was created by Stuart Weenig (C)2012. For more information visit http://stuart.weenig.com.
This script may be redistributed as long as all the files are kept in their original state.
Current Version: 2.0

Usage: PERL gxmlg.pl [-outnpcxml] [-outsacsv] [-outnvcsv] [-outucmcsv] [-infile NAMEOFINFILE]
       [-npcinspath INSERTPATH] [-npcxmlname NPCXMLFILE] [-sacsvname SACSVFILE]
       [-nvcsvname NVCSVFILE] [-ucmcsvname UCMCSVFILE]

-infile NAMEOFINFILE     Specifies the name of the Sites file to be imported. (If omitted: sites.csv)
-outnpcxml               Output an NPC XML groups definition file.
-npcxmlname NPCXMLFILE   Name of the NPC file to be output. (If omitted: NPCGroups.xml)
-npcinspath INSERTPATH   The path to the group that will serve as the insertion point. Required if using the -outnpcxml option.
-outsacsv                Output a SA networks CSV file.
-sacsvname SACSVFILE     Name of the SA file to be output. (If omitted: SANetworks.csv)
-outnvcsv                Output a NV discovery scopes file.
-nvcsvname NVCSVFILE     Name of the NV file to be output. (If omitted: NVScopes.csv)
-outucmcsv               Output a UCMonitor locations file.
-ucmcsvname UCMCSVFILE   Name of the UCMonitor locations file to be output. (If omitted: UCMLocations.csv)

If the server you will be running this on doesn't have access to the internet, you won't be able to install the modules automatically (since they have to be downloaded from the internet). The solution is to download the tarballs from www.cpan.org and extract them using winzip or winrar or 7z. You might need to extract several times until you get just the folder with the files in them. Then copy them to your server. Open a command prompt and cd to the directory containing Makefile.pl (you'll have to do this for each module). Then execute the following:
perl Makefile.pl && dmake && dmake test && dmake install
For the text modules, do Text::CSV first, then Text::CSV_XS.
The script itself is pretty simple. Specify an input file. This input file is the same sites/networks file referenced in my previous blog post. Here's a sample sites list to get you started. Then just decide which output files you want. If you specify the npc output file, you'll also need to specify the insertion point (more information about the insertion point).
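As an illustration (not from the original post), a run that produces the NPC groups XML and the SuperAgent networks CSV from the sample sites file might look like this; the insertion-point path here is made up, so substitute a group path that actually exists in your NPC tree:
>perl gxmlg.pl -infile sites.csv -outnpcxml -npcinspath "/All Groups/Sites" -outsacsv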
If you don't have internet access on the box, you won't be able to install the Text::CSV modules automatically (since they come from the internet). The solution is to download the Text::CSV and Text::CSV_XS tarballs and extract them using WinZip, WinRAR, or 7-Zip. You might need to extract several times until you get just the folder with the files in it. Then copy them to the NVMC. Open a command prompt and cd to the directory containing Makefile.PL (you'll have to do this for each one). Then execute the following:
perl Makefile.PL && dmake && dmake test && dmake install
Do Text::CSV first, then Text::CSV_XS.
Monday, July 9, 2012
AT&T vs. Comcast
I've had AT&T dry loop DSL for several years (dry loop is the term for DSL without a phone line). Anyway, it's been giving me problems lately, and since I work from home, my work is affected any time my internet performance degrades. Not to mention that if my internet is slow, Christy can't watch Netflix while I'm working. So, I've recently decided to switch to cable internet from Comcast (Xfinity Performance). With my AT&T internet, I'm supposed to get 6Mbps download and 700Kbps upload. Since it's DSL, it's supposed to be a guaranteed rate. However, this is the speed I got today just before the Comcast guy came to set up my new internet.
This is actually better than normal. My normal download speed is just under 3Mbps with uploads somewhere around 150Kbps. Needless to say, this is why I'm switching to Comcast. Their plan is $20 cheaper and boasts speeds up to 20Mbps. Cable internet isn't a guaranteed rate, but even if they live up to half the promise and the average rate is 10Mbps, that will be a huge improvement over my old internet connection.
Here are the results after the upgrade to Comcast internet. Needless to say, I'm happier than I was.
CA Network Flow Analyzer
Back in May, I gave a presentation for the CA Infrastructure Management Global User Community. At the time, it was recorded using Microsoft's LiveMeeting recording feature. For some reason, PowerPoint animations don't get recorded correctly and the recording of my presentation wasn't that great. So, I've re-recorded my presentation and uploaded it to YouTube. It also turns out I can now upload videos longer than the default 15 minute limit. Cool!
Anyway, here's the video:
Sunday, July 1, 2012
Manually Configuring Applications in SuperAgent
Over the years, I've configured thousands of SuperAgent applications. I've refined the process, which includes a YouTube video (shown below) and an application flow diagram (detailed below) that I give to application owners to fill out. Usually they give me back their own version of the application infrastructure, which contains both more and less than what SuperAgent needs. That usually results in a meeting where I take their data, plug it into my AFD, and solicit the missing information. So, what I've put in this blog post should be everything needed to get started configuring applications in SuperAgent. Obviously, the SuperAgent administrator will need to know how to properly administer SuperAgent, but this primer is meant more for the application owners than the SuperAgent administrator.
First, the video. This video is based on a bounce diagram presentation that I've given countless times and explains how SuperAgent works on a fundamental level. This is important information to present to the application owners so they know why we're asking for the application flow information.
Besides the video, I also present the Application Flow Diagram (AFD). This is a Visio diagram that shows the information needed to configure an application in SuperAgent. I've also written a document to explain the application detailed in the example and how to fill it all out. Here it is:
Introduction
The purpose of this document is to describe a low complexity application and detail the parameters that must be obtained about that application in order to correctly configure the application for monitoring within CA Application Delivery Analysis (SuperAgent). This document also attempts to identify the types of people responsible for obtaining/providing that information about the application and infrastructure to the NetQoS administrator. Given the diversity of modern organizations, the recommended roles may not have the information required.
Tuesday, June 26, 2012
Using Profile Pics Elsewhere
UPDATE: Added section on Google+ profile pics
Just discovered a couple useful tips for embedding your twitter or Facebook profile pictures elsewhere. Apparently, you can use the APIs to pull the images out.
For Facebook: http://graph.facebook.com/<username>/picture (where <username> is the username of the Facebook user whose profile picture you want to display) will display that user's current profile picture. That URL doesn't even have to be updated when the user changes his/her profile picture.
For example:
For twitter: http://api.twitter.com/1/users/profile_image/<username> (where <username> is the username of the twitter user whose profile picture you want to display) will display that user's current profile picture.
For example:
Turns out Google+ also has a way of doing it: https://s2.googleusercontent.com/s2/photos/profile/{id} (where {id} is the big long ID number that G+ assigns to each user). Don't forget to add some height and width attributes to your img tag as the profile pic from G+ can be fairly large.
For example:
I know, my profile pictures aren't that interesting, but that's only because we recently had professional photos done and it made sense to use one of them as both my Facebook and twitter profile pictures (notice they are actually different, given the size restrictions on each service). As far as I know, LinkedIn doesn't have anything quite as easy as this. You have to make two calls to get the data: one using your own LinkedIn credentials to make sure you have permission to view the photo, then another to actually call the photo. Kinda sucks if you ask me. Maybe they'll wise up.
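Putting the three patterns side by side, a small HTML sketch would look like the following; the usernames and the G+ ID are placeholders, not real accounts:
<img src="http://graph.facebook.com/someuser/picture" alt="Facebook profile pic">
<img src="http://api.twitter.com/1/users/profile_image/someuser" alt="Twitter profile pic">
<img src="https://s2.googleusercontent.com/s2/photos/profile/1234567890" width="80" height="80" alt="Google+ profile pic">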
Tuesday, June 19, 2012
NetVoyant Rollups: Sums, Maximum, Percentiles, etc.
For most situations out there, the default rollup is perfectly fine. What I mean is that when you add an expression to a dataset, the default rollup (an average) is exactly what someone would be looking for. If I show top interfaces for an hour, I'd like to sort those interfaces by the highest average utilization, which means I want NV to take an average of the utilization data points during that hour.
However, in some situations it may be more useful to calculate a different rollup. For example, I could have NV calculate both the average of all the data points collected in the last hour and the standard deviation, so I know how consistent my utilization is. A higher standard deviation means at least some points are far from the average. I could also have NV calculate the maximum or a percentile of all the points from the last hour. By adding max and percentile to a view, I can see much more clearly what is happening on an interface.
One other situation is volume. If you're polling some OID for some kind of volume (KB or MB), the first thing you should do in your expression is put it in bytes. This allows you to take advantage of the auto scaling feature in the views. This means that instead of showing numbers like 12000000 along the dependent axis, NV can display something like 12. You'd then put {Scale} in the axis label so that KB, MB, GB, etc. is displayed indicating the unit.
The next thing you'd do for volume is change the rollup. Obviously if you're tracking volume, having an average of all the points collected in the last hour is useless. What you really want is a sum of the volume in the last hour. To do this, remove all rollup types.
Did I mention how to do that? I guess I didn't. Edit the expression and click the Advanced button, then uncheck all the checkboxes so that the rollup is a sum instead of an average.
Another trick about rates:
If you're polling an OID and want to convert it to rate, create a new expression and divide the expression by the variable 'duration'. Duration is always equal to the number of seconds in a poll cycle. Technically it's the number of seconds since the last poll, so you do have to be a little careful about that.
Again, if your OID is in some unit like KB, convert it to bits (KB*1024*8). Then when you divide by duration, you get bits per second. By setting the view auto-scale to rate, NV will automatically convert it to the needed value (Kbps, Mbps, Gbps, etc.).
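Putting those two tips together, a hypothetical expression for an OID polled in KB (myVolumeOid is a made-up name) would be (myVolumeOid * 1024 * 8) / duration, which yields bits per second that the view's rate auto-scaling can then render as Kbps, Mbps, or Gbps.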
Thursday, June 7, 2012
The most awesome new feature of Office 2010
Alright, I found my new favorite feature of Office 2010. I've had Office 2010 for a while, but everybody knows we all only use the features we used to use. Well, I have a new feature that I've added to my quick launch bar: Insert >> Screenshot >> Screenshot Clipping. The whole insert-screenshot feature is pretty cool and is available in Word, PowerPoint, Excel, and Outlook (it's probably everywhere, but I haven't found a good use for it in all of them).
When you first hit the Screenshot button in the ribbon, you get a drop down containing thumbnails of all the windows you currently have open and not minimized. Clicking one of these thumbnails inserts a screenshot of that window at the cursor. While this is great by itself, perhaps the more useful feature (and the one I've pinned to my quick launch bar) is Screenshot Clipping. When you click it, the current window is minimized and the whole screen goes grey. The mouse turns into a + cursor. Draw a box around any portion of the screen, and as soon as you let go of the mouse button, a picture of that portion of the screen is inserted at the cursor! It's completely awesome.
The reason it's completely awesome is because of the ease with which it accomplishes a task which would require multiple keystrokes/clicks (doing it the previously easiest way) or even a third program (if you did it the ancient way). The previously easiest way was to activate the window of which you wanted a screenshot, pressing Alt+PrtScn, pasting back into the Word doc or email, and using Word's image cropping tool to crop out the parts not needed. This was pretty good and I was always surprised at the number of people that did it the hard way.
The hard way involved using Windows 7's snipping tool. Launch it and (depending on the mode) you can capture the full screen, a window, a rectangle, or a freeform shape. Once you do, the picture shows up in the Snipping Tool. If you've got it set up, it also copies the capture to the clipboard so you can paste it wherever you want. While this works and gives flexibility to the whole process, I always found it tedious.
Anyway, I was so excited about this feature, I had to put a blog post about it. Now if only Google would put something like that into the blogger post editing toolbar.
Friday, June 1, 2012
NetVoyant Duplicates Finder
UPDATE: This method has been replaced by the ODBC method.
I've been working for a while now on a good way to find and remove duplicates from NetVoyant. Luckily, there is a web service that can delete devices (more on NetQoS web services). All you need is the device IP address and the poller (to build the web service URL). I played around for a while trying to build something in a windows batch file and couldn't get it to do what I wanted to do. So, I reverted to Perl (which I probably should have done from the beginning). Anyway, the result is a script that can be run regularly on the NetVoyant master console. The output is a CSV file and an html file. The CSV file contains the output from the brains of the duplicate finder script, namely: a list of every device that exists more than once in the NV system, along with the device properties including the poller. The CSV file is output to the script directory. The script can be configured to output the html file wherever you want.
After that, the script uses Perl to wrap the data in the CSV into an html widget. The widget shows the same data as the CSV as well as a link on every line to delete the device. As long as the NV pollers resolve by name, the link should work to delete the device and its corresponding scope. If you only want the CSV, edit the batch file and comment out the call to the Perl script (i.e. put 'rem' in front of the line that starts with the word 'perl').
If you do want the HTML, you'll need to install Strawberry Perl and download a couple of modules. Installing Strawberry Perl on the NetQoS boxes isn't a new thing. Most of the developers and support guys have Perl installed on their test boxes and I've had it installed on many customers' boxes. The install doesn't require a reboot and you can take all the defaults. After doing the install, open a command prompt and type the following:
D:\>cpan Text::CSV
D:\>cpan Text::CSV_XS
Perl will download and install the necessary modules and return you to the command prompt when it's done.
After that, all you need to do is download the zip and extract the files to somewhere on your NVMC. Setup a scheduled task to run the batch file every so often. The web page doesn't update unless the script runs (it doesn't refresh the list of duplicate devices simply by refreshing the page).
To get the script to output the html file to somewhere other than the script directory, go to the makehtml.pl file and modify the line that starts with 'my $outputfile = ' and update the output file path and name. For example:
my $outputfile = 'D:\\NetVoyant\\Portal\\WebSite\\dupslist.html'
Perl requires a double backslash since a single backslash is the escape character.
That's it. You're done. You can use the browser view to put the resulting html file on an NPC page if you've designated a destination that is served up by the NVMC's IIS web service.
Enjoy! If you have improvements, please let me know so I can update the source.
P.S. If you don't have internet access on the box, you won't be able to install the Text::CSV modules to install (since they come from the internet). The solution is to download the Text::CSV and Text::CSV_XS tarballs and extract them using winzip or winrar or 7z. You might need to extract several times until you get just the folder with the files in them. Then copy them to the NVMC. Open a command prompt and cd to the directory containing Makefile.pl (you'll have to do this for each one). Then execute the following:
perl Makefile.PL && dmake && dmake test && dmake install
Do Text::CSV first, then Text::CSV_XS.
Thursday, May 31, 2012
Windows 7 Desktop Gadget for displaying NPC Views
At some point last year, I built a Windows 7 desktop gadget for displaying NPC views on the desktop. It's a fairly simple gadget. You set the URL you want to display (which could actually be any URL, but use the 'Generate URL' view menu option in NPC to get an NPC view URL) and the size. The gadget will auto refresh every 60 seconds. The settings dialog also allows simply scaling the size of the gadget. So, if the aspect ratio is fine, but it needs to be bigger, just put a bigger number in that field. It's just a scalar and the height and width will be multiplied by that number.
You can download the gadget here.
Enjoy!
Thursday, May 24, 2012
NPC and NetVoyant Web Services Gadgets
In combination with my method of inserting custom content into NPC, I've created a couple of gadgets that can be added to NPC. These gadgets give the viewer access to the NPC and NetVoyant web services that can be used to import and export group definitions, add devices to NetVoyant for polling, delete devices from NetVoyant, and delete discovery scopes from NetVoyant.
I hope you enjoy them. If you have any improvements, please let me know so I can update the source.
Installation Instructions
- You can download them here [link removed, see the Tools page].
- Extract the zip contents to the custom content directory described here.
- Edit the html files to update the URL.
- Look for the <form> tag and update the action string to point to your NPC or NV server (depending on the html file).
- Put three browser views on a page that only NPC administrators can access.
- Edit the three browser views to point to the following URLs (you should probably update the view names as well and I like to hide the view border)
- NPCGroupExport.html
- NPCGroupImport.html
- NVDeviceMgmt.html [this widget has been moved here]
NPCGroupExport.html
This gadget exports XML for any group in NPC. This XML describes the group structure and defines any rules for the groups. This can be used in conjunction with NPCGroupImport.html to modify or populate NPC groups programmatically.
NPCGroupImport.html
This gadget imports XML into NPC to redefine groups. This can be used to redefine groups, rules, or membership.
NVDeviceMgmt.html [this widget has been moved here]
This gadget facilitates adding and removing devices from NetVoyant.
Monday, May 21, 2012
Analyzing TCP Application Performance
Well, I've been talking about doing it for a long time and I finally got an excuse. One of my customers wanted me to prepare my SuperAgent bounce diagram presentation for recording. Instead, I opted to make a YouTube video. I've given this presentation countless times, and now that I've got a video, I can spend more time answering advanced questions than going over fundamentals. Without further ado, here it is.
I've got plans for more videos covering the things I cover most often. My next project is the video of the presentation I did for the CAIM Community's monthly webcast.