Wednesday, November 13, 2013
Counting Down to a Date with JavaScript
I recently added a couple of countdown timers to the right side of my blog for some major events coming up. If you're wondering how I did it, I did what any self-respecting coder would do: I checked Google.
Unfortunately I didn't come up with anything that was very easy to use, so I took what I found and modified it. The result is here on GitHub.
The comments in the code should make it self-explanatory. The function can be called any number of times for any number of countdown timers on the page; just set a different var name and target.
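The actual code is on GitHub; since the embedded listing didn't carry over into this export, here's a minimal sketch of the same idea. The function name, element id, and target date are placeholders rather than the names used in the real script:
function countdown(elementId, target) {
    var el = document.getElementById(elementId);
    function tick() {
        var diff = target - new Date();                        // milliseconds remaining
        if (diff <= 0) { el.innerHTML = "It's here!"; return; }
        var days  = Math.floor(diff / 86400000);
        var hours = Math.floor((diff % 86400000) / 3600000);
        var mins  = Math.floor((diff % 3600000) / 60000);
        var secs  = Math.floor((diff % 60000) / 1000);
        el.innerHTML = days + "d " + hours + "h " + mins + "m " + secs + "s";
        setTimeout(tick, 1000);                                // update once a second
    }
    tick();
}
// One call per timer, each with its own element and target date:
countdown("xmas", new Date("December 25, 2013 00:00:00"));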
Monday, November 11, 2013
Preparing Windows 2008 for NetQoS Installations
UPDATE: I've added the ability to disable IPv6 and configure SNMP with default settings (public community string and access from any host). I've also corrected the spelling in two places.
UPDATE: There have been several changes lately that I haven't published out here on the blog. Suffice it to say that you can now re-run the script and choose which parts to run and which parts not to run. You are prompted with a yes/no dialog before each section of the script runs. Also, the .Net 4.0 uninstaller now runs as a part of the script. It runs right before Windows Updates and will reboot if .Net 4.0 is uninstalled. You'll need to rerun the script if you want it to run Windows Updates for you.
I've got some ambitious plans for the next version. I want the script to allow you to run it once manually and it will create a response file. That response file can then be used to repeat the script on any number of servers. So, if you want to only run certain parts, the first time you manually run it will create an output file with that info. Then you can use that output file as an input for future runs. This should make preparing a bunch of w2k8 servers easier.
There are a bunch of things that have to be done to a Windows 2008 server before the NetQoS software can be installed. Being the efficient (aka lazy) engineer that I am, I decided to script the whole thing. Here is each piece of my script; download the whole thing here. If you want to run the script without copying and pasting each piece, you must run 'Set-ExecutionPolicy RemoteSigned -Force' first; otherwise PowerShell doesn't allow the script to run. I usually copy and paste each part so I see each part as it happens.
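The individual script sections were embedded in the original post and didn't survive the export. As a stand-in, here's a hedged sketch of what one section might look like, following the prompt-per-section pattern described above and the SNMP defaults mentioned in the update (community string 'public', read-only, queries accepted from any host). The feature and registry names are standard Windows ones, but none of this is copied from the original script:
$answer = Read-Host "Configure SNMP with default settings? (y/n)"
if ($answer -eq "y") {
    Import-Module ServerManager
    # Install the SNMP service (feature name as on Windows 2008 R2)
    Add-WindowsFeature SNMP-Services -IncludeAllSubFeature | Out-Null
    $params = "HKLM:\SYSTEM\CurrentControlSet\Services\SNMP\Parameters"
    # 4 = read-only rights for the 'public' community string
    New-ItemProperty -Path "$params\ValidCommunities" -Name "public" -Value 4 -PropertyType DWord -Force | Out-Null
    # Removing the default 'localhost' entry under PermittedManagers allows queries from any host
    Remove-ItemProperty -Path "$params\PermittedManagers" -Name "1" -ErrorAction SilentlyContinue
    Restart-Service SNMP
}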
Wednesday, September 4, 2013
PiTunes with USB Sound
Continuing my ever-improving efforts on my PiTunes system, I've made some modifications. The most recent ones address two different problems: 1) the sound quality isn't great, and 2) WiFi.
First, the sound quality. We've noticed that sometimes the playback isn't great; the sound skips or pops. This is apparently an issue with the analog audio port on the Raspberry Pi. I'm not totally versed on it, but basically, it's not a regular audio out port. The RPi guys skimped by not putting in a dedicated digital-to-analog converter (to convert the digital signal from the chip to analog), so they used a workaround that isn't great. Since most people use the digital signal through the HDMI port, it isn't a problem. The workaround is to use a separate sound card. I opted for a USB sound card; they're pretty cheap and fairly ubiquitous. I got mine for $2.51 plus shipping. Not bad. I installed the USB card and rebooted the Pi. Then all I had to do was get the Pi to recognize the USB sound card as the default instead of the onboard analog out port. I wouldn't have figured this one out without the help of my good friend down under, SirLagz. He's got a lot of cool stuff about the RPi on his blog. Check it out! No seriously, you need to look at it.
So, to disable the onboard sound card, the /etc/modprobe.d/alsa-base.conf file needs to be edited. Mine now looks like this:
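The file listing didn't survive the export, so here's a hedged reconstruction of the usual Raspbian edit, which should have the same effect (the exact lines in my file may have differed):
# /etc/modprobe.d/alsa-base.conf
# Make the USB sound card the first (default) ALSA device...
options snd-usb-audio index=0
# ...and push the onboard Broadcom audio down the list (or blacklist it entirely)
options snd_bcm2835 index=1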
After a reboot, the USB device became the primary. Then I had to update my script so that it would change the volume on the proper output port. To find out what it had to be changed to, I ran alsamixer and looked at the name at the bottom of the slider. In my case, the name was 'Speaker'. So, here's what my script looks like now. I've tested it once, we'll see how it works tonight. So far, the sound quality is much better:
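The script itself was embedded in the original post and isn't reproduced here; the relevant change was pointing the volume command at the USB card's control. A hedged sketch of that one line (the card number and volume level are assumptions, the 'Speaker' control name comes from alsamixer as described above):
# Set the volume on the USB card (card 1) using the 'Speaker' control reported by alsamixer
amixer -c 1 set Speaker 75%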
Thursday, August 29, 2013
My Wallpaper Collection
I have amassed a fairly large collection of wallpapers over the years. At one point, I decided to start over since I was tired of all the wallpapers I had. My new wallpaper collection only uses high resolution photos from themes or other sources that look really good on high resolution monitors.
If you want to have a copy of my wallpaper collection, you can get it using BTSync (more info here), which will also allow you to get any new wallpapers I add to my collection (I usually add 5-8 per week). If you haven't already, install the app, then add a new sync folder. Choose where you want my wallpapers to be stored, then put the following in for the secret:
BFVR62FRM2AOZT3TLBISA2MLNZJ4XO5CA
This should get you started downloading. Whenever I add new wallpapers, you'll get them. If there are any you don't want, just delete them.
If you need help setting up your wallpaper to cycle through these images, go here. The only instruction I would add is in step 3, browse to the folder where the pictures are being downloaded.
Wednesday, August 21, 2013
Dropping Dropbox and Google Drive for BTSync & a Raspberry Pi
I thought this article deserved republication, so I decided to take the easy route with today's post and just link to Jack's blog post. In his post, Jack describes how he decided to drop his cloud file-syncing app in favor of something run in house. The best part about it is that it works just as well as Dropbox or Google Drive, but without putting your files on any device other than your own. The advantage of this, as I've posted before, is that you can sync an unlimited number and size of files.
I've done something similar to this using Western Digital's My Book Live Duo. Since the MBLD runs on Linux (Debian), it was pretty easy to load BTSync and fire it up. So, my NAS participates in the synchronization of my folders from my desktop to my laptop. Even if my desktop is offline (or in standby), I can still sync between my laptop and my NAS (over the internet too), and when my desktop comes online it'll sync up with the NAS and laptop.
Friday, August 16, 2013
Device level context switching
UPDATE: I've developed some code to make this a regular view that can be dragged onto the page without any configuration required. The following code will create the standalone view and add it to all four device context pages:
UPDATE: I've rewritten this widget to make it easier to implement. Now instead of having to specify the {Item.ItemID} variable in the browser view URL, the widget just grabs the information from the parent URL. This is also better because any additional arguments you had in the URL will continue through to the other context pages. Here's the updated code:
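The updated listing was embedded in the original post and didn't survive the export. Here's a hedged reconstruction of the approach described above (read the parent page's query string and pass every existing argument through); the variable names are mine, not necessarily the original's:
<html>
<script type="text/javascript">
// Grab the parent page's query string, drop the leading '?' and any existing pg= argument
var args = parent.location.search.replace(/^\?/, "").split("&");
var rest = "";
for (var i = 0; i < args.length; i++) {
    if (args[i] != "" && args[i].indexOf("pg=") != 0) { rest += "&" + args[i]; }
}
// Build one link per context page type; everything else in the URL carries through
var url1 = '<a target="_top" href="/npc/Default.aspx?pg=';
document.write(url1 + 'r' + rest + '">Router</a> ');
document.write(url1 + 'sw' + rest + '">Switch</a> ');
document.write(url1 + 'd' + rest + '">Device</a> ');
document.write(url1 + 's' + rest + '">Server</a> ');
</script>
<a target="_blank" href="http://stuart.weenig.com/2012/08/device-level-context-switching.html"><img src="/npc/images/DialogQuestion.gif" border=0></a>
</html>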
Now all you have to do is point to this widget in your custom content directory.
Enjoy!
You may not know it, but NPC classifies every device as either a router, switch, server, or device. The device category is for every type of device that isn't a router, switch, or server. This is too bad because NetVoyant actually has an extendable list of device classifications; you can make as many as you want. However, any additional classes will show up in NPC as 'devices' because NPC doesn't understand them. This is fine in most cases, but certain cases will cause problems.
For example, if I have an F5 load balancer and I'm monitoring the device in SuperAgent as well as NetVoyant, NPC has to choose whether to classify the device as a server (as SuperAgent reports it) or as a device (since NetVoyant either classifies it as 'other' or 'load balancers' if you've classified it). Turns out the NV classification is last on the list. If a device is monitored by RA or SA, NPC will classify it as a router or server, respectively, regardless of what classification exists in NV.
In this case, what I usually do is instruct customers how to switch from one context page to another after drilling in. For example, after I drill into the F5 and get to the server page, I would update the URL to read pg=d instead of pg=s. This loads the device page for the F5 instead of the server page. This can be handy since the device page may have specific F5 views on it that don't appear on the server page.
In order to make this easier, I built a simple html page that can be loaded into a browser view that will allow quick switching between all four context view types. Here's the page:
<html>
<script type="text/javascript">
// Pull the ItemID passed in the query string (e.g. ?ItemID=1234)
var str = location.search;
str = str.replace("?ItemID=", "");
// Build one link per context page type: router, switch, device, server
var url1 = '<a target="_top" href="/npc/Default.aspx?pg=';
document.write(url1 + 'r' + '&DeviceID=' + str + '">Router</a> ');
document.write(url1 + 'sw' + '&DeviceID=' + str + '">Switch</a> ');
document.write(url1 + 'd' + '&DeviceID=' + str + '">Device</a> ');
document.write(url1 + 's' + '&DeviceID=' + str + '">Server</a> ');
</script>
<a target="_blank" href="http://stuart.weenig.com/2012/08/device-level-context-switching.html"><img src="/npc/images/DialogQuestion.gif" border=0></a>
</html>
Link to this page from a browser view with a title like 'View this device as a...' and a URL like this:
/content/viewdeviceas.html?ItemID={Item.ItemID}
As long as this page is named 'viewdeviceas.html' and it's hosted under a virtual directory on NPC's IIS web server with an alias of 'content' it should load just fine. Give it a height of 33, turn off the border and hide the scroll bars. This makes an excellent small browser view that can go right at the top of the page, displayed right under the page tabs.
Thursday, August 15, 2013
Using Distributions to show Performance of Multiple Objects on a Time Scale
Many people building custom views in NV will no doubt build one of two types of views: Details Trend or Management TopN. Unfortunately, this bypasses some of the cooler views, like the distribution views. Consider this scenario: I have multiple third-party devices and the manufacturer has provided a special MIB to monitor CPU utilization (instead of doing the smart thing and publishing their CPU statistics into the hrProcessor or UC Davis MIB OIDs). So, I now have the opportunity to build a custom dataset to pull in the CPU utilization for these devices. (Side note: I should probably republish my instructions on how to build a custom dataset.)
After I build the dataset, I'll start building my views. Let's suppose that the vendor has only provided the CPU utilization as an average of all the CPUs on the device or that the device will only ever have one CPU. The end result is that there is only one poll instance per device for that dataset. This means that I'll only really build views on the device level and configure the views to drill down to the device page instead of the poll instance page. After building the appropriate trends on the device page, I'd go to an overview page and build a table or bar chart to show the devices with the highest CPU utilization. All of this is great and normal and is what most people do when building views for this kind of data.
The problem with stopping here is that there is no way to look at multiple devices over a period of time and see how the devices were performing within that timeframe. The reason for this is that a TopN table or bar chart will display the rollup (usually the average) of the metric within the timeframe. In the case of my custom dataset, I'd see the average CPU utilization over the last hour, last day, last week, etc. This is ok as long as I pick one of the standard timeframes. Notice what happens when you pick last 4 hours in NPC. A table or bar chart will only do last hour. That's because NV hasn't pre-calculated rollups on a 4-hour basis. So, it becomes important to show the performance of the metric over time showing the values within the timeframe, be it a standard rollup period or not.
That's where distribution views can help. While they don't necessarily show the value of each one of the poll instances analyzed, they do categorize the metric into groups. For example, I could build a distribution view to group the metrics like this: 0-25%, 25-50%, 50-75%, 75-95%, and over 95%. In this case, NPC would look at all the data during the timeframe (if last hour with 5-minute polling, it will look at 12 data points for each poll instance included in the context) and categorize each data point into one of the buckets I've defined. The end result is a trend plot over time showing how many devices are in which buckets for each point in time.
Users need to be instructed in the proper way to interpret the view. If the view is set up properly, the undesirable buckets will have more extreme colors (reds and oranges). When a user sees a time period in which a larger number of devices are in the undesirable buckets, they should understand that a large number of devices have experienced higher CPU utilization. If 10 devices' CPU utilization goes from 20% to 60%, the bars before the increase will show 10 devices in the 0-25% bucket while the bars after the increase will show 10 devices in the 50-75% bucket. NPC also calculates the percentage of total devices in each bucket. So, if half of my devices are in the 50-75% range, a mouseover will reveal 50% in that bucket.
This visualization can be equated to creating a pie chart for each poll cycle. If you look at one poll cycle for all the devices and created a pie chart with 5 slices, it would be easy to understand how many devices need attention. Imagine taking the crust off the pie, stretching it out flat and stacking it next to the pie crusts for the other poll cycles in the period.
One disadvantage to the distribution charts is that they lack drill down. So, while a distribution is good for a summary page, a table showing the rollups over the same timeframe will be helpful to identify which devices are experiencing the higher CPU utilization. This table would allow drill down to the device page where the individual trend plot could be analyzed individually. It could also be compared to the rest of the data being gathered by NV for the device.
Cycling DIVs on a web page
Just before my boys were born, I installed a Foscam IP Camera on the ceiling of their room. I have enjoyed being able to check on the boys as they're napping or sleeping in their cribs. The model I got has pan/tilt capabilities and a microphone built in. So, I can hear them breathing or crying and focus on one or the other. I can do all this without opening their door. Also, since it's mounted on the ceiling, they don't ever notice it. It's equipped with a bank of IR LEDs around the lens that turn on whenever the room is too dark, so I can even look in on them when their room is completely dark. I found a decent iPhone app that allows me to connect to the camera. I could even hook up some speakers to the camera and use a feature of the app to talk to the boys in their room. Not much different than a baby monitor.
Since then, a three-pack of outdoor cameras went on sale, so I went ahead and purchased them. I mounted them on my front porch and on the back corner of the house overlooking the back door and the side yard approaching the gate. The cameras have motion detection and automatic file uploading, so I get a picture every time the lawn guys approach the gate and any time a solicitor approaches the front door. I also built a simple HTML page displaying the feed from all four cameras on one page. Unfortunately, at the highest resolution (and why wouldn't I want the highest resolution?) the four feeds don't fit on a single page unless I scale the page to 75%. This is easy to do, and Chrome even remembers to scale that page down to 75% automatically. This has worked well, but I've always wanted a better way.
Yesterday, I finally got the tricky parts of what I really wanted to do worked out. The goal was to have a web page that would show all four feeds but only show one at a time. The page should cycle through each video feed and stay on it for a few seconds before moving on to the next feed. You wouldn't expect this to be too difficult and in the end, it really wasn't. This is the first version and the intent of this post is not to show the finished code for my page but to show how DIV elements on a web page can be cycled.
To start with, here is the html page with the DIV elements:
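The markup was embedded in the original post and didn't come through in this export, so here's a hedged sketch of the structure. The IDs are placeholders, the placeholder text stands in for each camera's embed code, and the hidden 'nextdiv' element is the index holder mentioned below:
<html>
<body>
<div id="container">
<div id="cam1" style="display:block;">camera 1 feed goes here</div>
<div id="cam2" style="display:none;">camera 2 feed goes here</div>
<div id="cam3" style="display:none;">camera 3 feed goes here</div>
<div id="cam4" style="display:none;">camera 4 feed goes here</div>
</div>
<div id="nextdiv" style="display:none;">0</div>
</body>
</html>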
In my situation, each DIV contains the code to display the video feed from a single camera. The first DIV starts out visible while the rest are hidden. This doesn't really matter since the first time the javascript runs it will reset everything anyway. Technically, all four DIVs could start out hidden or displayed.
Here is the JavaScript code itself. The comments should make it pretty self-explanatory. The very bottom starts the function running. At the end of the function, the setTimeout command instructs the browser to call the function again after 1,000 milliseconds (setTimeout takes milliseconds, so raise that value to linger longer on each feed). This could be modified to read from an input box so that the user can adjust the cycle speed. I guess I could look for some gauges out there to make it pretty. I had to get a little sloppy by storing the next div index in a hidden div. This was the easiest way that I, as a non-programmer, could avoid problems with global vs. local variables in the function. Theoretically, this could be used for any number of DIVs within the container DIV. There are some fun transition effects available in WebKit and through CSS3, but like I said, that's not what this post is about.
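Since the original listing was embedded and didn't survive the export, here's a hedged reconstruction of the function described above. It lives in a script block at the bottom of the page, and the element IDs match the sketch markup earlier rather than anything from the original:
function cycleDivs() {
    var divs = document.getElementById("container").getElementsByTagName("div"); // every DIV in the container
    var holder = document.getElementById("nextdiv");                             // hidden DIV storing the next index
    var next = parseInt(holder.innerHTML, 10) % divs.length;
    for (var i = 0; i < divs.length; i++) {
        divs[i].style.display = "none";                                           // hide them all...
    }
    divs[next].style.display = "block";                                           // ...then show just one
    holder.innerHTML = next + 1;                                                  // remember where to pick up next time
    setTimeout(cycleDivs, 1000);                                                  // run again in 1,000 ms; raise to linger longer
}
cycleDivs();                                                                      // the very bottom starts it running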
Thanks to SirLagz for helping me debug my code and check my syntax (it's amazing what a missing double quote will do)!
Tuesday, August 13, 2013
Ripping Movies to iTunes
I recently acquired an Apple TV. No, it wasn't a Christmas gift; it was the result of winning a $50 Apple Store gift card. I had originally thought it was an iTunes gift card, but apparently they're not the same thing and can't be used interchangeably. Since I had been thinking about getting an Apple TV for over a year now, I figured this was the best opportunity. Now that I have an Xbox, adding an Apple TV would eliminate the need to have a PC connected to my TV. Up 'til now, I've been using a PC connected to the TV for Windows Media Center (through which I get TV DVR functionality). That also afforded me a couple extra features: Hulu Desktop, Netflix (through WMC), streaming from the iTunes library on my office PC, and any movies I'd ripped to my PC. By adding an Apple TV to the mix, Netflix, iTunes streaming, and movies could be eliminated. By using the Xbox as a media center extender, the only thing I really needed the PC for was Hulu Desktop. Look for my next post to see how I fixed that problem.
So, at any rate, I didn't need the PC connected to the TV anymore, which was a relief. I had been worried about overheating the PC (when the cabinet was closed) and since I had moved my main router to the same UPS as that PC, the UPS had complained of overload. As a result, I shut down the PC and decided to use only the Apple TV and the XBOX for my entertainment needs.
All of that verbosity was to explain why I went through the process of learning how to rip DVDs to iTunes. I already have a fairly extensive library of music, audiobooks, podcasts, ebooks, and apps in my iTunes on my office PC. I decided that I would try to rip my DVD collection to iTunes so that I could just stream them from my office PC to my Apple TV. This would also make it possible to take some movies/TV shows with me on my iPhone/iPad.
So, now, how to do it:
The first thing you have to do is understand the legality of this process. This can only be done with DVDs you currently own. You cannot do this with rental DVDs since the rental agreement does not include a fair use clause. However, if you own the DVD, encoding it to a different format is the same as burning music you have bought from iTunes to a CD. It all falls under the classification of 'fair use'.
The next thing you need to do is get prepared by installing two pieces of software: Handbrake and DVD Decrypter. UPDATE: I've started using MakeMKV Beta instead of DVD Decrypter. It supports ripping Blu-ray discs as well as DVDs. See here for more information and another guide specifically tailored to Blu-ray discs. You should probably be able to do all of this using just Handbrake, but I've found that ripping the DVD to your hard drive allows for more efficient encoding using a queue.
Once you've got those two installed, open DVD Decrypter. You'll need to make a couple option changes in order to make this quicker and easier. First you need to have a folder on your hard drive where you'll store the raw DVD files. I use C:\RIP\. On the General tab, set this as the default destination folder and set the folder option to 'Semi Automatic'. This should allow DVD decrypter to automatically place each new DVD's files into their own folders. Because I've been going through my entire DVD collection encoding movies off and on for a few weeks, I also liked to tell DVD Decrypter to eject the DVD when finished. On the Device tab, check the box that says 'Eject Tray after...Read'. I also turn off the success noise. On the Sounds tab, uncheck the success sound.
Now pop in a DVD and click the big button in the bottom left corner of DVD Decrypter. That should get things going. You can now sit back for 15-20 minutes while the files are being ripped to your hard drive in raw format.
Once that is done, open Handbrake. You need to tell Handbrake to automatically output the files to the automatic input folder for iTunes. This makes it so that when encoding is finished, iTunes will automatically import the file. Open up Tools>>Options and on the General tab check the 'Automatically name output files' checkbox and set the default path to "C:\Users\YourUserName\Music\Automatically Add to iTunes". This may be slightly different on your system, but you should be able to find it. Also, change the Format box to only read {source}. You also might find it helpful to enable the 'Remove Underscores from Name' and 'Change case to Title Case' options.
Back in the main Handbrake window, click the Source drop down and select "Folder". Browse to the folder where DVD Decrypter output the raw DVD files. Handbrake will scan through the files and should select the longest title. (FYI, in DVD nomenclature, a title is essentially a single video on the DVD. You'll find that DVDs will usually contain one title for the main feature, one for each of the special features, and usually a couple small ones used as transition videos in the DVD menu.)
You'll only want to encode the main title (unless the DVD is a TV show in which case you'll want to use something like VLC media player to play the raw DVD files and determine which titles to encode). Since I'd like the best quality video, I choose the 'High Profile' in Handbrake. This sets all the settings up for the best possible quality. It also results in a large file, so if you're low on hard drive space, this might not be an option. I set this profile as the default so I don't have to pick it every time, and I also check the 'Large file size' option. Check that the name of the output file is how you want it, then click the 'Add to Queue' button.
At this point, you can pop in the next DVD and repeat the process for the next movie. I usually queue up 10-15 movies a day and let the encode run at night. Encoding is a CPU intensive process and will usually max out your processor making using your computer for other stuff fairly slow. Run it at night when the encode is the only thing going on.
Wednesday, August 7, 2013
Checking if a TCP Port is Open on a Remote Server using PowerShell
This tip is related to a previous post about using a utility called TCPing.exe to check to see if a remote port is open (i.e. the service is listening and accepting connections). Since Windows 2008 doesn't come with the telnet client installed and since some circumstances will mean that access to the internet isn't available (i.e. a server in a hardened data center), an alternative method is to use Powershell.
This can be wrapped into a script pretty easily, or if you just memorize it, you can impress whomever is looking over your shoulder.
So, memorize this bit (this will ping 192.168.0.1:80):
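The one-liner was embedded in the original post and isn't shown in this export; a snippet that matches the description (it prints True on success and throws a very visible error otherwise) would be:
# Attempt a TCP connection to 192.168.0.1 on port 80 and show whether it succeeded
(New-Object System.Net.Sockets.TcpClient("192.168.0.1", 80)).Connected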
If the output is "True", then the tcping was successful. If the connection is unsuccessful, it will be completely obvious in the output.
To wrap it into a script, create a text file like this (this will ping proxyserver.domain.com:8080 by default; pass the server name and port to override the defaults):
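The script body was likewise embedded in the original post; a hedged sketch that behaves as described (default host and port, overridable by arguments) is:
param(
    [string]$server = "proxyserver.domain.com",   # default host to test
    [int]$port = 8080                             # default port to test
)
# Attempt the TCP connection and report True/False (a failure throws a visible error)
(New-Object System.Net.Sockets.TcpClient($server, $port)).Connected
Save it as something like tcping.ps1 and run '.\tcping.ps1 someserver 443' to test a different host and port.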
Thursday, July 25, 2013
Google's new Chromecast dongle sells out on Play Store
UPDATE: Several reviews have come out about the Chromecast. Since I haven't dedicated the $35 to purchasing one myself, I'll defer to David Pogue.
Chromecast: is it a game changer? The public is eating it up. It's definitely a game changer. But to understand how it's going to affect things, you have to think about how Apple has been going about the same thing. For years now, Apple has had a 'pet project' called AppleTV. It has never been on the front line of Apple's advertising. There have even been several analysts over the years that have predicted the doom of AppleTV. Apple has persisted though without really highlighting AppleTV. AirPlay was available through the iOS devices, but was mainly used for streaming music. It was a cool feature but not a wave maker.
When the iPhone 5 and iPad 3 came out, they both had AirPlay mirroring, which meant that anyone who already had an AppleTV could mirror their screen to their big screen TV. This was a significant event and clearly showed Apple's desire to get into the living room. They've since released newer hardware and software for the little device. You can watch Hulu, Netflix, HBO, and a bunch of other content, as long as you have an account. Recently, an iOS game developer released a game that really doesn't work without mirroring. It's a tennis game much like the one that comes with the Wii. The difference is that you play with your phone as the controller, using its internal accelerometers and gyroscopes to detect your motion. The video is displayed on your phone, but that doesn't really work when you're swinging your phone around like a tennis racket. However, if you mirror your phone to your AppleTV, you essentially get the same game that came with the first-generation Wii. This game opens the door for other games built the same way: all the work is done on the phone/controller in your hand, while the video is mirrored up to the big screen. The next advance I expect to see from AirPlay is the ability to mirror multiple devices to the same AppleTV. Putting two people's phones' screens on a single TV gives multiplayer games a chance (imagine Mario Kart but using your phone as the steering wheel).
Then Google released Chromecast. It's a third of the cost of the AppleTV and seems to work across different platforms. If you only look at the Chrome browser mirroring capabilities, this is huge. All the things that can be done in a Chrome browser can now be done on a big screen TV without requiring any extra remote controls, much in the same way as AirPlay does for Apple phones' screens. While Chromecast appears to compete directly with AppleTV given all the current features, there's more to it than that. AppleTV and Chromecast are on the same trajectory. While Google was late to the phone game, joining in only after Apple already had a tight grip on the market, they came to the table much sooner with Chromecast.
The other major factor here is that Airplay for Apple iOS devices only works with mobile devices. Chromecast promises to work not only using any mobile device but also PC computers with the Chrome browser installed. This means that all the content that people currently consume using their PC can now be consumed on their TV. This may not seem big, but given the cheap entry point, Chromecast could easily be used as a secondary monitor for every device in the house.
On top of that, since Chromecast can mirror anything from the Chrome browser, much content that has had a hard time breaking into the living room now has a direct link. For example, Hulu has two services, free and paid (Hulu Plus). The paid service doesn't have much content that the free service doesn't. There's a little, but it's not really what subscribers are paying for. Hulu Plus subscribers have the ability to stream Hulu content on just about any device they can get their hands on. Hulu free users can only get content through their browser (but not a browser on a mobile device). With Chromecast, users can easily use the free Hulu service but still view it on their TV without hooking up a PC. This means that Hulu will need to reevaluate what users are really paying for. At $8 a month, a one time investment of $35 for Chromecast not only will pay for itself in 5 months, but will get me pretty much the same content with little extra hassle.
Yes, Chromecast is a game changer. At $35, it's cheap enough to give it a chance even if it doesn't eventually work out. It's not like the $99 investment in an AppleTV.
Tuesday, July 23, 2013
BTSync: A new alternative to cloud based drive services
In a previous post, I wrote about how to connect your system folders (My Documents, My Pictures, etc.) to a Google Drive/Dropbox/SkyDrive account. The benefit of this is that you always have a backup available on the web. If my hard drive were to die today, I wouldn't be too bad off since all I'd need to do is download the Google Drive desktop app and redirect my system folders to my Google Drive folder. All my stuff would come back like it was never gone. I'd still have to install applications, but that's not too bad. It's nice every once in a very long while to lose all my programs; it forces me to trim the fat or look for better versions of the apps I use (like Paint.NET or Notepad++).
So, let me highlight some of the reasons that BTSync intrigues me. First of all, there is no limit to the amount of content that can be synchronized. This is mainly because your synced files are not stored on some limited corporate server somewhere; they're stored only on the systems where your files are synchronized. This is a double-edged sword, however. While cloud-based drives can be used for backup, BTSync doesn't back your content up to the internet. If you add a folder to BTSync on only one computer, the files aren't copied anywhere. So, unlimited size but no storage on any corporate servers. That may be two advantages in some people's books.
Unlike most of the corporate cloud-based drives, BTSync is only about the file transfers. As such, there are no online editors for your files. However, since there is no web access to your files (because they're not on any corporate server), you'll only be accessing your files from your desktop. You can install any office productivity suite locally (or even use Google Docs in a roundabout way).
Sharing is another feature that is different from other offerings. While other offerings essentially require you to have an account in order to have RW access to a shared folder, BTSync will allow anyone with the app to access/sync your folder as long as they have the secret, a special, very long, very complicated password. If you give another person the secret to your folder, they can sync your folder with a folder on their computer. Any changes either of you make will be reflected in the other's sync'd folder.
You can however, give out two other types of secrets: a read only secret and a one time secret. The RO secret allows the person to whom you give it to sync your folder to a folder on their system, but they won't be able to make changes to your folder. This is a good way of distributing files to friends. For example, you could setup your pictures folder and hand out the RO secret to family members. They would then get copies of any pictures you put in your pictures folder (think of doing this with your iCloud Photo Stream).
Have you started thinking about the possibilities yet?
Another thing I did right away with BTSync was to synchronize my Dosbox working directory across all my PCs. I play retro DOS games every once in a while. By synchronizing the working directory for Dosbox (a DOS emulator) I can access the games, save files, and anything else on any of my computers. This allows me to play a game on my desktop then save the game, exit Dosbox, go to the living room and launch Dosbox and pick up the saved game right where I left off.
I'm hosting a LAN party this weekend. I setup a folder where I intend to put all the installers and files needed (including my dosbox folder). I've added it to BTSync and will hand out the RO key via Facebook and email to everybody coming to the party. That way they can install the games ahead of time to make sure they work.
Another idea I had was to use BTSync to replace NQSync (which I had originally intended to write using the bittorrent protocol anyway).
One more feature then I'm done. I promise.
In my previous post, I talked about changing the default location of system folders. This is necessary since most cloud drives require the sync'd files to be in a particular folder. So I have to move my system folders to that sync'd folder in order to get them to sync. With BTSync, I don't have to move the folders. I can setup each folder in BTSync without moving it. This means that I don't have to move anything, I don't have to change Windows configurations, or anything.
I haven't done it yet, but I will probably only run Google Drive on one of my PCs. The rest will use BTSync to stay in sync.
So, that's my initial review of BTSync. So far, I don't see it replacing Google Drive, but I do see myself using it to distribute pictures and home videos to my family, keeping my games in sync across all my PCs, possibly synchronizing recorded TV shows, making backups, and using it at work.
However, I've always been a fan of the Bittorrent protocol. It's a peer-to-peer file transfer protocol that most people associate with downloading illegal movies or music. While those days are behind me, I've tried to help people understand that P2P is not the same as illegal. There are perfectly legal uses of P2P protocols like Bittorrent (see this, this, and this).
So it only makes sense that I became a fan of Google Drive (and of Google Apps, which lets me work on a spreadsheet or document with many other people simultaneously). If you're happy with that, or you only have one computer and no one you would ever want to share anything with, stop reading this blog post now (check out my most popular post instead).
Last week I stumbled upon BTSync (it has since been spun off and renamed to Resilio Sync). This is a little app created by the same people that designed the Bittorrent protocol. While there are several uses for BTSync, the easiest way to understand it is to compare its features and functionality to products like Google Drive. Here is my comparison matrix:
| Feature | Google Drive | SkyDrive | Dropbox | BTSync |
| --- | --- | --- | --- | --- |
| Size Limit [1] | 15GB [2] | 7GB | 18GB | ∞ |
| Shareable Content | Must be in Google Drive folder | Must be in SkyDrive folder | Must be in Dropbox folder [3] | Any number of existing folders |
| Your files stored on a corporate server | Yes | Yes | Yes | No |
| Online File Editors | Yes | Yes [4] | Viewer: Yes, Editor: No | No |
| Web Access to Files | Yes | Yes | Yes | No |
| Sharing | Only with Google users [5] | Only with MSN Passport users | Only with Dropbox users | Anyone with the app |
[2] Shared with Gmail and Google+ Hi-Res Photos
[3] Although pretty easy
[4] If you have a paid subscription to Office 365
[5] Not necessarily Gmail users, but anyone with a Google account
So, let me highlight some of the reasons BTSync intrigues me. First of all, there is no limit to the amount of content that can be synchronized. This is mainly because your sync'd files are not stored on some limited corporate server somewhere; they're stored only on the systems where your files are synchronized. This is a double-edged sword, however. While cloud-based drives can be used for backup, BTSync doesn't back your content up to the internet. If you add a folder to BTSync on only one computer, the files aren't copied anywhere. So: unlimited size, but no storage on any corporate servers. That may be two advantages in some people's books.
Unlike most of the corporate cloud-based drives, BTSync is only about the file transfers. As such, there are no online editors for your files. However, since there is no web access to your files (because they're not on any corporate server), you'll only ever be accessing them from your desktop anyway. You can install any office productivity suite locally (or even use Google Docs in a roundabout way).
Sharing is another feature that is different from other offerings. While other offerings essentially require you to have an account in order to have RW access to a shared folder, BTSync will allow anyone with the app to access/sync your folder as long as they have the secret, a special, very long, very complicated password. If you give another person the secret to your folder, they can sync your folder with a folder on their computer. Any changes either of you make will be reflected in the other's sync'd folder.
You can, however, give out two other types of secrets: a read-only secret and a one-time secret. The RO secret allows the person to whom you give it to sync your folder to a folder on their system, but they won't be able to make changes to your folder. This is a good way of distributing files to friends. For example, you could set up your pictures folder and hand out the RO secret to family members. They would then get copies of any pictures you put in your pictures folder (think of doing this with your iCloud Photo Stream).
Have you started thinking about the possibilities yet?
Another thing I did right away with BTSync was synchronize my DOSBox working directory across all my PCs. I play retro DOS games every once in a while. By synchronizing the working directory for DOSBox (a DOS emulator), I can access the games, save files, and anything else from any of my computers. This lets me play a game on my desktop, save it, exit DOSBox, walk to the living room, launch DOSBox there, and pick up the saved game right where I left off.
I'm hosting a LAN party this weekend. I set up a folder where I intend to put all the installers and files needed (including my DOSBox folder). I've added it to BTSync and will hand out the RO secret via Facebook and email to everybody coming to the party. That way they can install the games ahead of time and make sure they work.
Another idea I had was to use BTSync to replace NQSync (which I had originally intended to write using the bittorrent protocol anyway).
One more feature then I'm done. I promise.
In my previous post, I talked about changing the default location of system folders. That's necessary because most cloud drives require the sync'd files to live in one particular folder, so I had to move my system folders into that folder to get them to sync. With BTSync, I don't have to move anything: I can add each folder to BTSync right where it is, without touching the Windows configuration.
I haven't done it yet, but I will probably only run Google Drive on one of my PCs. The rest will use BTSync to stay in sync.
So, that's my initial review of BTSync. So far, I don't see it replacing Google Drive, but I do see myself using it to distribute pictures and home videos to my family, keeping my games in sync across all my PCs, possibly synchronizing recorded TV shows, making backups, and using it at work.
Thursday, June 20, 2013
Tailing a File in Windows
There is an easy and very useful command on most Unix-like systems called tail. Essentially, tail shows the last bit of a file. This can be really handy when you're trying to read a log file with 10,000 lines in it: you can use tail to view just the last 10 lines. While no such command exists in the native Windows command interpreter, there is an equivalent in Windows PowerShell. It's pretty easy to use, and since PowerShell ships with Windows 7 and later (and is a free download for XP and Vista), it's a handy way to get similar functionality without installing anything extra.
However, the command isn't called tail, it's called Get-Content. Here are some good examples and some of the options. The one I like is:
This returns all lines of the file specified in [filename] and waits for the file to be changed. When the file changes, the display is updated. This is great for when you want to watch a log file for new entries.
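For the sake of illustration, here's the shape of that command. The log path is just a placeholder; -Wait is the switch that keeps the file open and prints new lines as they're appended, and -Tail (available in PowerShell 3.0 and later) limits how much history you see first:
# Show the whole file, then keep watching it and print new lines as they are appended
Get-Content C:\logs\example.log -Wait
# In PowerShell 3.0 and later, -Tail limits the initial dump to the last N lines
Get-Content C:\logs\example.log -Wait -Tail 10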
Friday, June 7, 2013
NetVoyant Trap Processing
I decided to go through some of the content I've garnered over the years and make a few videos. This one shows how NetVoyant goes through its processing of incoming traps. Enjoy!
Monday, June 3, 2013
AirPrinting with an HP printer
If you have an iOS device and an HP printer, you might have been disappointed to find out that even though the HP printer supports Bonjour (the discovery protocol behind AirPrint), you can't actually print to it from your iOS device. This has been one of the main reasons my wife hasn't completely abandoned her desktop PC.
Here's how I fixed it:
Use a little utility called AirPrint Installer. It installs a Windows service that re-shares the printers connected to your Windows PC in an AirPrint-compatible way (the flavor that works with iOS 6). I tried doing it without the reg fix, but the printers wouldn't work. After the reg fix, sure enough, I saw the two printers I have connected to my PC, but they both had padlocks. I used the recommended fix (enabling the guest account on my PC) and they started working.
Here's the utility: http://forums.macrumors.com/showthread.php?t=1293865
PiTunes
Previously, I posted my experience automating iTunes for use in the nursery. That has been working very well and I wouldn't normally see any reason to change it. However, since I recently acquired a Raspberry Pi, I've had several ideas about what to do with it. The idea that won was to replace the nursery room computer (which was already a Dell Studio Hybrid small form factor PC). The advantages would be that the Pi is fanless and quite a bit smaller. I could also then reuse the Dell Hybrid for other things.
The same objectives apply:
- I want the music to come on and turn off by itself
- I don't want the music to start off at full blast in case we've put the boys down early.
- I want the music to play whenever we put the boys to sleep, which could be as early as 7pm but as late as 9pm.
- I want to be able to rotate playlists so it's not the same thing every night.
- I want to be able to kick off the music any time.
- I want to be able to manually override either the current song or the current volume level.
So, of course, the first thing I did was download the Raspbian image, get it booted up, connect via SSH, and start downloading updates. I'll assume that if you're trying this too, you'll use the instructions at RaspberryPi.org to get started. To get the most recent updates, type sudo apt-get update && sudo apt-get upgrade. This may take a while, depending on your internet speed. The Raspberry Pi doesn't have much CPU either, so you might just kick this one off and go to bed.
I also purchased a Netgear G54/N150 Wireless USB Micro Adapter and plugged it in. I'm a Windows guy, so most of what Linux does appears magical to me. I had a suspicion that the Raspberry Pi would get on my wifi all by itself. Of course it didn't, because at the very least it didn't have my SSID and WPA pre-shared key (the password). I was lucky enough that the Raspbian distribution recognized the adapter and fired it up without requiring me to load any drivers (thank the stars!). I verified this by issuing an iwconfig command, which showed me it was ready to go. Since my network uses WPA2 encryption, I had to configure the built-in utility called wpa_supplicant. In the end, all I had to do was edit /etc/wpa_supplicant/wpa_supplicant.conf and add a couple of lines:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="Hobbiton"
proto=RSN
key_mgmt=WPA-PSK
pairwise=CCMP TKIP
group=CCMP TKIP
psk="yeahright"
}
Much documentation online says all you have to do to get it working is bring the interface down and back up. I wasn't able to get it to grab a DHCP address without rebooting the Pi. To get the interface to come online at boot time, I added ifup wlan0 to /etc/rc.local.
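For the record, the two approaches look like this (wlan0 is the interface name Raspbian assigned in my case):
# bring the interface down and back up - this alone never got me a DHCP lease
sudo ifdown wlan0 && sudo ifup wlan0
# the line I added to /etc/rc.local (typically just above the final exit 0)
ifup wlan0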
After I got the Pi up and running, it was time to install a few things that aren't included in the base image. Mainly, I needed mplayer (that's the media player that will actually play the music) and ffmpeg (a bunch of codecs that will allow mplayer to play all kinds of files including mp3 and m4a). I also wanted to be able to access the log file from wherever, so I decided to install a lightweight web server. I'll log to index.html in the www folder. Type sudo apt-get install lighttpd mplayer ffmpeg and sit back for a while.
The next thing is to get the actual music onto the Raspberry Pi. At first, I started out by putting all the music (and my scripts) on a USB stick. In order to give the Pi access to a USB drive, just follow the instructions here. I eventually went instead with a network attached hard drive, so I followed the instructions here to create a persistent mount to my NAS. After a while, I noticed some jitter in the music, so I moved the data back to the SD card (where the OS is also installed). I'm trying to listen now to see if that makes a difference to the jitter.
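If you go the NAS route, the persistent mount boils down to a single /etc/fstab line. This is only a sketch; the NAS address, share name, and credentials below are placeholders for whatever your NAS uses, and you'll need the cifs-utils package installed:
sudo mkdir -p /mnt/nas    # create the mount point first
# illustrative /etc/fstab entry for a CIFS share on a NAS
//192.168.1.50/PiTunes  /mnt/nas  cifs  username=pi,password=changeme,iocharset=utf8,nofail  0  0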
Now, on to the script. The script runs as a cron job. The comments should explain exactly what's happening.
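To give you a feel for it without pasting the whole thing, here's a stripped-down sketch of that kind of script. It is not my actual play.sh: the bedtime window, volume levels, and paths are placeholders, but the flow is the same idea (cron calls it every few minutes, it starts mplayer quietly during the window, ramps the volume with amixer, and logs to the lighttpd web root).
#!/bin/bash
# play.sh (sketch) - called from cron every few minutes
MUSIC=/mnt/nas/PiTunes/music          # placeholder; could point at christmas/ or waves/ instead
LOG=/var/www/index.html               # lighttpd's web root, so the log is readable in a browser
HOUR=$(date +%-H)                     # current hour without leading zero

log() { echo "$(date '+%F %R') $1" >> "$LOG"; }

if [ "$HOUR" -ge 19 ] && [ "$HOUR" -lt 22 ]; then
    # Inside the bedtime window: make sure music is playing and ramp the volume up.
    if ! pgrep -x mplayer > /dev/null; then
        amixer -q set PCM 40%                  # start quiet in case the boys are already down
        log "starting playback from $MUSIC"
        mplayer -shuffle "$MUSIC"/* > /dev/null 2>&1 &
    else
        amixer -q set PCM 70%                  # ramp up to normal volume on later passes
    fi
else
    # Outside the window: ramp down and stop whatever is playing.
    if pgrep -x mplayer > /dev/null; then
        amixer -q set PCM 20%
        sleep 30
        log "stopping playback"
        pkill -x mplayer
    fi
fi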
By the way, the contents of /mnt/nas/PiTunes are: christmas/ nothing/ music/ play.sh waves/. Version 2 of the script will play the Christmas directory instead of the music directory during the month of December.
The next part involves running the script from CRON, which is equivalent to Windows' Scheduled Tasks. To add jobs, just run crontab -e. The format is one job per line, parameters separated by spaces:
Mine looks like this:
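If it helps, the general field order in a crontab line is minute, hour, day of month, month, day of week, then the command. A made-up entry (not my real one) that fires the script every five minutes and lets the script itself decide whether it's inside the bedtime window would look like:
# m   h  dom mon dow  command
*/5   *   *   *   *   /mnt/nas/PiTunes/play.sh > /dev/null 2>&1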
So, let's look at the objectives again:
- I want the music to come on and turn off by itself - check!
- I don't want the music to start off at full blast in case we've put the boys down early - check!
- I want the music to play whenever we put the boys to sleep, which could be as early as 7pm but as late as 9pm - check!
- I want to be able to rotate playlists so it's not the same thing every night - check!
- I want to be able to kick off the music any time - this can be done, although it's not as easy. I have to ssh to the Raspberry Pi and kick off the script manually. However, if it's a silent day, nothing will happen. I'll probably look at a way to adjust the script so that when it's called from CRON it does its normal thing but when called from command line it always plays music.
- I want to be able to manually override either the current song or the current volume level - This can also be done, but it's not as easy. While in ramp up or ramp down mode, the volume doesn't increase/decrease by 1; it gets set to a particular level. Overriding the volume during ramp up/down won't make any difference because as soon as the script loops it will reset the volume right back to where it was. I may be able to change this in the future. While playing, I can change the volume by logging in via SSH and issuing amixer -q set PCM X%, where X is the volume I want to set it to. However, during ramp down, the volume starts back at 100% before dropping down. It may not take much, but the next version may use an increment/decrement instead of fixed volume levels (see the sketch after this list). If I get this working, this objective will be met.
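Since amixer already understands relative adjustments, the increment/decrement idea mentioned above could be as simple as swapping the fixed levels for something like:
amixer -q set PCM 5%+    # nudge the volume up a notch
amixer -q set PCM 5%-    # nudge it back down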
Some of the plans I have for the future are an automatic switch over to Christmas music instead of regular music during the month of December, web service based controls, & incremental/decremental volume controls. We'll see when I have time to work on it.
Monday, May 20, 2013
Setting a default group for an NPC report page
UPDATE: An additional tool is now available: autorefresh.html. This widget uses its own script to handle automatically refreshing a page. It does not call the built-in Auto-Refresh capability of NPC. You provide the number of minutes between refreshes in the browser view URL: /custom/autorefresh.html?interval=5
UPDATE: An additional tool now available for download lets you set the default timeframe for a page. Do you have a page that you always want to show the last 24 hours of data? Just add this widget.
UPDATE: The newest version is out. I updated the script to fix a problem when navigating back to NPC from a page that used the default page context setter. I have also released a modified version that allows you to set the default IP SLA test type on an IP SLA test report page.
Page IPSLA Type Default
Page Group Default
Friday, May 17, 2013
Interface Summary Table Ultimate Tweak
A while back, I did a major customization of the Top Least Interfaces Table in NPC. This is a NetVoyant view that normally shows interface availability and utilization in and out of every interface. There's no reason, however, that that table can't contain many more metrics. That's essentially what I did with this customization.
In order to implement this, run the following command on the NPC server:
That should do it. Now the default definition for that view should contain all the advanced metrics shown above. The view also has a new title in the view list: "Interface Utilization Summary". The way you can know it's the right one is by hovering over the view in the view list; it should pop up a description with my name in it.
This can also be applied directly to the NV view. You can do it through the wizard, or just run the following command on the NV server:
Wednesday, May 8, 2013
NV Default Tweaks
To go along with my post about the default tweaks that I do to a vanilla SuperAgent (ADA) installation, I decided to go ahead and document my default tweaks for NetVoyant. Note the disclaimer at the bottom of this page. All of these tweaks should be done before the first discovery cycle begins.
- Add discovery scopes by network, not individual IP address. This is a hot topic, but I maintain that using networks is better than individual IP addresses, if only for the sake of administration. If you've configured DNS and discovery properly (see point 5 below) IP address changes won't require any intervention. If you'd rather keep a super tight grip on your stack, go right ahead.
- Enable daily Periodic Discovery: just a checkbox under Discovery
- Tweak SNMP Timeout: Change the timeout from 5 seconds to 2. If it hasn't responded after 2 seconds, it's not going to respond after 5.
- Enable Reachability Only Monitoring: If you want to monitor devices that are in scope but not SNMP capable, you can monitor them with ICMP only. Enable this by unchecking the box that says 'Ignore Non-SNMP Devices'. You'll also need to go to Config>>Discovery>>Device Models and check the 'Enabled' checkbox on the 'NonSNMP Devices' model.
- Update Device Naming: This one takes some thinking. If you know you will have DNS entries for all of your devices, the best would be to let NV poll via FQDN (vs. polling by IP address). That way, if your discovery scopes include networks instead of individual IP addresses you won't have to change anything in NV when the IP address of a device changes. Since NV will be polling via FQDN and the new IP address is still in scope, NV won't know any different. Set Default device name to 'DNS Name'. If there isn't one, NV will poll via IP address.
- Give NV more resources: Slide the resource usage slider up to its max. If NV isn't the only thing on the server, do this carefully.
- Disable Undesired Classes: Under Discovery>>Device Classes, disable any device classes you don't want to monitor. This is one way you can prevent NV from monitoring everything on your network even though you've added scopes by network. I typically disable Printers and Workstations. You will need to keep an eye on any SNMP capable devices that show up in the Other group; this means NV doesn't know what class the device belongs to. Right click the device and click Change Classification. If you need a new class, come to Config>>Discovery>>Device Classes and create it. After you make a classification change, make sure your undesired classes still say 'No Device Models Enabled Upon Discovery'.
Tip: when you're reclassifying devices, you can set the icon that gets used by the NV console when displaying the device. This is only for the console, but it can make things easier to troubleshoot. You can either use one of the built-in images (found at D:\netqos\netvoyant\classes\redpoint\images) or store your own there (keep it to less than 20x20 pixels) by entering the image name (without the .gif) in the change classification dialog box.
- Disable polling of the System Idle Process: If the Host Resources Software Performance (hrswrun) dataset is going to be used, set up a discovery rule called 'Default' with expression:
hrSWRunName <> 'System Idle Process'
It's also a good idea to go ahead and set the poll event severity to none. Otherwise you'll get an alarm every time a process fails to poll. This can be a good thing, since it indicates that a process has gone down. However, if NV is polling a process that is being run by a user, when the user logs off, the process will disappear. In fact, I usually go through and disable poll events for all datasets. This should be done understanding what is lost when not getting poll events.
- Disable Host Resource Device Table (hrdevice): Create a discovery rule called 'None' with expression:
1==2
If you've already discovered some/all of your devices, set the poll instance expiration to 0 and enable the 'None' discovery rule. Then run a full rediscovery. After that's done, disable polling and periodic discovery on that dataset.
- Disable VMware datasets: You will only get data for these datasets if you own CA Virtual Assurance. If you do, skip this step. If you don't, disable polling and periodic discovery for VMware Datacenter Element (aimdc), VMware Host (aimhost), and VMware Virtual Machine (aimvm).
- Disable NBAR and RMON2: if you have NBAR or RMON2 probes and want to poll them from NV, skip this step. Otherwise, disable polling and periodic discovery for Protocol Distribution (NBAR) (nbarstats) and Protocol Distribution (RMON2) (protodist).
- Disable polling of optical, removable, and floppy drives: Add a discovery rule to the Host Resource Storage (hrstorage) dataset called 'Default' with expression:
hrStorageType NOT IN ('1.3.6.1.2.1.25.2.1.7','1.3.6.1.2.1.25.2.1.5')
If you've already discovered some/all of your devices, set the poll instance expiration to 0 and enable the 'Default' discovery rule. Then run a full rediscovery. After that's done, set the poll instance expiration back to something reasonable like 28.
- Disable polling of various interface types: Add a discovery rule called 'Default' with expression:
ifInOctets+ifOutOctets<>0 AND ifType NOT IN (1, 18, 24, 134, 37, 100, 101, 102, 103, 104) AND ifSpeed<>0
If you're curious about which interface types this excludes, look on the Config tab under Discovery>>Interface Types.
- Enable Verbosity on the Topology service: Go to Services>>Topology and change the drop down from 'Normal' to 'Normal (Verbose)'. There's no save button. Turn this back to 'Normal' after NV is up and running and stable in production.
- Disable Traps: If NV isn't going to be your trap handler, prevent stray traps from getting logged into the database by going to Services>>Traps and setting start mode to 'Manual'. Then click 'Stop' to stop the service.
- Configure your view options: Under the View menu, make sure everything is enabled.
Tuesday, May 7, 2013
Default SuperAgent Tweaks
Whenever I'm setting up a new SuperAgent system, there are always a few things I go through and do before I start data collection. So, here's my list so I don't have to remember it:
- Add collectors by DNS - I like to add by NetBIOS name then click the IP button. This helps me make sure I get the right IP address. Then, after SA has done its check of the collector, I click the DNS button, which finds the DNS name from the IP address it previously resolved from the NetBIOS name. This double check makes sure I have the right server since the FQDN name should be fairly similar to the NetBIOS name.
- Add a port exclusion - Given the troubles I've had with large deployments and auto-discovery, I've decided to start adding a huge port exclusion from the get go. I add 1025-65535 for the whole domain. When/if I need to monitor an application in that range, I can always add an exception. This can be done via the GUI or through a query:
insert into application_rules
(application_id,exclude_port_begin,exclude_port_end,rule_type)
values (0,1025,65535,0);
New in ADA 9.3! - This option can be enabled on each collector. For standard collector or virtual collector, create a new text file: drive:\CA\bin\saConfigInterfaceRuntimeOptions.ini with the following line:
/force positive config
Then restart the CA ADA Monitor service.
- Add actions to the default incident responses - add all the possible actions to the incident responses. If I have the email address of the person/group that will be monitoring the SA for problems, I put an email action in the collection device default incident response.
- Create a 'No Response' network incident response - create this incident response with no actions.
- Adjust network types - I like to have only 4 network types: Internet - VPN, LAN, Not Rated, WAN. I delete all the other network types. Assign the Internet - VPN and Not Rated network types to the 'No Response' incident response created earlier.
- Edit the 'Weekends' maintenance schedule - Change the name to 'Full Time Maintenance' and change the period to all day, every day. If there is a standing maintenance window that affects every server everywhere, add that period to the default.
- Change the data retention settings - Bump everything up to their max. If it becomes a problem later on, I can always tune it down.
- Change the free space alarm - Change this from 5GB to 20GB and put somebody's email address in there.
- Import a networks list - I prefer to use the GXMLG, but at least understand regions if doing it by hand. You can also use the standard private networks list if you have nothing to start with.
- Bump up the default observation numbers
New in ADA 9.3! - You don't have to do this via direct database manipulation any more. Just go to Administration >> Policies >> Performance Thresholds. The middle table allows manipulation of the default threshold settings. You can also set up the default threshold settings for the 'Not Rated' and 'Internet - VPN' network types; set them up for no thresholds on the network and combined thresholds.
- Import servers as subnets instead of individual servers - This just makes sense. If possible, try to group servers together into subnets by application. This makes it easier to assign groups of servers to an application. If this isn't possible, enter the entire subnet.
Those are all the tweaks I can think of at the moment. If I think of any others, I'll add them to this list.
SuperAgent Application Thresholds
SuperAgent thresholds are comprised of several different components. The most critical part of the thresholds is the baseline sensitivity. Out of the box, SA thresholds are applied to every application and are set with the sensitivity values dictated by NetQoS. There are actually two types of thresholds that can be applied: sensitivity and milliseconds.
Doing anything in the database directly isn't supported by CA and you may break your stuff. If you do, I'm not responsible and CA will probably have you revert to a DB backup before even considering talking to you. So either don't tell them that you did this or make sure you can back up and restore your database as needed. There, you have been warned.
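In practice, "make sure you can back up and restore" just means taking a dump before you touch anything. A sketch of what that might look like, assuming the usual NetQoS MySQL port of 3308 (database names and credential requirements vary by product and install, so adjust accordingly):
mysqldump -P 3308 --all-databases > netqos_backup.sql
Restoring is the reverse: mysql -P 3308 < netqos_backup.sql. Add -u and -p if your install requires credentials.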
Sensitivity
Sensitivity is a unit-less scalar number between 0 and 200. This type of threshold looks for deviations from baseline. A higher number (think 'more sensitive') will alert on a slight deviation. A lower number will not alert until the deviation is more extreme. Think of it as that sensitive co-worker who goes to HR for everything. If that person is very sensitive, any little thing will cause them to go to HR. If they were less sensitive, it would take something more extreme for them to march over and report you. Sensitivity baselines are really handy since the actual numbers involved in the threshold change as the baseline changes. This means that if one day of the week is typically different than the other days, the baseline would reflect that. Since the baseline reflects that, so do the thresholds for that day. SuperAgent baselines take into consideration hour of the day and other factors to get very good baselines. The other thing that SuperAgent does with regards to baselines is that it baselines every combination individually. Since every combination has its own baseline, a single set of thresholds that refer only to the baseline can be set across the board. This is how things come out of the box.
Milliseconds
The second type of threshold is a more traditional threshold that looks at the value and determines if it is over a specified value. This threshold is much harder to set since you'd have to track data and understand what values you should set. This type of threshold does have one advantage: baseline creep protection.
Baseline creep is when the baseline increases over time because of a slowly degrading performance. Thresholds tied to that baseline would also slowly increase. This is like boiling a frog. You start out with a live frog in cool water and heat it up gradually. By the time the water is hot enough to kill and boil the frog, it's too late for the frog to jump out.
Minimum Observation Count
SuperAgent also takes into consideration the fact that a single observation of a transaction that exceeds a threshold (either sensitivity or millisecond) is nothing to pay attention to. The problems really come into play when many observations are seen exceeding the threshold. The minimum observation count is the number of observations that must exceed the threshold within a 5 minute period before the whole 5 minute period is marked as degraded or excessively degraded. These numbers are quite low out of the box. It is common practice to bump these numbers up (usually by a power of 10) in order to reduce the amount of noise that is reported by SA. More on this later.
Default Application Thresholds
When an application is configured, either by a user or by the system, a default set of thresholds is applied. The same settings are used for all applications. This can be a problem with newer SA systems since auto-discovery tends to create many applications. If they are all using the default thresholds, it can result in much noise. This is not because the thresholds are too low. Remember, the default thresholds are tied to the baseline. The real problem is that the default minimum observation numbers are too low. Luckily, these numbers can be changed.
Changing Thresholds Through the Web GUI
The thresholds and minimum observations can be changed in two different places in the GUI: in the applications list or under policies. The applications list is the better place to be if you want to change more than one application/network type set at a time. In the applications list, multiple applications can be selected (maximum of 100 applications selected at a time) and the thresholds edited for all those applications. This may be handy at least for editing the thresholds of the user created applications.
New in ADA 9.3! - A new option has been added to the GUI that allows the modification of the default threshold for new applications (new system discovered applications and new user defined applications). Go to Administration >> Policies >> Performance Thresholds. The middle table allows modification of the default threshold set. You should also go back to applications that have already been defined and update those thresholds. Once an application is discovered by the system or created by the user, the thresholds are independent of the default set.
Changing Thresholds Through a MySQL Query
When changing the thresholds for the system applications, there are several tactics. The first involves increasing the minimum observation count. This can be done with a fairly simple query that both increases the minimum observation count for all defined applications and modifies the default application thresholds so that all future applications use the same settings.
--run this query to increase the minimum observation count by a power of 10.
update performance_incident set observations = observations * 10;
You shouldn't have to reload the collectors to get this change to take effect; however, if you do experience problems seeing the updated threshold values, reloading the collectors should fix it.
Setting Thresholds for Internet/VPN Network Type
A best practice when configuring SuperAgent is to configure a special network type for all the network definitions in SA whose network performance is not entirely within your control. Alarming on networks like this is ineffective since the resulting alarms are inactionable. I usually create a network type called 'Internet - VPN' to indicate any networks that are entirely or partially out of my domain of control. In other words, I set the network type to 'Internet - VPN' for any client IP address ranges across the internet or on another organization's network. If I were to detect a problem with the network metrics to a user within one of these networks, I wouldn't know if the problem were within my portion of the network or out on the internet. If it were out on the internet, I wouldn't be able to do much about it.
So, first of all, create the 'Internet - VPN' network type and assign all your non-internal IP address ranges to it. This would include VPN IP addresses since a portion of their conversation occurs over the internet.
The next step is optional, since the third step negates its necessity. However, if you don't want to go ahead with the third step, implementing this step will at least prevent you from getting alerts on the network metrics for those networks. All that you need to do is create a new network incident response for the 'Internet - VPN' network type and don't assign any actions to it. This should weed out email notifications from issues detected for networks where you can't help the network performance.
New in ADA 9.3! - A new option has been added to the GUI that negates having to perform step three using direct database manipulation. Instead, go to Administration >> Policies >> Performance Thresholds. Click 'Add Custom by Network Type' in the second table. Pick the 'Internet - VPN' network type. Change the Network and Combined thresholds from 'Use Default' to 'Customize' then change the now enabled drop downs from 'Sensitivity' to 'None'. You'll want to do this for NRTT, RTD, NCST, ERTT, DTT, and TTT.
Step three involves a little database manipulation. Essentially, you will need to add a record to the performance_incident table for every metric/app combo you want to ignore. Since you'll need to ignore NRTT, RTD, NCST, ERTT, DTT, and TTT, you'll need to add 6 rows for every application. Luckily, this isn't too hard. The only downside is that this doesn't set things up for any future applications. You'll have to repeat the process. If you do, the query will fail unless you do a complete undo of everything else first. This first query undoes all the threshold sets for the network type containing the string 'VPN'. Make sure your network type has this string or modify the query below.
-- run this query to remove any thresholds currently tied to that network type
Delete from performance_incident where agg_id = (select max(agg_id) from aggregates where agg_type=1 and agg_name like '%VPN%');
Once you've done that, or if this is the first time you're running this, run the following query. Again, make sure your network type has the string 'VPN' in the name. Essentially, this inserts a row ignoring thresholding for the VPN network type (hence the 0's in the query below right after m.metric_type) for every application and for each of the metrics we want to ignore (hence the last set of numbers).
-- run this query to disable network and combined metrics for the network type whose name contains the string: VPN
INSERT INTO performance_incident (app_id, agg_id, metric_type, thres1, thres1_type, thres2, thres2_type, observations)
SELECT a.app_id, (select max(agg_id) from aggregates where agg_type=1 and agg_name like '%VPN%'), m.metric_type, 100, 0, 90, 0, 50 as observations
FROM applications as a, metric_types as m where m.metric_type in ( 0 , 1 , 2 , 3 , 4 , 9 );
Tuesday, April 16, 2013
Finding the data source for a particular device in NPC
Recently, we needed to know which data source was contributing to the report data for a particular device in NPC. This was fairly easy to find out given a simple query:
mysql -P 3308 -D netqosportal -e "select a.itemname as Device, v6_ntoa(a.address) as Address, b.consolename as DataSource from dst_device as a, data_sources2 as b where a.sourceid=b.sourceid and itemname like '%devicename%' order by a.itemname;"
Simply replace devicename with the device name and execute this at a command prompt on the NPC server. The result should look something like this:
+-------------+-----------------+------------------+
| Device      | Address         | DataSource       |
+-------------+-----------------+------------------+
| center      | 192.168.100.2   | NetVoyant        |
| nacogdoches | 192.168.100.3   | NetVoyant        |
| nacogdoches | 192.168.100.3   | ReporterAnalyzer |
| houston     | 192.168.100.4   | ReporterAnalyzer |
| houston     | 192.168.100.4   | NetVoyant        |
| dallas      | 192.168.100.5   | ReporterAnalyzer |
| dallas      | 192.168.100.5   | NetVoyant        |
| sanfelipe   | 192.168.100.6   | ReporterAnalyzer |
| sanfelipe   | 192.168.100.6   | NetVoyant        |
| austin      | 192.168.100.7   | ReporterAnalyzer |
| austin      | 192.168.100.7   | NetVoyant        |
| elpaso      | 192.168.100.8   | NetVoyant        |
| brownsville | 192.168.100.9   | NetVoyant        |
| beaumont    | 192.168.100.10  | NetVoyant        |
| lufkin      | 192.168.100.11  | NetVoyant        |
| ftworth     | 192.168.100.12  | NetVoyant        |
| ftworth     | 192.168.100.12  | ReporterAnalyzer |
| tyler       | 192.168.100.13  | ReporterAnalyzer |
| tyler       | 192.168.100.13  | NetVoyant        |
| henderson   | 192.168.100.14  | NetVoyant        |
| amarillo    | 192.168.100.15  | NetVoyant        |
| amarillo    | 192.168.100.15  | ReporterAnalyzer |
| sanantonio  | 192.168.100.16  | NetVoyant        |
| bexar       | 192.168.100.17  | NetVoyant        |
+-------------+-----------------+------------------+
It would be nice if NPC or the new CAPC had some kind of feature that showed the datasource(s) for a particular object on the device details page.