Tuesday, September 12, 2017

Hurricane Harvey - Our Story

Hurricane Harvey started affecting us Friday, 25 AUG 2017.  It was my Friday off, and we were preparing for the next Monday when the twins would enter kindergarten. As such, Friday was "Meet the Teacher" at their school. The sky was dark and there was light rain. It felt like a good day to cuddle up and watch a movie. Friday evening, I was in touch with our ward leadership as we coordinated our response teams.
Our band was to perform on 9 SEP, so the next morning (Sat 26 AUG), Christy and I got together with the rest of the band for practice near Black Horse Ranch. We were finishing the first set as the bottom fell out, so we decided to break. We were across Cypress Creek from our kids and wanted to make sure we got back to them before any flooding started. We had identified some friends of ours in a neighborhood near us that was likely to flood. They were preparing to move out the following Thursday (31 AUG), so they already had most of their stuff in boxes. She (Kelia) was almost 9 months pregnant, so it was agreed that they would preemptively evacuate to our home. On the way home, we went over and helped him (Kris) lift some of their furniture onto blocks and 2x4's. They had a few other things to finish and Kris has a large pickup truck, so we left them to it, expecting them to come over later in the day; they could get out even if the flooding started. It's important to note that their neighborhood usually floods before anything else in the area, and when the flooding is really bad, their neighborhood dumps out into our neighborhood. As long as they are not flooding into us, we don't have a problem draining our neighborhood.
They came over later that afternoon and the kids started playing. We got them set up in our spare bedroom, with bunk bed cots for the kids (4 people total in that family). Kris and I then went back over to his neighborhood. He went to the house to grab a few things they had forgotten, and I went to help another friend attempt to waterproof his garage door. We jammed some tarps into the hinges of the garage door and weighted them down with landscaping bricks. This turned out to be a pretty good barrier against the water, which eventually rose a couple feet above the bottom of the garage door. When we returned, there was only water in the street gutters. A few hours later, water had risen to cover the street. We started keeping an eye on things. The reservoir we drain into was well below us, so I wasn't worried that flooding would get to dangerous levels for us. The last two major floods had not produced enough water quickly enough to bring it more than halfway up the driveway.

Sunday morning (27 AUG) dawned with some water covering the streets, but less than at the highest point overnight. The other friend in that same neighborhood that always floods first had not yet evacuated, and at their house the overnight rise had not receded, so they were looking at evacuation options. Kris and I rode over in his big truck to prepare their house for flooding and to convince them to evacuate. It became evident that the water was going to keep rising. This family had waited until it was too late to evacuate during the 2016 Tax Day flood and had to be evacuated by canoe. We emphasized how important it was to avoid getting to that point again. The father wanted to wait it out, so we took the mother and kids to another ward member's two-story home, which was serving as a dispatch location for the emergency crews. The father was left to his own devices to get out (which he eventually did on his own).
A large number of ward members had congregated at that two story home for a previously scheduled baptism. Since extended family had flown in for the event, it was decided that the baptism would be performed not at the presently closed Eldridge building, but in the pool in the rain (pretty memorable!). Since there was a large group and the Bishop was present, it was decided that the sacrament would be administered since all other church meetings were cancelled. Shortly afterward, the rain lightened up and most of our street drained.
Sunday afternoon, a rescue request came in for a family in Enchanted Valley. It was outside our area of responsibility, so our dispatch tried to find resources in the area. When that proved unfruitful, he dispatched Scott with his big Yukon. They made it to the family and loaded everyone up. There wasn't room for two of the four rescuers, so they stayed behind to be retrieved after the family was dropped off at a safe location. On the way back, Scott decided to splash around a little and stalled his Yukon. They pushed it up onto dry land and notified our dispatch. I saw that call come in and reached out to a few Jeeper groups who had been offering help and making rescues since the rain started. Allan and Z responded and we headed out toward Telge and 290. I made it past the Sheriff station before the water started getting deeper. I gave Allan and Z my tow strap and they continued on (they have a few inches more clearance than I do). Their two-door Jeep would only hold two of the four rescuers, so Scott and his nephew stayed with his truck while the two who had originally stayed behind were brought back to me. I had backtracked and waited under the 290 bridge at Telge. By the time they returned, the water had risen. Allan commented that he could probably make it back in, but he was worried about getting out while the water was still rising. We decided to attempt the rescue via Huffmeister. It was dark by this time. In my lower Jeep, I led the charge. The streets were clear of water until about a half mile north of Cypress North Houston. I was cruising at about 40 mph when we hit the water. Needless to say, it was a pucker moment. We were all fine, but it was one of those moments where everything went into slow motion. The water started getting deeper, so again Allan in his swamp thing went ahead to see what they could see. The water ended up being too deep (reportedly about 6'), so we weren't getting to Scott and his nephew any time soon. They would have to ride it out. The rain started coming down harder, and we had just received news that the Addicks and Barker reservoirs would be opened up at 2am. Not yet knowing how this would affect the current water levels, we decided to break for the night. We went to bed late Sunday night as we watched the waters begin to slowly creep over the street in front of our house.

Monday morning (28 AUG) when Christy got up, Kelia told her the water outside, which was up to the sidewalk, was no longer draining away. Christy insisted that we start making plans to raise our important possessions and put together an evacuation plan. I was hesitant because past experience, even with extreme flooding, had never given us any problems. I reluctantly conceded though, and we figured out what we would do if we decided to evacuate.
We found out that the neighborhood that always floods first had breached the main road and was spilling into our neighborhood. It wasn't going to get any worse for them, but it was coming in quickly enough that our drainage system wouldn't be able to keep up for long. I went for a hike in my chest waders and saw our main drainage creek rising. This meant that what we were draining into was full and things were only going to get worse from here. This had never happened before. It turns out that the Addicks reservoir had filled up. <a href="https://www.usatoday.com/story/news/nation-now/2017/08/28/controlled-release-water-houston-reservoirs/607594001/">The Army Corps of Engineers had already opened it up to drain it</a> (which would send the water south toward the ocean), but the water leaving was less than what was coming in. I broadcast my hike live over Facebook (https://youtu.be/-XQDWJn-1Tw). Upon returning home, I decided it was time to get out while the water was still low enough for my Jeep and Kris' truck. Our neighbors (4 adults and one infant) also needed to evacuate. We rallied everyone into motion and started implementing our plan. We got everything that we could think of up as high as we could. Kris and I loaded our vehicles with the essentials we would be taking with us. I got my family loaded up in my Jeep, and Kris got his family and the neighbors loaded up in his truck. My neighbor got 3 videos of our escape (part 1, part 2, part 3) from the back of Kris' truck.
My Jeep dove into the water and got us to a high point right before the exit of the neighborhood onto Barker Cypress (which was the spillover point for the first neighborhood that flooded). I parked there and we made sure the boys had their life jackets on and seatbelts off (in case we had to ditch the Jeep). About 8 minutes later (which seemed like an eternity), Kris' truck caught up with us and we pushed forward into the deeper water right before the exit onto Barker Cypress. It's at this point that I think I got water in my differential; more on that later. With no option but to push forward, we got water up to the top of the Jeep tires before making it out onto the shallows of Barker Cypress. We turned south, away from Cypress Creek and away from 290, where the water was coming from. We made it down to Tuckerton without any real issues except for some water up to the middle of the Jeep tires. It was dry from there on out. My plan was to head southwest until we found a place to land. While en route, James, a fellow Cub Master, texted me offering to let us come to his place indefinitely, which we did. His neighborhood was wet but didn't have any water on the streets. I realized later that we were living out the story of the three little pigs: Kris' family fled the straw house ahead of the big bad Harvey to our house of sticks, and we all eventually fled to James' house of bricks.
I spent most of Monday afternoon coordinating rescues, surveying potentially flooded/closed streets, and making various runs to the Longenbaugh Mormon church, which had been turned into a shelter. There was a ton of food and other donations to be received and sorted, as well as families to take care of. I got a call later in the day asking for help with the evacuation of a family of 13 near the intersection of Queenston and Tuckerton. I made it to the Shell station there, which had turned into a staging point for various high-clearance vehicles that were going in to make rescues. I arranged for an ATV with a flatbed trailer to make the run to the house where the family was. They would bring them out to me and I would take them to a shelter. After the ATV was dispatched, the family called and cancelled. I still feel bad for the driver of the ATV. I signed up to do a shift on Tuesday from 4-8pm at the Longenbaugh building. The shelter required two Elders or High Priests present at all times.

Tuesday morning (29 AUG), Christy and I ventured out in the Jeep to try to make it back to the house. Several reports on our neighborhood Facebook page indicated that the waters were receding. We found high water on Red Rugosa, but not too much covering the street in front of our house. We discovered that the water seemed to have entered the garage and gotten to the front porch, but hadn't come into the house. We also found that a telephone/electrical pole, which had been parked at the end of the street waiting to be installed, had floated into our front yard. This was surprising, but not unreasonable: it was wood, and the water was high and apparently churning toward the three storm drains in front of our house. Some neighbors saw it thrashing around and lashed it to one of my trees. Christy and I elevated a few more things in case the water came up again. I built a couple of impromptu sandbags out of wet towels, garbage bags, and landscaping bricks to put at the front and back doors. I spent the afternoon at the Longenbaugh building. The clouds moved off and the sun shone. It felt like a good sign that the storm was over. There was a rumor about a kicked-in door across Barker Cypress, so I decided that I would spend the night at the house with my shotgun. It also gave me a chance to watch Guardians of the Galaxy 2.

Wednesday morning (30 AUG), our roads were dry and we had decided that we could probably come back to our house from James'. The sun had started to shine and only the lowest intersections still had water. Harvey had moved on to east Houston, so while we were technically still in the storm, we were now on the dry side. We came back home and started to put things back down on the ground. I spent the afternoon loading up small items from Kris’ house and being a shuttle driver for the ward team that was gutting a home. I eventually got some food brought in for the teams in both locations.

Thursday (31 AUG) we spent most of the day moving Kris and his family into their new home. The roads were dry, so it was a simple matter of loading up the U-Haul twice. Their new home is less than a mile away, but on our side of Barker Cypress (less chance of flooding).

Friday (1 SEP) I spent the morning getting some things back in place around the house until my brother, John, came in from Dallas. When he got here, we got together with the ward team to work on removing wood flooring from a flooded house. That took the rest of the night, and we only finished the main living room (150 of 1,200 sq. ft.).

Saturday (2 SEP) I dropped my brother off with the team that would spend the next six hours working on that wood floor. I had arranged to attend a differential fluid changing party hosted by a shop owner on Clay Road just inside the beltway. They were changing fluids for free, so it was a good opportunity to make sure everything was in working order and to make sure I got the water out of my gears. They also gave me some pointers which made installing the wiring harness for my trailer hitch dead simple. I got done with that around 1pm and went to act as a coordinator for the team that was finishing the wood floor removal and the other team that had begun gutting another house.

Sunday (3 SEP) began with gutting a few houses in our neighborhood. We had abbreviated church meetings at 1pm, during which a new Bishopric was called and we were notified that we would be meeting back in the West Road building for the foreseeable future.

Monday (4 SEP) was Labor Day, and our crew chief had advised that those of us who had been working for several days straight take some time off to recover. I heeded that advice and played with the kids. I brought out the slot car track and we raced. In the evening, Grandpa invited us over to go fishing. He had just bought three new kids' fishing poles (Star Wars themed, of course). Luke caught a baby brim and a baby bass. I caught a turtle, and Grandpa caught a brim and another turtle. We let the first turtle go, but decided to relocate the second one since the turtles have a tendency to kill the ducklings. Cole became an expert caster, sometimes throwing his practice weight 25 feet from the shore.

Tuesday (5 SEP) meant a return to work; the Chevron offices had been closed since the storm. It appears the tunnels had flooded: the demo work had already been done, and there were dozens of fans and dehumidifiers running.

Friday, August 5, 2016

Sunrise Alarm Clock v2

In an earlier post, I shared my code for shedding light inside my closet early in the morning. If my wife and I got up at the same time, I'd use this as an ambient light alarm clock, which is a gentler way to wake up. However, since I get up well before my wife, I use it instead to gently light my closet while I'm doing the early morning ninja routine.

Anyway, another option that became available a while back is the SenseHat. It has the same 8x8 grid of RGB LEDs, along with several environmental sensors. I decided to see how hard it would be to switch out the UnicornHat for the SenseHat. Turns out, not very hard.

The only things that really have to change are the import and the object declaration; if you declare the object with the name UH, as in the other script, nothing else needs to change.
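In code, the swap looks something like this (a minimal sketch; the orange test pixel at the end is just for illustration):

# Old UnicornHat version:
#   import unicornhat as UH
# SenseHat version -- same object name, so the rest of the script is untouched:
from sense_hat import SenseHat
UH = SenseHat()

# unicornhat needs UH.show() to push pixels to the LEDs; the SenseHat draws
# immediately and has no show() method, so give the instance a harmless
# stand-in in case the old script calls it:
UH.show = lambda: None

UH.clear()
UH.set_pixel(0, 0, 255, 140, 0)  # same set_pixel signature in both libraries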
One nice thing about using the SenseHAT instead of the simpler UnicornHat is that the SenseHAT provides environmental sensors which could be used for other home automation projects. For example, you could string a humidity sensor over into the bathroom, and whenever its reading is higher than what the SenseHAT detects (plus some margin), trigger a relay to turn on the bathroom vent fan; see the sketch below. You could also use the humidity and temperature sensors to trigger an IFTTT recipe to provide an extra input to your Nest-controlled air conditioner. It's possible you could even use the accelerometer to plot seismic activity in the area (not sure about this one, but it should be theoretically possible).
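Here's a sketch of the vent fan idea (the remote bathroom sensor is hypothetical; the SenseHat itself only gives you the on-board reading):

from sense_hat import SenseHat

sense = SenseHat()
MARGIN = 5.0  # percentage points above ambient before the fan should trip

def should_vent(bathroom_humidity):
    """Compare a remote bathroom reading against the SenseHat's ambient one."""
    ambient = sense.get_humidity()  # on-board relative humidity, in %
    return bathroom_humidity > ambient + MARGIN

# e.g. poll the bathroom sensor however it reports in, then drive the relay:
#   if should_vent(read_bathroom_sensor()): turn_fan_on()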

Wednesday, March 2, 2016

Rate, Volume, Utilization, and Parsecs

But wait, the parsec is not a unit of time, but a unit of distance! Wait, what? All arguments aside about how the Millennium Falcon could make the Kessel run in a shorter distance through enormous gravitational shears, knowing your unit is extremely important.
I work in network monitoring, and one of the main reports my tools provide measures how much an interface is used. Because the tool is better than poo, it presents the utilization in several different units. First, let's review the units. Standard SI prefixes apply to each of them when talking about larger multiples of the base unit:
  • Bytes - measures the total number of octets that were transmitted (or received depending on p.o.v.)
  • Bits per second - measures the number of 0's and 1's that were transmitted (or received depending on p.o.v.) in a single second.
  • Percent utilization (%) - measures how much of the interface's capacity was used during a period of time (transmitting or receiving).
Let's break it down.

Bytes

This one is pretty simple and is referred to as VOLUME. It's simply the total number of Bytes transmitted (or received) during the measurement window. An SNMP polling station would poll the octet counter at a regular interval. Every time the octet counter is polled, the delta between the previous poll results and the current poll results represents the total number of Bytes during the measurement interval.
V = B1 - B0
Polling too frequently will result in small values. Whenever rollups happen, the individual data points should be summed (integrate over the rollup interval). As long as rollups are done that way, the poll rate is less consequential.
Rollover is accounted for by assuming that a reading lower than the previous one means the counter wrapped: the new reading (counted from 0) is added to whatever remained between the previous reading and the counter's maximum.
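The delta logic looks something like this (a sketch, assuming a 32-bit counter and at most one rollover per poll interval):

def counter_delta(prev, curr, counter_max=2**32):
    """Bytes transferred between two polls of an octet counter."""
    if curr >= prev:
        return curr - prev
    # The counter wrapped: count what remained up to the limit,
    # plus the new reading (counted from 0).
    return (counter_max - prev) + curr

print(counter_delta(4294967000, 500))  # -> 796 (wrapped once)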

Layman's example

This is similar to tracking how many miles a car travels. You simply take a reading of the odometer before beginning a trip and another at the end of the trip. The difference is the total miles the trip entailed. You could take readings more often. You'd just need to add up all your measurements at the end of the trip to get the total for the trip.

Bits per second

Bits per second is a simple count measured over a unit of time, making it a RATE. It counts the number of bits that went through the interface, then normalizes the count over a standard unit of time, the second. It is calculated like this:
R = (Δ bits) / (Δ time)
That is, you take the total number of bits and divide it by the total time of the measurement. This is usually done through SNMP by looking at the octet counters. The NMS will poll the sysUpTime and the octet counters at a certain time (T0 and B0). It will then poll the sysUpTime and octet counters at some other time in the future (T1 and B1). The RATE is calculated by dividing the difference between these two measurements (multiplying the octet delta by 8 to convert Bytes to bits, since 8 bits = 1 Byte):
R = 8(B1 - B0) / (T1 - T0)
The resulting unit is bits/second and represents an average of the number of bits transmitted per second over the measurement interval (T1-T0). When doing the rollup, average is the most common descriptor. In addition, min, max, standard deviation, variance, and 50th, 75th, and 90th percentiles would be useful.
If you're already gathering VOLUME, you'll notice that B1 - B0 used in the RATE calculation comes from the volume calculation. That's on purpose and is why it is said that RATE is derived from the VOLUME measurement. In fact, if the polling interval is fairly regular, the rate can be said to be approximately linearly proportional to the volume.

Layman's example

This is not really any different than measuring the speed of your car while on a trip. You take a reading of the odometer and the clock at the beginning of the trip and again at the end of the trip. The difference in miles, divided by the total time of the trip (in hours in this case) will give you an average speed in mph. You could increase the resolution of your measurements by taking a reading and performing the calculation every 5 minutes. This would give you a data point describing the average speed for every 5 minutes of your trip.

Percent Utilization

Percent UTILIZATION measures how much capacity is used and is reported as a percentage of the total capacity available. This is calculated by dividing the current RATE by the total rate the interface is capable of. Alternatively, it could be calculated by dividing the VOLUME by the total volume capability of the interface. The latter requires a bit more derivation, so most use the former.
This metric requires knowledge of the interface's capabilities. This is usually obtained by polling the bandwidth statement (ifSpeed) of the interface (1.3.6.1.2.1.2.2.1.5), which is in bits per second (bps). Once obtained, the percent UTILIZATION can be calculated like this:
U = 8(B1 - B0) / (T1 - T0) / ifSpeed * 100
You may notice that a part of this formula looks the same as the RATE calculation. It is. Simplifying the formulas:
U = 8(B1 - B0) / (T1 - T0) / ifSpeed * 100
R = 8(B1 - B0) / (T1 - T0)
U = R / ifSpeed * 100
Since the UTILIZATION formula involves dividing a rate (in bps) by a speed (in bps), the result is unitless. This means that the unit can be thought of as % (percentage). Rollups of UTILIZATION should be treated the same way as rollups for RATE. You should also notice that the percent utilization should be linearly proportional to the rate, given a constant bandwidth capacity of the interface.
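To make the relationships concrete, here are all three measurements in Python (a sketch; the sample numbers assume two polls taken 300 seconds apart on a 100 Mbps interface):

def volume_bytes(b0, b1):
    """V = B1 - B0, in Bytes (rollover handling omitted for brevity)."""
    return b1 - b0

def rate_bps(b0, b1, t0, t1):
    """R = 8(B1 - B0) / (T1 - T0), in bits per second."""
    return 8 * (b1 - b0) / (t1 - t0)

def utilization_pct(b0, b1, t0, t1, if_speed):
    """U = R / ifSpeed * 100, in percent."""
    return rate_bps(b0, b1, t0, t1) / if_speed * 100

b0, b1, t0, t1 = 1_000_000, 76_000_000, 0, 300
print(volume_bytes(b0, b1))                          # 75000000 Bytes
print(rate_bps(b0, b1, t0, t1))                      # 2000000.0 bps
print(utilization_pct(b0, b1, t0, t1, 100_000_000))  # 2.0 %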

Layman's example

This calculation is similar to calculating how close a driver is to the speed limit. By dividing the current speed (derived using the formulas above for speed) by the total allowable speed, you can calculate what percentage of the limit the car is currently traveling. When driving, moving at 100% of the speed limit is actually good. You are actually making the most of the available resource. The only time 100% utilization is a problem is when you need to do something else with that speed (i.e. other cars on the road not travelling at the same speed). The same actually holds true for networking. Utilization of 100% is not bad until you need some percentage of those resources for another task.

Wednesday, February 3, 2016

SharePoint Kanban

I was recently asked to help reproduce a Kanban board I had built in SharePoint for one of my projects. Having built it only once previously, I learned a few things and the resulting reproduction had a few improvements over the original.
First, I start with a custom list. I never use the templates in SharePoint because they are constantly trying to make things more complicated in an effort to make things simpler.

The Easy Stuff


  • Rename the [Title] field to 'Task' or something more representative of the items you'll have on your Kanban board.
  • Create a [Person or Group] field to contain the person responsible for the item. It's important that this not be a simple text field. I'll explain why when we build the views.
  • Create any other metadata columns that you want for your items (notes, description, priority, estimated effort, etc.)
  • Create a [Date and Time] field to contain the due date. Call it [Due Date] if you want to use the formulas here without modification.
  • Create a [Date and Time] field for every phase. For example, if my phases were:
    Deploy Launchpad, Igniter Primed, Mount Rocket, Connect Detonator, Remove Safety Cap, Detonate
    Then I would create the following fields as [Date and Time] fields:
    [Launchpad], [Igniter], [Mount], [Connect], [Safety], [Detonate].
    Essentially, each field will contain the date and time that that phase was completed. If the date is blank, that stage hasn't been completed. If there is a value in the field, then that phase has been completed (and was completed at that date/time). 
  • Go to List Settings >> Advanced Settings and disable attachments for the list (this was a dumb feature for what we're using the list for). 

The Next Phase Calculation

Create a [Calculated] column called "Next Phase". This column should evaluate the phase fields to determine which phase is currently being worked on. Continuing with my example, if I had already deployed the launchpad, primed the igniter, and mounted the rocket, the "Next Phase" would be to connect the detonator.

This is done by evaluating the last phase to see if it is complete. If the last phase has a date/time value, it is completed and the next stage is "Done". If the last phase does not have a value, we check each earlier phase in turn to figure out which one is next. Here's the formula (using my example field names; you should use yours):

=IF(NOT(ISBLANK([Detonate])), "Done",
IF(NOT(ISBLANK([Safety])), "Detonate",
IF(NOT(ISBLANK([Connect])), "Safety",
IF(NOT(ISBLANK([Mount])), "Connect",
IF(NOT(ISBLANK([Igniter])), "Mount",
IF(NOT(ISBLANK([Launchpad])),"Igniter",
"Launchpad"))))))

Technically, the end of each line above can say anything you want. Since it would be nice to sort the [Next Phase] column to show tasks in order, the resulting strings should sort sensibly. Unfortunately, alphabetical order won't work. We can easily fix this by prefixing each resulting string with a number to indicate the order, like this:

=IF(NOT(ISBLANK([Detonate])), "6 Done",
IF(NOT(ISBLANK([Safety])), "5 Detonate",
IF(NOT(ISBLANK([Connect])), "4 Safety",
IF(NOT(ISBLANK([Mount])), "3 Connect",
IF(NOT(ISBLANK([Igniter])), "2 Mount",
IF(NOT(ISBLANK([Launchpad])),"1 Igniter",
"0 Launchpad"))))))

Each IF(NOT(ISBLANK(...))) line logically means: if this phase has a value but no later phase does, then the next phase is the one that follows it.

The Status Column

This column is designed to figure out the status of each item as compared to the due date. Five possible states exist:
  1. If there's no [Due Date] (i.e. [Due Date] is blank), the status is "No Due Date".
  2. If the item has not been completed (i.e. the last phase field is blank) and the item is not yet due (i.e. the [Due Date] is greater than today), its status would be "On Time for Completion".
  3. If the item has not been completed (i.e. the last phase field is blank) and the item is due (i.e. the [Due Date] is less than today), its status would be "Overdue".
  4. If the item has been completed (i.e. the last phase field is not blank) and the item was completed before the due date (i.e. the last phase field is less than the [Due Date]), its status would be "Completed On Time".
  5. If the item has been completed (i.e. the last phase field is not blank) and the item was completed after the due date (i.e. the last phase field is greater than the [Due Date]), its status would be "Completed Late".
Here's the formula:

=IF(ISBLANK([Due Date]),
      "No Due Date",
      IF(ISBLANK([Detonate]),
             IF([Due Date]>=Now(),"On Time for Completion","Overdue"),
             IF([Detonate]<=[Due Date],"Completed on Time","Completed Late")
      )
)

The Views

I recommend 4 types of views:
  • Datasheet View - This should be a datasheet view of all items. Usually sorted by [Due Date].
  • All Items - This is a standard view version of the Datasheet View. You can alternatively add groupings based on [Next Phase].
  • My Items - This is either a standard or datasheet view (your preference), also sorted by [Due Date], but filtered by the person the item is assigned to (remember above where I said we'd use this later?). SharePoint has a session variable called [Me], which contains the username of the current user. By adding a filter where the [Assignee] field is equal to [Me], we create a view that only shows the items assigned to the currently logged-in user. This means that anyone on the team can log in, look at this view, and see only their items. This won't work if you made the assignee field a simple text string; it needs to be a [Person or Group] field.
  • Phase specific views - these views aren't required but are often requested. You basically build a copy of the Datasheet View or the All Items view but filter it where [Next Phase] field equals a particular phase. You would repeat this for every phase. I find this tedious when those who want this type of breakdown could just look at the Datasheet or All Items views and just filter for a particular value in the [Next Phase] field. However, some people can't handle that level of sophistication, so statically defining views is the only way to please them.

Thursday, June 11, 2015

Sunrise Alarm Clock

I know it's been a while since I've posted anything, and much longer since I've posted anything about the RaspberryPi. However, things worked out over the last few weeks such that I was able to finish up a couple of projects involving different Pis for different purposes. Today's is the Unicorn Sunrise.

I have a RaspberryPi powering my NAS. I should say, a RaspberryPi is my NAS, since it's just an RPi with a USB HDD attached and Samba running. My NAS, along with most of my equipment, is on a high shelf in my walk-in closet, right next to an air conditioning vent. It stays cool and it's out of the way. Since I get up for work very early in the morning, I take on the role of ninja, trying to make my way out of the house without waking anyone up. I often use the light in the closet, but I'd rather not. It's bright, and if I forget to close the closet door, it lights up the whole room.

A couple months ago, I bought a Unicorn Hat. They're very fun, and I bought it because it was cheap, not because I had anything in mind. I had fun playing with the various demo scripts, then got to thinking about how I could actually use this thing. I thought about building something that could replace the LED based alarm clock for the kids, but the brightness of the thing would make it like turning on flood lights in their room. That wouldn't do. That's when I had the idea of simulating a sunrise in my closet for when I'm getting ready for work. I could gradually increase the brightness by turning on one LED at a time. Then I had the idea of actually cycling each LED through the colors between 0 and the color I picked for sunrise (Let the sunshine in!). That made for a longer cycle: 8 rows of 8 LEDs, each stepping through 100 different brightness percentage levels. That didn't end up being a problem though.

I ended up putting a static bag over the top to tone down the brightness even further. Anyway, after following the normal instructions for installing the Unicorn Hat, I wrote this script and set a cronjob to kick it off at 5am.
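The script boils down to something like this (a simplified sketch rather than the exact script; the sunrise color and timings are placeholders to tune to taste):

import time
import unicornhat as unicorn

SUNRISE = (255, 140, 0)  # placeholder sunrise color
unicorn.brightness(0.3)  # keep it gentle; it's lighting a closet, not a stage

# Light one LED at a time, stepping each from off up to the sunrise color.
for y in range(8):
    for x in range(8):
        for pct in range(0, 101, 5):
            r, g, b = (int(c * pct / 100) for c in SUNRISE)
            unicorn.set_pixel(x, y, r, g, b)
            unicorn.show()
            time.sleep(0.05)

The cron entry is the usual sort of thing: 0 5 * * * python /home/pi/sunrise.py (path assumed).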


I just got it put up there, so we'll see how it goes tomorrow morning. I already have an idea for the next version: cycling from midnight blue to sunrise.

Tuesday, June 9, 2015

NQBackup

UPDATE 6/9/2015: Version 1.7 now released. This update adds standalone support. Since CA is including newer versions of MySQL in their products, DBToolv3 is no longer going to work. This change allows you to specify to use MySQLDump instead of DBToolv3. Essentially, you unremark line 15 and remove/remark line 14. If I get enthusiastic about it, I may update the script to allow a switch from the command line to specify which method to use. I'm just not there yet.
UPDATE 2/10/15: Version 1.6 now released. This update changes the way harvesters and DSAs are backed up, by only backing up the ReaperArchive, ReaperArchive15, and HarvesterArchive directories to a single directory (no redundant rolling backups). It only backs up files that have the archive bit set, so before running it the first time, set the archive bit for all the files in those directories. I also fixed the date naming method so it's YYYYMMDD instead of YYYYDDMM, and added timestamping to the log so you know how long the file backups take vs. the database backups.
UPDATE 2/27/14: Version 1.5 now released. This version doesn't have too many changes. I just added the lines below that allow the NFA mess of data files to be backed up along with everything else. This one script can still be used on any product; however, when running on a Harvester or DSA, extra commands back up the data files.
The syntax for running the tool hasn't changed since 1.4 (though 1.4 introduced some major changes), so you should be able to drop the script in place without changing any scheduled tasks.

nqbackup.bat <dbname> <num_backups_to_keep>

Remember, if you need a reminder how to run the tool, just run it without any arguments (or just double click it from Windows Explorer).


Tuesday, March 17, 2015

Custom Formula: IPFromDEC (IP address from decimal)

See part 1 of this thread.
See part 2 of this thread.
Download
Install Instructions

UPDATE: I cleaned up the code. It may be a little less intuitive, but the code itself is simpler. I added a new boolean checker: IPIsValidIP(), which returns true if the IP address is a valid dotted decimal IP address.


UPDATE: I added a new formula: IPGetCIDRList(). This function takes two IP addresses and defines all the address blocks between them, inclusive. For example:

IPGetCIDRList("10.0.0.0","10.0.0.5")="10.0.0.0/30,10.0.0.4/31"

This is handy if you want to get all the address blocks in a range. The simpler the range, the shorter the list of summarizations; the more complex the range, the longer the list. For example:

IPGetCIDRList("10.1.2.3","192.168.35.7")="10.1.2.3/32, 10.1.2.4/30, 10.1.2.8/29, 10.1.2.16/28, 10.1.2.32/27, 10.1.2.64/26, 10.1.2.128/25, 10.1.3.0/24, 10.1.4.0/22, 10.1.8.0/21, 10.1.16.0/20, 10.1.32.0/19, 10.1.64.0/18, 10.1.128.0/17, 10.2.0.0/15, 10.4.0.0/14, 10.8.0.0/13, 10.16.0.0/12, 10.32.0.0/11, 10.64.0.0/10, 10.128.0.0/9, 11.0.0.0/8, 12.0.0.0/6, 16.0.0.0/4, 32.0.0.0/3, 64.0.0.0/2, 128.0.0.0/2, 192.0.0.0/9, 192.128.0.0/11, 192.160.0.0/13, 192.168.0.0/19, 192.168.32.0/23, 192.168.34.0/24, 192.168.35.0/29"

Download using the same link as below. If you're upgrading, close Excel and copy the downloaded XLAM file to the same location as your existing IPConversion.xlam file. If you don't know where that is, look in your add ins list (File>>Options>>Add Ins, look in the location column).


UPDATE: I compiled and have now published my IPConversion.xlam.  How to install this add-in.
How to use:
Each function below is shown with its results for four sample inputs, in this order: 192.168.15.34, 10.20.30.40, 335.20.30.40, and 142.20.30.40.
  • IP2DEC - Converts from dotted decimal to decimal: 3232239394; 169090600; Invalid IP Address; 2383683112
  • IPFROMDEC - Converts from decimal to dotted decimal: 192.168.15.34; 10.20.30.40; #VALUE!; 142.20.30.40
  • IPNetwork - Returns the network number given an IP address and mask (mask: 24): 192.168.15.0; 10.20.30.0; Invalid IP Address; 142.20.30.0
  • IPIsInSubnet - Determines if the given IP address is within the given subnet (subnet: 192.168.0.0, mask: 16): TRUE; FALSE; FALSE; FALSE
  • IPGetOctet - Returns the octet specified, 1-4 (octet: 3): 15; 30; Invalid IP Address; 30
  • IPGetCIDRList - Returns a comma separated list of CIDR address blocks between the given address and a second one (here 192.168.255.255):
    192.168.15.34: 192.168.15.34/31, 192.168.15.36/30, 192.168.15.40/29, 192.168.15.48/28, 192.168.15.64/26, 192.168.15.128/25, 192.168.16.0/20, 192.168.32.0/19, 192.168.64.0/18, 192.168.128.0/17
    10.20.30.40: 10.20.30.40/29, 10.20.30.48/28, 10.20.30.64/26, 10.20.30.128/25, 10.20.31.0/24, 10.20.32.0/19, 10.20.64.0/18, 10.20.128.0/17, 10.21.0.0/16, 10.22.0.0/15, 10.24.0.0/13, 10.32.0.0/11, 10.64.0.0/10, 10.128.0.0/9, 11.0.0.0/8, 12.0.0.0/6, 16.0.0.0/4, 32.0.0.0/3, 64.0.0.0/2, 128.0.0.0/2, 192.0.0.0/9, 192.128.0.0/11, 192.160.0.0/13, 192.168.0.0/16
    142.20.30.40: 142.20.30.40/29, 142.20.30.48/28, 142.20.30.64/26, 142.20.30.128/25, 142.20.31.0/24, 142.20.32.0/19, 142.20.64.0/18, 142.20.128.0/17, 142.21.0.0/16, 142.22.0.0/15, 142.24.0.0/13, 142.32.0.0/11, 142.64.0.0/10, 142.128.0.0/9, 143.0.0.0/8, 144.0.0.0/4, 160.0.0.0/3, 192.0.0.0/9, 192.128.0.0/11, 192.160.0.0/13, 192.168.0.0/16
  • IPIsValidIP - Returns true if the specified IP address is a valid IP address: TRUE; TRUE; FALSE; TRUE

In a previous post, I showed how to convert an IP address from dotted decimal notation to a decimal number. Well, I found myself in a situation where I needed to do the reverse. There are two ways of doing this.

The first involves using a big formula to chop the decimal value into its equivalent dotted decimal counterparts. The formula goes like this (it references cell B2, which should contain the decimal form of an IP address):
=ROUNDDOWN(B2/2^24,0)&"."&ROUNDDOWN(MOD(B2,2^24)/2^16,0)&"."&ROUNDDOWN(MOD(MOD(B2,2^24),2^16)/2^8,0)&"."&MOD(MOD(MOD(B2,2^24),2^16),2^8)

While this is nice, it would be even nicer if I could just do something like this:
=IPFromDEC(B2)

This is the second method. If you've already created your IP2DEC.xlam file and have it enabled as an add-in, you're ready to go: you can add the custom formula that breaks the IP address back out to the same add-in (if you haven't, click here to see how).

Open a blank workbook in Excel.  Press Alt+F11 or click 'Visual Basic' on the Developer tab in the ribbon bar.  If the project explorer isn't visible, show it by pressing Ctrl+R or by choosing View>>Project Explorer.  You should see two projects in there, one for the new blank workbook that opened and one for the IP2DEC add-in.  You should see Module1 under the IP2DEC add-in (if you don't see this, you didn't do the steps in the previous post).  Double click it.  You should now see the IP2DEC public function code.  Now all you need to do is append some code to the bottom of the module that will define the function for converting back to dotted decimal format.
Public Function IPFROMDEC(ipaddress) As String
    ' Anything larger than 2^32 - 1 can't be an IPv4 address.
    If ipaddress + 1 - 1 > 4294967295# Then GoTo toobig
    Dim firstoctet As String, secondoctet As String, thirdoctet As String, fourthoctet As String
    ' Peel off each octet by subtracting the higher-order octets and dividing.
    firstoctet = Int(ipaddress / (2 ^ 24))
    secondoctet = ipaddress - (firstoctet * 2 ^ 24)
    secondoctet = Int(secondoctet / (2 ^ 16))
    thirdoctet = ipaddress - (firstoctet * 2 ^ 24) - (secondoctet * 2 ^ 16)
    thirdoctet = Int(thirdoctet / (2 ^ 8))
    fourthoctet = ipaddress - (firstoctet * 2 ^ 24) - (secondoctet * 2 ^ 16) - (thirdoctet * 2 ^ 8)
    fourthoctet = Int(fourthoctet)
    ' If any octet came out above 255, the input wasn't a valid address.
    Select Case 255
        Case Is < firstoctet
            GoTo toobig
        Case Is < secondoctet
            GoTo toobig
        Case Is < thirdoctet
            GoTo toobig
        Case Is < fourthoctet
            GoTo toobig
    End Select
    IPFROMDEC = firstoctet & "." & secondoctet & "." & thirdoctet & "." & fourthoctet
    Exit Function
toobig:
    IPFROMDEC = "Invalid IP Address"
End Function
Hit the save button and go try it out.  Put an IP address in one cell and use IP2DEC() to convert it to decimal.  Then use IPFROMDEC() to convert it back.

You might think this an exercise in futility; however, it can come in handy when trying to parse out IP address blocks given CIDR notation. For example, if you wanted to calculate the starting and ending IP address for the 192.168.1.0/24 block of IP addresses, you'd have the following:

With the network address (192.168.1.0) in cell A1 and the prefix length (24) in cell B1, put this formula in A2:

=IPFROMDEC(IP2DEC(A1)+2^(32-B1)-1)

It evaluates to 192.168.1.255, the last address in the block. This can also be very handy when trying to determine whether or not a given IP address is within a given subnet.

Thursday, February 12, 2015

Extending Windows URL handling for SSH, RDP, and SCP

Sometimes, I've thought how cool it would be if I could design a web page with links like rdp://someserver and have it open an RDP session to someserver. It seems like it would be a simple thing. Just like http:// and ftp://, rdp:// specifies what I want to do, and someserver specifies the server I want to do it to. Turns out it is.

I always thought it was a little dumb that this wasn't built into Windows, but I couldn't figure out how to make it work. I was excited when I found this article with the associated scripts to configure it. Immediately, I thought about putting all three on all my computers (scp, rdp, ssh). Running two batch files on all my computers seemed too bulky, so I combined them all into one:
Save this as a .bat file and get copies of WinSCP and PuTTY in the same directory. Run the batch file with elevated privileges (Right Click >> Run As Administrator) and you should see three success messages.
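For the curious, the registry layout those batch files create can be sketched in Python with winreg (the executable paths here are assumptions, and it's worth verifying that your PuTTY/WinSCP builds accept the full URL as an argument):

import winreg

def register_url_handler(scheme, command):
    """Register scheme:// so Windows launches command with the URL as %1."""
    key = winreg.CreateKey(winreg.HKEY_CLASSES_ROOT, scheme)
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, "URL:%s Protocol" % scheme)
    winreg.SetValueEx(key, "URL Protocol", 0, winreg.REG_SZ, "")
    cmd = winreg.CreateKey(key, r"shell\open\command")
    winreg.SetValueEx(cmd, "", 0, winreg.REG_SZ, command)

# Run this elevated, just like the batch file. mstsc may need the rdp://
# prefix stripped by a small wrapper -- verify on your system.
register_url_handler("ssh", r'"C:\tools\putty.exe" "%1"')
register_url_handler("scp", r'"C:\tools\winscp.exe" "%1"')
register_url_handler("rdp", r'mstsc.exe "%1"')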
You should then be able to design pages where the following links actually do something on your local system:
rdp://myfavserver
ssh://mylinuxserver
scp://mylinuxserver

Afterword:
I've been thinking for a while now that I need to use XAMPP portable and just build my own administrative GUI for the NetQoS systems. Theoretically, it would allow you to view the health check reports that I've built (which I still need to sanitize and post) and perhaps a page that automatically renders an architecture diagram, complete with rdp:// links to all the Windows servers and ssh:// links to all the Linux servers. I would also include all my browser view based tools for NPC. It's just an unhatched idea in my brain right now. Every time I think about it, I think how awesome it could be and how much work it would end up being.

Wednesday, January 28, 2015

SNMPGet for Windows and Community String Discovery

I recently needed to test SNMP connectivity from a Windows server to a device to prove that there was a problem outside my system causing SNMP polling to fail. Linux has Net-SNMP, which comes with a command line snmpget utility. Windows has no such utility. A quick search on the internet helped with that. I found a utility from SNMPSoft, but of course I had to build a wrapper for it.



The wrapper is pretty simple since my objective is to do a quick check of SNMP connectivity. The version and the OID polled are hard coded to v2c and sysObjectID.

This could be used to discover which community strings work on a system by using for loops on the command line. For example, if you wanted to test a bunch of community strings:

for %A in (public string1 string2 string3 string4 etc) do @snmpdiscover hostname %A

This will output something like this:

Host:localhost Community:public
1.3.6.1.4.1.311.1.1.3.1.1

Host:localhost Community:string1
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:string2
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:string3
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:string4
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:etc
%Failed to get value of SNMP variable. Timeout.

In this case, the first community string (public) worked, while the others didn't.

Friday, December 12, 2014

Enabling or Disabling the Flow Cloner in RA9.0

I know, 9.0 is an old version, but I had a customer who is transitioning and needed to temporarily enable and disable cloning of flows from the old harvesters to the new harvesters. Here's the resulting script. The first argument should be Y or N depending on whether you want to enable (Y) or disable (N) the flow cloner. The second argument is optional and is the IP address you want to clone to. If you specify the IP address, the flowclonedef.ini file is created. If you don't specify it, no changes are made.

Monday, November 3, 2014

Custom Device Polling in NetVoyant

This is a presentation I gave years ago but the recording on the community has been lost. So, I recorded it again and have posted it here.

Tuesday, August 5, 2014

The dangers of a guest wifi network


The site is associated with Walt Mossberg, so they usually have pretty cool stuff. However, I couldn't agree with this article. Before reading my response, you really need to read the article.

Essentially, the article makes the argument that getting to the internet from your phone via WiFi is better than via a cellular data connection, and therefore people should enable the guest WiFi network in their homes because it's pretty much safe.


Conceded: Enabling the guest WiFi in most residential routers does not pose any additional threat to the internal, private WiFi and local area network.

The big issue with allowing someone else to use your WiFi is that whatever they do with it is your responsibility. Your home internet router uses a very good, very legal technology called IP address overload (aka NAT) to allow multiple devices in your home to access the internet while you only pay for access for one device (your router). Your router acts as a proxy of sorts to the internet for all devices in your home and on your wifi. To anyone on the internet, when your phone accesses a website, it looks like your router is accessing that website. The router's NAT technology takes care of accessing the website for your phone and ferrying the data back to your phone. This is great because it allows you to pretty much have as many devices as you want on your home network, and they all have access to the internet, via your router.

Your router is masking the internal machinations of your home network. This means that it's practically impossible to determine which device on your home network your router is proxying. This is also great because it builds a barrier between the outside world (the internet) and your inside network, making it harder for malicious users to gain access to your inside devices. The best they could do would be to try to communicate with your router, which is usually pretty well protected against malicious attacks.

However, if you allow anyone to get onto your WiFi, their traffic is also proxied by your home router. So, if I come to your front curb and jump on your WiFi and download a movie and the MPAA/FBI happened to observe my download, they would not be able to determine the "inside" device that initiated the download. To them, it just looks like your router is downloading a movie. The owner of the internet access (you) could go to jail for piracy. The argument, "It wasn't me; it was someone who hacked me" doesn't fly in court.  Since authorities on the internet see one device doing everything, there is no way to determine whether the activity is coming from your guest wifi or your own computer. So, they hold you (the owner of the one device they can prove is doing something: your router) responsible.

Places that have guest WiFi networks have very powerful systems in place and/or legal agreements that you agree to before being allowed access that prevent you from doing anything malicious with their internet and which hold them blameless for any malicious activity you may do with their free WiFi.

If you have those mechanisms in place, feel free to open up your guest WiFi. I'm a network tools guy and I don't even have those kind of tools in place. I don't recommend that you do, despite the benefit it might give to someone walking by.

Wednesday, July 30, 2014

Raspberry Pi News

I know I'm late to the show with my own blog post about the new happenings issuing forth from the Raspberry Pi Foundation, but I figured better late than never.

A few new developments have made news recently and bode well for hobbyists and inventors alike. The first (chronologically) was the release of the compute module. This is a raspberry pi just like any other, except that the whole thing is condensed onto a little board that looks just like a laptop memory module.
The cool part about this is that people can now design their own main board and slip in this tiny module to get all the features of the Raspberry Pi. This means that the main board can be designed to fit just about any need out there, from small point-and-shoot cameras to large supercomputers. The foundation came out with an example main board:
But this is just an example and a board like this could be designed to meet the inventor's needs, changing the number of pins, ports, connectors, etc.

The second bit of gooey goodness is the release of the Raspberry Pi Model B+. This is the next evolutionary (not revolutionary) step in the progression of this little platform.
This new model is pretty much backward compatible with the Model B, but adds a couple of really useful features:

  • More GPIO pins - 40 pins instead of 26. (This also allows old IDE hard drive ribbon cables to be used!)
  • More USB ports - 4 ports instead of 2.
  • Micro SD - the SD is smaller, has a secure latch, and doesn't stick out anymore.
  • Power redesign - the B+ uses less power due to better technology.
  • Better audio - this should be good for my PiTunes.
  • Better form factor - all the onboard ports now come out of only 2 sides instead of 4. This should make stuffing the Pi into a small corner a bit easier. Also, the mounting holes are uniform and there are 4 of them, which should make building cases a bit easier and helped pave the way for HATs (more on this below).


The third bit of really cool news is the release of specifications around HATs (Hardware Attached on Top). To break it down very simply, this allows add-on boards to tell the Pi that they're connected and give specific information about themselves to the Pi. This could make connecting an add-on board very simple, since instructions could be included on the add-on board itself that help set it up (install software, configure pins, set up shortcuts on the desktop, etc.). I haven't found the official blog post announcing it, but James Adams spoke about it in a recent interview. Here is what they're theoretically supposed to look like. I'm guessing Adafruit will be releasing a HAT starter board soon, which would at least include the mounting hardware (since the holes should line up with the holes on the B+) and maybe the EEPROM and other components defined by the standard.

In case that wasn't enough, I've seen two articles recently that I've kept in my browser tabs so that I can refer to them the next time I purchase a Pi (usually every other month). The first is an update about the method used by many to turn the Pi into a video game emulator. This used to be a really complicated process that took a ton of time, but thanks to the guys over at petRockBlog and Emulation Station, this process is greatly improved. You can go straight to the source, or you can check out this article, which gives instructions for the uninitiated (it's spelled out pretty clearly). I've got a B+ on order right now, so as soon as it comes in, this will be one of the first things I do with it.

And if that's not enough, here's an article about the first 5 things to do after powering on your Pi. While installing Minecraft and overclocking aren't required, they are mentioned as the most popular things to do.

Monday, July 14, 2014

Creating a Security Camera Page for the iPad

I may have posted before about the Foscam cameras I have around my house. I have one inside the house and three outside, covering all the doors. There are a myriad of apps out there that allow you to view live streams from Foscam cameras, however, most of them are either designed for iPhone (thus for iPad you have to use pixel doubling, which sucks) and/or they have a bunch of chrome that I'd rather not waste screen real estate showing.

A couple years ago I bought one of the first generation iPads. It was great, but given the OS upgrades that it's missing out on and the low resources that most modern apps blow right past, it's become less and less used. I decided to get some more use out of it by building a small web page with custom controls to stream each of my cameras' feeds to the iPad. The thought was to mount the iPad near the front door so that I could do a quick check of all the cameras while walking to the front door to answer a caller (since one of the cameras looks at the front door, I'd also get a quick look at the caller without looking through the peephole). After looking around at some of the DIY options, I decided to go with a Luxone iPad Wall Mount since they had one specifically built for the 1st generation iPad. It was more expensive than some of the DIY options, but the finished product looks cleaner (IMO). The place where I had decided to mount the iPad had a light switch right below it. A quick test with the multimeter showed that power is run to the switch instead of the light, so I could wire in an iPad charger which would draw power regardless of the state of the light switch. Fast forward a couple of hours and I had made some room in the circuit box for the iPad charger, soldered on some leads which were wired into the switch's hot wires and ran the iPad cable up and out of the switch to the wall mount. The end result is that the iPad sits in a landscape position and always has power. A quick change of the config so that it never auto-locks and the iPad stays on 24/7.

As for the page, I had several web servers around where I could host it. It didn't take much to design the page, but I wanted some extra fun. I decided that tapping on a camera feed should blow the feed up to the full size of the screen. Tapping again would shrink it down to its original size. This was easily accomplished with a bit of CSS. Essentially, there are five things in the CSS:

  1. The standard classes that setup the body
  2. An option that sets the initial size of each stream and specifies the timing function for CSS animations.
  3. 4 classes that determine where the streams sit
  4. 2 classes that are tied to the animation (one to grow one to shrink)
  5. 2 animations (one to grow one to shrink)
After that, there's a simple JavaScript function that does two main things, depending on whether the image is at its original size or has been blown up to full screen:

  1. Switch to the other class so that the animation happens
  2. Set the final style parameters of the stream so it stays the way it is at the end of the animation
Then there are the streams themselves. I added username and password parameters to the URLs so they don't have to be typed in every time. There were some other parameters that I added so that when I saved a bookmark to the home screen, it would open up and look like an app. The details are here. I really only added <meta name="apple-mobile-web-app-capable" content="yes"> and <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">. Then I pulled up the page in Safari, added a shortcut from there to my home screen, and closed Safari. Now when I open it using the shortcut, it opens up as if it were a separate app from Safari and none of the chrome is there.

Wednesday, March 26, 2014

Pinewood Derby Race Track Timer

Last week a cubmaster friend of mine mentioned his troubles with the upcoming Pinewood Derby that the local cubs were going to participate in. He had a track but no easy way to determine the winner except with some judges sitting at the finish line. He mentioned how he might forgo using his own den's track and borrow one from another cubmaster friend of his that has an electronic timer. After talking with his friend, it became apparent that the timer wasn't in working order and would need some TLC from someone experienced enough with the microcontroller used. This is what spurred the conversation with me.

Not having any experience with that particular type of microcontroller, I told him I could research some ways that I could build one into his track (thinking perhaps that it could be another use for the RaspberryPi). I spent a few days "thinking about it" (i.e. on the back burner). I even started a python script that I could use in conjunction with the GPIO ports and some photo resistors to get the times. However, after digging a little deeper, it appeared that, while this was possible, the result wouldn't be accurate.

In steps the Spark Core. I had seen this little beauty during its Kickstarter campaign and purchased one back then, thinking I could use it as the brains for my iPhone controlled garage door opener. While it turned out the RaspberryPi was more suited for that, the Spark Core seemed a perfect candidate for this project.

I posted to the Spark community that I wanted to try to use the Core for a Pinewood Derby Race timer and got tons of support. Total props to Brian Ogilvie (bko) and BDub who even gave up some sleep to help out a complete noob.

Here's the schematic:

Here's the finish line. I purchased 4 novelty LED flashlights from the local hardware store and mounted them above the lanes. You can barely see the holes, but they are there. There's about 3/4 of an inch from the surface of the track down to the photoresistors, so there shouldn't be any bleed-over between lanes or from stray light sources.

I used this code on the Spark Core to test the circuits, making sure the lights were powerful enough and the shadows dark enough. Here's the final code used during the race. I hooked up the Spark Core to my laptop, installed the driver, opened PuTTY, and connected to the COM port.

So, in testing, I got an accuracy of ±0.00002 seconds. Once the photoresistors were hooked up, the accuracy became harder to measure. However, by sliding a single board over all 4 lanes at the finish line "at the same time", I got about ±0.02 seconds. The degraded accuracy is probably due to the photoresistors' reaction times, plus the fact that, try as I might, I wasn't covering all four holes at exactly the same instant.

Here's the video. I finally got around to getting it off my phone and editing it all together.

Putting a Hidden Help Section on a Web Page

Continuing in a series of posts, here's how to add a hidden div to a web page (and make it visible on demand).

For the health check report, I had built a way to transform the output of the script into a usable report, and added editable content so that the report could be further tweaked after rendering the page. Given that others would eventually be using the report, I needed a way to help non-coders insert content so that the result stayed cogent and coherent. Thus the help section.

However, I couldn't leave a help section on the final report; that wouldn't look good when the report was delivered to the customer. So I had to make the help section normally hidden, with a button or link to display it. And the button itself had to be hidden, too!

Let's start with the help section itself. Take a look at the XSLT itself. The help section is simply a DIV containing the help content, with some special CSS applied to hide it until needed. Look at line 22. Notice that the display style is set to none. This hides the DIV entirely and collapses the space around it. It's as if the DIV isn't even there.

Now for a button to show the div when needed. Look at line 15 and you'll see an image with an onclick function. The function lives in the external JavaScript file (lines 23-29), and it simply toggles the display style between none and block. Really, it wouldn't be much trouble to put that function right in the img tag itself, but since I already had the external JS file, it was just as easy this way.
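The toggler is only a few lines; the general shape is below (the id and function name here are illustrative, not necessarily what's in the file):

    // lives in the external JS file
    function togglehelp() {
      var help = document.getElementById('helpsection');
      // flip between hidden (display:none) and shown (display:block)
      help.style.display = (help.style.display == 'block') ? 'none' : 'block';
    }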

Another look at line 15 will show that the img is contained within a div with id="helptoggler". That div has three lines of CSS that essentially make it invisible until the mouse hovers over it, and put it in the top left corner of the page:

  #helptoggler {position:absolute;left:0px;top:0px;}
  #helptoggler > img {visibility:hidden;width:32px;height:auto;}
  #helptoggler:hover > img {visibility:visible;}

This means that the image is in the top left corner of the page, is hidden until moused over, and when clicked shows the help section.

Since the JavaScript is built as a toggler, the same function can be called anywhere a link is desired to hide the help section. Clicking on the image in the top left corner hides the DIV, but notice that within the help section itself is a span with an onclick action calling the same JavaScript function (line 24).

Once again, if you want to play with the files themselves, just download, unzip, and open the XML file in IE.

Displaying Editable Content on a Web Page

In my previous post, I detailed how I went about transforming an XML document into a readable report, displaying data from the XML. If you downloaded and tried out the files, you should have noticed that the final report was more than I described.

Particularly, there were two things I glossed over:
  1. There are several boxes on the page that have edit buttons and can be modified after the page is rendered.
  2. There is a hidden div that shows the report author how to format additional content so that it shows up with the same style as the rest of the content on the page.
I'll cover #2 at a later time. Right now, I want to talk about how the editable content was built. Remember, the point of the project was to build a final report that could be delivered to the customer. A script gathered a bunch of data and output it as XML, and an XSLT transformed that information into a more readable format. While the script was good at gathering much of the information needed, it didn't go into wordy detail about the recommended changes to be made. Thus, a method of adding to the document was needed.

Initially, I built a section of the XML that would let the user put all the needed information right into the XML. That way, the XML transformation and PDF generation would be the last step in generating the report. However, that wasn't too sexy, and I still found myself needing a way to edit the content after it was rendered.

So, I came up with EditableContent. It comprises a few components. Here is what part of the code looks like:

  <h2>Summary Recommendations</h2>
  <div id="recsummary" class="editablecontent">
   <img src="health_check_files/edit-icon.png" onclick="editcontent('recsummary','recsummary_content')" />
   <div id="recsummary_content">
    <xsl:if test="reportinfo/recsummary!=''"><xsl:value-of select="reportinfo/recsummary" disable-output-escaping="yes"/></xsl:if>
    <xsl:if test="not(reportinfo/recsummary) or (reportinfo/recsummary='')">Provide a summary description of your recommendations<br /><span class="example">EXAMPLE</span>: The primary recommendations resulting from the data gathering, assessment, and analysis performed during this Health Check are to upgrade both hardware and software on the core NMS components of the infrastructure. In addition to hardware and software upgrades, a review of the alarm/event management process is recommended. Architecturally, the NMS deployment is in accordance with a “Best Practices” implementation for an organization of this size.</xsl:if>
   </div>
  </div>

First is the div containing the content. This div has a unique id and uses the CSS class 'editablecontent'. That class is what draws the red line around the editable content and positions the edit button. Click here to see the CSS (pay attention to lines 63-85).

The main DIV has two children: the edit image and the content DIV. The edit image has some special CSS that makes it visible only when the mouse moves over the parent DIV. The child DIV is the one containing the content.

Notice the image has a JavaScript function attached. The JavaScript is contained in a separate file (but could just as easily have been included in the XSLT). It simply switches the static DIV to an editable textarea box and back again. Depending on which save button is pressed, the JavaScript returns a DIV that looks like a draft or like a final version.
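In sketch form, the pair of functions looks something like this (the names and the draft styling are illustrative; the real file also wires up the two save buttons):

    // swap the content DIV for a textarea pre-loaded with its HTML
    function editcontent(parentid, contentid) {
      var content = document.getElementById(contentid);
      var editor = document.createElement('textarea');
      editor.id = contentid + '_editor';
      editor.value = content.innerHTML;
      content.parentNode.replaceChild(editor, content);
      // ...draft and final save buttons wired to savecontent() go here...
    }

    // rebuild the DIV from the edited text
    function savecontent(parentid, contentid, isdraft) {
      var editor = document.getElementById(contentid + '_editor');
      var content = document.createElement('div');
      content.id = contentid;
      content.innerHTML = editor.value;
      // a draft save gets a marker class so it still looks unfinished
      content.className = isdraft ? 'draft' : '';
      editor.parentNode.replaceChild(content, editor);
    }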

Adding editable content all over the XSLT made it easy to use the data from the XML while keeping a last-minute override for any of the content (e.g. to fix a single misspelling without editing the XML manually).

To see it all in action, download the files, unzip, and open the XML file in IE.

XML and XSLT: Transforming Raw XML into Readable Reports

Not too long ago I was tasked with doing a health check for one of our customers. They used one of the products I was less familiar with, so I decided to look at some previous examples of health check reports to see what information I would need to gather and put in the report. It turns out one of my co-workers had already built a script that gathered some of the necessary information. At my request, he modified the output to XML so that I could take that XML and use an XSLT (eXtensible Stylesheet Language Transformation) to convert it into a nice pretty report in a browser.
Stored data really comprises two parts: the data itself and the schema. The schema is the format or syntax of the stored data. For example, let's say I wanted to store my CD collection. For each CD, I would probably store the name of the CD, the artist, the country it was released in, the record label, the price I paid, and the year it was released. These descriptive details form the schema of the data I'm going to store. I could store the data in an Excel spreadsheet, with column headers and one row for each CD. That would be pretty easy, but what if the person I was sending the data to didn't have Excel? Plus, if I took one row out of the spreadsheet, I'd also have to copy the column headers so that the recipient would know what each column means. Without the schema information, the data isn't as easy to understand. XML is a language that allows all my data to be transmitted along with complete schema information. Consider the XML for a CD collection:
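A couple of entries (the CDs themselves are just made-up examples following that schema) might look like this:

    <collection>
     <cd>
      <name>Empire Burlesque</name>
      <artist>Bob Dylan</artist>
      <country>USA</country>
      <label>Columbia</label>
      <price>10.90</price>
      <year>1985</year>
     </cd>
     <cd>
      <name>Hide Your Heart</name>
      <artist>Bonnie Tyler</artist>
      <country>UK</country>
      <label>CBS Records</label>
      <price>9.90</price>
      <year>1988</year>
     </cd>
    </collection>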
See how each piece of data has surrounding tags that help identify what each piece of data means? See how things are nested within each other so that it's easy to see what data pertains to which objects (i.e. which Artist produced which CD)? That's the nice part about XML.

Now, back to the health check. My co-worker had modified his script so that the output was in XML format. That meant I could take the XML and easily interpret the data. It also meant I could build an XSLT to apply styles, chrome, and extra text to the XML to make it much more readable. Here is what the output of the script looks like. This is the XML that I want to turn into a nice readable report. Ideally, I'd like to end up with a PDF.
The way to transform this is to build an XSLT and reference that XSLT within the XML itself. See how line 1 has a link to an XSL stylesheet? That's the XSLT. When the XML is opened in a supported browser (IE works best, surprisingly), the browser will go find the XSLT and perform the transformation against the XML data.
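That link is a single processing instruction at the top of the XML document. The filename here is assumed, but the shape is always the same:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="health_check.xsl"?>
    <nimsoft>
     <reportinfo>
      ...
     </reportinfo>
     ...
    </nimsoft>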
Ok, that's not too bad, right? Let's go through the magic one piece at a time. The first 6 lines are pretty standard XSLT. The good stuff starts on line 7. In a way, the XSLT is merged with the XML. Technically, the XSLT is inserted into the XML document, but it's almost easier to think of the XML as being inserted into the XSLT. (It's because of this that many people incorrectly say that an XSLT is HTML, thinking it's the HTML that the XML gets inserted into, but I digress.)
So, line 7 starts an XSL template. The template here essentially says to go to the tag in the XML called 'nimsoft' and insert some HTML. (By the way, whenever an XSLT is applied, the underlying XML is pretty much hidden except where the XSLT specifies that it should be displayed.) Lines 8-19 are pretty standard HTML document headers. In another post, I'll go into the details of the helptoggler and the editablecontent parts; they have more to do with HTML and JavaScript than with XML/XSLT.
Line 20 is the first place where we're going to insert some of the XML data. The <xsl:value-of select="reportinfo/company" /> tag instructs the browser to display the value inside the company tag, which is under the reportinfo tag, which is under the nimsoft tag. In the final HTML, line 20 would look like this:
<div id="company_name_content">Health Check for Fake Company</div>
Lines 21-88 are more standard HTML. This section of the report is displayed to help the author make some changes after the initial version is rendered. I'll discuss this in another post.
Lines 89-96 make use of the xsl:value-of tag to pull in more XML data. This time, pulling from the nimsoft/reportinfo/authors tag (e.g. Mickey Mouse) and the nimsoft/reportinfo/reportdate tag (e.g. 06 Feb 2014).
Lines 97-107 contain a simple legal notice, another standard HTML block. Remember, all the standard HTML is just inserted at the point of the last template match, so we're still inserting at the root of the XML.
Lines 108-135 begin the first section of the actual report and are more standard HTML, with a couple of xsl tags. The first is at line 130, which uses an xsl:if statement to check whether there is a value in the nimsoft/reportinfo/recsummary tag. If there is something there, the xsl:value-of tag displays it. It also uses the disable-output-escaping attribute, which means the XML can contain valid HTML. Line 131 uses xsl:if again, this time checking whether the nimsoft/reportinfo/recsummary tag is missing or empty. If it doesn't exist or is empty, some boilerplate HTML is inserted instead of what we would have expected from the XML. This is handy since it allows that tag to be optional in the XML.

Up to this point, I've been using something called XPath to reference particular tags within the XML. XPath is a specification that allows tags to be referenced by their path in the XML. So far, I've shown how the root template works: within any of the xsl:value-of tags, the select attribute uses XPath to specify a particular tag. Since I've been working within the root template (nimsoft), that part of the path is implied.
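As a concrete example, using the company tag from earlier, these two selects point at the same tag:

    <!-- inside <xsl:template match="nimsoft"> -->
    <xsl:value-of select="reportinfo/company" />
    <xsl:value-of select="/nimsoft/reportinfo/company" />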

From lines 136-169, I follow the pattern already established: put some raw HTML on the page, then insert values from particular XML tags using XPath. The same applies to lines 177-367; it all uses the same basic concepts.

However, lines 170, 172, 174, & 176 use the xsl:apply-templates tag. This xsl tag instructs the browser to move down to the specified node in the XML and loop through the children of that node. This is similar to calling a function from within a program.

Line 170 specifies the NMS node. Since we're inside the nimsoft template, the browser moves to the nimsoft/NMS node and loops through its children. To see what the browser does there, look at lines 368-394. Those lines specify what to do whenever the xsl:apply-templates tag is used for a node called NMS.
In particular, this piece of the XSLT builds a table and inserts the values of the various children of the NMS node (lines 369-383 & 389-393). Lines 384-388 contain another xsl:apply-templates tag (disks/disk), which means the browser moves to that node and processes its children; see lines 422-427. That nested template outputs a single row for each disk under NMS/disks/. Once all the children are processed, the browser returns to the template that called the apply-templates tag.

Lines 172, 174, & 176 work the same way, with templates that either call their own templates or reuse existing ones. For example, both the NMS and UMP templates call the disks/disk template, since the disks are stored the same way under each parent.
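In outline, the calling template and the called templates fit together like this (condensed, with made-up child tags like hostname, name, and free standing in for the real ones):

    <!-- inside the nimsoft template -->
    <xsl:apply-templates select="NMS" />
    <xsl:apply-templates select="UMP" />

    <!-- runs once for each NMS node -->
    <xsl:template match="NMS">
     <table>
      <tr><td>Hostname</td><td><xsl:value-of select="hostname" /></td></tr>
      <!-- hand the disk rows off to the shared disk template -->
      <xsl:apply-templates select="disks/disk" />
     </table>
    </xsl:template>

    <!-- one row per disk, whether under NMS or UMP -->
    <xsl:template match="disk">
     <tr><td><xsl:value-of select="name" /></td><td><xsl:value-of select="free" /></td></tr>
    </xsl:template>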

And that's about it. By walking through the XSLT with the XML right beside it, you can see how the final result is made. Simply opening the XML in IE is enough to get the information to display, and it's pretty trivial from there to generate a PDF version of the report.

Here's a snippet of what the final report looks like. If you're interested in playing around with it yourself, you can download the sample XML, XSLT, and the other auxiliary files here. Next time, I'll talk about how I built the help section and the editable content.