Monday, June 3, 2019

How to share a ton of stuff with someone else

If you have the stuff to share:

  1. Download and install Resilio Sync
  2. After opening the app, click the plus sign in the top left corner and select "Standard Folder"
  3. Browse to the folder you want to share with someone else and click "Open"
  4. A new entry will appear in your list of folders. At the right end of this entry, three dots will appear when you mouse over the row. Click the three dots and select "Copy Read Only key" or "Copy Read & Write key," depending on whether you want the recipient to be able to change what's in the shared folder. The key is now in your clipboard.
  5. Send the key to the person you want to share with.

If you have received a key:

  1. Download and install Resilio Sync
  2. After opening the app, click the plus sign in the top left corner and select "Enter key or link"
  3. Paste in the key that was sent to you
  4. Browse to the folder you want to synchronize and select Open.
  5. Go grab a coffee and chips and wait until the synchronization finishes.

Disclaimer: using any protocol to transmit/receive data that you are not legally allowed to transmit/receive is obviously illegal. I'm not responsible if you use this to do something illegal.

Tuesday, April 16, 2019

One Trailer for Each Piece of the Saga

Saw an article bringing together all the trailers for all the Star Wars productions. They did Episodes 1-9 first, then the ancillary productions. I decided to put them into a YouTube playlist. You're welcome.

Don't forget the one that you can't find on YouTube; it falls after Episode 6 and before Episode 7.

Friday, March 22, 2019

Web GL Globe

I have spent some time now working in the oil industry. I've been working in technology, so I haven't had much to do with actual oil. However, I did come across a really cool visualization of oil imports and exports while searching for a way to visualize some network performance data. It's based on a bit of code by Google called the WebGL Globe. WebGL Globe is built on WebGL, which is like OpenGL except that it runs natively in the browser. It allows for the kind of interaction you would expect from a 3D game, right in the browser with web code. Some of the examples are pretty cool and load extremely quickly because they don't really use the normal document structure. (Some other really neat uses of WebGL and/or three.js, a library that facilitates using WebGL: Galactic Neighbors, Google's Internet Safety game for kids, and Lubricious.)

The concept is pretty expansive when you think about the kinds of data that can be shown. WebGL Globe combines data that has several dimensions of information encoded and visualized:

  1. Node Location - latitude and longitude
  2. Node connections - showing that two nodes are connected in some way (shown as an arc when you click on a country in the World of Oil visualization).
  3. Connection intensity - this one can have multiple dimensions depending on how creative you get. For example:
    1. the line color itself can indicate some sort of status (red/green/yellow/orange). You could even color percentages of the arc length to show the distribution of a status (e.g. 10% bad and 90% good might mean a line that is mostly green with a red segment covering 10% of its length).
    2. the thickness of the arc can be another dimension, indicating something like volume
    3. the maximum altitude of the arc can be another, indicating sample size
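As a concrete sketch of how those dimensions might be packaged for the globe: the WebGL Globe demos generally consume a JSON structure of named series, each holding a flat run of latitude, longitude, magnitude triples (the exact schema varies by demo, so treat the shape below as an assumption). Hypothetical Python to flatten per-site metrics into that form:

```python
import json

# Hypothetical per-site metrics: (latitude, longitude, normalized magnitude).
sites = [
    (29.76, -95.37, 0.8),   # Houston
    (51.51,  -0.13, 0.4),   # London
    (35.68, 139.69, 0.6),   # Tokyo
]

# Flatten into the [lat, lng, magnitude, lat, lng, magnitude, ...] run
# that the globe's data loaders generally expect.
flat = [value for site in sites for value in site]
payload = [["response_time", flat]]

print(json.dumps(payload))
```

The magnitudes here are pre-normalized to roughly 0..1; whatever metric you plot (deviance, volume, variability) would need a similar normalization step before feeding the globe.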
All this is neat, but what good does it do anyone? Well, if you've watched my Analyzing TCP Application Performance video, you know that response time monitoring is the most important part of any infrastructure monitoring. If you're doing application response time monitoring, you should have data showing how each individual transaction performed. That's a huge amount of data (big data, anyone?).

Visualizing that data requires a few steps of summarization and grouping. First, every transaction's metrics should be retained individually so that you can dive into the details if needed. Second, you can group by user and summarize into time buckets (1-minute or 5-minute). A further level of grouping is by network location. This is legitimate because most users at a particular location share a very high percentage of the network path to the services they consume.
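The grouping above can be sketched in a few lines (the transaction records and site names here are hypothetical): bucket each transaction's response time into a five-minute window per site, then average each bucket:

```python
from collections import defaultdict

# Hypothetical raw transactions: (epoch_seconds, site, response_ms).
transactions = [
    (1554000000, "houston", 120),
    (1554000030, "houston", 180),
    (1554000090, "london",   95),
]

BUCKET = 300  # five-minute buckets, in seconds

# Group response times by (site, bucket start time).
buckets = defaultdict(list)
for ts, site, ms in transactions:
    buckets[(site, ts - ts % BUCKET)].append(ms)

# Summarize each bucket; here, a simple mean per (site, window).
summary = {key: sum(vals) / len(vals) for key, vals in buckets.items()}
```

Keeping the raw `transactions` list around preserves the per-transaction detail for drill-down, while `summary` is the per-location series you'd actually plot.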

Let me rephrase: You should have data that describes sources, destinations, and the performance between them. Sound familiar? What I'm envisioning is taking this data and plotting it out using WebGL Globe. Each network location and each service hosting location is a node (hm, could cloud services actually be represented as clouds?!?). They'd be connected with arcs. Thickness of the arcs could represent the number of transactions between those two nodes. Height of the arc could represent the recent negative variability in performance or a way to highlight the selected arc. Color of the arc could show a measure of the deviance in performance. 

Click on a node and you'd see the arcs from all the user locations consuming services from that site (if any) and the arcs to all the service hosting locations consumed by that site (if any) pop up in altitude (normally they'd be displayed at sea level or hidden based on a GUI setting). This is similar to the action you see in the Global Oil visualization when you click on a country (you see the countries connected to it). 

From there, if you saw that all the arcs were showing problems, you would know (from problem domain isolation) that there is a problem common to all users at that site. You could click on the node again and get into performance stats for the infrastructure at that location (from a tool like LogicMonitor). Following the colors should get you to the root of the problem pretty quickly.

Alternatively, if you saw a problem in only one of the arcs (or a small selection) follow problem domain isolation tactics and click on the arc having the problem. That should dive you into the infrastructure connecting those two nodes so you can find the problem.

Nodes themselves can have problems that aren't visible outside the node. If that's the case, you should go into the node to look at why there are performance issues. You'd know to go there because the node icon itself would show a color indicating that there's a problem.

Friday, March 8, 2019

Using Docker

Just a couple of articles that have been camping out in my browser since I found them. They were very useful in helping me get Docker doing useful stuff, like running Ansible.

REPOSITORY          TAG       IMAGE ID          CREATED         SIZE
alpine/git          latest    a1d22e4b51ad      10 days ago     27.5MB
ansible-docker      latest    25b39c3ffd15      2 weeks ago     153MB
ubuntu              latest    47b19964fb50      4 weeks ago     88.1MB

Ansible-docker is an image I modified to suit my needs. You can use it either by cloning the git repository and building from source or by pulling it from the hub:
docker pull sweenig/ansible-docker

I used this to learn how to push images to Docker Hub:

alpine/git is the easiest way I've found to run git on Windows (hint: it's not actually running in Windows, but in Linux inside the container).

Thursday, March 7, 2019

Object Oriented Programming

Today's blog post may have been posted before, but it's a really good one. If you're looking to get into object-oriented programming, you should give this a look:

Tuesday, May 8, 2018

HTML Maps, Continued

Continuing a previous post, I decided to add some CSS to make it obvious that you can click on the mapped links. Here's the CSS:

  a {position:absolute;}
  a:hover {border:1px dotted gray;}
Obviously, you may want to use CSS selectors to make sure that only your mapped links on images get styled this way.

Wednesday, December 13, 2017

Online circuit schematic design and simulation

This is pretty cool, although I haven't actually had a chance to use it, since most of my circuits don't involve much signal processing and are quite simple DC circuits. However, LushProjects has a circuit simulator that is completely online and free to use (unlike SPICE). The neat thing is that you should be able to embed your own circuit into any web page using this tool and an iframe.

Tuesday, December 12, 2017

Hard puzzle with an easy solution

I've been keeping this tab open in my browser because I wanted to work out the solution myself. I finally figured it out (two years later). Don't give in and look at the solution without giving it a good try. Hint: you can do it without any trigonometric functions and without Pythagoras' help.

This is known as Langley's Adventitious Angles, and a good visual solution can be seen here (warning: Flash required).

Monday, December 11, 2017

Boolean Arithmetic

I had to explain Boolean arithmetic the other day to non-makers. These guys didn't have any real experience with electric logic circuits, but the pictures here seemed to help.

Friday, December 8, 2017

Prusa's ColorPrint tool

When 3D printing, there's usually a jump in cost and complexity for printers that can print multiple colors. As a workaround, you can pause the printer, switch the filament to a different color, print a few layers with that color, then pause again, switch the filament back, and resume printing. This can be a very tricky thing to do manually, so obviously there is a tool for it. Here are some prints by a buddy of mine that used this tool. He prints a black surface with a cutout for the image, prints a few layers of a different color (glow-in-the-dark in this case), then resumes printing in black. He didn't have to make any major modifications to the model, just cut out the first few layers so that the glow-in-the-dark layer can shine through.

Thursday, December 7, 2017


Rolling your own VPN can have various benefits. The biggest is that when you're on an unsecured network (i.e. any WiFi network that you don't own yourself), your traffic is encrypted back to your home and then goes out to the internet. This means that you don't have to trust that the WiFi owner (think Starbucks or McDonald's) isn't snooping on your packets. It doesn't matter if they do snoop, because your packets are encrypted and nobody can read them unless they are you or your Raspberry Pi at home.

Before you object: yes, I know that any form of encryption can eventually be beaten. If you're that paranoid about someone decrypting your packets (which would take years, by the way), you should be off the grid.

That said, I looked into setting up a VPN option for myself and eventually found PiVPN. This little one line installer sets everything up on your RPi so that it becomes a VPN endpoint. Use it to generate a certificate which you can load on your device (I've tested on iOS and Windows 10) into the freely available OpenVPN client.

I have since found, but not installed/tried a web GUI that should let me manage PiVPN through a browser. I hope to try this eventually after I have some free time. So, probably next year!

Tuesday, December 5, 2017


I wanted to figure out the best way to put WordPress on a Raspberry Pi. It turns out the best way is an image called PressPi. Load it up, lock down all the security, load Certbot so it's all running over HTTPS, and you're good to go.

In case you're interested, here's a good set of instructions for installing WordPress on Ubuntu 16.04. So many instructions! You might need phpMyAdmin as well.

Monday, December 4, 2017


I recently needed to overlay a bunch of links on top of an image. This can be done one of two ways, CSS being the more modern. Essentially, you create a div with a bunch of elements inside it, all of which are positioned absolutely.

<div style="position:relative; height:786px; width:537px; background:url(myimage.png) 0 0 no-repeat;">
     <a style="position:absolute; top:393px; left:147px; width:87px; height:69px;" title="asdf" alt="asdf" href="asdf" target="_self"></a>
</div>

Instead of mapping out all the positions manually, there's a really neat tool that will let you do it right on top of your own image and then generate both the HTML map code as well as the more simple CSS code to render it on a webpage.

Friday, December 1, 2017

W3Schools and their incredible CSS Library

Anyone who has built a website in recent history knows the importance of good CSS. I myself have been compiling a master CSS sheet that I use on most of the web development projects I'm involved with. I realized that I was trying to accomplish the same set of outcomes over and over, so a standard library of CSS styles was a natural shortcut to a good end.

I know some have been critical of W3Schools and their no-nonsense way of explaining web development concepts, citing technical inaccuracies and nuances. I've found that those perceived inadequacies either can't be discerned by "normal" people or don't have a discernible impact on the end product. As such, I've been a fan for a couple of years now. I've built their site into my Google searches so that I know I'll end up going straight to the answer that I'm sure they've provided.

I've used a few of their tools from the CSS section over the last few years, and I've been particularly pleased with their tooltip implementation. That's when I discovered that the CSS sheet they use for their own site, which has all of the CSS needed to implement all of the cool, modern utilities, is free to use. They even encourage it!

There are a couple things I like about it:

  1. All their examples use this single sheet. I don't have to understand a concept, then look up a different place to find out how to use the W3.CSS framework to implement it. 
  2. It uses pure CSS. I only include one CSS reference and I'm good to go. There's no need to import a jQuery/javascript library as well to make it all work.
  3. It treats responsiveness and mobile-first as the highest priorities. This is what makes simple websites look like websites developed by multi-billion-dollar corporations.
  4. Templates!
I used one of the templates here. It's one page. I don't host any JavaScript or CSS files. Even the icons come from frameworks referenced and explained by W3Schools.

Thursday, November 30, 2017

Installing LAMP in one step

Go educate yourself on LAMP.

Installing LAMP has been getting easier over the years. Now you can install it with a single command line:

sudo apt-get install lamp-server^

More information here.

Encryption Everywhere

Anybody who has stood up a web server knows the importance of securing that connection. Watch this video:

While I don't yet use the HTTPS Everywhere add-on, I do make use of Certbot. You can see an example here. This website runs on a LAMP server on AWS (the free tier). From beginning to end, except the coding of the site itself, I had the secured site running in about 15 minutes. Several cool things happen when using Certbot:

  1. It's aware of the multiple hosts you may have configured in your web server and lets you run it for specific hosts.
  2. It automatically configures an HTTP redirect. This means that even if a user accidentally leaves off the https:// from your site's address, they'll get redirected to the https version automatically. When I first did this manually, it took me several days to get it working right.
  3. The certificates are free because they have a short lifespan, so Certbot has to be run regularly to get a new certificate. You don't have to pay attention to that cycle, though, because you can run the checker daily or weekly and it won't do anything unless the existing certificate is close to expiration.

Mini blog posts

Since I don't really have the time anymore to do long, in-depth blog posts, I've decided that I'll start doing mini posts with tidbits of information. When I started this blog, it was a place for me to post stuff I needed to remember and/or write down so I could recall it easily. This is a continuation of those efforts. I'll be picking suspended Chrome tabs and detailing why I've kept each particular tab around.

Tuesday, September 12, 2017

Hurricane Harvey - Our Story

Hurricane Harvey started affecting us Friday, 25 AUG 2017.  It was my Friday off, and we were preparing for the next Monday when the twins would enter kindergarten. As such, Friday was "Meet the Teacher" at their school. The sky was dark and there was light rain. It felt like a good day to cuddle up and watch a movie. Friday evening, I was in touch with our ward leadership as we coordinated our response teams.
Our band was to perform on 9 SEP, so the next morning (Sat 26 AUG), Christy and I got together with the rest of the band and had practice over near Black Horse Ranch. We were finishing up the first set as the bottom fell out, so we decided to break. We were across Cypress Creek from our kids and wanted to make sure we got back to them before any flooding started. We had identified some friends of ours in a neighborhood near us that was likely to flood. They were preparing to move out the following Thursday (31 AUG), so they had most of their stuff in boxes already. She (Kelia) was almost 9 months pregnant, so it was agreed that they would preemptively evacuate to our home. On the way home, we went over and helped him (Kris) lift some of their furniture onto blocks and 2x4's. They had a few other things to finish and he has a large pickup truck, so we left them to finish, expecting them to come over later in the day. They could get out even if the flooding started. It's important to note that their neighborhood usually floods before anything else in the area, and when the flooding is really bad, their neighborhood dumps out into our neighborhood. As long as they are not flooding into us, we don't have a problem draining our neighborhood.
They came over later that afternoon and the kids started playing. We got them setup in our spare bedroom with bunk bed cots for the kids (4 total people in that family). Kris and I went back over to his neighborhood. He went back over to the house to get a few things they had forgotten and I went to help another friend attempt to waterproof his garage door. We jammed some tarps into the hinges of the garage door and weighted it down with landscaping bricks. This turned out to be a pretty good barrier against the water that eventually rose a couple feet above the bottom of the garage door. When we returned, there was only water in the street gutters. A few hours later, water had risen to cover the street. We started keeping an eye on things. The reservoir we drain into was well below us, so I wasn't worried that flooding would get to dangerous levels for us. The last two major floods had not produced enough water quickly enough to have it come up more than halfway up the driveway.

Sunday morning (27 AUG) dawned with some water covering the streets, but less than the highest point overnight. The other friend in the same neighborhood that always floods first had not yet evacuated. The overnight rise of water had not receded so they were looking at evacuation options. Kris and I rode in his big truck and started to prepare their house for flooding and to convince them to evacuate. It became evident that the water was going to keep rising. Last year, this family had waited until it was too late to evacuate during the 2016 tax day flood and had to be evacuated by canoe. We emphasized how important it was to avoid getting to that point again. The father wanted to wait it out, so we took the mother and kids to another ward member's two story home which was serving as a dispatch location for the emergency crews. The father was left to his own devices to get out (which he eventually did on his own).
A large number of ward members had congregated at that two story home for a previously scheduled baptism. Since extended family had flown in for the event, it was decided that the baptism would be performed not at the presently closed Eldridge building, but in the pool in the rain (pretty memorable!). Since there was a large group and the Bishop was present, it was decided that the sacrament would be administered since all other church meetings were cancelled. Shortly afterward, the rain lightened up and most of our street drained.
Sunday afternoon, a rescue request came in for a family in Enchanted Valley. It was outside our area of responsibility, so our dispatcher tried to find resources in the area. Coming up empty, he dispatched Scott with his big Yukon. They made it to the family and loaded everyone up. There wasn't room for two of the four rescuers, so they stayed behind to be retrieved after the family was dropped off at a safe location. Upon returning, Scott decided to splash around a little and stalled his Yukon. They pushed it up onto dry land and notified our dispatcher. I saw that call come in and reached out to a few Jeeper groups who had been offering help and making rescues since the rain started. Allan and Z responded, and we headed out toward Telge and 290. I made it past the Sheriff's station before the water started getting deeper. I gave Allan and Z my tow strap and they continued on (they have a few inches more clearance than I do). Their two-door Jeep would only hold two of the four rescuers, so Scott and his nephew stayed with his truck while the two who had originally stayed behind were brought back to me. I had backtracked and waited under the 290 bridge at Telge. Upon their return, the water had risen. Allan commented that he could probably make it back in, but he was worried about getting out while the water was still rising. We decided to attempt a rescue via Huffmeister. It was dark by this time. In my lower Jeep, I led the charge. The streets were clear of water until about a half mile north of Cypress North Houston. I was cruising at about 40mph when we hit the water. Needless to say, it was a pucker moment. We were all fine, but it was one of those moments where everything went into slow motion. The water started getting deeper, so again Allan in his swamp thing went ahead to see what they could see. The water ended up being too deep (reportedly about 6'), so we weren't getting to Scott and his nephew any time soon. They would have to ride it out.
The rain started coming down harder and we had just received news that the Addicks and Barker reservoirs would be opened up at 2am. Not yet knowing how this would affect the current water levels, we decided to break for the night.  We went to bed late Sunday night as we watched the waters begin to slowly creep over the street in front of our house.

Monday morning (28 AUG) when Christy got up, Kelia told her the water outside, which was up to the sidewalk, was no longer draining away. Christy insisted that we start making plans to raise our important possessions and make an evacuation plan. I was hesitant because past experience with extreme flooding had never given us any problems. I reluctantly conceded, though, and we figured out what we would do if we decided to evacuate.
We found out that the neighborhood that always floods first had breached the main road and was spilling into our neighborhood. It wasn't going to get any worse for them, but it was coming in quickly enough that our drainage system wouldn't be able to keep up for long. I went for a hike in my chest waders and saw our main drainage creek rising. This meant that what we were draining into was full, and it was only going to get worse from here. This had never happened before. It turns out that the Addicks reservoir had filled up. The Army Corps of Engineers had already opened it up to drain it (which would send the water south toward the ocean), but the water leaving was less than what was coming in. I broadcast my hike live over Facebook. Upon returning home, I decided it was time to get out while the water was low enough for my Jeep and Kris' truck. Our neighbors (4 adults and one infant) also needed to evacuate. We rallied everyone into motion and started implementing our plan. We got everything that we could think of up as high as we could. Kris and I loaded our vehicles with the essentials we would be taking with us. I got my family loaded up in my Jeep, and Kris got his family and the neighbors loaded up in his truck. My neighbor got 3 videos of our escape (part 1, part 2, part 3) from the back of Kris' truck.
My Jeep dove into the water and got us to a high point right before the exit of the neighborhood onto Barker Cypress (which was the spillover point for the first neighborhood that flooded). I parked there and we made sure the boys had their life jackets on and seatbelts off (in case we had to ditch the Jeep). About 8 minutes later (which seemed like an eternity) Kris' truck caught up with us and we pushed forward into the deeper water right before the exit onto Barker Cypress. It's at this point that I think I got water in my differential, more on that later. With no option but to push forward, we got water up to top of the Jeep tires before making it out onto the shallows of Barker Cypress. We turned south away from Cypress Creek and away from 290 where the water was coming from. We made it down to Tuckerton without any real issues except for some water up to the middle of the Jeep tires. It was dry from there on out. My plan was to head Southwest until we found a place to land. While en route, James, a fellow Cub Master, texted me offering to let us come to his place indefinitely, which we did. His neighborhood was wet but didn't have any water on the streets. I realized later that we were living out the story of the three little pigs, Kris' family fleeing the straw house from the big bad Harvey to our house of sticks, which we eventually fled to James' house of brick.
I spent most of Monday afternoon coordinating rescues, surveying potentially flooded/closed streets, and making various runs to the Longenbaugh Mormon church, which had been turned into a shelter. There was a ton of food and other donations to be received and sorted, as well as families to take care of. I got a call later in the day asking for help with the evacuation of a family of 13 near the intersection of Queenston and Tuckerton. I made it to the Shell station there, which had turned into a staging point for various high-clearance vehicles that were going in to make rescues. I arranged for an ATV with a flatbed trailer to make the run in to the house where the family was. They would bring them out to me and I would take them to a shelter. After the ATV was dispatched, the family called and cancelled. I still feel bad for the driver of the ATV. I signed up to do a shift on Tuesday from 4-8pm at the Longenbaugh building. The shelter required two Elders or High Priests present at all times.

Tuesday morning (29 AUG), Christy and I ventured out in the Jeep to try to make it back to the house. Several reports on our neighborhood Facebook page indicated that the waters were receding. We found high water on Red Rugosa, but not too much covering the street in front of our house. We discovered that the water seemed to have entered the garage and gotten to the front porch, but didn't come into the house. We also found that a telephone/electrical pole, which had been parked at the end of the street waiting to be installed, had floated into our front yard. This was surprising, but not unreasonable. It was wood, and the water was high and apparently thrashing towards the three storm drains in front of our house. Some neighbors saw it churning around and lashed it to one of my trees. Christy and I elevated a few more things in case the water came up some more. I built a couple of impromptu sandbags out of wet towels, garbage bags, and landscaping bricks to put at the front and back doors. I spent the afternoon at the Longenbaugh building. The clouds moved off and the sun shone. It felt like a good sign that the storm was over. There was a rumor about a kicked-in door across Barker Cypress, so I decided that I would spend the night at the house with my shotgun. It also gave me a chance to watch Guardians of the Galaxy 2.

Wednesday morning (30 AUG), our roads were dry and we had decided that we could probably come back to our house from James'. The sun had started to shine and only the lowest intersections still had water. Harvey had moved on to east Houston, so while we were technically still in the storm, we were now on the dry side. We came back home and started to put things back down on the ground. I spent the afternoon loading up small items from Kris’ house and being a shuttle driver for the ward team that was gutting a home. I eventually got some food brought in for the teams in both locations.

Thursday (31 AUG) we spent most of the day moving Kris and his family into their new home. The roads were dry, so it was a simple matter of loading up the U-Haul twice. Their new home is less than a mile away, but on our side of Barker Cypress (less chance of flooding).

Friday (1 SEP) I spent the morning getting some things back in place around the house until my brother, John, came in from Dallas. When he got here, we got together with the ward team to work on removing some wood flooring from a flooded house. That took the rest of the night, and we only got the main living room done (150 of 1,200 sq. ft.).

Saturday (2 SEP) I dropped my brother off with the team that would continue working on that wood floor for the next six hours. I had arranged to attend a differential-fluid-changing party hosted by a shop owner on Clay Road just inside the Beltway. They were changing fluids for free, so it was a good opportunity to make sure everything was in working order and to get the water out of my gears. They also gave me some pointers that made installing the wiring harness for my trailer hitch dead simple. I got done with that around 1pm and went to act as a coordinator for the team that was finishing the wood floor removal and the other team that had begun gutting another house.

Sunday (3 SEP) began with gutting a few houses in our neighborhood. We had abbreviated church meetings at 1pm, during which a new Bishopric was called and we were notified that we would be meeting back in the West Road building for the foreseeable future.

Monday (4 SEP) was Labor Day, and our crew chief had advised that those of us who had been working for several days straight take some time off to recover. I heeded that advice and played with the kids. I brought out the slot car track and we raced. In the evening, Grandpa invited us over to go fishing. He had just bought three new kids' fishing poles (Star Wars themed, of course). Luke caught a baby brim and a baby bass. I caught a turtle, and Grandpa caught a brim and another turtle. We let the first turtle go but decided to relocate the second one, since the turtles have a tendency to kill the ducklings. Cole became an expert caster, sometimes throwing his practice weight 25 feet from the shore.

Tuesday (5 SEP) meant a return to work; the Chevron offices had been closed since the storm. It appears the tunnels had flooded, since the demo work had already been done and there were dozens of fans and dehumidifiers.

Wednesday, March 2, 2016

Rate, Volume, Utilization, and Parsecs

But wait, the parsec is not a unit of time, but a unit of distance! Wait, what? All arguments aside about how the Millennium Falcon could make the Kessel run in a shorter distance through enormous gravitational shears, knowing your unit is extremely important.
I work in network monitoring, and one of the main reports my tools provide measures how much an interface is used. Because the tool is better than poo, it presents the utilization in several different units. First, let's review the units. Each of these is a standard unit, so standard SI prefixes apply when talking about larger multiples of the base unit:
  • Bytes - measures the total number of octets that were transmitted (or received depending on p.o.v.)
  • Bits per second - measures the number of 0's and 1's that were transmitted (or received depending on p.o.v.) in a single second.
  • Percent utilization (%) - measures how much of the interface's total capacity was in use during the measurement period, expressed as a percentage.
Let's break it down.


Bytes

This one is pretty simple and is referred to as VOLUME. It's simply the total number of Bytes transmitted (or received) during the measurement window. An SNMP polling station would poll the octet counter at a regular interval. Every time the octet counter is polled, the delta between the previous poll result and the current poll result represents the total number of Bytes during the measurement interval.
V = B1 - B0
Polling too frequently will result in small values. Whenever rollups happen, the individual data points should be summed (integrate over the rollup interval). As long as rollups are done that way, the poll rate is less consequential.
Rollover is accounted for by assuming that a lower number than the previous measurement is caused by rollover and the new measurement (measured from 0) is added to whatever remained between the previous measurement and the max limit of the counter.
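The delta-and-rollover logic above fits in a few lines of Python. This is just an illustrative sketch (the function name is mine); it assumes a 32-bit Counter32 octet counter, so the wrap point is 2**32. Interfaces using 64-bit HC counters would use 2**64 instead.

```python
def counter_delta(prev, curr, max_count=2**32):
    """Volume between two counter polls, accounting for rollover.

    If the current reading is lower than the previous one, assume the
    counter wrapped: add the new reading (measured from zero) to what
    remained between the previous reading and the counter's maximum.
    """
    if curr >= prev:
        return curr - prev
    return (max_count - prev) + curr

# Normal case: 1500 - 1000 = 500 Bytes
assert counter_delta(1000, 1500) == 500
# Rollover case: counter wrapped past 2**32
assert counter_delta(2**32 - 100, 50) == 150
```

Summing these per-poll deltas over a rollup interval gives the rolled-up VOLUME, which is why the poll rate is less consequential as long as rollups are sums.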

Layman's example

This is similar to tracking how many miles a car travels. You simply take a reading of the odometer before beginning a trip and another at the end of the trip. The difference is the total miles the trip entailed. You could take readings more often. You'd just need to add up all your measurements at the end of the trip to get the total for the trip.

Bits per second

Bits per second is a simple count measured over a unit of time, making it a RATE. It counts the number of bits that went through the interface, then normalizes the count over a standard unit of time, the second. It is calculated like this:
R = (Δ bits) / (Δ time)
That is, you take the total number of bits and divide it by the total time of the measurement. This is usually done through SNMP by looking at the octet counters. The NMS will poll the sysUpTime and the octet counters at a certain time (T0 and B0). It will then poll the sysUpTime and octet counters at some time in the future (T1 and B1). The RATE is calculated by dividing the difference between these two measurements (multiplying the octet delta by 8 to convert Bytes to bits, since 8 bits = 1 Byte):
R = 8 (B1 - B0) / (T1 - T0)
The resulting unit is bits/second and represents an average of the number of bits transmitted per second over the measurement interval (T1-T0). When doing the rollup, average is the most common descriptor. In addition, min, max, standard deviation, variance, and 50th, 75th, and 90th percentiles would be useful.
If you're already gathering VOLUME, you'll notice that B1 - B0 used in the RATE calculation comes from the volume calculation. That's on purpose and is why it is said that RATE is derived from the VOLUME measurement. In fact, if the polling interval is fairly regular, the rate can be said to be approximately linearly proportional to the volume.
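Here's the RATE calculation as a small Python sketch (names are my own, not from any particular NMS). Note how the Byte delta inside it is exactly the VOLUME delta:

```python
def rate_bps(b0, b1, t0, t1):
    """Average RATE in bits per second between two polls.

    b0, b1: octet counter readings (Bytes) at times t0, t1 (in seconds,
    e.g. derived from sysUpTime). The Byte delta is the same quantity
    used for the VOLUME metric; multiplying by 8 converts it to bits.
    """
    volume_bytes = b1 - b0          # this is the VOLUME measurement
    return 8 * volume_bytes / (t1 - t0)

# 1,250,000 Bytes over a 10-second interval -> 1,000,000 bits/s (1 Mbps)
assert rate_bps(0, 1_250_000, 0, 10) == 1_000_000
```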

Layman's example

This is not really any different than measuring the speed of your car while on a trip. You take a reading of the odometer and the clock at the beginning of the trip and again at the end of the trip. The difference in miles, divided by the total time of the trip (in hours in this case) will give you an average speed in mph. You could increase the resolution of your measurements by taking a reading and performing the calculation every 5 minutes. This would give you a data point describing the average speed for every 5 minutes of your trip.

Percent Utilization

Percent UTILIZATION measures how much capacity is used and is reported as a percentage of the total capacity available. This is calculated by dividing the current RATE by the total rate the interface is capable of. Alternatively, it could be calculated by dividing the VOLUME by the total volume capability of the interface. The latter requires a bit more derivation, so most use the former.
This metric requires knowledge about the interface's capabilities. This is usually obtained by polling the bandwidth statement (ifSpeed) of the interface, which is in bits per second (bps). Once obtained, the percent UTILIZATION can be calculated like this:
U = 8 (B1 - B0) / (T1 - T0) / ifSpeed * 100
You may notice that a part of this formula looks the same as the RATE calculation. It is. Simplifying the formulas:
U = 8 (B1 - B0) / (T1 - T0) / ifSpeed * 100
R = 8 (B1 - B0) / (T1 - T0)
U = R / ifSpeed * 100
Since the UTILIZATION formula involves dividing a rate (in bps) by a speed (in bps), the result is unitless. This means that the unit can be thought of as % (percentage). Rollups of UTILIZATION should be treated the same way as rollups for RATE. You should also notice that the percent utilization should be linearly proportional to the rate, given a constant bandwidth capacity of the interface.
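Once the RATE is in hand, UTILIZATION reduces to a one-liner. A minimal Python sketch (function and parameter names are mine):

```python
def utilization_pct(rate_bps, if_speed_bps):
    """Percent UTILIZATION: the RATE divided by the interface's
    capacity (ifSpeed, in bps), times 100. The bps units cancel,
    leaving a unitless value read as a percentage."""
    return rate_bps / if_speed_bps * 100

# A 1 Mbps rate on a 100 Mbps interface is 1% utilization
print(utilization_pct(1_000_000, 100_000_000))
```

Given a constant ifSpeed, this is linearly proportional to the rate, which is why rollups of UTILIZATION are treated the same way as rollups of RATE.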

Layman's example

This calculation is similar to calculating how close a driver is to the speed limit. By dividing the current speed (derived using the formulas above for speed) by the total allowable speed, you can calculate what percentage of the limit the car is currently traveling. When driving, moving at 100% of the speed limit is actually good. You are actually making the most of the available resource. The only time 100% utilization is a problem is when you need to do something else with that speed (i.e. other cars on the road not travelling at the same speed). The same actually holds true for networking. Utilization of 100% is not bad until you need some percentage of those resources for another task.

Wednesday, February 3, 2016

SharePoint Kanban

I was recently asked to help reproduce a Kanban board I had built in SharePoint for one of my projects. Having built it only once previously, I learned a few things and the resulting reproduction had a few improvements over the original.
First, I start with a custom list. I never use the templates in SharePoint because they are constantly trying to make things more complicated in an effort to make things more simple.

The Easy Stuff

  • Rename the [Title] field to 'Task' or something more representative of the items you'll have on your Kanban board.
  • Create a [Person or Group] field to contain the person responsible for the item. It's important that this not be a simple text field. I'll explain why when we build the views.
  • Create any other metadata columns that you want for your items (notes, description, priority, estimated effort, etc.)
  • Create a [Date and Time] field to contain the due date. Call it [Due Date] if you want to use the formulas here without modification.
  • Create a [Date and Time] field for every phase. For example, if my phases were:
    Deploy Launchpad, Igniter Primed, Mount Rocket, Connect Detonator, Remove Safety Cap, Detonate
    Then I would create the following fields as [Date and Time] fields:
    [Launchpad], [Igniter], [Mount], [Connect], [Safety], [Detonate].
    Essentially, each field will contain the date and time that that phase was completed. If the date is blank, that stage hasn't been completed. If there is a value in the field, then that phase has been completed (and was completed at that date/time). 
  • Go to List Settings >> Advanced Settings and disable attachments for the list (this was a dumb feature for what we're using the list for). 

The Next Phase Calculation

Create a [Calculated] column called "Next Phase". This column should evaluate the phase fields to determine which phase is the current phase being worked on. Continuing with my example, if I had already deployed the launchpad, primed the igniter, and mounted the rocket, the "Next Phase" would be to connect the detonator.

This is done by evaluating the last phase to see if it is complete. If the last phase has a date/time value, it is completed and the next stage is "Done". If the last phase does not have a value, we need to figure out if the "next phase" is this phase or the previous one. Here's the formula (using my example field names, you should be using yours):

=IF(NOT(ISBLANK([Detonate])), "Done",
IF(NOT(ISBLANK([Safety])), "Detonate",
IF(NOT(ISBLANK([Connect])), "Safety",
IF(NOT(ISBLANK([Mount])), "Connect",
IF(NOT(ISBLANK([Igniter])), "Mount",
IF(NOT(ISBLANK([Launchpad])), "Igniter",
"Launchpad"))))))

Technically, the end of each line above can say anything you want. Since it would be nice to sort the [Next Phase] column to show tasks in order, the values should sort naturally. Unfortunately, alphabetical order won't work. We can easily fix this by prefixing each resulting string with a number to indicate the order, like this:

=IF(NOT(ISBLANK([Detonate])), "6 Done",
IF(NOT(ISBLANK([Safety])), "5 Detonate",
IF(NOT(ISBLANK([Connect])), "4 Safety",
IF(NOT(ISBLANK([Mount])), "3 Connect",
IF(NOT(ISBLANK([Igniter])), "2 Mount",
IF(NOT(ISBLANK([Launchpad])),"1 Igniter",
"0 Launchpad"))))))

Each IF(NOT(ISBLANK(...))) test logically means: if a given phase has a completion value but the phase after it doesn't, then that later phase is the next phase.
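The nested-IF scan translates directly into ordinary code. Here's an illustrative Python version of the same "next phase" logic (field names come from the rocket example above; the function itself is my own sketch, not something SharePoint provides):

```python
# Phases in order; an item stores a completion timestamp per phase
# (None means that phase hasn't been completed yet).
PHASES = ["Launchpad", "Igniter", "Mount", "Connect", "Safety", "Detonate"]

def next_phase(item):
    """Return the sortable 'Next Phase' label for an item.

    Scan the phases in order; the first one without a completion
    timestamp is the next phase. If every phase has a timestamp,
    the item is done.
    """
    for i, phase in enumerate(PHASES):
        if item.get(phase) is None:
            return f"{i} {phase}"
    return f"{len(PHASES)} Done"

# Launchpad, Igniter, and Mount are complete, so Connect is next
item = {"Launchpad": "2016-02-01 09:00", "Igniter": "2016-02-01 10:30",
        "Mount": "2016-02-02 08:15"}
assert next_phase(item) == "3 Connect"
```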

The Status Column

This column is designed to figure out the status of each item as compared to the due date. Five possible states exist:
  1. If there's no [Due Date] (i.e. [Due Date] is blank), the status is "No Due Date".
  2. If the item has not been completed (i.e. the last phase field is blank) and the item is not yet due (i.e. the [Due Date] is greater than today), its status is "On Time for Completion".
  3. If the item has not been completed (i.e. the last phase field is blank) and the item is past due (i.e. the [Due Date] is less than today), its status is "Overdue".
  4. If the item has been completed (i.e. the last phase field is not blank) and the item was completed before the due date (i.e. the last phase field is less than the [Due Date]), its status is "Completed on Time".
  5. If the item has been completed (i.e. the last phase field is not blank) and the item was completed after the due date (i.e. the last phase field is greater than the [Due Date]), its status is "Completed Late".
Here's the formula:

=IF(ISBLANK([Due Date]),
      "No Due Date",
      IF(ISBLANK([Detonate]),
             IF([Due Date]>=Now(),"On Time for Completion","Overdue"),
             IF([Detonate]<=[Due Date],"Completed on Time","Completed Late")))
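For clarity, here is the same status logic as a Python sketch (function and parameter names are mine; in SharePoint, the calculated column formula above does this work):

```python
from datetime import datetime

def status(due_date, completed):
    """Status of an item relative to its due date.

    due_date: a datetime or None; completed: the datetime of the final
    phase (the [Detonate] field in the example) or None if still open.
    """
    if due_date is None:
        return "No Due Date"
    if completed is None:
        # Still open: compare the due date against right now
        return "On Time for Completion" if due_date >= datetime.now() else "Overdue"
    # Closed: compare the completion time against the due date
    return "Completed on Time" if completed <= due_date else "Completed Late"

assert status(None, None) == "No Due Date"
assert status(datetime(2016, 1, 1), datetime(2016, 1, 2)) == "Completed Late"
```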

The Views

I recommend 4 types of views:
  • Datasheet View - This should be a datasheet view of all items. Usually sorted by [Due Date].
  • All Items - This is a standard version view of the Datasheet View. You can alternatively add groupings based on Next Stage.
  • My Items - This is either a standard or datasheet view (your preference), also sorted by [Due Date], but filtered by the person the item is assigned to (remember above where I said we'd use this later?). SharePoint has a session variable called [Me], which contains the username of the current user. By adding a filter where the [Assignee] field is equal to [Me], we create a view that only shows the items assigned to the currently logged-in user. Anyone on the team can open this view and see only their items. This won't work if you made the assignee field a simple text string; it needs to be a [Person or Group] field.
  • Phase specific views - these views aren't required but are often requested. You basically build a copy of the Datasheet View or the All Items view but filter it where [Next Phase] field equals a particular phase. You would repeat this for every phase. I find this tedious when those who want this type of breakdown could just look at the Datasheet or All Items views and just filter for a particular value in the [Next Phase] field. However, some people can't handle that level of sophistication, so statically defining views is the only way to please them.

Tuesday, June 9, 2015


UPDATE 6/9/2015: Version 1.7 now released. This update adds standalone support. Since CA is including newer versions of MySQL in their products, DBToolv3 is no longer going to work. This change allows you to specify to use MySQLDump instead of DBToolv3. Essentially, you unremark line 15 and remove/remark line 14. If I get enthusiastic about it, I may update the script to allow a switch from the command line to specify which method to use. I'm just not there yet.
UPDATE 2/10/15: Version 1.6 now released. This update changes the way harvesters and DSAs are backed up, by only backing up the ReaperArchive, ReaperArchive15, and HarvesterArchive directories to a single directory (no redundant rolling backups). It only backs up files that have the archive bit set, so before running it the first time, set the archive bit for all the files in those directories. I also fixed the date naming method so it's YYYYMMDD instead of YYYYDDMM. I also added timestamping to the log so you know how long it takes to perform the file backups vs. the database backups.
UPDATE 2/27/14: Version 1.5 now released. This version doesn't have too many changes. I just added the lines below that allow the NFA mess of data files to be backed up along with everything else. This one script can still be used on any product. However, when running on a Harvester or DSA, extra commands backup the data files.
The syntax for running the tool hasn't changed since 1.4 (but 1.4 introduces some major changes), so you should be able to drop the script in place without changing any scheduled tasks.

nqbackup.bat <dbname> <num_backups_to_keep>

Remember, if you need a reminder how to run the tool, just run it without any arguments (or just double click it from Windows Explorer).

Wednesday, January 28, 2015

SNMPGet for Windows and Community String Discovery

I recently needed to test SNMP connectivity from a Windows server to a device to prove that there was a problem outside my system causing SNMP polling to fail. Linux has NetSNMP, which comes with a command line snmpget utility. Windows has no such utility. A quick search on the internet helped with that. I found a utility from SNMPSoft, but of course I had to build a wrapper for it.

The wrapper is pretty simple since my objective is to do a quick check of SNMP connectivity. The version and the OID polled are hard coded to v2c and sysObjectID.

This could be used to discover which community strings work on a system by using for loops in the command line. For example, if you wanted to test a bunch of community strings:

for %A in (public string1 string2 string3 string4 etc) do @snmpdiscover hostname %A

This will output something like this:

Host:localhost Community:public

Host:localhost Community:string1
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:string2
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:string3
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:string4
%Failed to get value of SNMP variable. Timeout.

Host:localhost Community:etc
%Failed to get value of SNMP variable. Timeout.

In this case, the first community string (public) worked, while the others didn't.

Friday, December 12, 2014

Enabling or Disabling the Flow Cloner in RA9.0

I know, 9.0 is an old version, but I had a customer who is transitioning and needed to temporarily enable and disable cloning of flows from the old harvesters to the new harvesters. Here's the resulting script. The first argument should be Y or N depending on whether you want to enable (Y) or disable (N) the flow cloner. The second argument is optional and is the IP address you want to clone to. If you specify the IP address, the flowclonedef.ini file is created. If you don't specify it, no changes are made.

Monday, November 3, 2014

Custom Device Polling in NetVoyant

This is a presentation I gave years ago but the recording on the community has been lost. So, I recorded it again and have posted it here.

Tuesday, August 5, 2014

The dangers of a guest wifi network

The site is associated with Walt Mossberg, so they usually have pretty cool stuff. However, I couldn't agree with this article. Before reading my response, you really need to read the article.

Essentially, the article makes the argument that getting to the internet from your phone via WiFi is better than via a cellular data connection, and therefore people should enable the guest WiFi network in their homes because it's pretty much safe.

Conceded: Enabling the guest WiFi in most residential routers does not pose any additional threat to the internal, private WiFi and local area network.

The big issue with allowing someone else to use your WiFi is that whatever they do with it is your responsibility. Your home internet router uses a very good, very legal technology called NAT overload (a many-to-one form of network address translation, also known as PAT) to allow multiple devices in your home to access the internet while you only pay for access for one device (your router). Your router acts as a proxy of sorts to the internet for all devices in your home and on your wifi. To anyone on the internet, when your phone accesses a website, it looks like your router is accessing that website. The router's NAT technology takes care of accessing the website for your phone and ferrying the data back to your phone. This is great because it allows you to have pretty much as many devices as you want on your home network, and they all have access to the internet, via your router.

Your router is masking the internal machinations of your home network. This means that it's practically impossible to determine which device on your home network your router is proxying. This is also great because it builds a barrier between the outside world (the internet) and your inside network, making it harder for malicious users to gain access to your inside devices. The best they could do would be to try to communicate with your router, which is usually pretty well protected against malicious attacks.

However, if you allow anyone to get onto your WiFi, their traffic is also proxied by your home router. So, if I come to your front curb and jump on your WiFi and download a movie and the MPAA/FBI happened to observe my download, they would not be able to determine the "inside" device that initiated the download. To them, it just looks like your router is downloading a movie. The owner of the internet access (you) could go to jail for piracy. The argument, "It wasn't me; it was someone who hacked me" doesn't fly in court.  Since authorities on the internet see one device doing everything, there is no way to determine whether the activity is coming from your guest wifi or your own computer. So, they hold you (the owner of the one device they can prove is doing something: your router) responsible.

Places that offer guest WiFi networks have very powerful systems in place and/or legal agreements that you must accept before being granted access, which prevent you from doing anything malicious with their internet connection and which hold them blameless for any malicious activity you may carry out on their free WiFi.

If you have those mechanisms in place, feel free to open up your guest WiFi. I'm a network tools guy and I don't even have those kind of tools in place. I don't recommend that you do, despite the benefit it might give to someone walking by.

Wednesday, July 30, 2014

Raspberry Pi News

I know I'm late to the show with my own blog post about the new happenings issuing forth from the Raspberry Pi Foundation, but I figured better late than never.

A few new developments have made news recently and bode well for hobbyists and inventors alike. The first (chronologically) was the release of the compute module. This is a raspberry pi just like any other, except that the whole thing is designed onto a chip that looks just like a laptop memory module.
The cool part about this is that people can now design their own main board and slip in this tiny module to get all the features of the Raspberry Pi. This means that the main board can be designed to fit just about any need out there, from small point-and-shoot cameras to large supercomputers. The foundation came out with an example main board:
But this is just an example and a board like this could be designed to meet the inventor's needs, changing the number of pins, ports, connectors, etc.

The second bit of gooey goodness is the release of the Raspberry Pi Model B+. This is the next evolutionary (not revolutionary) step in the progression of this little platform.
This new model is pretty much backward compatible with the Model B, but adds a couple of really useful features:

  • More GPIO pins - 40 pins instead of 26. (This also allows old IDE hard drive ribbon cables to be used!)
  • More USB ports - 4 ports instead of 2.
  • Micro SD - the SD card slot is smaller, has a secure latch, and the card doesn't stick out anymore.
  • Power redesign - the B+ uses less power due to better technology.
  • Better audio - this should be good for my PiTunes.
  • Better form factor - all the onboard ports now come out of only 2 sides instead of 4. This should make stuffing the Pi into a small corner a bit easier. The mounting holes are uniform and there are 4 of them, which should make building cases a bit easier and also helped pave the way for HATs (more on this later).

The third bit of really cool news is the release of specifications around HATs (Hardware Attached on Top). To break it down very simply, this allows add-on boards to tell the Pi that they're connected and give specific information about themselves to the Pi. This could make connecting an add-on board very simple, since instructions could be included on the board itself that help set it up (install software, configure pins, set up shortcuts on the desktop, etc.). I haven't found the official blog post announcing it, but James Adams spoke about it in a recent interview. Here is what they're theoretically supposed to look like. I'm guessing Adafruit will be releasing a HAT starter board soon, which would at least include the mounting hardware (since the holes should line up with the holes on the B+) and maybe the EEPROM and other components defined by the standard.

In case that wasn't enough, I've seen two articles recently that I've kept in my browser tabs so that I can refer to them the next time I purchase a Pi (usually every other month). The first is an update about the method used by many to turn the Pi into a video game emulator. This used to be a really complicated process that took a ton of time, but thanks to the guys over at petRockBlog and Emulation Station, this process is greatly improved. You can go straight to the source, or you can check out this article, which gives instructions for the uninitiated (it's spelled out pretty clearly). I've got a B+ on order right now, so as soon as it comes in, this will be one of the first things I do with it.

And if that's not enough, here's an article about the first 5 things to do after powering on your Pi. While installing Minecraft and overclocking aren't required, they are mentioned as the most popular things to do.