Thursday, December 19, 2024

Premises vs. Premise

  • One theory or idea is a premise.
  • Two or more theories or ideas are premises.
  • Premises is also a noun (always plural in form) referring to a building together with the land it occupies.

Thursday, December 12, 2024

Using Python Virtual Environments

tl;dr:

Setup:

mkdir my-new-project
cd my-new-project
python -m venv env

Linux Usage:

source env/bin/activate

Powershell Usage:

./env/Scripts/Activate.ps1

Either Usage:

pip install requests # and anything else you need

Exit:

deactivate
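
By the way, if you ever need to confirm which interpreter you're actually running (inside or outside the venv), sys.prefix and sys.executable will tell you. A quick sanity check, not part of the workflow above:

import sys
print(sys.prefix)      # points inside my-new-project/env when the venv is active
print(sys.executable)  # full path to the interpreter actually running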

Friday, December 6, 2024

Accessing LM with a bearer token through Postman

I'm a day late on this one, but at least it's good info:

Original content is here for now.
  1. Download and install Postman, or use http://postman.com
  2. Launch Postman and create a new collection that will be used for all LogicMonitor API requests:
    1. In Postman, click Import and paste https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/swagger.json. This should start the import process. 
    2. Before clicking the import button, click the gear to view import settings. 
    3. Make sure "Always inherit authentication" is checked.
  3. Configure Authentication:
    1. Go to the collection root and select the auth tab.
    2. Change the auth type to "Bearer Token" and put {{bearer}} as the token.
    3. Go to the scripts tab and add this to the pre-request script:
      pm.request.headers.add({key: 'X-Version', value: '3'})
    4. Save the collection.
  4. Create a new environment with the following variables. Set the type to 'secret' for sensitive credentials like the bearer token.
    1. url – https://<portalname>.logicmonitor.com/santaba/rest
      1. If you want to work with the LM Ingestion API, duplicate this environment and change the url to 'https://<portalname>.logicmonitor.com/rest' (without "santaba")
    2. bearer – secret – For the current value, be sure to prepend the token with "bearer " (with space)
And that should do it. You should be able to open any request already defined in the collection you imported and run it.
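
For reference, the equivalent request outside Postman might look roughly like this in Python. This is just a sketch: the portal name and token are placeholders, and /device/devices is just one example resource path.

import requests

portal = "portalname"    # placeholder for your portal
token = "bearer abc123"  # placeholder; note the "bearer " prefix, same as the environment variable
url = f"https://{portal}.logicmonitor.com/santaba/rest/device/devices"
resp = requests.get(url, headers={"Authorization": token, "X-Version": "3"})  # same X-Version header the pre-request script adds
print(resp.status_code, resp.json())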

Thursday, November 28, 2024

Adding Space to an Ubuntu VM

I ran out of space on an Ubuntu VM today and had to go through the process of expanding the hard drive. 

  1. First, we shut down the VM and reconfigured VMware to let it have a larger hard drive. 
  2. Note that we had used the default settings when installing Ubuntu, which means the root file system lives on an LVM logical volume (this matters in step 6).
  3. We downloaded the ISO for GParted. This is an entirely self-contained OS with GParted installed (along with a few other tools). We mounted it in the VM's optical drive and booted it up.
  4. When it finished booting, we used GParted to expand the partition to use the extra space on the drive.
  5. Then we ejected the ISO and booted up as normal.
  6. Since we used the default options when installing Ubuntu, the root file system sits on a logical volume. We expanded the logical volume to encompass the newly expanded partition on the physical (virtual) drive using this:
    sudo lvextend -l 100%VG ubuntu-vg/ubuntu-lv
  7. Then we increased the size of the file system to use the available space in the logical volume using this:
    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
A simple df -h later and we could see that the drive now had access to the extended space.

Thursday, November 21, 2024

Freeing up Disk Space when using Docker

Turns out there's a lot of temporary data that builds up when using Docker. To clean it up, try the following (courtesy Mr Davron):

  1. Open an elevated Powershell or CMD prompt
  2. `docker system prune --all`
  3. Right mouse click on docker desktop in system tray -> quit
  4. `wsl --shutdown`
  5. `Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full`
  6. Reboot
This freed up about 25GB on my machine.

Thursday, November 14, 2024

Using Ansible in Docker (without installing Ansible)

tl;dr:

Powershell:

docker run --rm -v ${PWD}:/ansible/playbooks sweenig/ansible-docker playbook.yml -i inventory

Linux:

docker run --rm -v $(pwd):/ansible/playbooks sweenig/ansible-docker playbook.yml -i inventory

I love Ansible. I love Docker. Running Ansible in Docker not only makes me melt but it means I don't have to install anything but Docker to run Ansible. Anywhere Docker works, Ansible works. And since I already have Docker installed on any machine I call mine...

I built a lab that shows how this can be used. The lab spins up 4 Ubuntu servers, then uses Ansible in a docker container to install a few things. Here's the shortcut to get everything up and running if you already have Docker installed:

> docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git
> cd .\docker-ansible-playbook\ansible_lab\
> docker compose up -d
> docker run --rm -v ${PWD}:/ansible/playbooks --network=ansible_lab_default sweenig/ansible-docker playbook.yml -i inventory

With these four commands, you:

  • Pull down the lab files
  • Switch into the lab directory
  • Start up 4 containers (the equivalent of starting up 4 micro VMs)
  • Run the Ansible playbook
You used Git and Ansible and had neither installed.

To shut down the lab, run this in the ansible_lab directory:
docker compose down

Thursday, November 7, 2024

Using Git without Installing it (through Docker)

If you follow this blog, you might already know that I'm a Docker fanboy. Docker containers are like micro-VMs, just lighter and faster. Git is version control software. The nice thing about version control software, or more specifically distributed version control software like Git, is that it not only allows for the storage of blobs of text or bytes but it also allows you to build workflows that enable people to contribute edits to the stored text along with approval and multiple branches.

Installing Git isn't always needed. Sometimes I just need to clone a repo. If I have Docker installed, I do this (in Powershell):

docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git

Linux/Mac is just as easy:

docker run -it --rm -v ${HOME}:/root -v $(pwd):/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git

If you want, you can even set up an alias:

Linux:
alias git='docker run -ti --rm -v $(pwd):/git -v $HOME/.ssh:/root/.ssh alpine/git'
(single quotes, so that $(pwd) expands when you run the alias rather than when you define it)

Or in Windows with Powershell:
function git {
  $allArgs = $PsBoundParameters.values + $args
  docker run --rm -it -v ${PWD}:/git -v ${HOME}:/root alpine/git $allArgs
}
With Powershell, since the underlying container is running Linux, make sure the paths to the playbook and inventory files use forward slashes rather than backslashes.

Tuesday, October 29, 2024

Working with Beyond Trust's Privileged Remote Access API

I recently started a trial with Beyond Trust for their Privileged Remote Access product (fka Bomgar). It's an RMM. As with any tool I have, I'm looking to automate it. We have a system of record (SOR) where our targets reside, and each of those targets needs a record in PRA before PRA can be used to remote into it. I'll be attempting to automate synchronization of our devices from our SOR to PRA using the API. Our trial involved the SaaS version of PRA.

Naturally, my first step was to download their collection into Postman and get started. Actually, the first thing I did was generate the API credentials, which came in the form of an ID and secret. Then I imported the collection into Postman. Unfortunately, I found it a little lacking, so I decided to enhance it using some techniques I've learned. This is not a slight against Beyond Trust. Postman is not their product and I didn't expect their collection to be any more than it was. However, that doesn't mean it couldn't be improved. ;-)

First things first, I created an environment. In it I created the ClientID, ClientSecret, and baseUrl variables. It looks like the collection file is dynamically generated from my trial portal, because the collection had a variable called baseUrl which pointed specifically to my trial portal. Because customer data should be in the environment and the collection should reference it using variables, I moved the value to the baseUrl environment variable and deleted the collection variable so that the environment variable would be used instead. 

BTPRA uses OAuth2.0, so to make any requests you have to first generate an ephemeral token which will be used as a bearer token in any subsequent requests. The collection didn't contain a request to obtain this ephemeral token, so I built one called "START HERE". 

The documentation states to make a POST request to https://access.beyondtrustcloud.com/oauth2/token. Unfortunately, this URL uses a shorter base than baseUrl, so I created a new environment variable called authURL and gave it the value https://access.example.com. Obviously not actually access.example.com, but the URL to my portal.

For the "START HERE" request, I have to include a basic authorization header. I also have to include a grant_type in the body of my post request. The other thing I want to do is parse the response and store the ephemeral access token in a new environment variable. Here's how I did it.

  1. Create a new POST request
  2. Set the url to {{authURL}}/oauth2/token
  3. On the Authorization tab
    1. Set the Auth Type to "Basic Auth"
    2. Set the Username to {{ClientID}}
    3. Set the Password to {{ClientSecret}}
  4. On the Headers tab, add a header:
    1. "Accept" : "application/json"
      This tells Postman to expect the response to be JSON, which we need it to be.
  5. On the Body tab:
    1. Pick "x-www-form-urlencoded" (there are other ways to do this, I know, but this works fine)
    2. Add "grant_type" : "client_credentials"
  6. On the Scripts tab, we're going to write a script that will parse the response and set an environment variable containing our ephemeral access token.
    1. Select "Post-response" and enter the following script:
      try {
          var json = JSON.parse(pm.response.text());
          pm.environment.set("bearToken", json.access_token);
      } catch (e) {console.log(e);}
Save and run your request. If you set everything up right, you should see a response containing your token. If you check your environment, you should see a new variable called bearToken, with the value of the access_token in the response. 
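
For reference, the same token exchange outside Postman might look roughly like this in Python (the authURL value and credentials are placeholders):

import requests

auth_url = "https://access.example.com/oauth2/token"  # your authURL, not literally example.com
resp = requests.post(
    auth_url,
    auth=("<ClientID>", "<ClientSecret>"),      # sent as the basic authorization header
    data={"grant_type": "client_credentials"},  # sent x-www-form-urlencoded
    headers={"Accept": "application/json"},
)
token = resp.json()["access_token"]  # the ephemeral bearer token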

One thing remains: we need to tell all requests in the collection to use this token. Luckily this is pretty easy to do since all the requests already inherit the authorization from their parent, the collection. Opening the collection, I went to the Authorization tab and set the "Auth Type" to "Bearer Token". Then in the Token field, I put {{bearToken}}

And that's it. Now you should be able to open any request from the collection and run it, providing any parameters the request requires. 

This configuration should keep everything about the API separate from my specific settings, meaning I could delete and reimport the collection at any time (just don't delete the START HERE request).

Thursday, October 24, 2024

Favorite way to troubleshoot Python scripts

I recently discovered a great way to make sure that Python scripts give you the information you need when there's a failure. I often run Python scripts inside Docker containers. They either log locally to a file or send logs to a log aggregator (LM Logs). As such, there's not always someone monitoring the stdout pipe of the Python script. If it fails, often the best piece of information is captured using a try/except block. You can have extra data printed out to stdout or even sent out to the log aggregator. This would look something like this:

>>> try:
...   {}["shrubbery"]
... except Exception as e:
...   print(e)
...
'shrubbery'

Now that wasn't helpful, was it? If the only logs we'd seen were about successful operation and then suddenly a log shows up that just says "shrubbery", we really wouldn't know what was going on. Luckily, there are a few things we can add to the exception output that clarify things:

>>> import sys
>>> try:
...   {}["shrubbery"]
... except Exception as e:
...   print(f"There was an unexpected error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")
...
There was an unexpected error: 'shrubbery':
Error on line 2

If we import the "sys" library, it gives us some options, one of which is the line number on which the failure happened (the failure that popped us out of our try block and into the except block). This still doesn't give us everything we might want, but the line number gives us a great place to start looking at our code to see what happened.

We can do better:

>>> import sys
>>> try:
...   {}["shrubbery"]
... except Exception as e:
...   print(f"There was an unexpected {type(e).__name__} error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")
...
There was an unexpected KeyError error: 'shrubbery':
Error on line 2

Ah, very nice. Now we know the type of error, a KeyError, we know the key that caused the error, and we know the line in our code where the error is happening.

There are more options for outputting more data. However, I haven't found more data to be that useful. With this information, I have just what I need and no extra fluff to work through. 
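
If you ever do want the full detail, the standard library's traceback module will format the entire stack as a string. A minimal sketch:

>>> import traceback
>>> try:
...   {}["shrubbery"]
... except Exception:
...   print(f"There was an unexpected error:\n{traceback.format_exc()}")
...
There was an unexpected error:
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
KeyError: 'shrubbery'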

Thursday, October 3, 2024

Capturing packets on a Windows server without installing anything

 Ever wanted to do a pcap on a Windows server, but didn't have permission to install an app like Wireshark? Here's how you do it:

  1. Start an elevated command prompt or powershell terminal.
  2. Run `netsh trace start capture=yes tracefile=C:\temp\packetcapture.etl`
  3. Wait until you believe the desired packets have been captured or reproduce the issue you want to capture.
  4. Run `netsh trace stop`
  5. Your packet capture file will be at C:\temp\packetcapture.etl. You'll need to convert this into a file that Wireshark can open. In the past, you could open it with Microsoft Message Analyzer, but it isn't available anymore. You can use this tool to convert it. Simply download the release and run:
    `etl2pcapng.exe in.etl out.pcapng`
    Where in.etl points to the file output from your trace and out.pcapng points to the place where you want your output file to go. 
There are filters you can apply to the netsh command if needed. But I've found the filtering in Wireshark to be easier/better. 

Tuesday, December 10, 2019

Sending Syslog via Python

I had someone ask me the other day if we could ingest alarms into LogicMonitor from their app. For example, if the app had an exception raised, they'd like to send a message to LogicMonitor to open an alarm. The easiest way I could think of doing this was to have a standalone Python script that sends data via Syslog to LM. This is one way this could be done. I haven't tested this, but it's simple enough to see that it should work:

import logging
import logging.handlers
import sys
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.INFO)
destIpAddress = sys.argv[1]
destPort = int(sys.argv[2])  # SysLogHandler wants the port as an int
handler = logging.handlers.SysLogHandler(address=(destIpAddress, destPort))
my_logger.addHandler(handler)
my_logger.info(" ".join(sys.argv[3:]))  # everything after the port becomes the message

You'd call it like this:
> send_syslog.py 192.168.25.64 514 My application had an error in the widget creator service.
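
One caveat worth knowing: SysLogHandler defaults to UDP. If your aggregator listens on TCP instead, the handler takes a socktype argument. Like the original, this is an untested sketch:

import logging.handlers
import socket

handler = logging.handlers.SysLogHandler(
    address=("192.168.25.64", 514),
    socktype=socket.SOCK_STREAM,  # TCP instead of the default UDP
)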

Tuesday, September 3, 2019

Groovy SNMPwalk Helper Functions

tl;dr - Go here.

I've been doing a lot of SNMP polling through Groovy lately. One of the methods I use most often is Snmp.walkAsMap(), which returns a map that looks like this:
[8.6:2, 4.10:1500, 8.7:1, 8.8:1, 4.12:1500, 4.14:1500, 4.16:1500, 3.44:6, 22.5:0.0, 22.4:0.0, 
22.3:0.0, 22.2:0.0, 22.1:0.0, 22.8:0.0, 22.7:0.0, 22.6:0.0, 17.16:164136, 16.44:6639007, 17.12:8634086, 
18.8:0, 17.14:8745523, 18.7:0, 17.10:21076, 10.6:0, 10.5:0, 10.4:0, 10.3:0, 10.2:3968863511, 10.1:8980402, 
22.44:0.0, 18.6:0, 18.5:0, 7.1:1, 18.4:0, 7.2:1, 18.3:0, 7.3:1, 18.2:0, 7.4:1, 18.1:0, 7.5:1, 10.8:1684285, 
7.6:2, 10.7:4181064938, 7.7:1, 7.8:1, 15.44:0, 16.12:2114735865, 16.10:4077101, 16.16:16467729, 
16.14:2361323502, 22.14:0.0, 22.16:0.0, 9.44:2 days, 14:54:42.04, 21.6:0, 21.5:0, 21.4:0, 21.3:0, 
21.2:0, 21.1:0, 22.10:0.0, 22.12:0.0, 21.44:0, 21.8:0, 21.7:0, 5.10:4294967295, 4.44:1500, 5.12:4294967295, 
5.14:4294967295, 11.10:0, 5.16:4294967295, 10.44:1620014, 11.12:12487957, 17.8:30812, 11.14:12673362, 
17.7:17502386, 6.1:, 17.6:0, 6.2:90:e6:ba:59:1b:18, 11.16:136053, 17.5:0, 6.3:00:50:56:c0:00:01, 
17.4:20226, 6.4:00:50:56:c0:00:08, 17.3:20229, 6.5:52:54:00:9b:4b:e0, 17.2:37200831, 6.6:52:54:00:9b:4b:e0, 
17.1:44466, 6.7:02:42:39:1b:cc:e3, 6.8:02:42:55:44:3f:06, 10.10:0, 20.7:0, 20.6:0, 20.5:0, 20.4:0, 20.3:0, 
20.2:0, 20.1:0, 10.12:30849041, 10.14:131456967, 10.16:77954842, 20.8:0, 5.1:10000000, 16.8:17045376, 
5.2:100000000, 16.7:189459674, 5.3:0, 16.6:0, 5.4:0, 16.5:0, 5.5:0, 16.4:0, 5.6:10000000, 16.3:0, 5.7:0, 
16.2:1009509757, 5.8:0, 16.1:8980402, 6.12:4e:c1:4e:e6:07:90, 5.44:4294967295, 6.14:0a:2c:4f:3a:eb:5c, 
6.16:12:db:e0:82:ca:a2, 19.44:0, 6.10:06:31:50:31:a5:fb, 15.12:0, 14.44:0, 15.10:0, 21.16:0, 15.16:0, 
21.14:0, 15.14:0, 20.44:0, 21.12:0, 1.12:12, 1.10:10, 15.1:0, 4.1:65536, 4.2:1500, 4.3:1500, 15.8:0, 
21.10:0, 4.4:1500, 15.7:0, 4.5:1500, 15.6:0, 4.6:1500, 15.5:0, 4.7:1500, 15.4:0, 4.8:1500, 15.3:0, 
15.2:0, 14.10:0, 20.16:0, 14.14:0, 20.14:0, 13.44:0, 14.12:0, 20.12:0, 1.16:16, 1.14:14, 20.10:0, 14.16:0, 
6.44:62:d0:3f:14:ef:85, 7.12:1, 7.14:1, 7.16:1, 7.10:1, 14.2:0, 14.1:0, 3.1:24, 3.2:6, 3.3:6, 3.4:6, 3.5:6, 
14.8:0, 3.6:6, 14.7:0, 3.7:6, 14.6:0, 3.8:6, 14.5:0, 14.4:0, 14.3:0, 2.14:veth9a64fa1, 2.12:veth2bc8fbd, 
1.44:44, 2.10:veth92ca0b0, 19.14:0, 19.16:0, 19.10:0, 19.12:0, 18.44:0, 13.3:0, 13.2:0, 13.1:0, 2.1:lo, 
2.2:NVIDIA Corporation MCP77 Ethernet, 2.16:veth497df7f, 2.3:vmnet1, 2.4:vmnet8, 2.5:virbr0, 2.6:virbr0-nic, 
2.7:br-6a2604a91ac1, 13.8:0, 2.8:docker0, 13.7:0, 13.6:0, 13.5:0, 13.4:0, 18.16:0, 8.16:1, 8.14:1, 
17.44:18857, 18.12:0, 18.14:0, 18.10:0, 8.12:1, 7.44:1, 8.10:1, 13.10:0, 12.44:0, 13.14:0, 13.12:0, 
3.14:6, 2.44:vethd608288, 3.12:6, 3.10:6, 12.4:0, 12.3:0, 12.2:2662, 12.1:0, 1.1:1, 1.2:2, 1.3:3, 1.4:4, 
1.5:5, 1.6:6, 1.7:7, 1.8:8, 13.16:0, 9.1:0:00:00.00, 12.8:0, 9.2:0:00:00.00, 12.7:0, 9.3:0:00:06.31, 12.6:0, 
9.4:0:00:06.31, 12.5:0, 9.5:0:00:09.32, 9.6:0:00:12.32, 9.7:3:03:43.67, 9.8:0:00:18.32, 12.10:0, 11.44:4506, 
12.12:0, 3.16:6, 12.14:0, 12.16:0, 9.16:3:03:43.67, 9.14:3:03:43.67, 19.8:0, 19.7:0, 19.6:0, 8.44:1, 
9.12:3:03:43.67, 9.10:0:00:18.32, 11.5:0, 11.4:0, 11.3:0, 11.2:28692938, 11.1:44466, 19.5:0, 19.4:0, 
19.3:0, 8.1:1, 19.2:0, 8.2:1, 19.1:0, 8.3:1, 11.8:6882, 8.4:1, 11.7:25297372, 8.5:2, 11.6:0]

This is great and usable, but I found myself constantly wanting an easier way to display the data. I also wanted the data to be structured a little more hierarchically, grouped by row in the SNMP table. I also wanted to make it easier to address individual pieces of the data. So I built a couple of helper functions that transform the data and make it easier to address. They can be found here.
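
The helper functions are Groovy, but the core transformation is simple enough to sketch in Python, assuming the walk result is a map of "column.instance" keys like the output above:

def snmp_map_to_table(walk_result):
    # regroup {"column.instance": value} into {instance: {column: value}}
    table = {}
    for key, value in walk_result.items():
        column, instance = key.split(".", 1)  # instance indexes can contain further dots
        table.setdefault(instance, {})[column] = value
    return table

print(snmp_map_to_table({"2.1": "lo", "3.1": "24", "2.2": "eth0", "3.2": "6"}))
# {'1': {'2': 'lo', '3': '24'}, '2': {'2': 'eth0', '3': '6'}}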

The output of the snmpMapToTable() function looks like this:
[6:[8:2, 22:0.0, 10:0, 18:0, 7:2, 21:0, 17:0, 6:52:54:00:9b:4b:e0, 20:0, 16:0, 5:10000000, 15:0, 4:1500, 
3:6, 14:0, 2:virbr0-nic, 13:0, 1:6, 12:0, 9:0:00:12.32, 19:0, 11:0], 10:[4:1500, 17:21076, 16:4077101, 22:0.0, 
5:4294967295, 11:0, 10:0, 6:06:31:50:31:a5:fb, 15:0, 1:10, 21:0, 14:0, 20:0, 7:1, 2:veth92ca0b0, 19:0, 18:0, 
8:1, 13:0, 3:6, 12:0, 9:0:00:18.32], 7:[8:1, 22:0.0, 18:0, 10:4181064938, 7:1, 21:0, 17:17502386, 
6:02:42:39:1b:cc:e3, 20:0, 16:189459674, 5:0, 15:0, 4:1500, 14:0, 3:6, 2:br-6a2604a91ac1, 13:0, 1:7, 
12:0, 9:3:03:43.67, 19:0, 11:25297372], 8:[8:1, 22:0.0, 18:0, 10:1684285, 7:1, 21:0, 17:30812, 
6:02:42:55:44:3f:06, 20:0, 16:17045376, 5:0, 15:0, 4:1500, 14:0, 3:6, 13:0, 2:docker0, 1:8, 12:0, 
9:0:00:18.32, 19:0, 11:6882], 12:[4:1500, 17:8634086, 16:2114735865, 22:0.0, 5:4294967295, 
11:12487957, 10:30849041, 6:4e:c1:4e:e6:07:90, 15:0, 21:0, 1:12, 14:0, 20:0, 7:1, 2:veth2bc8fbd, 
19:0, 18:0, 8:1, 13:0, 3:6, 12:0, 9:3:03:43.67], 14:[4:1500, 17:8745523, 16:2361323502, 22:0.0, 
5:4294967295, 11:12673362, 10:131456967, 6:0a:2c:4f:3a:eb:5c, 21:0, 15:0, 14:0, 20:0, 1:14, 7:1, 
2:veth9a64fa1, 19:0, 8:1, 18:0, 13:0, 3:6, 12:0, 9:3:03:43.67], 16:[4:1500, 17:164136, 16:16467729, 
22:0.0, 5:4294967295, 11:136053, 10:77954842, 6:12:db:e0:82:ca:a2, 21:0, 15:0, 20:0, 1:16, 14:0, 
7:1, 19:0, 2:veth497df7f, 18:0, 8:1, 13:0, 3:6, 12:0, 9:3:03:43.67], 44:[3:6, 16:6639007, 22:0.0, 
15:0, 9:2 days, 14:54:42.04, 21:0, 4:1500, 10:1620014, 5:4294967295, 19:0, 14:0, 20:0, 
13:0, 6:62:d0:3f:14:ef:85, 1:44, 18:0, 17:18857, 7:1, 12:0, 2:vethd608288, 11:4506, 8:1], 5:[22:0.0, 
10:0, 18:0, 7:1, 21:0, 17:0, 6:52:54:00:9b:4b:e0, 20:0, 16:0, 5:0, 4:1500, 15:0, 3:6, 14:0, 2:virbr0, 
13:0, 1:5, 12:0, 9:0:00:09.32, 11:0, 19:0, 8:2], 4:[22:0.0, 10:0, 18:0, 7:1, 21:0, 17:20226, 
6:00:50:56:c0:00:08, 20:0, 5:0, 16:0, 4:1500, 15:0, 3:6, 14:0, 2:vmnet8, 13:0, 12:0, 1:4, 9:0:00:06.31, 
11:0, 19:0, 8:1], 3:[22:0.0, 10:0, 18:0, 7:1, 21:0, 6:00:50:56:c0:00:01, 17:20229, 20:0, 5:0, 16:0, 
4:1500, 15:0, 3:6, 14:0, 13:0, 2:vmnet1, 12:0, 1:3, 9:0:00:06.31, 11:0, 19:0, 8:1], 2:[22:0.0, 
10:3968863511, 7:1, 18:0, 21:0, 6:90:e6:ba:59:1b:18, 17:37200831, 20:0, 5:100000000, 16:1009509757, 
4:1500, 15:0, 14:0, 3:6, 13:0, 2:NVIDIA Corporation MCP77 Ethernet, 12:2662, 1:2, 9:0:00:00.00, 
11:28692938, 19:0, 8:1], 1:[22:0.0, 10:8980402, 7:1, 18:0, 21:0, 6:, 17:44466, 20:0, 5:10000000, 
16:8980402, 15:0, 4:65536, 14:0, 3:24, 13:0, 2:lo, 12:0, 1:1, 9:0:00:00.00, 11:44466, 8:1, 19:0]]

It may not look much better, but if you add some carriage returns and tabs you get this:
[
6:[
  8:2, 22:0.0, 10:0, 18:0, 7:2, 21:0, 17:0, 6:52:54:00:9b:4b:e0, 
  20:0, 16:0, 5:10000000, 15:0, 4:1500, 3:6, 14:0, 2:virbr0-nic, 
  13:0, 1:6, 12:0, 9:0:00:12.32, 19:0, 11:0], 
10:[
  4:1500, 17:21076, 16:4077101, 22:0.0, 5:4294967295, 11:0, 10:0, 
  6:06:31:50:31:a5:fb, 15:0, 1:10, 21:0, 14:0, 20:0, 7:1, 2:veth92ca0b0, 
  19:0, 18:0, 8:1, 13:0, 3:6, 12:0, 9:0:00:18.32], 
7:[
  8:1, 22:0.0, 18:0, 10:4181064938, 7:1, 21:0, 17:17502386, 6:02:42:39:1b:cc:e3, 
  20:0, 16:189459674, 5:0, 15:0, 4:1500, 14:0, 3:6, 2:br-6a2604a91ac1, 13:0, 1:7, 
  12:0, 9:3:03:43.67, 19:0, 11:25297372], 
8:[
  8:1, 22:0.0, 18:0, 10:1684285, 7:1, 21:0, 17:30812, 6:02:42:55:44:3f:06, 20:0, 
  16:17045376, 5:0, 15:0, 4:1500, 14:0, 3:6, 13:0, 2:docker0, 1:8, 12:0, 9:0:00:18.32, 
  19:0, 11:6882], 
12:[
  4:1500, 17:8634086, 16:2114735865, 22:0.0, 5:4294967295, 11:12487957, 10:30849041, 
  6:4e:c1:4e:e6:07:90, 15:0, 21:0, 1:12, 14:0, 20:0, 7:1, 2:veth2bc8fbd, 19:0, 18:0, 8:1, 
  13:0, 3:6, 12:0, 9:3:03:43.67], 
14:[
  4:1500, 17:8745523, 16:2361323502, 22:0.0, 5:4294967295, 11:12673362, 10:131456967, 
  6:0a:2c:4f:3a:eb:5c, 21:0, 15:0, 14:0, 20:0, 1:14, 7:1, 2:veth9a64fa1, 19:0, 8:1, 18:0, 
  13:0, 3:6, 12:0, 9:3:03:43.67], 
16:[
  4:1500, 17:164136, 16:16467729, 22:0.0, 5:4294967295, 11:136053, 10:77954842, 
  6:12:db:e0:82:ca:a2, 21:0, 15:0, 20:0, 1:16, 14:0, 7:1, 19:0, 2:veth497df7f, 18:0, 
  8:1, 13:0, 3:6, 12:0, 9:3:03:43.67], 
44:[
  3:6, 16:6639007, 22:0.0, 15:0, 9:2 days, 14:54:42.04, 21:0, 4:1500, 10:1620014, 
  5:4294967295, 19:0, 14:0, 20:0, 13:0, 6:62:d0:3f:14:ef:85, 1:44, 18:0, 17:18857, 
  7:1, 12:0, 2:vethd608288, 11:4506, 8:1], 
5:[
  22:0.0, 10:0, 18:0, 7:1, 21:0, 17:0, 6:52:54:00:9b:4b:e0, 20:0, 16:0, 5:0, 4:1500, 
  15:0, 3:6, 14:0, 2:virbr0, 13:0, 1:5, 12:0, 9:0:00:09.32, 11:0, 19:0, 8:2], 
4:[
  22:0.0, 10:0, 18:0, 7:1, 21:0, 17:20226, 6:00:50:56:c0:00:08, 20:0, 5:0, 16:0, 
  4:1500, 15:0, 3:6, 14:0, 2:vmnet8, 13:0, 12:0, 1:4, 9:0:00:06.31, 11:0, 19:0, 8:1], 
3:[
  22:0.0, 10:0, 18:0, 7:1, 21:0, 6:00:50:56:c0:00:01, 17:20229, 20:0, 5:0, 16:0, 4:1500, 
  15:0, 3:6, 14:0, 13:0, 2:vmnet1, 12:0, 1:3, 9:0:00:06.31, 11:0, 19:0, 8:1], 
2:[
  22:0.0, 10:3968863511, 7:1, 18:0, 21:0, 6:90:e6:ba:59:1b:18, 17:37200831, 20:0, 
  5:100000000, 16:1009509757, 4:1500, 15:0, 14:0, 3:6, 13:0, 
  2:NVIDIA Corporation MCP77 Ethernet, 12:2662, 1:2, 9:0:00:00.00, 11:28692938, 19:0, 8:1], 
1:[
  22:0.0, 10:8980402, 7:1, 18:0, 21:0, 6:, 17:44466, 20:0, 5:10000000, 16:8980402, 15:0, 
  4:65536, 14:0, 3:24, 13:0, 2:lo, 12:0, 1:1, 9:0:00:00.00, 11:44466, 8:1, 19:0]
]

As you can see, it's getting easier to see what's going on. At this point, it'd be nice to order by instance ID and also order by metric ID.  One of the helper functions I built does just that, pprintSnmpWalkTable:
Wildvalue: 1:
Data sorted by column ID
  1.##WILDVALUE##: 1
  2.##WILDVALUE##: lo
  3.##WILDVALUE##: 24
  4.##WILDVALUE##: 65536
  5.##WILDVALUE##: 10000000
  6.##WILDVALUE##: 
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 0:00:00.00
  10.##WILDVALUE##: 8980402
  11.##WILDVALUE##: 44466
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 8980402
  17.##WILDVALUE##: 44466
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 2:
Data sorted by column ID
  1.##WILDVALUE##: 2
  2.##WILDVALUE##: NVIDIA Corporation MCP77 Ethernet
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 100000000
  6.##WILDVALUE##: 90:e6:ba:59:1b:18
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 0:00:00.00
  10.##WILDVALUE##: 3968863511
  11.##WILDVALUE##: 28692938
  12.##WILDVALUE##: 2662
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 1009509757
  17.##WILDVALUE##: 37200831
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 3:
Data sorted by column ID
  1.##WILDVALUE##: 3
  2.##WILDVALUE##: vmnet1
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 0
  6.##WILDVALUE##: 00:50:56:c0:00:01
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 0:00:06.31
  10.##WILDVALUE##: 0
  11.##WILDVALUE##: 0
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 0
  17.##WILDVALUE##: 20229
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 4:
Data sorted by column ID
  1.##WILDVALUE##: 4
  2.##WILDVALUE##: vmnet8
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 0
  6.##WILDVALUE##: 00:50:56:c0:00:08
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 0:00:06.31
  10.##WILDVALUE##: 0
  11.##WILDVALUE##: 0
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 0
  17.##WILDVALUE##: 20226
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 5:
Data sorted by column ID
  1.##WILDVALUE##: 5
  2.##WILDVALUE##: virbr0
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 0
  6.##WILDVALUE##: 52:54:00:9b:4b:e0
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 2
  9.##WILDVALUE##: 0:00:09.32
  10.##WILDVALUE##: 0
  11.##WILDVALUE##: 0
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 0
  17.##WILDVALUE##: 0
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 6:
Data sorted by column ID
  1.##WILDVALUE##: 6
  2.##WILDVALUE##: virbr0-nic
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 10000000
  6.##WILDVALUE##: 52:54:00:9b:4b:e0
  7.##WILDVALUE##: 2
  8.##WILDVALUE##: 2
  9.##WILDVALUE##: 0:00:12.32
  10.##WILDVALUE##: 0
  11.##WILDVALUE##: 0
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 0
  17.##WILDVALUE##: 0
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 7:
Data sorted by column ID
  1.##WILDVALUE##: 7
  2.##WILDVALUE##: br-6a2604a91ac1
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 0
  6.##WILDVALUE##: 02:42:39:1b:cc:e3
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 3:03:43.67
  10.##WILDVALUE##: 4181064938
  11.##WILDVALUE##: 25297372
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 189459674
  17.##WILDVALUE##: 17502386
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 8:
Data sorted by column ID
  1.##WILDVALUE##: 8
  2.##WILDVALUE##: docker0
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 0
  6.##WILDVALUE##: 02:42:55:44:3f:06
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 0:00:18.32
  10.##WILDVALUE##: 1684285
  11.##WILDVALUE##: 6882
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 17045376
  17.##WILDVALUE##: 30812
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 10:
Data sorted by column ID
  1.##WILDVALUE##: 10
  2.##WILDVALUE##: veth92ca0b0
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 4294967295
  6.##WILDVALUE##: 06:31:50:31:a5:fb
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 0:00:18.32
  10.##WILDVALUE##: 0
  11.##WILDVALUE##: 0
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 4077101
  17.##WILDVALUE##: 21076
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 12:
Data sorted by column ID
  1.##WILDVALUE##: 12
  2.##WILDVALUE##: veth2bc8fbd
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 4294967295
  6.##WILDVALUE##: 4e:c1:4e:e6:07:90
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 3:03:43.67
  10.##WILDVALUE##: 30849041
  11.##WILDVALUE##: 12487957
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 2114735865
  17.##WILDVALUE##: 8634086
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 14:
Data sorted by column ID
  1.##WILDVALUE##: 14
  2.##WILDVALUE##: veth9a64fa1
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 4294967295
  6.##WILDVALUE##: 0a:2c:4f:3a:eb:5c
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 3:03:43.67
  10.##WILDVALUE##: 131456967
  11.##WILDVALUE##: 12673362
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 2361323502
  17.##WILDVALUE##: 8745523
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 16:
Data sorted by column ID
  1.##WILDVALUE##: 16
  2.##WILDVALUE##: veth497df7f
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 4294967295
  6.##WILDVALUE##: 12:db:e0:82:ca:a2
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 3:03:43.67
  10.##WILDVALUE##: 77954842
  11.##WILDVALUE##: 136053
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 16467729
  17.##WILDVALUE##: 164136
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0
Wildvalue: 44:
Data sorted by column ID
  1.##WILDVALUE##: 44
  2.##WILDVALUE##: vethd608288
  3.##WILDVALUE##: 6
  4.##WILDVALUE##: 1500
  5.##WILDVALUE##: 4294967295
  6.##WILDVALUE##: 62:d0:3f:14:ef:85
  7.##WILDVALUE##: 1
  8.##WILDVALUE##: 1
  9.##WILDVALUE##: 2 days, 14:54:42.04
  10.##WILDVALUE##: 1620014
  11.##WILDVALUE##: 4506
  12.##WILDVALUE##: 0
  13.##WILDVALUE##: 0
  14.##WILDVALUE##: 0
  15.##WILDVALUE##: 0
  16.##WILDVALUE##: 6639007
  17.##WILDVALUE##: 18857
  18.##WILDVALUE##: 0
  19.##WILDVALUE##: 0
  20.##WILDVALUE##: 0
  21.##WILDVALUE##: 0
  22.##WILDVALUE##: 0.0

For me, this is pretty easy to read and find individual values that I'm looking for. The three lines that resulted in the above output were these:
walkResult = Snmp.walkAsMap(host, Oid, props, timeout)
ifEntryRaw = snmpMapToTable(walkResult)
pprintSnmpWalkTable(ifEntryRaw)

Accessing the data is pretty easy now:
// ifXEntryRaw comes from a second walk, of the ifXTable (where ifName/ifAlias live)
ifEntryRaw.each {wildvalue, data ->
 println("""Interface ${wildvalue}:
 ifAlias: ${ifXEntryRaw[wildvalue]["1"]}
 ifDescr: ${data["2"]}
 ifType: ${data["3"]}
    """)
}

This gives us the following output:
Interface 6:
 ifAlias: virbr0-nic
 ifDescr: virbr0-nic
 ifType: 6

Interface 10:
 ifAlias: veth92ca0b0
 ifDescr: veth92ca0b0
 ifType: 6

Interface 7:
 ifAlias: br-6a2604a91ac1
 ifDescr: br-6a2604a91ac1
 ifType: 6

Interface 8:
 ifAlias: docker0
 ifDescr: docker0
 ifType: 6

Interface 12:
 ifAlias: veth2bc8fbd
 ifDescr: veth2bc8fbd
 ifType: 6

Interface 14:
 ifAlias: veth9a64fa1
 ifDescr: veth9a64fa1
 ifType: 6

Interface 16:
 ifAlias: veth497df7f
 ifDescr: veth497df7f
 ifType: 6

Interface 44:
 ifAlias: vethd608288
 ifDescr: vethd608288
 ifType: 6

Interface 5:
 ifAlias: virbr0
 ifDescr: virbr0
 ifType: 6

Interface 4:
 ifAlias: vmnet8
 ifDescr: vmnet8
 ifType: 6

Interface 3:
 ifAlias: vmnet1
 ifDescr: vmnet1
 ifType: 6

Interface 2:
 ifAlias: enp0s10
 ifDescr: NVIDIA Corporation MCP77 Ethernet
 ifType: 6

Interface 1:
 ifAlias: lo
 ifDescr: lo
 ifType: 24

Friday, July 5, 2019

Discovering Enumerated Properties

tl;dr - the scripts are here and here.

In the same vein as the previous post, this post will talk about SNMP enumeration. While the previous post talked about polling enumerated values and how to display them as intuitively as possible, this post will talk about polling more static values and using them as properties.

There are two levels of properties that can be obtained via SNMP: device level properties (that pertain to the whole device) and instance level properties (that pertain to a particular thing on the device, of which there may be more than one).  Polling those properties is easy; this post will go over how to improve the quality of the data stored so that the data can be intuitively used.

Polling properties vs. data points

Polling properties is easy. Knowing which properties to poll as properties and which to poll as data points is a separate discussion entirely. Suffice it to say that things that represent a characteristic of an object should be properties and things that represent a behavior should be data points. Depending on the tool, only data points can be alerted upon, so that may influence a decision to make something that would normally be a property also a data point. In other cases, data points can't be used to influence whether monitoring is enabled, so sometimes data points need to be properties too.

Assuming you're past the point where you've looked at the MIB and figured out which OIDs should be properties and which should be data points, the next thing to look at is whether or not the OIDs you will be storing as properties have enumerations.

Enumerations

An enumeration is used in SNMP to keep the simple in Simple Network Management Protocol. Instead of passing back a string of ASCII characters, a map is created that connects a meaningful string to a single integer, and that integer is what gets passed back to the NMS. Passing back a single integer is much simpler than passing back a string of characters of varying length.

For example, the admin status of an interface is found at 1.3.6.1.2.1.2.2.1.7. It's not a particularly good example of a property since it lends itself more to a data point, but having it as a property can allow for advanced filtering, which may only be available using properties.
ifAdminStatus OBJECT-TYPE
    SYNTAX  INTEGER {
                up(1),       -- ready to pass packets
                down(2),
                testing(3)   -- in some test mode
            }
    MAX-ACCESS  read-write
    STATUS      current
    DESCRIPTION
            "The desired state of the interface.  The testing(3) state
            indicates that no operational packets can be passed.  When a
            managed system initializes, all interfaces start with
            ifAdminStatus in the down(2) state.  As a result of either
            explicit management action or per configuration information
            retained by the managed system, ifAdminStatus is then
            changed to either the up(1) or testing(3) states (or remains
            in the down(2) state)."
    ::= { ifEntry 7 }
You can see in the definition of the OID that it has a custom syntax that allows for one of three values. When this OID is polled, either a 1, a 2, or a 3 is returned. It's up to the NMS to interpret the different values to understand the meaning. If the NMS tool has MIB compilation, this may have a certain level of automation. If it doesn't, interpretation is up to you. Obviously, storing a 1, 2, or 3 as a property could be sufficient, but it would be better to store the interpreted meaning, making use by humans easier and more intuitive.
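
The interpretation itself is nothing more than a lookup table from integer to meaning, with a fallback to the raw value. Sketched here in Python rather than the Groovy the actual scripts use:

# enumeration taken straight from the ifAdminStatus definition above
if_admin_status = {"1": "up", "2": "down", "3": "testing"}

raw_value = "2"  # what the SNMP poll returned
print(if_admin_status.get(raw_value, raw_value))  # "down"; unmapped values pass through as-is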

Interpretation of an instance property enumeration

So how do we interpret this? It's actually pretty easy, but it involves some scripting to process the returned data. It also involves some research into the MIB to find out all the meanings. Let's start with a simple case of interfaces. In most cases a MIB will contain a "Table" OID with an "Entry" child OID containing all the instances with whatever OIDs go along with the instances. In the case of interfaces (ignoring the ifXTable), the table is called "ifTable" at 1.3.6.1.2.1.2.2. You'll see that there is a child OID called "ifEntry" at 1.3.6.1.2.1.2.2.1.  This kind of table typically has an index along with perhaps some properties and data points as columns. Each row is an instance. We're going to ignore the data points for now and focus on the OIDs that would do well stored as properties. MTU(4), speed(5), and MAC address(6) are good items to store as properties. They require no interpretation. However, ifType(3) and ifAdminStatus(7) require some interpretation to be useful.
LogicMonitor's multi-instance datasource can be configured to use a Groovy script (yes, it's a real thing) to do auto-discovery, which is the mechanism that discovers poll instances and sets properties per poll instance. The concept is pretty simple: use the Groovy SNMP libraries to retrieve the data, use Groovy to interpret the data, then just print the data to standard output, one line per poll instance.
I recently wrote a script to do this. It's all self-documented with explanations and sample output and everything. Some points to consider, keyed to the following lines of the script:
  1. This line defines the address of the Entry table. Everything we will be polling happens to live under this branch of the OID tree.
  2. This is where we define which column of the ifEntry table contains the name that we should be using for each of our instances.
  3. This line shows the information from the MIB added to the script so that the script can interpret the meaning from the returned value for ifAdminStatus
  4. This line shows the information from the MIB added to the script so that the script can interpret the meaning from the returned value for ifOperStatus
  5. This line shows that we're still polling ifMtu as a property, but there is no interpretation available for the value. We could alternatively put ["1500":"Default (1500)"] instead of [:] to tell the script to add some meaning to the most common value of MTU
  6. ifPhysAddress is the MAC address and needs no interpretation
  7. This line shows an interpretation that isn't defined in the MIB. Instead of just passing the raw value through, we can provide our own interpretation of the speed to give some more intuitive values for common speeds. If the speed of the interface isn't in our map, the speed itself will be stored as the value. If it does happen to match one of the common speeds, the friendlier label is stored instead.
  8. This is where we start to define the enumeration for ifType. Turns out ifType has over 200 different enumerated values. Each of these is defined in the script so that the proper type name can be stored as a property.
  9. Here's where you can see some sample output against a device here in my house. Notice that there's one line for each port (both physical and logical).
  10. This line is a good example showing the interpreted values of ifAdminStatus, ifOperStatus, ifSpeed, and ifType.

Interpretation of device level properties

Polling and storing device level properties is a bit simpler mainly because looping through instances is not required. We do have to provide each OID, the name we want the property stored under, and the interpretation, if any. All of this comes from the MIB. The script to do this is here. This example comes from the mGuard MIB, which is from some work I did recently. However, the OIDs can be replaced with any OID from any MIB (notice there's not a baseOID), as long as the OID returns a single value because the script does an SNMP get on that OID.
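
The shape of that device-level script, sketched in Python (snmp_get here is a hypothetical stand-in for whatever your collector provides; the OID and map are illustrative):

# (oid, property_name, interpretation_map) triples drive the whole thing
PROPS = [
    ("1.3.6.1.2.1.2.2.1.7.1", "ifAdminStatus", {"1": "up", "2": "down", "3": "testing"}),
]

def discover_properties(snmp_get):
    for oid, name, interp in PROPS:
        raw = snmp_get(oid)                      # a single SNMP get per OID
        print(f"{name}={interp.get(raw, raw)}")  # interpreted value, else the raw value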

Conclusion

That's about it. These two scripts can be used to add real meaning to device and instance level properties. The only things that have to change are the data that come from the MIB itself. Perhaps one of these days I'll get around to writing a MIB parser that will output this information for all OIDs in the MIB. Yeah, when I have time.

Tuesday, July 2, 2019

Visualizing Status Codes

If you follow me on LinkedIn, you will have noticed that I changed jobs and moved from Houston to Austin. I'm now working for LogicMonitor as a Sales/Monitoring Engineer. It's a great job and I'm enjoying it a lot more than previous jobs. One of the advantages of this job is that I should have much more opportunity, desire, and content for blog posts.

If this is your first time here, know that this blog is not written for you. It's written for me. I increasingly need more and more reminders of how to do things. That goes especially for things that I devise since no one else knows it unless I tell them. This blog is primarily a place for me to keep those things written down.

Anyway, on to this blog post. LogicMonitor monitors IT infrastructure. After collecting data through various mechanisms, it stores the data in a big database in the cloud and then provides a cloud-hosted front-end website to display the data. Part of the display is graphs. Many times, the metrics being graphed lend themselves to being plotted on a Cartesian coordinate graph. However, sometimes the metric being polled is a status code. A good example of this is license status on Sophos' XG Firewall. This metric is found at .1.3.6.1.4.1.21067.2.1.3.4.1.
asSubStatus OBJECT-TYPE
    SYNTAX          SubscriptionStatusType
    MAX-ACCESS      read-only
    STATUS          current
    DESCRIPTION     " "
    ::= { liAntispam 1 }
The syntax is "SubscriptionStatusType", an enumerated type, meaning that only a number is returned but each possible value carries a meaning. Looking at the syntax definition in the MIB will help illustrate:
SubscriptionStatusType ::= TEXTUAL-CONVENTION
        STATUS            current
        DESCRIPTION       "enumerated type for subscription status"
        SYNTAX INTEGER {
                 trial          ( 1 ),
                 unsubscribed   ( 2 ),
                 subscribed     ( 3 ),
                 expired        ( 4 )
        }
So each different value returned indicates a particular state of the license subscription. It's not like a percentage where 100% is good and 0% is bad and there might be values in between. It's not like a rate, where a high number is fast and a low number is slow. It only has discrete values, and values in between don't actually have any meaning.
Normally, without putting in much effort, someone might easily create a graph that just plots this number, putting time on the x-axis and the value returned on the y-axis. This results in what you see here:
As you can see, it's not very helpful. It's a flat line because the status has been the same for the entire time range. That's ok. However, there's no real indicator of meaning. Some effort was made to add the meaning to the data point description (which appears in the tooltip). However, the enumeration is so long that it doesn't really fit in the tooltip. What does a 3 mean? Also, it's not illustrated here, but what happens if the value changes from 3 to 4? There would be two flat lines, one at 3 before the change and one at 4 after the change. But what would be shown at the change? Would it be a vertical line from 3 to 4? Would it be slightly slanted? Also not illustrated here, but what happens when larger timeframes are chosen and values are aggregated together (most often using an average)? Imagine that line at 3 that transitions to 4. What if that happened in the middle of the quarter and you viewed it at the end of the quarter? If this status was polled every hour, that would mean 2190 data points to display! That's too many. Almost every graphing solution would attempt to decrease the data points by grouping points and averaging every group. In the case of a quarterly timeframe, it might simplify by averaging all data points for a single day together. This could be fine for most days, except for the one where there was a change. That would show an average of 3's and 4's, yielding a value of 3.5. WTH does 3.5 mean? It gets worse if you have a 2 that transitions to a 3 which then later transitions to a 4. You could end up with an average of 3, indicating no problem at all!?!? It's not intuitive, and graphs need to be intuitive.

So, what do we do?

Well, we might be tempted to normalize the data. This is actually a very good idea. Let me explain: normalizing the data transforms it into a scale that is more intuitive. For example, we might say that we will normalize the data using the following rules:
  1. A value of 3 is good, so we'll call that 1
  2. Any other value is bad, so we'll call that 0
Pretty cool. That's a pretty good one. Any time everything is ok, we would plot a 1. Any other status is undesirable and we would plot a 0. Transitions are still ugly and potentially troublesome. If we make one tweak, it could allow us to put some context around the resulting values. What if we changed it to 100% instead of 1 and 0% instead of 0? If we did that, we could actually put some additional meaning behind the resulting values. Thing about it, if everything is good, you're plotting 100%. What does that mean? It means that for 100% of the timeframe displayed, the status was good. If the status is 4, we'd see a line down at 0% meaning that for 100% of the timeframe displayed, the status was not good, or conversely: the status was good 0% of the timeframe. We could also create another data point which is the inverse of our normalized data:
  1. If value is 3, plot 100, else plot 0
  2. Plot (100 - the value from above)
Doing this would also let us use a more intuitive graph called a stacked area graph. We could plot our normalized data using a pleasant color like blue or green and plot the inverse data using a warning color like red or orange. This would give us two series of data that complement each other and, when plotted as a stacked graph, would look like the second graph here (the first graph is a status code plot for reference):
See how much more intuitive that is? The first graph in the above picture is a plot of the raw status code. How easy is it to know when things aren't good (without reading the axis labels)? Doable but not instantly intuitive. Imagine you had 12 of these kinds of graphs on a single dashboard on a wall. Would you want to take the time and effort to read the axis labels of each one to know how things are going? No. Now look at the second graph. It's pretty easy to tell that there were two different problems, one between 8pm and 11am and another between 1pm and 5pm. We could even remove the y-axis labels and you'd still probably be able to tell with a glance how things are doing. Imagine 12 different graphs like this. How easy is it to see if there's a problem? Just look for the red!

There are only two drawbacks. Have you noticed? We see that there are two problems, but are they the same problem? Actually, they're not. Also, are they the same severity of problem? The morning problem is that status goes from "subscribed" to "expired". The afternoon problem is that the status goes from "subscribed" (did you notice that it returned to a good status at noon?) to "trial". The morning problem is worse than the afternoon problem.  There's a way to visualize this so that it all looks good. Let me explain:

Essentially we want to normalize the data but still keep as much detail as possible. We'll need four series, each with its own color. For any one data point, we'll only have a value of 100% in one of the series; all the others will have nothing. That means that for the timeframe a data point represents, whichever series has a value of 100% indicates the status for that moment. Let's look at the normalization rules:
  1. If the status code is 1, return 100, else return nothing (100 here means Trial status)
  2. If the status code is 2, return 100, else return nothing (100 here means Unsubscribed status)
  3. If the status code is 3, return 100, else return nothing (100 here means Subscribed status)
  4. If the status code is 4, return 100, else return nothing (100 here means Expired status)
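
Those four rules are trivial to express in code. A quick Python sketch (the series names are mine; in LogicMonitor the "return nothing" part is the unkn() seen in the datapoint formulas below):

STATUS_NAMES = {1: "trial", 2: "unsubscribed", 3: "subscribed", 4: "expired"}

def normalize(status_code):
    # exactly one series gets 100; every other series gets nothing
    return {name: (100 if code == status_code else None)
            for code, name in STATUS_NAMES.items()}

print(normalize(3))
# {'trial': None, 'unsubscribed': None, 'subscribed': 100, 'expired': None}
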
This is what it would look like (graph on the right, first two shown for reference):
Notice how the problem in the morning is highlighted with a red and the problem in the afternoon is highlighted with a yellow? Easy to tell that there are two problems, that they are different, and that the morning problem is the more severe. 

This is what the final version would look like in the LogicMonitor web gui (not interesting I know since the status code didn't change the whole time I was building this):

Here's how it's built in the GUI:
Notes on the screenshot:

  • it shows line types of "Area" but they should be "Stacked" to display properly
  • the formulas should be `if(in(StatusCode,3),100,unkn())`

Monday, June 3, 2019

How to share a ton of stuff with someone else

If you have the stuff to share:

  1. Download and install Resilio Sync
  2. After opening the app, click the plus sign in the top left corner and select "Standard Folder"
  3. Browse to the folder you want to share with someone else and click "Open"
  4. A new entry will appear in your list of folders. Three dots will appear at the right end of this entry when you mouse over the row. Click the three dots and select "Copy Read Only key" or "Copy Read & Write key", depending on whether you want the recipient to be able to change what's in the shared folder. The key is now in your clipboard.
  5. Send the key to the person you want to share with.

If you have received a key:

  1. Download and install Resilio Sync
  2. After opening the app, click the plus sign in the top left corner and select "Enter key or link"
  3. Paste in the key that was sent to you
  4. Browse to the folder you want to synchronize and select Open.
  5. Go grab a coffee and chips and wait until the synchronization finishes.

Disclaimer: using any protocol to transmit/receive data that you are not legally allowed to transmit/receive is obviously illegal. I'm not responsible if you use this to do something illegal.

Tuesday, April 16, 2019

One Trailer for Each Piece of the Saga

Saw an article bringing together all the trailers for all the Star Wars productions. They did episodes 1-9 first, then the ancillary productions. I decided to put them into a YouTube playlist. You're welcome.

Don't forget the one that you can't find on YouTube; it goes after Episode 6 and before 7.

Friday, March 22, 2019

WebGL Globe

I have spent some time now working in the Oil Industry. I've been working in technology, so I haven't had much to do with actual oil. However, I did come across this really cool visualization of oil imports and exports when I was searching for a way to visualize some network performance data. It's based on a bit of code by Google called the WebGL Globe. WebGL Globe is built on a technology called WebGL, which is like OpenGL except that it runs natively in the browser. It allows for interaction like what you would expect from a 3D graphics game but right in the browser with web code. Some of the examples are pretty cool and load extremely quickly because they don't really use the normal document structure. (Some other examples of really neat uses of WebGL and/or three.js, a library that facilitates using WebGL: Galactic Neighbors, the Internet Safety game for Kids by Google, and Lubricious.)

The concept is pretty expansive when you think about the kinds of data that can be shown. WebGL Globe combines data that has several dimensions of information encoded and visualized:

  1. Node Location - latitude and longitude
  2. Node connections - showing that two nodes are connected in some way (shown as an arc when you click on a country in the World of Oil visualization).
  3. Connection intensity - this one can have multiple dimensions depending on how creative you get. For example:
    1. the line color itself can indicate some sort of status (red/green/yellow/orange). It's even possible that you could show percentages of the arc length as certain colors to indicate distribution of the status (i.e. 10% is bad while 90% is good might mean a line that is mostly green with a segment of length 10% that is red).
    2. the thickness of the arc can be another dimension, indicating something like volume
    3. the maximum altitude of the arc can be another, indicating sample size
All this is neat, but what good does it do anyone? Well, if you've watched my Analyzing TCP Application Performance video, you know that monitoring response time is the most important part of any infrastructure monitoring. If you're doing application response time monitoring, you should have data for each transaction showing how each transaction performed. That's a huge amount of data (big data, anyone?).

Visualizing that data requires a few steps of summarization and grouping. First of all, every transaction's metrics should be retained individually so that you can dive into the details if needed. Second, you can start grouping by user and summarizing into time buckets (1 minute or 5 minutes). Another level of grouping that can be done is to group by network location. This can legitimately be done because most users at a particular location are going to share a very high percentage of the network path that gets them to the services they are consuming.

Let me rephrase: You should have data that describes sources, destinations, and the performance between them. Sound familiar? What I'm envisioning is taking this data and plotting it out using WebGL Globe. Each network location and each service hosting location is a node (hm, could cloud services actually be represented as clouds?!?). They'd be connected with arcs. Thickness of the arcs could represent the number of transactions between those two nodes. Height of the arc could represent the recent negative variability in performance or a way to highlight the selected arc. Color of the arc could show a measure of the deviance in performance. 

Click on a node and you'd see the arcs from all the user locations consuming services from that site (if any) and the arcs to all the service hosting locations consumed by that site (if any) pop up in altitude (normally they'd be displayed at sea level or hidden based on a GUI setting). This is similar to the action you see in the Global Oil visualization when you click on a country (you see the countries connected to it). 

From there, if you saw that all the arcs were showing problems, you would know (from problem domain isolation) that there is a problem common to all users at that site. You could click on the node again and get into performance stats for the infrastructure at that location (from a tool like LogicMonitor). Following the colors should get you to the root of the problem pretty quickly.

Alternatively, if you saw a problem in only one of the arcs (or a small selection) follow problem domain isolation tactics and click on the arc having the problem. That should dive you into the infrastructure connecting those two nodes so you can find the problem.

Nodes themselves can have problems that don't evidence themselves outside the node. If that's the case, you should be going into the node to look at why there are performance issues. You'd know to go to there because the node icon itself would be showing some color indicator that there's a problem.

Friday, March 8, 2019

Using Docker

Just a couple articles that have been camping out in my browser since I found them. They were very useful in helping me get docker doing useful stuff, like Ansible.

https://blog.docker.com/2016/09/build-your-first-docker-windows-server-container/

https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04

REPOSITORY          TAG       IMAGE ID          CREATED         SIZE
alpine/git          latest    a1d22e4b51ad      10 days ago     27.5MB
ansible-docker      latest    25b39c3ffd15      2 weeks ago     153MB
ubuntu              latest    47b19964fb50      4 weeks ago     88.1MB

Ansible-docker is an image I modified to suit my needs. You can use it by either cloning the git repository and building from source or just pulling it from the hub:
docker pull sweenig/ansible-docker

I used this to know how to push images to docker: https://ropenscilabs.github.io/r-docker-tutorial/04-Dockerhub.html

alpine/git is the easiest way I've found to run git on Windows (hint: it's not actually running in Windows but in Linux inside the container).

Thursday, March 7, 2019

Object Oriented Programming

Today's blog post may have been posted before, but it's a really good one. If you are looking to get into object oriented programming, you should give this a look: https://inventwithpython.com/blog/2014/12/02/why-is-object-oriented-programming-useful-with-a-role-playing-game-example/.

Tuesday, May 8, 2018

HTML Maps, Continued

Continuing a previous post, I decided to add some CSS to make it obvious that you can click on the mapped links.  Here's the CSS:

<style>
  a:hover {border:1px dotted gray;}
  a {position:absolute;}
</style>

Obviously, you may want to use CSS selectors to make sure that only your mapped links on images get styled this way.

Wednesday, December 13, 2017

Online circuit schematic design and simulation

This is pretty cool, although I haven't actually had a chance to use it since most of my circuits don't involve much signal processing and are quite simple DC circuits. However, LushProjects has this circuit simulator that is completely online and free to use (unlike SPICE). The neat thing is that you should be able to embed your own circuit onto any web page using this tool and an iframe.

Tuesday, December 12, 2017

Hard puzzle with an easy solution

I've been keeping this tab open in my browser because I wanted to work out the solution myself. I finally figured it out (2 years later). Don't give in and look at the solution without giving it a good try. Hint: you can do it without any trigonometric functions and without Pythagoras' help.

This is known as Langley's Adventitious Angles. A good visual solution can be seen here (warning: Flash required).

Monday, December 11, 2017

Boolean Arithmetic

I had to explain Boolean arithmetic the other day to non-makers. These guys didn't have any real experience with electric logic circuits, but the pictures here seemed to help.

Friday, December 8, 2017

Prusa's ColorPrint tool

When 3D printing, there's usually a jump in cost and complexity for printers that can print multiple colors. As a workaround, you can pause the printer, switch the filament to a different color, print a few layers with that different color, then pause, switch filament again, and resume printing. This can be a very tricky thing to do manually, so obviously, there is a tool to do it. Here are some prints by a buddy of mine that utilized this tool. He prints a black surface with a cutout for the image, prints a few layers of a different color (glow in the dark in this case), then resumes printing in black. He didn't have to design the model with any major modifications, just the first few layers cut out so that the glow in the dark layer can shine through.

Thursday, December 7, 2017

PiVPN

Rolling your own VPN can have various benefits. The biggest of which is that when you're on an unsecured network (i.e. any WiFi network that you don't own yourself), your traffic is encrypted back to your home and then goes out to the internet. This means that you don't have to trust that the WiFi owner (think Starbucks or McDonald's) isn't snooping on your packets. It doesn't matter if they do snoop, because your packets are encrypted and nobody can read them unless they are you or your RaspberryPi at home.

Before you contest, yes, I know that any form of encryption can eventually be beaten. If you're that paranoid about someone decrypting your packets (which would take years by the way) you should be off the grid.

That said, I looked into setting up a VPN option for myself and eventually found PiVPN. This little one-line installer sets everything up on your RPi so that it becomes a VPN endpoint. Use it to generate a certificate, which you can load on your device (I've tested on iOS and Windows 10) into the freely available OpenVPN client.

I have since found, but not installed/tried a web GUI that should let me manage PiVPN through a browser. I hope to try this eventually after I have some free time. So, probably next year!

Tuesday, December 5, 2017

PressPi

I wanted to figure out the best way to put WordPress on a RaspberryPi. Turns out the best way is an image called PressPi. Load this up, lock down all the security, load CertBot so it's all running over https and you're good to go.

In case you're interested, here's a good set of instructions for installing Wordpress on Ubuntu 16.04. So many instructions! You might need phpMyAdmin as well.

Monday, December 4, 2017

HTML Maps

I recently needed to overlay a bunch of links on top of an image. This is done one of two ways, CSS being the more modern way. Essentially, you create a div with a bunch of elements inside it, all of which are positioned absolutely.

<div style="position:relative; height:786px; width:537px; background:url(myimage.png) 0 0 no-repeat;">
     <a style="position:absolute; top:393px; left:147px; width:87px; height:69px;" title="asdf" alt="asdf" href="asdf" target="_self"></a>
</div>

Instead of mapping out all the positions manually, there's a really neat tool that will let you do it right on top of your own image and then generate both the HTML map code as well as the simpler CSS code to render it on a webpage.