I'm an engineer who doesn't care for a lot of fluff for fluff's sake.
Thursday, December 19, 2024
Premises vs. Premise
Thursday, December 12, 2024
Using Python Virtual Environments
Setup:
Linux Usage:
Powershell Usage:
Either Usage:
Exit:
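A minimal sketch of each step, assuming Python 3 on the PATH and an environment named venv:
python -m venv venv               # Setup
source venv/bin/activate          # Linux usage
.\venv\Scripts\Activate.ps1       # Powershell usage
python -m pip install requests    # Either usage: python/pip now run inside the venv (requests is just an example)
deactivate                        # Exit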
Friday, December 6, 2024
Accessing LM with a bearer token through Postman
- Download and install Postman, or use http://postman.com
- Launch Postman and create a new collection that will be used for all LogicMonitor API requests:
- In Postman, click Import and paste https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/swagger.json. This should start the import process.
- Before clicking the import button, click the gear to view import settings.
- Make sure "Always inherit authentication" is checked on.
- Configure Authentication:
- Go to the collection root and select the auth tab.
- Change the auth type to "Bearer Token" and put {{bearer}} as the token.
- Go to the scripts tab and add this to the pre-request script:
pm.request.headers.add({key: 'X-Version', value: '3'})
- Save the collection.
- Create a new environment with the following variables. Set the type to 'secret' for sensitive credentials like the bearer token.
- url – https://<portalname>.logicmonitor.com/santaba/rest
- If you want to work with the LM Ingestion API, duplicate this environment and change the url to 'https://<portalname>.logicmonitor.com/rest' (without "santaba")
- bearer – secret – For the current value, be sure to prepend the token with "bearer " (with space)
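Once that's in place, every request in the collection inherits the bearer auth and the X-Version header. For reference, the equivalent request outside Postman; a minimal sketch in Python, where the portal name and the /device/devices endpoint are just examples:
import requests

portal = "yourportal"  # example portal name
token = "..."          # your API token (no "bearer " prefix needed here)

resp = requests.get(
    f"https://{portal}.logicmonitor.com/santaba/rest/device/devices",
    headers={
        "Authorization": f"Bearer {token}",  # what Postman builds from {{bearer}}
        "X-Version": "3",                    # what the pre-request script adds
        "Accept": "application/json",
    },
)
print(resp.status_code, resp.json())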
Thursday, November 28, 2024
Adding Space to an Ubuntu VM
I ran out of space on an Ubuntu VM today and had to go through the process of expanding the hard drive.
- First, we shut down the VM and reconfigured VMware to let it have a larger hard drive.
- Another thing to note: we had used the default settings when configuring the disk during the Ubuntu install, which matters for the LVM steps below.
- We downloaded the ISO for GParted Live, an entirely self-contained OS with GParted (and a few other tools) installed, mounted it in the optical drive of the VM, and booted it up.
- When it finished booting, we used GParted to expand the partition to use the extra space on the drive.
- Then we ejected the ISO and booted up as normal.
- Since we used the default options when installing Ubuntu, it uses a logical volume. We expanded the logical volume to encompass the new expanded size of the partition on the physical (virtual) drive using this:
sudo lvextend -l 100%VG /dev/ubuntu-vg/ubuntu-lv
- Then we increased the size of the file system to use the available space in the logical volume using this:
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
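To verify that each layer picked up the new space, a few read-only commands are handy; a quick sketch, assuming the default Ubuntu LVM names used above:
sudo pvs    # physical volume should reflect the expanded partition
sudo lvs    # logical volume should reflect the lvextend
df -h /     # file system should reflect the resize2fs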
Thursday, November 21, 2024
Freeing up Disk Space when using Docker
It turns out there's a lot of temporary data used by Docker. To clean it up, try the following (courtesy of Mr. Davron):
- Open an elevated Powershell or CMD prompt
- `docker system prune --all`
- Right mouse click on docker desktop in system tray -> quit
- `wsl --shutdown`
- `Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full`
- Reboot
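Before pruning (or afterward, to confirm the cleanup), it's worth checking how much space is actually reclaimable. Docker breaks this down by images, containers, local volumes, and build cache:
docker system df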
Thursday, November 14, 2024
Using Ansible in Docker (without installing Ansible)
tl;dr:
Powershell:
docker run --rm -v ${PWD}:/ansible/playbooks sweenig/ansible-docker playbook.yml -i inventory
Linux:
docker run --rm -v $(pwd):/ansible/playbooks sweenig/ansible-docker playbook.yml -i inventory
I love Ansible. I love Docker. Running Ansible in Docker not only makes me melt but it means I don't have to install anything but Docker to run Ansible. Anywhere Docker works, Ansible works. And since I already have Docker installed on any machine I call mine...
I built a lab that shows how this can be used. The lab spins up 4 Ubuntu servers, then uses Ansible in a docker container to install a few things. Here's the shortcut to get everything up and running if you already have Docker installed:
> docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git
> cd .\docker-ansible-playbook\ansible_lab\
> docker compose up -d
> docker run --rm -v ${PWD}:/ansible/playbooks --network=ansible_lab_default sweenig/ansible-docker playbook.yml -i inventory
With these four commands, you:
- Pull down the lab files
- Switch into the lab directory
- Start up 4 containers (the equivalent of starting up 4 micro VMs)
- Run the Ansible playbook
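If you run playbooks this way often, a small wrapper makes the container invocation feel like a native command. A convenience sketch for Powershell (bash users can define an alias using $(pwd) instead):
function ansible-playbook { docker run --rm -v ${PWD}:/ansible/playbooks sweenig/ansible-docker $args }
ansible-playbook playbook.yml -i inventory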
Thursday, November 7, 2024
Using Git without Installing it (through Docker)
If you follow this blog, you might already know that I'm a Docker fanboy. Docker containers are like micro-VMs, just lighter and faster. Git is version control software. The nice thing about version control software, or more specifically distributed version control software like Git, is that it not only stores blobs of text or bytes, it also lets you build workflows where people contribute edits to the stored text, complete with approvals and multiple branches.
Installing Git isn't always needed. Sometimes I just need to clone a repo. If I have Docker installed, I do this (in Powershell):
docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git
Linux/Mac is just as easy:
docker run -it --rm -v ${HOME}:/root -v $(pwd):/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git
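The same pattern covers other Git subcommands. The alpine/git image uses /git as its working directory (that's what the clone above relies on), so mounting an existing repo there lets you run things like:
docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git status
docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git pull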
Tuesday, October 29, 2024
Working with BeyondTrust's Privileged Remote Access API
I recently started a trial with BeyondTrust for their Privileged Remote Access product (fka Bomgar). It's an RMM. As with any tool I have, I'm looking to automate it. We have a system of record (SOR) where our targets reside. Each target must have a record in PRA before you can use PRA to remote into it. I'll be attempting to automate synchronization of our devices from our SOR to PRA using the API. Our trial involved the SaaS version of PRA.
Naturally, my first step was to download their collection into Postman and get started. Actually, the first thing I did was generate the API credentials, which came in the form of an ID and secret. Then I imported the collection into Postman. Unfortunately, I found it a little lacking, so I decided to enhance it using some techniques I've learned. This is not a slight against BeyondTrust. Postman is not their product and I didn't expect their collection to be any more than it was. However, that doesn't mean it couldn't be improved. ;-)
First things first, I created an environment. In it I created the ClientID, ClientSecret, and baseUrl variables. It looks like the collection file is dynamically generated from my trial portal, because the collection had a variable called baseUrl which pointed specifically to my trial portal. Because customer data should be in the environment and the collection should reference it using variables, I moved the value to the baseUrl environment variable and deleted the collection variable so that the environment variable would be used instead.
BTPRA uses OAuth2.0, so to make any requests you have to first generate an ephemeral token which will be used as a bearer token in any subsequent requests. The collection didn't contain a request to obtain this ephemeral token, so I built one called "START HERE".
The documentation states to make a POST request to https://access.beyondtrustcloud.com/oauth2/token. Unfortunately, this URL doesn't fall under the baseUrl, so I created a new environment variable called authURL and gave it the value of https://access.example.com. Obviously not access.example.com, but the URL to my portal.
For the "START HERE" request, I have to include a basic authorization header. I also have to include a grant_type in the body of my post request. The other thing I want to do is parse the response and store the ephemeral access token in a new environment variable. Here's how I did it.
- Create a new POST request
- Set the url to {{authURL}}/oauth2/token
- On the Authorization tab
- Set the Auth Type to "Basic Auth"
- Set the Username to {{ClientID}}
- Set the Password to {{ClientSecret}}
- On the Headers tab, add a header:
- "Accept" : "application/json"
This tells Postman to expect the response to be JSON, which we need it to be.
- On the Body tab:
- Pick "x-www-form-urlencoded" (there are other ways to do this, I know, but this works fine)
- Add "grant_type" : "client_credentials"
- On the Scripts tab, we're going to write a script that will parse the response and set an environment variable containing our ephemeral access token.
- Select "Post-response" and enter the following script:
try {
var json = JSON.parse(pm.response.text());
pm.environment.set("bearToken", json.access_token);
} catch (e) {console.log(e);}
This configuration should keep everything about the API separate from my specific settings, meaning I could delete and reimport the collection at any time (just don't delete the START HERE request).
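For reference, the same token request is easy to reproduce outside Postman; a curl sketch using the same placeholder host:
curl -s -u "$CLIENT_ID:$CLIENT_SECRET" \
  -H "Accept: application/json" \
  -d "grant_type=client_credentials" \
  https://access.example.com/oauth2/token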
Thursday, October 24, 2024
Favorite way to troubleshoot Python scripts
I recently discovered a great way to make sure that Python scripts give you the information you need when there's a failure. I often run Python scripts inside Docker containers. They either log locally to a file or send logs to a log aggregator (LM Logs). As such, there's not always someone monitoring the stdout pipe of the Python script. If it fails, often the best piece of information is captured using a try/except block. You can have extra data printed out to stdout or even sent out to the log aggregator. This would look something like this:
>>> try:
... {}["shrubbery"]
... except Exception as e:
... print(e)
...
'shrubbery'
Now that wasn't helpful, was it? If the only logs we had seen were logs about successful operation, and then suddenly a log that says "shrubbery", we really wouldn't know what was going on. Luckily, there are a few things we can add to the exception output that clarify things:
>>> import sys
>>> try:
... {}["shrubbery"]
... except Exception as e:
... print(f"There was an unexpected error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")
...
There was an unexpected error: 'shrubbery':
Error on line 2
Importing the "sys" library gives us some options, one of which is the line number on which the failure happened, the failure that popped us out of our try block into the except block. This still doesn't give us everything we might want, but the line number gives us a great place to start looking at our code to see what happened.
We can do better:
>>> import sys
>>> try:
... {}["shrubbery"]
... except Exception as e:
... print(f"There was an unexpected {type(e).__name__} error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")
...
There was an unexpected KeyError error: 'shrubbery':
Error on line 2
Ah, very nice. Now we know the type of error, a KeyError, we know the key that caused the error, and we know the line in our code where the error is happening.
There are more options for outputting more data. However, I haven't found more data to be that useful. With this information, I have just what I need and no extra fluff to work through.
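Since I use this in nearly every script, the pattern wraps neatly into a helper so each except block stays short; a minimal sketch (the function name is my own):
import sys

def log_error(e):
    # Same pattern as above: error type, message, and the offending line number
    print(f"There was an unexpected {type(e).__name__} error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")

try:
    {}["shrubbery"]
except Exception as e:
    log_error(e)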
Thursday, October 3, 2024
Capturing packets on a Windows server without installing anything
Ever wanted to do a pcap on a Windows server, but didn't have permission to install an app like Wireshark? Here's how you do it:
- Start an elevated command prompt or powershell terminal.
- Run `netsh trace start capture=yes tracefile=C:\temp\packetcapture.etl`
- Wait until you believe the desired packets have been captured or reproduce the issue you want to capture.
- Run `netsh trace stop`
- Your packet capture file will be in c:\temp called packetcapture.etl. You'll need to convert this into a file that Wireshark can open. In the past, you could open it with Microsoft Message Analyzer, but it isn't available anymore. Instead, you can use Microsoft's etl2pcapng to convert it. Simply download the release and run:
`etl2pcapng.exe in.etl out.pcapng`
Where in.etl points to the file output from your trace and out.pcapng points to the place where you want your output file to go.
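With the trace file from the earlier steps, that would look like this (the output path is up to you):
etl2pcapng.exe C:\temp\packetcapture.etl C:\temp\packetcapture.pcapng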