Thursday, December 19, 2024

Premises vs. Premise

  • One theory or idea is a premise.
  • Two or more theories or ideas are premises.
  • Premises is also a plural-only noun referring to a building or buildings together with the land they occupy.

Thursday, December 12, 2024

Using Python Virtual Environments

tl;dr:

Setup:

mkdir my-new-project
cd my-new-project
python -m venv env

Linux Usage:

source env/bin/activate

PowerShell Usage:

./env/Scripts/Activate.ps1

Either Usage:

pip install requests # and anything else you need

Exit:

deactivate
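
If you're ever unsure whether the environment is active, Python itself can tell you. Here's a quick check using nothing but the standard library:

import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the base Python installation.
if sys.prefix != sys.base_prefix:
    print(f"Running inside a venv: {sys.prefix}")
else:
    print("Not running inside a venv")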

Friday, December 6, 2024

Accessing LM with a bearer token through Postman

I'm a day late on this one, but at least it's good info:

Original content is here for now.
  1. Download and install Postman, or use http://postman.com
  2. Launch Postman and create a new collection that will be used for all LogicMonitor API requests:
    1. In Postman, click Import and paste https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/swagger.json. This should start the import process. 
    2. Before clicking the import button, click the gear to view import settings. 
    3. Make sure "Always inherit authentication" is enabled.
  3. Configure Authentication:
    1. Go to the collection root and select the auth tab.
    2. Change the auth type to "Bearer Token" and put {{bearer}} as the token.
    3. Go to the scripts tab and add this to the pre-request script:
      pm.request.headers.add({key: 'X-Version', value: '3'})
    4. Save the collection.
  4. Create a new environment with the following variables; strictly speaking you only need the one for the bearer token. Set the type to 'secret' for sensitive credentials.
    1. url – https://<portalname>.logicmonitor.com/santaba/rest
      1. If you want to work with the LM Ingestion API, duplicate this environment and change the url to 'https://<portalname>.logicmonitor.com/rest' (without "santaba")
    2. bearer – secret – For the current value, be sure to prepend the token with "bearer " (with space)
And that should do it. You should be able to open any request already defined in the collection you imported and run it.
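
If you'd rather sanity-check the token outside Postman, here's a minimal Python sketch of the same kind of request. The /device/devices resource and the exact header handling are my assumptions, so adjust them to match your portal and token:

import requests

PORTAL = "portalname"   # hypothetical portal name
TOKEN = "..."           # your LM bearer token

# Same pieces as the Postman setup: the santaba/rest base URL,
# a bearer Authorization header, and the X-Version: 3 header.
resp = requests.get(
    f"https://{PORTAL}.logicmonitor.com/santaba/rest/device/devices",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "X-Version": "3",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())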

Thursday, November 28, 2024

Adding Space to an Ubuntu VM

I ran out of space on an Ubuntu VM today and had to go through the process of expanding the hard drive. 

  1. First, we shut down the VM and reconfigured VMware to let it have a larger hard drive. 
  2. Note that we had used the default disk settings when installing Ubuntu.
  3. We downloaded the GParted Live ISO, an entirely self-contained OS with GParted (and a few other tools) installed, mounted it in the VM's optical drive, and booted from it.
  4. When it finished booting, we used GParted to expand the partition to use the extra space on the drive.
  5. Then we ejected the ISO and booted up as normal.
  6. Since we used the default options when installing Ubuntu, the root file system lives on an LVM logical volume. We expanded the logical volume to fill the newly enlarged partition on the physical (virtual) drive using this:
    sudo lvextend -l 100%VG /dev/mapper/ubuntu--vg-ubuntu--lv
  7. Then we increased the size of the file system to use the available space in the logical volume using this:
    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
A simple df -h later and we could see that the drive now had access to the extended space.
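
If you'd rather script that last check, Python's standard library can report the same numbers df does. Just a quick sketch, not part of the resize procedure itself:

import shutil

# Report the capacity of the root file system after the resize
total, used, free = shutil.disk_usage("/")
print(f"Total: {total / 1024**3:.1f} GiB, used: {used / 1024**3:.1f} GiB, free: {free / 1024**3:.1f} GiB")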

Thursday, November 21, 2024

Freeing up Disk Space when using Docker

It turns out Docker keeps a lot of temporary data around. To clean it up on Docker Desktop for Windows (WSL 2 backend), try the following (courtesy Mr. Davron):

  1. Open an elevated PowerShell or CMD prompt
  2. `docker system prune --all`
  3. Right mouse click on docker desktop in system tray -> quit
  4. `wsl --shutdown`
  5. `Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full`
  6. Reboot
This freed up about 25GB on my machine.
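
If you want to quantify the savings, you can check the size of the WSL disk image before and after. A quick Python sketch, assuming the default Docker Desktop path used in step 5:

import os
from pathlib import Path

# Size of Docker Desktop's WSL data disk (default location from step 5)
vhdx = Path(os.environ["LOCALAPPDATA"]) / "Docker" / "wsl" / "data" / "ext4.vhdx"
print(f"{vhdx} is {vhdx.stat().st_size / 1024**3:.1f} GiB")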

Thursday, November 14, 2024

Using Ansible in Docker (without installing Ansible)

tl;dr:

PowerShell:

docker run --rm -v ${PWD}:/ansible/playbooks sweenig/ansible-docker playbook.yml -i inventory

Linux:

docker run --rm -v $(pwd):/ansible/playbooks sweenig/ansible-docker playbook.yml -i inventory

I love Ansible. I love Docker. Running Ansible in Docker not only makes me melt, but it also means I don't have to install anything but Docker to run Ansible. Anywhere Docker works, Ansible works. And since I already have Docker installed on any machine I call mine...

I built a lab that shows how this can be used. The lab spins up 4 Ubuntu servers, then uses Ansible in a docker container to install a few things. Here's the shortcut to get everything up and running if you already have Docker installed:

> docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git
> cd .\docker-ansible-playbook\ansible_lab\
> docker compose up -d
> docker run --rm -v ${PWD}:/ansible/playbooks --network=ansible_lab_default sweenig/ansible-docker playbook.yml -i inventory

With these four commands, you:

  • Pull down the lab files
  • Switch into the lab directory
  • Start up 4 containers (the equivalent of starting up 4 micro VMs)
  • Run the Ansible playbook
You used Git and Ansible and had neither installed.

To shut down the lab, run this in the ansible_lab directory:
docker compose down

Thursday, November 7, 2024

Using Git without Installing it (through Docker)

If you follow this blog, you might already know that I'm a Docker fanboy. Docker containers are like micro-VMs, just lighter and faster. Git is version control software. The nice thing about version control software, or more specifically distributed version control software like Git, is that it not only stores blobs of text or bytes but also lets you build workflows where people can contribute edits to the stored text, with approvals and multiple branches.

Installing Git isn't always needed. Sometimes I just need to clone a repo. If I have Docker installed, I do this (in PowerShell):

docker run -it --rm -v ${HOME}:/root -v ${PWD}:/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git

Linux/Mac is just as easy:

docker run -it --rm -v ${HOME}:/root -v $(pwd):/git alpine/git clone https://github.com/sweenig/docker-ansible-playbook.git

If you want, you can even set up an alias:

Linux:
alias git='docker run -ti --rm -v $(pwd):/git -v $HOME/.ssh:/root/.ssh alpine/git'

Or in Windows with PowerShell:
function git {
  $allArgs = $PsBoundParameters.values + $args
  docker run --rm -it -v ${PWD}:/git -v ${HOME}:/root alpine/git $allArgs
}
With PowerShell, since the underlying container is running Linux, make sure that any paths you pass as arguments (such as playbook and inventory files) use forward slashes rather than backslashes.

Tuesday, October 29, 2024

Working with BeyondTrust's Privileged Remote Access API

I recently started a trial with BeyondTrust for their Privileged Remote Access product (formerly Bomgar). It's an RMM. As with any tool I have, I'm looking to automate it. We have a system of record (SOR) where our targets reside, and each target needs a record in PRA before PRA can be used to remote into it. I'll be attempting to automate synchronization of our devices from our SOR to PRA using the API. Our trial involved the SaaS version of PRA.

Naturally, my first step was to download their collection into Postman and get started. Actually, the first thing I did was generate the API credentials, which came in the form of an ID and secret. Then I imported the collection into Postman. Unfortunately, I found it a little lacking, so I decided to enhance it using some techniques I've learned. This is not a slight against Beyond Trust. Postman is not their product and I didn't expect their collection to be any more than it was. However, that doesn't mean it couldn't be improved. ;-)

First things first, I created an environment. In it I created the ClientID, ClientSecret, and baseUrl variables. It looks like the collection file is dynamically generated from my trial portal, because the collection had a variable called baseUrl which pointed specifically to my trial portal. Because customer data should be in the environment and the collection should reference it using variables, I moved the value to the baseUrl environment variable and deleted the collection variable so that the environment variable would be used instead. 

BTPRA uses OAuth2.0, so to make any requests you have to first generate an ephemeral token which will be used as a bearer token in any subsequent requests. The collection didn't contain a request to obtain this ephemeral token, so I built one called "START HERE". 

The documentation says to make a POST request to https://access.beyondtrustcloud.com/oauth2/token. Unfortunately, this URL is shorter than the baseUrl, so I created a new environment variable called authURL and gave it the value https://access.example.com. Obviously not access.example.com, but the corresponding URL for my portal.

For the "START HERE" request, I have to include a basic authorization header. I also have to include a grant_type in the body of my post request. The other thing I want to do is parse the response and store the ephemeral access token in a new environment variable. Here's how I did it.

  1. Create a new POST request
  2. Set the url to {{authURL}}/oauth2/token
  3. On the Authorization tab
    1. Set the Auth Type to "Basic Auth"
    2. Set the Username to {{ClientID}}
    3. Set the Password to {{ClientSecret}}
  4. On the Headers tab, add a header:
    1. "Accept" : "application/json"
      This tells the server we want the response as JSON, which we need it to be.
  5. On the Body tab:
    1. Pick "x-www-form-urlencoded" (there are other ways to do this, I know, but this works fine)
    2. Add "grant_type" : "client_credentials"
  6. On the Scripts tab, we're going to write a script that will parse the response and set an environment variable containing our ephemeral access token.
    1. Select "Post-response" and enter the following script:
      try {
          var json = JSON.parse(pm.response.text());
          pm.environment.set("bearToken", json.access_token);
      } catch (e) {console.log(e);}
Save and run your request. If you set everything up right, you should see a response containing your token. If you check your environment, you should see a new variable called bearToken, with the value of the access_token in the response. 

One thing remains: we need to tell all requests in the collection to use this token. Luckily this is pretty easy to do since all the requests already inherit the authorization from their parent, the collection. Opening the collection, I went to the Authorization tab and set the "Auth Type" to "Bearer Token". Then in the Token field, I put {{bearToken}}.

And that's it. Now you should be able to open any request from the collection and run it, providing any parameters the request requires. 

This configuration should keep everything about the API separate from my specific settings, meaning I could delete and reimport the collection at any time (just don't delete the START HERE request).
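
For what it's worth, the same token request is easy to reproduce outside Postman. Here's a minimal Python sketch of the START HERE call; the variable values are placeholders for your own authURL, ClientID, and ClientSecret:

import requests

AUTH_URL = "https://access.example.com"   # your authURL value
CLIENT_ID = "..."                         # your ClientID
CLIENT_SECRET = "..."                     # your ClientSecret

# POST {{authURL}}/oauth2/token with basic auth and grant_type=client_credentials,
# then pull access_token out of the JSON response (the bearToken value in Postman).
resp = requests.post(
    f"{AUTH_URL}/oauth2/token",
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={"grant_type": "client_credentials"},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
bearer_token = resp.json()["access_token"]
print(bearer_token)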

Thursday, October 24, 2024

Favorite way to troubleshoot Python scripts

I recently discovered a great way to make sure that Python scripts give you the information you need when there's a failure. I often run Python scripts inside Docker containers. They either log locally to a file or send logs to a log aggregator (LM Logs). As such, there's not always someone monitoring the stdout pipe of the Python script. If it fails, often the best piece of information is captured using a try/except block. You can have extra data printed out to stdout or even sent out to the log aggregator. This would look something like this:

>>> try:
...   {}["shrubbery"]
... except Exception as e:
...   print(e)
...
'shrubbery'

Now that wasn't helpful, was it? If all the logs we had seen were about successful operation and then suddenly there's a log that just says "shrubbery", we really wouldn't know what was going on. Luckily, there are a few things we can add to the exception output to clarify things:

>>> import sys
>>> try:
...   {}["shrubbery"]
... except Exception as e:
...   print(f"There was an unexpected error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")
...
There was an unexpected error: 'shrubbery':
Error on line 2

If we import the "sys" library, it gives us some options, one of which is the line number on which the failure happened, the failure that popped us out of our try block into the except block. This still doesn't give us everything we might want, but the line number gives us a great place to start looking at our code to see what happened.

We can do better:

>>> import sys
>>> try:
...   {}["shrubbery"]
... except Exception as e:
...   print(f"There was an unexpected {type(e).__name__} error: {e}: \nError on line {sys.exc_info()[-1].tb_lineno}")
...
There was an unexpected KeyError error: 'shrubbery':
Error on line 2

Ah, very nice. Now we know the type of error, a KeyError, we know the key that caused the error, and we know the line in our code where the error is happening.

There are more options for outputting more data. However, I haven't found more data to be that useful. With this information, I have just what I need and no extra fluff to work through. 
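
That said, if you ever do want the full stack trace in the log, the standard library's traceback module will hand it to you as a single string:

import traceback

try:
    {}["shrubbery"]
except Exception:
    # Same failure as above, but logging the complete stack trace
    print(f"There was an unexpected error:\n{traceback.format_exc()}")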

Thursday, October 3, 2024

Capturing packets on a Windows server without installing anything

 Ever wanted to do a pcap on a Windows server, but didn't have permission to install an app like Wireshark? Here's how you do it:

  1. Start an elevated command prompt or PowerShell terminal.
  2. Run `netsh trace start capture=yes tracefile=C:\temp\packetcapture.etl`
  3. Wait until you believe the desired packets have been captured or reproduce the issue you want to capture.
  4. Run `netsh trace stop`
  5. Your packet capture file will be at C:\temp\packetcapture.etl. You'll need to convert it into a file that Wireshark can open. In the past, you could open it with Microsoft Message Analyzer, but that isn't available anymore. You can use etl2pcapng to convert it; simply download a release and run:
    `etl2pcapng.exe in.etl out.pcapng`
    Where in.etl points to the file output from your trace and out.pcapng points to the place where you want your output file to go. 
There are filters you can apply to the netsh command if needed. But I've found the filtering in Wireshark to be easier/better.