Reducing the number of replicas in Elasticsearch

Sometimes you just want to run a single Elasticsearch node and not have it constantly alert that it has nowhere to write its replicas. Since Elasticsearch and most templates default to at least 1 replica, we have to make changes both to the existing indices and to the templates. First, drop the replica count on all existing indices:

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}
'
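
You can check that the cluster goes green once the unassigned replicas are gone:

curl -XGET 'localhost:9200/_cluster/health?pretty'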

Then we can list all the templates and figure out which ones need updates as well:

curl -XGET -H 'Content-Type: application/json' 'localhost:9200/_template/*?pretty'

Then update each one you need. For instance, the following sets existing logstash-* indices to zero replicas:

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/logstash-*/_settings' -d '{ "number_of_replicas" : 0 }'
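
That takes care of existing indices, but new logstash-* indices will still pick up replicas from their template. One way to handle it is a small override template; a minimal sketch (on Elasticsearch 6+, older versions use "template" instead of "index_patterns", and the zero_replicas name is arbitrary):

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_template/zero_replicas' -d '
{
    "order" : 100,
    "index_patterns" : ["logstash-*"],
    "settings" : { "number_of_replicas" : 0 }
}
'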

Python OAuth2 auth and bearer token caching

To access many APIs you need OAuth2: you send a client id and secret to an endpoint to get a token back, then send that token with future calls as authentication.

The particular API I was calling would also return a number of seconds the bearer token would be good for.

Hopefully this code will help jumpstart someone else along the way to using python and APIs.

#!/usr/bin/env python3

import requests
import json
import datetime

from os import path

cache_file = "/var/tmp/oauth.json"
client_id = '**your client_id here**'
client_secret = '**your secret here**'

def getToken():
    url = "https://api.wherever.com/oauth2/token"

    data = {
      'client_id': client_id,
      'client_secret': client_secret,
      'grant_type': 'client_credentials'
    }

    response = requests.post(url, data=data)

    data = json.loads(response.text)
    # compute the absolute expiration time from the token's expires_in
    expiration_time = datetime.datetime.now() + datetime.timedelta(seconds=data['expires_in'])
    # take .timestamp() of the computed time; calling utcnow() here would just return the current time
    data['expiration_date'] = int(expiration_time.timestamp())

    with open(cache_file, "w") as outfile:
        json.dump(data, outfile)
    return data



if path.exists(cache_file):
    #Reading cache
    with open(cache_file, "r") as infile:
        access_token = json.load(infile)
    if int(datetime.datetime.now().timestamp()) > access_token['expiration_date']:
        #Token expired, get new
        access_token = getToken()
else:
    #No cached value, get and cache
    access_token = getToken()


bearer_token = access_token["access_token"]
headers = {
        'Authorization': f'Bearer {bearer_token}'
}

#The rest of the requests go here and pass that header
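
For example, a follow-up call using that header might look like this (the /v1/widgets endpoint is a placeholder, not part of any real API):

#Example request using the cached token
response = requests.get("https://api.wherever.com/v1/widgets", headers=headers)
print(response.json())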

4/24/2023: updated to fix typo!

Use PowerShell to find what process is using a port

Sometimes you just need to know what ports a process is listening on, or what process is listening on a port… so here you go: just replace the process name or port number and have at it.

# find all TCP connections owned by a given process name
$process = "svcHost"
Get-NetTCPConnection | Where OwningProcess -in (Get-Process | Where ProcessName -eq $process | Select -ExpandProperty Id)

# find the process that owns a given local port
$port = 135
Get-Process -Id (Get-NetTCPConnection -LocalPort $port).OwningProcess
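
If you want the whole picture at once, here is a quick sketch (assuming the NetTCPIP module, i.e. Windows 8/Server 2012 or newer) that tables every listener with the process that owns it:

# list every listening port with the name of its owning process
Get-NetTCPConnection -State Listen |
    Select-Object LocalAddress, LocalPort, OwningProcess,
                  @{n='ProcessName';e={(Get-Process -Id $_.OwningProcess).ProcessName}} |
    Sort-Object LocalPort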

Steps for Setting up Rancid + CentOS + Git

Setting up Rancid so that it pushes config up into a Git repository. This way you have Rancid getting the config when it changes, and Git storing a history of those changes.

Install CentOS & update

# install pre-requisites
sudo yum install wget gcc perl tcl expect
#setup needed groups and users
sudo groupadd netadm
sudo useradd -g netadm -c "Networking Backups" -d /home/rancid rancid

#working directory to store source
sudo mkdir /home/rancid/tar

#download source and extract
sudo su
cd /home/rancid/tar
wget ftp://ftp.shrubbery.net/pub/rancid/rancid-3.9.tar.gz
tar -zxvf rancid-3.9.tar.gz

# configure/make and install
cd ./rancid-3.9
./configure --prefix=/usr/local/rancid
make install

# copy the sample login config (ships in the source tree; path may vary) and set file permissions
cp /home/rancid/tar/rancid-3.9/cloginrc.sample /home/rancid/.cloginrc
chmod 0640 /home/rancid/.cloginrc
chown -R rancid:netadm /home/rancid/.cloginrc
chown -R rancid:netadm /usr/local/rancid/
chmod 775 /usr/local/rancid/
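
The .cloginrc file is what tells Rancid how to log in to your devices. A minimal sketch; the hostname pattern, user, and passwords below are placeholders:

# /home/rancid/.cloginrc -- example entries, replace with your devices and credentials
add user     *.example.com  rancid
add password *.example.com  {login_password} {enable_password}
add method   *.example.com  ssh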

Configure Git

  1. yum install git
  2. Modify /usr/local/rancid/etc/rancid.conf
  3. Change RCSSYS=git
  4. Save and close
  5. Switch to the rancid user: su - rancid
  6. Create an SSH key: ssh-keygen -o -t rsa -b 4096 -C "email@example.com"
  7. Copy the key to insert into github/gitlab: `vim ~/.ssh/id_rsa.pub`
  8. Setup Git defaults
    1. git config --global user.name "Rancid"
    2. git config --global user.email "email@example.com"
  9. Configure the Rancid groups (LIST_OF_GROUPS in rancid.conf)
  10. Build the initial group folders: `/usr/local/rancid/bin/rancid-cvs`
  11. Go to the device group folder: cd /usr/local/rancid/var
  12. For each device group, point its repo at your Git server:
    1. cd {device group}
    2. git remote rename origin old-origin
    3. git remote add origin git@{git server}:{repo}/{git project}.git
    4. git push -u origin --all
  13. Setup a hook to push to the git server. In each device group:
    1. Open the post-commit hook file
vim .git/hooks/post-commit
#!/bin/sh
# push the local repo to the remote server on commit
git push origin
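
One thing the list above does not cover is scheduling: Rancid only collects configs when rancid-run fires, so add something like this to the rancid user's crontab (hourly here, adjust to taste):

# collect device configs, commit, and push every hour
0 * * * * /usr/local/rancid/bin/rancid-run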

Finding who is using a Windows Share in PowerShell

You can find the users and computers connected to a share pretty easily in PowerShell.

# get every share connection, resolve each client's DNS name, then group by share
$share_name = "*"
$shares = Get-WmiObject Win32_ServerConnection
$shares = $shares | Where-Object {$_.ShareName -like $share_name} | `
                    Select-Object ShareName, UserName, `
                                  @{n="Computer";e={[System.Net.Dns]::GetHostEntry($_.ComputerName).HostName}}
$shares | Group-Object -Property ShareName | `
          Select Name, `
                 @{n="Computers";e={$_.Group | Select -ExpandProperty Computer | Sort-Object -Unique}},`
                 @{n="Users";e={$_.Group | Select -ExpandProperty UserName | Sort-Object -Unique}},`
                 Group

$share_name is a filter to narrow down the shares you are checking. Other than that, it gets all the share connections, groups them by share, and shows the unique users and computers connected to each one.
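
For example, to report on just a share named Public (a hypothetical share name), set the filter before running the rest:

$share_name = "Public"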

SCCM PXE Booting with NSX-T

Symptoms:

PXE boot stuck at “Waiting for Approval”

If you’re experiencing a “Waiting for Approval” message when attempting to PXE boot in SCCM, it’s likely due to a common issue with NSX-T. When virtualizing a Distribution Point in SCCM and attempting to use it for PXE booting, you need to make some changes in NSX-T to allow it to function properly.

By default, NSX-T blocks servers from receiving or replying to DHCP requests, which PXE booting relies on heavily. To enable PXE booting for specific servers, you’ll need to create a “Segment Security Policy” profile in NSX-T and disable DHCP Server Block. Once you’ve done this, create a new network segment with the appropriate VLAN and assign the policy you just created to it.

Finally, move your Distribution Points to the newly created segment in vCenter. This should resolve the “Waiting for Approval” issue and allow you to PXE boot successfully.

By following these steps, you’ll be able to use virtualized Distribution Points in SCCM for PXE booting without any further issues. Don’t let a simple configuration issue hold you back from taking advantage of all SCCM has to offer!

Find GPOs with LoopBack Enabled

You can get a list of all Group Policy Objects (GPOs) with loopback very easily:


Get-GPO -All | Where { $($_ | Get-GPRegistryValue -Key "HKLM\Software\Policies\Microsoft\Windows\System" -Value UserPolicyMode -ErrorAction SilentlyContinue -WarningAction SilentlyContinue | Select -ExpandProperty Value) -ge 1}

This will query all GPOs and conditionally return the ones that have UserPolicyMode set (1 = merge mode, 2 = replace mode).

You could replace the “Get-GPO -All” with a filtered version if you were only interested in certain GPOs.

A little longer version if you want to abstract it or modify it earlier:

Function GetLoopBack {
    param($gpo)
    $gpo | Get-GPRegistryValue -Key "HKLM\Software\Policies\Microsoft\Windows\System" -Value UserPolicyMode `
                               -ErrorAction SilentlyContinue -WarningAction SilentlyContinue | Select -ExpandProperty Value
}

$GPOs = Get-GPO -All
$GPOs = $GPOs | SELECT -Property *, @{Name='LoopBack';Expression={GetLoopBack $_}}
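
With the LoopBack property attached, filtering and reporting is straightforward, for example:

# show just the GPOs that have loopback configured (1 = merge, 2 = replace)
$GPOs | Where-Object { $_.LoopBack -ge 1 } | Select-Object DisplayName, LoopBack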


Querying the BitLocker SQL Database

Just a quick snippet to get all the Recovery Keys for a specific user in the BitLocker database.

With the MBAM Recovery and Hardware database selected:

SELECT DomainName, u.Name Username, m.Name MachineName, k.LastUpdateTime KeyUpdate, VolumeGuid, RecoveryKey,RecoveryKeyId, Disclosed
  FROM [RecoveryAndHardwareCore].[Users] u 
  JOIN [RecoveryAndHardwareCore].[Domains] d ON (u.DomainId = d.Id)
  JOIN [RecoveryAndHardwareCore].[Volumes_Users] v_u ON (u.Id = v_u.UserId)
  JOIN [RecoveryAndHardwareCore].[Volumes] v ON (v_u.VolumeId = v.Id)
  JOIN [RecoveryAndHardwareCore].[Keys] k ON (v.Id = k.VolumeId)
  JOIN [RecoveryAndHardwareCore].[Machines_Volumes] m_v ON (m_v.VolumeId = v.Id)
  JOIN [RecoveryAndHardwareCore].[Machines] m ON (m.Id = m_v.MachineId)
  WHERE u.Name = '**username here**'
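
If you need to go by machine instead of user, the same joins work from the Machines side; a variation on the query above:

SELECT m.Name MachineName, k.LastUpdateTime KeyUpdate, VolumeGuid, RecoveryKey, RecoveryKeyId
  FROM [RecoveryAndHardwareCore].[Machines] m
  JOIN [RecoveryAndHardwareCore].[Machines_Volumes] m_v ON (m.Id = m_v.MachineId)
  JOIN [RecoveryAndHardwareCore].[Volumes] v ON (m_v.VolumeId = v.Id)
  JOIN [RecoveryAndHardwareCore].[Keys] k ON (v.Id = k.VolumeId)
  WHERE m.Name = '**machine name here**'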


Collecting MS SQL Query Data into Telegraf

Sometimes you just want to record results from a SQL query into Telegraf so you can graph it over time with Grafana. I have several queries that I want to see trend data for so I wrote this script to allow me to easily configure queries and throw them into a nice graph for analysis.

For the collection part I have a simple python script. I put the following in /usr/local/sbin/check_mssql.py

#! /usr/bin/env python

__author__ = 'Eric Hodges'
__version__= 0.1

import sys
import pymssql
import json
import configparser

from optparse import OptionParser

parser = OptionParser(usage='usage: %prog [options]')
parser.add_option('-c', '--config', help="Config File Location", default="/etc/mssql_check.conf")

(options, args) = parser.parse_args()
config = configparser.ConfigParser()
config.read(options.config)

settings = config['Settings']
conn = pymssql.connect(host=settings['hostname'],user=settings['username'], password=settings['password'], database=settings['database'])

def return_dict_pair(cur, row_item):
    return_dict = {}
    for column_name, row in zip(cur.description, row_item):
        return_dict[column_name[0]] = row
    return return_dict

queries = config.sections()

items = []

for query in queries:
    if (query != 'Settings'):
        cursor = conn.cursor()
        cursor.execute(config[query]['query'])

        # turn each returned row into a dict keyed by column name
        row = cursor.fetchone()
        while row:
            items.append(return_dict_pair(cursor, row))
            row = cursor.fetchone()

conn.close()
print(json.dumps(items))

sys.exit()

This script expects to be passed a config file on the command line, i.e. ‘check_mssql.py --config test.conf’

The config file is very simple: it contains a Settings section with the database connection options (replace the right-hand side of each = with the correct info), and then one section for each query you want to run. The script runs each query, converts the rows into dictionaries, and pushes each row onto an array. It repeats this for every query, adding them all to the same array, and finally returns them as JSON. The query needs at least one string column to serve as the tag key for telegraf.

Example test.conf config:

[Settings]
hostname=server[:port]
database=database_name
username=readonly_user
password=readonly_user_password

[Sample]
query=SELECT measurement, data FROM sample_table

You can make new section like Sample with different names and different queries and it will run them all and combine them together.
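
For instance, a second section counting SCCM clients might look like this (the v_R_System view and section name are just an example):

[ClientCount]
query=SELECT 'client_count' AS measurement, COUNT(*) AS total FROM v_R_System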

Then all we need to do is set up Telegraf to run the script (/etc/telegraf/telegraf.d/sample.conf):

[[inputs.exec]]
commands = ["/usr/local/sbin/check_mssql.py --config /etc/check_mssql/sccm.conf"]
tag_keys = ["measurement"]
interval = "60s"
data_format = "json"
name_override = "Sample"

Make sure to change the tag_keys and name_override to whatever you would like to be tags in Grafana. You can test the config by running ‘telegraf -test -config sample.conf’

Now in Grafana, choose your Telegraf data source, set the WHERE clause to any tags you want, select one or more of the fields, and off you go.

[Screenshot: the resulting graph in Grafana]


I hope you find this useful and can make many great graphs with it!