Author Archives: Eric256

Find GPOs with LoopBack Enabled

You can get a list of all Group Policy Objects (GPOs) with loopback processing enabled very easily:


Get-GPO -All | Where { $($_ | Get-GPRegistryValue -Key "HKLM\Software\Policies\Microsoft\Windows\System" -Value UserPolicyMode -ErrorAction SilentlyContinue -WarningAction SilentlyContinue | Select -ExpandProperty Value) -eq 1}

This will query all GPOs and conditionally return them if they have the UserPolicyMode registry value set (1 = Merge mode, 2 = Replace mode; the one-liner above matches Merge).

You could replace the “Get-GPO -All” with a filtered version if you are only interested in certain GPOs.

A little longer version if you want to abstract it or modify it further (assuming $GPOs holds the output of Get-GPO -All):

Function GetLoopBack {
    param($gpo)
    $gpo | Get-GPRegistryValue -Key "HKLM\Software\Policies\Microsoft\Windows\System" -Value UserPolicyMode `
                               -ErrorAction SilentlyContinue -WarningAction SilentlyContinue | Select -ExpandProperty Value
}

$GPOs = $GPOs | Select -Property *, @{Name='LoopBack';Expression={GetLoopBack $_}}

 

Querying the BitLocker SQL Database

Just a quick snippet to get all the Recovery Keys for a specific user in the BitLocker database.

With MBAMRecoveryandHardware selected:

SELECT DomainName, u.Name Username, m.Name MachineName, k.LastUpdateTime KeyUpdate, VolumeGuid, RecoveryKey, RecoveryKeyId, Disclosed
  FROM [RecoveryAndHardwareCore].[Users] u 
  JOIN [RecoveryAndHardwareCore].[Domains] d ON (u.DomainId = d.Id)
  JOIN [RecoveryAndHardwareCore].[Volumes_Users] v_u ON (u.Id = v_u.UserId)
  JOIN [RecoveryAndHardwareCore].[Volumes] v ON (v_u.VolumeId = v.Id)
  JOIN [RecoveryAndHardwareCore].[Keys] k ON (v.Id = k.VolumeId)
  JOIN [RecoveryAndHardwareCore].[Machines_Volumes] m_v ON (m_v.VolumeId = v.Id)
  JOIN [RecoveryAndHardwareCore].[Machines] m ON (m.Id = m_v.MachineId)
  WHERE u.Name = '**username here**'

 

Collecting MS SQL Query Data into Telegraf

Sometimes you just want to record results from a SQL query into Telegraf so you can graph them over time with Grafana. I have several queries that I want to see trend data for, so I wrote this script to let me easily configure queries and throw the results into a nice graph for analysis.

For the collection part I have a simple Python script. I put the following in /usr/local/sbin/check_mssql.py:

#! /usr/bin/env python

__author__ = 'Eric Hodges'
__version__ = 0.1

import sys
import pymssql
import json
import configparser

from optparse import OptionParser, OptionGroup

parser = OptionParser(usage='usage: %prog [options]')
parser.add_option('-c', '--config', help="Config File Location", default="/etc/mssql_check.conf")

(options, args) = parser.parse_args()
config = configparser.ConfigParser()
config.read(options.config)

settings = config['Settings']
conn = pymssql.connect(host=settings['hostname'],user=settings['username'], password=settings['password'], database=settings['database'])

def return_dict_pair(cur, row_item):
    return_dict = {}
    for column_name, row in zip(cur.description, row_item):
        return_dict[column_name[0]] = row
    return return_dict

queries = config.sections()

items = []

for query in queries:
    if (query != 'Settings'):
        cursor = conn.cursor()

        cursor.execute(config[query]['query'])

        row = cursor.fetchone()
        while row:
            items.append(return_dict_pair(cursor,row))
            row = cursor.fetchone()

conn.close()
print(json.dumps(items))

sys.exit()

This script expects to be passed a config file on the command line, e.g. ‘check_mssql.py --config test.conf’

The config file is very simple: it contains a Settings section with the database connection options, and then one section for each query you want to run. For each query section, the script runs the query, converts each row into a dictionary, and pushes every row onto a single array; it finally returns the whole array as JSON. (Replace the right-hand side of each = with the correct info for your environment.) At least one column in each query needs to be a string so it can serve as the tag key for Telegraf.
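To make the row-to-dictionary step concrete, here is a minimal sketch of what return_dict_pair in the script above does, using made-up stand-ins for the pymssql cursor description and row:

```python
# Hypothetical stand-ins for what pymssql would return for the [Sample]
# query: cursor.description is a sequence of column tuples (name first),
# and each row is a tuple of values in the same order.
description = (("measurement", None), ("data", None))
row = ("disk_free_gb", 42)

# Pair each column name with its value, just like return_dict_pair
record = {col[0]: value for col, value in zip(description, row)}
print(record)  # {'measurement': 'disk_free_gb', 'data': 42}
```

Each of these dictionaries becomes one JSON object in the array the script prints for Telegraf.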

Example test.conf config:

[Settings]
hostname=server[:port]
database=database_name
username=readonly_user
password=readonly_user_password

[Sample]
query=SELECT measurement, data FROM sample_table

You can add new sections like Sample, with different names and different queries, and the script will run them all and combine the results.
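As a sketch of how those sections combine (the section names below are made up), the script simply iterates every config section other than Settings:

```python
import configparser

# Every section except [Settings] is treated as a query definition
cfg = configparser.ConfigParser()
cfg.read_string("""
[Settings]
hostname = server

[Sample]
query = SELECT measurement, data FROM sample_table

[Extra]
query = SELECT measurement, other FROM other_table
""")

queries = [s for s in cfg.sections() if s != 'Settings']
print(queries)  # ['Sample', 'Extra']
```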

Then all we need to do is set up Telegraf to run the script, with a config like this (/etc/telegraf/telegraf.d/sample.conf):

[[inputs.exec]]
commands = ["/usr/local/sbin/check_mssql.py --config /etc/check_mssql/sccm.conf"]
tag_keys = ["measurement"]
interval = "60s"
data_format = "json"
name_override = "Sample"

Make sure to change the tag_keys and name_override to whatever you would like to be tags in Grafana. You can test the config by running ‘telegraf -test -config sample.conf’

Now in Grafana choose your Telegraf data source, set the WHERE to any tags you want, select one (or more) of the fields, and off you go.

[Screenshot: Grafana graph of the collected query data]

 

I hope you find this useful and can make many great graphs with it!

Finding Orphaned GPO Folders with PowerShell

During years and years of working in AD, occasionally the SYSVOL folder gets out of sync with the actual GPOs. The following script will return all folders in sysvol\policies that no longer have a corresponding GPO. **Please be sure to back up folders before taking any action based on this**

#Initial Source: https://4sysops.com/archives/find-orphaned-active-directory-gpos-in-the-sysvol-share-with-powershell/

function Get-OrphanedGPOs {
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$true,ValueFromPipelineByPropertyName)]
        [string]$Domain
    )
    begin {
        $orphaned = @()
    }
    process {
        Write-Verbose "Domain: $Domain"
        # Get GPOs and convert guid into same format as folders
        $gpos = Get-GPO -All -Domain $domain | Select @{ n='GUID'; e = {'{' + $_.Id.ToString().ToUpper() + '}'}}| Select -ExpandProperty GUID
        Write-Verbose "GPOs: $($gpos | Measure-Object | Select -ExpandProperty Count)"
        
        # Get GPOs policy folder
        $polPath = "\\$domain\SYSVOL\$domain\Policies\"
        Write-Verbose "Policy Path: $polPath"

        # Get all folders in the policy path
        $folders = Get-ChildItem $polPath -Exclude 'PolicyDefinitions'
        Write-Verbose "Folders: $($folders | Measure-Object | Select -ExpandProperty Count)"

        # Compare and return only the folders that exist without a GPO
        ForEach ($folder in $folders) {
            if (-not $gpos.Contains($folder.Name)) {
                $orphaned += $folder
            }
        }
        Write-Verbose "Orphaned: $($orphaned | Measure-Object | Select -ExpandProperty Count)"
        return $orphaned
    }
    end {
    }

}

Running anything as a Service

You can use NSSM to run anything you want as a service very quickly.  In my case, I was looking to run AcuRite even when not logged in or while locked so that my weather station is always updating the cloud.

  1. Download NSSM and extract it somewhere on your C drive (I just put it in c:\nssm\)
  2. Open a command prompt, change directories to where NSSM was extracted, and run the following command (replace AcuRite with the name of the service you want to create)
  3. nssm.exe install AcuRite
  4. In the path field select the exe you would normally be running
  5. On the details tab set a description so you remember in the future why you created this.
  6. Click “Install Service”
  7. Start the service from the Services control panel

Quick and easy: you now have AcuRite (or whatever you want) running as a service.

Setting a DHCP Option Value to Hex Bytes

So sometimes, some annoying times, you have a vendor-scoped DHCP option that you need to set, and it is hex bytes. This can be quite frustrating, as the GUI doesn’t provide a friendly option to set the value (at least I haven’t found a way), and even when it does, you have to set it using the hex value. The second problem being, I don’t speak hex.

The way I’d always done it before, and the way that comes up when I search for it right now, uses netsh:

netsh dhcp server scope 10.200.100.0 set optionvalue 125 ENCAPSULATED 000003045669643A697070686F6E652E6D6974656C2E636F6D3B73775F746674703D3139322E3136382E312E313B63616C6C5F7372763D3139322E3136382E312E312C3139322E3136382E312E323B

This is fine if you already have the hex value, or a way to get it. It just isn’t ideal if you want to do it for a ton of scopes, or if you don’t have the hex value.

Format-Hex will take a string and turn it into an array of hex bytes nicely, so you can use PowerShell to produce that hex string:

$hex  = [convert]::ToChar(0) + [convert]::ToChar(0)+ [convert]::ToChar(3) +  [convert]::ToChar(4) + "Vid:ipphone.mitel.com;sw_tftp=192.168.1.1;call_srv=192.168.1.1,192.168.1.2;"  | Format-Hex
$string = ($hex.Bytes|ForEach-Object ToString X2) -join ''

and Set-DhcpServerv4OptionValue will take that array and save it into the DHCP server for us, so you can then bulk setup scopes:

$hex  = [convert]::ToChar(0) + [convert]::ToChar(0)+ [convert]::ToChar(3) +  [convert]::ToChar(4) + "Vid:ipphone.mitel.com;sw_tftp=192.168.1.1;call_srv=192.168.1.1,192.168.1.2;"  | Format-Hex
Set-DhcpServerv4OptionValue -ComputerName dhcpserver -ScopeId 10.200.100.0 -OptionID 125 -Value $hex.Bytes

Replace dhcpserver with your actual DHCP server name; the scope ID is the IP of the scope, etc. The nice thing in PowerShell is that we can then script this and loop over scopes, building the string per scope and setting it, which would be more complicated with the netsh version. Also, since it is in plain text instead of hex, it is far easier to read and update in the future.
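For illustration, the same encoding can be sketched in Python, using the option 125 payload from the example above (four leading bytes 0, 0, 3, 4 followed by the vendor string):

```python
# Build the DHCP option payload: the four leading bytes, then the
# Mitel vendor string, rendered as the hex string netsh expects.
prefix = bytes([0, 0, 3, 4])
vendor = "Vid:ipphone.mitel.com;sw_tftp=192.168.1.1;call_srv=192.168.1.1,192.168.1.2;"
payload = prefix + vendor.encode("ascii")

hex_string = payload.hex().upper()
print(hex_string)  # matches the value passed to netsh earlier
```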

We can reverse the process, though it is slightly messier:

$value = Get-DhcpServerv4OptionValue -computername dhcpserver -scopeid 10.200.100.0 -OptionId 125 
($value.value|ForEach-Object {[char][byte]"$_"}) -join ''

This grabs the value as an array of strings in the form “0x00”, converts each to a byte, then to a char, and joins it all back together into a string for viewing.
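The same decode step looks like this in Python (the sample bytes below are just the start of the vendor string from earlier):

```python
# The DHCP cmdlet reports each byte as a string like "0x56";
# convert each back to a character and join them up.
raw = ["0x56", "0x69", "0x64", "0x3A"]  # "Vid:" as reported by the server
decoded = "".join(chr(int(b, 16)) for b in raw)
print(decoded)  # Vid:
```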

Finding VMs with IOPs Limiting Set

Using PowerCLI, it’s pretty easy to find all the VMs in your environment that have IOPs limits set.

get-vm | Get-VMResourceConfiguration | `
         Select VM -ExpandProperty DiskResourceConfiguration | `
         where {$_.DiskLimitIOPerSecond -gt 0} | `
         Select VM, DiskLimitIOPerSecond

This will give you a list of all VMs where the IOPs limit is not 0. You could change the condition to find VMs with different disk shares, specific limits, or any combination thereof.

Hope you find this useful 🙂

Collecting DHCP Scope Data with Grafana

In order to collect my DHCP scope statistics into Grafana I turned to PowerShell.  We can use Get-DhcpServerv4Scope to list all our scopes, Get-DhcpServerv4ScopeStatistics to get the stats for each, and then a little bit of regex and math to add some additional stats. That data goes into InfluxDB, which ultimately gets graphed by Grafana.

I have multiple sites, with multiple scopes, which ends up with tons and tons of data.  I already have Nagios alerts that tell me if individual scopes are in danger ranges of available IPs, so for Grafana I was more interested in aggregated data about groups of scopes and how users in my network were changing.  In our case, the actual scope names are contained inside the parentheses, so I used some regex to match scope names between parentheses, build a hash table of stats keyed by those scope names, and total up the free and used IPs in each range.
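The aggregation key is just whatever sits inside the parentheses, falling back to the full scope name. A quick Python sketch of that regex step (the scope names below are made up):

```python
import re

# Pull the site name out of the parentheses, like the script's
# '.*\((.*)\).*' match; fall back to the full scope name.
scope_names = ["(HQ) Staff WiFi", "(Branch1) Voice", "Unlabeled scope"]

def scope_key(name):
    m = re.match(r'.*\((.*)\).*', name)
    return m.group(1) if m else name

keys = [scope_key(n) for n in scope_names]
print(keys)  # ['HQ', 'Branch1', 'Unlabeled scope']
```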

Enough chatter, here is the script:

Function Get-DHCPStatistics {
    Param(
        [string]$ComputerName=$env:computername,
        [string]$option
    )
    Process {
        # retrieve all scopes
        $scopes = Get-DhcpServerv4Scope -ComputerName $ComputerName -ErrorAction:SilentlyContinue 

        # setup all variables we are going to use
        $report = @{}
        $totalScopes = 0
        $totalFree =  0
        $totalInUse = 0

        ForEach ($scope In $scopes) {
            # We have multiple sites and include the scope name inside () at each scope
            # this aggregates scope data by name
            if ($scope.Name -match '.*\((.*)\).*') {
                $ScopeName = $Matches[1]
            } else {
                $ScopeName = $scope.Name
            }

            # Initialize a named scope if it doesn't exist already
            if (!($report.keys -contains $ScopeName )) {
                $report[$ScopeName] = @{
                    Free = 0
                    InUse = 0
                    Scopes = 0
                }
            }

            $ScopeStatistics = Get-DhcpServerv4ScopeStatistics -ScopeID $scope.ScopeID -ComputerName $ComputerName -ErrorAction:SilentlyContinue
            $report[$ScopeName].Free += $ScopeStatistics.Free
            $report[$ScopeName].InUse += $ScopeStatistics.InUse
            $report[$ScopeName].Scopes += 1

            $totalFree += $ScopeStatistics.Free
            $totalInUse += $ScopeStatistics.InUse
            $totalScopes += 1
        }

        ForEach ($scope in $report.keys) {
            if ($report[$scope].InUse -gt 0) {
                [pscustomobject]@{
                    Name = $scope
                    Free = $report[$scope].Free
                    InUse = $report[$scope].InUse
                    Scopes = $report[$scope].Scopes
                    PercentFull = [math]::Round(100 * $report[$scope].InUse / ($report[$scope].InUse + $report[$scope].Free), 2) # in-use share of the group's addresses
                    PercentOfTotal = [math]::Round( 100 * $report[$scope].InUse / $totalInUse, 2)
                }
            }
        }

        #Return one last summary object
        [pscustomobject]@{
            Name = "Total"
            Free = $totalFree
            InUse = $totalInUse
            Scopes = $totalScopes
            PercentFull = [math]::Round(100 * $totalInUse / ($totalInUse + $totalFree), 2)
            PercentOfTotal = 0
         }

    }

}

Get-DHCPStatistics | ConvertTo-JSon

I then place that script on my DHCP server and use a telegraf service to run it and send the data to InfluxDB. That config is pretty straightforward; aside from all the normal configuration to send it off, I just set up inputs.exec:

[[inputs.exec]]
  name_suffix = "_dhcp"
  commands = ['powershell c:\\GetDHCPStats.ps1']
  timeout = "60s"
  data_format = "json"
  tag_keys = ["Name"]

This is pretty easy: I tell it to expect JSON, and the PowerShell was set up to output JSON. I also let it know that each record in the JSON will have one key labeled “Name” that holds the scope name. Honestly, this should probably be ScopeName, and the PowerShell should be updated to reflect that, as my tags in InfluxDB are now a bit polluted if anything else ever uses a tag of Name.

Once this is all done and configured, now my DHCP server is reporting statistics about our server into InfluxDB.

I then set up a graph in Grafana using this data. I did a pretty straightforward graph that maps each scope’s percent of the total IPs that we use. It gives a nice, easy way to see how the users on my network are moving around.  The source for the query ends up being something like:

SELECT mean("PercentOfTotal") FROM "exec_dhcp" WHERE ("Name" != 'Total') AND $timeFilter GROUP BY time($__interval), "Name" fill(linear)

This gives me a graph like the following (cropped to leave off some sensitive data):

[Graph: DHCP Stats]

Looks a little boring overall, but individual scope graphs can be kind of interesting and informative as to how the system is performing:

 

[Graph: DHCP Stats, single scope]

This gives a fun view of one scope as devices join, leases are cleaned up, and new devices join again.

Hope this helps!

Setup Telegraf+InfluxDB+Grafana to Monitor Windows

Monitoring Windows with Grafana is pretty easy, but there are multiple systems that have to be set up to work together.

Prerequisites:

  • Grafana
  • InfluxDB

Main Steps:

  1. Create an InfluxDB database and users for Telegraf and Grafana
  2. Install Telegraf on Windows and configure it
  3. Setup a data source and dashboards in Grafana

It really sounds more daunting than it is.

InfluxDB setup

We want to create an InfluxDB database, create a user for Telegraf to write data into InfluxDB, and a user for Grafana to read data out of it.  From an SSH terminal, the commands are:

influx
CREATE DATABASE telegraf
CREATE USER telegraf WITH PASSWORD 'telegraf123'
CREATE USER grafana WITH PASSWORD 'grafana123'
GRANT WRITE ON telegraf TO telegraf 
GRANT READ ON telegraf TO grafana

Install Telegraf

You can go to https://portal.influxdata.com/downloads to get download links for the client for different OSes. Grab the link for Windows, which at this time is https://dl.influxdata.com/telegraf/releases/telegraf-1.5.3_windows_amd64.zip, and download that file using whatever method suits you best.   Extract the contents to a location you like; I use c:\Program Files\telegraf\.  Now you will need to modify the contents of telegraf.conf. I like to use Notepad++, but any text editor should be fine.

I like to modify the section called [global_tags] and put machine identifiers in there.

[global_tags]
 environment = "production"

You can add as many different tags under there as you would like; it takes some time to figure out what will be useful here.

When you have that completed, update the section for InfluxDB with the needed info. Make sure to update the IP and the passwords to the correct ones for your install.  Also, if needed, open port 8086 on the destination machine.

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
 urls = ["http://192.168.86.167:8086"] # required
 database = "telegraf" # required
 precision = "s"
 timeout = "5s"
 username = "telegraf"
 password = "telegraf123"

Now run a command prompt as administrator. Change to the directory where you have telegraf and its config file and run the following command to test your config:

C:\Program Files\telegraf>telegraf.exe --config telegraf.conf --test

The output should include a bunch of lines like the following:

 >win_perf_counters,instance=Intel[R]\ Ethernet\ Connection\ [2]\ I219-V,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Received_Errors=0 1522202369000000000
 > win_perf_counters,instance=Qualcomm\ Atheros\ QCA61x4A\ Wireless\ Network\ Adapter,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Received_Errors=0 1522202369000000000
 > win_perf_counters,instance=Teredo\ Tunneling\ Pseudo-Interface,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Received_Errors=0 1522202369000000000
 > win_perf_counters,objectname=Network\ Interface,host=DESKTOP-MAIN,instance=Intel[R]\ Ethernet\ Connection\ [2]\ I219-V Packets_Outbound_Discarded=0 1522202369000000000
 > win_perf_counters,objectname=Network\ Interface,host=DESKTOP-MAIN,instance=Qualcomm\ Atheros\ QCA61x4A\ Wireless\ Network\ Adapter Packets_Outbound_Discarded=0 1522202369000000000
 > win_perf_counters,instance=Teredo\ Tunneling\ Pseudo-Interface,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Discarded=0 1522202369000000000
 > win_perf_counters,instance=Intel[R]\ Ethernet\ Connection\ [2]\ I219-V,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Errors=0 1522202369000000000
 > win_perf_counters,instance=Qualcomm\ Atheros\ QCA61x4A\ Wireless\ Network\ Adapter,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Errors=0 1522202369000000000
 > win_perf_counters,instance=Teredo\ Tunneling\ Pseudo-Interface,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Errors=2 1522202369000000000

If it doesn’t, it should include error information that will help you determine the issue.  Once that works, you can install telegraf as a service by running the following:

C:\Program Files\telegraf>telegraf.exe --service install

The service will not start automatically the first time, however, so to start it run:

net start telegraf

Now you should have telegraf collecting data from Windows on a regular basis and dumping it into InfluxDB; the only thing remaining is to graph it.

Grafana Setup

In Grafana, set up a new data source.  It should look like the following:

[Screenshot: telegraf data source in Grafana]

Once that is set up, then you can go create a dashboard and add a graph.  I created the following graph:

[Graph: Windows CPU]

The query is:

SELECT mean("Percent_Processor_Time") FROM "win_cpu" WHERE ("host" = 'DESKTOP-MAIN' AND "instance" != '_Total') AND time >= now() - 5m GROUP BY time(500ms), "instance" fill(linear)

This basically tells InfluxDB to get all of the win_cpu values where the host tag is “DESKTOP-MAIN” and the counter instance is not _Total.  For CPU values this means it gets the individual per-CPU values so that I can graph each CPU. Make that an equals instead and you’ll get just the overall CPU usage instead of the breakdown.

Then I group by tag(instance), which is how you get one line (or series) per CPU (performance counter).  After that, I use an alias to make the name “CPU ” followed by the instance value.  If you don’t do that, you end up with some funky-named series that just aren’t pretty to look at.   If anyone finds this interesting (or even if they don’t, probably), I will make a post about how to use template variables to generate a whole dashboard of graphs for a whole set of hosts automagically.

This is a lot the first time you do it, maybe even the second.  But it really pays off in the end and gives you some amazing ways to monitor computers and servers.