Finding VMs with IOPS Limits Set

Using PowerCLI, it’s pretty easy to find all the VMs in your environment that have IOPS limits set.

get-vm | Get-VMResourceConfiguration | `
         Select VM -ExpandProperty DiskResourceConfiguration | `
         where {$_.DiskLimitIOPerSecond -gt 0} | `
         Select VM, DiskLimitIOPerSecond

This will give you a list of all VMs where the IOPS limit is not 0. You could change the condition to find VMs with different disk shares, specific limits, or any combination thereof.

Hope you find this useful 🙂

Collecting DHCP Scope Data with Grafana

In order to collect my DHCP scope statistics into Grafana, I turned to PowerShell. We can use Get-DhcpServerv4Scope to list all our scopes, Get-DhcpServerv4ScopeStatistics to get the stats for each, and then a little bit of regex and math to add some additional stats. That data goes into InfluxDB, which ultimately gets graphed by Grafana.

I have multiple sites, each with multiple scopes, which ends up with tons and tons of data. I already have Nagios alerts that tell me if individual scopes are in danger ranges of available IPs, so for Grafana I was more interested in aggregated data about groups of scopes and how users on my network were changing. In our case, the actual scope names are contained inside parentheses, so I used some regex to match scope names between parentheses, build a hash table of stats keyed by those names, and total up the free and used IPs in each range.

Enough chatter, here is the script:

Function Get-DHCPStatistics {
    Param(
        [string]$ComputerName=$env:computername,
        [string]$option
    )
    Process {
        # retrieve all scopes
        $scopes = Get-DhcpServerv4Scope -ComputerName $ComputerName -ErrorAction:SilentlyContinue 

        # setup all variables we are going to use
        $report = @{}
        $totalScopes = 0
        $totalFree =  0
        $totalInUse = 0

        ForEach ($scope In $scopes) {
            # We have multiple sites and include the scope name inside () at each scope
            # this aggregates scope data by name
            if ($scope.Name -match '.*\((.*)\).*') {
                $ScopeName = $Matches[1]
            } else {
                $ScopeName = $scope.Name
            }

            # initialize a named scope entry if it doesn't exist already
            if (!($report.keys -contains $ScopeName )) {
                $report[$ScopeName] = @{
                    Free = 0
                    InUse = 0
                    Scopes = 0
                }
            }

            $ScopeStatistics = Get-DhcpServerv4ScopeStatistics -ScopeID $scope.ScopeID -ComputerName $ComputerName -ErrorAction:SilentlyContinue
            $report[$ScopeName].Free += $ScopeStatistics.Free
            $report[$ScopeName].InUse += $ScopeStatistics.InUse
            $report[$ScopeName].Scopes += 1

            $totalFree += $ScopeStatistics.Free
            $totalInUse += $ScopeStatistics.InUse
            $totalScopes += 1
        }

        ForEach ($scope in $report.keys) {
            if ($report[$scope].InUse -gt 0) {
                [pscustomobject]@{
                    Name = $scope
                    Free = $report[$scope].Free
                    InUse = $report[$scope].InUse
                    Scopes = $report[$scope].Scopes
                    # percent of the pool in use: InUse / (Free + InUse)
                    PercentFull = [math]::Round(100 * $report[$scope].InUse / ($report[$scope].Free + $report[$scope].InUse), 2)
                    PercentOfTotal = [math]::Round( 100 * $report[$scope].InUse / $totalInUse, 2)
                }
            }
        }

        #Return one last summary object
        [pscustomobject]@{
            Name = "Total"
            Free = $totalFree
            InUse = $totalInUse
            Scopes = $totalScopes
            PercentFull = [math]::Round(100 * $totalInUse / ($totalFree + $totalInUse), 2)
            PercentOfTotal = 0
         }

    }

}

Get-DHCPStatistics | ConvertTo-Json

I then place that script on my DHCP server and use a Telegraf service to run it and send the data to InfluxDB. That config is pretty straightforward; aside from all the normal output configuration, I just set up inputs.exec:

[[inputs.exec]]
  name_suffix = "_dhcp"
  commands = ['powershell c:\\GetDHCPStats.ps1']
  timeout = "60s"
  data_format = "json"
  tag_keys = ["Name"]

This is pretty easy: I tell it to expect JSON, and the PowerShell was set up to output JSON. I also let it know that each record in the JSON will have one key labeled “Name” that holds the scope name. Honestly, this should probably be ScopeName (and the PowerShell updated to match), as my tags in InfluxDB get a bit polluted if anything else ever uses a tag of Name.
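As a rough illustration of what tag_keys does (this is a Python sketch of the behavior, not Telegraf's actual parser), the JSON input format treats the keys listed in tag_keys as tags and the remaining numeric keys as fields:

```python
# Sketch: split one JSON record into tags and numeric fields, the way
# Telegraf's json data format does with tag_keys. Not Telegraf's real code.
def split_record(record, tag_keys):
    tags = {k: str(record[k]) for k in tag_keys if k in record}
    fields = {k: v for k, v in record.items()
              if k not in tag_keys and isinstance(v, (int, float))}
    return tags, fields

record = {"Name": "Office", "Free": 120, "InUse": 80, "PercentFull": 40.0}
tags, fields = split_record(record, ["Name"])
# tags holds {'Name': 'Office'}; fields holds only the numeric values
```

This is why polluting the Name tag matters: every series in the measurement is indexed by whatever lands in tags.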

Once this is all done and configured, my DHCP server is reporting scope statistics into InfluxDB.

I then set up a graph in Grafana using this data. I just did a pretty straightforward graph that mapped each scope's percent of the total IPs that we use. It gives a nice easy way to see how the users on my network are moving around. The source for the query ends up being something like:

SELECT mean("PercentOfTotal") FROM "exec_dhcp" WHERE ("Name" != 'Total') AND $timeFilter GROUP BY time($__interval), "Name" fill(linear)

This gives me a graph like the following (cropped to leave off some sensitive data):

DHCP Stats

Looks a little boring overall, but individual scope graphs can be kinda interesting and informative as to how the system is performing:


DHCP Stats1

This gives a fun view of one scope as devices join, leases get cleaned up, and new devices join again.

Hope this helps!

Setup Telegraf+InfluxDB+Grafana to Monitor Windows

Monitoring Windows with Grafana is pretty easy, but there are multiple systems that have to be set up to work together.

Prerequisites:

  • Grafana
  • InfluxDB

Main Steps:

  1. Create an InfluxDB database and users for Telegraf and Grafana
  2. Install Telegraf on Windows and configure it
  3. Setup a data source and dashboards in Grafana

It really sounds more daunting than it is.

InfluxDB setup

We want to create an InfluxDB database, a user for Telegraf to write data into InfluxDB, and a user for Grafana to read data out. From an SSH terminal on the InfluxDB host, the commands are:

influx
CREATE DATABASE telegraf
CREATE USER telegraf WITH PASSWORD 'telegraf123'
CREATE USER grafana WITH PASSWORD 'grafana123'
GRANT WRITE ON telegraf TO telegraf 
GRANT READ ON telegraf TO grafana
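If you want to double-check that the users and grants took effect before moving on, you can run these from the same influx shell:

```sql
SHOW DATABASES
SHOW USERS
SHOW GRANTS FOR grafana
```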

Install Telegraf

You can go to https://portal.influxdata.com/downloads to get download links for the client for different OSes. Grab the link for Windows, which at this time is https://dl.influxdata.com/telegraf/releases/telegraf-1.5.3_windows_amd64.zip, and download that file using whatever method suits you best. Extract the contents to a location you like; I use c:\Program Files\telegraf\. Now you will need to modify the contents of telegraf.conf. I like to use Notepad++, but any text editor should be fine.

I like to modify the section called [global_tags] and put machine identifiers in there.

[global_tags]
 environment = "production"

You can add as many different tags under there as you would like; it takes some time to figure out what will be useful here.

When you have that completed, update the section for InfluxDB with the needed info. Make sure to update the IP and the passwords to the correct ones for your install. Also make sure port 8086 is open on the destination machine if needed.

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
 urls = ["http://192.168.86.167:8086"] # required
 database = "telegraf" # required
 precision = "s"
 timeout = "5s"
 username = "telegraf"
 password = "telegraf123"

Now run a command prompt as administrator. Change to the directory where you have telegraf and its config file and run the following command to test your config:

C:\Program Files\telegraf>telegraf.exe --config telegraf.conf --test

The output should include a bunch of lines like the following:

 > win_perf_counters,instance=Intel[R]\ Ethernet\ Connection\ [2]\ I219-V,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Received_Errors=0 1522202369000000000
 > win_perf_counters,instance=Qualcomm\ Atheros\ QCA61x4A\ Wireless\ Network\ Adapter,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Received_Errors=0 1522202369000000000
 > win_perf_counters,instance=Teredo\ Tunneling\ Pseudo-Interface,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Received_Errors=0 1522202369000000000
 > win_perf_counters,objectname=Network\ Interface,host=DESKTOP-MAIN,instance=Intel[R]\ Ethernet\ Connection\ [2]\ I219-V Packets_Outbound_Discarded=0 1522202369000000000
 > win_perf_counters,objectname=Network\ Interface,host=DESKTOP-MAIN,instance=Qualcomm\ Atheros\ QCA61x4A\ Wireless\ Network\ Adapter Packets_Outbound_Discarded=0 1522202369000000000
 > win_perf_counters,instance=Teredo\ Tunneling\ Pseudo-Interface,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Discarded=0 1522202369000000000
 > win_perf_counters,instance=Intel[R]\ Ethernet\ Connection\ [2]\ I219-V,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Errors=0 1522202369000000000
 > win_perf_counters,instance=Qualcomm\ Atheros\ QCA61x4A\ Wireless\ Network\ Adapter,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Errors=0 1522202369000000000
 > win_perf_counters,instance=Teredo\ Tunneling\ Pseudo-Interface,objectname=Network\ Interface,host=DESKTOP-MAIN Packets_Outbound_Errors=2 1522202369000000000

If it doesn’t, the output should include error information that will help you determine what the issue is. Once that works, you can install Telegraf as a service by running the following:

C:\Program Files\telegraf>telegraf.exe --service install

The service will not start automatically the first time, however, so start it with:

net start telegraf

Now you should have Telegraf collecting data from Windows on a regular basis and dumping it into InfluxDB. The only thing remaining is to graph it.

Grafana Setup

In Grafana, set up a new data source.  It should look like the following:

telegraf source

Once that is set up, then you can go create a dashboard and add a graph.  I created the following graph:

Windows CPU Graph

The query is:

SELECT mean("Percent_Processor_Time") FROM "win_cpu" WHERE ("host" = 'DESKTOP-MAIN' AND "instance" != '_Total') AND time >= now() - 5m GROUP BY time(500ms), "instance" fill(linear)

This basically tells InfluxDB to go get all of the win_cpu values where the host tag is set as “DESKTOP-MAIN” and the counter instance is not _Total. For CPU values this means it gets the individual totals so that I can graph each CPU. Make that an equals instead and you’ll get just the overall CPU usage instead of the breakdown.

Then I group by tag(instance), which is how you get one line (or series) per CPU (performance counter instance). After that, I use an alias to make the name “CPU ” followed by the instance value. If you don’t do that, you end up with some funky named series that just aren’t pretty to look at. If anyone finds this interesting (or even if they don’t, probably), I will make a post about how to use template variables to generate a whole dashboard of graphs for a whole set of hosts automagically.

This is a lot the first time you do it, maybe even the second. But it gives you some amazing ways to monitor computers and servers, and it pays off big in the end.

Finding Recently Updated Files

So I needed to find which log files were getting updated. The files were in C:\ProgramData\VMware\vCenterServer\logs, and that folder has many, many subfolders; I wasn’t sure which one would have the files I needed, but I was sure they would have been updated recently. A quick little bit of PowerShell to the rescue:

Get-ChildItem -Recurse | Where {$_.LastWriteTime -gt (Get-Date).AddMinutes(-15)}

This returns all the files in the current folder and below that have been modified in the last 15 minutes. It is easy enough to change up to look for other criteria, like *.log files in the last 5 minutes:

Get-ChildItem -Recurse -Filter *.log | Where {$_.LastWriteTime -gt (Get-Date).AddMinutes(-5)}

Or all files with pid in their name:

Get-ChildItem -Recurse -Filter *pid*
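For what it's worth, the same idea translates to a short Python sketch (the function and its parameter names here are mine, not anything standard):

```python
# Walk a directory tree and return files modified within the last N minutes,
# optionally filtering on a substring of the file name.
import os
import time

def recently_modified(root, minutes=15, pattern=None):
    cutoff = time.time() - minutes * 60
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if pattern and pattern not in name:
                continue
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                hits.append(path)
    return hits
```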

PowerShell can be very, very handy in a pinch! Hope this helps.


Using Python, Telegraf, and Grafana to monitor your Ethermine.org miner!

I have a couple of mining computers going and a compulsion to Grafana everything that comes along. So I wondered how hard it would be to track my miner with Grafana, and it turns out: not hard at all. I use Ethermine, and they provide an API that you can call with your miner's address to get back all sorts of stats. They also have some of the best documentation I've seen, and a site that lets you test calls to their API (https://api.ethermine.org/docs/). Heading over there, I found that I wanted to call miner/{mineraddress}/currentStats; the info I wanted would come back as JSON, in the data key. Well, that's easy enough. It's not the prettiest script, and it doesn't check for errors, but here it is:

#!/usr/bin/env python
import json
import requests

key = '{mineraddress}'
url = 'https://api.ethermine.org/miner/' + key + '/currentStats'

stats = requests.get(url)

print(json.dumps(json.loads(stats.text)['data']))

Replace {mineraddress} with your miner address, and run it, and there you go.

You should get something back similar to

{"averageHashrate": 26047453.703703698, "usdPerMin": 0.0006877163189934768, "unpaid": 6366263118749017, "staleShares": 0, "activeWorkers": 1, "btcPerMin": 8.165787167840064e-08, "invalidShares": 0 , "validShares": 29, "lastSeen": 1521771528, "time": 1521771600, "coinsPerMin": 1.3241101293724766e-06, "reportedHashrate": 25752099, "currentHashrate": 32222222.222222224, "unconfirmed": null}


Which shows that currently I’m making 0.0006 $/minute, so I’ll be rich very, very soon!
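To put that in perspective, here's a quick bit of Python arithmetic using the usdPerMin value from the sample output above (and assuming the unpaid field is denominated in wei, which the values suggest):

```python
# Scale the per-minute USD estimate from the sample response up to a day,
# and convert the unpaid balance (assumed to be in wei) to ETH.
usd_per_min = 0.0006877163189934768
unpaid_wei = 6366263118749017

usd_per_day = usd_per_min * 60 * 24
unpaid_eth = unpaid_wei / 1e18

print(round(usd_per_day, 2))  # roughly a dollar a day
print(round(unpaid_eth, 4))   # a little over 0.006 ETH unpaid
```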

Now all I needed was to get this into Grafana. My current database of choice has been InfluxDB, mostly because that is what I've been using, and the current collector of choice is Telegraf.

So I:

  1. Setup influxdb
  2. Created a database for telegraf
  3. Created a write user for telegraf
  4. Setup telegraf
  5. Configured telegraf to use its user and write to influxdb

With all that done (that is the basic setup needed for Grafana, and I will probably cover it some other time), I needed a Telegraf collector for Ethermine.

I moved my ethermine.py script to /usr/local/sbin and then ran:

chown telegraf ethermine.py

This might not be the best practice, but it made the script runnable by telegraf.

Then I set up an exec config file for ethermine.py in /etc/telegraf.d/ called ethermine.conf:

[[inputs.exec]]
command = "/usr/local/sbin/ethermine.py"
data_format = "json"
interval = "120s"
name_suffix = "-ethermine"

This is pretty straightforward: it tells telegraf to call ethermine.py every 2 minutes (checking the nice API documents shows this is the most often they update the data), to expect the data in JSON format, and to append -ethermine to ‘exec’ so that the data shows up as a separate entry in the from selection in Grafana.

Once you have the config file in place test it:

sudo -u telegraf telegraf --config ethermine.conf --test

This should give you a nice line like:

* Plugin: inputs.exec, Collection 1
* Internal: 2m0s
> exec-ethermine,host=ubuntu staleShares=0,activeWorkers=1,reportedHashrate=25873123,usdPerMin=0.0006867597567563183,averageHashrate=26057870.370370366,invalidShares=0,lastSeen=1521771782,btcPerMin=0.00000008182049517386047,currentHashrate=30000000,time=1521772200,coinsPerMin=0.0000013243848360935654,unpaid=6379763169623989,validShares=27 1521772669000000000

That way you know it's working. Then restart the telegraf service:

sudo service telegraf restart

Now all you have to do is set up some queries that you like in Grafana. Connect it to your InfluxDB (set up a read user first); I then built some queries like the following:

Hash Rate

Set up a couple of graphs on your dashboard, sit back, and watch your miner rake in the dough 🙂

Mining

Finding a VM by MAC address

Sometimes you only have a MAC address. Whether you are starting from a DHCP log, a DNS entry, or some other source, occasionally you have less info than you would like. If you find yourself with only a MAC address and a bunch of VMs to dig through, then PowerCLI can help you find the machine you want. It might also give you some tools to audit your environment and make sure everything is actually exactly as you expect it to be.

Once in PowerCLI and connected to vCenter, a simple command will list all network adapters in our vCenter:

Get-NetworkAdapter -VM *

It is then just a matter of filtering this output to match the MAC address we have:

Get-NetworkAdapter -VM * | Where {$_.MacAddress -eq "00:50:56:B2:2E:D9"}

Now that you have the adapter for the virtual machine you want, you can get the VM itself by expanding the parent attribute:

Get-NetworkAdapter -VM * | Where {$_.MacAddress -eq "00:50:56:B2:2E:D9"} | SELECT -expand parent | FT *

You now have all the attributes of the parent machine you could want; maybe just select Name, VMHost, and Notes to narrow it down so you can get right to your target machine.

Get-NetworkAdapter -VM * | Where {$_.MacAddress -eq "00:50:56:B2:2E:D9"} | SELECT -expand parent | SELECT name, vmhost, notes

As a bonus, when using this method we can switch the where clause out and hunt for partial MAC addresses:

Get-NetworkAdapter -VM * | Where {$_.MacAddress -like "00:50:56:B2:*:D9"} | SELECT -expand parent | SELECT name, vmhost, notes

Or, if you want to find the IP address of the guest, you can use Get-VMGuest:

Get-NetworkAdapter -VM * | Where {$_.MacAddress -like "00:50:56:B2:*:D9"} | SELECT -expand parent | Get-VMGuest
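Outside of PowerCLI, the same wildcard-style matching is easy to sketch in Python with fnmatch (the address list here is made up for illustration):

```python
# Wildcard matching on MAC addresses, similar to PowerShell's -like operator.
from fnmatch import fnmatchcase

macs = ["00:50:56:B2:2E:D9", "00:50:56:B2:10:D9", "00:0C:29:AA:BB:CC"]
hits = [m for m in macs if fnmatchcase(m.upper(), "00:50:56:B2:*:D9")]
# hits contains the two addresses sharing the prefix and the final octet
```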

I hope this helps someone else in their late-night hunt for a rogue machine (or machines) 🙂

References:

http://terenceluk.blogspot.com/2013/11/finding-virtual-machine-in-vmware.html
https://www.vmguru.com/2016/04/powershell-friday-getting-vm-network-information/


Query Microsoft DHCP Scopes

Sometimes you have a ton of DHCP scopes and need to make sure they all have some specific options set the way you want. Scanning through them by hand can be a pain, so here is a quick script to scan over them rapidly.

Param(
 [Parameter(Mandatory=$True)]
 [string]$dnsServer,
 [string]$match,
 [string]$option
)
 $scopes = Get-DhcpServerv4Scope -ComputerName $dnsServer -ErrorAction:SilentlyContinue | Where {$_.Name -like "*$match*"}
 $Report = @()

ForEach ($scope In $scopes) {
 $row = "" | Select ScopeID, Name, Option
 $OptionData = (Get-DhcpServerv4OptionValue -OptionID $option -ScopeID $scope.ScopeID -ComputerName $dnsServer -ErrorAction:SilentlyContinue).Value
 $row.ScopeID = $scope.ScopeID
 $row.Name = $scope.Name
 $row.Option = $OptionData -Join ","
 $Report += $row
 }
$Report


This script takes a few parameters: dnsServer lets you specify the server, match lets you filter down to specific scopes by name, and option lets you specify the option ID you would like to report on. Some usage examples:

# report on each scope's gateway where the scope name contains "vlan110"
.\dhcp_query.ps1 -dnsServer dhcpServer1 -match vlan110 -option 3

# report on each scope's DNS servers where the scope name contains "vlan110"
.\dhcp_query.ps1 -dnsServer dhcpServer1 -match vlan110 -option 6

# report and then export to a CSV
.\dhcp_query.ps1 -dnsServer dhcpServer1 -match vlan110 -option 6 | Export-CSV -Path dns_voip_options.csv -NoTypeInformation