TheMadIndian

About TheMadIndian

A little background about me: I’ve officially been an IT professional since 1995, but unofficially I’ve been around computers my whole life and have been writing simple DOS programs since DOS 2.2. I was a die-hard Windows guy until 2007, when I finally succumbed to Linux out of need. I’ve never looked back; I’ve fallen in love with Linux, especially Fedora and Slackware. My posts will be mostly centered around Windows, Linux and FreeBSD. In my day-to-day life I run three infrastructure teams: Information Security, Operations and IT Support. I look forward to hearing from you.

DOS shell one liner for enumerating IIS 7 and up sites


I was looking for a way to quickly compare IIS sites across my servers, so I turned to my old friend the DOS shell script and wrote a quick one-liner

FOR /F "tokens=2 delims= " %i IN ('"%windir%\system32\inetsrv\appcmd.exe list site"') DO FOR %A IN (%windir%\system32\inetsrv\appcmd) DO %A list site %i /config >> sites-%computername%.txt

The script grabs the field I want (the 2nd field, the site name), enumerates the sites on the IIS server, then dumps each site’s configuration and bindings and writes the results to a file. The script appends to the file, so make sure to rename or delete an existing one to avoid confusion. I may update the script to do that itself, but for the meantime…
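If you want that cleanup now, a minimal sketch (assuming the one-liner is run from a cmd prompt in the directory where the report lands) is to delete any old report first, suppressing the error if it doesn’t exist:

del sites-%computername%.txt 2>nul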

 

awk scripts for web server reporting and troubleshooting

Here are some scripts that will help us in our daily analysis of our servers

A few things we care about, and that the business will generally ask for, are how the sites and servers are performing. In the absence of tools like Splunk, AWStats or similar, these scripts can come in handy
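For reference, in a hypothetical Apache combined-format line like the one below (space-delimited), $4 is the timestamp, $6 the request method (with its leading quote), $7 the URL, $9 the status code and $11 the referrer; switching the field separator to the double quote makes $6 the user agent:

127.0.0.1 - frank [05/Dec/2013:03:12:45 -0500] "GET /index.html HTTP/1.1" 200 2326 "http://www.example.com/start.html" "Mozilla/5.0"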

This awk script expects the Apache format and uses the 9th field to count the status codes for all hits

awk '{count[$9]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total status codes"}' /logfile |sort -n

This awk script uses the 7th field and provides the top requested URLs

awk '{count[$7]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total urls"}' /logfile |sort -n

This awk script uses the 11th field and provides the top referring URLs

awk '{count[$11]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total urls"}' /logfile |sort -n

This awk script changes the field separator to the double quote to capture the whole user agent and provides a count of the top agents

awk -F \" '{count[$6]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total agents"}' /logfile |sort -n

This awk script uses the 4th field to provide hits per second

awk '{count[$4]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total hits"}' /logfile |sort -n

This awk script changes the field separator to the colon to isolate the hour field and provides a count of hits by hour

awk -F \: '{count[$2]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total hits"}' /logfile |sort -n

Now reporting is nice but being able to troubleshoot is better

Let’s say you ran the status codes script and you notice more 500 status codes than you’d like, and you want to see when they’re occurring.
This one-liner combines two awk scripts: the first part looks for any 50x error (500, 501, 503 and so on), and that output is piped to our count-by-hour script, giving a breakdown of when most of our errors are occurring

awk '($9 ~ /50./)' /logfile | awk -F \: '{count[$2]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total 500s"}'|sort -n

Let’s say you’ve now found a high concentration of errors at 3 AM and want to see which URLs are causing them.
This awk script checks field 9 for any 500-type error AND field 4 for the 3 AM hour, then counts and produces the top erroring URLs during that time

awk '($9 ~ /50./) && ($4 ~ /\[05\/Dec\/2013\:03/){count[$7]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total errors"}' /logfile |sort -n

What if I wanted to see which URLs didn’t fail but were successful, redirected or missing? We simply negate the status match in awk; reusing the previous script makes it easy to see the difference

awk '!($9 ~ /50./) && ($4 ~ /\[05\/Dec\/2013\:03/){count[$7]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total non-errors"}' /logfile |sort -n
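Rather than hard-coding the date and hour in the pattern, a hedged variant of the error script passes them in with awk’s -v option (the hour string below is just an example) and lets awk treat the variable as a dynamic regex:

awk -v hour='05/Dec/2013:03' '($9 ~ /50./) && ($4 ~ hour){count[$7]++}END{for (j in count) {print count[j], j; total+=count[j]} print total" Total errors"}' /logfile |sort -n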

These scripts should provide a good basis for getting the information the business people need and for giving you insight into how your sites and systems are performing. All of the scripts are based on the Apache log format and use the default space delimiter unless otherwise noted.

Happy scripting

 

awk – Hits by hour for apache and IIS log format

I’ve done this many different ways, but this has to be the most efficient way I’ve found to get hits by hour, and I get to use the power of awk

The script sets the delimiter to : and counts the occurrences of the 2nd field, which is the hour field; I add a little word formatting for the date of the file I’m working with and sort the output

Apache log format

awk -F":" '{count[$2]++}END{for (j in count) {print "11-25-13 hour "j, count[j]" hits"; total+=count[j]} print "Total hits="total}' /log/file |sort

and in about an eighth of the time it used to take me mixing several commands, I now have this output

11-25-13 hour 00 34278 hits
11-25-13 hour 01 28582 hits
11-25-13 hour 02 27139 hits
11-25-13 hour 03 219542 hits
11-25-13 hour 04 33612 hits
11-25-13 hour 05 29900 hits
11-25-13 hour 06 36313 hits
11-25-13 hour 07 48721 hits
11-25-13 hour 08 60941 hits
11-25-13 hour 09 77082 hits
11-25-13 hour 10 99376 hits
11-25-13 hour 11 142141 hits
11-25-13 hour 12 191163 hits
11-25-13 hour 13 218150 hits
11-25-13 hour 14 238086 hits
11-25-13 hour 15 236122 hits
11-25-13 hour 16 268599 hits
11-25-13 hour 17 224519 hits
11-25-13 hour 18 220107 hits
11-25-13 hour 19 223004 hits
11-25-13 hour 20 182992 hits
11-25-13 hour 21 180524 hits
11-25-13 hour 22 182396 hits
11-25-13 hour 23 173174 hits
Total hits=3376463
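If you’d rather not hard-code the date in the label, a rough variant of the same idea (against the same assumed log file) splits the timestamp field on the [ delimiter so the date comes straight from the log:

awk -F'[' '{split($2, t, ":"); count[t[1]" hour "t[2]]++} END {for (j in count) {print j, count[j]" hits"; total+=count[j]} print "Total hits="total}' /log/file |sort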

IIS is slightly different due to its log format: again I set the delimiter to :, but this time I count the 1st field, apply different word formatting and sort on the 2nd column of the output
IIS Log format

sudo awk -F":" '{count[$1]++}END{for (j in count) {print j" hour", count[j]" hits"; total+=count[j]} print "Total hits="total}' /log/file |sort -k 2

Produces this output

2013-11-25 00 hour 59259 hits
2013-11-25 01 hour 55567 hits
2013-11-25 02 hour 51763 hits
2013-11-25 03 hour 44487 hits
2013-11-25 04 hour 43262 hits
2013-11-25 05 hour 46869 hits
2013-11-25 06 hour 33495 hits
2013-11-25 07 hour 24929 hits
2013-11-25 08 hour 20310 hits
2013-11-25 09 hour 18445 hits
2013-11-25 10 hour 17211 hits
2013-11-25 11 hour 22776 hits
2013-11-25 12 hour 26052 hits
2013-11-25 13 hour 36508 hits
2013-11-25 14 hour 38920 hits
2013-11-25 15 hour 43963 hits
2013-11-25 16 hour 45905 hits
2013-11-25 17 hour 41755 hits
2013-11-25 18 hour 42276 hits
2013-11-25 19 hour 41531 hits
2013-11-25 20 hour 41101 hits
2013-11-25 21 hour 44790 hits
2013-11-25 22 hour 45541 hits
2013-11-24 23 hour 18 hits
2013-11-25 23 hour 49687 hits
Total hits=936444

Good luck and happy reporting

awk – Who is connected, how many connections do they have and what is their connection state

I’ve been busy tuning Varnish to prevent outages, and that makes me take a much closer look at things and at how to get through them quickly, especially when working with farms of servers. I noticed in my Cacti graphs a huge rise in FIN_WAIT_1 connection states on my FreeBSD servers, to the tune of 25x the number of established connections, so I decided to dig in and see what’s going on. What I came up with should provide some nice insight for many different scenarios.

First, how I determined my ratios: I know which connection states I’m looking for, so I create an array with those values; I know the state is the 6th field and I’d like a count of where each stands. This is the one-liner

connstates=( FIN_WAIT_1 FIN_WAIT_2 TIME_WAIT ESTABLISHED SYN_RCVD LAST_ACK); for a in ${connstates[@]}; do netstat -n |awk '($6 = /'"${a}"'/){freq['"${a}"']++} END {for (x in freq) {print freq[x], "'"${a}"'", "\n" }}' |sort -nr;done

I added the newline to the print to clean up the output

767 FIN_WAIT_1 

103 FIN_WAIT_2

866 TIME_WAIT

54 ESTABLISHED

220 SYN_RCVD

7 LAST_ACK

Now that I have that, I want to know who’s connected and how many connections they have by connection state; this one-liner takes care of that

connstates=( FIN_WAIT_1 FIN_WAIT_2 TIME_WAIT ESTABLISHED SYN_RCVD LAST_ACK); for a in ${connstates[@]}; do echo -e "\n" $a; netstat -n |awk '($6 = /'"${a}"'/){print $5}'|awk -F \. '{$5 = ""; gsub(/\ /, "."); gsub(/.$/,""); print}' |awk '{freq[$0]++} END {for (x in freq) {print freq[x], x }}' |sort -nr;done

and that produces an output like this

FIN_WAIT_1
627 67.198.71.114
390 64.90.159.102
264 89.241.213.178
221 174.108.76.131
71 186.32.54.114

FIN_WAIT_2
5 66.66.133.117
5 65.0.139.23
5 173.169.40.34
5 166.147.120.16
4 96.246.240.75

TIME_WAIT
98 10.10.10.20
68 10.10.10.22
60 10.10.10.23
60 10.10.10.24
49 10.10.10.25

ESTABLISHED
13 186.32.54.114
9 98.255.88.48
7 64.90.159.102
6 70.208.78.19
5 98.204.42.34

SYN_RCVD
22 108.45.28.123
2 12.160.201.226
1 98.255.88.48

LAST_ACK
6 71.215.198.128
2 66.104.42.194
1 80.58.205.48
1 41.79.120.19
1 121.120.116.205

Some of these gave me a hard time, so let’s take a closer look.
I’ve covered the gotchas of using a bash array variable in an earlier blog, so I won’t cover that again. What I did discover is that awk was quietly modifying the output: the pattern uses = (assignment) rather than ~ (match), so awk replaces the 6th field with the 0-or-1 result of matching the line against the regex. I didn’t understand why I was getting this strange frequency output until I broke down what was happening, and that’s when I saw what awk was doing

netstat -n |awk '($6 = /FIN_WAIT_1/)' |head -n 5
tcp4 0 532 10.10.10.169.80 173.175.47.91.55257 1
tcp4 0 532 10.10.10.169.80 173.175.47.91.55256 1
tcp4 0 533 10.10.10.169.80 108.45.28.123.49944 1
tcp4 0 0 10.10.10.169.80 113.197.8.162.41647 1
tcp4 0 0 10.10.10.169.80 113.197.8.162.2879 1

Huh? I thought maybe it was how netstat behaved and I had just never noticed

 netstat -n |grep FIN_WAIT_1 |head -n 5
tcp4       0  24606 10.10.10.169.80      199.224.118.221.62975  FIN_WAIT_1
tcp4       0   3579 10.10.10.169.80      74.12.184.206.1323     FIN_WAIT_1
tcp4       0    533 10.10.10.169.80      173.175.47.91.57498    FIN_WAIT_1
tcp4       0    533 10.10.10.169.80      173.175.47.91.57497    FIN_WAIT_1
tcp4       0    533 10.10.10.169.80      173.175.47.91.57496    FIN_WAIT_1

Nope

OK, so now to account for this we’ve passed the bash variable in and accounted for it in the search. In the print section I’d normally use x, since it’s my variable for the for loop, but x was returning a blank field

connstates=( FIN_WAIT_1 FIN_WAIT_2 TIME_WAIT ESTABLISHED SYN_RCVD LAST_ACK); for a in ${connstates[@]};  do netstat -n |awk '($6 = /'"${a}"'/){freq['"${a}"']++} END {for (x in freq) {print freq[x], x, "\n" }}' |sort -nr;done
2513  

167  

789  

128  

213  

6

This is no good, so I try passing the bash variable the same way I did in the search

connstates=( FIN_WAIT_1 FIN_WAIT_2 TIME_WAIT ESTABLISHED SYN_RCVD LAST_ACK); for a in ${connstates[@]};  do netstat -n |awk '($6 = /'"${a}"'/){freq['"${a}"']++} END {for (x in freq) {print freq[x], '"${a}"', "\n" }}' |sort -nr;done
2495  

230  

892  

149  

124  

6

Same thing. Then it hit me: maybe it’s not being passed as a string, so I added double quotes

connstates=( FIN_WAIT_1 FIN_WAIT_2 TIME_WAIT ESTABLISHED SYN_RCVD LAST_ACK); for a in ${connstates[@]};  do netstat -n |awk '($6 = /'"${a}"'/){freq['"${a}"']++} END {for (x in freq) {print freq[x], "'"${a}"'", "\n" }}' |sort -nr;done
2563 FIN_WAIT_1 

194 FIN_WAIT_2 

901 TIME_WAIT 

108 ESTABLISHED 

158 SYN_RCVD 

5 LAST_ACK

Perfect

The next thing I hadn’t had to deal with before on Linux is that FreeBSD doesn’t use : to separate the port from the IP address; it uses a .
I used gsub to clean up my formatting

The bash variable looks for one of my states. I know I want the 5th field, so I print it and pipe the output; to strip the port I blank the 5th dot-separated field and use gsub to put the dots back. The 1st gsub leaves an additional period at the end of the address, so I add a 2nd gsub that looks for the period at the end of the line, removes it and then prints the output

netstat -n |awk '($6 = /'"${a}"'/){print $5}'|awk -F \. '{$5 = ""; gsub(/\ /, "."); gsub(/.$/,""); print}'
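As an alternative, untested sketch for a single state (it could be dropped into the same for loop), awk’s sub() can strip the trailing port in one pass, and comparing the state with == avoids the field-reassignment quirk covered above:

netstat -n | awk -v st="FIN_WAIT_1" '$6 == st {ip=$5; sub(/\.[0-9]+$/, "", ip); freq[ip]++} END {for (x in freq) print freq[x], x}' | sort -nr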

I hope you’ve found this useful and find ways to expand and modify it for your needs

awk – getting status code by request type

I started off wanting to see what my request types were, and then grew curious about the status codes for those request types. This is searching my Varnish logs, but they are in Apache format.

Since I like awk… I started by searching the 6th field for GET request types
awk '($6 ~ /GET/)' varnishlog |wc -l
While this is nice, it’s very manual and doesn’t give me a whole lot of useful info beyond every request that’s a GET

So let’s take it a step further, using an awk array to count frequencies
awk '{freq[$6]++} END {for (x in freq) {print x, freq[x]}}' varnishlog
This produces a nicer output and tells me how many of each request type I got, but it doesn’t tell me the status of those requests

Let’s take it another step further: we’ll search the 9th field for 200 statuses, report against the request type, and add numeric sorting on the 2nd output field
awk '($9 ~ /200/){freq[$6]++} END {for (x in freq) {print x, freq[x]}}' varnishlog |sort -nr -k 2

Well, now we can see that not all of our requests are 200s, so let’s get a clear view of what’s happening. I’ll create an array of the status codes I care about and move to a for loop; because of the array I have to take special care using it in the search field, and I also add a newline to awk’s output to make it easier to read
statuscodes=( 200 301 302 400 403 404 410 500 501 503 );for a in ${statuscodes[@]};do echo $a statuses;awk '($9 ~ /'"${a}"'/){freq[$6]++} END {for (x in freq) {print x, freq[x], "\n"}}' varnishlog |sort -nr -k 2;done
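As a rough alternative sketch against the same varnishlog file, a composite array key gets every request-type/status combination in a single pass (the gsub just strips the leading quote from the method field):

awk '{m=$6; gsub(/"/, "", m); freq[m" "$9]++} END {for (x in freq) print freq[x], x}' varnishlog |sort -nr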

 

Working with squid access files and the timestamp

I’m not a big fan of perl (just a personal preference) and I always look for ways that I can minimize its use.

To normalize the timestamp field most people pipe to this and it works quite well for them

|perl -p -e 's/^([0-9]*)/"[".localtime($1)."]"/e'

I’m a big fan of awk, and whenever possible I like to use it exclusively, so instead of piping to the Perl statement above we can use awk like this

| awk '{timestamp=strftime("%D",$1); $1=""; print timestamp$0}'

I use a capital D for the format, being an American. Yes, I know it’s not internationally friendly; for the available time formats you can look here: http://www.gnu.org/software/gawk/manual/html_node/Time-Functions.html
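For example, assuming a hypothetical Squid log at /var/log/squid/access.log and gawk installed (strftime is a gawk function), the full pipeline might look like this, with %T adding the time of day:

awk '{timestamp=strftime("%D %T",$1); $1=""; print timestamp$0}' /var/log/squid/access.log | tail -n 5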

 

Adding static routes in FreeBSD

Adding routes with the route command is easy, but the routes don’t survive a reboot. To make a route static (persistent), we add an entry to the /etc/rc.conf file

This
route add 192.168.1.0/24 192.168.3.1

becomes this in /etc/rc.conf
static_routes="net1"
route_net1="-net 192.168.1.0/24 192.168.3.1"

or if you have multiple routes, they are space delimited like this
static_routes="net1 net2"
route_net1="-net 192.168.1.0/24 192.168.3.1"
route_net2="-net 192.168.2.0/24 192.168.3.1"
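To pick up the new entries without rebooting, one option (a sketch assuming the stock rc.d scripts; note that restarting routing briefly flushes and re-adds routes) is:

sudo service routing restart
# or just add the route by hand for the current session; rc.conf will re-add it at boot
sudo route add -net 192.168.1.0/24 192.168.3.1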

Upgrading from FreeBSD 9.0 to 9.1

This weekend I was having an issue updating Chromium (Chrome for FreeBSD) on my FreeBSD laptop. I finally succumbed, went on the boards, and was told that my version, 9.0, was EOL. Crap!! Since I have three 9.0 systems at the house, I taught myself how to upgrade to 9.1 and vetted the upgrade against my other two machines. The biggest gotcha I found was to make sure the system is as up to date as possible (the one where I couldn’t update Chromium); I left another system with 35 outstanding patches, and let’s just say things didn’t go well on that one. Luckily it was a VM, and I was smart enough to clone the system that gave me a problem, so I was able to test this a few times.

Here are the steps I took to perform the upgrade

#Update your ports tree
sudo portsnap fetch update

#Determine what packages require updating
sudo portversion |grep -v \=

#Update all out-of-date packages, or pass individual packages to portupgrade
sudo portupgrade -rRv `portversion |grep -v \=`

#Prepare the upgrade
sudo freebsd-update fetch install

#If patches were installed reboot
sudo shutdown -r now

#Upgrade to 9.1 release
sudo freebsd-update upgrade -r 9.1-RELEASE
sudo freebsd-update install

#Reboot to the new kernel
sudo shutdown -r now

#Update needs to run again against userland components
sudo freebsd-update install

#I hear that with 8.X you have to run the last command again to rebuild 3rd-party packages, but it reported no packages when I went from the 9.0 to the 9.1 kernel; I'll be testing an upgrade from 8.3 later this week.

#Reboot to the 9.1 release
sudo shutdown -r now