
b.l.o.g.

(blogs let others gawk)

January 23, 2014

An example of random not so random

Filed under: Grody Hacks — Bryan @ 6:00 pm

So… this is a carry-over from my last post. Now that I had a proof of concept confirming the viability of the data collection I’m doing, it was time to make something a little more automated on my Linux servers.

So, I’ve got about 20-30 servers that all ping a single point. The ping takes four samples and gives me the average of the results. I then take that average and do a wget post to a central server, where a Perl script accepts the post and drops it to a .log file (I don’t have a database option on that server, so a quick disk write to a unique file is the brain-dead option that gives me the least risk of concurrency issues and open-file headaches).

Now, I know I’m dealing with a small number of servers, but to really ensure that the test itself isn’t polluting the results, I decided to add a couple of randomized sleep steps to the Bash script that does the ping/wget, as seen below:

# pull the average out of ping's "rtt min/avg/max/mdev = ..." summary line
regex='= [^/]*/([0-9]+\.[0-9]+).*ms'

# reseed $RANDOM with this script's PID so each server gets a different sequence
RANDOM=$$
interval1=$(( $RANDOM % 50 + 10 ))   # 10-59 seconds
sleep $interval1

[[ $(/bin/ping -q -c 4 127.0.0.1) =~ $regex ]]

(Note: I’m using localhost here, but in the live script it points to the actual server.)

So, when this runs, it sleeps 10-59 seconds, pings the target with 4 packets, and grabs the average from ping’s summary line.
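The average ends up in the first capture group of the regex match, so picking it up looks something like this (the avg_ms variable name here is just for illustration, not from the original script):

# BASH_REMATCH[1] holds the first capture group from the =~ match: the average RTT in ms
avg_ms="${BASH_REMATCH[1]}"
echo "average: ${avg_ms} ms"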

Then I do another sleep like the one above and send the results via wget to the remote collector.
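Roughly, that second stage looks something like the sketch below. The collector URL and the post field names are made up for illustration (the real target is the Perl collector mentioned above), and it reuses the avg_ms variable from the snippet earlier:

# stagger the upload the same way the ping was staggered
interval2=$(( $RANDOM % 50 + 10 ))
sleep $interval2

# post hostname + average to the collector (URL and field names are placeholders)
wget -q -O /dev/null --post-data "host=$(hostname)&avg=${avg_ms}" \
    "http://collector.example.com/cgi-bin/collect.pl"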

I’ll also note that the Bash script is run by cron on each of the servers, and all of them pull time synchronization from the same source. So in theory, these scripts are launching at the exact same time on each of the servers.
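For the sake of illustration, the cron entry on each box is something along these lines (the actual schedule and script path aren’t shown in the post):

# run the collection script at the top of every hour, identically on every server
0 * * * * /usr/local/bin/ping_collect.sh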

The uninitiated would intuitively think that with random sleeps, the chance of two wget posts landing at the exact same time, down to the second, would be rare if not impossible. But sure enough, by default, each run consistently produces multiple log drops with identical timestamps.

Why? Well, random isn’t really random from one computer to the next unless you reset the seed value before you use it (and a lot of people go to exceptional efforts to come up with highly convoluted methods to do just that). In this case, true randomness isn’t critical, so I’m reseeding $RANDOM with the PID of the process running the script. That changes on every execution, and the chance of two servers being handed the same PID at the same moment is remote, which is good enough for what I need here.
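You can see the underlying problem for yourself: give $RANDOM the same seed on two different boxes and the “random” numbers come out identical. A quick illustration (assuming the same bash version on both machines):

# same seed -> same sequence for a given bash version
RANDOM=12345
echo $RANDOM $RANDOM $RANDOM    # prints the same three numbers on both boxes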

Now I’ve got staggered activity.

Cheers!