Category Archives: linux

rancid-run exits immediately after upgrade to 3.x from 2.x

After upgrading I had an issue where rancid-run exited straight away after launching, and as a result no device configurations were collected. The following is seen in the latest log file in the var/logs directory:

starting: Wed Oct 29 12:30:15 GMT 2014

ending: Wed Oct 29 12:30:15 GMT 2014

It turns out that in version 3 the delimiter in router.db has changed from a colon (:) to a semicolon (;). This avoids problems with IPv6 addresses, which contain colons.

Old format of router.db:

hostname:cisco:up

New format:

hostname;cisco;up
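If you have a long router.db, a one-liner like this converts the old format (a sketch; it assumes your entries contain no other colons, so back the file up first):

```shell
# Convert a v2 colon-delimited entry to the v3 semicolon format.
# On the real file you would run:  sed -i.bak 's/:/;/g' router.db
echo "host1:cisco:up" | sed 's/:/;/g'
```

This prints `host1;cisco;up`; the `-i.bak` form edits router.db in place and keeps a backup copy.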


A hacked together bash script to scan for heartbleed vulnerable services

Here’s a script that scans networks for services vulnerable to the Heartbleed OpenSSL bug. It uses NMAP to find pingable hosts and the open network ports on each, then runs the test script against each port to check whether it’s vulnerable.

Grab the script here.

Edit the network variable in the script to change the list of networks to scan.


#!/bin/bash

# This is the string we're looking for in the python script output
pattern="server is vulnerable"

# Networks to scan (edit this to suit your environment)
network="192.168.1.0/24"

echo -ne "Scanning network(s) $network                                                 \r"

# Use NMAP to find the IPs that ping in the networks listed
network_ips=(`nmap -n -sn -PE $network | grep "Nmap scan" | awk '{print $5}' | awk 1 ORS=' '`)

for ip in "${network_ips[@]}"
do
        echo -ne "NMAP port scan on $ip                                              \r"
        # Get a list of open ports on the IP address
        ports=(`nmap -sT $ip | grep open | awk '{print $1}' | awk -F '/' '{print $1}' | awk 1 ORS=' '`)
        rc=$?
        if [[ $rc -eq 0 ]] ; then
                for p in "${ports[@]}"
                do
                        echo -ne "Scanning $ip on port $p                            \r"
                        # If the port is 25 then let's try STARTTLS
                        if [[ $p -eq 25 ]] ; then
                                echo -ne "Scanning $ip on port $p (smtp)             \r"
                                output=`timeout 1 python ./ $ip -s -p $p 2>&1`
                        else
                                # Otherwise just do normal SSL
                                output=`timeout 1 python ./ $ip -p $p 2>&1`
                        fi
                        # Check if the text output matched
                        if [ "x" != "x`echo $output | grep "$pattern"`" ]; then
                                echo "$ip - port $p - VULNERABLE"
                        fi
                done
        fi
done

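As a cross-check, recent versions of nmap ship their own NSE detection script for this bug, so (assuming a new enough nmap with the ssl-heartbleed script installed) you can scan a network directly without the python helper:

```shell
# Requires a recent nmap (6.46+) with the ssl-heartbleed NSE script
nmap -sV --script ssl-heartbleed 192.168.1.0/24
```

Vulnerable ports are flagged in the script output under an "ssl-heartbleed" section.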

Tagged ,

Mimicking a multi-stage update server for RHEL servers on Classic

One of Red Hat Satellite’s functions is update release management. Unfortunately it is also quite expensive. If you have Classic you can mimic the update features (albeit not the security errata) through some scripting.

This solution uses a server which is registered with Redhat with a Classic subscription so it’s not quite free. The server is able to reposync all the updates for its architecture and major release version (e.g. RHEL 5 x86_64).

1. Register the server you want to use as the reposync server on Redhat Classic and assign the correct channel.

2. Use the following command to create a new repo:

reposync -p /mypath/mirror/ --repoid=rhel-x86_64-server-5 -l -n

This syncs the whole of Red Hat’s rhel-x86_64-server-5 repository to the /mypath/mirror folder. You can change the path and repoid as you please. The -n switch means only the newest packages are synced on each run, so you can schedule this in your crontab to run regularly.
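For example, a nightly crontab entry might look like this (a sketch; adjust the time and paths to suit):

```shell
# Sync the newest packages from RHN every night at 02:30
30 2 * * * /usr/bin/reposync -p /mypath/mirror/ --repoid=rhel-x86_64-server-5 -l -n
```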

3. Now you want to create a point-in-time snapshot of all the updates contained in the mirror. To do that you can use lndir. Here is my script:



#!/bin/bash

# Base path that the mirror and snapshots live under
baseRepoPath=/mypath

function usage()
{
 echo "This script creates a new snapshot folder for yum updates"
 echo "Usage: $0 --environment=MYGROUP"
 echo "-h --help Show this help"
 echo "--environment=MYGROUP Set the environment to create the repo for"
}

while [ "$1" != "" ]; do
 PARAM=`echo $1 | awk -F= '{print $1}'`
 VALUE=`echo $1 | awk -F= '{print $2}'`
 case $PARAM in
 -h | --help)
 usage
 exit
 ;;
 --environment)
 currEnvironment=$VALUE
 ;;
 *)
 echo "ERROR: unknown parameter \"$PARAM\""
 usage
 exit 1
 ;;
 esac
 shift
done

if [ "$currEnvironment" == "" ]; then
 echo "Environment not set properly. Exiting."
 exit 1
fi

currDate=$(date +%Y%m%d-%H%M)

echo "Creating yum snapshot for environment $currEnvironment in $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages"

echo "Creating new folder for the snapshot to go in"
mkdir -p $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages

echo "Creating new folder for the repodata to go in"
mkdir -p $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/repodata

echo "Creating a directory of links in the snapshot folder"
lndir $baseRepoPath/mirror/rhel-x86_64-server-5/getPackage/ $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages

echo "Creating a repo for the client to read"
createrepo $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate

echo "Creating a link for the client to the snapshot directory"
if [ -e $baseRepoPath/updates/$currEnvironment ]; then
 rm -f $baseRepoPath/updates/$currEnvironment
fi

ln -s $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate $baseRepoPath/updates/$currEnvironment

echo "All done"
echo "Please use the following yum config in your yum.conf:"
echo "[update]"
echo "gpgcheck=0"
echo "name=Red Hat Linux Updates"
echo "baseurl=http://<servername>/updates/$currEnvironment"

If you run the script, it creates a folder structure like this:

/mypath/mirror/rhel-x86_64-server-5/getPackage/
        <list of reposync'd packages>
/mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104/packages/
        <directory of symbolic links to /mypath/mirror/rhel-x86_64-server-5/getPackage/<filename>>
/mypath/updates/MYGROUP -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104

Each time the script is run it creates a new dated snapshot directory under /mypath/snapshots/rhel-x86_64-server-5 containing a packages folder full of symbolic links to the files that were in /mypath/mirror/rhel-x86_64-server-5/getPackage at the time it was run. A repo is then created from this list of files. The only space the snapshot takes up is for the repodata files, which is about 200MB.

So now you can roll patches through groups of servers by setting the baseurl in their yum.conf to point at a different updates/MYGROUP link. For example:

TEST -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
UAT -> /mypath/snapshots/rhel-x86_64-server-5/2013-09-11-1001
PROD -> /mypath/snapshots/rhel-x86_64-server-5/2013-05-17-1630

The PROD group could be getting a different set of packages when yum update is run than TEST or UAT. All you need to do to release the TEST packages to UAT is change the symbolic link. Using the above example:

rm -f /mypath/updates/UAT
ln -s /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104 /mypath/updates/UAT

which now gives:

TEST -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
UAT -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
PROD -> /mypath/snapshots/rhel-x86_64-server-5/2013-05-17-1630

Now going to a UAT machine and running yum update should list the new updates. You could no doubt script that if you wanted.
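That promotion step is easy to script. Here's a minimal sketch (the `promote` function and its name are mine, not from the original setup): it repoints one environment's updates symlink at the snapshot another environment currently uses.

```shell
# promote <base-dir> <src-env> <dst-env>
# e.g. promote /mypath/updates TEST UAT
promote() {
    local base="$1" src="$2" dst="$3"
    local target
    # Find the snapshot the source environment points at
    target=$(readlink "$base/$src") || return 1
    # Replace the destination link with one to the same snapshot
    rm -f "$base/$dst"
    ln -s "$target" "$base/$dst"
    echo "$dst -> $target"
}
```

Running `promote /mypath/updates TEST UAT` then gives UAT exactly the snapshot TEST was tested against.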

Tagged , , , , , ,

Simple btrfs snapshot script

I bought a Raspberry Pi some time ago and have been using it as my backup device via a USB disk. My other machines use rsync to back up their data to the Pi’s /mnt/backupdisk. Instead of running backups to the same disk I decided to use btrfs and snapshotting. I looked around for a script to do this for me in the same way as Time Machine, Back In Time or rsnapshot would, but couldn’t find one, so I’ve written a simple script.

What it does:

  • Creates a snapshot every hour, which is overwritten a day later (24 kept)
  • Creates a snapshot every day, which is overwritten a week later (7 kept)
  • Creates a snapshot every week, which is overwritten a year later (52 kept)
  • Creates a snapshot on the 1st of the month, which is overwritten a year later (12 kept)
  • Creates a snapshot on the 1st of January every year, which is kept forever

I guess you’ll be thinking that this could end up chewing up disk space as I’ll be keeping very old snapshots. Whether you want to do that is up to you. I have plenty of disk space and my data doesn’t change all that often.

The script:

#!/bin/bash
if [ $# -eq 0 ]
then
 echo "Syntax: Hourly|Daily|Weekly|Monthly|Yearly|list"
 exit 1
fi
# Hourly Snaps (24)
if [ "$1" == "Hourly" ]
then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Hourly-$(date +%H)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Hourly-$(date +%H)
fi
#Daily Snaps (7)
if [ "$1" == "Daily" ]
then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Daily-$(date +%a)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Daily-$(date +%a)
fi
#Weekly Snaps (52)
if [ "$1" == "Weekly" ]
then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Weekly-$(date +%V)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Weekly-$(date +%V)
fi
#Monthly Snaps (12)
if [ "$1" == "Monthly" ]
then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Monthly-$(date +%m)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Monthly-$(date +%m)
fi
#Yearly Snaps (1 per year)
if [ "$1" == "Yearly" ]
then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Yearly-$(date +%Y)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Yearly-$(date +%Y)
fi
#List Snaps
if [ "$1" == "list" ]
then
 /sbin/btrfs subvolume list /mnt/backupdisk/
fi

Also you’ll want to schedule it in root’s crontab:

sudo crontab -e
#BTRFS snaps of /mnt/backupdisk--------------------------------------------------
0 * * * * /bin/bash /scripts/ Hourly >> /var/log/btrfs_snap.log 2>&1
5 0 * * * /bin/bash /scripts/ Daily >> /var/log/btrfs_snap.log 2>&1
10 0 * * 0 /bin/bash /scripts/ Weekly >> /var/log/btrfs_snap.log 2>&1
15 0 1 * * /bin/bash /scripts/ Monthly >> /var/log/btrfs_snap.log 2>&1
20 0 1 1 * /bin/bash /scripts/ Yearly >> /var/log/btrfs_snap.log 2>&1

It logs to /var/log/btrfs_snap.log.

The snapshots are fully writable and can be found under /mnt/backupdisk/.snapshot/
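Restoring a file is just a copy back out of the snapshot tree, e.g. `cp /mnt/backupdisk/.snapshot/Daily-Mon/some/file /mnt/backupdisk/some/file` (hypothetical path). The same idea, demonstrated on throwaway directories standing in for the snapshot and live trees:

```shell
# "snap" stands in for a snapshot dir, "live" for the live backup tree
snap=$(mktemp -d); live=$(mktemp -d)
echo "old version" > "$snap/file"
echo "bad edit"    > "$live/file"
# Roll the live copy back to the snapshot's version
cp "$snap/file" "$live/file"
cat "$live/file"
```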

You can check your snapshots using

# sudo /scripts/ list
ID 415 top level 5 path .snapshot/Hourly-20
ID 416 top level 5 path .snapshot/Hourly-21
ID 417 top level 5 path .snapshot/Hourly-22
ID 418 top level 5 path .snapshot/Hourly-23
ID 419 top level 5 path .snapshot/Hourly-00
ID 420 top level 5 path .snapshot/Daily-Mon
ID 421 top level 5 path .snapshot/Hourly-01
ID 422 top level 5 path .snapshot/Hourly-02
ID 423 top level 5 path .snapshot/Hourly-03
ID 424 top level 5 path .snapshot/Hourly-04
ID 425 top level 5 path .snapshot/Hourly-05
ID 426 top level 5 path .snapshot/Hourly-06
ID 427 top level 5 path .snapshot/Hourly-07
ID 428 top level 5 path .snapshot/Hourly-08
ID 429 top level 5 path .snapshot/Hourly-09
Tagged ,

Script to control NVidia GPU fan under linux using nvclock

This is a modified script I found online. The original didn’t work for me, so I edited it a little to work under my shell.

This script tries to keep the GPU temperature within 1C of 54C. The maximum fan speed is 100% and the minimum 20%. It steps the fan up or down 5% at a time, at 5-second intervals, until the target temperature is reached.
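The control rule boils down to a small function. Here's a sketch of just that logic (the `next_pwm` function name and structure are mine) that you can sanity-check in isolation: given the current temperature and fan speed, it returns the next fan speed.

```shell
# next_pwm <temp> <pwm>: apply the 54C +/- 1C band, 5% steps, 20-100% clamp
next_pwm() {
    local temp=$1 pwm=$2
    local target=54 range=1 min=20 step=5
    if [ "$temp" -gt $((target + range)) ] && [ "$pwm" -lt 100 ]; then
        pwm=$((pwm + step))
        [ "$pwm" -gt 100 ] && pwm=100
    elif [ "$temp" -lt $((target - range)) ] && [ "$pwm" -gt "$min" ]; then
        pwm=$((pwm - step))
        [ "$pwm" -lt "$min" ] && pwm=$min
    fi
    echo "$pwm"
}
```

For example `next_pwm 60 50` steps the fan up to 55, while anything inside the 53-55C band leaves the speed unchanged.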

You will need to install the proprietary driver and nvclock from your distribution’s repository.

I have added this script to my rc.local so it runs in the background at boot-time, and I find it works well.

/bin/sh /$SCRIPTDIR/ >> /var/log/messages 2>&1 &

The script:

#!/bin/sh

# Adjust fan speed automatically.
# This version by StuJordan
# Based on Version by DarkPhoinix
# Original script by Mitch Frazier

# Location of the nvclock program (adjust for your system).
nvclock_bin=/usr/bin/nvclock

# Target temperature for video card.
target_temp=54

# Value used to calculate the temperature range (+/- target_temp).
target_range=1

# Time to wait before re-checking.
sleep_time=5

# Minimum fan speed.
min_fanspeed=20

# Fan speed increment.
adj_fanspeed=5

if [ "$1" ]; then target_temp=$1; fi

target_temp_low=$(expr $target_temp - $target_range)
target_temp_high=$(expr $target_temp + $target_range)

while true
do
    temp_val=$(echo $($nvclock_bin --info | grep -i 'GPU temperature' | cut -d ':' -f 2) | cut -d C -f 1)
    pwm_val=$(echo $($nvclock_bin --info | grep -i 'PWM' | cut -d ':' -f 2 | cut -d " " -f 2 | cut -d "%" -f 1 | cut -d "." -f 1))

    echo "Current temp is $temp_val. Current pwm is $pwm_val"
    echo "Target temp high is $target_temp_high and low is $target_temp_low"

    if [ $temp_val -gt $target_temp_high ]; then
        echo "Temperature too high"

        # Temperature above target, see if the fan has any more juice.
        if [ $pwm_val -lt 100 ]; then
            echo "Increasing GPU fan speed, temperature: $temp_val"
            pwm_val=$(expr $pwm_val + $adj_fanspeed)
            if [ $pwm_val -gt 100 ]; then pwm_val=100; fi
            $nvclock_bin -f --fanspeed $pwm_val
        fi
    elif [ $temp_val -lt $target_temp_low ]; then

        # Temperature below target, lower the fan speed
        # if we're not already at the minimum.
        if [ $pwm_val -gt $min_fanspeed ]; then
            echo "Decreasing GPU fan speed, temperature: $temp_val"
            pwm_val=$(expr $pwm_val - $adj_fanspeed)
            if [ $pwm_val -lt $min_fanspeed ]; then pwm_val=$min_fanspeed; fi
            $nvclock_bin -f --fanspeed $pwm_val
        fi
    fi
    sleep $sleep_time
done
Tagged , ,

Updating snort and openvas rules

OpenVAS and Snort rules in AlienVault OSSIM are deployed as part of the AlienVault updates. However, you can update them more frequently, directly from the OpenVAS and Snort repositories.

Openvas Plugin Update Script

Most of this is directly from the AlienVault configuration guide, but in assorted places. Here's the script to update the OpenVAS rules:

openvas-nvt-sync --wget
/etc/init.d/openvas-scanner restart
perl /usr/share/ossim/scripts/vulnmeter/ migrate

Save this as a .sh file, then chmod it to 700 with owner root:

chmod 700 
chown root.root

Then add to root’s crontab:

crontab -e

and add the following line:

0 3 * * 6 /bin/sh /scripts/

This entry runs weekly, on Saturday at 3am. For more info on editing crontab see here.

Snort Plugin Update Script

Here’s the script to update snort:

perl /usr/share/ossim/scripts/ /etc/snort/rules/
/etc/init.d/ossim-server restart

If the box is just a Snort collector and doesn't have the ossim-server running, you'll want to change that last line to read:

/etc/init.d/snort restart

or:

/etc/init.d/snort_eth1 restart

Where eth1 is the interface snort is attached to.

Then edit crontab again and add in the line:

0 4 * * 6 /bin/sh /scripts/

This one runs every Saturday at 4am.

Tagged , , , , , , , , , ,

Changing default editor from Joe’s Own Editor to vim in Alienvault OSSIM

Personally, I find the default editor in AlienVault OSSIM a real pain to use, especially when I'm trying to edit crontabs.

Here’s a quick way to change it:

1. Logon to the system via SSH with the account you want to change the editor on

2. Open the .bashrc for that user:

vi ~/.bashrc

3. At the bottom add in

export EDITOR=vi

4. Then hit the escape key and type

:wq

to save.

If you want nano, just use nano in place of vi. The next time you open your crontab it will use your preferred editor (e.g. vi or nano).
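On Debian-based systems such as OSSIM you can also change the system-wide default through the alternatives system (a different approach from the per-user EDITOR variable; it affects every tool that respects the alternatives mechanism):

```shell
# Interactively pick the system-wide default editor (run as root)
update-alternatives --config editor
```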


Tagged , , , , , ,

Churning the butter

I recently had a problem on my test box running btrfs where it would not boot up past the grub menu. I managed to boot off a live CD and found that I couldn’t mount the boot disk either and it gave the error:

mount: wrong fs type, bad option, bad superblock on /dev/sda1, or too many mounted file systems

Looking in dmesg I found this error over and over:

parent transid verify failed on 5413130240 wanted 22358 found 22337

It appears that the filesystem had become corrupted after a "power failure". Running fsck -t btrfs /dev/sda1 returned no errors and found no problems.

It turns out the fsck tools bundled with Ubuntu 11.04 (and even those downloadable from the btrfs website) weren't able to fix this specific issue.

I did, however, manage to fix it by checking out a copy of the btrfs-tools git repository and using the latest files from the lead developer's git. Even though his last change was some time back, I guess it hadn't been merged into the main tree.

;a=commit;h=70c6c10134b502fa69955746554031939b85fb0c

So, the fix:

Boot from the live CD, then:

cd ~
mkdir btrfs-tools
cd btrfs-tools
git clone git://

Download btrfs-select-super.c, disk-io.c and disk-io.h from the link above and drop them into the cloned folder.

cd into the cloned folder and run:

./configure
make all
make btrfs-select-super
chmod 750 ./btrfs-select-super
./btrfs-select-super -s 1 /dev/sda1

where “1” is the second superblock copy (zero being the default which has become corrupted) and /dev/sda1 the target disk/partition.

Once you’ve done this run ./btrfsck /dev/sda1 and reboot. If all is well your PC should now boot.

If that doesn’t work try superblock 2.

./btrfs-select-super -s 2 /dev/sda1


Update:

You might also like to try the patch in the mailing-list here and mount the filesystem as read-only. If it works, this would at least let you get your data off.
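On newer kernels, btrfs can also be told at mount time to fall back to a backup tree root, which achieves something similar to the superblock trick above (this assumes a kernel recent enough to support the option):

```shell
# Mount read-only using a backup tree root so data can be copied off
# (older kernels call this mount option "recovery" rather than "usebackuproot")
mount -o ro,usebackuproot /dev/sda1 /mnt
```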

Update 2:

Check out the new restore tool to recover files from the broken filesystem.

Tagged , , ,