Author Archives: stujordan

A hacked-together bash script to scan for Heartbleed-vulnerable services

Here’s a script that scans networks for services vulnerable to the Heartbleed OpenSSL bug. It uses nmap to find pingable hosts and list their open ports, then runs the hb_test.py script against each port to check whether it’s vulnerable.

Grab the hb_test.py script here.

Edit the network variable in the script to change the list of networks to scan.

#!/bin/bash

# This is the string we're looking for in the python script output
pattern="server is vulnerable"

#Networks to scan
network="192.168.0.0/24 192.168.1.0/24"

echo -ne "Scanning network(s) $network                                                 \r"

#Use NMAP to find the IPs that ping in the networks listed
network_ips=(`nmap -n -sn -PE $network | grep "Nmap scan" | awk {'print $5'} | awk 1 ORS=' '`)

for ip in "${network_ips[@]}"
do
        echo -ne "NMAP port scan on $ip                                              \r"
        # Get a list of open ports on the IP address
        ports=(`nmap -sT $ip | grep open | awk '{print $1}' | awk -F '/' '{print $1}' | awk 1 ORS=' '`)
        # $? would only reflect the last awk in the pipeline, so test the array instead
        if [[ ${#ports[@]} -gt 0 ]] ; then
                for p in "${ports[@]}"
                do
                        echo -ne "Scanning $ip on port $p                            \r"
                        # If the port is 25 then let's try STARTTLS
                        if [[ $p -eq 25 ]] ; then
                                echo -ne "Scanning $ip on port $p (smtp)             \r"
                                output=`timeout 1 python ./hb_test.py $ip -s -p $p 2>&1`
                        else
                                # Otherwise just do normal SSL
                                output=`timeout 1 python ./hb_test.py $ip -p $p 2>&1`
                        fi
                        # Check if the python script reported the vulnerability
                        if echo "$output" | grep -q "$pattern"; then
                                echo "$ip - port $p - VULNERABLE"
                        fi
                done
        fi
done
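The nmap-parsing trick used above can be seen in isolation. A minimal sketch using canned nmap-style output instead of a live scan (the sample output lines are made up for illustration):

```shell
# Simulate the "open ports" parsing from the script, using canned
# nmap-style output rather than a real scan.
nmap_output="PORT    STATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http"

# Keep lines containing "open", take the port number before the slash
ports=$(echo "$nmap_output" | grep open | awk -F '/' '{print $1}')
echo $ports    # → 22 25 80
```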


OSSIM directive taxonomy settings do not update / save

When you try to edit the Taxonomy settings for a user-generated directive in OSSIM, the changes do not save; the page refreshes and shows the old settings.

This happened for me when I upgraded to 4.3.4.

To fix it, clear out the taxonomy values for the directive in the alarm_taxonomy table and then re-enter them in the web GUI. The problem seems to be that OSSIM adds a second row to the table rather than updating the existing one.

1. SSH to the OSSIM box holding the mysql database
2. Backup your database before editing the tables
3. Then run:

ossim-db
select * from alarm_taxonomy WHERE sid like '5000%';

This should list the taxonomy rows for your generated directives (their sids all start with 5000). For the exact sids check the /etc/ossim/server/<GUID>/user.xml file.

Now clear the problem directive that won’t update (for example, sid 500010):

delete from alarm_taxonomy WHERE sid='500010';

Now open the web interface and the taxonomy for that directive should have cleared. Now edit it and set it correctly and restart the ossim-server by clicking on the button at the top.

Your taxonomy settings should have updated OK.
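If you want to see which directives have picked up duplicate rows before deleting anything, a query along these lines should show them. This is a sketch; it assumes ossim-db accepts SQL on stdin, as in the session above:

```shell
# Build a duplicate-finder query; user directive sids start with 5000
sql="select sid, count(*) from alarm_taxonomy where sid like '5000%' group by sid having count(*) > 1;"
echo "$sql"
# On the OSSIM box, pipe it into the local database client:
# echo "$sql" | ossim-db
```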


Alienvault OSSIM: Asset page broken after upgrading to 4.4

After upgrading OSSIM to 4.4.0 (or 4.4.1) the Asset section may show the error:

Operation was not completed due to an database error

If you then check the status of the table on the CLI you’ll find the table is missing!

alienvault:~# ossim-db
mysql> select * from asset limit 1;
ERROR 1146 (42S02): Table 'alienvault.asset' doesn't exist
mysql> quit

To resolve it, re-run the SQL upgrade scripts, which should recreate the table (albeit empty):

cd /usr/share/ossim/include/upgrades
gunzip 4.4.0_mysql.sql.gz
gunzip 4.4.1_mysql.sql.gz
ossim-db < 4.4.0_mysql.sql
ossim-db < 4.4.1_mysql.sql

Then reload the Assets page and it should work.


Logstash: Received an event that has a different character encoding

When using logstash you may see an error like this one:

Received an event that has a different character encoding than you configured. {:text=>"1.2.3.4\\t\\\"www.google.com\\\"\\t-\\t-\\t[01/Feb/2014:11:45:56 +0000]\\t\\\"-\\\"\\t\\\"GET /index.html\\xA0 HTTP/1.1\\\"\\t404\\t14015\\t\\\"80778000924267169,0:1:1\\\"\\tN\\t0.041725\\t0.040730\\t0.000695", :expected_charset=>"UTF-8", :level=>:warn}

This is because the default charset is UTF-8 and the incoming message contained a character that is not valid UTF-8, for example:

\xA0                 non-breaking space
\xA3                 £

To fix this, set the correct charset on the input using a codec. For example, for an Apache access log read from a file:

file {
        path => "/var/log/http/access_log"
        type => "apache_access_log"
        codec => plain {
                charset => "ISO-8859-1"
        }
        stat_interval => 60
}

For a full list of supported charsets, see the Logstash documentation.
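Before changing the codec you can confirm the input really does contain non-UTF-8 bytes. One quick check is to round-trip a line through iconv, which refuses invalid input (the sample line here is fabricated to demonstrate):

```shell
# Demo: a log line containing a Latin-1 non-breaking space (0xA0)
# is not valid UTF-8, so iconv exits non-zero on it.
printf 'GET /index.html\240 HTTP/1.1\n' > /tmp/demo_line
if iconv -f UTF-8 -t UTF-8 /tmp/demo_line > /dev/null 2>&1; then
  echo "valid UTF-8"
else
  echo "contains non-UTF-8 bytes"
fi
```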


Mimicking a multi-stage update server for RHEL servers on Classic

One of Red Hat Satellite’s functions is update release management. Unfortunately it is also quite expensive. If you have a Classic subscription you can mimic the update release features (albeit not the security errata) through some scripting.

This solution uses a server registered with Red Hat on a Classic subscription, so it’s not quite free. That server can reposync all the updates for its architecture and major release version (e.g. RHEL 5 x86_64).

1. Register the server you want to use as the reposync server on Redhat Classic and assign the correct channel.

2. Use the following command to create a new repo:

reposync -p /mypath/mirror/ --repoid=rhel-x86_64-server-5 -l -n

This syncs the whole of Red Hat’s rhel-x86_64-server-5 repository to the /mypath/mirror folder. You can change the path and repoid as you please. The -n switch means only the newest packages are synced on each run, so you can schedule this in your crontab to run regularly.
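Since -n only pulls new packages, the sync is cheap to repeat. A root crontab entry along these lines keeps the mirror current (the time of day and log path are my own choices, not from the original setup):

```shell
# Root's crontab: refresh the local mirror nightly at 02:30
30 2 * * * reposync -p /mypath/mirror/ --repoid=rhel-x86_64-server-5 -l -n >> /var/log/reposync.log 2>&1
```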

3. Now create a point-in-time snapshot of all the updates contained in the mirror. To do that you can use lndir. Here is my rhn_create_snapshot.sh script:

#!/bin/sh

baseRepoPath=/mypath

function usage()
{
 echo "This script creates a new snapshot folder for yum updates"
 echo
 echo "rhn_create_snapshot.sh --environment=MYGROUP"
 echo "-h --help Show this help"
 echo "--environment=MYGROUP Set the environment to create the repo for"
 echo
}
while [ "$1" != "" ]; do
 PARAM=`echo $1 | awk -F= '{print $1}'`
 VALUE=`echo $1 | awk -F= '{print $2}'`
 case $PARAM in
 -h | --help)
 usage
 exit
 ;;
 --environment)
 currEnvironment=$VALUE
 ;;
 *)
 echo "ERROR: unknown parameter \"$PARAM\""
 usage
 exit 1
 ;;
 esac
 shift
done
if [ "$currEnvironment" == "" ]; then
 echo "Environment not set properly. Exiting."
 usage
 exit 1
fi
currDate=$(date +%Y%m%d-%H%M)

echo "Creating yum snapshot for environment $currEnvironment in $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages"

echo "Creating new folder for the snapshot to go in"
mkdir -p $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages

echo "Creating new folder for the repodata to go in"
mkdir -p $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/repodata

echo "Creating a directory of links in the snapshot folder"
lndir $baseRepoPath/mirror/rhel-x86_64-server-5/getPackage/ $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages

echo "Creating a repo for the client to read"
createrepo $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate

echo "Creating a link for the client to the snapshot directory"
if [ -e $baseRepoPath/updates/$currEnvironment ]; then
 rm -f $baseRepoPath/updates/$currEnvironment
fi

ln -s $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate $baseRepoPath/updates/$currEnvironment

echo "All done"
echo "Please use the following yum config in your yum.conf:"
echo
echo "[update]"
echo "gpgcheck=0"
echo "name=Red Hat Linux Updates"
echo "baseurl=http://<servername>/updates/$currEnvironment"
echo

If you run

sh ./rhn_create_snapshot.sh --environment=MYGROUP

it creates a folder structure like this:

mypath
       mirror
             rhel-x86_64-server-5
                    getPackage
                            <list of reposync'd packages>
       updates
            MYGROUP -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
       snapshots
            rhel-x86_64-server-5
                    2013-12-11-1104
                            packages
                                   <directory of symbolic links to /mypath/mirror/rhel-x86_64-server-5/getPackage>
                            repodata

Each time the rhn_create_snapshot.sh script is run it creates a new dated snapshot directory under /mypath/snapshots/rhel-x86_64-server-5 containing a packages folder full of symbolic links to the files that were in /mypath/mirror/rhel-x86_64-server-5/getPackage at the time it was run. A repo is then created over this set of links. The only space a snapshot takes up is for the repodata files, which is about 200MB.

So now you can roll patches through groups of servers by setting the baseurl in their yum.conf to point to different updates/MYGROUP. For example:

TEST -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
UAT -> /mypath/snapshots/rhel-x86_64-server-5/2013-09-11-1001
PROD -> /mypath/snapshots/rhel-x86_64-server-5/2013-05-17-1630

The PROD group could be getting a different set of packages when yum update is run than UAT or TEST. All you need to do to release the TEST packages to UAT is change the symbolic link. Using the above example:

rm -f /mypath/updates/UAT
ln -s /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104 /mypath/updates/UAT

which now gives:

TEST -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
UAT -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
PROD -> /mypath/snapshots/rhel-x86_64-server-5/2013-05-17-1630

Now going to a UAT machine and running yum update should list the new updates. You could no doubt script that if you wanted.
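The symlink swap above is easy to wrap in a small helper. This is a sketch with hypothetical names (promote, the environment variable layout) built on the directory structure shown earlier; the demo runs against a throwaway directory tree:

```shell
# Hypothetical helper: point an environment's updates link at a snapshot.
promote() {
  base=$1; env=$2; snap=$3
  target="$base/snapshots/rhel-x86_64-server-5/$snap"
  [ -d "$target" ] || { echo "No such snapshot: $target" >&2; return 1; }
  rm -f "$base/updates/$env"
  ln -s "$target" "$base/updates/$env"
  echo "$env -> $target"
}

# Demo against a throwaway directory tree
base=$(mktemp -d)
mkdir -p "$base/snapshots/rhel-x86_64-server-5/20131211-1104" "$base/updates"
promote "$base" UAT 20131211-1104
```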


“Couldn’t install on USB storage or SD card” error on kitkat-based ROM

When trying to install some applications on a Kitkat (4.4) ROM (for example OmniRom) you may get the following error:

Couldn't install on USB storage or SD card

This may be caused by SELinux blocking access to the SD card. It happens on KitKat because the SELinux mode defaults to Enforcing, whereas on Jelly Bean 4.3 it was Permissive.

To resolve this you can change the policy back to permissive by installing the “SELinux Mode Changer” app from the play store:

https://play.google.com/store/apps/details?id=com.mrbimc.selinux

You will of course need to be rooted, but since you’ve installed a custom ROM I guess you already are.

  1. Run the app and click on Permissive
  2. Install the app or restore the apk as normal
  3. Re-enable enforcing mode if you wish

Simple btrfs snapshot script

I bought a Raspberry Pi some time ago and have been using it as my backup device via a USB disk. My other machines use rsync to back up their data to the Pi’s /mnt/backupdisk. Rather than keeping multiple full copies on the same disk I decided to use btrfs snapshots. I looked around for a script to do this in the same way Time Machine, Back In Time or rsnapshot would, but couldn’t find one, so I’ve written a simple script.

What it does:

  • Creates a snapshot every hour which is overwritten a day later
  • Creates a snapshot every day which is overwritten a week later
  • Creates a snapshot every week which is overwritten a year later
  • Creates a snapshot on the 1st of the month which is overwritten a year later
  • Creates a snapshot on the 1st of January every year which is kept forever

I guess you’ll be thinking that this could end up chewing up disk space as I’ll be keeping very old snapshots. Whether you want to do that is up to you. I have plenty of disk space and my data doesn’t change all that often.

The script:

#!/bin/bash
if [ $# -eq 0 ]
 then
 echo "Syntax: btrfs_snap.sh Hourly|Daily|Weekly|Monthly|Yearly|list"
 exit 1
fi
# Hourly Snaps (24)
if [ "$1" == "Hourly" ]
 then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Hourly-$(date +%H)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Hourly-$(date +%H)
fi
#Daily Snaps (7)
if [ "$1" == "Daily" ]
 then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Daily-$(date +%a)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Daily-$(date +%a)
fi
#Weekly Snaps (52)
if [ "$1" == "Weekly" ]
 then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Weekly-$(date +%V)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Weekly-$(date +%V)
fi
#Monthly Snaps (12)
if [ "$1" == "Monthly" ]
 then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Monthly-$(date +%m)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Monthly-$(date +%m)
fi
#Yearly Snaps (1 per year)
if [ "$1" == "Yearly" ]
 then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Yearly-$(date +%Y)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Yearly-$(date +%Y)
fi
#List Snaps
if [ "$1" == "list" ]
 then
 /sbin/btrfs subvolume list /mnt/backupdisk/
fi

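The rotation falls out of the snapshot names: each period draws from a fixed pool of names, so a new snapshot simply replaces the one that used that name last time around. A sketch of the names the date formats generate:

```shell
# Each date format yields a fixed, repeating pool of names, so old
# snapshots are overwritten once the pool wraps around.
name_hourly="Hourly-$(date +%H)"   # 24 names, reused every day
name_daily="Daily-$(date +%a)"     # 7 names, reused every week
name_weekly="Weekly-$(date +%V)"   # ~53 names, reused every year
name_yearly="Yearly-$(date +%Y)"   # a new name each year, kept forever
echo "$name_hourly $name_daily $name_weekly $name_yearly"
```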
Also you’ll want to schedule it in root’s crontab:

sudo crontab -e
#BTRFS snaps of /mnt/backupdisk--------------------------------------------------
0 * * * * /bin/bash /scripts/btrfs_snap.sh Hourly >> /var/log/btrfs_snap.log 2>&1
5 0 * * * /bin/bash /scripts/btrfs_snap.sh Daily >> /var/log/btrfs_snap.log 2>&1
10 0 * * 0 /bin/bash /scripts/btrfs_snap.sh Weekly >> /var/log/btrfs_snap.log 2>&1
15 0 1 * * /bin/bash /scripts/btrfs_snap.sh Monthly >> /var/log/btrfs_snap.log 2>&1
20 0 1 1 * /bin/bash /scripts/btrfs_snap.sh Yearly >> /var/log/btrfs_snap.log 2>&1
#-------------------------------------------------------------------------------

It logs to /var/log/btrfs_snap.log.

The snapshots are fully writable and can be found under /mnt/backupdisk/.snapshot/

You can check your snapshots using

# sudo /scripts/btrfs_snap.sh list
ID 415 top level 5 path .snapshot/Hourly-20
ID 416 top level 5 path .snapshot/Hourly-21
ID 417 top level 5 path .snapshot/Hourly-22
ID 418 top level 5 path .snapshot/Hourly-23
ID 419 top level 5 path .snapshot/Hourly-00
ID 420 top level 5 path .snapshot/Daily-Mon
ID 421 top level 5 path .snapshot/Hourly-01
ID 422 top level 5 path .snapshot/Hourly-02
ID 423 top level 5 path .snapshot/Hourly-03
ID 424 top level 5 path .snapshot/Hourly-04
ID 425 top level 5 path .snapshot/Hourly-05
ID 426 top level 5 path .snapshot/Hourly-06
ID 427 top level 5 path .snapshot/Hourly-07
ID 428 top level 5 path .snapshot/Hourly-08
ID 429 top level 5 path .snapshot/Hourly-09

How to run Alienvault OSSIM 4.2 in (custom) text mode

This is also a fix for

  1. GUI installer hanging on “Configure network” when you try and enter the IP address
  2. Configuring disk setup
  3. Selecting which components to install

These options were available in 4.1 but were removed from the boot menu of the installer in 4.2.

The options are still there though. To run the custom text installer do the following:

  1. Boot from the OSSIM 4.2 CD
  2. At the installer menu highlight USM 4.2 (the top one)
  3. Press the TAB key
  4. Edit the kernel boot line so it shows as (all one line)
/install.amd/vmlinuz preseed/file=/cdrom/preseed debian/priority=low preseed/interactive=true vga=normal initrd=/install.amd/initrd.gz quiet ALLinONEauto --

  5. Hit Enter to boot into custom text mode.

For the lazy out there you can also:

  1. Put the 4.1 installer CD in the CDROM and boot to the menu.
  2. Swap the CD over and put in the 4.2 CD
  3. Select custom text mode from the menu

It’ll then boot.

Q.E.D?


Script to control NVidia GPU fan under linux using nvclock

This is a modified version of a script I found at http://www.linuxjournal.com/content/nvidia-fan-speed-revisited. The original didn’t work for me, so I edited it a little.

This script tries to keep the GPU temperature within 1°C of 54°C. The maximum fan speed is 100% and the minimum 20%. It steps the fan speed up or down 5% at a time, checking every 5 seconds, until the target temperature is reached.
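The adjust-and-clamp step can be isolated into a tiny helper; the function name here is illustrative, not from the original script:

```shell
# Step a fan speed by a delta, clamped to the 20-100% range the
# script enforces.
step_fan() {
  new=$(( $1 + $2 ))
  [ "$new" -gt 100 ] && new=100
  [ "$new" -lt 20 ] && new=20
  echo "$new"
}

step_fan 98 5     # clamps to 100
step_fan 22 -5    # clamps to 20
```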

You will need to install the proprietary driver and nvclock from your distribution’s repository.

I have added this script to my rc.local to run at boot time in the background, and I find it works well.

/bin/sh $SCRIPTDIR/nvclock_fan.sh >> /var/log/messages 2>&1 &

The script:

#!/bin/bash
#
# Adjust fan speed automatically.
# This version by StuJordan
# Based on Version by DarkPhoinix
# Original script by Mitch Frazier

# Location of the nvclock program.
nvclock_bin=/usr/bin/nvclock

# Target temperature for video card.
target_temp=54

# Value used to calculate the temperature range (+/- target_temp).
target_range=1

# Time to wait before re-checking.
sleep_time=5

# Minimum fan speed.
min_fanspeed=20

# Fan speed increment.
adj_fanspeed=5

if [ "$1" ]; then target_temp=$1; fi

target_temp_low=$(expr $target_temp - $target_range)
target_temp_high=$(expr $target_temp + $target_range)

while true
do
    temp_val=$(echo $($nvclock_bin --info | grep -i 'GPU temperature' | cut -d ':' -f 2) | cut -d C -f 1)
#    pwm_val=$(echo $($nvclock_bin --info | grep -i 'PWM' | cut -d ':' -f 2) | cut -d " " -f 1)
     pwm_val=$(echo $($nvclock_bin --info | grep -i 'PWM' | cut -d ':' -f 2 | cut -d " " -f 2 | cut -d "%" -f 1 | cut -d "." -f 1))

    echo "Current temp is $temp_val. Current pwm is $pwm_val"
    echo "Target temp high is $target_temp_high and low is $target_temp_low"

    if [ $temp_val -gt $target_temp_high ]; then
        echo "Temperature too high"

        # Temperature above target, see if the fan has any more juice.

        if [ $pwm_val -lt 100 ]; then
            echo "Increasing GPU fan speed, temperature: $temp_val"
            pwm_val=$(expr $pwm_val + $adj_fanspeed)
            if [ $pwm_val -gt 100 ]; then pwm_val=100; fi
            $nvclock_bin -f --fanspeed $pwm_val
        fi
    elif [ $temp_val -lt $target_temp_low ]; then

        # Temperature below target, lower the fan speed
        # if we're not already at the minimum.

        if [ $pwm_val -gt $min_fanspeed ]; then
            echo "Decreasing GPU fan speed, temperature: $temp_val"
            pwm_val=$(expr $pwm_val - $adj_fanspeed)
            if [ $pwm_val -lt $min_fanspeed ]; then pwm_val=$min_fanspeed; fi
            $nvclock_bin -f --fanspeed $pwm_val
        fi
    fi
    sleep $sleep_time
done

Scheduled backup over SCP fails to logon when configured in SPLAT Web GUI

In R75, when you create a backup job using the SCP method you may find that it fails to log on to the SCP server. If you check the logs they will show that the username/password failed.

This is because when you schedule the backup job in the web GUI it saves the password incorrectly in /var/CPbackup/conf/backup_sched.conf.

To fix this, schedule the job from the CLI instead, which saves the password correctly.

1. SSH from an allowed host to the management server

2. Schedule the backup:

backup -l --sched on 07:00 -w 1 --scp <server IP> <username> <password>

3. Check that it worked OK in the GUI.
