Alienvault OSSIM: Asset page broken after upgrading to 4.4

After upgrading OSSIM to 4.4.0 (or 4.4.1) the Asset section may show the error:

Operation was not completed due to an database error

If you then check the status of the table on the CLI you’ll find the table is missing!

alienvault:~# ossim-db
mysql> select * from asset limit 1;
ERROR 1146 (42S02): Table 'alienvault.asset' doesn't exist
mysql> quit

To resolve, re-run the SQL upgrade scripts, which should recreate the table (albeit empty):

cd /usr/share/ossim/include/upgrades
gunzip 4.4.0_mysql.sql.gz
gunzip 4.4.1_mysql.sql.gz
ossim-db < 4.4.0_mysql.sql
ossim-db < 4.4.1_mysql.sql

Then reload the Assets page and it should work.


Logstash: Received an event that has a different character encoding

When using Logstash you may see an error like this one:

Received an event that has a different character encoding than you configured. {:text=>"\\t\\\"\\\"\\t-\\t-\\t[01/Feb/2014:11:45:56 +0000]\\t\\\"-\\\"\\t\\\"GET /index.html\\xA0 HTTP/1.1\\\"\\t404\\t14015\\t\\\"80778000924267169,0:1:1\\\"\\tN\\t0.041725\\t0.040730\\t0.000695", :expected_charset=>"UTF-8", :level=>:warn}

This is because the default charset is UTF-8 and the incoming message contained bytes that are not valid UTF-8, for example these ISO-8859-1 characters:

\xA0                 non-breaking space
\xA3                 £

To fix this, set the correct charset on the input's codec. For example, for an Apache access log written in ISO-8859-1:

file {
                path => "/var/log/http/access_log"
                type => "apache_access_log"
                stat_interval => 60
                codec => plain {
                        charset => "ISO-8859-1"
                }
}

For a full list of charset options you can use, see the Logstash plain codec documentation.
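As a quick sanity check that ISO-8859-1 is the right charset for your logs, you can reproduce the offending byte and its conversion with iconv (the sample request line is taken from the error above; `\240` is octal for 0xA0):

```shell
# The byte 0xA0 on its own is invalid UTF-8, but in ISO-8859-1 it is a
# non-breaking space; iconv re-encodes it as the two UTF-8 bytes c2 a0,
# which you can see in the hex dump of the converted line.
printf 'GET /index.html\240 HTTP/1.1' \
  | iconv -f ISO-8859-1 -t UTF-8 \
  | od -An -tx1 | tr -d ' \n'
```

If iconv errors out instead, the source data is not ISO-8859-1 and you should try another charset.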


Mimicking a multi-stage update server for RHEL servers on Classic

One of Redhat Satellite's functions is update release management. Unfortunately it is also quite expensive. If you have a Classic subscription you can mimic the update-release features (albeit not the security updates) through some scripting.

This solution uses a server registered with Redhat on a Classic subscription, so it's not quite free. That server is able to reposync all the updates for its architecture and major release version (e.g. RHEL 5 x86_64).

1. Register the server you want to use as the reposync server on Redhat Classic and assign the correct channel.

2. Use the following command to create a new repo:

reposync -p /mypath/mirror/ --repoid=rhel-x86_64-server-5 -l -n

This syncs the whole of Redhat's rhel-x86_64-server-5 repository to the /mypath/mirror folder. You can change the path and repoid as you please. The -n switch means only the newest packages are synced on each run, so you can schedule this in your crontab to run regularly.
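To keep the mirror current, the reposync command from step 2 can go straight into root's crontab. The timing and log path here are just examples:

```
# m h dom mon dow  command  (example: sync new packages nightly at 02:00)
0 2 * * * reposync -p /mypath/mirror/ --repoid=rhel-x86_64-server-5 -l -n >> /var/log/reposync.log 2>&1
```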

3. Now you want to create a point-in-time snapshot of all the updates contained in the mirror. To do that you can use lndir. Here is my script:



#!/bin/bash

# Base path for the mirror, snapshots and updates trees
baseRepoPath=/mypath

function usage() {
 echo "This script creates a new snapshot folder for yum updates"
 echo "Usage: $0 --environment=MYGROUP"
 echo "-h --help Show this help"
 echo "--environment=MYGROUP Set the environment to create the repo for"
}

while [ "$1" != "" ]; do
 PARAM=`echo $1 | awk -F= '{print $1}'`
 VALUE=`echo $1 | awk -F= '{print $2}'`
 case $PARAM in
  -h | --help)
   usage
   exit
   ;;
  --environment)
   currEnvironment=$VALUE
   ;;
  *)
   echo "ERROR: unknown parameter \"$PARAM\""
   usage
   exit 1
   ;;
 esac
 shift
done

if [ "$currEnvironment" == "" ]; then
 echo "Environment not set properly. Exiting."
 usage
 exit 1
fi

currDate=$(date +%Y%m%d-%H%M)

echo "Creating yum snapshot for environment $currEnvironment in $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages"

echo "Creating new folder for the snapshot to go in"
mkdir -p $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages

echo "Creating new folder for the repodata to go in"
mkdir -p $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/repodata

echo "Creating a directory of links in the snapshot folder"
lndir $baseRepoPath/mirror/rhel-x86_64-server-5/getPackage/ $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate/packages

echo "Creating a repo for the client to read"
createrepo $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate

echo "Creating a link for the client to the snapshot directory"
if [ -e $baseRepoPath/updates/$currEnvironment ]; then
 rm -f $baseRepoPath/updates/$currEnvironment
fi

ln -s $baseRepoPath/snapshots/rhel-x86_64-server-5/$currDate $baseRepoPath/updates/$currEnvironment

echo "All done"
echo "Please use the following yum config in your yum.conf:"
echo "[update]"
echo "gpgcheck=0"
echo "name=Red Hat Linux Updates"
echo "baseurl=http://<servername>/updates/$currEnvironment"

If you run the script with --environment=MYGROUP it creates a folder structure like this:

/mypath/mirror/rhel-x86_64-server-5/getPackage/
                            <list of reposync'd packages>
/mypath/updates/
            MYGROUP -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
/mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104/packages/
                                   <directory of symbolic links to /mypath/mirror/rhel-x86_64-server-5/getPackage/<filename>>

Each time the script is run it creates a new dated snapshot directory under /mypath/snapshots/rhel-x86_64-server-5 containing a packages folder full of symbolic links to the files that were in /mypath/mirror/rhel-x86_64-server-5/getPackage at the time it was run. A repo is then created over this list of files. The only space the snapshot takes up is for the repodata files, which come to about 200MB.

So now you can roll patches through groups of servers by pointing the baseurl in their yum.conf at different updates/MYGROUP links. For example:

TEST -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
UAT -> /mypath/snapshots/rhel-x86_64-server-5/2013-09-11-1001
PROD -> /mypath/snapshots/rhel-x86_64-server-5/2013-17-05-1630

The PROD group could be getting a different set of packages when yum update is run than TEST or UAT. All you need to do to release the TEST packages to UAT is change the symbolic link. Using the above example:

rm -f /mypath/updates/UAT
ln -s /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104 /mypath/updates/UAT

which now gives:

TEST -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
UAT -> /mypath/snapshots/rhel-x86_64-server-5/2013-12-11-1104
PROD -> /mypath/snapshots/rhel-x86_64-server-5/2013-17-05-1630

Now going to a UAT machine and running yum update should list the new updates. You could no doubt script that too if you wanted.
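Before pointing real servers at it, the link-swap release is easy to dry-run in a scratch directory. This sketch uses throwaway paths under mktemp rather than the real /mypath tree:

```shell
#!/bin/sh
# Dry run of the snapshot/link-swap release mechanics in a throwaway directory
tmp=$(mktemp -d)
mkdir -p "$tmp/snapshots/2013-09-11-1001" "$tmp/snapshots/2013-12-11-1104" "$tmp/updates"

# UAT starts out pointing at the older snapshot
ln -s "$tmp/snapshots/2013-09-11-1001" "$tmp/updates/UAT"

# "Release" the newer snapshot to UAT by swapping the link
rm -f "$tmp/updates/UAT"
ln -s "$tmp/snapshots/2013-12-11-1104" "$tmp/updates/UAT"

readlink "$tmp/updates/UAT"   # now points at the 2013-12-11-1104 snapshot
rm -rf "$tmp"
```

Because the swap is a single rm/ln pair, clients only ever see the old or the new snapshot, never a half-released state.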


“Couldn’t install on USB storage or SD card” error on kitkat-based ROM

When trying to install some applications on a Kitkat (4.4) ROM (for example OmniRom) you may get the following error:

Couldn't install on USB storage or SD card

This may be caused by SELinux blocking access to the SD card. It happens on Kitkat because the SELinux mode defaults to Enforcing, whereas on Jelly Bean (4.3) it was Permissive.

To resolve this you can change the policy back to Permissive by installing the "SELinux Mode Changer" app from the Play Store.

You will of course need to be rooted, but since you've installed a custom ROM I guess you are already.

  1. Run the app and click on Permissive
  2. Install the app or restore the apk as normal
  3. Re-enable enforcing mode if you wish

Simple btrfs snapshot script

I bought a Raspberry Pi some time ago and have been using it as my backup device via a USB disk. My other machines use rsync to back up their data to the Pi's /mnt/backupdisk. Rather than keeping multiple backup copies on the same disk I decided to use BTRFS and snapshotting. I looked around for a script to do this for me in the same way as Time Machine, Back In Time or rsnapshot would, but couldn't find one. So I've written a simple script.

What it does:

  • Creates a snapshot every hour, which is overwritten 24 hours later
  • Creates a snapshot every day, which is overwritten a week later
  • Creates a snapshot every week, which is overwritten a year later
  • Creates a snapshot on the 1st of the month, which is overwritten a year later
  • Creates a snapshot on the 1st of January every year, which is kept forever

I guess you’ll be thinking that this could end up chewing up disk space as I’ll be keeping very old snapshots. Whether you want to do that is up to you. I have plenty of disk space and my data doesn’t change all that often.

The script:

#!/bin/bash

if [ $# -eq 0 ]; then
 echo "Syntax: Hourly|Daily|Weekly|Monthly|Yearly|list"
 exit 1
fi

# Hourly Snaps (24)
if [ "$1" == "Hourly" ]; then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Hourly-$(date +%H)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Hourly-$(date +%H)
fi

#Daily Snaps (7)
if [ "$1" == "Daily" ]; then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Daily-$(date +%a)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Daily-$(date +%a)
fi

#Weekly Snaps (52)
if [ "$1" == "Weekly" ]; then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Weekly-$(date +%V)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Weekly-$(date +%V)
fi

#Monthly Snaps (12) - called from cron on the 1st of each month
if [ "$1" == "Monthly" ]; then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Monthly-$(date +%b)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Monthly-$(date +%b)
fi

#Yearly Snaps (1 per year)
if [ "$1" == "Yearly" ]; then
 /sbin/btrfs subvolume delete /mnt/backupdisk/.snapshot/Yearly-$(date +%Y)
 /sbin/btrfs subvolume snapshot /mnt/backupdisk/ /mnt/backupdisk/.snapshot/Yearly-$(date +%Y)
fi

#List Snaps
if [ "$1" == "list" ]; then
 /sbin/btrfs subvolume list /mnt/backupdisk/
fi

Also you’ll want to schedule it in root’s crontab:

sudo crontab -e
#BTRFS snaps of /mnt/backupdisk--------------------------------------------------
0 * * * * /bin/bash /scripts/ Hourly >> /var/log/btrfs_snap.log 2>&1
5 0 * * * /bin/bash /scripts/ Daily >> /var/log/btrfs_snap.log 2>&1
10 0 * * 0 /bin/bash /scripts/ Weekly >> /var/log/btrfs_snap.log 2>&1
15 0 1 * * /bin/bash /scripts/ Monthly >> /var/log/btrfs_snap.log 2>&1
20 0 1 1 * /bin/bash /scripts/ Yearly >> /var/log/btrfs_snap.log 2>&1

It logs to /var/log/btrfs_snap.log.

The snapshots are fully writable and can be found under /mnt/backupdisk/.snapshot/
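Because the snapshots are ordinary writable directories, restoring a lost file is just a copy out of the relevant snapshot. This sketch simulates that in a throwaway directory (the paths and filenames are hypothetical; on the real system you would copy out of /mnt/backupdisk/.snapshot/):

```shell
#!/bin/sh
# Simulated restore from a snapshot directory (throwaway paths)
tmp=$(mktemp -d)
mkdir -p "$tmp/backupdisk/.snapshot/Daily-Mon/documents" "$tmp/backupdisk/documents"
echo "important notes" > "$tmp/backupdisk/.snapshot/Daily-Mon/documents/notes.txt"

# The restore itself is a plain copy out of the snapshot
cp -a "$tmp/backupdisk/.snapshot/Daily-Mon/documents/notes.txt" \
      "$tmp/backupdisk/documents/notes.txt"

cat "$tmp/backupdisk/documents/notes.txt"   # prints: important notes
rm -rf "$tmp"
```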

You can check your snapshots using

# sudo /scripts/ list
ID 415 top level 5 path .snapshot/Hourly-20
ID 416 top level 5 path .snapshot/Hourly-21
ID 417 top level 5 path .snapshot/Hourly-22
ID 418 top level 5 path .snapshot/Hourly-23
ID 419 top level 5 path .snapshot/Hourly-00
ID 420 top level 5 path .snapshot/Daily-Mon
ID 421 top level 5 path .snapshot/Hourly-01
ID 422 top level 5 path .snapshot/Hourly-02
ID 423 top level 5 path .snapshot/Hourly-03
ID 424 top level 5 path .snapshot/Hourly-04
ID 425 top level 5 path .snapshot/Hourly-05
ID 426 top level 5 path .snapshot/Hourly-06
ID 427 top level 5 path .snapshot/Hourly-07
ID 428 top level 5 path .snapshot/Hourly-08
ID 429 top level 5 path .snapshot/Hourly-09

How to run Alienvault OSSIM 4.2 in (custom) text mode

This is also a fix for

  1. GUI installer hanging on “Configure network” when you try and enter the IP address
  2. Configuring disk setup
  3. Selecting which components to install

These options were available in 4.1 but were removed from the boot menu of the installer in 4.2.

The options are still there though. To run the custom text installer do the following:

  1. Boot from the OSSIM 4.2 CD
  2. At the installer menu highlight USM 4.2 (the top entry)
  3. Hit the TAB key
  4. Edit the kernel boot line so it reads (all one line):
/install.amd/vmlinux preseed/file=/cdrom/preseed debian/priority=low preseed/interactive=true vga=normal initrd=/install.amd/initrd.gz quiet ALLinONEauto --

5. Then hit enter to boot into custom text mode.

For the lazy out there you can also:

  1. Put the 4.1 installer CD in the CDROM and boot to the menu.
  2. Swap the CD over and put in the 4.2 CD
  3. Select custom text mode from the menu

It’ll then boot.



Script to control NVidia GPU fan under linux using nvclock

This is a modified version of a script I found online. The original didn't work for me, so I edited it a little to run under plain shell.

This script tries to keep the GPU temperature within 1C of 54C. The maximum fan speed is 100% and the minimum 20%. It steps the speed up or down 5% at a time, at intervals of 5 seconds, until the target temperature is reached.

You will need to install the proprietary driver and nvclock from your distribution’s repository.

I have added this script to my rc.local to run at boot time in the background and I find it works well.

/bin/sh /$SCRIPTDIR/ >> /var/log/messages 2>&1

The script:

#!/bin/sh
# Adjust fan speed automatically.
# This version by StuJordan
# Based on Version by DarkPhoinix
# Original script by Mitch Frazier

# Location of the nvclock program (adjust to suit your distribution).
nvclock_bin=/usr/bin/nvclock

# Target temperature for video card.
target_temp=54

# Value used to calculate the temperature range (+/- target_temp).
target_range=1

# Time to wait before re-checking.
sleep_time=5

# Minimum fan speed.
min_fanspeed=20

# Fan speed increment.
adj_fanspeed=5

if [ "$1" ]; then target_temp=$1; fi

target_temp_low=$(expr $target_temp - $target_range)
target_temp_high=$(expr $target_temp + $target_range)

while true
do
    temp_val=$(echo $($nvclock_bin --info | grep -i 'GPU temperature' | cut -d ':' -f 2) | cut -d C -f 1)
    pwm_val=$(echo $($nvclock_bin --info | grep -i 'PWM' | cut -d ':' -f 2 | cut -d " " -f 2 | cut -d "%" -f 1 | cut -d "." -f 1))

    echo "Current temp is $temp_val. Current pwm is $pwm_val"
    echo "Target temp high is $target_temp_high and low is $target_temp_low"

    if [ $temp_val -gt $target_temp_high ]; then
        echo "Temperature too high"

        # Temperature above target, see if the fan has any more juice.

        if [ $pwm_val -lt 100 ]; then
            echo "Increasing GPU fan speed, temperature: $temp_val"
            pwm_val=$(expr $pwm_val + $adj_fanspeed)
            if [ $pwm_val -gt 100 ]; then pwm_val=100; fi
            $nvclock_bin -f --fanspeed $pwm_val
        fi

    elif [ $temp_val -lt $target_temp_low ]; then

        # Temperature below target, lower the fan speed
        # if we're not already at the minimum.

        if [ $pwm_val -gt $min_fanspeed ]; then
            echo "Decreasing GPU fan speed, temperature: $temp_val"
            pwm_val=$(expr $pwm_val - $adj_fanspeed)
            if [ $pwm_val -lt $min_fanspeed ]; then pwm_val=$min_fanspeed; fi
            $nvclock_bin -f --fanspeed $pwm_val
        fi
    fi

    sleep $sleep_time
done

Scheduled backup over SCP fails to logon when configured in SPLAT Web GUI

In R75, when you create a backup job using the SCP method you may find that it fails to log on to the SCP server. If you check the logs they will show that the username/password failed.

This is because when you schedule the backup job in the web GUI it saves the password incorrectly in /var/CPbackup/conf/backup_sched.conf.

To fix this you must use the CLI to schedule the job. This saves the password correctly.

1. SSH from an allowed host to the management server

2. Schedule the backup:

backup -l --sched on 07:00 -w 1 --scp <server IP> <username> <password>

3. Check that it worked OK in the GUI.


Updating to OSSIM 4.1.3 causes ossim-agent not to start

After updating OSSIM, the ossim-agent starts and then stops. No logs are parsed, and both /var/log/ossim/agent.log and /var/log/ossim/agent_error.log are empty or contain only old information. Listing the processes shows that the agent is not running.

When the agent is started manually using

/usr/bin/ossim-agent -v 

the following error is logged:

OSError: [Errno 2] No such file or directory: '/etc/ossim/agent/host_cache_pro.dic'

Looking in the /etc/ossim/agent directory there is no host_cache_pro.dic file but there is a host_cache.dic.

To fix it, rename host_cache.dic to host_cache.dic.old and restart the ossim-agent.

cd /etc/ossim/agent
mv host_cache.dic host_cache.dic.old
/etc/init.d/ossim-agent restart

The agent should now start and write to the agent.log and start processing.

PHP-IDS warning when submitting rule on Alienvault OSSIM 4.x

When building a new correlation rule in Alienvault OSSIM 4.x you may get an error like:

"Sorry, operation not completed due to security reasons. An attack attempt has been logged to the system"


This is caused by the PHP-IDS implementation within OSSIM and can be fixed by adding an exemption rule:

  1. In the error, note the "Variable" that caused it. In this example it was GET.product_list
  2. SSH to your OSSIM server
  3. Open the file /usr/share/ossim/include/php-ids.ini in your favourite editor.
  4. In the [General] section there is a list of exceptions. Scroll to the bottom of the list and add a new entry:
exceptions[] = GET.product_list

5. Restart ossim-framework and try submitting the rule again.

service ossim-framework restart