Hack 84 Real-Time Monitoring


Use Sguil's advanced GUI to monitor and analyze IDS events in a timely manner.

One thing that's crucial when analyzing your IDS events is to be able to correlate all your audit data from various sources, to determine the exact trigger for the alert and what actions should be taken. This could involve anything from simply querying a database for similar alerts to viewing TCP stream conversations. One tool to help facilitate this is Sguil (http://sguil.sourceforge.net), the Snort GUI for Lamerz. In case you're wondering, Sguil is pronounced "sgweel" (to rhyme with "squeal").

Sguil is a graphical analysis console written in Tcl/Tk that brings together the power of such tools as Ethereal (http://www.ethereal.com), TcpFlow (http://www.circlemud.org/~jelson/software/tcpflow/), and Snort's portscan and TCP stream decoding preprocessors into a single unified application, where it correlates all the data from each of these sources. Sguil uses a client/server model and is made up of three parts: a plug-in for Barnyard (op_guil), a server (sguild), and a client (sguil.tk). Agents installed on each of your NIDS sensors report information back to the Sguil server. The server takes care of collecting and correlating all the data from the sensor agents, and handles information and authentication requests from the GUI clients.

Before you begin, you'll need to download the Sguil distribution from the project's web site and unpack it somewhere. This will create a directory that reflects the package and its version number (e.g., sguil-0.3.0).

The first step in setting up Sguil is creating a MySQL database for storing its information. You should also create a user that Sguil can use to access the database:

$ mysql -u root -p 

Enter password: 

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 546 to server version: 3.23.55

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> CREATE DATABASE SGUIL;

Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON SGUIL.* TO sguil IDENTIFIED BY 'sguilpass';  

Query OK, 0 rows affected (0.06 sec)

mysql> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.06 sec)


Now you'll need to create Sguil's database tables. To do this, locate the create_sguildb.sql file. It should be in the server/sql_scripts subdirectory of the directory that was created when you unpacked the Sguil distribution. You'll need to feed this as input to the mysql command like this:

$ mysql -u root -p SGUIL < create_sguildb.sql

sguild requires several Tcl packages in order to run. The first is Tclx (http://tclx.sourceforge.net), which is an extensions library for Tcl. The second is mysqltcl (http://www.xdobry.de/mysqltcl/). Both of these can be installed with the standard ./configure && make install routine.

You can verify that they were installed correctly by running the following commands:

$ tcl

tcl>package require Tclx


tcl>package require mysqltcl



If you want to use SSL to encrypt the traffic between the GUI and the server, you will also need to install tcltls (http://sourceforge.net/projects/tls/). After installing it, you can verify that it was installed correctly by running this command:

$ tcl

tcl>package require tls



Now you'll need to go about configuring sguild. First, you'll need to create a directory suitable for holding its configuration files (i.e., /etc/sguild). Then copy sguild.users, sguild.conf, sguild.queries, and autocat.conf to the directory that you created.

For example:

# mkdir /etc/sguild

# cd server

# cp autocat.conf sguild.conf sguild.queries \
    sguild.users /etc/sguild

This assumes that you're in the directory that was created when you unpacked the Sguil distribution. You'll also want to copy the sguild script to somewhere more permanent, such as /usr/local/sbin or something similar.

Now edit sguild.conf and tell it how to access the database you created. If you used the database commands shown previously to create the database and user for Sguil, you would set these variables to the following values:


set DBPASS sguilpass

set DBHOST localhost

set DBPORT 3306

set DBUSER sguil

In addition, sguild requires access to the Snort rules used on each sensor in order for it to correlate the different pieces. You can tell sguild where to look for these by setting the RULESDIR variable.

For instance, the following line will tell sguild to look for rules in /etc/snort/rules:

set RULESDIR /etc/snort/rules

However, sguild needs to find rules for each sensor that it monitors here, so this is really just the base directory for the rules. When looking up rules for a specific host it will look for them in a directory corresponding to the hostname within the directory that you specified (e.g., zul's rules would be in /etc/snort/rules/zul).
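To make this layout concrete, here is a small sketch that builds one rules directory per sensor under the base directory. The sensor names are hypothetical, and /tmp stands in for /etc/snort/rules so the example can be run harmlessly:

```shell
# Illustrative only: /tmp/snort-rules stands in for /etc/snort/rules,
# and the sensor names are hypothetical. sguild expects one
# subdirectory per sensor hostname under RULESDIR.
RULESDIR=/tmp/snort-rules
SENSORS="zul gw-ext0"

for sensor in $SENSORS; do
    mkdir -p "$RULESDIR/$sensor"
    # On a real install you would copy that sensor's rules here, e.g.:
    # scp $sensor:/etc/snort/rules/*.rules "$RULESDIR/$sensor/"
done

ls "$RULESDIR"
```

Keeping each sensor's rules in its own subdirectory lets sguild display the exact rule that fired for an alert, even when sensors run different rulesets.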

Optionally, if you want to use SSL to encrypt sguild's traffic (which you should), you'll need to create an SSL certificate and key pair [Hack #45]. After you've done that, move them to /etc/sguild/certs and make sure they're named sguild.key and sguild.pem.
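If you don't already have a certificate, one way to generate a self-signed pair is with openssl. This is only a sketch: the subject name is a placeholder, and /tmp/sguild-certs stands in for /etc/sguild/certs so it can be run safely:

```shell
# Generate a self-signed key/certificate pair for sguild.
# /tmp/sguild-certs stands in for /etc/sguild/certs; the CN is a placeholder.
mkdir -p /tmp/sguild-certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=sguild.example.com' \
    -keyout /tmp/sguild-certs/sguild.key \
    -out /tmp/sguild-certs/sguild.pem
```

On a real system, remember to restrict permissions on the key (e.g., chmod 600 sguild.key).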

Next, you'll need to add users for accessing sguild from the Sguil GUI. To do this, use a command similar to this:

# sguild -adduser andrew

Please enter a passwd for andrew: 

Retype passwd: 

User 'andrew' added successfully

You can test out the server at this point by connecting to it with the GUI client. All you need to do is edit the sguil.conf file and change the SERVERHOST variable to point to the machine on which sguild is installed. In addition, if you want to use SSL, you'll need to point the TLS_PATH variable at your tcltls library, with a value similar to this:

set TLS_PATH /usr/lib/tls1.4/libtls1.4.so

Now test out the client and server by running sguil.tk. After a moment you should see a login window like Figure 7-3.

Figure 7-3. The Sguil login dialog

Enter the information that you used when you created the user and click OK. After you've done that, you should see a dialog like Figure 7-4.

Figure 7-4. Sguil's no available sensors dialog

Since you won't have any sensors to monitor yet, click Exit.

To set up a Sguil sensor, you'll need to patch your Snort source code. You can find the patches that you'll need in the sensor/snort_mods/2_0/ subdirectory of the Sguil source distribution. Now change to the directory that contains the Snort source code, go to the src/preprocessors subdirectory, and patch spp_portscan.c and spp_stream4.c.

For example:

$ cd ~/snort-2.0.5/src/preprocessors

$ patch spp_portscan.c < \


patching file spp_portscan.c

$ patch spp_stream4.c < \


patching file spp_stream4.c

Hunk #9 succeeded at 988 (offset -5 lines).

Hunk #11 succeeded at 3324 (offset -5 lines).

Hunk #13 succeeded at 3674 (offset -5 lines).

Then compile Snort just as you normally would [Hack #82]. After you've done that, edit your snort.conf and enable the portscan and stream4 preprocessors:

preprocessor portscan: $HOME_NET 4 3 /var/log/snort/portscans gw-ext0

preprocessor stream4: detect_scans, disable_evasion_alerts, keepstats db \ 
    /var/log/snort/ssn_logs
The first line enables the portscan preprocessor and tells it to trigger a portscan alert when it sees connections from a single host to four different ports within a three-second interval. In addition, the portscan preprocessor will keep its logs in /var/log/snort/portscans. The last field on the line is the name of the sensor. The second line enables the stream4 preprocessor and directs it to detect stealth portscans and not to alert on overlapping TCP segments. It also tells the stream4 preprocessor to keep its logs in /var/log/snort/ssn_logs.

You'll also need to set up Snort to use its unified output format, so that you can use Barnyard to handle logging Snort's alert and log events:

output alert_unified: filename snort.alert, limit 128

output log_unified: filename snort.log, limit 128

Next, create a crontab entry for the log_packets.sh script that comes with Sguil. This script starts an instance of Snort solely to log packets. This crontab line will have the script restart the Snort logging instance every hour:

00 0-23/1 * * * /usr/local/bin/log_packets.sh restart

You should also edit the variables at the beginning of the script and change them to suit your needs. These variables tell the script where to find the Snort binary (SNORT_PATH), where to have Snort log packets to (LOG_DIR), what interface to sniff on (INTERFACE), and what command-line options to use (OPTIONS). Pay special attention to the OPTIONS variable. Here is where you can tell Snort what user and group to run as; the default won't work unless you've created a sguil user and group. In addition, you can specify which traffic not to log by setting the FILTER variable to a BPF (i.e., tcpdump-style) filter.
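To make the roles of these variables concrete, here is a simplified sketch of how such a script might assemble the logging command. The variable names mirror log_packets.sh, but the dated directory layout, option values, and filter are illustrative, not the script's actual logic:

```shell
# Simplified sketch, NOT the real log_packets.sh: shows how the variables
# combine into a packet-logging command. All values here are examples.
SNORT_PATH=/usr/local/bin/snort
LOG_DIR=/tmp/snort-logs              # stand-in log directory
INTERFACE=eth1
OPTIONS="-u sguil -g sguil"          # assumes a sguil user/group exists
FILTER='not host 192.168.1.5'        # hypothetical BPF filter

today=$(date +%Y-%m-%d)
mkdir -p "$LOG_DIR/$today"

# -b logs packets in binary (pcap) form; -l sets the log directory.
# Echoed to a file rather than executed so the sketch is safe to run.
echo "$SNORT_PATH $OPTIONS -b -i $INTERFACE -l $LOG_DIR/$today $FILTER" \
    > "$LOG_DIR/cmdline.txt"
cat "$LOG_DIR/cmdline.txt"
```

Restarting this logging instance hourly from cron keeps individual capture files to a manageable size.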

Next, you'll need to compile and install Barnyard [Hack #92], but only run the configure step for now. After that, patch in the op_sguil output plug-in provided by Sguil. To do this, copy sensor/barnyard_mods/op_sguil.* to the output-plugins directory in the Barnyard source tree.

For instance:

$ cd ~/barnyard-0.1.0/src/output-plugins

$ cp ~/sguil-0.3.0/sensor/barnyard_mods/op_sguil.* .

Now edit the Makefile in that directory to add op_sguil.c and op_sguil.h to the libop_a_SOURCES variable, and add op_sguil.o to the libop_a_OBJECTS variable.

After you've done that, edit op_plugbase.c and look for a line that says:

#include "op_acid_db.h"

Add another line below it so that it becomes:

#include "op_acid_db.h"

#include "op_sguil.h"

Now look for another line like this:

AcidDbOpInit( );

and add another line below it so that it looks like this:

AcidDbOpInit( );

SguilOpInit( );

Now run make from the current directory; when that completes, change to the top-level directory of the source distribution and run make install. To configure Barnyard to use the Sguil output plug-in, add a line similar to this one to your barnyard.conf:

output sguil: mysql, sensor_id 0, database SGUIL, server localhost, user sguil, password sguilpass, sguild_host localhost, sguild_port 7736

Now you can start Barnyard as you would normally. After you do that, you'll need to set up Sguil's sensor agent script, sensor_agent.tcl, which can be found in the sensor directory of the source distribution. Before running the script, you'll need to edit several variables to fit your situation:

set SERVER_HOST localhost

set SERVER_PORT 7736

set HOSTNAME gw-ext0

set PORTSCAN_DIR /var/log/snort/portscans

set SSN_DIR /var/log/snort/ssn_logs

set WATCH_DIR /var/log/snort

The PORTSCAN_DIR and SSN_DIR variables should be set to where the Snort portscan and stream4 preprocessors log to.

Now all you need to do is set up xscriptd on the same system that you installed sguild on. This script is responsible for collecting the packet dumps from each sensor, pulling out the requested information, and then sending it back to the GUI client. Before running it, you'll need to edit some variables in this script too:

set LOCALSENSOR 1

set LOCAL_LOG_DIR /var/log/snort/archive

set REMOTE_LOG_DIR /var/log/snort/dailylogs

If you're running xscriptd on the same host as the sensor, set LOCALSENSOR to 1. Otherwise, set it to 0. The LOCAL_LOG_DIR variable sets where xscriptd will archive the data it receives when it queries the sensor, and REMOTE_LOG_DIR sets where xscriptd will look on the remote host for the packet dumps. If you're installing xscriptd on a host other than the sensor agent, you'll need to set up SSH client keys [Hack #73] in order for it to retrieve data from the sensors. You'll also need to install tcpflow (http://www.circlemud.org/~jelson/software/tcpflow/) and p0f (http://www.stearns.org/p0f/) on the host that you install xscriptd on.
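The key setup might look like the following sketch. The hostnames are placeholders, and the key is written under /tmp so the example can be run safely; on a real system you'd use the home directory of the user xscriptd runs as:

```shell
# Generate a passphrase-less SSH key for xscriptd to use when pulling
# packet dumps from remote sensors. Paths and hostnames are placeholders.
mkdir -p /tmp/xscriptd-ssh
ssh-keygen -t rsa -N '' -q -f /tmp/xscriptd-ssh/id_rsa
# Then install the public key on each sensor, e.g.:
# ssh-copy-id -i /tmp/xscriptd-ssh/id_rsa.pub sensor1.example.com
ls /tmp/xscriptd-ssh
```

Because the key has no passphrase, xscriptd can fetch packet data unattended; protect the private key file accordingly.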

Now that everything's set up, you can start sguild and xscriptd with commands similar to these:

# sguild -O /usr/lib/tls1.4/libtls1.4.so

# xscriptd -O /usr/lib/tls1.4/libtls1.4.so

If you're not using SSL, you should omit the -O /usr/lib/tls1.4/libtls1.4.so portions of the commands. Otherwise, you should make sure that the argument to -O points to the location of libtls on your system.
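One way to avoid typing the library path each time is a small hypothetical wrapper that adds -O only when the TLS library is actually present on the host:

```shell
# Hypothetical startup wrapper: pass -O only if the TLS library exists.
TLS_LIB=/usr/lib/tls1.4/libtls1.4.so
SSL_ARGS=""
if [ -f "$TLS_LIB" ]; then
    SSL_ARGS="-O $TLS_LIB"
fi
# Echoed rather than executed so the sketch is safe to run anywhere;
# on a real system you would invoke sguild and xscriptd directly.
{
    echo "sguild $SSL_ARGS"
    echo "xscriptd $SSL_ARGS"
} > /tmp/sguil-start.txt
cat /tmp/sguil-start.txt
```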

Getting Sguil running isn't trivial, but it is well worth the effort. Once everything is running, you will have a very good overview of precisely what is happening on your network. Sguil presents data from a bunch of sources simultaneously, giving you a good view of the big picture that is sometimes impossible to see when simply looking at your NIDS logs.