How to Install Cacti on a Linux Box

Description: Cacti is a GPL-licensed, scalable, RRDtool-based monitoring program with flexible graphing options. This article describes the process of installing and configuring Cacti on CentOS 5.2.

Useful links to this installation were BXtra and TechDB.

Per the Cacti documentation, Cacti requires:

RRDTool 1.0.49 or 1.2.x or greater

MySQL 4.1.x or 5.x or greater

PHP 4.3.6 or greater, 5.x greater highly recommended for advanced features

A Web Server e.g. Apache or IIS

I’d also recommend installing vim, net-snmp, net-snmp-utils, php-snmp, initscripts, perl-rrdtool, and any dependencies.

To perform this install, I am logged into Gnome as a normal user, and opened a terminal that is switched to the root user using the su command. I had already installed apache, mysql, and PHP during the original install process of CentOS 5.2.

I added a new repository to facilitate this install. To do this, I created a file
(/etc/yum.repos.d/dag.repo) containing Dag Wieers’ repository, which provides rrdtool, among other things.

[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=
gpgcheck=1
gpgkey=
enabled=1

You can create this file by typing vim /etc/yum.repos.d/dag.repo and copying and pasting the above information into the file. Be warned that the above text containing the repository is version and architecture-specific.
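If you prefer not to paste by hand, the same file can be written in one step; a sketch, writing to a temporary path for safety (use /etc/yum.repos.d/dag.repo on a real system), with the baseurl and gpgkey values left blank as above since they are version- and architecture-specific:

```shell
# Write the repository definition non-interactively. The baseurl= and
# gpgkey= values are deliberately left blank here; fill them in for your
# CentOS version and architecture.
repo=$(mktemp)   # on a real system: repo=/etc/yum.repos.d/dag.repo
cat > "$repo" <<'EOF'
[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=
gpgcheck=1
gpgkey=
enabled=1
EOF
```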

I then typed yum update to update CentOS and the repository list before installing additional software.

I installed everything except cacti through yum. You can verify that you have the packages in question (and the version numbers of the installed packages) by attempting to install them; yum will remind you that you already have the latest version installed, as shown here:

# yum install php httpd mysql mysql-server php-mysql vim-enhanced net-snmp net-snmp-utils php-snmp initscripts perl-rrdtool rrdtool initscripts
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
* base:
* updates:
* addons:
* extras:
Setting up Install Process
Parsing package install arguments
Package php-5.1.6-23.2.el5_3.i386 already installed and latest version
Package httpd-2.2.3-22.el5.centos.1.i386 already installed and latest version
Package mysql-5.0.45-7.el5.i386 already installed and latest version
Package mysql-server-5.0.45-7.el5.i386 already installed and latest version
Package php-mysql-5.1.6-23.2.el5_3.i386 already installed and latest version
Package 2:vim-enhanced-7.0.109-4.el5_2.4z.i386 already installed and latest version
Package 1:net-snmp- already installed and latest version
Package 1:net-snmp-utils- already installed and latest version
Package php-snmp-5.1.6-23.2.el5_3.i386 already installed and latest version
Package initscripts-8.45.25-1.el5.centos.i386 already installed and latest version
Package perl-rrdtool-1.3.7-1.el5.rf.i386 already installed and latest version
Package rrdtool-1.3.7-1.el5.rf.i386 already installed and latest version
Package initscripts-8.45.25-1.el5.centos.i386 already installed and latest version
Nothing to do

Download the latest version of Cacti (0.8.7e, as of the writing of this article) from the Cacti website. I downloaded it to my desktop and extracted it by right-clicking it and selecting “Extract here”. I also renamed the cacti-0.8.7e directory to cacti by right-clicking and selecting “Rename”. You could do this on the command line, if you wanted to:

[your root shell] # tar xzvf cacti-0.8.7e.tar.gz
[your root shell] # mv cacti-0.8.7e cacti

Move the entire cacti directory to /var/www/html/ :

[your root shell] # mv cacti /var/www/html

I chose to create a ‘cactiuser’ user (and cacti group) to run cacti commands and to have ownership of the relevant cacti files. It was here that I noticed that my install did not have any of the /sbin directories in its $PATH, so I simply typed the absolute path:

[your root shell] # /usr/sbin/groupadd cacti

[your root shell] # /usr/sbin/useradd -g cacti cactiuser

[your root shell] # passwd cactiuser

Change the ownership of the /var/www/html/cacti/rra/ and /var/www/html/cacti/log/ directories to the cactiuser we just created:

[your root shell] # cd /var/www/html/cacti
[your root shell] # chown -R cactiuser rra/ log/

Create a MySQL root password, if you haven’t already (the password in this example is samplepass):

[your root shell] # /usr/bin/mysqladmin -u root password samplepass

Create a MySQL database for cacti:

[your root shell] # mysqladmin --user=root --password=samplepass create cacti

Change directories to the cacti directory, and use the cacti.sql file to create tables for your database:

[your root shell] # cd /var/www/html/cacti
[your root shell- cacti] # mysql --user=root --password=samplepass cacti < cacti.sql
[your root shell- cacti] # mysql --user=root --password=samplepass mysql

mysql> GRANT ALL ON cacti.* TO cactiuser@localhost IDENTIFIED BY 'samplepass';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit

Edit /var/www/html/cacti/include/config.php with your favorite editor, and update the information to reflect our cacti configuration (you can leave the other text in the file alone):

/* make sure these values reflect your actual database/host/user/password */
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cactiuser";
$database_password = "samplepass";
$database_port = "3306";

Create a cron job that polls for information for Cacti (I’m choosing to use /etc/crontab here):

[your root shell] # vim /etc/crontab

Add this line to your crontab:

*/5 * * * * cactiuser /usr/bin/php /var/www/html/cacti/poller.php > /dev/null 2>&1
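Reading that entry field by field (system crontabs like /etc/crontab take a user column between the schedule and the command):

```
# min   hour  dom  mon  dow  user       command
  */5   *     *    *    *    cactiuser  /usr/bin/php /var/www/html/cacti/poller.php > /dev/null 2>&1
# "*/5" means every five minutes; stdout and stderr go to /dev/null so
# cron does not mail the poller's output to anyone.
```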

Edit your PHP config file at /etc/php.ini to allow more memory usage for Cacti. It is a relatively large text file; using vim, I searched for “memory_limit” by typing /memory_limit in command mode.

[your root shell] # vim /etc/php.ini
I changed memory_limit = 8M to memory_limit = 128M

Before I check to see if Cacti works, I want to check and see if mysqld and httpd are running using the service command.

[your root shell] # /sbin/service mysqld status
[your root shell] # /sbin/service httpd status

If mysqld and httpd are running, great. If not, type:

[your root shell] # /sbin/service mysqld start
[your root shell] # /sbin/service httpd start

If you’re an “I need to see what the output looks like” type, here is an example of the previous command:

[your root shell] # /sbin/service mysqld status
mysqld is stopped
[your root shell] # /sbin/service mysqld start
Initializing MySQL database: Installing MySQL system tables…
Filling help tables…

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h localhost.localdomain password 'new-password'
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with
cd mysql-test ; perl

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
Support MySQL by buying support/licenses at
[ OK ]
Starting MySQL: [ OK ]

You should now be able to access cacti at http://localhost/cacti from the local computer or from any computer within your LAN network at http://your.internal.IP.address/cacti .

There should be a Cacti Installation Guide window that shows up, giving licensing info and the like. Click “Next”.

Select “New Installation”, since this is a new installation.

The next window to pop up should tell you whether Cacti could find the paths to all of the elements that Cacti needs to run, such as RRDtool, PHP, snmp stuff, etc. If everything but Cacti was installed via yum, you should be good here. Click “Finish” to save the settings and bring up the login window.

Below is a screenshot of the login window. The default user name is admin. The default password is admin. It should prompt an automatic password change for the admin account when you log in the first time.

If you successfully log in, I’d recommend taking a break here. Depending on how fast you are, your cron job may not have had enough time to run the poller program and create data for your graphs. I’d suggest taking a deep breath, or brewing a cup of tea (or coffee) for yourself.

The localhost machine should have some graph templates that are already created, but you can click the “Create Additional Devices” link to add graphs for any other machines on your network. I added my FreeNAS box (tutorial for that to follow).

After having consumed your beverage of choice, press the “Graphs” button. Cacti should have a graph showing you a couple minutes of data for the machines you have added. The longer your machine is on, the more informative the graphs will be. Also, if you click on a particular graph, Cacti will show you a more detailed view of it. Congratulations! You’re now monitoring!

View the Cacti documentation page for more information on how to take advantage of Cacti.

Below are some graphs that were made using Cacti.


How to Create a Self-Signed SSL Certificate for Apache

Step 1: Generating a server key needed to create the .ca and .crt files

[root@home test]# openssl genrsa -des3 -out server.key 4096
Generating RSA private key, 4096 bit long modulus
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying – Enter pass phrase for server.key:
[root@home test]# ls

Step 2: Remove the passphrase from the key file. This is done so you do not have to type in the passphrase every time Apache is started, which is especially useful in the event of a server reboot when there is no one available to type it manually. (Note: once the passphrase is removed from the key, make sure that the file is readable only by root.)

[root@home test]# mv server.key
`server.key' -> `'
[root@home test]# openssl rsa -in -out server.key
Enter pass phrase for
writing RSA key
[root@home test]# ls
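The filenames in the commands above were lost, so as a self-contained sketch of Step 2 end-to-end: “server.key.org” below is a hypothetical name for the passphrase-protected backup copy, and the passphrase is supplied non-interactively only for the sake of the demo.

```shell
# Hypothetical filenames; a real run would use your actual key file.
workdir=$(mktemp -d)
# Step 1 equivalent, done non-interactively with a throwaway passphrase:
openssl genrsa -des3 -passout pass:demopass -out "$workdir/server.key" 4096
# Step 2: keep the protected copy, then write the key without a passphrase
mv "$workdir/server.key" "$workdir/server.key.org"
openssl rsa -in "$workdir/server.key.org" -passin pass:demopass \
        -out "$workdir/server.key"
chmod 400 "$workdir/server.key"   # readable only by its owner
```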

Step 3: Generate the CA file. The Certificate Authority file identifies the body that signed the certificate. The certificate validity in this example is 365 days, after which you will have to generate new CA and CRT files.

[root@home test]# openssl req -new -x509 -days 365 -key server.key -out
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [GB]: PH
State or Province Name (full name) [Berkshire]:Manila
Locality Name (eg, city) [Newbury]:Manila
Organization Name (eg, company) [My Company Ltd]:ZXY Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []
Email Address []
[root@home test]# ls server.key

Step 4: Generate the CSR file. The Certificate Signing Request is the file that contains the information of the certificate itself. Note that in the Common Name field, you will have to use the fully qualified domain name (FQDN) of the actual site where the certificate is going to be used; for example, if your secure site lives at a particular FQDN, put that FQDN in the Common Name field. If you don’t have an FQDN, use the server’s IP address instead. If the site URL and Common Name are different, users will see a pop-up box whenever they visit your site.

[root@home test]# openssl req -new -key server.key -out server.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [GB]: PH
State or Province Name (full name) [Berkshire]:Manila
Locality Name (eg, city) [Newbury]:Manila
Organization Name (eg, company) [My Company Ltd]:ZXY Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []
Email Address []

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@home test]# ls server.csr server.key

Step 5: Generate the CRT file.

[root@home test]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=PH/ST=Manila/L=Manila/O=ZXY Corp/OU=IT/
Getting Private key
[root@home test]# ls server.crt server.csr server.key

There you have it. You have the files that you need to create a secure http site with a self-signed certificate. All you have to do now is install these certificates on your Apache server. Your server needs to have mod_ssl enabled to use the secure http port (443).

Step 6: Install the certificates. Just copy the files to where you want your SSL certificates to be, such as /etc/httpd/conf.d/ssl.crt. Then set up your ssl.conf file to point the directives at the location of your SSL files.
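As a sketch of that last step, the relevant mod_ssl directives might look like this; the paths are hypothetical, so point them at wherever you actually copied the files:

```apache
SSLEngine on
SSLCertificateFile /etc/httpd/conf.d/ssl.crt/server.crt
SSLCertificateKeyFile /etc/httpd/conf.d/ssl.key/server.key
```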

How to Mirror a Subversion Repository


Subversion (SVN) is a version control system initiated in 1999 by CollabNet Inc. It is used to maintain current and historical versions of files such as source code, web pages, and documentation. Its goal is to be a mostly-compatible successor to the widely used Concurrent Versions System (CVS).

Subversion is well-known in the open source community and is used on many open source projects, including the Apache Software Foundation, Free Pascal, FreeBSD, GCC, Django, Ruby, Mono, ExtJS, and PHP. Google Code also provides Subversion hosting for open source projects. BountySource uses it exclusively. CodePlex offers access through Subversion as well as other types of clients.

Contents:

* Treating the Repository as a Filesystem
* Using Svnadmin Dump and Load
* Using Svnadmin Hotcopy
* Summary
* Checking the mirror
* Mirroring
* Reference Information

Here are three ways to create a full mirror of a Subversion repository:

1. Treat the repository like any other filesystem and recursively copy it to the mirror location.
2. Use svnadmin dump and svnadmin load.
3. Use svnadmin hotcopy.

There are important differences between these three strategies.
Treating the Repository as a Filesystem

You can of course treat the repository like any other directory tree and copy it recursively.

This is a bad idea if the repository is in use — you’re copying a moving target — so you’ll have to take down the Subversion server while making the mirror. If you’re prepared to accept this downtime, netcat (nc) combined with tar is a neat way to recursively copy a directory across a network connection using TCP/IP.

# On the destination “mirror” machine
nc -l -p 2345 | tar xv
# On the source machine
tar c PATH_TO_REPOS > /dev/tcp/DOTTED.IP.OF.MIRROR/2345

Here, 2345 has been chosen as a suitable port for the data transfer.
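For a mirror on the same machine, the recursive copy mentioned above is presumably nothing more than cp -r; a self-contained sketch using throwaway stand-in paths (a real run would use your repository path, with the server taken down first):

```shell
# Stand-in "repository" so the sketch runs anywhere; REPOS and MIRROR
# are hypothetical paths for illustration only.
REPOS=$(mktemp -d)/repos
MIRROR=$(mktemp -d)/mirror
mkdir -p "$REPOS/db" && echo 0 > "$REPOS/db/current"
cp -r "$REPOS" "$MIRROR"      # the whole mirror, in one command
diff -r "$REPOS" "$MIRROR"    # exits 0 when the copies match
```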
Using Svnadmin Dump and Load

Perhaps the most obvious way to mirror a Subversion repository is to combine svnadmin dump with svnadmin load.

svnadmin dump PATH_TO_REPOS | svnadmin load PATH_TO_MIRROR

Run on its own, svnadmin dump is designed to create a portable repository dump. The resulting dumpfile can be loaded into a new Subversion repository — even if the new repository is using a different database backend, or even a different revision of Subversion. Svnadmin dump will happily run on a live repository (no need to take the server down).

In short, combining svnadmin dump with svnadmin load is probably more powerful than we need if we just want to mirror our repository to a new location. Svnadmin dump — on its own — is the best way to fully back up a repository, since the dumpfile it creates is portable (as described above). If we replicate a repository by piping svnadmin dump to svnadmin load, we lose the dumpfile in the pipeline and do far more work than we need to.

Actually, it’s the computer which does the work — we just type a command and let it run. As a rough guideline, I have worked on a repository which occupies about 10Gb on disk, contains ~50K files and maybe a hundred branches. To dump and load this repository takes about 4 hours. A recursive copy completes in a few minutes.

One more point: svnadmin dump does not dump your repository configuration files and hook scripts. If your backup strategy is based around these commands, you will need separate arrangements for backing up hook scripts and configuration files.
Using Svnadmin Hotcopy

The third option combines the best features of the previous two. Using svnadmin hotcopy doesn’t require any server downtime, completes in minutes, and replicates server configuration and hook scripts.


The command is disconcertingly silent — no indication of progress, no verbose option. As is usual in UNIX-world, however, no news is good news. I just ran:


to confirm the hotcopy was running and to check on its progress.
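Per the svnadmin documentation, hotcopy takes a source and a destination path, so the commands elided above were presumably of this shape (placeholder paths as in the dump/load example):

```shell
svnadmin hotcopy PATH_TO_REPOS PATH_TO_MIRROR   # silent until done

# from another terminal, something like this shows the mirror growing:
du -sh PATH_TO_MIRROR
```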
Method      Server Downtime?   Replicates Config?   Speed
File Copy   Yes                Yes                  Quickest
Dump/Load   No                 No                   Slowest
Hotcopy     No                 Yes                  Quick
Checking the mirror

Svnadmin provides another subcommand, svnadmin verify, to check a repository. This basically iterates through all revisions in the repository by internally dumping them and discarding the output — so it takes a while.

svnadmin verify PATH_TO_MIRROR


Software developers don’t feel secure unless their source repository is safely backed up – or at least they shouldn’t – and they are reluctant to suffer repository downtime or excessive maintenance overheads. Live Subversion repositories can be mirrored quickly and safely using a simple command. With a little extra effort, this command can be scheduled to run automatically, every week, say.

As a next step, by using the Subversion post-commit hook every check-in to the repository can instantly and incrementally be copied to the repository mirror. I’ll provide details of how to do this in my next post.
Reference Information

For more information see:

* svnadmin dump reference
* svnadmin load reference
* svnadmin hotcopy reference

Understanding Linux CPU Load – when should you be worried?

You might be familiar with Linux load averages already. Load averages are the three numbers shown with the uptime and top commands – they look like this:
load average: 0.09, 0.05, 0.01

Most people have an inkling of what the load averages mean: the three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine. But what’s the threshold? What constitutes “good” and “bad” load average values? When should you be concerned about a load average value, and when should you scramble to fix it ASAP?

First, a little background on what the load average values mean. We’ll start out with the simplest case: a machine with one single-core processor.
The traffic analogy

A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator … sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they’re in for delays.

So, Bridge Operator, what numbering system are you going to use? How about:

* 0.00 means there’s no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there’s no backup, and an arriving car will just go right on.
* 1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
* over 1.00 means there’s backup. How much? Well, 2.00 means that there are two lanes’ worth of cars total — one lane’s worth on the bridge, and one lane’s worth waiting. 3.00 means there are three lanes’ worth total — one lane’s worth on the bridge, and two lanes’ worth waiting. Etc.

[bridge illustrations: load of 1.00, load of 0.50, load of 1.70]

This is basically what CPU load is. “Cars” are processes using a slice of CPU time (“crossing the bridge”) or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.
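On Linux you can see the run queue directly: the fourth field of /proc/loadavg shows currently runnable entities over the total, right next to the three averages.

```shell
cat /proc/loadavg             # three averages, runnable/total, last PID
cut -d' ' -f4 /proc/loadavg   # the run-queue fraction, e.g. 1/123
```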

Like the bridge operator, you’d like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also like the bridge operator, you are still ok if you get some temporary spikes above 1.00 … but when you’re consistently above 1.00, you need to worry.
So you’re saying the ideal load is 1.00?

Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:


The “Need to Look into it” Rule of Thumb: 0.70. If your load average is staying above 0.70, it’s time to investigate before things get worse.

The “Fix this now” Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you’re going to get woken up in the middle of the night, and it’s not going to be fun.

The “Arrgh, it’s 3AM WTF?” Rule of Thumb: 5.0. If your load average is above 5.00, you could be in serious trouble: your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like the middle of the night or when you’re presenting at a conference. Don’t let it get there.

What about Multi-processors? My load says 3.00, but things are running fine!

Got a quad-processor system? It’s still healthy with a load of 3.00.

On a multi-processor system, the load is relative to the number of processor cores available. The “100% utilization” mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.

If we go back to the bridge analogy, the “1.00” really means “one lane’s worth of traffic”. On a one-lane bridge, that means it’s filled up. On a two-lane bridge, a load of 1.00 means it’s at 50% capacity — only one lane is full, so there’s another whole lane that can be filled.

[illustration: load of 2.00 on a two-lane road]

Same with CPUs: a load of 1.00 is 100% CPU utilization on single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.
Multicore vs. multiprocessor

While we’re on the topic, let’s talk about multicore vs. multiprocessor. For performance purposes, is a machine with a single dual-core processor basically equivalent to a machine with two processors with one core each? Yes. Roughly. There are lots of subtleties here concerning amount of cache, frequency of process hand-offs between processors, etc. Despite those finer points, for the purposes of sizing up the CPU load value, the total number of cores is what matters, regardless of how many physical processors those cores are spread across.

Which leads us to two new Rules of Thumb:


The “number of cores = max load” Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.

The “cores is cores” Rule of Thumb: How the cores are spread out over CPUs doesn’t matter. Two quad-cores == four dual-cores == eight single-cores. It’s all eight cores for these purposes.

Bringing It Home

Let’s take a look at the load averages output from uptime:
~ $ uptime
23:05 up 14 days, 6:08, 7 users, load averages: 0.65 0.42 0.36

This is on a dual-core CPU, so we’ve got lots of headroom. I won’t even think about it until load gets and stays above 1.7 or so.

Now, what about those three numbers? 0.65 is the average over the last minute, 0.42 is the average over the last five minutes, and 0.36 is the average over the last 15 minutes. Which brings us to the question:

Which average should I be observing? One, five, or 15 minute?

For the numbers we’ve talked about (1.00 = fix it now, etc.), you should be looking at the five- or 15-minute averages. Frankly, if your box spikes above 1.0 on the one-minute average, you’re still fine. It’s when the 15-minute average goes north of 1.0 and stays there that you need to snap to. (Obviously, as we’ve learned, adjust these numbers to the number of processor cores your system has.)

So # of cores is important to interpreting load averages … how do I know how many cores my system has?

cat /proc/cpuinfo gives you info on each processor in your system. (Note: not available on OS X; Google for alternatives.) To get just a count, run it through grep and word count: grep 'model name' /proc/cpuinfo | wc -l
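Putting the count to work, a hypothetical snippet (assuming a Linux /proc filesystem) that normalizes the 1-minute load by the number of cores:

```shell
# Divide the 1-minute load average by the core count; a result near or
# above 1.00 means the machine as a whole is at or past capacity.
cores=$(grep -c 'model name' /proc/cpuinfo)
load1=$(cut -d' ' -f1 /proc/loadavg)
awk -v l="$load1" -v c="$cores" 'BEGIN { printf "load per core: %.2f\n", l/c }'
```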
Monitoring Linux CPU Load with Scout

Scout provides two ways to monitor CPU load. Our original server load plugin and Jesse Newland’s Load-Per-Processor plugin both report the CPU load and alert you when the load peaks and/or is trending in the wrong direction:

More Reading

* Wikipedia – A good, brief explanation of Load Average; it goes a bit deeper into the mathematics
* Linux Journal – very well-written article, goes deeper than either this post or the wikipedia entry.

How to install APC (Alternative PHP Cache)

Definition: The Alternative PHP Cache (APC) is a free and open opcode cache for PHP. Its goal is to provide a free, open, and robust framework for caching and optimizing PHP intermediate code.

I am posting a quick step-by-step guide to installing APC on servers (dedicated or VPS) running cPanel/WHM, for those who have a hard time installing APC.

First, log in as root to your server/VPS and make a directory to work in for this extension:

#mkdir /home/APC-php

#cd /home/APC-php

Now download APC with the following command:


You can check for the latest version.

Now use gzip and tar separately, or tar -xzvf, to extract the file:

#tar -xzvf APC-3.0.14.tgz

You will now have an APC-3.0.14 folder.

#cd APC-3.0.14

Now generate the PHP build configuration files with the following command:


After this, use the following three commands:

# ./configure --enable-apc --enable-apc-mmap --with-apxs --with-php-config=/usr/bin/php-config

*If you do not know the PHP path, execute the which php command; it will display the path. On a typical cPanel VPS it could be /usr/bin/php-config or /usr/local/bin/php-config, but you had better check before executing the above command.


#make test

#make install

NOTE: if you are using suPHP then skip --with-apxs

*One more thing: if you run

#make test

and it shows 3 failed tests, do not worry; it did for me as well, and the final steps still worked.

The make install command will return the module path; note it down, as you will have to feed it into the php.ini file in the next step.

Check your php.ini location with:

#php -i | grep php.ini

Then open it with your favorite editor; mine was at

#vi /usr/local/lib/php.ini

Go to the last line and paste the following:
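In a standard APC install, the line to paste is the extension directive; a sketch assuming the stock module filename and that the module sits in PHP’s extension directory:

```ini
extension="apc.so"
```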


Now there is a catch: if you have other modules installed and their extension directory is different from the one make install showed for APC, you have to move your APC module to that directory so that all modules are in the same directory. In my case, I moved mine from the directory make install reported to the location where my other extension files were.

You can check that path in the php.ini section:

extension_dir = ""

After this, restart Apache. The command may vary between servers; mine worked with:

#service httpd restart

CentOS 5.2 ModSecurity Installation


While this guide is CentOS specific, it contains enough detail to be adaptable to most other distributions.

ModSecurity is essentially a firewall for Apache: it checks all traffic against a set of rules which detect and prevent potentially malicious activity. There are three parts to this ModSec installation.

1. ModSecurity
2. mlogc
3. ModSecurity Console

ModSecurity is the ‘firewall’; mlogc is responsible for sending logs to the management console.

The console can be downloaded from the ModSecurity website; I used the Windows version for simplicity. Each console installation can support multiple sensors (ModSec installations), so it provides centralised monitoring. The console installation isn’t covered here; there’s nothing to it: download, install, create sensors, done. Just make sure to install a valid license (free ones which support up to 3 sensors are currently available).

Versions used:

Apache: 2.2.3
ModSecurity: 2.5.7

Install Dependencies:

yum install httpd-devel libxml2 libxml2-devel curl-devel pcre-devel gcc-c++

note: curl-devel is only required for mlogc

Download and Installation


or, get the latest from

Stop Apache

service httpd stop

Untar it and install:

tar -xvzf modsecurity-apache_2.5.7.tar.gz

cd modsecurity-apache_2.5.7/apache2/

make mlogc
make install


Configure mlogc:

Copy the binary from mlogc-src/ to /usr/local/bin/

cp mlogc-src/mlogc /usr/local/bin/

Copy the default config to /etc/

cp mlogc-src/mlogc-default.conf /etc/mlogc.conf

Edit the configuration file: /etc/mlogc.conf:

Change the following:

ConsoleURI https://CONSOLE_IP_ADDRESS:8886/rpc/auditLogReceiver

SensorUsername "SENSOR_USERNAME"
SensorPassword "SENSOR_PASSWORD"

The above values need to reflect the Console installation and sensor configuration. Also ensure the port is correct; it should be either 8886 or 8888. Save and exit.

Configure ModSecurity:

Edit httpd.conf and add the following

# ModSecurity

Include conf/modsecurity/*.conf
LoadFile /usr/lib/
LoadModule unique_id_module modules/
LoadModule security2_module modules/
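Assuming the stock library and module filenames from the ModSecurity 2.x installation instructions, the complete block would read:

```apache
# ModSecurity (assumed stock filenames)
Include conf/modsecurity/*.conf
LoadFile /usr/lib/libxml2.so
LoadModule unique_id_module modules/mod_unique_id.so
LoadModule security2_module modules/mod_security2.so
```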

Still in the httpd.conf, go down to the main server configuration section and add:

# ModSecurity Configuration

# Turn the filtering engine On or Off
SecFilterEngine On

# Make sure that URL encoding is valid
SecFilterCheckURLEncoding On

# Unicode encoding check
SecFilterCheckUnicodeEncoding Off

# Only allow bytes from this range
SecFilterForceByteRange 0 255

# Only log suspicious requests
SecAuditEngine RelevantOnly

# Debug level set to a minimum
SecFilterDebugLog logs/modsec_debug_log
SecFilterDebugLevel 0

# Should mod_security inspect POST payloads
SecFilterScanPOST On

# By default log and deny suspicious requests
# with HTTP status 500
SecFilterDefaultAction "deny,log,status:500"

# Use RelevantOnly auditing
SecAuditEngine RelevantOnly

# Must use concurrent logging
SecAuditLogType Concurrent

# Send all audit log parts
SecAuditLogParts ABIDEFGHZ

# Use the same /CollectorRoot/LogStorageDir as in mlogc.conf
SecAuditLogStorageDir /var/log/mlogc/data

# Pipe audit log to mlogc with your configuration
SecAuditLog "|/usr/local/bin/mlogc /etc/mlogc.conf"

Save and Exit.

Copy rules to Apache directory

mkdir /etc/httpd/conf/modsecurity

From the rules directory:

cp *.conf /etc/httpd/conf/modsecurity

Make the necessary changes to modsecurity_crs_10_config.conf (mainly the logging section; use the values from httpd.conf):

# Log files structure

SecAuditLogType Concurrent
SecAuditLog "|/usr/local/bin/mlogc /etc/mlogc.conf"
SecAuditLogStorageDir /var/log/mlogc/data

SecAuditLogParts "ABIDEFGHZ"

Create the mlogc log directories and configure permissions:

mkdir /var/log/mlogc
mkdir /var/log/mlogc/data

chown :apache /var/log/mlogc
chown :apache /var/log/mlogc/data

chmod g+w /var/log/mlogc
chmod g+w /var/log/mlogc/data

Restart Apache

service httpd start

Confirm ModSecurity is running:

tail /var/log/httpd/error_log

[Wed Oct 22 21:37:45 2008] [notice] ModSecurity for Apache/2.5.7 ( configured.
[Wed Oct 22 21:37:45 2008] [notice] Digest: generating secret for digest authentication …
[Wed Oct 22 21:37:45 2008] [notice] Digest: done
[Wed Oct 22 21:37:46 2008] [notice] Apache/2.2.3 (CentOS) configured — resuming normal operations

Done! Generate some suspicious traffic (e.g. run an nmap scan against port 80) and check the console for alerts.

Files to check if things don’t work:


Nagios Installation


Nagios (pronounced /ˈnɑːdʒioʊs/) is a popular open source computer system and network monitoring software application. It watches hosts and services, alerting users when things go wrong and again when they get better.

Nagios, originally created under the name NetSaint, was written and is currently maintained by Ethan Galstad, along with a group of developers actively maintaining both official and unofficial plugins. N.A.G.I.O.S. is a recursive acronym: “Nagios Ain’t Gonna Insist On Sainthood”[3], “Sainthood” being a reference to the original name of the software, which was changed in response to a legal challenge by owners of a similar trademark.

Nagios was originally designed to run under Linux, but also runs well on other Unix variants. It is free software, licensed under the terms of the GNU General Public License version 2 as published by the Free Software Foundation.

1) Create Account Information
Become the root user.
su -l
Create a new nagios user account and give it a password.

/usr/sbin/useradd -m nagios
passwd nagios

Create a new nagcmd group for allowing external commands to be submitted through the web interface.
Add both the nagios user and the apache user to the group.
/usr/sbin/groupadd nagcmd
/usr/sbin/usermod -a -G nagcmd nagios
/usr/sbin/usermod -a -G nagcmd apache
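A quick way to confirm the memberships took effect is id -nG, which lists a user’s groups. The sketch below is parameterized so it can be tried with any user/group pair; on the Nagios host you would check nagios and apache against nagcmd:

```shell
# Report whether USER_TO_CHECK belongs to GROUP_TO_CHECK.
# The defaults are placeholders; use nagios/nagcmd on the real box.
USER_TO_CHECK=${USER_TO_CHECK:-root}
GROUP_TO_CHECK=${GROUP_TO_CHECK:-root}
if id -nG "$USER_TO_CHECK" | tr ' ' '\n' | grep -qx "$GROUP_TO_CHECK"; then
  echo "$USER_TO_CHECK is in $GROUP_TO_CHECK"
else
  echo "$USER_TO_CHECK is NOT in $GROUP_TO_CHECK"
fi
```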

2) Download Nagios and the Plugins
Create a directory for storing the downloads.
mkdir ~/downloads
cd ~/downloads
Download the source code tarballs of both Nagios and the Nagios plugins (visit for links to the latest versions). These directions were tested with
Nagios 3.2.0 and Nagios Plugins 1.4.11.


3) Compile and Install Nagios
Extract the Nagios source code tarball.
cd ~/downloads
tar xzf nagios-3.2.0.tar.gz
cd nagios-3.2.0
Run the Nagios configure script, passing the name of the group you created earlier like so:
./configure --with-command-group=nagcmd
Compile the Nagios source code.
make all
Install binaries, init script, sample config files and set permissions on the external command directory.
make install
make install-init
make install-config
make install-commandmode

4) Customize Configuration
Sample configuration files have now been installed in the /usr/local/nagios/etc directory. These sample
files should work fine for getting started with Nagios. You’ll need to make just one change before you
proceed. Edit the /usr/local/nagios/etc/objects/contacts.cfg config file with your favorite editor and change the email
address associated with the nagiosadmin contact definition to the address you’d like to use for receiving
alerts.
vi /usr/local/nagios/etc/objects/contacts.cfg

5) Configure the Web Interface
Install the Nagios web config file in the Apache conf.d directory.
make install-webconf
Create a nagiosadmin account for logging into the Nagios web interface. Remember the password you
assign to this account – you’ll need it later.
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
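If htpasswd is not yet on the path, an equivalent entry can be produced with openssl, which supports the apr1 (Apache MD5) scheme htpasswd uses by default. The file path and password below are placeholders:

```shell
# Generate an htpasswd-style line for nagiosadmin without htpasswd.
# HTFILE stands in for /usr/local/nagios/etc/htpasswd.users.
HTFILE=${HTFILE:-/tmp/htpasswd.users}
HASH=$(openssl passwd -apr1 'changeme')
echo "nagiosadmin:$HASH" > "$HTFILE"
cat "$HTFILE"
```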
Restart Apache to make the new settings take effect.
service httpd restart

6) Compile and Install the Nagios Plugins
Extract the Nagios plugins source code tarball.
cd ~/downloads
tar xzf nagios-plugins-1.4.11.tar.gz
cd nagios-plugins-1.4.11
Compile and install the plugins.
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install

7) Start Nagios
Add Nagios to the list of system services and have it automatically start when the system boots.
chkconfig --add nagios
chkconfig nagios on
Verify the sample Nagios configuration files.
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
If there are no errors, start Nagios.
service nagios start
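A pattern worth scripting: let nagios -v gate the start, since it exits non-zero when the configuration has errors. Paths are parameterized here so the sketch is harmless on a machine without Nagios installed:

```shell
# Start Nagios only if the configuration verifies cleanly.
NAGIOS_BIN=${NAGIOS_BIN:-/usr/local/nagios/bin/nagios}
NAGIOS_CFG=${NAGIOS_CFG:-/usr/local/nagios/etc/nagios.cfg}
if "$NAGIOS_BIN" -v "$NAGIOS_CFG" >/dev/null 2>&1; then
  echo "config OK - starting"
  service nagios start
else
  echo "config check failed (or nagios not installed) - not starting" >&2
fi
```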

8) Modify SELinux Settings
Fedora ships with SELinux (Security Enhanced Linux) installed and in Enforcing mode by default. This
can result in “Internal Server Error” messages when you attempt to access the Nagios CGIs.
See if SELinux is in Enforcing mode.
getenforce
Put SELinux into Permissive mode.
setenforce 0
To make this change permanent, you’ll have to modify the settings in /etc/selinux/config and reboot.
Instead of disabling SELinux or setting it to permissive mode, you can use the following command to
run the CGIs under SELinux enforcing/targeted mode:
chcon -R -t httpd_sys_content_t /usr/local/nagios/sbin/
chcon -R -t httpd_sys_content_t /usr/local/nagios/share/

9) Access the monitoring system through your URL

10) Install NRPE
Monitoring Host Setup
On the monitoring host (the machine that runs Nagios), you’ll need to do just a few things:
– Install the check_nrpe plugin
– Create a Nagios command definition for using the check_nrpe plugin
– Create Nagios host and service definitions for monitoring the remote host
These instructions assume that you have already installed Nagios on this machine according to the quickstart
installation guide. The configuration examples that are given reference templates that are defined in the sample
localhost.cfg and commands.cfg files that get installed if you follow the quickstart.
i. Install the check_nrpe plugin
Become the root user. You may have to use sudo -s on Ubuntu and other distros.
su -l
Create a directory for storing the downloads.
mkdir ~/downloads
cd ~/downloads
Download the source code tarball of the NRPE addon (visit for links to the latest
versions). At the time of writing, the latest version of NRPE was 2.8.
Extract the NRPE source code tarball.
tar xzf nrpe-2.8.tar.gz
cd nrpe-2.8
Compile the NRPE addon.
make all
Install the NRPE plugin.
make install-plugin
Last Updated: May 1, 2007 Page 9 of 18 Copyright (c) 1999-2007 Ethan Galstad
NRPE Documentation
ii. Test communication with the NRPE daemon
Make sure the check_nrpe plugin can talk to the NRPE daemon on the remote host. Replace “” in the
command below with the IP address of the remote host that has NRPE installed.
/usr/local/nagios/libexec/check_nrpe -H
You should get a string back that tells you what version of NRPE is installed on the remote host, like this:
NRPE v2.8
If the plugin returns a timeout error, check the following:
– Make sure there isn’t a firewall between the remote host and the monitoring server that is blocking
– Make sure that the NRPE daemon is installed properly under xinetd
– Make sure the remote host doesn’t have local (iptables) firewall rules that prevent the monitoring server from
talking to the NRPE daemon
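A first-pass check for the firewall items above is a raw TCP probe of the NRPE port. This sketch uses bash’s /dev/tcp redirection (it simply reports failure under other shells); HOST defaults to loopback here and would be the remote host’s address in practice:

```shell
# Probe TCP reachability of the NRPE port (5666) before digging
# into xinetd or iptables rules. Requires bash for /dev/tcp.
HOST=${HOST:-127.0.0.1}
if (exec 3<>"/dev/tcp/$HOST/5666") 2>/dev/null; then
  echo "port 5666 reachable on $HOST"
else
  echo "cannot reach $HOST:5666"
fi
```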
iii. Create a command definition
You’ll need to create a command definition in one of your Nagios object configuration files in order to use the
check_nrpe plugin. Open the sample commands.cfg file for editing…
vi /usr/local/nagios/etc/commands.cfg
and add the following definition to the file:
define command{
command_name check_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
You are now ready to start adding services that should be monitored on the remote machine to the Nagios
configuration files.
iv. Create host and service definitions
You’ll need to create some object definitions in order to monitor the remote Linux/Unix machine. These definitions
can be placed in their own file or added to an already existing object configuration file.
First, it’s best practice to create a new template for each different type of host you’ll be monitoring. Let’s create a
new template for linux boxes.
define host{
name linux-box ; Name of this template
use generic-host ; Inherit default values
check_period 24x7
check_interval 5
retry_interval 1
max_check_attempts 10
check_command check-host-alive
notification_period 24x7
notification_interval 30
notification_options d,r
contact_groups admins
}
Notice that the linux-box template definition is inheriting default values from the generic-host template, which is
defined in the sample localhost.cfg file that gets installed when you follow the Nagios quickstart installation guide.
Next, define a new host for the remote Linux/Unix box that references the newly created linux-box host template.
define host{
use linux-box ; Inherit default values from a template
host_name remotehost ; The name we’re giving to this server
alias Fedora Core 6 ; A longer name for the server
address ; IP address of the server
}
Next, define some services for monitoring the remote Linux/Unix box. These example service definitions will use
the sample commands that have been defined in the nrpe.cfg file on the remote host.
The following service will monitor the CPU load on the remote host. The “check_load” argument that is passed to
the check_nrpe command definition tells the NRPE daemon to run the “check_load” command as defined in the
nrpe.cfg file.
define service{
use generic-service
host_name remotehost
service_description CPU Load
check_command check_nrpe!check_load
}
The following service will monitor the number of currently logged in users on the remote host.
define service{
use generic-service
host_name remotehost
service_description Current Users
check_command check_nrpe!check_users
}
The following service will monitor the free drive space on /dev/hda1 on the remote host.
define service{
use generic-service
host_name remotehost
service_description /dev/hda1 Free Space
check_command check_nrpe!check_hda1
}
The following service will monitor the total number of processes on the remote host.
define service{
use generic-service
host_name remotehost
service_description Total Processes
check_command check_nrpe!check_total_procs
}
The following service will monitor the number of zombie processes on the remote host.
define service{
use generic-service
host_name remotehost
service_description Zombie Processes
check_command check_nrpe!check_zombie_procs
}
Those are the basic service definitions for monitoring the remote host. If you would like to add additional services
to be monitored, read the “Customizing Your Configuration” section of the NRPE documentation.
v. Restart Nagios
At this point you’ve installed the check_nrpe plugin and addon host and service definitions for monitoring the
remote Linux/Unix machine. Now it’s time to make those changes live…
Verify your Nagios configuration files.
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
If there are errors, fix them. If everything is fine, restart Nagios.
service nagios restart
That’s it! You should see the host and service definitions you created in the Nagios web interface. In a few
minutes Nagios should have the current status information for the remote Linux/Unix machine.
Since you might want to monitor more services on the remote machine, I would suggest you read the next section
as well. :-)
Also, when it comes time to upgrade the version of NRPE you’re running, it’s pretty easy to do. The initial
installation was the toughest, but upgrading is a snap.

Actual Steps made on the server(volt):
1. Edited the commands.cfg file under the directory /usr/local/nagios/etc/objects to add

#######Me’s additional commands#######
#check nrpe setup

define command{
command_name    check_nrpe_disk
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p $ARG1$ -c $ARG2$
}

define command{
command_name    check_nrpe_load
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p $ARG1$ -c $ARG2$
}

define command{
command_name    check_nrpe_swap
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p $ARG1$ -c $ARG2$
}

define command{
command_name    check_nrpe_zombie_procs
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p $ARG1$ -c $ARG2$
}

define command{
command_name    check_nrpe_total_procs
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p $ARG1$ -c $ARG2$
}
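Since all five definitions above share an identical command_line, a single generic command would do the same job; the name check_nrpe_generic below is hypothetical:

```
define command{
command_name    check_nrpe_generic
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p $ARG1$ -c $ARG2$
}
```

Services would then invoke it as, for example, check_nrpe_generic!5666!check_load, passing the port and the remote command as arguments.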

2. Create cfg files under the directory /usr/local/nagios/etc/objects

c) Remote Host Setup
These instructions should be completed on the remote Linux/Unix host that the NRPE daemon will be installed on.
You’ll be installing the Nagios plugins and the NRPE daemon…
i. Create Account Information
Become the root user. You may have to use sudo -s on Ubuntu and other distros.
su -l
Create a new nagios user account and give it a password.
/usr/sbin/useradd nagios
passwd nagios
ii. Install the Nagios Plugins
Create a directory for storing the downloads.
mkdir ~/downloads
cd ~/downloads
Download the source code tarball of the Nagios plugins (visit for links to the latest
versions). At the time of writing, the latest stable version of the Nagios plugins was 1.4.6.
Extract the Nagios plugins source code tarball.
tar xzf nagios-plugins-1.4.6.tar.gz
cd nagios-plugins-1.4.6
Compile and install the plugins.
./configure
make
make install
The permissions on the plugin directory and the plugins will need to be fixed at this point, so run the following commands:
chown nagios.nagios /usr/local/nagios
chown -R nagios.nagios /usr/local/nagios/libexec
iii. Install xinetd
Fedora Core 6 doesn’t ship with xinetd installed by default, so install it with the following command:
yum install xinetd
iv. Install the NRPE daemon
Download the source code tarball of the NRPE addon (visit for links to the latest
versions). At the time of writing, the latest version of NRPE was 2.8.
cd ~/downloads
Extract the NRPE source code tarball.
tar xzf nrpe-2.8.tar.gz
cd nrpe-2.8
Compile the NRPE addon.
make all
Install the NRPE plugin (for testing), daemon, and sample daemon config file.
make install-plugin
make install-daemon
make install-daemon-config
Install the NRPE daemon as a service under xinetd.
make install-xinetd
Edit the /etc/xinetd.d/nrpe file and add the IP address of the monitoring server to the only_from directive.
only_from =
Add the following entry for the NRPE daemon to the /etc/services file.
nrpe 5666/tcp # NRPE
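Appending to /etc/services can be made idempotent so repeated runs of a setup script don’t duplicate the entry. A sketch against a scratch copy (set SERVICES=/etc/services, as root, on the real host):

```shell
# Add the nrpe service entry only if it is not already present.
SERVICES=${SERVICES:-/tmp/services.demo}
touch "$SERVICES"
grep -q '^nrpe[[:space:]]' "$SERVICES" || \
  echo 'nrpe            5666/tcp                # NRPE' >> "$SERVICES"
grep '^nrpe' "$SERVICES"
```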
Restart the xinetd service.
service xinetd restart