Screen: A Useful Command for Linux Admins


The screen command offers the ability to detach a long-running process (or program, or shell script) from a session and then attach it back at a later time.

When the session is detached, the process that was originally started inside screen is still running and is still managed by screen. You can re-attach the session at a later time, and your terminals are still there, the way you left them.

In this article, let us review how to manage virtual terminal sessions using the screen command, with examples.

Screen Command Example 1: Execute a command (or shell-script), and detach the screen

Typically you’ll execute a command or shell script from the command line as shown below.

$ unix-command-to-be-executed

$ ./unix-shell-script-to-be-executed

Instead, use the screen command as shown below.

$ screen unix-command-to-be-executed

$ screen ./unix-shell-script-to-be-executed

Once you’ve started a command under screen, you can detach it from the terminal using any one of the following methods.

Screen Detach Method 1: Detach the screen using CTRL+A d

When the command is executing, press CTRL+A followed by d to detach the screen.

Screen Detach Method 2: Detach the screen using -d option

When the command is running in a screen session attached to another terminal, you can detach it by typing the following:

$ screen -d SCREENID

Screen Command Example 2: List all the running screen processes

You can list all the running screen processes using screen -ls command.

For example:

On terminal 1 you did the following:

$ screen ./

From terminal 2 you can view the list of all screen processes. You can also detach it from terminal 2 as shown below.

$ screen -ls
There is a screen on:
	4491.pts-2.FC547	(Attached)
1 Socket in /var/run/screen/S-sathiya.

$ screen -d 4491.pts-2.FC547
[4491.pts-2.FC547 detached.]
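When scripting around screen, the session id can be pulled out of the screen -ls output. A minimal sketch, run here against the sample listing above rather than a live screen -ls, so the id is just the example value:

```shell
# "out" holds the sample listing from above; in real use you would
# capture it with: out=$(screen -ls)
out='There is a screen on:
	4491.pts-2.FC547	(Attached)
1 Socket in /var/run/screen/S-sathiya.'

# The session line is the one marked Attached or Detached; field 1 is the id.
id=$(printf '%s\n' "$out" | awk '/\(Attached\)|\(Detached\)/ {print $1}')
echo "$id"
```

You could then detach that session with screen -d "$id".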

Screen Command Example 3: Attach the Screen when required

You can attach the screen at any time by specifying the screen id as shown below. You can get the screen id from the “screen -ls” command output.

$ screen -r 4491.pts-2.FC547

Screen Command Usage Scenario 1

When you have access to only one terminal, you can use screen command to multiplex the single terminal into multiple, and execute several commands. You might also find it very useful to combine the usage of screen command along with the usage of SSH ControlMaster.

Screen Command Usage Scenario 2

When you are working in a team environment, you might walk over to a colleague’s desk to get a few things clarified. While there, if needed, you can even start a process on their machine using the screen command and detach it when you are done. Later, back at your own desk, you can log in and attach the screen to your terminal.


How to Install Memcached in CentOS 6


I was building a website of my own and was in the testing phase when I noticed it was a little slow. It might be because I have too many graphics loading and a heavy database being queried during searches. I searched Google for ways to speed up websites, found out about memcached, and came across an article on how to install it. I tried it and noticed a big difference in my website’s performance.


Memcached is a distributed, high-performance, in-memory caching system that is primarily used to speed up sites that make heavy use of databases. It can however be used to store objects of any kind. Nearly every popular CMS has a plugin or module to take advantage of memcached, and many programming languages have a memcached library, including PHP, Perl, Ruby, and Python. Memcached runs in-memory and is thus quite speedy, since it does not need to write to disk.

Here’s how to install it on CentOS 6:

Memcached does have some dependencies that need to be in place. Install libevent using yum:

yum install libevent libevent-devel

To start installing memcached, change your working directory to /usr/local/src and download the latest memcached source tarball there:

cd /usr/local/src 

Uncompress the tarball you downloaded and change into the directory that is created:

tar xvzf memcached-1.4.15.tar.gz
cd memcached-1.4.15


Check for a newer version before proceeding with the installation; there may be a newer release than existed at the time this post was published. Please note the tarball version we are using is 1.4.15.

Next, configure your Makefile. The simplest way is to run the configure script with no flags:

./configure

Additional configure flags are available and can improve performance if your server is capable. For 64-bit OSes, you can enable memcached to utilize a larger memory allocation than is possible with 32-bit OSes:

./configure --enable-64bit

If your server has multiple CPUs or uses multi-core CPUs, enable threading:

./configure --enable-threads

If your server supports it, you can use both flags:

./configure --enable-threads --enable-64bit

N.B.: if the configure script does not run, you may have to install compilation tools on your server. That is as simple as:

yum install gcc
yum install make

Once the configure script completes, build and install memcached:

make && make install

Last but not least, start a memcached server:

memcached -d -u nobody -m 512 -p 11211

Put another way, the previous command can be laid out like this:

memcached -d -u [user] -m [memory size] -p [port] -l [listening IP]

Let’s go over what each switch does in the above command:

-d : starts memcached as a background daemon process
-u : specifies the user that memcached runs as
-m : sets the amount of memory (in MB) allocated to memcached
-p : sets the port on which memcached will listen
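Mapping the generic form onto the concrete command, here is a small sketch that assembles the startup line from the individual switch values (the values are just the examples used earlier):

```shell
user=nobody       # -u: run as this user
mem=512           # -m: memory in MB
port=11211        # -p: listening port

# Compose the full startup command from its parts
cmd="memcached -d -u $user -m $mem -p $port"
echo "$cmd"
```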

Your site is now ready for a noticeably faster run.


20 Most Used Linux System Monitoring Tools


The following are some of the basic CLI, TUI, and GUI tools for monitoring Linux systems; they help with in-depth system analysis and debugging of server problems. These commands commonly help resolve issues regarding CPU, memory, network, and storage.

1: top – Process Activity Command

The top program provides a dynamic real-time view of a running system, i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every few seconds (three seconds in the stock procps top).



2: vmstat – System Activity, Hardware and System Information

The vmstat command reports information about processes, memory, paging, block IO, traps, and CPU activity.

#vmstat -a


3: w – Find Out Who Is Logged on And What They Are Doing

The w command displays information about the users currently on the machine and their processes.

# w testuser 


4: uptime – Tell How Long The System Has Been Running

The uptime command can be used to see how long the server has been running. It shows the current time, how long the system has been up, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.

# uptime


A load of 1 per CPU can be considered optimal, though acceptable values change from system to system: on a single-CPU system a load of 1-3 may be acceptable, while on SMP systems 6-10 might be.
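The same three load averages can also be read directly from /proc/loadavg. A sketch using a sample line (the numbers are made up) instead of the live file:

```shell
# /proc/loadavg format: 1-min 5-min 15-min running/total last-pid
sample='0.75 0.60 0.40 1/123 4567'   # hypothetical values

set -- $sample
one=$1; five=$2; fifteen=$3
echo "1-min load: $one, 5-min: $five, 15-min: $fifteen"
```

On a real system, replace the sample string with $(cat /proc/loadavg).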

5: ps – Displays The Processes



The ps command reports a snapshot of the current processes. To select all processes, use the -A or -e option:

# ps -A

Unlike top, ps shows a one-time snapshot rather than a continuously updating view, but it can provide more information about each process.

Show Long Format Output

# ps -Al
To turn on extra full mode (it will show command line arguments passed to process):
# ps -AlF

To See Threads ( LWP and NLWP)

# ps -AlFH

To See Threads After Processes

# ps -AlLm

Print All Process On The Server

# ps ax
# ps axu

Print A Process Tree

# ps -ejH
# ps axjf
# pstree

Print Security Information

# ps -eo euser,ruser,suser,fuser,f,comm,label
# ps axZ
# ps -eM

See Every Process Running As User testuser

# ps -U testuser -u testuser u

Set Output In a User-Defined Format

# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
# ps -eopid,tt,user,fname,tmout,f,wchan

Display Only The Process IDs of Lighttpd

# ps -C lighttpd -o pid=
# pgrep lighttpd
# pgrep -u testuser php-cgi

Display The Name of PID 55977

# ps -p 55977 -o comm=

Find Out The Top 10 Memory Consuming Process

# ps auxf | sort -nr -k 4 | head -10

Find Out top 10 CPU Consuming Process

# ps auxf | sort -nr -k 3 | head -10
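The two pipelines above work because %CPU and %MEM are the third and fourth columns of ps aux output. The same sort/head trick can be checked on canned lines (the processes below are invented):

```shell
# Columns mimic `ps aux`: USER PID %CPU %MEM COMMAND
top_mem=$(printf '%s\n' \
  'root   1  0.0   0.1  init' \
  'web    2  5.5   9.8  httpd' \
  'db     3  2.1  25.0  mysqld' |
  sort -nr -k 4 | head -1)    # -k 4 sorts numerically on %MEM, descending
echo "$top_mem"
```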

6: free – Memory Usage

The command free displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.

# free 


7: iostat – Average CPU Load, Disk Activity

The iostat command reports Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions, and network filesystems (NFS).

# iostat 


8: sar – Collect and Report System Activity

The sar command is used to collect, report, and save system activity information. To see the network counters, enter:



# sar -n DEV | more
To display the network counters from the 24th:
# sar -n DEV -f /var/log/sa/sa24 | more
You can also display real time usage using sar:
# sar 4 5

9: mpstat – Multiprocessor Usage

The mpstat command displays activities for each available processor, processor 0 being the first one. Run mpstat -P ALL to display average CPU utilization per processor:

# mpstat -P ALL


10: pmap – Process Memory Usage

The pmap command reports the memory map of a process. Use this command to find causes of memory bottlenecks.

# pmap -d PID 

To display process memory information for pid # 47394, enter:

# pmap -d 47394 


11: netstat – Network Statistics

The command netstat displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.



12: ss – Network Statistics

The ss command is used to dump socket statistics. It shows information similar to netstat.



13: iptraf – Real-time Network Statistics

The iptraf command is an interactive, colorful, ncurses-based IP LAN monitor. It generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others. It can provide the following info in an easy-to-read format:

  • Network traffic statistics by TCP connection
  • IP traffic statistics by network interface
  • Network traffic statistics by protocol
  • Network traffic statistics by TCP/UDP port and by packet size
  • Network traffic statistics by Layer2 address




14: tcpdump – Detailed Network Traffic Analysis

tcpdump is a simple command that dumps traffic on a network. However, you need a good understanding of the TCP/IP protocols to utilize this tool. For example, to display traffic on port 80 (HTTP), enter:

 # tcpdump -i eth0 'tcp port 80'

To display all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter:

# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' 
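That filter computes IP total length minus IP header length minus TCP header length, i.e. the TCP payload size, and keeps packets where it is non-zero. The arithmetic can be verified with plain shell (the packet sizes below are hypothetical):

```shell
# payload = ip_total_len - ip_header_len - tcp_header_len
# the header lengths arrive as 32-bit words, hence the *4
payload() { echo $(( $1 - $2 * 4 - $3 * 4 )); }

payload 40 5 5    # minimal ACK: 20-byte IP hdr + 20-byte TCP hdr -> 0
payload 100 5 5   # same headers plus 60 bytes of data -> 60
```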

To display all FTP sessions to a given host (<host> is a placeholder), enter:

# tcpdump -i eth1 'dst host <host> and (port 21 or 20)'

To display all HTTP sessions to a given host:

# tcpdump -ni eth0 'dst host <host> and tcp and port http'

To capture packets to a file that can later be examined in detail with Wireshark, enter:

# tcpdump -n -i eth1 -s 0 -w output.pcap src or dst port 80


15: strace – System Calls

strace traces system calls and signals. This is useful for debugging web server and other server problems: attach it to a running process (strace -p PID) to see what the process is doing.



16: /proc File System – Various Kernel Statistics

The /proc file system provides detailed information about various hardware devices and other Linux kernel internals. See the Linux kernel /proc documentation for further details. Common /proc examples:

# cat /proc/cpuinfo
# cat /proc/meminfo
# cat /proc/zoneinfo
# cat /proc/mounts
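Since /proc files are plain text, they are easy to post-process. A sketch extracting one field from /proc/meminfo-style output, run here on a two-line sample (the numbers are invented) instead of the live file:

```shell
# Sample in the key/value format used by /proc/meminfo
meminfo='MemTotal:       2048000 kB
MemFree:         512000 kB'

# Pick out the MemFree value (second field of the matching line)
free_kb=$(printf '%s\n' "$meminfo" | awk '/^MemFree:/ {print $2}')
echo "MemFree: $free_kb kB"
```

Against a live system the awk line would read: awk '/^MemFree:/ {print $2}' /proc/meminfo.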


17: Nagios – Server And Network Monitoring

Nagios is a popular open source computer system and network monitoring application. You can easily monitor all your hosts, network equipment, and services. It can send alerts when things go wrong and again when they recover. FAN (“Fully Automated Nagios”) aims to provide a Nagios installation that includes most of the tools provided by the Nagios community. FAN ships as a CD-ROM image in standard ISO format, making it easy to install a Nagios server, and a wide range of tools is included in the distribution to improve the user experience around Nagios.



18: Cacti – Web-based Monitoring Tool

Cacti is a complete network graphing solution designed to harness the power of RRDTool’s data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy-to-use interface that makes sense for everything from LAN-sized installations up to complex networks with hundreds of devices. It can provide data about network, CPU, memory, logged-in users, Apache, DNS servers, and much more.



19: KDE System Guard – Real-time Systems Reporting and Graphing

KSysguard is a network enabled task and system monitor application for KDE desktop. This tool can be run over ssh session. It provides lots of features such as a client/server architecture that enables monitoring of local and remote hosts. The graphical front end uses so-called sensors to retrieve the information it displays. A sensor can return simple values or more complex information like tables. For each type of information, one or more displays are provided. Displays are organized in worksheets that can be saved and loaded independently from each other. So, KSysguard is not only a simple task manager but also a very powerful tool to control large server farms.



20: Gnome System Monitor – Real-time Systems Reporting and Graphing

The System Monitor application enables you to display basic system information and monitor system processes, usage of system resources, and file systems. You can also use System Monitor to modify the behavior of your system. Although not as powerful as the KDE System Guard, it provides the basic information which may be useful for new users:

  • Displays various basic information about the computer’s hardware and software.
  • Linux Kernel version
  • GNOME version
  • Hardware
  • Installed memory
  • Processors and speeds
  • System Status
  • Currently available disk space
  • Processes
  • Memory and swap space
  • Network usage
  • File Systems
  • Lists all mounted filesystems along with basic information about each.


More Tools of interest

A few more tools:

  • nmap – scan your server for open ports.
  • lsof – list open files, network connections and much more.
  • ntop web based tool – ntop is the best tool to see network usage in a way similar to what top command does for processes i.e. it is network traffic monitoring software. You can see network status, protocol wise distribution of traffic for UDP, TCP, DNS, HTTP and other protocols.
  • Conky – Another good monitoring tool for the X Window System. It is highly configurable and is able to monitor many system variables including the status of the CPU, memory, swap space, disk storage, temperatures, processes, network interfaces, battery power, system messages, e-mail inboxes etc.
  • GKrellM – It can be used to monitor the status of CPUs, main memory, hard disks, network interfaces, local and remote mailboxes, and many other things.
  • vnstat – vnStat is a console-based network traffic monitor. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
  • htop – htop is an enhanced version of top, the interactive process viewer, which can display the list of processes in a tree form.
  • mtr – mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
  • wireshark – a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, the project was renamed Wireshark in May 2006 due to trademark issues.
  • snort – Snort’s open source network-based intrusion detection system (NIDS) has the ability to perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis, content searching, and content matching. The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, common gateway interface, buffer overflows, server message block probes, and stealth port scans.
  • Centreon – Centreon is an Open Source software package that lets you supervise all the infrastructures and applications comprising your information system.


Snort (Intrusion Detection Utility) Installation in CentOS 6



Snort is a free and open source network intrusion prevention system (NIPS) and network intrusion detection system (NIDS)[2] created by Martin Roesch in 1998. Snort is now developed by Sourcefire, of which Roesch is the founder and CTO. In 2009, Snort entered InfoWorld’s Open Source Hall of Fame as one of the “greatest [pieces of] open source software of all time”.

Snort’s open source network-based intrusion detection system (NIDS) has the ability to perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis, content searching, and content matching. The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, common gateway interface, buffer overflows, server message block probes, and stealth port scans.

Snort can be configured in three main modes: sniffer, packet logger, and network intrusion detection. In sniffer mode, the program will read network packets and display them on the console. In packet logger mode, the program will log packets to the disk. In intrusion detection mode, the program will monitor network traffic and analyze it against a rule set defined by the user. The program will then perform a specific action based on what has been identified.

Before proceeding with the Snort installation you will need to install the required packages. Follow these steps prior to Snort’s installation.


Make sure you have the latest versions of MySQL, httpd (Apache), and the development tools and libraries.

     Install the necessary packages needed to run snort successfully.

 #yum install mysql-bench mysql-devel php-mysql gcc php-gd gd glib2-devel gcc-c++

      Install libcap, libpcap, and pcre via yum

#yum install libcap*
#yum install libpcap*
#yum install pcre*

      Install libdnet 1.12

#cd /
#mkdir snort_install
#cd snort_install
#tar -zxvf libdnet-1.12.tgz
#cd libdnet-1.12
#./configure
#make && make install

     Install daq version 2.0.0

#cd /snort_install
#tar -zxvf daq-2.0.0.tar.gz
#cd daq-2.0.0
#./configure
#make && make install

     Install snort version 2.9.4

#cd /snort_install
#tar -zxvf snort-2.9.4.tar.gz
#cd snort-2.9.4
#./configure
#make && make install

Post-Installation Instructions

      prepare for rules installation

# groupadd snort
# useradd -g snort snort -s /sbin/nologin
# mkdir /etc/snort
# mkdir /etc/snort/rules
# mkdir /etc/snort/so_rules
# mkdir /etc/snort/preproc_rules
# mkdir /var/log/snort
# chown snort:snort /var/log/snort
# mkdir /usr/local/lib/snort_dynamicrules
# cd /snort_install/snort-2.9.4/etc/
# cp * /etc/snort/
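The mkdir sequence above can be compressed with mkdir -p, which creates parent directories as needed. A sketch run against a scratch prefix so it can be tried without root (ETC here is an assumption standing in for the real /etc and /var/log):

```shell
# Scratch prefix standing in for the real filesystem
ETC=$(mktemp -d)

# One mkdir -p per tree replaces the individual mkdir calls above
mkdir -p "$ETC/snort/rules" "$ETC/snort/so_rules" "$ETC/snort/preproc_rules"
mkdir -p "$ETC/log/snort"

ls "$ETC/snort"    # rules, so_rules and preproc_rules now exist
```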

      Register on the official Snort web site and download the rules tarball to the /snort_install directory

#cd /snort_install
#tar -zxvf snortrules-snapshot-2940.tar.gz
#cd rules/
#cp * /etc/snort/rules
#cp ../so_rules/precompiled/Centos-5-4/i386/* /etc/snort/so_rules
#cp ../preproc_rules/* /etc/snort/preproc_rules/

     Edit /etc/snort/snort.conf file

1. Change “var RULE_PATH ../rules” to “var RULE_PATH /etc/snort/rules”,
change “var SO_RULE_PATH ../so_rules” to “var SO_RULE_PATH /etc/snort/so_rules”, and
change “var PREPROC_RULE_PATH ../preproc_rules” to “var PREPROC_RULE_PATH /etc/snort/preproc_rules”.
2. Comment out the whole “Reputation preprocessor” section, because we don’t have a whitelist file.
3. Find the “Configure output plugins” section and add the line “output unified2: filename snort.log, limit 128”.
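After those three edits, the relevant lines of /etc/snort/snort.conf should read roughly as follows (a sketch of the end state, not a complete configuration):

```conf
# rule path variables now point at the installed locations
var RULE_PATH /etc/snort/rules
var SO_RULE_PATH /etc/snort/so_rules
var PREPROC_RULE_PATH /etc/snort/preproc_rules

# added in the "Configure output plugins" section
output unified2: filename snort.log, limit 128
```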

    Install Barnyard 2

#cd /snort_install
#tar -zxvf barnyard2-1.9.tar.gz 
#cd barnyard2-1.9
#./configure --with-mysql-libraries=/usr/lib/mysql/
#make install
#cp etc/barnyard2.conf /etc/snort/
#mkdir /var/log/barnyard2
#chmod 755 /var/log/barnyard2
#touch /var/log/snort/barnyard2.waldo

       Setup MySQL Database

#echo "SET PASSWORD FOR root@localhost=PASSWORD('yourpassword');"| mysql -u root -p
#echo "create database snort;"| mysql -u root -p
#cd /snort_install/barnyard2-1.9
#mysql -u root -p -D snort < schemas/create_mysql
#echo "grant create, insert on root.* to snort@localhost;" | mysql -u root -p
#echo "SET PASSWORD FOR snort@localhost=PASSWORD('yourpassword');" | mysql -u root -p
#echo "grant create,insert,select,delete,update on snort.* to snort@localhost" | mysql -u root -p

     Edit the file /etc/snort/barnyard2.conf

Change “config hostname: thor” to “config hostname: localhost”.

Change “config interface: eth0” to “config interface: eth1”.

Add this line at the end of the file: “output database: log, mysql, user=snort password=yourpassword dbname=snort host=localhost”
Note: the device eth1 may vary depending on your system setup. The example above is a two-interface (eth0, eth1) setup in which Snort monitors the second network device (eth1).


#/usr/local/bin/snort -u snort -g snort -c /etc/snort/snort.conf -i eth1

    If it prints “Initialization Complete”, the configuration works.

      Or execute Snort from the command line:

#snort -c /etc/snort/snort.conf -l /var/log/snort/

If the test and the manual run work fine, proceed with the next step.

      Make Snort and Barnyard2 boot up automatically

Edit the file /etc/rc.local, add the below lines

/sbin/ifconfig eth1 up
/usr/local/bin/snort -D -u snort -g snort -c /etc/snort/snort.conf -i eth1

/usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard2.waldo -D

Restart to test changes.

#init 6


SVN (Subversion) Commands


What is SVN (Subversion)?

Apache Subversion (often abbreviated SVN, after the command name svn) is a software versioning and revision control system distributed under an open source license. Developers use Subversion to maintain current and historical versions of files such as source code, web pages, and documentation. Its goal is to be a mostly compatible successor to the widely used Concurrent Versions System (CVS).

The open source community has used Subversion widely: for example in projects such as Apache Software Foundation, Free Pascal, FreeBSD, GCC, Mono and SourceForge. Google Code also provides Subversion hosting for their open source projects. BountySource systems use it exclusively. CodePlex offers access to Subversion as well as to other types of clients.

The corporate world has also started to adopt Subversion. A 2007 report by Forrester Research recognized Subversion as the sole leader in the Standalone Software Configuration Management (SCM) category and as a strong performer in the Software Configuration and Change Management (SCCM) category.[1]

Subversion was created by CollabNet Inc. in 2000 and is now a top-level Apache project being built and used by a global community of contributors.

Some of the commands frequently used with SVN are listed below.

* Create Svn Repository

Syntax: svnadmin create <svn_data_directory>/<repository_name>


  $ svnadmin create /var/www/svn/repotest

-Command to create a repository using the default configuration.

$ chown apache.apache /var/www/svn/repotest 

- Command to use after creating the repository to assign ownership of the repository root directory to apache.


Syntax: svnadmin create --fs-type fsfs <svn_data_directory>/<repository_name>

  $ svnadmin create --fs-type fsfs /var/www/svn/repotest

-Command to create a repository while explicitly specifying the repository type (fsfs).


  * Import initial data into the repository

$ svn import -m "Initial import." /path/to/project file:///var/www/svn/repotest

- where /path/to/project is a placeholder for the local directory whose contents you are importing, and file:///var/www/svn/repotest is the URL of the repository created above

  * list and view contents of a repo in tree view

Syntax: svnlook tree <repository absolute path>

  $ svnlook tree /var/www/svn/repotest/ 

  * View SVN information about the working copy (run inside a checked-out directory)

$ svn info

  * Check the status of the working copy

$ svn st

- st is shorthand for status; it lists files that have been added, modified, or deleted.

  * Add a file or folder to svn

$ svn add <file>

  * Create a Directory for svn

$ svn mkdir <directory>

 * Log svn

$ svn log

  *To revert to original file

$  svn revert <path>

  - revert a whole directory of files, use the --depth=infinity option:

$ svn revert --depth=infinity <path>


  *Delete a file or directory from svn

$ svn delete <directory>

  * Commit all changes made to files (Note: you must be inside the working copy)

$ svn ci -m "adding directories"

- where ci is the commit command, -m is the parameter for the log message, and “adding directories” is the note included with the commit.

If you want to use a file that’s under version control for your commit message with --file, you need to pass the --force-log switch:

$ svn commit --file file_under_vc.txt foo.c
svn: The log message file is under version control
svn: Log message file is a versioned file; use '--force-log' to override

$ svn commit --force-log --file file_under_vc.txt foo.c
Sending        foo.c
Transmitting file data .
Committed revision 6.

  * Check out the contents of a repository

Syntax: svn co <repository_url> <path>

$ svn co file:///var/www/svn/repotest/


Happy versioning.

Installing Centreon on CentOS

What is Centreon?

Centreon is an Open Source software package that lets you supervise all the infrastructures and applications comprising your information system. It relies on the following packages:

_ nagios
_ nagios-plugins
_ ndoutils
_ nrpe
_ make
_ sudo
_ apache (httpd server)
_ mysql (database server)
_ php
_ gd
_ gd-devel
_ perl
_ gcc
_ rrdtool
_ net-snmp

Step 1: Install Dependencies

root@linux: ~ # yum -y install make sudo gd gd-devel httpd* mysql* php* perl* gcc rrdtool* net-snmp*

If you can’t install rrdtool via yum, you can follow these steps:

1) add rpmforge repository

root@linux: ~ # yum -y install yum-priorities

2) edit file priorities.conf

root@linux: ~ # vim /etc/yum/pluginconf.d/priorities.conf


3) download and install rpm forge

root@linux: ~ # wget
root@linux: ~ # rpm --import
root@linux: ~ # rpm -K rpmforge-release-0.3.6-1.el5.rf.i386.rpm
root@linux: ~ # rpm -i rpmforge-release-0.3.6-1.el5.rf.i386.rpm

4) install rrdtool using yum

root@linux: ~ # yum -y install rrdtool rrdtool-devel perl-rrdtool

Installation of nagios, nagios-plugins, ndoutils, and nrpe is covered separately.

Step 2: Compile Centreon

1)download package

root@linux: ~ # wget

2) extract package

root@linux: ~ # tar -xzvf centreon-2.0.tar.gz

3) compile centreon

root@linux: ~ # cd centreon-2.0
root@linux: centreon-2.0 # ./ -i

Answer each question the script asks. You can just press Return to accept the default, or press y for yes or n for no.

the location of config will be :


the location of  PEAR.php config will be:


When the compilation is done, you can browse to http://yourdomain/centreon and continue with the web interface installation.

4) web interface installation
Follow the on-screen instructions and click the Next button to continue the installation; make sure every check shows an OK result.
If there is a problem, the installer tells you what needs attention (step 4). If you get a permissions error at that step, chown and chmod are the solution:

root@linux: ~ # chown -R nagios.apache /usr/local/nagios
root@linux: ~ # chmod -R 775 /usr/local/nagios

5) Test. Open your browser and go to http://yourdomain/centreon



Creating a Self-Signed SSL Certificate for Apache


Step 1: Generate a server key, which is needed to create the .ca and .crt files

[root@home test]# openssl genrsa -des3 -out server.key 4096
Generating RSA private key, 4096 bit long modulus
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
[root@home test]# ls

Step 2: Remove the passphrase from the key file. This is done so you do not have to type in the passphrase every time Apache is started. This is especially useful in the event of a server reboot, when there is no one to manually type the passphrase. (Note: Once the passphrase is removed from the key, make sure that the file is readable only by root.) In the commands below, server.key.orig is just a working name for the passphrase-protected copy of the key.

[root@home test]# mv server.key server.key.orig
[root@home test]# openssl rsa -in server.key.orig -out server.key
Enter pass phrase for server.key.orig:
writing RSA key
[root@home test]# ls

Step 3: Generate the CA file. The Certificate Authority file identifies the body that signed the certificate; here we will call it server.ca. The certificate validity in this example is 365 days, after which you will have to generate new CA and CRT files.

[root@home test]# openssl req -new -x509 -days 365 -key server.key -out server.ca
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
Country Name (2 letter code) [GB]: PH
State or Province Name (full name) [Berkshire]:Manila
Locality Name (eg, city) [Newbury]:Manila
Organization Name (eg, company) [My Company Ltd]:ZXY Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server’s hostname) []
Email Address []
[root@home test]# ls server.key

Step 4: Generate the CSR file. The Certificate Signing Request is the file that contains the information of the certificate itself. Note that in the Common Name field, you will have to use the fully qualified domain name (FQDN) of the actual site where the certificate is going to be used; if you don’t have an FQDN, use the server’s IP address instead. If the site URL and the Common Name are different, users will see a pop-up warning whenever they visit your site.

[root@home test]# openssl req -new -key server.key -out server.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
Country Name (2 letter code) [GB]: PH
State or Province Name (full name) [Berkshire]:Manila
Locality Name (eg, city) [Newbury]:Manila
Organization Name (eg, company) [My Company Ltd]:ZXY Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server’s hostname) []
Email Address []

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@home test]# ls server.csr server.key

Step 5: Generate the CRT file.

[root@home test]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=PH/ST=Manila/L=Manila/O=ZXY Corp/OU=IT/
Getting Private key
[root@home test]# ls server.crt server.csr server.key

There you have it. You now have the files you need to create a secure HTTP site with a self-signed certificate. All you have to do now is install these certificates on your Apache server. Your server needs to have mod_ssl enabled to use the secure HTTP port (443).
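As an aside, recent OpenSSL versions let you collapse the key and certificate generation into a single command: -nodes skips the passphrase (as in step 2) and -subj supplies the Distinguished Name non-interactively. A sketch using a scratch directory and a placeholder Common Name of localhost:

```shell
dir=$(mktemp -d)

# Key plus self-signed certificate in one shot, valid 365 days
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/C=PH/ST=Manila/L=Manila/O=ZXY Corp/OU=IT/CN=localhost" \
  -keyout "$dir/server.key" -out "$dir/server.crt"

# Verify the subject that was baked into the certificate
subject=$(openssl x509 -noout -subject -in "$dir/server.crt")
echo "$subject"
```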

Step 6: Install the certificates. Just copy the files to where you want your SSL certificates to be, such as /etc/httpd/conf.d/ssl.crt. Then set up your ssl.conf file so its directives (SSLCertificateFile, SSLCertificateKeyFile) point to the location of your SSL files.

How to Mirror a Subversion Repository


Subversion (SVN) is a version control system initiated in 1999 by CollabNet Inc. It is used to maintain current and historical versions of files such as source code, web pages, and documentation. Its goal is to be a mostly-compatible successor to the widely used Concurrent Versions System (CVS).

Subversion is well known in the open source community and is used on many open source projects, including Apache Software Foundation, Free Pascal, FreeBSD, GCC, Django, Ruby, Mono, ExtJS, and PHP. Google Code also provides Subversion hosting for open source projects. BountySource uses it exclusively. CodePlex offers access via Subversion as well as other types of clients.


* Treating the Repository as a Filesystem
* Using Svnadmin Dump and Load
* Using Svnadmin Hotcopy
* Summary
* Checking the mirror
* Reference Information

Here are three ways to create a full mirror of a Subversion repository:

1. Treat the repository like any other filesystem and recursively copy it to the mirror location.
2. Use svnadmin dump and svnadmin load.
3. Use svnadmin hotcopy.

There are important differences between these three strategies.
Treating the Repository as a Filesystem

You can of course treat the repository like any other directory tree and copy it recursively:

cp -R PATH_TO_REPOS PATH_TO_MIRROR
This is a bad idea if the repository is in use — you’re copying a moving target — so you’ll have to take down the Subversion server while making the mirror. If you’re prepared to accept this downtime, netcat (nc) combined with tar is a neat way to recursively copy a directory across a network connection using TCP/IP.

# On the destination “mirror” machine
nc -l -p 2345 | tar xv
# On the source machine
tar c PATH_TO_REPOS > /dev/tcp/DOTTED.IP.OF.MIRROR/2345

Here, 2345 has been chosen as a suitable port for the data transfer.
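The same tar-to-tar pipe can be rehearsed locally without netcat, which is a cheap way to check the archive round-trips intact before involving the network. This throwaway sketch uses fabricated demo paths rather than a real repository:

```shell
# Demo of the tar pipe with throwaway directories; substitute your real
# repository path for SRC. The -C flags keep absolute paths out of the archive.
SRC=/tmp/demo-repos
DST=/tmp/demo-mirror
mkdir -p "$SRC/db" "$DST"
echo "rev data" > "$SRC/db/current"
tar -C "$SRC" -c . | tar -C "$DST" -x
# the mirror now contains db/current, mirroring the source layout
```

Over the network you would replace the right-hand side of the pipe with nc (or ssh) exactly as shown above.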
Using Svnadmin Dump and Load

Perhaps the most obvious way to mirror a Subversion repository is to combine svnadmin dump with svnadmin load.

svnadmin dump PATH_TO_REPOS | svnadmin load PATH_TO_MIRROR

Run on its own, svnadmin dump is designed to create a portable repository dump. The resulting dumpfile can be loaded into a new Subversion repository — even if the new repository is using a different database backend, or even a different revision of Subversion. Svnadmin dump will happily run on a live repository (no need to take the server down).

In short, combining svnadmin dump with svnadmin load is probably more powerful than we need if we just want to mirror our repository to a new location. Svnadmin dump — on its own — is the best way to fully back up a repository, since the dumpfile it creates is portable (as described above). If we replicate a repository by piping svnadmin dump to svnadmin load, we lose the dumpfile in the pipeline and do far more work than we need to.

Actually, it’s the computer which does the work — we just type a command and let it run. As a rough guideline, I have worked on a repository which occupies about 10GB on disk, contains ~50K files, and has maybe a hundred branches. Dumping and loading this repository takes about 4 hours; a recursive copy completes in a few minutes.

One more point: svnadmin dump does not dump your repository configuration files and hook scripts. If your backup strategy is based around these commands, you will need separate arrangements for backing up hook scripts and configuration files.
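Those configuration files and hook scripts are plain files, so an ordinary tar archive covers them. A throwaway sketch (the repository path and file contents here are fabricated for illustration):

```shell
# Archive the pieces svnadmin dump skips: hooks/ and conf/.
# REPOS is a demo path; the fake layout lets the commands run end to end.
REPOS=/tmp/demo-svn-repos
mkdir -p "$REPOS/hooks" "$REPOS/conf"
printf '#!/bin/sh\n' > "$REPOS/hooks/post-commit"
printf 'anon-access = read\n' > "$REPOS/conf/svnserve.conf"
tar -czf /tmp/repos-extras.tar.gz -C "$REPOS" hooks conf
tar -tzf /tmp/repos-extras.tar.gz   # list the archive contents to confirm
```

Pairing an archive like this with a periodic svnadmin dump gives you a complete backup: history from the dumpfile, configuration from the tarball.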
Using Svnadmin Hotcopy

The third option combines the best features of the previous two. Using svnadmin hotcopy doesn’t require any server downtime, completes in minutes, and replicates server configuration and hook scripts.

svnadmin hotcopy PATH_TO_REPOS PATH_TO_MIRROR

The command is disconcertingly silent — no indication of progress, no verbose option. As is usual in UNIX-world, however, no news is good news. I simply watched the size of the mirror directory (with du, for example) to confirm the hotcopy was running and to check on its progress.
Summary

Method      Server Downtime?   Replicates Config?   Speed
File Copy   Yes                Yes                  Quickest
Dump/Load   No                 No                   Slowest
Hotcopy     No                 Yes                  Quick
Checking the mirror

Svnadmin provides another subcommand, svnadmin verify, to check a repository. This basically iterates through all revisions in the repository by internally dumping them and discarding the output — so it takes a while.

svnadmin verify PATH_TO_MIRROR


Software developers don’t feel secure unless their source repository is safely backed up – or at least they shouldn’t – and they are reluctant to suffer repository downtime or excessive maintenance overheads. Live Subversion repositories can be mirrored quickly and safely using a simple command. With a little extra effort, this command can be scheduled to run automatically, every week, say.
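The weekly scheduling mentioned above could be a single crontab entry. The paths below are hypothetical, and the mirror goes into a dated directory because hotcopy expects a fresh target; note the escaped % signs, since cron treats a bare % as a newline:

```
# /etc/cron.d/svn-mirror (sketch; paths are hypothetical)
# 02:00 every Sunday: hotcopy into a dated directory, then verify it
0 2 * * 0 root svnadmin hotcopy /var/svn/repos /var/svn/mirror-$(date +\%F) && svnadmin verify /var/svn/mirror-$(date +\%F)
```

Chaining the verify after the hotcopy means a silent failure shows up in cron’s mail rather than going unnoticed; old dated mirrors can be pruned by a separate job.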

As a next step, by using the Subversion post-commit hook, every check-in to the repository can be instantly and incrementally copied to the repository mirror. I’ll provide details of how to do this in my next post.
Reference Information

For more information see:

* svnadmin dump reference
* svnadmin load reference
* svnadmin hotcopy reference

How to install APC (Alternative php cache)

Definition: The Alternative PHP Cache (APC) is a free and open opcode cache for PHP. Its goal is to provide a free, open, and robust framework for caching and optimizing PHP intermediate code.

I am posting a quick step-by-step guide to installing APC on servers (dedicated or VPS) running cPanel/WHM. This is for those who have a hard time installing APC.

First, log in as root to your server/VPS and make a directory to work in:

#mkdir /home/APC-php

#cd /home/APC-php

Now we will first download APC with the following command (you can check pecl.php.net for the latest version):

#wget http://pecl.php.net/get/APC-3.0.14.tgz

Now you can use gzip and tar separately, or tar -xzvf, to unpack this file:

#tar -xzvf APC-3.0.14.tgz

Now you will have an APC-3.0.14 folder.

#cd APC-3.0.14

Now you have to generate the PHP build configuration files with the following command

#phpize
after this use following three commands

# ./configure --enable-apc --enable-apc-mmap --with-apxs --with-php-config=/usr/bin/php-config

*if you do not know the php-config path, execute the (which php-config) command and it will display the path. On a typical cPanel VPS it could be /usr/bin/php-config or /usr/local/bin/php-config, but you had better check it before executing the above command.


#make

#make test

#make install

NOTE: if you are using suPHP then skip --with-apxs

*one more thing: if the

#make test

command shows 3 tests failed, do not worry; it did for me too, but the final steps still worked.

The (make install) command will print the module path; note it down, as you will have to put it in the php.ini file in the next step.

check your php.ini location by

#php -i | grep php.ini

then open it with your favorite editor. mine was at

#vi /usr/local/lib/php.ini

and go to the last line and paste the following

extension="apc.so"

Now there is a catch in it: if you have other modules installed and their extension directory is different from the one make install showed for APC, you have to move your apc.so to that directory so that all modules are in the same directory. In my case the directory make install reported for APC was different from where my other extension files were, so I moved apc.so over to join them.

You can check that path in the php.ini section

extension_dir = ""
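Optionally, APC’s behaviour can be tuned with a few extra php.ini settings alongside the extension line. The values below are illustrative, not recommendations:

```ini
; optional APC tuning (illustrative values)
apc.enabled = 1
apc.shm_size = 30   ; shared memory segment size in MB for APC 3.0.x
```

Larger shm_size values cache more compiled scripts at the cost of shared memory; start small and grow it if the cache fills up.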

After this, restart your Apache. The exact command may vary between servers; mine worked with

#service httpd restart

CentOS 5.2 ModSecurity Installation


While this guide is CentOS specific, it contains enough detail to be adaptable to most other distributions.

ModSecurity is essentially a firewall for Apache: it checks all traffic against a set of rules which detect and prevent potentially malicious activity. There are three parts to this ModSec installation.

1. ModSecurity
2. mlogc
3. ModSecurity Console

ModSecurity is the ‘firewall’; mlogc is responsible for sending logs to the management console.

The console can be downloaded from the vendor’s site; I used the Windows version for simplicity. Each console installation can support multiple sensors (ModSec installations), so it provides centralised monitoring. The console installation isn’t covered here; there’s nothing to it: download, install, create sensors, done. Just make sure to install a valid license (free licenses which support up to 3 sensors are currently available).

Versions used:

Apache: 2.2.3
ModSecurity: 2.5.7

Install Dependencies:

yum install httpd-devel libxml2 libxml2-devel curl-devel pcre-devel gcc-c++

note: curl-devel is only required for mlogc

Download and Installation

Download modsecurity-apache_2.5.7.tar.gz, or get the latest release from the ModSecurity site.

Stop Apache

service httpd stop

Untar it and install:

tar -xvzf modsecurity-apache_2.5.7.tar.gz

cd modsecurity-apache_2.5.7/apache2/

./configure
make
make mlogc
make install


Configure mlogc:

Copy the binary from mlogc-src/ to /usr/local/bin/

cp mlogc-src/mlogc /usr/local/bin/

Copy the default config to /etc/

cp mlogc-src/mlogc-default.conf /etc/mlogc.conf

Edit the configuration file: /etc/mlogc.conf:

Change the following:

ConsoleURI https://CONSOLE_IP_ADDRESS:8886/rpc/auditLogReceiver

SensorUsername “SENSOR_USERNAME”
SensorPassword “SENSOR_PASSWORD”

The above values need to reflect the Console installation and sensor configuration. Also ensure the port is correct; it should be either 8886 or 8888. Save and exit.
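Filled in, that section of /etc/mlogc.conf might look like the following; the address and credentials are placeholders (192.0.2.10 is a documentation-only IP):

```
ConsoleURI https://192.0.2.10:8886/rpc/auditLogReceiver
SensorUsername "centos-sensor-1"
SensorPassword "changeme"
```

The username and password must match a sensor you created in the console, otherwise mlogc’s uploads will be rejected.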

Configure ModSecurity:

Edit httpd.conf and add the following

# ModSecurity

Include conf/modsecurity/*.conf
LoadFile /usr/lib/libxml2.so
LoadModule unique_id_module modules/mod_unique_id.so
LoadModule security2_module modules/mod_security2.so

Still in the httpd.conf, go down to the main server configuration section and add:

# ModSecurity Configuration

# Turn the filtering engine On or Off
SecFilterEngine On

# Make sure that URL encoding is valid
SecFilterCheckURLEncoding On

# Unicode encoding check
SecFilterCheckUnicodeEncoding Off

# Only allow bytes from this range
SecFilterForceByteRange 0 255

# Only log suspicious requests
SecAuditEngine RelevantOnly

# Debug level set to a minimum
SecFilterDebugLog logs/modsec_debug_log
SecFilterDebugLevel 0

# Should mod_security inspect POST payloads
SecFilterScanPOST On

# By default log and deny suspicious requests
# with HTTP status 500
SecFilterDefaultAction "deny,log,status:500"

# Must use concurrent logging
SecAuditLogType Concurrent

# Send all audit log parts
SecAuditLogParts ABIDEFGHZ

# Use the same /CollectorRoot/LogStorageDir as in mlogc.conf
SecAuditLogStorageDir /var/log/mlogc/data

# Pipe audit log to mlogc with your configuration
SecAuditLog “|/usr/local/bin/mlogc /etc/mlogc.conf”

Save and Exit.

Copy rules to Apache directory

mkdir /etc/httpd/conf/modsecurity

from the rules directory:

cp *.conf /etc/httpd/conf/modsecurity

make necessary changes to modsecurity_crs_10_config.conf (mainly the logging section – use values from httpd.conf)

# Log files structure

SecAuditLogType Concurrent
SecAuditLog “|/usr/local/bin/mlogc /etc/mlogc.conf”
SecAuditLogStorageDir /var/log/mlogc/data

SecAuditLogParts “ABIDEFGHZ”

Create the mlogc logs directory and configure permissions

mkdir /var/log/mlogc
mkdir /var/log/mlogc/data

chown :apache /var/log/mlogc
chown :apache /var/log/mlogc/data

chmod g+w /var/log/mlogc
chmod g+w /var/log/mlogc/data

Restart Apache

service httpd start

Confirm ModSecurity is running:

tail /var/log/httpd/error_log

[Wed Oct 22 21:37:45 2008] [notice] ModSecurity for Apache/2.5.7 ( configured.
[Wed Oct 22 21:37:45 2008] [notice] Digest: generating secret for digest authentication …
[Wed Oct 22 21:37:45 2008] [notice] Digest: done
[Wed Oct 22 21:37:46 2008] [notice] Apache/2.2.3 (CentOS) configured — resuming normal operations

Done! Generate some suspicious traffic (e.g. run an nmap scan against port 80) and check the console for alerts.

Files to check if things don’t work: