Screen: One Useful Command for Linux Admins

The screen command offers the ability to detach a long-running process (or program, or shell-script) from a session and then attach it back at a later time.

When the session is detached, the process that was originally started from screen is still running and is still managed by screen. You can then re-attach the session at a later time, and your terminals are still there, the way you left them.

In this article, let us review how to manage virtual terminal sessions using the screen command, with examples.

Screen Command Example 1: Execute a command (or shell-script), and detach the screen

Typically you’ll execute a command or shell-script from the command line as shown below.

$ unix-command-to-be-executed

$ ./unix-shell-script-to-be-executed

Instead, use the screen command as shown below.

$ screen unix-command-to-be-executed

$ screen ./unix-shell-script-to-be-executed

Once you’ve used the screen command, you can detach it from the terminal using any one of the following methods.

Screen Detach Method 1: Detach the screen using CTRL+A d

When the command is executing, press CTRL+A followed by d to detach the screen.

Screen Detach Method 2: Detach the screen using -d option

When the command is running in another terminal, type the following command:

$ screen -d SCREENID

Screen Command Example 2: List all the running screen processes

You can list all the running screen processes using screen -ls command.

For example:

On terminal 1 you did the following:

$ screen ./myscript.sh

From terminal 2 you can view the list of all screen processes. You can also detach it from terminal 2 as shown below.

$ screen -ls
There is a screen on:
	4491.pts-2.FC547	(Attached)
1 Socket in /var/run/screen/S-sathiya.

$ screen -d 4491.pts-2.FC547
[4491.pts-2.FC547 detached.]

Screen Command Example 3: Attach the Screen when required

You can attach the screen at any time by specifying the screen id as shown below. You can get the screen id from the “screen -ls” command output.

$ screen -r 4491.pts-2.FC547
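
A convenience worth noting (not covered in the examples above): you can give a session a memorable name with the -S option, which makes listing and re-attaching easier. The session and script names below are just placeholders.

$ screen -S mysession ./myscript.sh

$ screen -ls

$ screen -r mysession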

Screen Command Usage Scenario 1

When you have access to only one terminal, you can use the screen command to multiplex that single terminal into multiple sessions and execute several commands. You might also find it very useful to combine screen with SSH ControlMaster.
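
For reference, a minimal ~/.ssh/config sketch for SSH ControlMaster might look like the following; the host alias and hostname are placeholders, and the exact settings are a matter of preference.

Host myserver
    HostName server.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m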

Screen Command Usage Scenario 2

When you are working in a team environment, you might walk over to a colleague’s desk to get a few things clarified. At that time, if needed, you can even start a process from their machine using the screen command and detach it when you are done. Later, when you get back to your desk, you can log in and re-attach the screen to your terminal.

How to Install memcached in Centos 6

I was building a website of my own and was in my testing phase when I noticed it was a little bit slow. It might be because I have too many graphics loading and a heavy database when doing searches. I searched Google for ways to speed up websites, found out about memcached, and found an article on how to install it. I tried it and noticed a big difference in my website’s performance.

MEMCACHED DEFINITION

Memcached is a distributed, high-performance, in-memory caching system that is primarily used to speed up sites that make heavy use of databases. It can however be used to store objects of any kind. Nearly every popular CMS has a plugin or module to take advantage of memcached, and many programming languages have a memcached library, including PHP, Perl, Ruby, and Python. Memcached runs in-memory and is thus quite speedy, since it does not need to write to disk.

Here’s how to install it on CentOS 6:

Memcached does have some dependencies that need to be in place. Install libevent using yum:

yum install libevent libevent-devel

To start installing memcached, change your working directory to /usr/local/src and download the latest memcached source:

cd /usr/local/src 
wget http://memcached.googlecode.com/files/memcached-1.4.15.tar.gz

Uncompress the tarball you downloaded and change into the directory that is created:

tar xvzf memcached-1.4.15.tar.gz
cd memcached-1.4.15

Note:

Check memcached.org for a newer version before proceeding with the installation. There might be a newer version available after the publication of this post. Please note that the tarball version we are using is 1.4.15.

Next, configure your Makefile. The simplest way is to run:

./configure

Additional configure flags are available and can improve performance if your server is capable. For 64-bit OSes, you can enable memcached to utilize a larger memory allocation than is possible with 32-bit OSes:

./configure --enable-64bit

If your server has multiple CPUs or uses multi-core CPUs, enable threading:

./configure --enable-threads

If your server supports it, you can use both flags:

./configure --enable-threads --enable-64bit

n.b.: if the configure script does not run, you may have to install compilation tools on your server. That is as simple as:

yum install gcc
yum install make

Once the configure script completes, build and install memcached:

make && make install

Last but not least, start a memcached server:

memcached -d -u nobody -m 512 -p 11211 -l 127.0.0.1

Put another way, the previous command can be laid out like this:

memcached -d -u [user] -m [memory size] -p [port] -l [listening IP]

Let’s go over what each switch does in the above command:

-d
Tells memcached to start up as a background daemon process.
-u
Specifies the user that you want memcached to run as.
-m
Sets the amount of memory (in megabytes) that you want allocated to memcached.
-p
The TCP port on which memcached will listen.
-l
The IP address on which memcached will listen.
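
As a quick sanity check (assuming nc, i.e. netcat, is available on the server), you can speak memcached’s plain-text protocol directly and ask for its stats; a healthy daemon answers with a list of STAT lines:

printf "stats\r\nquit\r\n" | nc 127.0.0.1 11211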

Now memcached is running and your site is ready to make use of it.

Resources:

http://memcached.org/

http://www.liquidweb.com/kb/how-to-install-memcached-on-centos-6/

20 Most used Linux-based system monitoring tools

The following are some basic CLI, TUI, and GUI tools for monitoring Linux systems; they help with in-depth system analysis and debugging of server problems. These tools commonly help resolve issues with CPU, memory, network, and storage.

1: top – Process Activity Command

The top program provides a dynamic real-time view of a running system i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every five seconds.

# top

2: vmstat – System Activity, Hardware and System Information

The command vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.

# vmstat -a

3: w – Find Out Who Is Logged on And What They Are Doing

The w command displays information about the users currently on the machine and their processes.

# w testuser

4: uptime – Tell How Long The System Has Been Running

The uptime command can be used to see how long the server has been running. It shows the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.

# uptime

A load value of 1 per CPU can be considered optimal, and acceptable values vary from system to system. For a single-CPU system a load of 1-3 might be acceptable, while on SMP systems 6-10 might be acceptable.
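
If you only want the raw numbers, the same load averages can also be read straight from the kernel:

# cat /proc/loadavg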

5: ps – Displays The Processes

The ps command reports a snapshot of the current processes. To select all processes use the -A or -e option:

# ps -A

ps is similar to top but, instead of a live view, it provides a one-time snapshot with more detailed information about each process.

Show Long Format Output

# ps -Al
To turn on extra full mode (it will show command line arguments passed to process):
# ps -AlF

To See Threads (LWP and NLWP)

# ps -AlFH

To See Threads After Processes

# ps -AlLm

Print All Processes On The Server

# ps ax
# ps axu

Print A Process Tree

# ps -ejH
# ps axjf
# pstree

Print Security Information

# ps -eo euser,ruser,suser,fuser,f,comm,label
# ps axZ
# ps -eM

See Every Process Running As User testuser

# ps -U testuser -u testuser u

Set Output In a User-Defined Format

# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
# ps -eopid,tt,user,fname,tmout,f,wchan

Display Only The Process IDs of Lighttpd

# ps -C lighttpd -o pid=
OR
# pgrep lighttpd
OR
# pgrep -u testuser php-cgi

Display The Name of PID 55977

# ps -p 55977 -o comm=

Find Out The Top 10 Memory Consuming Processes

# ps -auxf | sort -nr -k 4 | head -10

Find Out The Top 10 CPU Consuming Processes

# ps -auxf | sort -nr -k 3 | head -10

6: free – Memory Usage

The command free displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.

# free
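
free also accepts a few useful flags; for example, -m reports the values in megabytes and -t adds a totals line:

# free -m -t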

7: iostat – Average CPU Load, Disk Activity

The command iostat reports Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions, and network filesystems (NFS).

# iostat

8: sar – Collect and Report System Activity

The sar command is used to collect, report, and save system activity information. To see network counters, enter:

# sar -n DEV | more
To display the network counters from the 24th:
# sar -n DEV -f /var/log/sa/sa24 | more
You can also display real time usage using sar:
# sar 4 5

9: mpstat – Multiprocessor Usage

The mpstat command displays activities for each available processor, processor 0 being the first one. Run mpstat -P ALL to display average CPU utilization per processor:

# mpstat -P ALL

10: pmap – Process Memory Usage

The pmap command reports the memory map of a process. Use this command to find out causes of memory bottlenecks.

# pmap -d PID 

To display process memory information for pid # 47394, enter:

# pmap -d 47394

11: netstat – Network Statistics

The netstat command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
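
No example command is shown above, so as an illustration, a common invocation lists all listening TCP and UDP sockets numerically along with the owning program:

# netstat -tulpn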

12: ss – Network Statistics

The ss command is used to dump socket statistics. It shows information similar to netstat.
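
As an illustration, ss takes options very similar to netstat; the first command below lists listening TCP and UDP sockets with the owning process, and the second prints a summary of socket counts:

# ss -tulpn
# ss -s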

13: iptraf – Real-time Network Statistics

The iptraf command is an interactive, colorful IP LAN monitor. It is an ncurses-based tool that generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others. It can provide the following info in an easy-to-read format:

  • Network traffic statistics by TCP connection
  • IP traffic statistics by network interface
  • Network traffic statistics by protocol
  • Network traffic statistics by TCP/UDP port and by packet size
  • Network traffic statistics by Layer2 address

14: tcpdump – Detailed Network Traffic Analysis

tcpdump is a simple command that dumps traffic on a network. However, you need a good understanding of TCP/IP to utilize this tool. For example, to display traffic on TCP port 80 (HTTP), enter:

 # tcpdump -i eth0 'tcp port 80'

To display all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter:

# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' 

To display all FTP session to 202.54.1.5, enter:

# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or 20)' 

To display all HTTP session to 192.168.1.5:

# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'

To capture the traffic to a file so you can examine it in detail with Wireshark later, enter:

# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80 

15: strace – System Calls

strace traces system calls and signals. This is useful for debugging web server and other server problems: attach it to a process to see what that process is doing.

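As an illustrative sketch (the PID and output path are placeholders), you can attach strace to an already running process, or trace a command from the start while following its child processes and writing the trace to a file:

# strace -p 1234
# strace -f -o /tmp/trace.out ./some-command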

16: /proc file system – Various Kernel Statistics

The /proc file system provides detailed information about various hardware devices and other Linux kernel information. See the Linux kernel /proc documentation for further details. Common /proc examples:

# cat /proc/cpuinfo
# cat /proc/meminfo
# cat /proc/zoneinfo
# cat /proc/mounts

17: Nagios – Server And Network Monitoring

Nagios is a popular open source computer system and network monitoring application. You can easily monitor all your hosts, network equipment and services. It can send alerts when things go wrong and again when they get better. FAN is “Fully Automated Nagios”; its goal is to provide a Nagios installation including most tools provided by the Nagios community. FAN provides a CD-ROM image in the standard ISO format, making it easy to install a Nagios server. In addition, a wide range of tools are included in the distribution to improve the user experience around Nagios.

18: Cacti – Web-based Monitoring Tool

Cacti is a complete network graphing solution designed to harness the power of RRDTool’s data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy-to-use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices. It can provide data about network, CPU, memory, logged in users, Apache, DNS servers and much more. See how to install and configure the Cacti network graphing tool on a Linux box.

19: KDE System Guard – Real-time Systems Reporting and Graphing

KSysguard is a network-enabled task and system monitor application for the KDE desktop. This tool can be run over an ssh session. It provides lots of features, such as a client/server architecture that enables monitoring of local and remote hosts. The graphical front end uses so-called sensors to retrieve the information it displays. A sensor can return simple values or more complex information like tables. For each type of information, one or more displays are provided. Displays are organized in worksheets that can be saved and loaded independently from each other. So, KSysguard is not only a simple task manager but also a very powerful tool to control large server farms.

20: Gnome System Monitor – Real-time Systems Reporting and Graphing

The System Monitor application enables you to display basic system information and monitor system processes, usage of system resources, and file systems. You can also use System Monitor to modify the behavior of your system. Although not as powerful as the KDE System Guard, it provides the basic information which may be useful for new users:

  • Displays various basic information about the computer’s hardware and software:
      • Linux Kernel version
      • GNOME version
  • Hardware:
      • Installed memory
      • Processors and speeds
  • System Status:
      • Currently available disk space
  • Processes
  • Memory and swap space
  • Network usage
  • File Systems:
      • Lists all mounted filesystems along with basic information about each.

More Tools of interest

A few more tools:

  • nmap – scan your server for open ports.
  • lsof – list open files, network connections and much more.
  • ntop web based tool – ntop is the best tool to see network usage in a way similar to what top command does for processes i.e. it is network traffic monitoring software. You can see network status, protocol wise distribution of traffic for UDP, TCP, DNS, HTTP and other protocols.
  • Conky – Another good monitoring tool for the X Window System. It is highly configurable and is able to monitor many system variables including the status of the CPU, memory, swap space, disk storage, temperatures, processes, network interfaces, battery power, system messages, e-mail inboxes etc.
  • GKrellM – It can be used to monitor the status of CPUs, main memory, hard disks, network interfaces, local and remote mailboxes, and many other things.
  • vnstat – vnStat is a console-based network traffic monitor. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
  • htop – htop is an enhanced version of top, the interactive process viewer, which can display the list of processes in a tree form.
  • mtr – mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
  • wireshark – a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, the project was renamed Wireshark in May 2006 due to trademark issues.
  • snort – Snort’s open source network-based intrusion detection system (NIDS) has the ability to perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis, content searching, and content matching. The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, common gateway interface, buffer overflows, server message block probes, and stealth port scans.
  • Centreon – Centreon is an Open Source software package that lets you supervise all the infrastructures and applications comprising your information system.

References:

http://www.cyberciti.biz/tips/top-linux-monitoring-tools.html

http://www.wireshark.org/about.html

Snort (Intrusion Detection Utility) Installation in Centos 6

Definition

Snort is a free and open source network intrusion prevention system (NIPS) and network intrusion detection system (NIDS) created by Martin Roesch in 1998. Snort is now developed by Sourcefire, of which Roesch is the founder and CTO. In 2009, Snort entered InfoWorld’s Open Source Hall of Fame as one of the “greatest [pieces of] open source software of all time”.

Snort’s open source network-based intrusion detection system (NIDS) has the ability to perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis, content searching, and content matching. The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, common gateway interface, buffer overflows, server message block probes, and stealth port scans.

Snort can be configured in three main modes: sniffer, packet logger, and network intrusion detection. In sniffer mode, the program will read network packets and display them on the console. In packet logger mode, the program will log packets to the disk. In intrusion detection mode, the program will monitor network traffic and analyze it against a rule set defined by the user. The program will then perform a specific action based on what has been identified.
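
As an illustration of the three modes (the interface, log directory, and configuration path below are assumptions for the example, matching the paths used later in this guide), the commands are, in order: sniffer mode, packet logger mode, and NIDS mode.

#snort -v
#snort -dev -l /var/log/snort
#snort -c /etc/snort/snort.conf -i eth1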

Before proceeding with the Snort installation you will need to install the required packages. Follow these steps prior to Snort’s installation.

Pre-Installation

Make sure to have the latest version of MySQL, HTTP, Development Tools and Development Libraries.

     Install the necessary packages needed to run snort successfully.

 #yum install mysql-bench mysql-devel php-mysql gcc php-gd gd glib2-devel gcc-c++

      Yum install libcap, libpcap and pcre

#yum install libcap*
#yum install libpcap*
#yum install pcre*

      Install libdnet 1.12

#cd /
#mkdir snort_install
#cd snort_install
#wget http://libdnet.googlecode.com/files/libdnet-1.12.tgz
#tar -zxvf libdnet-1.12.tgz
#cd libdnet-1.12
#./configure
#make && make install

     Install daq version 2.0.0

#cd /snort_install
#wget http://www.snort.org/downloads/2103
#tar -zxvf daq-2.0.0.tar.gz
#cd daq-2.0.0
#./configure
#make && make install

     Install snort version 2.9.4

#cd /snort_install
#wget http://www.snort.org/downloads/2112
#tar -zxvf snort-2.9.4.tar.gz
#cd snort-2.9.4
#./configure
#make && make install

Post Installation Instruction

      Prepare for rules installation

# groupadd snort
# useradd -g snort snort -s /sbin/nologin
# mkdir /etc/snort
# mkdir /etc/snort/rules
# mkdir /etc/snort/so_rules
# mkdir /etc/snort/preproc_rules
# mkdir /var/log/snort
# chown snort:snort /var/log/snort
# mkdir /usr/local/lib/snort_dynamicrules
# cd /snort_install/snort-2.9.4/etc/
# cp * /etc/snort/

      Register on the Snort official web site and download the rules to the /snort_install directory

#cd /snort_install
#tar -zxvf snortrules-snapshot-2940.tar.gz
#cd rules/
#cp * /etc/snort/rules
#cp ../so_rules/precompiled/Centos-5-4/i386/2.9.4.0/* /etc/snort/so_rules
#cp ../preproc_rules/* /etc/snort/preproc_rules/

     Edit /etc/snort/snort.conf file

1. Change “var RULE_PATH ../rules” to “var RULE_PATH /etc/snort/rules”,
change “var SO_RULE_PATH ../so_rules” to “var SO_RULE_PATH /etc/snort/so_rules”,
change “var PREPROC_RULE_PATH ../preproc_rules” to “var PREPROC_RULE_PATH /etc/snort/preproc_rules”.
2. Comment out the whole “Reputation preprocessor” section, because we do not have a whitelist file.
3. Find the “Configure output plugins” section and add the line “output unified2: filename snort.log, limit 128”.

    Install Barnyard 2

#cd /snort_install
#wget http://www.securixlive.com/download/barnyard2/barnyard2-1.9.tar.gz
#tar -zxvf barnyard2-1.9.tar.gz 
#cd barnyard2-1.9
#./configure 
#./configure --with-mysql-libraries=/usr/lib/mysql/
#make 
#make install
#cp etc/barnyard2.conf /etc/snort/
#mkdir /var/log/barnyard2
#chmod 666 /var/log/barnyard2
#touch /var/log/snort/barnyard2.waldo

       Setup MySQL Database

#echo "SET PASSWORD FOR root@localhost=PASSWORD('yourpassword');"| mysql -u root -p
#echo "create database snort;"| mysql -u root -p
#cd /snort_install/barnyard2-1.9
#mysql -u root -p -D snort < schemas/create_mysql
#echo "grant create, insert on root.* to snort@localhost;" | mysql -u root -p
#echo "SET PASSWORD FOR snort@localhost=PASSWORD('yourpassword');" | mysql -u root -p
#echo "grant create,insert,select,delete,update on snort.* to snort@localhost" | mysql -u root -p

     Edit the file /etc/snort/barnyard2.conf

change “config hostname: thor” to “config hostname: localhost”

change “config interface: eth0” to “config interface: eth1”

add the line at the end of the file: “output database: log, mysql, user=snort password=yourpassword dbname=snort host=localhost”
Note: the device eth1 may vary depending on your system setup. The example given above is a two-network-device (eth0, eth1) setup where Snort is applied to the second network device (eth1).
 

      Test

#/usr/local/bin/snort -u snort -g snort -c /etc/snort/snort.conf -i eth1

    If it reports “Initialization Complete”, the installation is working.

      Or execute Snort from the command line:

#snort -c /etc/snort/snort.conf -l /var/log/snort/

If the test and the manual run work fine, proceed with the next step.

      Make Snort and Barnyard2 boot up automatically

Edit the file /etc/rc.local, add the below lines

/sbin/ifconfig eth1 up
/usr/local/bin/snort -D -u snort -g snort -c /etc/snort/snort.conf -i eth1

/usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard2.waldo -D

Restart to test changes.

#init 6

References:

http://www.snort.org/

http://en.wikipedia.org/wiki/Snort_%28software%29

http://www.securixlive.com/

http://kezhong.wordpress.com/2012/04/07/install-snort-2-9-2-2-on-centos5-8x86_64/

SVN (Subversion) Commands

What is SVN (Subversion)?

Apache Subversion (often abbreviated SVN, after the command name svn) is a software versioning and revision control system distributed under an open source license. Developers use Subversion to maintain current and historical versions of files such as source code, web pages, and documentation. Its goal is to be a mostly compatible successor to the widely used Concurrent Versions System (CVS).

The open source community has used Subversion widely: for example in projects such as Apache Software Foundation, Free Pascal, FreeBSD, GCC, Mono and SourceForge. Google Code also provides Subversion hosting for their open source projects. BountySource systems use it exclusively. CodePlex offers access to Subversion as well as to other types of clients.

The corporate world has also started to adopt Subversion. A 2007 report by Forrester Research recognized Subversion as the sole leader in the Standalone Software Configuration Management (SCM) category and as a strong performer in the Software Configuration and Change Management (SCCM) category.[1]

Subversion was created by CollabNet Inc. in 2000 and is now a top-level Apache project being built and used by a global community of contributors.

Some of the frequently used SVN commands are listed below.

* Create Svn Repository

Syntax: svnadmin create <svn data directory>/<repository_name>

E.g:

  $ svnadmin create /var/www/svn/repotest

-Command to create a repository using the default configuration.

$ chown apache.apache /var/www/svn/repotest 

- Command to use after creating the repository to assign ownership of the repository root directory to apache.

Or

Syntax: svnadmin create --fs-type fsfs <svn data directory>/<repository_name>

  $ svnadmin create --fs-type fsfs /var/www/svn/repotest

-Command to create a repository specifying the repository type.

$ chown apache.apache /var/www/svn/repotest 

- Command to use after creating the repository to assign ownership of the repository root directory to apache.

  * Initially import repo data into the repository

$ svn import -m "Initial import." /var/www/svn/repotest/ file:///var/www/svn/myrepo

- where /var/www/svn/repotest/ is the local directory being imported and file:///var/www/svn/myrepo is the URL of the target repository

  * List and view contents of a repo in tree view

Syntax: svnlook tree <repository absolute path>

  $ svnlook tree /var/www/svn/repotest/ 

  * View SVN Information

$ svn info

  * Show the status of files in the working copy

$ svn st

  * Add a file or folder to svn

$ svn add <file>

  * Create a Directory for svn

$ svn mkdir <directory>

 * View the svn log

$ svn log

  * To revert a file to its original state

$  svn revert <path>

  - To revert a whole directory of files, use the --depth=infinity option:

$ svn revert --depth=infinity <path>

 

  *Delete a file or directory from svn

$ svn delete <directory>

  * Commit all changes made to a file (Note: you must be inside the working copy path)

$ svn ci -m "adding directories"

- Where ci is the commit command, -m is the parameter for the commit message, and “adding directories” is the note included with the commit.
or

If you want to use a file that’s under version control for your commit message with --file, you need to pass the --force-log switch:

$ svn commit --file file_under_vc.txt foo.c
svn: The log message file is under version control
svn: Log message file is a versioned file; use '--force-log' to override

$ svn commit --force-log --file file_under_vc.txt foo.c
Sending        foo.c
Transmitting file data .
Committed revision 6.

  * Checkout the contents of a repository (get the files and details)

Syntax: svn co <repository_site> <path>

$ svn co http://svnrepo.com/svn/repos/  /var/www/svn/repotest/ 
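
As a rough end-to-end sketch (the repository URL reuses the example above, and the target directory and commit message are placeholders), a typical edit cycle after checkout looks like this:

$ svn co http://svnrepo.com/svn/repos/ myproject
$ cd myproject
$ svn st
$ svn diff
$ svn ci -m "describe the change here"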

For more SVN commands, refer to the official Subversion documentation.

Happy versioning.

Installing Centreon on Centos

What is Centreon?

Centreon is an Open Source software package that lets you supervise all the infrastructures and applications comprising your information system.
Please check the Centreon website for more information.

Dependencies
_ nagios
_ nagios-plugins
_ ndoutils
_ nrpe
_ make
_ sudo
_ apache (httpd server)
_ mysql (database server)
_ php
_ gd
_ gd-devel
_ perl
_ gcc
_ rrdtool
_ net-snmp

Step 1: Install Dependencies

root@linux: ~ # yum -y install make sudo gd gd-devel httpd* mysql* php* perl* gcc rrdtool* net-snmp*

If you can’t install rrdtool via yum, you can follow these steps:

1) add rpmforge repository

root@linux: ~ # yum -y install yum-priorities

2) edit file priorities.conf

root@linux: ~ # vim /etc/yum/pluginconf.d/priorities.conf

[main]
enable=1

3) download and install rpm forge

root@linux: ~ # wget http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
root@linux: ~ # rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
root@linux: ~ # rpm -K rpmforge-release-0.3.6-1.el5.rf.i386.rpm
root@linux: ~ # rpm -i rpmforge-release-0.3.6-1.el5.rf.i386.rpm

4) install rrdtool using yum

root@linux: ~ # yum -y install rrdtool rrdtool-devel perl-rrdtool

To install nagios, nagios-plugins, ndoutils, and nrpe, refer to their respective installation guides.

Step 2: Compile Centreon

1)download package

root@linux: ~ # wget http://download.centreon.com/centreon/centreon-2.0.tar.gz

2) extract package

root@linux: ~ # tar -xzvf centreon-2.0.tar.gz

3) compile centreon

root@linux: ~ # cd centreon-2.0
root@linux: centreon-2.0 # ./install.sh -i

Answer all of the script’s questions. You can just press return to accept the default, or press y for yes and n for no.

The location of RRDs.pm will be:

/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi/RRDs.pm

The location of PEAR.php will be:

/usr/share/pear/PEAR.php

When the compilation is done, you can browse to http://yourdomain/centreon and continue with the web interface installation.

4) web interface installation
Follow the on-screen instructions and click the Next button to continue the installation; make sure every check shows an OK result.
If there is any problem, the installer tells you what needs to be fixed. If you get permission errors at this step, you can use chown and chmod as a solution:

root@linux: ~ # chown -R nagios.apache /usr/local/nagios
root@linux: ~ # chmod -R 775 /usr/local/nagios


5) Test. Open your browser and go to http://yourdomain/centreon

Links:
+ http://www.google.com
+ http://tech-db.com/node/26
+ http://nagioswiki.com/wiki/index.php/Installing_Centreon_on_Centos_5

HOW TO CREATE A SELF-SIGNED CERTIFICATE

Step 1: Generate the server key needed to create the .ca and .crt files

[root@home test]# openssl genrsa -des3 -out server.key 4096
Generating RSA private key, 4096 bit long modulus
..............................................++
......................................++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
[root@home test]# ls
server.key

Step 2: Remove the passphrase from the key file. This is done so you do not have to type in the passphrase every time Apache is started. This is especially useful in the event of a server reboot when there is no one to manually type the passphrase. (Note: Once the passphrase is removed from the key, make sure that the file is readable only by root.)

[root@home test]# mv server.key server.key.secure
`server.key’ -> `server.key.secure’
[root@home test]# openssl rsa -in server.key.secure -out server.key
Enter pass phrase for server.key.secure:
writing RSA key
[root@home test]# ls
server.key server.key.secure

Step 3: Generate the CA file. The Certificate Authority file identifies the body that signed the certificate. The certificate validity in this example is 365 days, after which you will have to generate new CA and CRT files.

[root@home test]# openssl req -new -x509 -days 365 -key server.key -out server.ca
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [GB]: PH
State or Province Name (full name) [Berkshire]:Manila
Locality Name (eg, city) [Newbury]:Manila
Organization Name (eg, company) [My Company Ltd]:ZXY Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server’s hostname) []:myhomies.com.ph
Email Address []:admin@myhomies.com.ph
[root@home test]# ls
server.ca server.key server.key.secure

Step 4: Generate the CSR file. The Certificate Signing Request is the file that contains the information of the certificate itself. Note that in the Common Name field, you will have to use the fully qualified domain name (FQDN) of the actual site where the certificate is going to be used. For example, if your secure site is https://mywork.here.com, put mywork.here.com in the Common Name field. If you don’t have an FQDN, use the server’s IP address instead. If the site URL and Common Name are different, users will see a pop-up warning whenever they visit your site.

[root@home test]# openssl req -new -key server.key -out server.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [GB]: PH
State or Province Name (full name) [Berkshire]:Manila
Locality Name (eg, city) [Newbury]:Manila
Organization Name (eg, company) [My Company Ltd]:ZXY Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server’s hostname) []:myhomies.com.ph
Email Address []:admin@myhomies.com.ph

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@home test]# ls
server.ca server.csr server.key server.key.secure

Step 5: Generate the CRT file.

[root@home test]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=PH/ST=Manila/L=Manila/O=ZXY Corp/OU=IT/CN=myhomies.com.ph/emailAddress=admin@myhomies.com.ph
Getting Private key
[root@home test]# ls
server.ca server.crt server.csr server.key server.key.secure

There you have it. You now have the files that you need to create a secure HTTP site with a self-signed certificate. All you have to do now is install these certificates on your Apache server. Your server needs to have mod_ssl enabled to use the secure HTTP port (443).

Step 6: Install the certificates. Just copy the files to where you want your SSL certificates to be, like /etc/httpd/conf.d/ssl.crt. Then set up your ssl.conf file to point the SSL directives to the location of your certificate files, as sketched below.
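
As a minimal sketch of that last step (the certificate and key paths are assumptions for this example, and the ServerName reuses the Common Name entered earlier), the relevant mod_ssl directives look roughly like this:

<VirtualHost *:443>
    ServerName myhomies.com.ph
    SSLEngine on
    SSLCertificateFile /etc/httpd/conf.d/ssl.crt/server.crt
    SSLCertificateKeyFile /etc/httpd/conf.d/ssl.key/server.key
</VirtualHost>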