That’s what my brain felt like after trying to figure out all of the options for Apache and PHP. To combat my rubber brain, I created this flow-chart to help me keep track of the options, the pros and cons of each, and the path I finally chose.
First off, a list of requirements and goals:
Here’s what I eventually figured out about Apache and PHP:
These sites were helpful for the initial set up of PHP as CGI with mod_fcgi and Apache in chroot (mod_fcgi sends only one request to each PHP process, regardless of whether that process has idle PHP children available to handle more, and the APC opcode cache is not shared across PHP processes):
This site was helpful for setting up PHP as CGI with mod_fastcgi and Apache in chroot (mod_fastcgi sends multiple requests to a PHP process, so the process can pass them to its child processes, and having one PHP process for each site allows the APC opcode cache to be usable.)
These sites helped me learn about php-fpm and how it is not quite ready for what I have in mind:
I ended up going with Apache’s mod_fastcgi for using PHP as a CGI, and NOT using PHP-FPM, while running Apache in threaded mode with apache.worker enabled.
Getting this set up is pretty easy. I already had Apache and PHP installed and running (with PHP as CGI using mod_fcgi), so here are the steps I used to convert it to run mod_fastcgi and apache.worker. I’m running CentOS 6.3.
rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
yum --enablerepo=rpmforge install mod_fastcgi
/etc/httpd/conf/httpd.conf
ServerTokens Prod
KeepAlive On
<IfModule worker.c>
    StartServers         8
    MaxClients         300
    MinSpareThreads     25
    MaxSpareThreads     75
    ThreadsPerChild     25
    MaxRequestsPerChild  0
</IfModule>
LoadModule php5_module modules/libphp5.so
AddType application/x-httpd-php .php
Include conf/virtual_hosts.conf
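Before restarting, it’s worth sanity-checking the worker numbers: MaxClients divided by ThreadsPerChild is the number of httpd.worker child processes Apache will run at full load, and MaxClients should be an even multiple of ThreadsPerChild. A quick shell sketch, with the values copied from the config above:

```shell
# Sanity-check the worker MPM arithmetic from httpd.conf.
# Values mirror the <IfModule worker.c> block; adjust to match yours.
MAXCLIENTS=300
THREADSPERCHILD=25

# Number of child processes Apache needs to serve MaxClients threads.
PROCESSES=$((MAXCLIENTS / THREADSPERCHILD))
REMAINDER=$((MAXCLIENTS % THREADSPERCHILD))

echo "httpd.worker children at full load: $PROCESSES"
if [ "$REMAINDER" -ne 0 ]; then
  echo "warning: MaxClients is not a multiple of ThreadsPerChild" >&2
fi
```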
/etc/httpd/conf/virtual_hosts.conf
Each virtual host needs to have an entry similar to this in the httpd.conf file; I like to create a separate virtual_hosts.conf and include that in the main httpd.conf.
# Name-based virtual hosts
#
# Default
NameVirtualHost *:80

# Begin domain-name.com section
<VirtualHost *:80>
    DocumentRoot /var/domain-name/home/html/
    ServerName domain-name.com
    ServerAlias www.domain-name.com

    # Rewrite domain name to not use the 'www'
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^domain-name\.com$ [NC]
    RewriteRule ^/(.*) http://domain-name.com/$1 [R=301]

    # Specify where the error logs go for each domain
    ErrorLog /var/logs/httpd/current/domain-name.com-error_log
    CustomLog /var/logs/httpd/current/domain-name.com-access_log combined

    <IfModule mod_fastcgi.c>
        SuexecUserGroup domain-name domain-name
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/domain-name/"
        <Directory "/var/domain-name/home/html">
            Options -Indexes FollowSymLinks +ExecCGI
            AddHandler php5-fastcgi .php
            Action php5-fastcgi /cgi-bin/php-fastcgi
            Order allow,deny
            Allow from all
        </Directory>
    </IfModule>
</VirtualHost>
# End domain-name.com section
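With 100+ sites, these VirtualHost blocks are less error-prone to generate than to copy and paste by hand. A minimal sketch of the idea; the helper function is my own, not part of Apache, and the real block above adds rewrites, logging, and the mod_fastcgi section:

```shell
# Generate a bare-bones VirtualHost block for a given domain.
# This is a simplified skeleton of the full config shown above.
vhost() {
  DOMAIN=$1
  cat <<EOF
<VirtualHost *:80>
    DocumentRoot /var/$DOMAIN/home/html/
    ServerName $DOMAIN
    ServerAlias www.$DOMAIN
</VirtualHost>
EOF
}

vhost domain-name.com
```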
Things to note:
SuexecUserGroup should have the user/group for the project.

Add a /var/www/cgi-bin/projectname/php-fastcgi file for each project. This allows PHP to run as FastCGI and use suEXEC. The php-fastcgi file needs to be under suexec’s default directory path, /var/www/cgi-bin/.
#!/bin/bash
# Set PHPRC to the path for the php.ini file. Change this to
# /var/projectname/home/ to let projects have their own php.ini file
PHPRC=/var/domain-name/home/
export PHPRC
export PHP_FCGI_MAX_REQUESTS=5000
export PHP_FCGI_CHILDREN=5
exec /usr/bin/php-cgi
Things to note:

suEXEC errors are logged to /var/log/httpd/suexec.log.

To change PHP settings per project (for example, the APC opcode cache size), add -d flags to the exec line in the /var/www/cgi-bin/projectname/php-fastcgi file:

exec /usr/bin/php-cgi -d apc.shm_size=128M
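With PHP_FCGI_CHILDREN set per site, the process count adds up quickly across many virtual hosts. A back-of-the-envelope RAM estimate; the site count and per-process size below are my own assumed numbers, not measurements from this setup:

```shell
# Estimate total php-cgi processes and RAM across all sites.
SITES=100        # assumed number of virtual hosts
CHILDREN=5       # PHP_FCGI_CHILDREN from the wrapper above
MB_PER_PROC=30   # assumed resident size of one php-cgi process, in MB

# Each site runs one parent php-cgi plus CHILDREN workers.
PROCS=$((SITES * (CHILDREN + 1)))
RAM=$((PROCS * MB_PER_PROC))

echo "php-cgi processes: $PROCS"
echo "approximate RAM: ${RAM} MB"
```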
Comment out everything in the /etc/httpd/conf.d/php.conf file so PHP is not loaded as a module when Apache starts.
Edit the /etc/sysconfig/httpd file to allow Apache to use multi-threaded mode (httpd.worker), which handles basic HTML files much more efficiently (less RAM). Uncomment the line with HTTPD=/usr/sbin/httpd.worker
Check the Apache configuration files to see if there are any errors.
service httpd configtest
If all is good, restart Apache:
service httpd restart
This will stop the running httpd service, and then start it again. Use this command after installing or removing a dynamically loaded module such as PHP.

OR

service httpd reload

This will cause the running httpd service to reload the configuration file. Note that any requests currently being processed will be interrupted, which may cause a client browser to display an error message or render a partial page.

OR

service httpd graceful

This will cause the running httpd service to reload the configuration file. Note that any requests currently being processed will use the old configuration.

Install APC:

pecl install apc
Set up log rotation for Apache
/etc/logrotate.d/httpd.monti
/var/logs/httpd/*log {
    daily
    rotate 365
    compress
    missingok
    notifempty
    copytruncate
    olddir /var/logs/httpd/archives/
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/httpd/httpd.pid 2>/dev/null` 2> /dev/null || true
    endscript
}
Part 1: Setting up the servers
yum install ntp
ntpdate time.nist.gov
service ntpd start
Edit the /etc/ntp.conf file to use the following servers:

server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org
service ntpd restart
chkconfig ntpd on
RedHat Cluster must be set up before the GFS2 File systems can be created and mounted.
yum install openais cman rgmanager lvm2-cluster gfs2-utils ccs
/etc/cluster/cluster.conf
REMEMBER: Always increment the “config_version” parameter in the cluster tag!
<?xml version="1.0"?>
<cluster config_version="24" name="web-production">
  <cman expected_votes="1" two_node="1"/>
  <fence_daemon clean_start="1" post_fail_delay="6" post_join_delay="3"/>
  <totem rrp_mode="none" secauth="off"/>
  <clusternodes>
    <clusternode name="bill" nodeid="1">
      <fence>
        <method name="ipmi">
          <device action="reboot" name="ipmi_bill"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="ted" nodeid="2">
      <fence>
        <method name="ipmi">
          <device action="reboot" name="ipmi_ted"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="billsp" login="root" name="ipmi_bill" passwd="PASSWORD-HERE"/>
    <fencedevice agent="fence_ipmilan" ipaddr="tedsp" login="root" name="ipmi_ted" passwd="PASSWORD-HERE"/>
  </fencedevices>
  <rm log_level="5">
    <resources>
      <clusterfs device="/dev/mapper/StorageTek2530-sites" fstype="gfs2" mountpoint="/sites" name="sites"/>
      <clusterfs device="/dev/mapper/StorageTek2530-databases" fstype="gfs2" mountpoint="/databases" name="databases"/>
      <clusterfs device="/dev/mapper/StorageTek2530-logs" fstype="gfs2" mountpoint="/logs" name="logs"/>
    </resources>
    <failoverdomains>
      <failoverdomain name="bill-only" nofailback="1" ordered="0" restricted="1">
        <failoverdomainnode name="bill"/>
      </failoverdomain>
      <failoverdomain name="ted-only" nofailback="1" ordered="0" restricted="1">
        <failoverdomainnode name="ted"/>
      </failoverdomain>
    </failoverdomains>
  </rm>
</cluster>
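Since forgetting to bump config_version is such an easy mistake, a small helper can do it mechanically. This sed-based sketch is my own convenience script, not part of the cluster tools, and it runs against a throwaway copy of the file here:

```shell
# Bump the config_version attribute in a cluster.conf-style file.
# Demo copy via mktemp so this sketch is self-contained.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<?xml version="1.0"?>
<cluster config_version="24" name="web-production">
</cluster>
EOF

# Read the current version, increment it, and write it back.
CUR=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$CONF")
NEW=$((CUR + 1))
sed -i "s/config_version=\"$CUR\"/config_version=\"$NEW\"/" "$CONF"

RESULT=$(grep -o 'config_version="[0-9]*"' "$CONF")
echo "$RESULT"
rm -f "$CONF"
```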
ccs_config_validate
passwd ricci
service ricci start
chkconfig ricci on
service modclusterd start
chkconfig modclusterd on
ccs -f /etc/cluster/cluster.conf -h ted --setconf
service cman start
chkconfig cman on
Create a partition on the new scsi device /dev/mapper/mpatha using parted. NOTE: This part only needs to be done once on one server
parted /dev/mapper/mpatha
mklabel gpt
mkpart primary 1 -1
set 1 lvm on
quit
parted -l
Edit the /etc/lvm/lvm.conf file and set the value locking_type = 3 to allow for cluster locking.
In order to enable the LVM volumes you are creating in a cluster, the cluster infrastructure must be running and the cluster must be quorate.
service clvmd start
chkconfig clvmd on
chkconfig gfs2 on
Create LVM partitions on the raw drive available from the StorageTek. NOTE: This part only needs to be done once on one server.
pvcreate /dev/mapper/mpatha1
vgcreate -c y StorageTek2530 /dev/mapper/mpatha1
Now create the different partitions for the system: sites, databases, logs, home, root
lvcreate --name sites --size 350GB StorageTek2530
lvcreate --name databases --size 100GB StorageTek2530
lvcreate --name logs --size 50GB StorageTek2530
lvcreate --name root --size 50GB StorageTek2530
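Before carving up the volume group, it’s worth adding the requested sizes together and checking them against the array’s capacity. A quick tally; the 50GB home volume created a few steps later is included:

```shell
# Sum the lvcreate sizes (in GB) to check they fit in the volume group.
TOTAL=0
for SIZE in 350 100 50 50 50; do  # sites databases logs root home
  TOTAL=$((TOTAL + SIZE))
done
echo "allocated: ${TOTAL}GB"
```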
Make a temporary directory /root-b
and copy everything from root’s home directory to there, because it will be erased when we make the GFS2 file system.
Copy /root/.ssh/known_hosts to /etc/ssh/root_known_hosts so the file is different for both servers.
Before doing the home directory, we have to remove it from the local LVM.
umount /home
lvremove bill_local/home
and on ted:

lvremove ted_local/home
Remove the line in /etc/fstab referring to the /home directory on the local LVM, then create the new home volume:

lvcreate --name home --size 50GB StorageTek2530
Create GFS2 files systems on the LVM partitions created on the StorageTek. Make sure they are unmounted, first. NOTE: This part only needs to be done once on one server.
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:sites /dev/mapper/StorageTek2530-sites
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:databases /dev/mapper/StorageTek2530-databases
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:logs /dev/mapper/StorageTek2530-logs
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:root /dev/mapper/StorageTek2530-root
mkfs.gfs2 -p lock_dlm -j 2 -t web-production:home /dev/mapper/StorageTek2530-home
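The five mkfs.gfs2 commands above differ only in the lock-table name and device, so a loop keeps them consistent (-j 2 creates two journals, one per cluster node). This sketch only prints the commands as a dry run; remove the echo to actually format the volumes, which is destructive:

```shell
# Dry-run the mkfs.gfs2 commands for each GFS2 filesystem.
CLUSTER=web-production
CMDS=""
for FS in sites databases logs root home; do
  CMD="mkfs.gfs2 -p lock_dlm -j 2 -t $CLUSTER:$FS /dev/mapper/StorageTek2530-$FS"
  echo "$CMD"
  CMDS="$CMDS$CMD
"
done
```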
Mount the GFS2 partitions
Make the appropriate folders on each node (/home is already there).
mkdir /sites /logs /databases
Make sure the appropriate lines are in /etc/fstab
#GFS2 partitions shared in the cluster
/dev/mapper/StorageTek2530-root      /root      gfs2 defaults,acl 0 0
/dev/mapper/StorageTek2530-home      /home      gfs2 defaults,acl 0 0
/dev/mapper/StorageTek2530-databases /databases gfs2 defaults,acl 0 0
/dev/mapper/StorageTek2530-logs      /logs      gfs2 defaults,acl 0 0
/dev/mapper/StorageTek2530-sites     /sites     gfs2 defaults,acl 0 0
Once the GFS2 partitions are set up and in /etc/fstab, rgmanager can be started. This will mount the GFS2 partitions.
service rgmanager start
chkconfig rgmanager on
To start the cluster software on a node, type the following commands in this order:
service cman start
service clvmd start
service gfs2 start
service rgmanager start
To stop the cluster software on a node, type the following commands in this order:
service ossec-hids stop
service rgmanager stop
service gfs2 stop
umount -at gfs2
service clvmd stop
service cman stop
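The ordering above matters: cman has to come up first and go down last, or the GFS2 mounts hang waiting for the cluster. A small wrapper (my own sketch, not part of the Red Hat cluster tools) encodes the order once and derives the stop order by reversing it; it prints the commands rather than running them:

```shell
# Encode the cluster service start order once; stop order is the reverse.
START_ORDER="cman clvmd gfs2 rgmanager"

cluster_services() {  # usage: cluster_services start|stop
  case "$1" in
    start) ORDER=$START_ORDER ;;
    stop)  ORDER=$(printf '%s\n' $START_ORDER | tac | tr '\n' ' ') ;;
    *)     echo "usage: cluster_services start|stop" >&2; return 1 ;;
  esac
  for SVC in $ORDER; do
    echo "service $SVC $1"   # drop the echo to actually run these
  done
}

cluster_services stop
```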
If a service shows as ‘failed’ when checking on services with clustat, disable and then re-enable it:

clusvcadm -d service-name

clusvcadm -e service-name
Have Shorewall start sooner in the boot process.
Edit /etc/init.d/shorewall and change the line near the top from

# chkconfig: - 28 90
to
# chkconfig: - 18 90
chkconfig shorewall off
chkconfig shorewall on
One of the most frustrating parts of this set up was getting the storage array talking to the servers. I finally got it figured out. I’m using a StorageTek 2530 to connect to two SunFire X2100 M2’s via SAS (Serial Attached SCSI) cables. I put a dual port SAS HBA (Host Bus Adapter) in the X2100 M2’s, but for real redundancy, I should have used two single port HBA’s. The Sun/Oracle documentation is pretty good about how to physically set up the servers and storage array, but it is pretty lacking from there on.
Replace the parts in square brackets below with whatever you want.
yum install ksh bc /lib/ld-linux.so.2 libgcc.i686 libstdc++.i686 libzip.i686 gettext
rpm -Uvh jdk-6u20-linux-i586.rpm
./RunMe.bin -c
Add the /opt/sun/cam/bin folder to the PATH:

setenv PATH ${PATH}:/opt/sun/cam/bin
source .tcshrc
Create the /etc/sysconfig/network-scripts/ifcfg-eth1:1 file and put this in there:
rpm -ivh SMruntime.xx.xx.xx.xx-xxxx.rpm
rpm -ivh SMagent-LINUX-xx.xx.xx.xx-xxxx.rpm
sscs register -d storage-system
sscs modify -T [Array-Name] array ARRAY1
sscs create -a knox pool [Pool-Name]
sscs create -a knox -p [Pool-Name] -n 11 vdisk [Vdisk-Name]
sscs create -a knox -p [Pool-Name] -s max -v [Vdisk-Name] volume [Volume-Name]
sscs create -a knox hostgroup [ApacheHosts]
sscs create -a knox -g [ApacheHosts] host [Host-Name]
and repeat for other hosts.

sscs map -a knox -g ApacheHosts volume Volume-Name
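The sscs commands all hang off the same array, pool, and vdisk names, so filling the bracketed placeholders from shell variables keeps them consistent from one command to the next. The names below are made-up examples, and the commands are echoed rather than executed:

```shell
# Parameterize the sscs provisioning sequence; echo is a dry run.
ARRAY=knox
POOL=WebPool      # hypothetical pool name
VDISK=WebVdisk    # hypothetical vdisk name
VOLUME=WebVolume  # hypothetical volume name

for CMD in \
  "sscs create -a $ARRAY pool $POOL" \
  "sscs create -a $ARRAY -p $POOL -n 11 vdisk $VDISK" \
  "sscs create -a $ARRAY -p $POOL -s max -v $VDISK volume $VOLUME"
do
  echo "$CMD"
done
```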
It took me a while to grasp the meaning for the different terms: pool, volume, volume groups, disks, etc. I drew up a chart with the appropriate commands to create the different aspects.
To utilize both cables connecting the server to the storage array, the OS needs to use multi-pathing. I had lots of troubles trying to set this up after the OS was installed, so I just let it be done by the installer. Here’s what should happen if you find the OS already installed and need to set up multi-paths.
The multipath device will show up as /dev/mapper/mpatha. This will be the device to partition, format, and throw LVM on.

yum install device-mapper-multipath
Run mpathconf --enable to create a default /etc/multipath.conf file, or create one using the following:
chkconfig multipathd on
service multipathd start
First of all, I’ll cover what set up I would like to achieve and why.
I’m using two Sun SunFire X2100 M2 connected to a StorageTek 2530 with 4.5TB of drive space. The servers attach to the storage array via SCSI cables for quick data transfer speeds. The array also has the ability to handle iSCSI connections. This will give me a decent base set up, with room to grow.
I’ll put the two servers in a cluster and make the services available over the cluster. They will share the storage using GFS2. In the future, I’ll add a couple of load balancer/proxy machines to farm out the Web traffic, and add a couple more SunFire X2100 M2’s to take that load. One of the main reasons to set up a new configuration with new servers is to provide a clean environment for the many WordPress and Omeka installations we host. We’ve had to hang on to some legacy services to support some older projects, so this will allow us to keep up to date. It will also allow me to set up Apache and PHP to run as a server user, locked down to its own directory. That way each of the 100+ sites won’t be able to access any other site’s content. I picked CentOS as the OS because it has the cluster and GFS2 options of RedHat, but without the cost.
Set the Mount Point as /boot, the File System Type as ‘ext4’ and the Size (MB) as 500, then click ‘OK’.

One of the most important things to have with servers is some form of remote management. That way you don’t need to trek down to the data center each time the server hangs while testing (and it happens a lot). For Sun systems, that means setting up the ELOM (Embedded Lights Out Manager).
IPMI Config
    Set LAN Config
    Set PEF Config
        PEF Support ........ [Enabled]
        PEF Action Global
            All of them ..... [Enabled]
        Alert Startup Discover ..... [Disabled]
        Startup Delay .............. [Disabled]
        Event Message For PEF ...... [Disabled]
BMC Watch Dog Timer Action ... [Disabled]
External Com Port ............ [BMC]
Remote Access
    Remote Access ................ [Serial]
    Serial Port Number ........... [Com2]
    Serial Port Mode ............. [115200 8,n,1]
    Flow Control ................. [Hardware]
    Post-Boot Support ............ [Always]
    Terminal Type ................ [VT100]
    VT-UTF8 Combo Key ............ [Enabled]
RedHat, in EL 6 (and thereby CentOS), moved to Upstart instead of SysV init, so we create a new serial-ttyS1.conf file instead of editing the /etc/inittab file.
# This service maintains a getty on /dev/ttyS1.
stop on runlevel [016]
respawn
instance $TTY
exec /sbin/mingetty $TTY
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/Logical/root
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
#splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
title CentOS Linux (2.6.32-71.29.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/Local-root \
            rd_LVM_LV=Local/root rd_LVM_LV=Local/swap rd_NO_LUKS rd_NO_MD rd_NO_DM \
            console=tty1 console=ttyS1,115200n8
        initrd /initramfs-2.6.32-71.29.1.el6.x86_64.img
console
vc/1
vc/2
vc/3
vc/4
vc/5
vc/6
vc/7
vc/8
vc/9
vc/10
vc/11
tty1
tty2
tty3
tty4
tty5
tty6
tty7
tty8
tty9
tty10
tty11
ttyS1
Connect to the ELOM by ssh into the IP address.
ssh [email protected]
set /SP/SystemInfo/CtrlInfo PowerCtrl=on
set /SP/SystemInfo/CtrlInfo PowerCtrl=gracefuloff
set /SP/SystemInfo/CtrlInfo PowerCtrl=forceoff
set /SP/SystemInfo/CtrlInfo PowerCtrl=reset
set /SP/SystemInfo/CtrlInfo BootCtrl=BIOSSetup
set /SP/AgentInfo IpAddress=xxx.xxx.xxx.xxx
The default username is root, and the default password is changeme.
set /SP/User/[username] Password=[password]
start /SP/AgentInfo/console
To exit the console, use the Esc-Shift-9 keys.

stop /SP/AgentInfo/console
Next we secure the new servers with some software updates and a firewall.
Add this line to /etc/resolv.conf:

options single-request-reopen

This takes care of slow SSH logins. See here https://stomp.colorado.edu/blog/blog/2011/06/29/on-rhel-6-ssh-dns-firewalls-and-slow-logins/ and here http://www.linuxquestions.org/questions/showthread.php?p=4399340#post4399340 for more info.

yum install openssh-clients tcsh ksh bc rpm-build gcc gcc-c++ redhat-rpm-config acl gcc gnupg make vim-enhanced man wget which mlocate bzip2-devel libxml2-devel screen sudo parted gd-devel pam_passwdqc.x86_64 rsync zip xorg-x11-server-utils gettext
Edit the /etc/sysconfig/selinux file and set SELINUX=disabled.
Edit the /etc/vimrc file:
Change root’s shell to tcsh. Edit the /etc/passwd file to have root use tcsh:

root:x:0:0:root:/root:/bin/tcsh

Create a .tcshrc file in root’s home.
setenv PATH ${PATH}:/opt/sun/cam/bin
# Make command completion (TAB key) cycle through all possible choices
# (The default is to simply display a list of all choices when more than one
# match is available.)
bindkey "^I" complete-word-fwd
Edit /etc/hosts. Add a line with IP and domain name.
# Internal Services
192.168.1.100 http.localdomain httpd.localdomain
192.168.1.101 mysql.localdomain
192.168.1.102 memcached.localdomain
Run updatedb to set up the locate database.

Edit the /etc/pam.d/system-auth file. Change the line

password requisite pam_cracklib.so try_first_pass retry=3

to this

password requisite pam_passwdqc.so min=disabled,disabled,16,12,8
Run a yum update, and then a

yum install firefox xorg-x11-xauth xorg-x11-fonts-Type1
There will be more you’ll need too.
If you get the error process 702: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/var/lib/dbus/machine-id": No such file or directory, then run the following command as root.
dbus-uuidgen > /var/lib/dbus/machine-id
ssh-keygen
cat id_rsa.pub >> ~/.ssh/authorized_keys
Make sure authorized_keys and id_rsa are both set to rw------- (600).
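OpenSSH refuses a private key whose permissions are too open, and sshd with StrictModes can reject a loose authorized_keys, so it’s worth verifying the modes explicitly. A sketch against a scratch directory rather than the real ~/.ssh:

```shell
# Set and verify rw------- (600) on key files, using a temp directory
# so the sketch is self-contained.
DIR=$(mktemp -d)
touch "$DIR/id_rsa" "$DIR/authorized_keys"
chmod 600 "$DIR/id_rsa" "$DIR/authorized_keys"

KEY_PERMS=$(stat -c '%a' "$DIR/id_rsa")
AUTH_PERMS=$(stat -c '%a' "$DIR/authorized_keys")
echo "id_rsa: $KEY_PERMS, authorized_keys: $AUTH_PERMS"
rm -rf "$DIR"
```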
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
on the machine. Edit the /etc/yum.repos.d/epel.repo file and set the first “enabled” line to equal 0. That disables yum from using the EPEL repo by default.

yum --enablerepo=epel install shorewall
Edit the /etc/shorewall/shorewall.conf file. Change STARTUP_ENABLED=NO to STARTUP_ENABLED=Yes.
Edit the /etc/shorewall/zones file:

Edit the /etc/shorewall/interfaces file:

Edit the /etc/shorewall/policy file:

Edit the /etc/shorewall/rules file:

#LAST LINE — ADD YOUR ENTRIES BEFORE THIS ONE — DO NOT REMOVE

Edit the /etc/shorewall/routestopped file:
chkconfig shorewall on
service shorewall start
The next part will be connecting the servers to the storage array.