Using php-fpm and mod_proxy_fcgi to optimize and secure LAMP servers

So up until now I’ve been using mpm_itk or mpm_peruser – both with advantages and disadvantages – in an attempt to secure web content. Both of these are essentially forking MPMs, both kill KeepAlive to a greater or lesser extent, and, almost as importantly, neither is supported by mainline Apache (so you’re on your own). Personally I prefer mpm_worker (or more recently mpm_event) since it’s threaded, and I find that it uses fewer resources (mostly in terms of memory). A lot of movement has also been happening around FastCGI, and the advantages are very good, both in terms of security and reliability (in theory).

Older solutions that I tried

mod_fcgid

I’ll be honest – I can’t get this thing tweaked to my liking. Not even anywhere close. It seems to spawn processes on a per-php-script basis, and process counts just racked up too quickly and brought a number of client servers to a grinding halt. It’s difficult to control concurrency, and in general it had me pulling my hair out. The interaction between php-fcgi and mod_fcgid is unclear, and the wrapper script requirements are just outright insane.

mod_fastcgi(_handler)

I once tried to get these up and running, but under pressure had to revert to mod_fcgid – it looked better, but you still had to spawn php-fcgi processes, and the wrapper script issue remained.

I must make note that both of these solutions are passable.

mpm_{itk,peruser} with mod_php

The only real issue here was massive memory consumption. Ultimate Linux Solutions has clients with machines whose virtual host counts exceed 500, each vhost running as its own user/group combination. While this may seem acceptable, once you start forking a few thousand processes and counting up the memory it becomes impossible to scale.

php-fpm

PHP-FPM allows us to start a single master process that implements the FastCGI mechanism and that in turn spawns multiple PHP worker processes, as configured, to deal with the various virtual hosting requirements. There is an extremely thorough explanation of this on the Apache wiki. What I’d like to step into is more the security side of the entire configuration. I will, however, give a very quick rundown of configuring php-fpm for a single virtual host, and then the vhost config in Apache that makes it connect to the correct FastCGI instance. Then I’d like to speculate a little on the pros and cons of the three different pm models and their trade-offs.

The configuration manual at the above php-fpm link is rather thorough.

Configuring php-fpm is very simple; a very basic config file (with inline comments) looks like this (I use this for my development machine):

[global]
error_log = /var/log/php-fpm.log
process.max = 100

[www]
; I'd prefer unix domain sockets, but apache requires a patch.
;listen = /var/run/php-fpm/www
;listen.owner = apache
;listen.group = apache
;listen.mode = 0660
listen = 127.0.0.1:9000

; This is a hard kill switch on php execution.  It ignores the
; max_execution_time that can be set/changed with php_ini.  Basically
; it avoids timeout issues between apache and php-fpm.
request_terminate_timeout=25

; More on this later - for me on my desktop for testing this works well.
user = nobody
group = nobody

; Can be used to control chroot()s.  Useful for security in general,
; too restrictive for my use-case.
;chroot = /var/www/htdocs
;chdir = /

;pm = dynamic
;pm.max_children = 50
;pm.start_servers = 1
;pm.min_spare_servers = 1
;pm.max_spare_servers = 5
;pm.max_requests = 500

; This is basically pm = dynamic, but spawns processes on-demand.
pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 30

There is a feature I’d love to have: the ability to create a template and then inherit from it, just to avoid duplicating content. Something like the following would be nice to have (I have alternate mechanisms for generating templates, so this is seriously not a major concern – the files are small enough anyway):

[vhost]
istemplate = 1
;most of the settings from above here

[www]
template = vhost
user = 
group =
listen =
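In the absence of such an inheritance feature, a small shell script can approximate it by substituting placeholders into a shared template. This is only a sketch – the template file, placeholder names, and output directory are all my own invention, not anything php-fpm provides:

```shell
#!/bin/sh
# Sketch: generate per-vhost php-fpm pool files from a shared template.
# Template path, placeholder names and output dir are illustrative only.
TEMPLATE=pool.template
OUTDIR=pools
mkdir -p "$OUTDIR"

# The template holds the shared settings with @PLACEHOLDER@ markers.
cat > "$TEMPLATE" <<'EOF'
[@POOL@]
user = @USER@
group = @GROUP@
listen = 127.0.0.1:@PORT@
pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 30
request_terminate_timeout = 25
EOF

# gen_pool <name> <user> <group> <port>
gen_pool() {
    sed -e "s/@POOL@/$1/" -e "s/@USER@/$2/" \
        -e "s/@GROUP@/$3/" -e "s/@PORT@/$4/" \
        "$TEMPLATE" > "$OUTDIR/$1.conf"
}

gen_pool www nobody uls 9000
grep '^user' "$OUTDIR/www.conf"
```

The generated files can then be pulled in with php-fpm’s include directive.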

The Apache config turns out to be very simple, but there are a few caveats that caught me:

ProxyPassMatch ^.*[.]php(/.*)?$ fcgi://127.0.0.1:9000/var/www/htdocs

This varies slightly from the one in the Apache wiki (which is also perfectly workable). It seems that in the absence of a $ in the destination URI Apache automatically appends $0, so the above works. I did run into a snag where I needed to flag some rewrite rules with [PT] in order to have ProxyPassMatch pick up on them, specifically:

RewriteRule /static(/.*)?$ /static.php$1

Apparently this has something to do with the way modules stack: Apache would end up looking for a file called /var/www/htdocs/static.php/foo upon receipt of /static/foo, skipping the match for passing to the proxy. It’s an ordering issue, so with [PT] on the rule it works.
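Putting the pieces above together, a minimal vhost sketch could look like the following. The ServerName and paths are placeholders; adapt to taste:

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/htdocs

    # Hand anything ending in .php (plus optional PATH_INFO) to the
    # php-fpm pool listening on 127.0.0.1:9000.
    ProxyPassMatch ^.*[.]php(/.*)?$ fcgi://127.0.0.1:9000/var/www/htdocs

    RewriteEngine On
    # [PT] so the rewritten URL is passed back through the handler
    # mapping and gets picked up by ProxyPassMatch.
    RewriteRule /static(/.*)?$ /static.php$1 [PT]
</VirtualHost>
```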

Performance

I won’t claim that I’ve done extensive testing. I basically switched between mod_php and mod_proxy_fcgi and did a few comparisons, so here are the results.

Measurement            | Apache+mod_php5                      | Apache+php-fpm
Apache startup memory  | RES: 9.1MB, SHR: 1.0MB, VIRT: 424MB  | RES: 5.0MB, SHR: 1.0MB, VIRT: 368MB
Apache memory          | RES: 48MB, SHR: 4.4MB, VIRT: 1961MB  | RES: 5.6MB, SHR: 1.6MB, VIRT: 1904MB
Page load times        | ~20ms                                | ~15ms

The second set of memory metrics was measured (using top) after reloading / on the particular machine 10 times in succession.

The page load times are an eyeball average of the initial requests to / as measured by Firebug. So none of these values are overly scientific; however, pretty much all the load times for mod_proxy_fcgi were faster than for mod_php (with values as low as 11ms, whereas the lowest I saw for mod_php was 16ms; the highest were 18ms and 42ms respectively). Again, on the code in question this is so fast that no real conclusions can be drawn other than that php-fpm is at least as good as (if not slightly better than) mod_php.

Firstly – it cannot be stated clearly enough – there are NUMEROUS factors that can (and will) influence these values: the exact MPM in use (I’ve loaded mpm_event in both cases, so the only functional difference is the config required to load mod_php5 vs mod_proxy_fcgi), the number of virtual hosts, whether you’re using sendfile or not, keepalive – pretty much every config setting in Apache has some kind of influence on CPU, RAM and response times.

You’ll note that I haven’t measured the php-fpm memory usage above – the processes died before I could take their values, so I did a few more requests and provoked a few processes to be spawned (5 of them); on average their resident size was 7MB, with a potentially shared size of 3.5MB, and a 147MB virtual image size. No matter how I look at this, php-fpm keeps turning out to be the better solution. Since each php-fpm process serves a maximum of 500 requests before being cycled anyway, any RAM buildup is relatively quickly killed off and the RAM reclaimed for re-use. The Apache resident size remains low. There just seems to be no real comparison. And based on previous experience with mod_fcgid compared to mod_php, I was (and on a few servers still am) pulling my hair out to get mod_fcgid to work reliably and without issues.

Conclusion: Need performance? Use php-fpm. Need to scale? Use php-fpm. Need to use php? Use php-fpm.

Security considerations

In a shared hosting environment users have to be protected from themselves. This limits our options and enforces the use of (relatively) expensive mechanisms such as suEXEC, which in and of itself has a few design features I find disturbing (like the requirement that the target user be the owner of the wrapper script – this is dangerous for many reasons, the biggest being that an exploit ends up with the ability to rewrite the wrapper script). The requirement that this wrapper live under DocumentRoot doesn’t make sense to me either – fortunately Debian provides suexec-custom, which lifts this limit.

Now, to protect users from each other there are basically a few requirements:

  • They MUST not be able to read files from each other’s vhosts.
  • They MUST not be able to write to each other’s files.

Basically, if I can read other vhosts’ .php files I can scavenge them for things like passwords, or just make a clone and be done with it. So ideally we want a separate web server for each virtual host, running as some arbitrary user, with a group shared only between that instance of the web server and the user. Let’s say my ftp user is uls with a group of uls (as the only group for uls). Then ideally I want to run any Apache instance serving files, or any process running code on behalf of uls, as … no, not uls (explanation below) … apache:uls, or nobody:uls. I prefer apache:uls for various reasons.

Why not as the user owning the code? The mutual write and execute principle: if I own a file I can chmod it so that I can write to it and replace it with my own custom version. This does NOT fix everything – there are good reasons for Apache to be able to write to the vhost, and such a file could well end up with a .php extension, be requested via Apache, and be executed anyway. If, however, my active group is the group of the file, and that group only has read access, things look a lot better from a security perspective.

So with the process serving the files running as apache:uls, and the files being owned by uls:uls (/home/uls/vhost_uls.co.za/htdocs with /home/uls downward being uls:uls) then as the uls user I can set the following permissions:

/home/uls - rwx--x--- - group uls can chdir() in.
/home/uls/vhost_uls.co.za - rwx--x--- - same.
/home/uls/vhost_uls.co.za/htdocs - rwxr-x--- - and can now also generate directory listings (with g-r, serving should still work, but directory listings will fail).
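The layout above can be reproduced with plain chmod. Here is a sketch in a scratch directory (shortened paths; the chown to uls:uls would be done as root and is omitted so the sketch runs unprivileged):

```shell
# Recreate the permission layout in a local scratch tree.
mkdir -p home/uls/vhost/htdocs
chmod 710 home/uls               # rwx--x--- : group uls can chdir() in
chmod 710 home/uls/vhost         # rwx--x--- : same
chmod 750 home/uls/vhost/htdocs  # rwxr-x--- : group can also list

# Show the resulting octal modes.
stat -c '%a %n' home/uls home/uls/vhost home/uls/vhost/htdocs
```

Note that "other" gets no bits at all, which is exactly what keeps foreign vhost processes out.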

As can be quite plainly seen, when another vhost is served, e.g. as apache:attacker, then the apache process (and mod_php by implication) will not even be able to chdir() into the uls vhosts! This is a VERY good thing.

In terms of PHP code being able to run, it will now execute as apache:uls, and thus any newly created files will be owned by apache:uls, by default with permissions rw-r--r--. Newly created directories will be rwxr-xr-x. The important thing to note, however, is that we no longer need to chmod 777 /target/write/location as per the previous “assumption” by many clients. A simple g+w permission change (i.e., giving the uls group write access) is sufficient, so for those folders where apache (php) must be able to write:

chmod g+w /path/to/location

This is particularly awesome, and the only downside is that you can’t differentiate between PHP code accessing files and Apache itself accessing them. This is acceptable in my personal opinion.

The overheads of going to these extremes with mpm_itk and mpm_peruser are, however, unacceptable when attempting to scale the installations. So middle ground needs to be found. And that middle ground is FastCGI (php-fpm specifically, in our case).

Specifically, we still run Apache as apache:apache, which means we need to either use ACLs to restrict access at the apache group level to read-only (which is beyond the scope of the majority of people configuring web applications) or give eXecute access on directories back to the “other” bits. So with php-fpm there are three “parties” involved:

  1. uls user – uploading code as uls:uls
  2. php-fpm – executing code as nobody:uls (another user could work here as well; I’ll probably end up creating a user such as php-fpm since I really do NOT want the nobody user to own any files – ever. Another alternative is apache:uls, which may also be acceptable in most cases).
  3. apache – running as apache:apache

A few things to consider:

  • For apache to serve static content it requires eXecute on the entire path leading up to the file, and Read access on the file itself. So chmod o+x on the folders leading up to the static content, and chmod o+r on the content itself. Depending on whether or not you want directory listings to function you can chmod o-r (or o+r) on the directories leading up to the content.
  • For the php-fpm process to be able to execute the code it needs read access to the file (g+r) and x on the path to the file (g+x).
  • For the php-fpm process to be able to read from files it needs read access (g+r should do – in this way if apache runs as an entirely different user from php you can separate which files apache can read and which php can read – that’s pretty nifty).
  • Files/folders created by PHP will be created as nobody:uls and not uls:uls – so if the uls user needs to be able to mess with those files you should probably umask(02) or umask(07) before creating them (in your code). umask(02) will leave read access for “other”; umask(07) will revoke all access from “other”.
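The umask arithmetic above is easy to check from a shell (PHP’s umask() behaves the same way): the default file creation mode of 0666 is ANDed with the complement of the umask.

```shell
# umask 002: new files get 0666 & ~0002 = 0664 (other keeps read)
umask 002
touch with-other-read

# umask 007: new files get 0666 & ~0007 = 0660 (other loses everything)
umask 007
touch without-other

stat -c '%a %n' with-other-read without-other
```

With group ownership uls on the directory tree, 0660 is usually what you want for files the uls user must also be able to edit.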

The only downside of this that I can currently see is that by giving o+x, *all* processes running on the system gain eXecute access to the folders served statically by Apache. In my opinion this is minor, as you can most likely obtain that content by requesting it via http anyway. This may potentially be “fixed” by mod_privileges (currently marked experimental, and only available on Solaris).

So even with mpm_event I get the level of inter-vhost security that I previously only managed with mpm_peruser or mpm_itk. This is really good news.

chroot

You will (hopefully) have noticed the chroot option above in php-fpm.conf. This function is really cool in that it effectively jails the PHP interpreter into the htdocs (or vhost) directory. Some code (and a lot of my systems in particular) requires access outside of this, and so won’t work. Also, if you’re running MySQL on the same system you should probably clone the MySQL socket file into the chroot too. My recommendation would be to explicitly set chdir=/ whenever you set chroot, since in order to actually enter the jail a chdir(“/”) is required – chroot() doesn’t change the effective working directory.

If your code does not need to access anything outside of your home dir or the htdocs folder, then do use the chroot feature (even better – there are GOOD reasons not to store everything under htdocs, however). Especially if your database server is across the network anyway (the case in most scalable solutions).
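A minimal chrooted pool sketch, pulling the two directives together (paths are illustrative, and making a local MySQL socket visible inside the jail is left to a bind mount or similar):

```ini
[www]
user = nobody
group = uls
listen = 127.0.0.1:9000

; Jail the interpreter into the vhost directory.
chroot = /home/uls/vhost_uls.co.za/htdocs
; chroot() does not change the working directory, so enter the jail
; explicitly; all paths in PHP code are now relative to the chroot.
chdir = /

pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 30
```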

Thoughts on php-fpm performance

Combining php-fpm with a bytecode cache (APC or XCache) should leave mod_php even further in the dust. I have yet to measure the effect of this; as it stands I’ll probably let it stand over for the moment. The majority of my environments are just not quite this demanding (there are one or two exceptions, so I’ll definitely start looking at this soon).

There are three process-management implementations currently available; they are explained on the PHP page. I just have a few random thoughts about them, and hopefully someone may find this helpful when thinking about their own php-fpm implementations.

Ondemand will most likely incur a fork() on a significant portion of requests. That’s a downer; however, in a shared environment where access to sites is typically bursty, it may be worthwhile to take this hit for the sake of reclaiming memory. Also, since the processes linger for a few seconds (I reckon 30-60 seconds should be good values) this impact should be minimized, as the system should scale somewhat. It’ll probably also adjust quicker to sporadic spikes than dynamic, depending on whether or not fork()s are rate limited (the original config options I found indicated that there was a rate limit, but this option is no longer in the production version). If you’re getting a constant stream of requests then idle workers should pick up requests instead of new ones being spawned.

Dynamic will probably give slightly better response times on average, provided load is relatively constant. We have a number of sites that get one or two visits a day; to have php-fpm processes (even one) just lingering around on the server for them is probably pointless. Also, once you’ve received requests the process counts are going to stay higher unless you set max spare servers to one as well – which kind of negates the whole point. As I understand it, maintenance of the number of spares is also only done once a second, so if you suddenly get a spike (and I have clients who get insane spikes in the form of no requests for a minute or two and then 300 requests per second for a minute after that) and you don’t have a sufficient number of min spare servers, you’re going to be in trouble.

Static should in theory give the most consistent results, but in my personal opinion it only really makes sense if you have a rock-solid, highly predictable constant load. Here a constant number of workers is spawned, and exactly that number of PHP requests can be served at any given point in time.

I suspect ondemand should ramp up faster than dynamic, in terms of the number of processes available to serve PHP requests, unless you have min spare servers set to a large value. Ramp-down in the case of ondemand is a bit trickier: essentially ondemand will kill a *single* worker process every second, provided it’s been idle for longer than the process idle timeout. This makes sense, and it will thus ramp down slowly. Dynamic, on the other hand, will likely kill a larger number of children more quickly if there are too many spares lying around. In my experience most sites ramp up and down gradually, so both implementations should work just fine.

Another HUGE advantage of php-fpm and mod_proxy_fcgi is the ability for Apache to load balance to multiple *remote* php-fpm servers. They don’t HAVE to be on the local machine. This is very useful for cluster configurations: instead of having multiple Apache machines with mod_php loaded, load one or two Apache instances (for dealing with mod_rewrite or whatever other “heavy duty” Apache requirements you have) and let them fan out to a larger number of php-fpm servers (these should most likely use static pools – we are talking large scale here, are we not?). You can still have your haproxy, lighttpd, nginx, varnish, or whatever reverse proxies in front, but you can eliminate that large Apache cluster. And if mod_php was your only Apache requirement … you could even consider switching directly to lighttpd. I’ve got a lot of clients that still use mod_rewrite and will continue to do so, so I’m not moving anywhere any time soon.
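Such a fan-out might be sketched with mod_proxy_balancer as below. The backend addresses are placeholders, and this assumes mod_proxy, mod_proxy_fcgi, mod_proxy_balancer and an lbmethod module are loaded; treat it as a starting point rather than a tested cluster config:

```apache
# Sketch: spread PHP execution over several remote php-fpm machines.
<Proxy "balancer://phpfarm">
    BalancerMember "fcgi://10.0.0.11:9000"
    BalancerMember "fcgi://10.0.0.12:9000"
</Proxy>

# Hand .php requests to the pool instead of a single backend.
ProxyPassMatch "^/(.*[.]php(/.*)?)$" "balancer://phpfarm/var/www/htdocs/$1"
```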

4 Responses to “Using php-fpm and mod_proxy_fcgi to optimize and secure LAMP servers”

  1. Mvaldez says:

    Again, thanks for sharing. I’m moving away from suphp to php-fpm; however, using ProxyPassMatch to decide when to transfer control to php-fpm looks quite dangerous. I mean, with fastcgi modules and suphp, Apache actually looks in the file system and checks if the file is there before executing it. With mod_proxy (ProxyPassMatch) the regular expression is checking only the URL, even if the requested file is not really a PHP file (for example “/test.gif/x.php”).

    Regards, Mvaldez.

  2. Jaco Kroon says:

    Perfectly valid argument. One could arguably use mod_rewrite to first rewrite to an alternate URL, eg /php/ on condition that the file exists and then utilize mod_proxy from there on, so not insurmountable.
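    A rough sketch of that idea – untested, with illustrative paths – would be to only proxy when the file exists on disk:

```apache
RewriteEngine On
# Only hand the request to php-fpm if the target actually exists.
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f
RewriteRule ^/(.+[.]php)$ fcgi://127.0.0.1:9000/var/www/htdocs/$1 [P,L]
```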

    Having said that, the biggest issue I’ve picked up so far is that if the DirectoryIndex is set to index.php index.html then it will always try to use index.php even if only index.html exists; the way I got around that was to swap the DirectoryIndex order so that index.php is last. The php-fpm path is, as best I could determine, restricted to files/folders beneath the virtual root (provided everything has been set up sanely), and will thus just result in a 404 scenario with a less than stellar error message.

    Alternatively, if ProxyPassMatch can be set up to do this conditionally then I’m pretty sure one could add the conditions correctly. A good place for discussion would be #httpd on Freenode.

  3. lunakid says:

    After a full day of trial and error, finally reached a state where a) Apache connects FPM via sockets, b) ProxyPassMatch is no longer used, and, most importantly: all the hacks and subtle problems with Rewrite configs are gone.

    The magic was found here: http://www.serverphorums.com/read.php?7,956732 — look for the post with this:

    “You can use UDS too; you just have to trick Apache into creating proxy instances first:

    # we must declare a parameter in here (doesn’t matter which) or it’ll not register the proxy ahead of time
    ProxySet disablereuse=off

    SetHandler proxy:fcgi://php-fpm

    This basically creates an alias to the UDS under whatever name you give it after “fcgi://”, and you can reference that in SetHandler, rewrites or ProxyPass(Match). Apache will pass “proxy:fcgi://php-fpm:8000” (no idea why it picks that port) as the prefix to the backend in this case.”

  4. lunakid says:

    Arrgh, all the Apache context specifier tags have been stripped… 🙁

    Never mind, look for David Zuelke’s post at June 14, 2014 02:40PM.

    Have fun!