Note: This post is going to get re-written shortly to better fit the new format of the blog, i.e. pulling out key points and making them separate posts, with tools and caveats for each one. Leaving this here for historicity's sake.
In the course of revamping the UBC Computer Science department’s website, I came face-to-face once again with the fact that Drupal, in terms of performance, is like a bunch of drunken monkeys on an oil tanker in a narrow channel during a monsoon: fun to play with, as long as you don’t have to try to get them to actually do anything coordinated or useful, and as long as your environmental disaster insurance is paid up.
Replace “monkeys” with “modules”.
Inevitably the coast guard shows up, and tries to make all those drunken monkeys walk in a straight line… and, well, it doesn’t work. (Again, replace “coast guard” with “users”. And the ship with a server. And replace the captain with our sysadmin. Imagine the captain sobbing. There you go.)
There are two aspects to site performance: load times, and server performance. Load time is, roughly, how long it takes the client to download a page and all its parts; server performance is how long the server takes to generate that page in the first place – and that metric also determines how well the server stands up to getting hammered under high load.
Some of the problems I encountered were basic best-practices things you have to tackle when taking a site from development to production, just in terms of page load-times.
- too much CSS, and none of it squished/optimized
- too many enormous background graphics
- images not optimized or sprited
- IE-specific CSS not removed to a separate file
- PNGs where true background alpha transparency really isn’t needed
- nothing validating as W3C compliant
Fixing this stuff took the first page load-time from 14 seconds to 7-ish seconds (I use both YSlow and Google’s PageSpeed Firefox plugins for measuring this).
Not bad – 50% reduction.
But 7 seconds? That’s awful. There’s much more that can be done. On the one hand, much overhead is just delivering the files to the client, and we had lots of big pretty background CSS images eating up a lot of download time. Not much we can do about that beyond what I listed above. But getting all the CSS and JS to the client can be optimized. Zip ‘em up.
Turn on gzipping on the server via mod_deflate.
Modern browsers – all of them – can handle gzipped files. Drupal itself has a page compression setting, but I did this through the server rather than through Drupal, basically as a way of spreading out the work: letting Drupal do it means the compression happens in PHP functions, whereas Apache does it after that point, using its own resources. (Note: if you have a proxy, do it there instead – although that brings SSL issues of its own.)
If you do this in Apache, Drupal’s page compression should be turned off. And if you do it in Apache, non-Drupal pages on the server get compressed too. That’s good for us.
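Before adding the filter, make sure mod_deflate is actually loaded. The line below assumes a stock Apache layout, so adjust the path for your distro (on Debian/Ubuntu, a2enmod deflate does the same job):
LoadModule deflate_module modules/mod_deflate.so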
First page load went from 7 seconds to 5 just by doing this. Add something like this to your httpd.conf:
# Insert filter
SetOutputFilter DEFLATE
# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html
# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip
# MSIE masquerades as Netscape, but it is fine
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Don't compress images
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
# Don't compress compressed files or executables
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
# Don't compress PDFs or other pre-compressed media
SetEnvIfNoCase Request_URI \.(?:pdf|avi|mp3|mp4|mov|rm)$ no-gzip dont-vary
# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary
This is pretty close to Apache’s suggested configuration; you don’t have to include the Netscape stuff, but we have a very varied user base, so I left it in.
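You can sanity-check that compression is actually happening with curl from any machine (swap in your own URL):
curl -s -I -H 'Accept-Encoding: gzip,deflate' http://www.example.com/ | grep -i content-encoding
# should print: Content-Encoding: gzip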
Not only were the pages slow to get to the client: the pages were causing the server to collapse into a little heap. The first load tests of our setup basically brought our server to its knees after 30 concurrent requests. Not good. So database tweaks and server tweaks were needed, along with some stuff in Drupal itself.
- Storage engine: use InnoDB as your storage engine, not MyISAM. It’s more overhead, but – especially if you allow user logins – is more robust and is the better choice for Drupal. InnoDB will be the default for Drupal 7, also.
- Make sure your InnoDB log buffer and log file sizes are bigger than the defaults; there’s a sketch of the relevant my.cnf lines after this list.
- Use a separate database server if you can.
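As a rough sketch, the my.cnf side looks something like this. The numbers are illustrative starting points for a modest box, not recommendations – benchmark against your own load:
[mysqld]
default-storage-engine = InnoDB
# The buffer pool is the big one: give it as much RAM as you can spare.
innodb_buffer_pool_size = 512M
# Both of these default far too small for a busy Drupal site.
innodb_log_buffer_size = 8M
innodb_log_file_size = 128M
# Note: changing the log file size means shutting MySQL down cleanly and
# moving the old ib_logfile* files out of the way first, or it won't start.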
There’s lots you can do within Drupal. Once we launch I plan to go back and do a full post on this, but for now:
- Install the Drupal Tweaks module. It’s like it was made for this or something!
- Install the Block Cache module (I’m using the “with node grants” patch because we’ve got lots of access control stuff on the site). Drupal evaluates every block on each page load, so this helps by letting you tell it when a block should be re-evaluated – per page, per role, etc. You decide. Because I’m using Context, this module matters less for us, but it still helps: all the page menus are part of the main block loading cycle, even when the rest of the blocks are disabled. I have them set to cache per page, so that active menu items get set correctly.
- Panels/Context improvements. I ended up doing a custom panels/context hybrid; I’m not happy with it, but it’s passable. I didn’t benchmark this, because I can’t change our setup at this point anyway, but I did do some things – like removing lots of custom panels and moving them into selection criteria on the master node/%node page – that I think will improve performance in the future. (I’ll do another post on that later, too. I PROMISE.)
- Set each View to cache at whatever interval is reasonable.
- Use Panels caching, if you’re using Panels.
- Memory. You need lots of it. Up your PHP.ini memory limit to 64M. At least. We have it higher, but we do lots of cron importing and job scheduling. Raise it in increments and stop when you stop getting whitescreened. Depending on your setup, you can set it:
- in PHP.ini: memory_limit = 64M
- in settings.php: ini_set('memory_limit', '64M');
- in .htaccess: php_value memory_limit 64M
- If you have a dedicated server and can do so, increase your RAM. If you’re on a shared server with ssh access, you can check how much RAM your server has by typing free -t -m. Rent more. For example (rough numbers here) you’ll want around 4GB for a site that has the common Drupal modules and gets >10,000 hits a day.
Install an opcode cache like APC. This stores chunks of oft-compiled code in memory for quick retrieval, so PHP doesn’t recompile the same files on every request. All PHP-powered CMSes like opcode caching. Just do it. It’s like hot coffee for the monkey. However, there are some Drupal-specific gotchas. The apc section of our PHP.ini looks something like this (a representative sketch – exact values depend on your APC version and your box):
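[apc]
apc.enabled = 1
; shared memory for the cache, in MB - size it so it never quite fills
apc.shm_size = 64
; with a TTL set, a full cache times out old entries instead of getting dumped wholesale
apc.ttl = 7200
apc.user_ttl = 7200
; garbage collection lifetime - matters more if you use apc-specific functions
apc.gc_ttl = 3600
; leave stat on unless you never edit files in place on the server
apc.stat = 1
; keep APC away from files it chokes on (more on this below)
apc.filters = "-context/theme/.*\.tpl\.php$,-modules/feeds/.*"
apc.include_once_override = 0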
Some of this stuff is already the default, but it’s here to enable tweaking and testing. TTL is important, because it means that if your cache fills up it won’t get dumped: older entries will get timed out. gc_ttl is for garbage collection, and is more important if you’re using apc-specific functions.
Note the apc.filters directive.
When I turned APC on, I got a lot of stuff that looked like this:
include(): apc failed to locate ./sites/all/modules/context/theme/context-block-browser-item.tpl.php - bailing in /cs/web/www.cs.ubc.ca/docs/drupal/includes/theme.inc on line 1066
Hmm. It turns out that Drupal’s PHP Template engine does weird things with relative paths, and that was causing APC to throw up all over the place. Drunken monkeys, anybody?
Turning the apc.include_once_override directive on or off doesn’t fix the problem – it’s buggy right now, and using it has performance problems of its own. Since I’m using PHP 5.3 anyway, it matters less: 5.3 has its own include optimizations.
Instead, I simply filtered out the problem modules until I can get into the code and see what’s going on. The filter is a regular expression, and as you can see I filtered out a few other places I don’t want APC going (the feeds module is making it choke as well, so I just excluded the folder, since it only runs once a day on cron anyway).
You can explicitly include or exclude files by putting - or + before an expression in the filter list. If you’re using + filters, make sure apc.cache_by_default = 0.
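For example, a whitelist-style setup looks like this (the paths here are made up – adjust them to your own docroot):
apc.cache_by_default = 0
; cache only core includes and contrib modules, ignore everything else
apc.filters = "+/var/www/drupal/includes/.*,+/var/www/drupal/sites/all/modules/.*"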
Load test that sucker
It doesn’t matter what you use, it really doesn’t. Apache Benchmark is nice if you have server access. There are dozens of online services that’ll let you do a few tests free so you can get out of your own building and see what somebody on the actual internerd is seeing, and they give you pretty graphs too. Whatever. You can google yourself up lots of different tutorials on load testing, but basically remember that concurrent connections are as key as the number of requests.
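If you’ve got server access, something like this throws 50 concurrent clients at your front page for 500 requests total (swap in your own URL):
ab -n 500 -c 50 http://www.example.com/
The -c number is the one to ramp up – that’s what tells you where the cliff is.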
Ultimately, with all this stuff tweaked and working well, our server went from groaning at a mere 30 concurrent connections with basically a J-shaped load profile (running ‘top’ on the server showed that the edge of the vertical was the point at which it started swapping to disk), to something that looked more like a sideways hockey stick and was ticking along fine at 100 and 150 and beyond.
Sober little monkeys
Right now, our page load times are sitting around 3-4 seconds for a first load on a cable connection, and that’s not good enough yet – next steps are to go through and turn on Views and Panels caching (as I recommended above) once we’re finished development and have all the views and content in there and know what sort of page profile we’re looking at.
This is a very brief overview, but now I have a site to configure. Later, monkeys.