Posts Tagged ‘wordpress’

ESI Widget Issues in the Varnish, ESI, WordPress experiment

Sunday, July 26th, 2009

The administration interface is quite simple. Once the plugin is installed, drag the ESI Widget to the Sidebar, then drag any widgets you want displayed into the ESI Widget Sidebar.

[Screenshot: the ESI Widget administration interface]

Current issues:
* When a logged-in user is the first visitor to hit a post page after Varnish's cache has expired, the HTML showing their login status is cached along with the page. Subsequent visitors then see that user's login information, though they lack the credentials that go with it.
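
One possible mitigation, untested here and offered only as a sketch: have Varnish pass any request carrying a WordPress login cookie, placed at the top of vcl_recv before any cookie-stripping rules, so a logged-in user's personalized HTML never enters the cache:

sub vcl_recv {
  # Assumption: authenticated WordPress users carry a cookie whose
  # name starts with "wordpress_logged_in"; never cache their pages.
  if (req.http.Cookie ~ "wordpress_logged_in") {
    pass;
  }
}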

Addons that don’t work properly:
* Any poll application (a possible solution is to wrap the widget in an ESI block)
* Any stats application (unless it converts to a web-bug tracker, this probably cannot be fixed easily)
* Any advertisement/banner rotator that runs internally; OpenX will work, as will most rotators that run outside WordPress
* Any postcount/postviews addon
* CommentLuv?
* ExecPHP (works, but its output is cached)
* Manageable

Any plugin that does its work at post or comment time and doesn't depend on logged-in data should work without a problem. A plugin that requires a login, or uses the IP address to determine whether a visitor has performed an action, will have problems due to the aggressive caching. For sites where content needs to be served quickly and there aren't many comments, ESI Widget should work well.

Because of the way Varnish works, you don't necessarily have to run Varnish on the server running WordPress. Point DNS at the Varnish server, set the backend for the host to your WordPress server's IP address, and you can have a Varnish server across the country caching your blog.
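
As a sketch, the only Varnish-side requirement is a backend definition pointing at the remote server (IP hypothetical):

backend default {
  .host = "203.0.113.10";   # the WordPress server, possibly across the country
  .port = "80";
}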

WordPress, Varnish and Edge Side Includes

Wednesday, July 22nd, 2009

While talking to a client about WordPress and its abysmal performance in high-traffic situations, we started looking back at Varnish and other solutions to keep their machine responsive. Since most caching solutions generate a page, serve it, and cache it, posts and comments tend to lag behind the cache. db-cache works around this by caching the query objects so pages can be generated more quickly, and it does expire the cache when tables are updated, but its performance is still lacking. Using APC's opcode cache or memcached just seemed to add complexity to the overall solution.

Sites like perezhilton.com appear to run behind multiple servers running Varnish, use wp-cache, and move images off to a CDN, and the result is a site handling 3 requests per second with an 18-second pageload. Varnish's cache always shows an age of 0, meaning Varnish is acting more as a load balancer than a front-end cache.

Caching isn't without its downsides. Your weblogs will not represent the true traffic: since Varnish intercepts and serves requests before they reach the backend, those hits never reach the log. Forget pageview/postview stats (even with addons), because the addon only gets loaded when the page is cached. Certain widgets that rely on cookies or IP addresses will need to be modified; a workaround is to use a Text Box Widget and do an ESI include of the widget, as sketched below. For this client, we needed only some of the basic widgets. The hits that do appear in the Apache logs will come from 127.0.0.1, so adjust your Apache configuration to log the X-Forwarded-For IP address. If you truly need statistics, you'll need something like Google Analytics; put its code outside your page elements so that waiting for that JavaScript to load doesn't slow down rendering in the browser.
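
To make the Text Box Widget workaround concrete: the widget's body holds an ESI include tag, which Varnish expands on every request even though the surrounding page is cached. A sketch, with a hypothetical URL that would point at a script emitting just that widget's HTML:

<!-- contents of a WordPress Text widget; Varnish replaces this tag
     with the response from src on each request -->
<esi:include src="/esi/poll-widget.php"/>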

The test site, http://varnish.cd34.com/ is running Varnish 2.0.4, Apache2-mpm-prefork 2.2.11, Debian/Testing, and WordPress 2.8.2. I loaded the default template-testing .xml import so the site had posts with varied dates and construction. To replicate the client's site, the following widgets were added to the sidebar: Search, Archives, Categories, Pages, Recent Posts, Tag Cloud, Calendar. Calendar isn't on the existing site, but since it triggers a very 'expensive' SQL query, it made for a good benchmark.

The demo site is running on:

model name	: Intel(R) Celeron(R) CPU 2.40GHz
stepping	: 9
cpu MHz		: 2400.389
cache size	: 128 KB

with a Western Digital 80GB 7200RPM IDE drive. Since all of the benchmarking was done on the same machine without any config changes taking place between tests, our benchmarks should represent as even a test base as we can expect.

Regrettably, our underpowered machine couldn't run the Apache-only benchmark with 50 concurrent requests, nor could it run it with the Calendar Widget enabled. To get apachebench to finish, we had to reduce both the number of requests and the concurrency.
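
Reconstructed from the reported numbers, the two runs below correspond to apachebench invocations along these lines:

# Apache alone: 100 requests at concurrency 10
ab -n 100 -c 10 http://varnish.cd34.com/

# Varnish in front: 10,000 requests at concurrency 50
ab -n 10000 -c 50 http://varnish.cd34.com/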

These results are from Apache without Varnish.

Server Software:        Apache
Server Hostname:        varnish.cd34.com
Server Port:            80

Document Path:          /
Document Length:        43903 bytes

Concurrency Level:      10
Time taken for tests:   159.210 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      4408200 bytes
HTML transferred:       4390300 bytes
Requests per second:    0.63 [#/sec] (mean)
Time per request:       15921.022 [ms] (mean)
Time per request:       1592.102 [ms] (mean, across all concurrent requests)
Transfer rate:          27.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   7.0      0      25
Processing: 14785 15863 450.2  15841   17142
Waiting:     8209 8686 363.4   8517    9708
Total:      14785 15865 451.4  15841   17142

Percentage of the requests served within a certain time (ms)
  50%  15841
  66%  15975
  75%  16109
  80%  16153
  90%  16628
  95%  16836
  98%  17001
  99%  17142
 100%  17142 (longest request)

Normally we would have run the Varnish-enabled test without the Calendar Widget, but I felt confident enough to run the test with the widget in the sidebar. Varnish was configured with a 12-hour cache (yes, I know, I'll address that later) and the ESI Widget was loaded.

Server Software:        Apache
Server Hostname:        varnish.cd34.com
Server Port:            80

Document Path:          /
Document Length:        45544 bytes

Concurrency Level:      50
Time taken for tests:   18.607 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      457980000 bytes
HTML transferred:       455440000 bytes
Requests per second:    537.44 [#/sec] (mean)
Time per request:       93.034 [ms] (mean)
Time per request:       1.861 [ms] (mean, across all concurrent requests)
Transfer rate:          24036.81 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   1.8      0      42
Processing:     1   92  46.2    105     451
Waiting:        0   91  45.8    104     228
Total:          2   93  46.0    105     451

Percentage of the requests served within a certain time (ms)
  50%    105
  66%    117
  75%    123
  80%    128
  90%    142
  95%    155
  98%    171
  99%    181
 100%    451 (longest request)

As you can see, even with the aging hardware, we went from 0.63 requests per second to 537.44 requests per second.

But, more about that 12-hour cache. The ESI Widget uses an Edge Side Include to pull the sidebar into the template. Rather than just cache the entire page, we instruct Varnish to cache the page and include the sidebar separately. As a result, when a person surfs from the front page to a post page, the sidebar doesn't need to be regenerated for the second page; with wp-cache, the sidebar widgets would have been regenerated and the resulting page cached. Obviously a 12-hour cache would hurt the usability of the site, so the ESI Widget purges the sidebar, the front page, and the post page any time a post is updated, deleted, or commented on. Voila: even with a long cache time, the site stays dynamic rather than waiting for wp-cache's page cache to expire. As this widget is a proof of concept, I'm sure a little intelligence can be added to prevent excessive purging in some cases, but it handles things reasonably well. There are some issues the ESI handling doesn't currently cover, such as users who are logged in for comments; with some template modifications, I think those pieces can also be handled with ESI to provide a lightweight method for the authentication portion.
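
The purge mechanism is just an HTTP request with the PURGE method, which the vcl_recv handler in the configuration below turns into a cache purge for that URL. From the command line, the equivalent is something like this (paths hypothetical):

# Purge the front page and a post page from Varnish's cache.
curl -X PURGE http://varnish.cd34.com/
curl -X PURGE http://varnish.cd34.com/2009/07/some-post/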

While I have seen other sites mention Varnish and other methods to keep your WordPress installation alive under high traffic, I believe this approach is a step in the right direction. With the ESI Widget, you can focus on your site and let the server do the hard work. The methodology is based on a CMS I have contemplated writing for many years, though using Varnish rather than static files.

It is a concept developed in roughly four hours, including the time to write the widget and do the benchmarking. It isn't perfect, but it addresses the immediate needs of the one client. I think we can consider the concept a success.

If you don't have the ability to run Varnish on your system, you're limited to running wp-cache and db-cache. If you can connect to a memcached server, you might consider Memcached for WordPress, as it will make quite a difference as well.

This blog site, http://cd34.com/blog/ is not running behind Varnish. To see the Varnish enabled site with ESI Widget, go to http://varnish.cd34.com/

Software Mentioned:

* Varnish ESI and Purge and Varnish’s suggestions for helping WordPress
* WordPress
* wp-cache
* db-cache

Sites used for reference:

* Supercharge WordPress
* SSI, Memcached and Nginx (with mentions of a Varnish/ESI configuration)

Varnish configuration used for ESI-Widget:

backend default {
.host = "127.0.0.1";
.port = "81";
}

sub vcl_recv {
  if (req.request == "PURGE") {
    purge("req.url == " req.url);
    error 200 "Purged.";  /* stop here rather than passing PURGE to the backend */
  }

  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    unset req.http.cookie;
  }
  if (!(req.url ~ "wp-(login|admin)")) {
    unset req.http.cookie;
  }
}

sub vcl_fetch {
   set obj.ttl = 12h;
   if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
      set obj.ttl = 24h;
   } else {
      esi;  /* Do ESI processing */
   }
}

Varnish and Nginx with Joomla

Sunday, June 28th, 2009

Recently we had a client that had some performance issues with a Joomla installation. The site wasn't getting an incredible amount of traffic, but the traffic it was getting was absolutely overloading the server.

Since the machine hadn't been having issues before, the first thing we did was contact the client and ask what had changed. We already knew which site and database were using most of the CPU time, but the bandwidth graph didn't suggest that traffic was overrunning the server. Our client had rescued this site from another hosting company because it was unusable during prime time, so we had inherited a problem. During the move, the site was upgraded from Joomla 1.0 to 1.5, so we didn't even have a decent baseline to revert to.

The stopgap solution was to move the .htaccess mod_rewrite rules into the Apache configuration, which helped somewhat. We identified a few sections of the code that were getting hit really hard and wrote a mod_rewrite rule to serve those images directly from disk, bypassing Joomla serving them through itself. This made a large impact and at least got the site responsive enough that we could leave it online and work through the admin panel to figure out what had gone wrong.
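
The exact rule depended on the component's URL scheme, but the shape of it, as a purely hypothetical sketch (component name and paths invented), was to translate the PHP-served image URL into the file's real location so Apache serves it directly:

RewriteEngine On
# Hypothetical: images were requested as /index.php?option=com_foo&img=NAME
# but live on disk under /images/stories/; rewrite to the static file and
# drop the query string so Joomla is never loaded for them.
RewriteCond %{QUERY_STRING} ^option=com_foo&img=(.+)$
RewriteRule ^/index\.php$ /images/stories/%1? [L]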

Some of the modules that had been enabled contributed quite a bit to the performance headache. One chat module generated a 404 every second for each logged-in person, polling for pending messages; since Joomla is loaded in full for each 404, this added quite a bit of extra processing. Another quick configuration change eliminated dozens of bad requests. At this point the server was responsive, the client was happy, and we made notes in the trouble-ticket system and our internal documentation for reference.
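
As a sketch of that change in Apache's vhost config (placed before the SEF catch-all rewrite; details assumed), the idea is to answer the chat poll's missing-file requests with a bare 404 instead of letting them fall through to Joomla:

RewriteEngine On
# Chat poll requests for files that don't exist get a plain 404
# directly from Apache instead of rewriting to index.php.
RewriteCond %{REQUEST_URI} ^/modules/mod_oneononechat/chatfiles/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule .* - [R=404,L]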

Three days later the machine alerts and our load problem is back. After all of the changes, something is still having problems. Upon deeper inspection, we find that the portions of the system dealing with the menus are being recreated on every request. There's no built-in caching, so the decision is to try Varnish. Varnish has worked in the past for WordPress sites that have been hit hard, so we figured that if we could cache the images, CSS, and some of the static pages that don't require authentication, we could get the server responsive again.

Apart from the basic configuration, our varnish.vcl file looked like this:

sub vcl_recv {
  if (req.http.host ~ "^(www\.)?domain\.com$") {
    set req.http.host = "domain.com";
  }

  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    unset req.http.cookie;
  }
}

sub vcl_fetch {
 set obj.ttl = 60s;
 if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
      set obj.ttl = 3600s;
 }
}

To get the Apache logs to report the real client IP, you need to modify the VirtualHost config to log the forwarded address.
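
A minimal sketch of that change (format name arbitrary): log the X-Forwarded-For header in place of the client address.

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" varnishcombined
CustomLog /var/log/apache2/domain.com-access.log varnishcombined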

The performance of the site after running Varnish in front of Apache was quite good. Apache was left handling only .php requests, and the server is again responsive. It runs like this for a week or more without any issues, with only a slight load spike here or there.

However, Joomla doesn't like the fact that every request's REMOTE_ADDR is 127.0.0.1, and some addons stop working. In particular, an application that allows the client to upload .pdf files into a library requires a valid IP address for some reason, and a module that adds a sub-administration panel for a manager/editor also requires an IP address other than 127.0.0.1.

With some reservation, we decide to switch to Nginx + FastCGI which removes the reverse proxy and should fix the IP address problems.

Our configuration for Nginx with Joomla:

server {
        listen 66.55.44.33:80;
	server_name  www.domain.com;
 	rewrite ^(.*) http://domain.com$1 permanent;
}
server {
        listen 66.55.44.33:80;
	server_name  domain.com;

	access_log  /var/log/nginx/domain.com-access.log;

	location / {
		root   /var/www/domain.com;
		index  index.html index.htm index.php;

           if ( !-e $request_filename ) {
             rewrite (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ /index.php last;
           }

	}

	error_page   500 502 503 504  /50x.html;
	location = /50x.html {
		root   /var/www/nginx-default;
	}

	location ~ \.php$ {
		fastcgi_pass   unix:/tmp/php-fastcgi.socket;
		fastcgi_index  index.php;
		fastcgi_param  SCRIPT_FILENAME  /var/www/domain.com/$fastcgi_script_name;
		include	fastcgi_params;
	}

        location = /modules/mod_oneononechat/chatfiles/ {
           if ( !-e $request_filename ) {
             return 404;
           }
        }
}

With this configuration, Joomla is handed any URL for a file that doesn't exist, which is what allows the Search Engine Friendly (SEF) links. The second 404 handler deals with the oneononechat module, which polls for messages destined for the logged-in user.

With Nginx, the site is again responsive. Load spikes still occur from time to time, but the site is stable, has a lot less trouble dealing with the load, and seems to recover pretty well.

However, a module called Rokmenu, which was included with the template design, appears to have issues. Running PHP behind FastCGI sometimes gives different results than running under mod_php, and it appears that Rokmenu relies on the path being passed in and doesn't normalize it properly. So, when the menu is generated, with SEF on or off, URLs look like /index.php/index.php/index.php/components/com_docman/themes/default/images/icons/16x16/pdf.png.

Obviously this creates broken links and causes more 404s. We installed a fresh Joomla on Apache and imported the data from the copy running on Nginx; Apache with mod_php appears to work properly, but the performance is quite poor.

To troubleshoot, we made a list of every addon and ran through some debugging. With apachebench, we wrote up a quick command line that could be pasted in at the ssh prompt and decided on some metrics. Within minutes, our first test revealed 90% of our performance issue: two of the addons required compatibility mode because they were written for Joomla 1.0 and hadn't been updated, and turning on compatibility mode on our freshly installed site resulted in 10x worse performance. As a test, we disabled the two modules that relied on compatibility mode, turned compatibility mode off, and the load dropped immensely. We had disabled SEF early on thinking it might be the issue, but this test found the real problem almost immediately. Enabling other modules in subsequent tests showed only marginal performance changes. Compatibility mode was our culprit the entire time.
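
The one-liner was along these lines, a sketch with a hypothetical URL: a short fixed run against the front page, comparing throughput as each addon was toggled.

# 100 requests, 10 concurrent; compare this number between runs with
# a given module enabled and disabled.
ab -n 100 -c 10 http://domain.com/ | grep 'Requests per second'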

The client started searching for replacements for the two modules that required compatibility mode and disabled them temporarily while we moved the site back to Apache to fix the URL issue in Rokmenu. At this point the site was responsive, though pageloads with lots of images were not as quick as they had been with Nginx or Varnish. At a later point, images and static files will be served from Nginx or Varnish, but the site is fairly responsive and handles the load spikes reasonably well when Googlebot or another spider hits.

In the end the site ended up running on Apache because Varnish and Nginx each had minor issues with this deployment. Moving to Apache alternatives doesn't always fix everything and may introduce side effects that you cannot work around.

Varnish and Apache2

Tuesday, April 7th, 2009

One client had some issues with Apache2 and a WordPress site. While WordPress isn't really a great performer, this client had multiple domains on the same IP, and dropping Nginx in didn't seem like it would solve the immediate problem.

First things first: we evaluated where the issue was with WordPress and installed db-cache and wp-cache-2. (We had tried wp-super-cache but had seen issues with it in some configurations.) Immediately the pageload time dropped from 41 seconds to 11 seconds. Since the machine was a mostly idle quad-core with 4GB of RAM, the only thing left was the 91 page elements being served. Each pageload, even with pipelining, still seemed to cause some stress. Two external JavaScripts and one external Flash object caused some delay in rendering the page; the JavaScripts were actually responsible for holding up the page rendering, which made the site seem even slower than it was. We made some minor modifications, but while Apache2 was configured to serve things as best it could, we felt there was still room for improvement.

Having tested Varnish in front of Apache2 before, I knew it would make an impact in this situation because of the number of elements on the page and the amount of work Apache did to serve each request. Varnish and its VCL eliminate a lot of that overhead and should yield capacity for roughly 70% better performance. For this installation, we removed the problem domain's IP from Apache, ran Varnish on that IP, and used 127.0.0.1 port 80 as the backend.

Converting a live production site is not for the fainthearted, but here are a few notes.

For Apache you’ll want to add a line like this to make sure your logs show the remote IP rather than the IP of the Varnish server:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" varnishcombined

Modify each of the VirtualHost configs to say:

<VirtualHost 127.0.0.1:80>

and change the line for the logfile to say:

CustomLog /var/log/apache2/domain.com-access.log varnishcombined

Add Listen directives to prevent Apache from listening on port 80 on the IP address that you want Varnish to answer, and comment out the default Listen 80:

#Listen 80
Listen 127.0.0.1:80
Listen 66.55.44.33:80

Configuration changes for Varnish:

backend default {
.host = "127.0.0.1";
.port = "80";
}

sub vcl_recv {
  if (req.url ~ "\.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|mp3|mp4|m4a|ogg|mov|avi|wmv)$") {
      lookup;
  }

  if (req.url ~ "\.(css|js)$") {
      lookup;
  }
}
sub vcl_fetch {
        if( req.request != "POST" )
        {
                unset obj.http.set-cookie;
        }

        set obj.ttl = 600s;
        set obj.prefetch =  -30s;
        deliver;
}

Shut down Apache, restart it with the new configuration, then start Varnish.

tail -f the Apache logfile for one of the domains you have moved, then go to the site. Varnish will fetch everything from the backend the first time, but successive reloads shouldn't show requests for images, JavaScript, or CSS. For this client we opted to hold things in cache for 10 minutes (600 seconds).
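
A quick way to verify caching, as a sketch (the image path is hypothetical): Varnish adds an Age header, which should be greater than zero on a repeat request for a cached object.

# First request warms the cache; the second should report Age > 0.
curl -sI http://domain.com/wp-content/uploads/header.png | grep -i '^Age'
curl -sI http://domain.com/wp-content/uploads/header.png | grep -i '^Age'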

Overall, the process was rather seamless. Unlike converting a site to Nginx, we didn't have to change the rewrite config or set up a FastCGI server to answer .php requests. Clients lose the ability to do some things, like denying hotlinking, but Varnish runs almost invisibly to the client. Short of the pages loading considerably faster, the client wasn't aware we had made any server changes, and that is the true measure of success.
