Archive for the ‘Scalability’ Category

WordPress, Varnish and Edge Side Includes

Wednesday, July 22nd, 2009

While talking with a client about WordPress and its abysmal performance under high traffic, we started looking back at Varnish and other solutions to keep their machine responsive. Most caching solutions generate a page, serve it and cache it, so new posts and comments tend to lag behind the cache. db-cache works around this by caching the query objects so pages can be generated more quickly, and it expires the cache when tables are updated, but its performance is still lacking. Using APC's opcode cache or memcached just seemed to add complexity to the overall solution.

Sites like perezhilton.com appear to run multiple servers behind Varnish, use wp-cache and move images off to a CDN, yet still manage only about 3 requests per second with an 18-second pageload. Varnish's cache always shows an age of 0, meaning Varnish is acting more as a load balancer than a front-end cache.
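
You can check this on any site by looking at the response headers; Varnish reports the age of the object it serves, and an Age of 0 on every request means nothing is actually being served from cache:

curl -I http://perezhilton.com/
# watch the "Age:" header across repeated requests; 0 every time means misses or passes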

Caching isn't without its downsides. Your weblogs will no longer represent the true traffic: since Varnish intercepts and serves requests before they reach the backend, those hits never appear in Apache's logs. Forget pageview/postview stats (even with addons), because the addon only runs when the page is actually generated, not on cached hits. Widgets that rely on cookies or IP addresses will need to be modified; a workaround is to use a Text Box Widget and do an ESI include of the widget. For this client, we needed only some of the basic widgets. The hits that do reach Apache will come from 127.0.0.1, so adjust your Apache configuration to log the X-Forwarded-For IP address. If you truly need statistics, use something like Google Analytics, and put its code outside your page elements so that waiting for the javascript to load doesn't slow down rendering in the browser.
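
As an example of that workaround, the Text Box Widget contains nothing but an ESI include tag pointing at a URL that renders the cookie/IP-dependent widget, so Varnish fetches that fragment separately (the path below is hypothetical):

<!-- contents of a WordPress Text widget; Varnish splices in whatever /esi/visitor-widget/ returns -->
<esi:include src="/esi/visitor-widget/"/>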

The test site, http://varnish.cd34.com/ is running Varnish 2.0.4, Apache2-mpm-prefork 2.2.11, Debian/Testing and WordPress 2.8.2. I loaded the default template-testing .xml import so that the site had posts with varied dates and structure. To replicate the client's site, the following Widgets were added to the sidebar: Search, Archives, Categories, Pages, Recent Posts, Tag Cloud, Calendar. Calendar isn't in the existing site, but since it is a very 'expensive' SQL query to run, it made for a good benchmark.

The demo site is running on:

model name	: Intel(R) Celeron(R) CPU 2.40GHz
stepping	: 9
cpu MHz		: 2400.389
cache size	: 128 KB

with a Western Digital 80GB 7200RPM IDE drive. Since all of the benchmarking was done on the same machine without any config changes between tests, the benchmarks should represent as even a test base as we can expect.

Regrettably, our underpowered machine couldn't run the Apache-only benchmark with 50 concurrent requests, nor could it run it with the Calendar Widget enabled. In order to get apachebench to complete at all, we had to reduce both the number of requests and the concurrency.
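
The exact apachebench commands aren't shown below, but based on the request and concurrency numbers reported, they would have looked roughly like this:

# Apache alone: 100 requests, 10 concurrent
ab -n 100 -c 10 http://varnish.cd34.com/

# behind Varnish: 10,000 requests, 50 concurrent
ab -n 10000 -c 50 http://varnish.cd34.com/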

These results are from Apache without Varnish.

Server Software:        Apache
Server Hostname:        varnish.cd34.com
Server Port:            80

Document Path:          /
Document Length:        43903 bytes

Concurrency Level:      10
Time taken for tests:   159.210 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      4408200 bytes
HTML transferred:       4390300 bytes
Requests per second:    0.63 [#/sec] (mean)
Time per request:       15921.022 [ms] (mean)
Time per request:       1592.102 [ms] (mean, across all concurrent requests)
Transfer rate:          27.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   7.0      0      25
Processing: 14785 15863 450.2  15841   17142
Waiting:     8209 8686 363.4   8517    9708
Total:      14785 15865 451.4  15841   17142

Percentage of the requests served within a certain time (ms)
  50%  15841
  66%  15975
  75%  16109
  80%  16153
  90%  16628
  95%  16836
  98%  17001
  99%  17142
 100%  17142 (longest request)

Normally we would have run the Varnish-enabled test without the Calendar Widget as well, but I felt confident enough to run it with the widget in the sidebar. Varnish was configured with a 12-hour cache (yes, I know, I'll address that later) and the ESI Widget was loaded.

Server Software:        Apache
Server Hostname:        varnish.cd34.com
Server Port:            80

Document Path:          /
Document Length:        45544 bytes

Concurrency Level:      50
Time taken for tests:   18.607 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      457980000 bytes
HTML transferred:       455440000 bytes
Requests per second:    537.44 [#/sec] (mean)
Time per request:       93.034 [ms] (mean)
Time per request:       1.861 [ms] (mean, across all concurrent requests)
Transfer rate:          24036.81 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   1.8      0      42
Processing:     1   92  46.2    105     451
Waiting:        0   91  45.8    104     228
Total:          2   93  46.0    105     451

Percentage of the requests served within a certain time (ms)
  50%    105
  66%    117
  75%    123
  80%    128
  90%    142
  95%    155
  98%    171
  99%    181
 100%    451 (longest request)

As you can see, even on this aging hardware, we went from 0.63 requests per second to 537.44 requests per second.

But more about that 12-hour cache. The ESI Widget uses an Edge Side Include to pull the sidebar into the template. Rather than caching the entire page as a single object, we instruct Varnish to cache the page and include the sidebar as a separate fragment. As a result, when a visitor goes from the front page to a post page, the sidebar doesn't need to be regenerated for that second page; with wp-cache, the sidebar Widgets would have been regenerated and the resulting page cached whole. Obviously a 12-hour cache would normally hurt the usability of the site, so the ESI widget purges the sidebar, the front page and the post page any time a post is updated, deleted or commented on. Voila: even with a long cache time, the site stays dynamic rather than waiting for wp-cache's page cache to expire. As this widget is a proof of concept, I'm sure a little intelligence can be added to prevent excessive purging in some cases, but it handles things reasonably well. There are some issues not currently handled, including how to deal with users who are logged in for comments; with some template modifications, I think those pieces can also be handled with ESI to provide a lightweight method for the authentication portion.
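
The widget's purge mechanism isn't shown in this post, but conceptually it issues HTTP PURGE requests that the vcl_recv handler in the configuration at the end of this post turns into cache invalidations, along these lines (URLs are illustrative):

# after a comment is posted: invalidate the front page, the post and the sidebar fragment
curl -X PURGE http://varnish.cd34.com/
curl -X PURGE http://varnish.cd34.com/2009/07/some-post/
curl -X PURGE http://varnish.cd34.com/esi/sidebar/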

While I have seen other sites mention Varnish and other methods to keep a WordPress installation alive under high traffic, I believe this approach is a step in the right direction. With the ESI widget, you can focus on your site and let the server do the hard work. The methodology is based on a CMS I have contemplated writing for many years, though using Varnish rather than static files.

The concept was developed in roughly four hours, including the time to write the widget and do the benchmarking. It isn't perfect, but it does address the immediate needs of this one client. I think we can consider it a success.

If you don't have the ability to run Varnish on your system, you are limited to wp-cache and db-cache. If you can connect to a memcached server, you might consider running Memcached for WordPress, as it makes quite a difference as well.

This blog, http://cd34.com/blog/, is not running behind Varnish. To see the Varnish-enabled site with the ESI Widget, go to http://varnish.cd34.com/

Software Mentioned:

* Varnish ESI and Purge and Varnish’s suggestions for helping WordPress
* WordPress
* wp-cache
* db-cache

Sites used for reference:

* Supercharge WordPress
* SSI, Memcached and Nginx (with mentions of a Varnish/ESI configuration)

Varnish configuration used for ESI-Widget:

backend default {
.host = "127.0.0.1";
.port = "81";
}

sub vcl_recv {
  if (req.request == "PURGE") {
    /* ban every cached object whose URL matches the purged URL */
    purge("req.url == " req.url);
  }

  /* static assets never need WordPress cookies */
  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    unset req.http.cookie;
  }
  /* strip cookies everywhere except wp-login/wp-admin so pages are cacheable */
  if (!(req.url ~ "wp-(login|admin)")) {
    unset req.http.cookie;
  }
}

sub vcl_fetch {
  /* default TTL for generated pages: 12 hours; the ESI widget purges on changes */
  set obj.ttl = 12h;
  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    /* static assets can be cached even longer */
    set obj.ttl = 24h;
  } else {
    esi;  /* Do ESI processing on generated pages */
  }
}

Varnish proves itself against a DDoS

Saturday, May 2nd, 2009

I've worked a lot with Varnish over the last few weeks, and we've had a rather persistent attacker sending a small but annoying DDoS at a client on one of our machines. Usually we isolate the client and move the affected sites to a machine where they won't affect other clients; then we can modify firewall rules, find the issue, wait for the attack to end and move them back. This usually causes a bit of turmoil because not every client is easy to shuffle around: some have multiple databases, and sometimes the application they run needs more horsepower to get through the attack.

In this case the application wasn't too badly written, and it was just a matter of firewalling certain types of packets and tuning the TCP settings so that connections time out more quickly while the attack persisted. In order to do this seamlessly, we had to move the physical IP that client was using to another machine running Varnish.
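
The exact firewall rules and sysctls aren't reproduced here, but the tuning was along these lines (illustrative values, not the ones actually used):

# resist SYN floods and let half-open/closing connections time out faster
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_synack_retries=2
sysctl -w net.ipv4.tcp_fin_timeout=15

# rate-limit new connections to the web port
iptables -A INPUT -p tcp --dport 80 --syn -m limit --limit 50/s --limit-burst 100 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --syn -j DROP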

What we ended up with was Varnish running on a machine where we could freely firewall packets and turn on more verbose packet logging, pulling its content from the original machine. Aside from moving the IP address and making config changes on the existing machine, the process was straightforward (a sketch of the relevant config lines follows the steps):

Original Machine
* changed the Apache config to listen on a different IP address, port 81
* modified the firewall to allow port 81
* shut down the virtual ethernet interface for the IP being moved
* restarted Apache

Varnish Machine
* set up the backend to request files from port 81 on the new IP assigned from the old machine
* copied the firewall rules from the Original Machine to the Varnish Machine
* brought up the IP from the original machine
* restarted varnish

Finally, we cleared the ARP cache on the switches both machines were connected to.
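
A sketch of the relevant pieces (the addresses are made up for the example; the real IPs aren't shown here):

# Original Machine, Apache: serve the backend on a secondary IP, port 81
Listen 192.0.2.5:81

# Varnish Machine, VCL: fetch from the original machine's port 81
backend default {
  .host = "192.0.2.5";
  .port = "81";
}

# Varnish Machine: answer on the public IP that was moved over
varnishd -a 192.0.2.10:80 -f /etc/varnish/default.vcl -s file,/var/lib/varnish/storage,1G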

Within seconds, the load on the Original Machine dropped to half of what it had been. Varnish had already been running on that machine, but the DDoS was still hitting its firewall rules and causing Apache to open connections; moving both of those pieces off the machine produced an immediate improvement. Since the same CPU horsepower is still being used for the dynamic scripts (Varnish passes those requests through) and we only removed the serving of some static files, we can safely conclude that it wasn't the application that had the problem. Apache has roughly the same number of processes as when Varnish ran locally, so the load reduction appears to come mostly from the firewall rules and the attack traffic that no longer terminates on that machine.

Since moving the traffic to the other machine, we see the same issues being exhibited there. Because that machine does nothing but cache the Apache responses, we can reasonably assume the firewall is adding quite a bit of overhead. Inbound traffic on the Original Machine was cut almost in half, with a corresponding jump on the Varnish machine. Since the Varnish machine receives traffic both from the original machine and from the DDoS itself, it is hard to be certain, but given the 90% cache hit rate and the size of the cached pages, its inbound traffic is far higher than the cache fills alone would explain, so it is evident the DDoS traffic moved with the IP.

After moving one set of sites and analyzing the Original Machine, it appears that a second set of the client's sites is also being targeted.

Varnish saves the day… maybe

Tuesday, April 28th, 2009

We had a client with a machine where Apache was being overrun… or so we thought. Everything pointed at one set of domains owned by a single client, and in particular two sites with 100+ elements per page: images, css, javascript and iframes composed their main page. Apache was handling things reasonably well, but it was immediately obvious that it could be better.

The conversion to Varnish was quite simple, even on a live server. Slight modifications to the Apache config file moved the domains in question to port 81, followed by a quick restart. Varnish was configured to listen on port 80 on that particular IP, and some minor modifications were made to the startup.vcl file:

sub vcl_fetch {
  /* cache static assets for one hour instead of the two-minute default */
  if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
    set obj.ttl = 3600s;
  }
}
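
For completeness, the wiring around that VCL looked roughly like this (the address is illustrative; the backend definition pointing at port 81 lives in startup.vcl):

# Apache: the affected vhosts listen on port 81
Listen 192.0.2.20:81
# Varnish: take over port 80 on the same IP
varnishd -a 192.0.2.20:80 -f /etc/varnish/startup.vcl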

A one-hour cache should be long enough to do some real good on these sites, overriding the default of two minutes. After an hour it was evident that the sites did perform much more quickly, but we still had a load issue. Some modifications to the Apache config alleviated the other load problems once we dug further into things.

After 5 hours, we ended up with the following statistics from varnish:

0+05:18:24                                                               xxxxxx
Hitrate ratio:       10      100     1000
Hitrate avg:     0.9368   0.9231   0.9156

62576         1.00         3.28 Client connections accepted
466684        57.88        24.43 Client requests received
411765        48.90        21.55 Cache hits
148         0.00         0.01 Cache hits for pass
32018         7.98         1.68 Cache misses
54761         8.98         2.87 Backend connections success
0         0.00         0.00 Backend connections failures
45411         7.98         2.38 Backend connections reuses
48598         7.98         2.54 Backend connections recycles

Varnish is doing a great job. The site loads considerably faster, but it didn't solve the entire problem. It reduced the number of Apache processes on the machine from about 450 to 170, freed up some RAM for cache and made the server more responsive, but it probably addressed only 50% of the issue. The rest was cleaning up some poorly written PHP code, modifying a few MySQL tables and adding some indexes so queries ran more quickly.

After we fixed the code problems, we debated removing Varnish from the configuration. Varnish bought us time to fix the problem and results in a better experience for surfers on the sites, but after the backend changes it is hard to tell whether it makes enough impact to justify keeping a non-standard configuration running. Since it is not caching the main page of the site and only serves the static elements (the site sets an expire time on each generated page), the only real benefit is that Apache no longer has to serve those static elements.

While testing another application, we found we could override hardcoded expire times and force a minimally cached page. Even caching a generated page for two minutes can be the difference between a responsive server and a machine struggling to keep up. Since WordPress, Joomla, Drupal and others set expire times using dates in the past, they ensure the HTML they output is never cached; Varnish lets us ignore that and set our own cache time, which could save a site hit with a lot of traffic.

sub vcl_fetch {
  /* enforce a floor on the TTL regardless of the backend's Expires headers */
  if (obj.ttl < 120s) {
    set obj.ttl = 120s;
  }
}

would give us a minimum two minute cache which would cut the requests to a dynamically generated page considerably.

It is a juggling act: where do you make the tradeoff, and what do you accelerate? Too often the solution to a website's performance problem is to throw more hardware at it. At some point you have to split the load across multiple servers, which adds new bottlenecks. An application designed to run on a single machine is difficult to split across two or more, so many times we do what we can to keep things running on a single machine.

Nginx after one day and conversion of two more machines

Wednesday, April 8th, 2009

Nginx impressed me with the way it was written and its performance has impressed me as well.

This client has three machines that ran Apache2-mpm-worker with php5 running under fastcgi. While page response time was good, the machines constantly ran at roughly 15% idle CPU, with only 600MB-700MB of RAM used for cache. All of the machines are quadcores with 4GB of RAM; they have been running for quite a while and have been tweaked and tuned along the way.

We started by converting one site on one machine. The client was so impressed that we converted a second site on that machine, and within minutes of deployment nginx was serving about 80mb/sec. The next morning, after we looked everything over and confirmed that nginx was holding up, we converted the rest of that machine. Traffic grew almost 20% after that change.

We then looked at the other machines, one of which runs phpadsnew for a relatively large network of the client's sites, along with the banners served from two of the main sites on one machine. Converting those two to nginx moved another 50mb/sec of traffic off Apache. The client immediately saw results: faster pageloads on the sites that pull content from a central domain, and banner ads displaying more quickly. After a few moments of analysis, we decided to swap the entire machine from Apache2 to nginx. That process took a few hours due to the number of virtual hosts and the lack of any real script to migrate the configurations (a converted vhost looks roughly like the sketch below). Response time on the sites was definitely faster. After a little more discussion, rather than giving that machine a day to settle in to see if we would find any problems, we converted the third machine as well.
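
The converted vhost configs aren't shown here; as a rough idea of what each conversion involved, a minimal nginx server block for one of these sites, with php5 still answering over fastcgi, would look something like this (names, paths and the fastcgi port are placeholders):

server {
    listen       80;
    server_name  example.com www.example.com;
    root         /var/www/example.com;
    index        index.php index.html;

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    }
}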

First response in the morning:

yesterday we sent 69.1k unique surfers to sponsors, that is the highest we have ever done.

While only one of the three machines ran Nginx for that entire day, the second had about 8 hours under Nginx and the third about 2 hours for that 'day.'

Today the results are fairly clear. Traffic is up overall and the machines are much more responsive: each is now roughly 80% idle and has roughly 2.4GB of memory used for cache.

[Bandwidth graphs for the three machines]

Backups are scheduled at 3am on the boxes, and a few rsync jobs keep some content directories synced between the machines. You can see the impact on the first graph, where the right-hand side shows a bit more growth. The machine in the last graph was already running nginx but struggled to push more than 85mb/sec or so. The middle graph shows a decline, but they believe that is unrelated to the conversion. The sites are loading more quickly and they expect them to grow quite a bit; so far they are reporting roughly an 18% increase in clicks to their sponsor.

Varnish and Apache2

Tuesday, April 7th, 2009

One client had some issues with Apache2 and a WordPress site. While WordPress isn't really a great performer, this client had multiple domains on the same IP, so dropping Nginx in didn't seem like a sensible fix for the immediate problem.

First things first, we evaluated where the WordPress bottleneck was and installed db-cache and wp-cache-2 (we had tried wp-super-cache but had seen issues with it in some configurations). Pageload time immediately dropped from 41 seconds to 11 seconds. Since the machine was a mostly idle quadcore with 4GB of RAM, the only thing left was the 91 elements being served on each page. Every pageload, even with pipelining, still caused some stress, and two external javascripts and one external flash object delayed rendering; the javascripts in particular held up page rendering, which made the site feel even slower than it was. We made some minor modifications, but while Apache2 was configured to serve things as well as it could, we felt there was still room for improvement.

Having tested Varnish in front of Apache2 before, I knew it would make an impact here because of the number of elements on the page and the amount of work Apache did to serve each request. Varnish and its VCL eliminate a lot of that overhead and should provide capacity for roughly 70% better performance. For this installation, we removed the problem domain's IP from Apache, ran Varnish on that IP and used 127.0.0.1 port 80 as the backend.

Converting a live production site is not for the fainthearted, but here are a few notes.

For Apache you’ll want to add a line like this to make sure your logs show the remote IP rather than the IP of the Varnish server:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" varnishcombined

Modify each of the VirtualHost configs to say:

<VirtualHost 127.0.0.1:80>

and change the line for the logfile to say:

CustomLog /var/log/apache2/domain.com-access.log varnishcombined

Add Listen directives so Apache no longer listens on port 80 on the IP address you want Varnish to answer, and comment out the default Listen 80:

#Listen 80
Listen 127.0.0.1:80
Listen 66.55.44.33:80

Configuration changes for Varnish:

backend default {
.host = "127.0.0.1";
.port = "80";
}

sub vcl_recv {
  /* force a cache lookup for media files, even when cookies are present */
  if (req.url ~ "\.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|mp3|mp4|m4a|ogg|mov|avi|wmv)$") {
      lookup;
  }

  /* same for stylesheets and javascript */
  if (req.url ~ "\.(css|js)$") {
      lookup;
  }
}
sub vcl_fetch {
  /* strip Set-Cookie from everything except POST responses so objects are cacheable */
  if (req.request != "POST") {
    unset obj.http.set-cookie;
  }

  /* hold objects for 10 minutes; begin prefetching 30 seconds before expiry */
  set obj.ttl = 600s;
  set obj.prefetch = -30s;
  deliver;
}

Shut down Apache, restart it with the new configuration, then start Varnish.
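
On a Debian system that sequence amounts to something like the following (init scripts assumed; adjust to your setup):

/etc/init.d/apache2 stop
# Apache comes back bound to 127.0.0.1:80 and the remaining public IPs
/etc/init.d/apache2 start
/etc/init.d/varnish start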

tail -f the Apache logfile for one of the domains you have moved and go to the site. Varnish will fetch everything from the backend the first time, but successive reloads shouldn't show requests for images, javascript or css. For this client we opted to hold things in cache for 10 minutes (600 seconds).

Overall, the process was rather seamless. Unlike converting a site to Nginx, we were not required to change the rewrite rules or set up a fastcgi server to answer .php requests; Varnish is almost invisible to the end product. Clients do lose the ability to do some things, like denying hotlinking, but short of the pages loading considerably quicker, this client was not aware we had made any server changes, and that is the true measure of success.
