Posts Tagged ‘Varnish’

WordPress Varnish ESI Widget is back. Thank you Varnish.

Wednesday, January 26th, 2011

Long ago I wrote the WordPress ESI Widget to help a client’s site stay online during a barrage of traffic. To solve performance problems on high-traffic WordPress sites you have to cache, but almost all of the caching addons for WordPress do page-level caching rather than fragment caching. After the site’s traffic slowed, I stopped development on the widget because of the infrastructure required to support compression.

To compress an ESI-assembled page, you had to run Nginx in front of Varnish and give up some performance as a result. Nginx would take the initial request and pass it to Varnish; Varnish would talk to the backend (which could be the same Nginx server in a somewhat convoluted configuration), grab the fragments and assemble the page; then it would hand the result back to Nginx, which would compress it and deliver it to the surfer.

Now that Varnish can compress ESI-assembled pages itself, that extra layer is no longer needed. We’re left with a very simple front-end cache sitting in front of our backend servers.
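
A minimal sketch of what the simplified setup can look like in Varnish 3.0 VCL; the Content-Type guard is my own sensible default rather than anything the widget requires:

sub vcl_fetch {
  # Only parse HTML for <esi:include> tags; pass everything else through untouched
  if (beresp.http.Content-Type ~ "text/html") {
    set beresp.do_esi = true;
    # Let Varnish store and deliver the assembled page gzip'ed (new in 3.0)
    set beresp.do_gzip = true;
  }
}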

Why is Fragment Caching important?

Fragment caching lets the cache store pieces of a page that repeat across many pages and assemble those pieces with the rest of the page. The sidebar on your WordPress site only needs to be generated once as someone surfs through your site. This changes the nature of WordPress caching considerably. In my benchmarks, the Varnish ESI widget roughly doubled the performance of the fastest existing WordPress caching plugin, bested only by WP Varnish, a plugin that uses Varnish directly for full-page caching and manages cache expiration.

ESI explained simply is probably the best example I have ever found for explaining how ESI works.
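
In markup terms, the idea looks something like this (the /esi/sidebar path is purely illustrative, not a URL the widget dictates):

<!-- the article body is cached per URL -->
<div id="content"> ... post content ... </div>

<!-- the sidebar is its own cached object, generated once and reused on every page -->
<esi:include src="/esi/sidebar"/>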

But something else is faster

WP Varnish is currently faster and, for all practical purposes, probably always will be on a very busy site. However, on a site where most of the traffic lands on one page, the time to first byte for the second page viewed should be lower with an ESI-assembled page, because the sidebar, which contains some of the most computationally expensive parts of the page, doesn’t need to be generated again. We give up some raw speed but gain an advantage when someone clicks through to read a second page. The perfect use case is a single post getting a burst of publicity and those surfers deciding to read other articles you’ve written.

Varnish’s Original Announcement

From: Poul-Henning Kamp
Date: January 25, 2011 6:04:02 AM EST
To: varnish-misc@varnish-cache.org
Subject: Please help break Varnish GZIP/ESI support before 3.0


One of the major features of Varnish 3.0 is now feature complete, and
I need people to start beating it up and help me find the bugs before
we go into the 3.0 release cycle.


GZIP support
------------

Varnish will ask the backend for gzip'ed objects by default and for
the minority of clients that do not grok that, ungzip during delivery.

If the backend can not or will not gzip the objects, varnish can be
told in VCL to gzip during fetch from the backend.  (It can also
gunzip, but I don't know why would you do that ?)

In addition to bandwidth, this should save varnish storage (one gzip
copy, rather than two copies, one gzip'ed one not).

GZIP support is on by default, but can be disabled with a parameter.



ESI support
-----------

Well, we have ESI support already, the difference is that it also
understands GZIP'ing.  This required a total rewrite of the ESI
parser, much improving the readability of it, I might add.

So now you can use ESI with compression, something that has hitherto
been a faustian bargain, often requiring an afterburner of some kind
to do the compression.

There are a lot of weird cornercases in this code, (such as including
a gzip'ed object in an uncomressed object) so this code really needs
beaten up.

Original message

What else is there?

Another very important change is that Varnish now uses gzip when requesting objects from the backend. That doesn’t sound like much, but it is. You can now run a Varnish server in another data center and worry less about latency. Before this version, any ESI-assembled page had to be fetched uncompressed, and large pages add small amounts of latency that result in a poorer surfing experience. Most installations run Varnish on the same machine or on one that is network-topologically close, but this opens the door for a CDN to run ESI-enabled edge servers to supercharge your WordPress site hosted anywhere.

When will it be here?

Varnish moves quickly, and while the changes involve substantial rewrites, their code is very well written. I don’t expect we’ll see many bugs, and it should be released in the next few months. This site and a number of other sites we work with will be running it later this week.

In short, caching for WordPress just got an incredible boost. Even before compressed ESI and gzip’ed backend fetches, the ESI Widget was twice as fast as the fastest non-Varnish plugin and over 440 times faster than WordPress out of the box.

Original Info

* WordPress Cache Plugin Benchmarks
* WordPress, Varnish and Edge Side Includes
* ESI Widget Issues in the Varnish, ESI, WordPress experiment
* A WordPress Widget that Enables one to use Varnish and ESI

DDOS attack mitigation

Monday, April 26th, 2010

Today we had a DDOS attack on one of our clients. They were running Apache prefork with mod_php5 and a rather busy application. While we initially started filtering IP addresses with iptables and a few crudely hacked rules, we knew something more permanent had to be done. Moving to MPM-Worker with PHP served over FastCGI seemed reasonable, but looking at the history of the attacks on this machine, I believe Apache would still have been vulnerable since we cannot filter the requests early enough in Apache’s request handling.

Apache can fight some DDOS attacks with mod_security and mod_evasive, but this particular attack was designed to hit Apache before the point where those modules hook into the request. That also rules out fail2ban. We could run mod_forensic or mod_logio to assist fail2ban, but it would still be a stopgap measure.

We could have used some Snort rules and tied them to iptables, but that is a rather poor solution to the problem.

While we could have used Varnish, their application would have had some issues. mod_rpaf can help by rewriting REMOTE_ADDR from the X-Forwarded-For header that Varnish sets, but mod_geoip inserts itself before mod_rpaf, so we would have needed to patch and recompile mod_geoip. I also wasn’t sure how Varnish would handle Slowloris, and we had to fix this immediately.

Putting them behind a Layer 7 load balancer would have isolated the origin server and let the load balancer absorb the brunt of the attack, but again we would have needed mod_rpaf and some modifications to their code.

In the end, Lighttpd and Nginx appeared to be the only well-documented solutions. (After the conversion, we did find documentation saying Varnish and Squid were immune to Slowloris.) With Nginx or Lighttpd we had no IP address issues to contend with, and it would be easy enough to modify the FastCGI config to pass the GeoIP information in the same request variable their application expected. We knew we had to run PHP under FastCGI anyway, so we might as well pick a tool that can block the attack in the webserver without having to rely on firewall rules. We did put a few firewall rules in place to block the larger offenders.
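
Passing the GeoIP data along would look roughly like this; the GEOIP_COUNTRY_CODE variable name and the PHP-FastCGI address are assumptions for illustration, and it relies on nginx being built with its geoip module and a geoip_country database configured in the http { } block. The rate limiting we actually added is shown below.

    location ~ \.php$ {
        include fastcgi_params;
        # hand the country code to PHP under the same variable mod_geoip used under Apache
        fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
        fastcgi_pass 127.0.0.1:9000;
    }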

In the http { } section of our nginx config, we added:

    # drop slow or idle clients quickly so Slowloris-style requests can't pin workers
    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 10;
    send_timeout 10;
    # 16mb shared zone keyed on the client IP, used by limit_conn below
    limit_zone limit_per_ip $binary_remote_addr 16m;

and in the server { } section, we added:

    # allow at most 5 simultaneous connections per IP
    limit_conn limit_per_ip 5;

Since each server was expecting to handle one or two requests from each IP, this gave us a little headroom while solving the problem in the right place.

I believe Varnish would have held the connection open without sending a request to the backend, which makes it a fairly versatile tool for dealing with DDOS attacks. While I do like the ability to block certain requests in VCL, the documented methods for fighting this type of attack favored a non-threaded webserver. Varnish in front of Apache would have worked, but we already knew we needed to move this client off Apache at some point, and this gave us the opportunity to do it while under the gun.
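
For reference, blocking requests in VCL is as simple as matching in vcl_recv and returning an error; the patterns below are hypothetical placeholders, not the signatures we actually saw:

sub vcl_recv {
  # reject traffic matching known-bad patterns before it ever reaches the backend
  if (req.http.User-Agent ~ "BadBot" || req.url ~ "^/some-hammered-url") {
    error 403 "Forbidden";
  }
}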

Wouldn’t have had it any other way.

WordPress Cache Plugin Benchmarks

Thursday, March 4th, 2010

A lot of time and effort goes into keeping a WordPress site alive once it starts to accumulate traffic. While not every site has the same goals, keeping the site online and responsive is the number one priority: when a surfer requests a page, it should load quickly. Each addon handles caching a little differently and suits different situations.

For many sites, page caching will provide decent performance. Once your site starts receiving comments, or people log in, many caching solutions cache either too aggressively or not enough. Given how many solutions exist, it is obvious that WordPress underperforms in higher-traffic situations.

The list of caching addons that we’re testing:

* DB Cache (version 0.6)
* DB Cache Reloaded (version 2.0.2)
* W3 Total Cache (version 0.8.5.1)
* WP Cache (version 2.1.2)
* WP Super Cache (version 0.9.9)
* WP Widget Cache (version 0.25.2)
* WP File Cache (version 1.2.5)
* WP Varnish (in beta)
* WP Varnish ESI Widget (in beta)

What are we testing?

* Frontpage hits
* http_load through a series of URLs

We take two measurements. The cold-start measurement is taken after any plugin cache has been cleared and Apache2 and MySQL have been restarted, with a 30-second pause before the tests begin. We hit the frontpage 1000 times with 10 parallel connections, then repeat that test once Apache2 and the caching solution have had time to cache the page. Afterwards, http_load requests the series of URLs listed below to simulate people surfing other pages. Between those two measurements, we should have a pretty good indicator of how well a site will perform in real life.

What does the Test Environment look like?

* Debian Squeeze VPS
* Linux Kernel 2.6.33
* Single core of a Xen-virtualized Xeon X3220 (2.40 GHz)
* 2 GB RAM
* CoW file written on a RAID-10 system using 4x 1 TB 7200 RPM drives
* Apache 2.2.14 mpm-prefork
* PHP 5.3.1
* WordPress Theme Test Data
* Tests are performed from a quad-core Xeon machine connected via 1000Base-T on the same switch and /24 as the VPS machine

This setup is meant to replicate what most people might choose to host a reasonably popular WordPress site.

tl;dr Results

If you aren’t using Varnish in front of your web site, the clear winner is W3 Total Cache using Page Caching – Disk (Enhanced), Minify Caching – Alternative PHP Cache (APC), Database Caching – Alternative PHP Cache (APC).

If you can use Varnish, WP Varnish would be a very simple way to gain quite a bit of performance while maintaining interactivity. WP Varnish purges the cache when posts are made, allowing the site to be more dynamic and not suffer from the long cache delay before a page is updated.

W3 Total Cache has a number of options, and some settings can be quite detrimental to site performance. If you can’t use APC or Memcached for caching database queries or minification, turn both off. W3 Total Cache’s interface is overwhelming, but the plugin author has indicated that the next version will add a ‘Wizard’ configuration menu along with fragment caching.

WP Super Cache isn’t too far behind and is also a reasonable alternative.

Either way, if you want your site to survive, you need to use a cache addon. Going from 2.5 requests per second to 800+ requests per second makes a considerable difference in the usability of your site for visitors. Logged in users and search engine bots still see uncached/live results, so, you don’t need to worry that your site won’t be indexed properly.

Results

Sorted in ascending order of overall performance (slowest configurations first).

Each addon lists its cold-start run followed by its warm run. ApacheBench figures come first (requests/second, time per request in ms, and the time within which 50% of requests completed); http_load figures follow the bar (fetches/second, then minimum and average first-response times in ms).

Baseline
  cold: 4.97 req/s, 201.006 ms/req, 50% within 2004 ms | 15.1021 fetches/s, first response 335.708 / 583.363 ms
  warm: 5.00 req/s, 200.089 ms/req, 50% within 2000 ms | 15.1712 fetches/s, first response 304.446 / 583.684 ms

DB Cache (cached all SQL queries)
  cold: 4.80 req/s, 208.436 ms/req, 50% within 2087 ms | 15.1021 fetches/s, first response 335.708 / 583.363 ms
  warm: 4.81 req/s, 207.776 ms/req, 50% within 2091 ms | 15.1712 fetches/s, first response 304.446 / 583.684 ms

DB Cache (out of box config)
  cold: 4.87 req/s, 205.250 ms/req, 50% within 2035 ms | 14.1992 fetches/s, first response 302.335 / 621.092 ms
  warm: 4.94 req/s, 202.624 ms/req, 50% within 2026 ms | 14.432 fetches/s, first response 114.983 / 618.434 ms

WP File Cache
  cold: 4.95 req/s, 201.890 ms/req, 50% within 2009 ms | 15.8869 fetches/s, first response 158.597 / 549.176 ms
  warm: 4.99 req/s, 200.211 ms/req, 50% within 2004 ms | 16.1758 fetches/s, first response 99.728 / 544.107 ms

DB Cache Reloaded (all SQL queries cached)
  cold: 5.02 req/s, 199.387 ms/req, 50% within 1983 ms | 15.0167 fetches/s, first response 187.343 / 589.196 ms
  warm: 5.03 req/s, 200.089 ms/req, 50% within 1985 ms | 14.9233 fetches/s, first response 150.145 / 586.443 ms

DB Cache Reloaded (out of box config)
  cold: 5.06 req/s, 197.636 ms/req, 50% within 1968 ms | 14.9697 fetches/s, first response 174.857 / 589.161 ms
  warm: 5.08 req/s, 196.980 ms/req, 50% within 1968 ms | 15.181 fetches/s, first response 257.533 / 587.737 ms

Widgetcache
  cold: 6.667 req/s, 149.903 ms/req, 50% within 1492 ms | 15.0264 fetches/s, first response 245.332 / 602.039 ms
  warm: 6.72 req/s, 148.734 ms/req, 50% within 1487 ms | 15.1887 fetches/s, first response 299.65 / 598.017 ms

W3 Total Cache (DB Cache off, Page Caching with Memcached)
  cold: 153.45 req/s, 65.167 ms/req, 50% within 60 ms | 133.1898 fetches/s, first response 8.916 / 85.7177 ms
  warm: 169.46 req/s, 59.011 ms/req, 50% within 57 ms | 188.4 fetches/s, first response 9.107 / 50.142 ms

W3 Total Cache (DB Cache off, Minify Cache with Memcached)
  cold: 173.49 req/s, 57.639 ms/req, 50% within 52 ms | 108.898 fetches/s, first response 7.668 / 86.4077 ms
  warm: 189.76 req/s, 52.698 ms/req, 50% within 48 ms | 203.522 fetches/s, first response 8.122 / 43.8795 ms

W3 Total Cache (DB Cache using Memcached)
  cold: 171.34 req/s, 58.364 ms/req, 50% within 50 ms | 203.718 fetches/s, first response 8.097 / 44.1234 ms
  warm: 190.01 req/s, 52.269 ms/req, 50% within 48 ms | 206.187 fetches/s, first response 8.186 / 42.4438 ms

W3 Total Cache (out of box config)
  cold: 175.29 req/s, 57.048 ms/req, 50% within 48 ms | 87.423 fetches/s, first response 7.515 / 107.973 ms
  warm: 191.15 req/s, 52.314 ms/req, 50% within 47 ms | 204.387 fetches/s, first response 8.288 / 43.217 ms

W3 Total Cache (Database Cache using APC)
  cold: 175.29 req/s, 57.047 ms/req, 50% within 51 ms | 204.557 fetches/s, first response 8.199 / 42.9365 ms
  warm: 191.19 req/s, 52.304 ms/req, 50% within 48 ms | 200.612 fetches/s, first response 8.11 / 44.6691 ms

W3 Total Cache (Database Cache Disabled)
  cold: 114.02 req/s, 87.703 ms/req, 50% within 49 ms | 114.393 fetches/s, first response 8.206 / 82.0678 ms
  warm: 191.76 req/s, 52.150 ms/req, 50% within 49 ms | 203.781 fetches/s, first response 8.095 / 42.558 ms

W3 Total Cache (Database Cache Disabled, Minify Cache using APC)
  cold: 175.80 req/s, 56.884 ms/req, 50% within 51 ms | 107.842 fetches/s, first response 7.281 / 87.2761 ms
  warm: 192.01 req/s, 52.082 ms/req, 50% within 50 ms | 205.66 fetches/s, first response 8.244 / 43.1231 ms

W3 Total Cache (Database Cache Disabled, Page Caching using APC)
  cold: 104.90 req/s, 95.325 ms/req, 50% within 51 ms | 123.041 fetches/s, first response 7.868 / 74.5887 ms
  warm: 197.55 req/s, 50.620 ms/req, 50% within 46 ms | 210.445 fetches/s, first response 7.907 / 41.4102 ms

WP Super Cache (out of box config, Half On)
  cold: 336.88 req/s, 2.968 ms/req, 50% within 16 ms | 15.1021 fetches/s, first response 335.708 / 583.363 ms
  warm: 391.59 req/s, 2.554 ms/req, 50% within 16 ms | 15.1712 fetches/s, first response 304.446 / 583.684 ms

WP Cache
  cold: 161.63 req/s, 6.187 ms/req, 50% within 12 ms | 15.1021 fetches/s, first response 335.708 / 583.363 ms
  warm: 482.29 req/s, 20.735 ms/req, 50% within 11 ms | 15.1712 fetches/s, first response 304.446 / 583.684 ms

WP Super Cache (Full on, Lockdown mode)
  cold: 919.11 req/s, 1.088 ms/req, 50% within 3 ms | 190.117 fetches/s, first response 1.473 / 47.9367 ms
  warm: 965.69 req/s, 1.036 ms/req, 50% within 3 ms | 975.979 fetches/s, first response 1.455 / 9.67185 ms

WP Super Cache (Full on)
  cold: 928.45 req/s, 1.077 ms/req, 50% within 3 ms | 210.106 fetches/s, first response 1.468 / 43.8167 ms
  warm: 970.45 req/s, 1.030 ms/req, 50% within 3 ms | 969.256 fetches/s, first response 1.488 / 9.78753 ms

W3 Total Cache (Page Cache using Disk Enhanced)
  cold: 1143.94 req/s, 8.742 ms/req, 50% within 2 ms | 165.547 fetches/s, first response 0.958 / 56.7702 ms
  warm: 1222.16 req/s, 8.182 ms/req, 50% within 3 ms | 1290.43 fetches/s, first response 0.961 / 7.15632 ms

W3 Total Cache (Page Caching – Disk Enhanced, Minify/Database using APC)
  cold: 1153.50 req/s, 8.669 ms/req, 50% within 3 ms | 165.725 fetches/s, first response 0.916 / 56.5004 ms
  warm: 1211.22 req/s, 8.256 ms/req, 50% within 2 ms | 1305.94 fetches/s, first response 0.948 / 6.97114 ms

Varnish ESI
  cold: 2304.18 req/s, 0.434 ms/req, 50% within 4 ms | 349.351 fetches/s, first response 0.221 / 28.1079 ms
  warm: 2243.33 req/s, 0.44689 ms/req, 50% within 4 ms | 4312.78 fetches/s, first response 0.152 / 2.09931 ms

WP Varnish
  cold: 1683.89 req/s, 0.594 ms/req, 50% within 3 ms | 369.543 fetches/s, first response 0.155 / 26.8906 ms
  warm: 3028.41 req/s, 0.330 ms/req, 50% within 3 ms | 4318.48 fetches/s, first response 0.148 / 2.15063 ms

Test Script

#!/bin/sh

# number of requests and concurrency used for both ab and http_load
FETCHES=1000
PARALLEL=10

# cold start: restart Apache and MySQL, then let everything settle for 30 seconds
/usr/sbin/apache2ctl stop
/etc/init.d/mysql restart
apache2ctl start
echo Sleeping
sleep 30

# the first run of each tool is the cold-start measurement, the second run is the
# warm measurement; http_load reads its URL list from the "wordpresstest" file
time ( \
echo First Run; \
ab -n $FETCHES -c $PARALLEL http://example.com/; \
echo Second Run; \
ab -n $FETCHES -c $PARALLEL http://example.com/; \
\
echo First Run; \
./http_load -parallel $PARALLEL -fetches $FETCHES wordpresstest; \
echo Second Run; \
./http_load -parallel $PARALLEL -fetches $FETCHES wordpresstest; \
)

URL File for http_load

http://example.com/
http://example.com/2010/03/hello-world/
http://example.com/2008/09/layout-test/
http://example.com/2008/04/simple-gallery-test/
http://example.com/2007/12/category-name-clash/
http://example.com/2007/12/test-with-enclosures/
http://example.com/2007/11/block-quotes/
http://example.com/2007/11/many-categories/
http://example.com/2007/11/many-tags/
http://example.com/2007/11/tags-a-and-c/
http://example.com/2007/11/tags-b-and-c/
http://example.com/2007/11/tags-a-and-b/
http://example.com/2007/11/tag-c/
http://example.com/2007/11/tag-b/
http://example.com/2007/11/tag-a/
http://example.com/2007/09/tags-a-b-c/
http://example.com/2007/09/raw-html-code/
http://example.com/2007/09/simple-markup-test/
http://example.com/2007/09/embedded-video/
http://example.com/2007/09/contributor-post-approved/
http://example.com/2007/09/one-comment/
http://example.com/2007/09/no-comments/
http://example.com/2007/09/many-trackbacks/
http://example.com/2007/09/one-trackback/
http://example.com/2007/09/comment-test/
http://example.com/2007/09/a-post-with-multiple-pages/
http://example.com/2007/09/lorem-ipsum/
http://example.com/2007/09/cat-c/
http://example.com/2007/09/cat-b/
http://example.com/2007/09/cat-a/
http://example.com/2007/09/cats-a-and-c/

Using Varnish to assist with AB Testing

Thursday, February 25th, 2010

While working on a recent client project, the client mentioned AB testing a few designs. I enjoy statistics, so we looked at Google’s Website Optimizer to track trials and conversions. After some internal testing, we opted to use Funnels and Goals rather than the AB or Multivariate test. I had little control over the origin server, but I did have control over the front-end cache.

It reminded me of a situation I encountered years ago. A client had an in-house web designer and a subcontracted web designer. I felt the subcontractor’s design would convert better. The client wasn’t completely convinced, but agreed to run the two designs head to head. However, their implementation of the test biased the results.

What went wrong?

Each design ran for a week, in series. While this provided ample time to gather data, the in-house designer’s page ran during a national holiday with a three-day weekend, and the subcontractor’s page ran the following week. Internet traffic patterns, the holiday weekend, weather, sporting events, TV and movie premieres, and so on introduced so many variables that the results should have been thrown out.

Since Google’s AB testing has session persistence and splits traffic between the variations, we need to emulate that behavior. I cringe when people run AB tests in series rather than in parallel, or switch pages with a cron job or some other automated method. A test at 5pm EST and a test at 6pm EST will yield different results: at 5pm EST your target audience could be driving home from work, while at 6pm EST they could be sitting down for dinner.

How can Varnish help?

If we let Varnish select the landing/offer page outside the origin server’s control, both tests run at the same time. An internet logjam in Seattle, WA would affect both tests evenly; likewise, a national or worldwide event would affect both tests equally. Now that we know how to keep the AB test fairly balanced, we have to implement it.

Redirection sometimes plays havoc with browsers and spiders, so we’ll rewrite the URL inside Varnish using some inline C and VCL. Google uses JavaScript and a document.location call to send some visitors to the B/alternate page, so users who have JavaScript disabled only ever see the primary page.

Our Varnish config file contains the following:

C{
  /* needed for sprintf() and rand() in the inline C below */
  #include <stdlib.h>
  #include <stdio.h>
}C

sub vcl_recv {
  if (req.url == "/") {
    C{
      /* pick bucket 1 or 2 at random and stash it in a request header */
      char buff[5];
      sprintf(buff, "%d", rand() % 2 + 1);
      VRT_SetHdr(sp, HDR_REQ, "\011X-ABtest:", buff, vrt_magic_string_end);
    }C
    # req.url already begins with "/", so this rewrites "/" to "/1/" or "/2/"
    set req.url = "/" req.http.X-ABtest req.url;
  }
}

We’ve placed our landing pages in the /1/ and /2/ directories on our origin server. The only page Varnish intercepts is the index page at the root of the site. Varnish randomly chooses to serve the index page from /1/ or /2/, internally rewrites the URL, and serves it from the cache or the origin server. Since the URL rewriting happens in vcl_recv, subsequent requests for the rewritten page are answered from cache without hitting the origin. The same method can be used to test landing pages that aren’t at the root of your site by modifying the if (req.url == "/") condition.
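
For example, to split a landing page that lives at a hypothetical /landing.html instead of the site root, only the match changes:

  if (req.url == "/landing.html") {
    C{
      char buff[5];
      sprintf(buff, "%d", rand() % 2 + 1);
      VRT_SetHdr(sp, HDR_REQ, "\011X-ABtest:", buff, vrt_magic_string_end);
    }C
    # /landing.html becomes /1/landing.html or /2/landing.html
    set req.url = "/" req.http.X-ABtest req.url;
  }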

You can test multi-page offers by placing additional pages, along with the signup form, within the /1/ and /2/ directories on your origin. Unlike Google’s AB test, Varnish does not provide session persistence: reloading the root page will bounce the surfer randomly between the two test pages, and subsequent pages need to be loaded from /1/ or /2/ based on which landing page was selected.

When doing any AB test, change as few variables as possible, document the changes, and analyze the difference between the results. Collecting at least 1000 views of each variation is an absolute minimum. While Google’s Multivariate test provides many more options, a simple AB test between two pages or site tours can give some insight into what works rather easily.

If you cannot use Google’s AB Test or the Multivariate Test, using their Funnels and Goals tool will still allow you to do AB Testing.

Varnish VCL, Inline C and a random image

Thursday, February 18th, 2010

While working on the prototype of a site, I wanted a particular panel image to be chosen at random each time the page was viewed. This could be done on the server side, but I wanted to move it into Varnish so that Varnish’s cache would be used rather than piping the request through to the origin server every time.

At the top of /etc/varnish/default.vcl we add:

C{
  /* make sprintf() and rand() available to the inline C below */
  #include <stdlib.h>
  #include <stdio.h>
}C

and our vcl_recv function gets the following:

  if (req.url ~ "^/panel/") {
    C{
      /* pick a random panel number between 0 and 3 and stash it in a request header */
      char buff[5];
      sprintf(buff, "%d", rand() % 4);
      VRT_SetHdr(sp, HDR_REQ, "\010X-Panel:", buff, vrt_magic_string_end);
    }C
    # /panel/random.jpg -> /panel/random.ZZZZ.jpg -> /panel/random.<n>.jpg
    set req.url = regsub(req.url, "^/panel/(.*)\.(.*)$", "/panel/\1.ZZZZ.\2");
    set req.url = regsub(req.url, "ZZZZ", req.http.X-Panel);
  }

The above code lets us reference the image in the HTML document as:

<img src="/panel/random.jpg" width="300" height="300" alt="Panel Image"/>

Since we modify the request URI in vcl_recv before the object is cached, subsequent requests for the same rewritten URI are served from Varnish’s cache without another fetch from the origin server. Depending on the rest of your VCL and your preferences, you can set a long expiry, strip cookies, or do ESI processing on these objects. And since the regexp passes the extension through, you could just as easily randomize .html, .css, .jpg, or any other extension you desire.
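
A sketch of that kind of tuning, assuming Varnish 2.1 or later where the backend response is exposed as beresp (older releases used obj here); the one-week TTL is just an example value:

sub vcl_fetch {
  if (req.url ~ "^/panel/") {
    # cache the randomized panel images for a long time and ignore any cookies
    remove beresp.http.Set-Cookie;
    set beresp.ttl = 7d;
  }
}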

In the panel directory, you would need to have:

/panel/random.0.jpg
/panel/random.1.jpg
/panel/random.2.jpg
/panel/random.3.jpg

one of which Varnish will serve whenever the URL /panel/random.jpg is requested.

Moving this process into Varnish should cut down on the load on the origin server while keeping your site looking active and dynamic.
