Archive for the ‘Webserver Software’ Category

DDOS attack mitigation

Monday, April 26th, 2010

Today we had a DDOS attack on one of our clients. They were running Apache prefork with mod_php5 for a rather busy application. While we initially started filtering IP addresses using iptables and a few crudely hacked rules, we knew something a little more permanent had to be done. Moving to MPM-Worker with PHP served via FastCGI seemed reasonable, but, looking at the history of the attacks on this machine, I believe Apache would still have been vulnerable since we cannot filter the requests early enough in Apache's request handling.

Apache does have the ability to fight some DDOS attacks using mod_security and mod_evasive, but this particular attack was designed to affect Apache before the point where these modules hook into the request. This also precludes using fail2ban. We could run mod_forensic or mod_logio to assist fail2ban, but it would still be a stopgap measure.

We could have used some Snort rules and tied those to iptables, but, that is a rather bad solution to the problem.

While we could have used Varnish, their application would have had some issues. mod_rpaf can help by adjusting REMOTE_ADDR to take the value from the X-Forwarded-For header that Varnish sets, but mod_geoip inserts itself before mod_rpaf, so we would have needed to modify mod_geoip and recompile it. I'm also not sure how Varnish would have handled Slowloris, and we needed a fix immediately.

Putting them behind a Layer 7 load balancer would have isolated the origin server and handled the brunt of the attack on the load balancer, but, again we would have needed mod_rpaf and some modifications to their code.

In the end, Lighttpd and Nginx appeared to be the only documented solutions. After the conversion, we did find documentation saying that Varnish and Squid were immune to Slowloris. With Nginx or Lighttpd we didn't have IP address issues to contend with, and it would be easy enough to modify the FastCGI config to pass the GeoIP information in the same request variable that their application expected (a rough sketch of that piece follows the config below). We knew we had to run PHP under FastCGI anyway, so we might as well pick a webserver that could block the attack itself rather than relying on firewall rules. We did still put a few firewall rules in place to block the larger offenders.

In the http { } section of our nginx config, we added:

    # drop slow or stalled clients quickly
    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 10;
    send_timeout 10;
    # shared 16m zone tracking connections per client IP
    limit_zone limit_per_ip $binary_remote_addr 16m;

and in the server { } section, we added:

    limit_conn limit_per_ip 5;

Since the application only expected one or two simultaneous requests from each IP, a limit of five connections gave us a little headroom while solving the problem in the right place.
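For the GeoIP hand-off mentioned above, a minimal sketch looks something like the following, assuming nginx is built with the GeoIP module; the database path, socket path, and the GEOIP_COUNTRY_CODE parameter name are placeholders rather than the client's actual values:

    # in the http { } section: load the country database
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    # in the PHP location block: hand the lookup result to PHP over FastCGI
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_param  GEOIP_COUNTRY_CODE  $geoip_country_code;
        fastcgi_pass   unix:/tmp/php-fastcgi.socket;
    }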

I believe Varnish would have held the connection open without sending a request to the backend, which makes it a fairly versatile tool for dealing with DDOS attacks. While I do like the ability to block certain requests in VCL, the documented methods for fighting this type of attack appeared to favor an event-driven webserver. Varnish in front of Apache would have worked, but we already knew we needed to move this client off Apache at some point, and this gave us an opportunity to make the shift while under the gun.
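For reference, blocking specific requests in VCL looks something like this (a sketch only; the network and user-agent string are placeholders, not details of this attack):

acl banned {
  "192.0.2.0"/24;
}

sub vcl_recv {
  # refuse known-bad clients before the request ever reaches the backend
  if (client.ip ~ banned || req.http.User-Agent ~ "BadBot") {
    error 403 "Forbidden";
  }
}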

Wouldn’t have had it any other way.

Using Varnish to assist with AB Testing

Thursday, February 25th, 2010

On a recent client project, the client mentioned wanting to AB test a few designs. Since I enjoy statistics, we looked at Google's Website Optimizer to track trials and conversions. After some internal testing, we opted to use Funnels and Goals rather than the AB or Multivariate test. I had little control over the origin server, but I did have control over the front-end cache.

This reminded me of a situation I encountered years ago. A client had an in-house web designer and a subcontracted web designer. I felt the subcontractor's design would convert better. The client wasn't completely convinced, but agreed to running the two designs head to head. However, their implementation of the test biased the results.

What went wrong?

Each design was run for a week, in series. While this provided ample time for gathering data, the in-house designer's design ran during a national holiday with a three-day weekend, and the subcontractor's design ran the following week. Internet traffic patterns, the holiday weekend, weather, sporting events, TV/movie premieres, etc. added so many variables that the results should have been considered invalid.

Since Google's AB Testing has session persistence and splits traffic between the A and B tests, we need to emulate this behavior. When people run AB tests in series rather than in parallel, or switch pages with a cron job or some other automated method, I cringe. A test at 5pm EST and one at 6pm EST will yield different results: at 5pm EST, your target audience could be driving home from work; at 6pm EST they could be sitting down for dinner.

How can Varnish help?

If we allow Varnish to select the landing page/offer page outside the origin server's control, we can run both tests at the same time. An internet logjam in Seattle, WA would affect both tests evenly. Likewise, a national or worldwide event would affect both tests equally. Now that we know how to make sure the AB test is fairly balanced, we have to implement it.

Redirection sometimes plays havoc with browsers and spiders, so we'll rewrite the URL within Varnish using some Inline C and VCL. Google uses JavaScript and a document.location call to send some visitors to the B/alternate page; users with JavaScript disabled will only ever see the primary page.

Our Varnish config file contains the following:

C{
  #include <stdlib.h>
  #include <stdio.h>
}C

sub vcl_recv {
  if (req.url == "/") {
    C{
      /* pick 1 or 2 at random and stash it in the X-ABtest request header */
      char buff[5];
      sprintf(buff,"%d",rand()%2 + 1);
      VRT_SetHdr(sp, HDR_REQ, "\011X-ABtest:", buff, vrt_magic_string_end);
    }C
    # req.url is "/", so this becomes "/1/" or "/2/"
    set req.url = "/" req.http.X-ABtest req.url;
  }
}

We've placed our landing pages in the /1/ and /2/ directories on our origin server. The only page Varnish intercepts is the index page at the root of the site. Varnish randomly chooses to serve the index page from /1/ or /2/, internally rewrites our URL, and serves it from the cache or the origin server. Since the URL rewriting is done within vcl_recv before the object is cached, subsequent requests for the rewritten URL are served from the cache rather than the origin. The same method can be used to test landing pages that aren't at the root of your site by modifying the if (req.url == "/") { condition.

You can test multipage offers by placing additional pages within the /1/ and /2/ directories on your origin along with the signup form. Unlike Google's AB Test, Varnish does not support session persistence: reloading the root page will bounce the surfer between the two test pages at random, and subsequent pages need to be linked relative to /1/ or /2/ based on which landing page was selected. A rough way to pin a visitor to one variant is sketched below.
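If the lack of persistence is a problem, one approach (a sketch only, not part of this deployment; the abtest cookie name is an assumption, and the landing pages themselves would have to set the cookie) is to honor the cookie in vcl_recv so the random selection only runs for first-time visitors:

sub vcl_recv {
  # visitors who already carry the cookie stay on their variant
  if (req.url == "/" && req.http.Cookie ~ "abtest=1") {
    set req.url = "/1/";
  } elsif (req.url == "/" && req.http.Cookie ~ "abtest=2") {
    set req.url = "/2/";
  }
  # everyone else falls through to the random Inline C selection above
}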

When doing any AB test, change as few variables as possible, document the changes, and analyze the difference between the results. At least 1000 views of each variant is an absolute minimum. While Google's Multivariate test provides a lot more options, a simple AB test between two pages or site tours can quite easily give some insight into what works.

If you cannot use Google’s AB Test or the Multivariate Test, using their Funnels and Goals tool will still allow you to do AB Testing.

Varnish VCL, Inline C and a random image

Thursday, February 18th, 2010

While working on the prototype of a site, I wanted a particular panel image chosen at random each time the page was viewed. This could be done on the server side, but I wanted to move it into Varnish so that the images would be served from Varnish's cache rather than piping each request through to the origin server.

At the top of /etc/varnish/default.vcl, we add:

C{
  #include <stdlib.h>
  #include <stdio.h>
}C

and our vcl_recv function gets the following:

  if (req.url ~ "^/panel/") {
    C{
      char buff[5];
      sprintf(buff,"%d",rand()%4);
      VRT_SetHdr(sp, HDR_REQ, "\010X-Panel:", buff, vrt_magic_string_end);
    }C
    set req.url = regsub(req.url, "^/panel/(.*)\.(.*)$", "/panel/\1.ZZZZ.\2");
    set req.url = regsub(req.url, "ZZZZ", req.http.X-Panel);
  }

The above code allows us to reference the image in the HTML document as:

<img src="/panel/random.jpg" width="300" height="300" alt="Panel Image"/>

Since we have modified the request URI in vcl_recv before the object is cached, subsequent requests for the same modified URI will be served from Varnish's cache without requiring another fetch from the origin server. Depending on the rest of your VCL and your preferences, you can specify a long expire time (a sketch follows), remove cookies, or do ESI processing. Since the regexp passes the extension through, the same trick works for .html, .css, .jpg or any other extension you desire.
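For example, to give the panel images a longer lifetime than your default, a minimal vcl_fetch addition might look like this (the 24-hour TTL is an arbitrary choice, and this would need to be merged with any existing vcl_fetch logic):

sub vcl_fetch {
  if (req.url ~ "^/panel/") {
    # keep the randomly chosen panel images cached for a day
    set obj.ttl = 86400s;
  }
}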

In the /panel/ directory on the origin, you would need to have:

/panel/random.0.jpg
/panel/random.1.jpg
/panel/random.2.jpg
/panel/random.3.jpg

one of which would be served by Varnish whenever the URL /panel/random.jpg is requested.

Moving that process to Varnish should cut down on the load on the origin server while making your site look active and dynamic.

Varnish and Nginx with Joomla

Sunday, June 28th, 2009

Recently we had a client that had some performance issues with a Joomla installation. The site wasn’t getting an incredible amount of traffic, but, the traffic it was getting was just absolutely overloading the server.

Since the machine hadn't been having issues before, the first thing we did was contact the client and ask what had changed. We already knew which site and database were using most of the CPU time, but the bandwidth graph didn't suggest that traffic was overrunning the server. Our client had rescued this site from another hosting company because it was unusable during prime time, so we had inherited a problem. During the move, the site had been upgraded from Joomla 1.0 to 1.5, so we didn't even have a decent baseline to revert to.

The stopgap solution was to move the .htaccess mod_rewrite rules into the Apache configuration, which helped somewhat. We identified a few sections of the site that were getting hit really hard and wrote a mod_rewrite rule to serve those images directly from disk, bypassing Joomla, which had been serving them through itself. This made a large impact and at least got the site responsive enough that we could leave it online and work through the admin to figure out what had gone wrong.
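The rule was along these lines (a reconstruction rather than the exact rule used; the extensions are examples):

RewriteEngine On
# if the requested image exists on disk, stop rewriting and let Apache
# serve it directly instead of routing the request through index.php
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule \.(png|gif|jpe?g)$ - [L]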

Some of the modules that had been enabled contributed quite a bit to the performance headache. One chat module generated a 404 every second for each logged-in person to check whether there were any pending messages, and since the full Joomla stack is loaded to generate each 404, this added quite a bit of extra processing. Another quick modification to the configuration eliminated dozens of bad requests. At this point, the server is responsive, the client is happy, and we make notes in the trouble ticket system and our internal documentation for reference.

Three days later the machine alerts again and our load problem is back. After all of the changes, something is still having problems. Upon deeper inspection, we find that the portions of the system dealing with the menus are being regenerated on every request. There's no built-in caching, so the decision is to try Varnish. Varnish has worked in the past for WordPress sites that were getting hit hard, so we figured that if we could cache the images, CSS and some of the static pages that don't require authentication, we could get the server responsive again.

Apart from the basic configuration, our varnish.vcl file looked like this:

sub vcl_recv {
  # normalize the host header so www and non-www share one cache
  if (req.http.host ~ "^(www\.)?domain\.com$") {
    set req.http.host = "domain.com";
  }

  # strip cookies from static assets so they can be cached
  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    unset req.http.cookie;
  }
}

sub vcl_fetch {
  # cache pages briefly, static assets for an hour
  set obj.ttl = 60s;
  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    set obj.ttl = 3600s;
  }
}

To get the Apache logs to report the real client IP, you need to modify the VirtualHost config to log the X-Forwarded-For header that Varnish passes along.
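Something along these lines should work (the format name and log path here are examples):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" varnishcombined
CustomLog /var/log/apache2/domain.com-access.log varnishcombined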

The performance of the site with Varnish running in front of Apache was quite good. Apache was left handling only the .php requests and the server was again responsive. It ran like this for a week or more without any issues, with only a slight load spike here or there.

However, Joomla doesn't like the fact that every request's REMOTE_ADDR is 127.0.0.1, and some addons stop working. In particular, an application that allows the client to upload .pdf files into a library requires a valid client IP address for some reason, and another module that adds a sub-administration panel for a manager/editor also requires an IP address other than 127.0.0.1.

With some reservations, we decide to switch to Nginx + FastCGI, which removes the reverse proxy and should fix the IP address problems.

Our configuration for Nginx with Joomla:

server {
    listen      66.55.44.33:80;
    server_name www.domain.com;
    # redirect www to the bare domain
    rewrite ^(.*) http://domain.com$1 permanent;
}

server {
    listen      66.55.44.33:80;
    server_name domain.com;

    access_log  /var/log/nginx/domain.com-access.log;

    location / {
        root   /var/www/domain.com;
        index  index.html index.htm index.php;

        # Joomla SEF urls: hand anything that doesn't exist on disk to index.php
        if ( !-e $request_filename ) {
            rewrite (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ /index.php last;
        }
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /var/www/nginx-default;
    }

    location ~ \.php$ {
        fastcgi_pass   unix:/tmp/php-fastcgi.socket;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /var/www/domain.com/$fastcgi_script_name;
        include        fastcgi_params;
    }

    # the chat module polls for message files that usually don't exist;
    # answer those 404s here instead of loading Joomla for each request
    location ^~ /modules/mod_oneononechat/chatfiles/ {
        root /var/www/domain.com;
        if ( !-e $request_filename ) {
            return 404;
        }
    }
}

With this configuration, Joomla is handed any URL that doesn't correspond to a file on disk, which allows the Search Engine Friendly (SEF) links to work. The second 404 handler covers the oneononechat module, which polls for message files destined for the logged-in user.

With Nginx, the site is again responsive. Load spikes still occur from time to time, but the site is stable, has a lot less trouble dealing with the load, and recovers from the spikes pretty well.

However, a module called Rokmenu, which was included with the template design, appears to have issues. Running PHP behind FastCGI sometimes gives different results than running under mod_php, and it appears that Rokmenu relies on the script path passed by the server and doesn't normalize it properly. So, when the menu is generated, with SEF on or off, URLs come out looking like /index.php/index.php/index.php/components/com_docman/themes/default/images/icons/16x16/pdf.png.

Obviously this creates a broken link and causes more 404s. We installed a fresh Joomla on Apache, imported the data from the copy running on Nginx, and Apache with mod_php appears to work properly. However, the performance is quite poor.

To troubleshoot, we made a list of every addon and ran through some debugging. With apachebench, we wrote up a quick command line that could be pasted in at the ssh prompt and decided on some metrics. Within minutes, our first test revealed 90% of our performance issue: two of the addons required compatibility mode because they were written for Joomla 1.0 and had never been updated, and turning compatibility mode on with our freshly installed site resulted in 10x worse performance. As a test, we disabled the two modules that relied on compatibility mode, turned compatibility mode off, and the load dropped immensely. We had disabled SEF early on thinking it might be the issue, but enabling other modules in subsequent tests showed only marginal performance changes. Compatibility mode was our culprit the entire time.
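For reference, the benchmark was simply repeated apachebench runs against the front page while toggling one addon at a time; something along these lines (the request counts and hostname are placeholders, not the exact command used):

# 100 requests, 10 concurrent, against the front page; compare the
# "Requests per second" line between runs
ab -n 100 -c 10 http://domain.com/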

The client started searching for replacements for the two modules that required compatibility mode and disabled them temporarily while we moved the site back to Apache to fix the URL issue in Rokmenu. At this point, the site was responsive, though pageloads with lots of images were not as quick as they had been with Nginx or Varnish. At some later point, images and static files will be served from Nginx or Varnish, but for now the site is fairly responsive and handles the load spikes reasonably well when Googlebot or another spider hits.

In the end, the site ended up back on Apache because Varnish and Nginx each had minor issues with this particular deployment. Moving to Apache alternatives doesn't always fix everything and may introduce side effects that you cannot work around.

Flash Media Encoder and Red5

Wednesday, June 17th, 2009

Using the HTML from the flowplayer and red5 post, you need to change the url in the javascript to:

url: 'red5StreamDemo', live: true

You will need to use Flash Media Encoder version 2.5, which can be downloaded at adobe.com. Flash Media Encoder version 3.0 will not work with red5 and, based on developer comments, will probably not be supported.

Note that Flash Media Encoder's license prohibits using it with anything other than Flash Media Server.

Here is a screenshot of the config required to get Flash Media Encoder 2.5 to work with red5. Replace yourhostname with the name of your red5 server.

[Screenshot: Flash Media Encoder version 2.5 connecting to red5]

After you’ve configured the FMS Url and the Stream, click the Connect button, and then the Start button at the bottom to start encoding.
