Posts Tagged ‘Apache’

Hackers bypass .htaccess security by using GETS rather than GET

Friday, December 10th, 2010

Last night I received an urgent message from a client: "My machine has been hacked, someone got into the admin area, I need all of the details from this IP."

So, I grepped the logs, grabbed the appropriate entries and saw something odd.

1.2.3.4 - - [09/Dec/2010:22:15:41 -0500] "GETS /admin/index.php HTTP/1.1" 200 3505 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12"
1.2.3.4 - - [09/Dec/2010:22:17:09 -0500] "GETS /admin/usermanagement.php HTTP/1.1" 200 99320 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12"
1.2.3.4 - - [09/Dec/2010:22:18:05 -0500] "GETS /admin/index.php HTTP/1.1" 200 3510 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12"

A modified snippet of the .htaccess file:

AuthUserFile .htpasswd
AuthName "Protected Area"
AuthType Basic

<Limit GET POST>
require valid-user
</Limit>

Of course, we know GETS isn't a valid method, so why is Apache handing out 200 status codes and content lengths that look legitimate? The area was password protected by the .htaccess, and with some quick keyboard work we confirmed the system properly prompts for Basic Authentication when given a well-formed HTTP/1.0 request. Removing the <Limit> restriction from the .htaccess protects the site, but why are these other methods able to pass through? Replacing GETS with anything other than POST, PUT, DELETE, TRACK, TRACE, OPTIONS or HEAD results in Apache treating the request as if GET had been sent.

Let’s set up a duplicate environment on another machine to figure out what Apache is doing.

tsavo:~ mcd$ telnet devel.mia 80
Trying x.x.x.x...
Connected to xxxxxxx.xxx.
Escape character is '^]'.
GET /htpasstest/ HTTP/1.0    

HTTP/1.1 401 Authorization Required
Date: Fri, 10 Dec 2010 21:29:58 GMT
Server: Apache
WWW-Authenticate: Basic realm="Protected Area"
Vary: Accept-Encoding
Content-Length: 401
Connection: close
Content-Type: text/html; charset=iso-8859-1

Let’s try what they did:

tsavo:~ mcd$ telnet devel.mia 80
Trying x.x.x.x...
Connected to xxxxxxx.xxx.
Escape character is '^]'.
GETS /htpasstest/ HTTP/1.0

HTTP/1.1 501 Method Not Implemented
Date: Fri, 10 Dec 2010 21:53:58 GMT
Server: Apache
Allow: GET,HEAD,POST,OPTIONS,TRACE
Vary: Accept-Encoding
Content-Length: 227
Connection: close
Content-Type: text/html; charset=iso-8859-1

Odd: this is the behavior we expected, but not what we were seeing on the client's machine. Digging a little further, we looked at the differences and began to suspect the machine may have been compromised. The first suspect was mod_negotiation – probably not. mod_actions, maybe, but no. DAV wasn't loaded, but Zend Optimizer was installed on the machine that appeared to have been exploited. Repeating the test above on the client's machine resulted in… exactly the same behavior, Method Not Implemented. Testing the directory that was exploited, however, serves the GETS request as if it were a GET.

So, now we've got a particular domain on the machine that is not behaving as the config files would suggest. A quick test on the original domain and, as expected, GETS responds with the data and bypasses the authorization clause in the .htaccess. Let's try one more test:


# telnet xxxxxx.mia 80
Trying x.x.x.x...
Connected to xxxxxx.xxx.
Escape character is '^]'.
GETS /zend.php HTTP/1.1
Host: xxxxxx.xxx

HTTP/1.1 200 OK
Date: Fri, 10 Dec 2010 22:02:33 GMT
Server: Apache
Vary: Accept-Encoding
Content-Length: 602
Connection: close
Content-Type: text/html

Bingo. A Zend encoded file was served with a 200 status even though the request used an invalid method.

The solution in this case was simple: remove the <Limit> clause from the .htaccess.
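A corrected version of the snippet above is simply the same directives without the <Limit> wrapper, so the authentication requirement applies to every request method, valid or not:

AuthUserFile .htpasswd
AuthName "Protected Area"
AuthType Basic
require valid-user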

The question is: is Zend Optimizer actually doing the proper thing here? Watching Apache with gdb, Zend Optimizer does appear to hook the Apache request handler a bit higher, but why is it attempting to correct an invalid request?

One of the first rules of input validation is to validate and reject on error. Never try to correct the data and then accept it; if you try to correct it and make a mistake, you're just as vulnerable, and attackers will probe for those patterns and add extra escaping to their requests. In this case, only a few pages could actually be displayed, since the application checked that forms were POSTed. But the <Limit> in the .htaccess that should have protected the application didn't work as expected, because the invalid method wasn't among those specified.

As so many applications on the web generate .htaccess protection with the Limit clause, it makes me wonder how many Zend encoded applications are vulnerable. Take a minute to check your systems.
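A quick spot-check, assuming curl is handy and using a hypothetical URL: send a request with a bogus method and look at the status line. A 401 means the protection is holding; a 200 means the invalid method is slipping past the <Limit> block.

curl -si -X GETS http://www.example.com/admin/index.php | head -n 1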

Nginx to Apache?

Sunday, May 23rd, 2010

A few months ago we had a client who wanted to run Nginx/FastCGI rather than Apache because it was known to be faster. While we've had extensive experience performance-tuning various webserver combinations, the proposed workload would really have been better served by Apache. We inherited this problem from another hosting company — he moved because they couldn't fix the performance issues — but he maintained that Nginx/FastCGI for PHP was the fastest, based on the benchmarks that had been run on the internet.

While converting from one webserver to another is usually painful, much of the pain can be avoided by running Apache on an alternate port, testing, and then swapping the configurations around. The graph below shows when we changed from Nginx to Apache:

We made the conversion from Nginx to Apache on Friday. Once we made the conversion, there were issues with the machine, which was running an older kernel. After reviewing the workload, we migrated from 2.6.31.1 with the Anticipatory scheduler to 2.6.34 with the Deadline scheduler. Three other machines had been running 2.6.33.1 with the CFQ scheduler and showed no issues at the 10mb/sec mark, but we felt we should benchmark this workload under Deadline. We've run a number of high-end webservers with both Anticipatory and CFQ prior to 2.6.33, and for most of our workloads Anticipatory seemed to win. With 2.6.33, Anticipatory was removed, leaving NOOP, CFQ and Deadline. While we have a few MySQL servers running Deadline, this is probably the first heavy-use webserver that we've moved from CFQ/AS to Deadline.
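For reference, the I/O scheduler can be checked and switched on the fly through sysfs; this is only a sketch, with sda standing in for whatever device backs the site, and the change doesn't persist across a reboot (that takes the elevator= boot parameter or a distribution setting):

# show the available schedulers, the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch the device to the deadline scheduler at runtime
echo deadline > /sys/block/sda/queue/scheduler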

The dips in the daily graph were during times where a cron job was running. The two final dips were during the kernel installation.

All in all, the conversion went well. The machine never really appeared to be slow, but it is obvious that it is now handling more traffic. The load averages are roughly the same as they were before. CPU utilization is roughly the same but, more importantly, disk I/O is about half what it was and system CPU time now hovers around 3-4%. During the hourly cron job, the machine no longer struggles the way it did before.

Nginx isn't always the best solution. In this case, 100% of the traffic is a PHP script generating an 8k-21k page for each visitor. Static content is served from another machine running Nginx.

While I do like Nginx, it is always best to use the right tool for the job. In this case, Apache happened to be the right tool.

DDOS attack mitigation

Monday, April 26th, 2010

Today we had a DDOS attack against one of our clients. They were running prefork with mod_php5 and a rather busy application. While we initially started filtering IP addresses using iptables and a few crudely hacked rules, we knew something more permanent had to be done. Moving to MPM-Worker with PHP served over FastCGI seemed reasonable, but looking at the history of the attacks on this machine, I believe Apache would still have been vulnerable, since we cannot filter the requests early enough in Apache's request handling.

Apache does have the ability to fight some DDOS attacks using mod_security and mod_evasive, but this particular attack was designed to hit Apache before the point where those modules hook into the request. That also rules out fail2ban. We could run mod_forensic or mod_logio to assist fail2ban, but it would still be a stopgap measure.

We could have used some Snort rules and tied those to iptables, but that is a rather poor solution to the problem.

While we could have used Varnish, their application would have had some issues. mod_rpaf can help by rewriting REMOTE_ADDR from the X-Forwarded-For header that Varnish sets. mod_geoip, however, inserts itself before mod_rpaf, so we would have needed to modify mod_geoip and recompile it. I'm also not sure how Varnish would have handled Slowloris, and we had to fix this now.
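Had we gone the Varnish route, the mod_rpaf side would have looked roughly like this; a sketch only — the directive names are from mod_rpaf 0.6, the module path varies by distribution, and 127.0.0.1 assumes Varnish running on the same box:

LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1
RPAFheader X-Forwarded-For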

Putting them behind a Layer 7 load balancer would have isolated the origin server and let the load balancer absorb the brunt of the attack, but again we would have needed mod_rpaf and some modifications to their code.

In the end, Lighttpd and Nginx appeared to be the only well-documented solutions. After the conversion, we did find documentation saying Varnish and Squid are also immune to Slowloris. With Nginx or Lighttpd we didn't have IP address issues to contend with, and it was easy enough to modify the FastCGI config to pass the GeoIP information in the same request variable that their application expected (sketched below). We knew we had to run PHP under FastCGI anyway, so we might as well pick a tool that can block the attack in the webserver itself without having to rely on firewall rules. We did put a few firewall rules in place to block the larger offenders.

In the http { } section of our Nginx config, we added:

    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 10;
    send_timeout 10;
    limit_zone limit_per_ip $binary_remote_addr 16m;

and in the server { } section, we added:

    limit_conn limit_per_ip 5;

Since the server only expects to handle one or two simultaneous requests from each IP, a limit of five gave us a little headroom while solving the problem in the right place.
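As for the GeoIP variable mentioned above, the Nginx side is a one-liner. This sketch assumes Nginx was built with its GeoIP module and that the application reads the GEOIP_COUNTRY_CODE variable mod_geoip normally sets:

    # http { } section: load the country database (path varies by install)
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    # in the location passing PHP to FastCGI
    fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;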

I believe Varnish would have held the connection without sending a request to the backend, which makes it a fairly versatile tool for dealing with DDOS attacks. While I do like the ability to block certain requests in VCL, the methods documented for fighting this type of attack appeared to favor a non-threaded webserver. Varnish in front of Apache would have worked, but we already knew we needed to move this client off Apache at some point, and this gave us an opportunity to make the shift while under the gun.

Wouldn’t have had it any other way.

Varnish and Nginx with Joomla

Sunday, June 28th, 2009

Recently we had a client that had some performance issues with a Joomla installation. The site wasn’t getting an incredible amount of traffic, but, the traffic it was getting was just absolutely overloading the server.

Since the machine hadn't been having issues before, the first thing we did was contact the client and ask what had changed. We already knew which site and database were using most of the CPU time, but the bandwidth graph didn't suggest that traffic was overrunning the server. Our client had rescued this particular customer from another hosting company because the site was unusable during prime time, so we inherited a problem. During the move, the site was upgraded from 1.0 to 1.5, so we didn't even have a decent baseline to revert to.

The stopgap solution was to move the .htaccess mod_rewrite rules into the Apache configuration, which helped somewhat. We identified a few sections of the code that were getting hit really hard and wrote a mod_rewrite rule to serve those images directly from disk, bypassing Joomla serving them through PHP. This made a large impact and at least got the site responsive enough that we could leave it online and work through the admin interface to figure out what had gone wrong.
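A sketch of that kind of rule, assuming the image URLs map onto real files under the document root: any image that exists on disk is served as-is and never rewritten to index.php.

RewriteEngine On
# if the requested file exists on disk, stop rewriting and let Apache serve it
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule \.(png|gif|jpe?g|ico)$ - [L]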

Some of the modules that had been enabled contributed quite a bit to the performance headache. One chat module generated a 404 every second for each logged-in user, polling for pending messages. Since Joomla is loaded for every 404, this added quite a bit of extra processing. Another quick modification to the configuration eliminated dozens of bad requests. At that point the server was responsive, the client was happy, and we made notes in the trouble-ticket system and our internal documentation for reference.
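The Apache side of that change can be sketched roughly as follows; the idea, which mirrors the chatfiles location in the Nginx config later in this post, is to stop the rewrite to index.php for the chat module's polling path so a missing file gets Apache's plain 404 instead of a full Joomla bootstrap (drop the leading slash if the rule lives in a .htaccess rather than the vhost):

# pass the chat module's poll requests through untouched; if the file is
# missing, Apache returns its own 404 without ever loading Joomla
RewriteRule ^/modules/mod_oneononechat/chatfiles/ - [L]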

Three days later the machine alerted and our load problem was back. After all of the changes, something was still causing problems. On deeper inspection, we found that the portions of the system dealing with the menus were being regenerated on every request. There is no built-in caching, so the decision was to try Varnish. Varnish has worked in the past for WordPress sites that were getting hit hard, so we figured that if we could cache the images, CSS and some of the static pages that don't require authentication, we could get the server responsive again.

Apart from the basic configuration, our varnish.vcl file looked like this:

sub vcl_recv {
  if (req.http.host ~ "^(www.)?domain.com$") {
    set req.http.host = "domain.com";
  }

  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    unset req.http.cookie;
  }
}

sub vcl_fetch {
  set obj.ttl = 60s;
  if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
    set obj.ttl = 3600s;
  }
}

To get the Apache logs to report the real client IP, you need to modify the VirtualHost config to log the forwarded address.
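Something along these lines does it; the format name and log path are illustrative, and %{X-Forwarded-For}i logs the client address Varnish passes along in place of the usual %h:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" varnishcombined
CustomLog /var/log/apache2/domain.com-access.log varnishcombined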

The performance of the site with Varnish in front of Apache was quite good. Apache was left handling only the .php requests and the server was again responsive. It ran like this for a week or more without any issues, with only a slight load spike here or there.

However, Joomla doesn't like the fact that every request's REMOTE_ADDR is 127.0.0.1, and some addons stopped working. In particular, an application that allows the client to upload .pdf files into a library requires a valid IP address for some reason. Another module that adds a sub-administration panel for a manager/editor also requires an IP address other than 127.0.0.1.

With some reservation, we decided to switch to Nginx + FastCGI, which removes the reverse proxy and should fix the IP address problems.

Our configuration for Nginx with Joomla:

server {
    listen 66.55.44.33:80;
    server_name www.domain.com;
    rewrite ^(.*) http://domain.com$1 permanent;
}

server {
    listen 66.55.44.33:80;
    server_name domain.com;

    access_log /var/log/nginx/domain.com-access.log;

    location / {
        root  /var/www/domain.com;
        index index.html index.htm index.php;

        if ( !-e $request_filename ) {
            rewrite (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ /index.php last;
            break;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/nginx-default;
    }

    location ~ \.php$ {
        fastcgi_pass  unix:/tmp/php-fastcgi.socket;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/domain.com/$fastcgi_script_name;
        include fastcgi_params;
    }

    location = /modules/mod_oneononechat/chatfiles/ {
        if ( !-e $request_filename ) {
            return 404;
        }
    }
}

With this configuration, Joomla is handed any URL for a file that doesn't exist, which allows the Search Engine Friendly (SEF) links to work. The second 404 handler is there for the oneononechat module, which polls for messages destined for the logged-in user.

With Nginx, the site was again responsive. Load spikes still occurred from time to time, but the site was stable, had a lot less trouble dealing with the load, and seemed to recover from the spikes pretty quickly.

However, a module called Rokmenu, which was included with the template design, appeared to have issues. Running PHP behind FastCGI sometimes gives different results than running it under mod_php, and it appears that Rokmenu relies on the path being passed in and doesn't normalize it properly. So, when the menu was generated, with SEF on or off, URLs looked like /index.php/index.php/index.php/components/com_docman/themes/default/images/icons/16x16/pdf.png.

Obviously this creates broken links and causes more 404s. We installed a fresh Joomla on Apache, imported the data from the copy running on Nginx, and Apache with mod_php appeared to work properly. However, the performance was quite poor.

In order to troubleshoot, we made a list of every addon and ran through some debugging. With apachebench we wrote up a quick command line that could be pasted in at the ssh prompt, and decided on some metrics. Within minutes, our first test revealed 90% of our performance issue: two of the addons required compatibility mode because they were written for 1.0 and hadn't been updated. Turning on compatibility mode on our freshly installed site resulted in 10x worse performance. As a test, we disabled the two modules that relied on compatibility mode, turned compatibility mode off, and the load dropped immensely. We had disabled SEF early on thinking it might be the issue, but this test found the real performance problem almost immediately. Enabling the other modules in subsequent tests showed only marginal changes. Compatibility mode had been our culprit the entire time.
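The test harness was nothing fancy; something along these lines (the URL and request counts are illustrative) is enough to compare the Requests per second figure before and after toggling a module:

ab -n 200 -c 10 http://domain.com/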

The client started a search for replacements for the two modules that required compatibility mode and disabled them temporarily while we moved the site back to Apache to fix the URL issue in Rokmenu. At this point the site was responsive, though pageloads with lots of images were not as quick as they had been with Nginx or Varnish. At a later point, images and static files will be served from Nginx or Varnish, but the site is fairly responsive and handles the load spikes reasonably well when Googlebot or another spider hits.

In the end the site ran on Apache because Varnish and Nginx each had minor issues with this particular deployment. Moving to Apache alternatives doesn't always fix everything and may introduce side-effects that you cannot work around.

Combined Web Site Logging splitter for AWStats

Friday, May 29th, 2009

AWStats has an interesting problem when working with combined logging. When you have 500 domains and a combined logfile of roughly 2 gigabytes a day, awstats spends a lot of time scanning the entire file for each domain just to produce its results. The simple solution appeared to be a small Python script that reads the awstats config directory and splits the logfile into pieces so that awstats can run against individual per-domain logfiles. It takes one pass through the combined logfile to create all of the per-domain files, rather than one pass through the 2 gigabyte logfile for every domain, as happens when awstats is pointed at the combined log.

#!/usr/bin/python

import os, re

# Build the list of domains from the awstats config directory; each
# domain has a config file named awstats.<domain>.conf
dirs = os.listdir('/etc/awstats')

domainlist = {}

for conf in dirs:
    if re.search(r'\.conf$', conf):
        dom = re.sub(r'^awstats\.', '', conf)
        dom = re.sub(r'\.conf$', '', dom)
        # placeholder; replaced with an open filehandle on first use
        domainlist[dom] = 1

# One pass through the combined log: the first field is the vhost,
# the remainder of the line is a standard combined-format entry
loglist = open('/var/log/apache2/combined-access.log.1', 'r')
for line in loglist:
    (domain, logline) = line.split(None, 1)
    if domain in domainlist:
        # open the per-domain logfile lazily, the first time the domain appears
        if domainlist[domain] == 1:
            domainlist[domain] = open('/var/log/apache2/' + domain + '-access.log.1', 'w')
        domainlist[domain].write(logline)
loglist.close()

# close any per-domain files we opened
for handle in domainlist.values():
    if handle != 1:
        handle.close()

While the code isn't particularly earth-shattering, it cut log processing down to roughly 20 minutes per day rather than the previous 16-30 hours per day.
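With the combined log split into pieces, each per-domain awstats config just points at its own file; assuming the remainder of each line is standard combined format, the relevant settings look something like:

LogFile="/var/log/apache2/domain.com-access.log.1"
LogFormat=1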
