Our best practices for a web hosting environment

Over the years we’ve had to deal with persistent security scans from hosts around the world probing whether our installations were secure. After watching a competitor implode this morning as the result of a hack, I’m writing up a few of our best practices for Virtual and Dedicated web hosting. When he called, he was ready to close up shop and offered to let us clean up. We cleaned up quite a bit, but he still has a lot of work to do.

SSH

One of the worst offenders is the SSH bot that comes in and tries 60,000 password combinations on your machine. Since SSH requires a cryptographic handshake, each attempt burns a bit of CPU. As a result, when you have 200 IP addresses on a machine and the script works through them sequentially, you’re dealing with 12 million authentication attempts.

The first thing we did was limit our machines to answer SSH only on the primary IP address – which cut our exposure tremendously. You could also move SSH to an alternate port. If you do move it, make sure to use a port number lower than 1024: ports above 1024 can be bound by unprivileged users, and you don’t want someone who has crashed the SSH daemon (by triggering the OOM killer, say) to start their own daemon on that port. Accidentally accepting the new host key when you log in under pressure would hand them your account. Due to a number of issues, we left SSH answering on port 22 but used the iptables recent module (ipt_recent) to allow 6 connections from an IP address within a minute; past that, the address is blocked. An attacker could work around this by pacing authentication attempts to one every 12 seconds, or by using multiple proxy servers, but it is usually easier for them to let the automated scan run and move on when nothing is found quickly enough.

# track NEW SSH connections, then drop any IP that makes 6 or more in 60 seconds
/sbin/iptables -A INPUT -p tcp --dport ssh -i eth0 -m state --state NEW -m recent --set
/sbin/iptables -A INPUT -p tcp --dport ssh -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 6 -j DROP

Another possibility is Fail2ban. Fail2ban can report back to a central server and understands numerous SSH daemon log formats. Personally, I prefer ipt_recent for its ease of use: it protects a service without my having to care which daemon is running or what its logs look like, and it is self-healing since blocks expire on their own.
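If you go the Fail2ban route instead, a minimal jail covers SSH. A sketch, assuming a Debian-style setup where the jail is named [ssh], the stock sshd filter is used, and auth logging goes to /var/log/auth.log – adjust all three for your distribution:

[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6
bantime  = 600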

Depending on your clients’ needs, you could also use port knocking: port 22 stays closed unless the firewall first sees SYN packets on two specific ports in sequence. Both an unlock and a lock sequence can be added, and it can be configured to admit only the IP you’re currently connecting from. Handy if you’re on the road and need to connect in but don’t want to leave SSH wide open.
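A sketch of what that looks like with knockd – the two-port sequences and paths here are illustrative, not a recommendation:

[options]
logfile = /var/log/knockd.log

[openSSH]
sequence    = 7000,8000
seq_timeout = 5
tcpflags    = syn
command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

[closeSSH]
sequence    = 8000,7000
seq_timeout = 5
tcpflags    = syn
command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT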

POP3/IMAP

POP3 receives a ton of attempts – more so than IMAP, but they are usually served by the same daemon. You can use ipt_recent rules like the ones above to limit connections, but some of the scan scripts are smart enough to pipeline many attempts over a single connection. Configure your daemon to return failures after 3 attempts without ever checking the backend, and the scanner just collects a pile of failed attempts.
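The same recent-module rules, pointed at POP3 – with a named list so the POP3 counters don’t interfere with the SSH rules above:

/sbin/iptables -A INPUT -p tcp --dport 110 -i eth0 -m state --state NEW -m recent --name pop3 --set
/sbin/iptables -A INPUT -p tcp --dport 110 -i eth0 -m state --state NEW -m recent --name pop3 --update --seconds 60 --hitcount 6 -j DROP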

Make sure you support TLS – and consider disabling plaintext authentication entirely, as almost every email client out there supports SSL/TLS connections.
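A sketch, assuming Dovecot 2.x is the daemon – other POP3/IMAP servers have equivalent knobs:

disable_plaintext_auth = yes
ssl = required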

FTP

For a while, we limited FTP to the primary machine IP due to a number of scans. ipt_recent doesn’t work well here, as some FTP clients will open 10-50 connections to transfer data more quickly. Choosing a lightweight FTP daemon with some protections built in makes a difference. You can also have only the primary IP answer and set up a CNAME of ftp.theirdomain.com pointing at the hostname to avoid some confusion. Plain FTP passes credentials and data over the wire unencrypted, so you should use FTP over TLS (explicit FTPES or, if possible, implicit FTPS). Most FTP client software understands both.
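A sketch of forcing TLS, assuming vsftpd; other daemons have similar switches:

ssl_enable=YES
force_local_logins_ssl=YES
force_local_data_ssl=YES
allow_anon_ssl=NO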

www

With SQL injections, ZmEu scans, and everyone looking for vulnerabilities and exploits, there are a number of things that can be done. The problem is, once attackers find a vulnerability, exploit code is usually left on the server. That code might run attacks against other machines, send spam, or give them a remote shell.

One method of finding out what is being executed – without Suhosin actually breaking things – is to run in simulation mode with the following patch (around line 1600) in execute.c:

        } else if (SUHOSIN_G(func_blacklist) != NULL) {
                if (zend_hash_exists(SUHOSIN_G(func_blacklist), lcname, function_name_strlen+1)) {
                        suhosin_log(S_EXECUTOR, "function within blacklist called: %s()", lcname);
                      /*goto execute_internal_bailout;*/
                }
        }

You want to comment out the goto execute_internal_bailout; line so that blacklisted calls are logged to syslog instead of blocked – in this version, simulation mode doesn’t actually behave as a simulation. With that in place, you can add functions to your blacklist, run in simulation mode, and see exactly what is being executed.

If you’ve never built it: untar it, phpize, ./configure, make (verify that it built sanely), make install, modify the config file, restart Apache, and check the error log for segfaults.
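Spelled out as commands – the tarball name is a placeholder for whatever version you download:

tar xzf suhosin-x.y.z.tar.gz
cd suhosin-x.y.z
phpize
./configure
make            # verify that it built sanely
make install
# enable the extension with the ini settings below, restart apache,
# then watch the error log for segfaults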

Some somewhat sane defaults:

suhosin.simulation = on
suhosin.executor.func.blacklist = include,include_once,require_once,passthru,eval,system,popen,exec,shell_exec,preg_replace,mail
suhosin.log.syslog.facility = 5
suhosin.log.syslog.priority = 1

Why preg_replace?

There are a number of ways to execute a command, but PHP allows the e PCRE modifier, which evals the result of the replacement:

// "\x65\x76\x61\x6C\x28" decodes to "eval(" – a real attack completes the call
// with its payload and passes a subject string as the third argument
preg_replace("/.*/e","\x65\x76\x61\x6C\x28");

You can run mod_security, but so many software developers have problems with it that the default .htaccess for many applications simply disables mod_security altogether. It isn’t really security if it gets disabled.

One issue with WordPress is that it handles 404s itself, but it has to do a fair amount of work before it can determine that a request is a 404. ZmEu scans multiple IP addresses and hammers away with a few thousand requests, most of which 404 when a WordPress site answers on the bare IP.

The quick solution is to create VirtualHost entries for the bare IPs that point somewhere harmless, e.g. a parking page.

#!/usr/bin/python

# Emit an Apache VirtualHost for every IP address on the machine so that
# requests to a bare IP land on a parking page instead of a real site.

import os

DEFAULT_PATH = '/var/www/uc'

ips = os.popen('/sbin/ifconfig -a|grep "inet addr"|cut -f 2 -d ":"|cut -f 1 -d " "|grep -v 127.0.0.1')
for ip in ips:
    ip = ip.strip()
    print """
<VirtualHost %s:80>
ServerName %s
DocumentRoot %s
</VirtualHost>""" % (ip, ip, DEFAULT_PATH)

Now, when ZmEu or any of the other scanners comes along, rather than hammering away at a WordPress or Joomla site, it is handed a parking page and the 404s are handled more sanely. This works because ZmEu scans by IP without sending a hostname in most cases; once scanners start using hostnames, this method won’t provide as much utility.

Another issue years ago was having DocumentRoot set to /var/www with domains in /var/www/domain.com, /var/www/domainb.com, and so on. domain.com and domainb.com would each have a /cgi-bin/ directory set up with ScriptAlias, but if you visited the machine on a bare IP that didn’t match a VirtualHost, scanners could walk the DocumentRoot. Files saved in /cgi-bin/ that should have been executed usually wouldn’t execute that way, and would be handed back with Content-Type: text/plain, exposing data files. Make sure the DocumentRoot in the base configuration points to a directory that is not the parent of multiple domain directories, as sketched below.
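A sketch of the safer layout – the paths are illustrative:

# the default DocumentRoot is its own directory, not the parent of the domains
DocumentRoot /var/www/default

<VirtualHost *:80>
ServerName domain.com
DocumentRoot /home/domains/domain.com/html
ScriptAlias /cgi-bin/ /home/domains/domain.com/cgi-bin/
</VirtualHost>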

.htaccess

I don’t know how this one started or why it ever happened, but many of the tools on the internet that help you create an .htaccess to password protect a directory contain one fatal flaw. I covered this a bit more in depth in another post, but briefly, the site is protected with the following:

AuthUserFile .htpasswd
AuthName "Protected Area"
AuthType Basic

<Limit GET POST>
require valid-user
</Limit>

Since the directory is only protected against GET and POST requests, any other method skips authentication entirely. As long as PHP is serving the pages, a request made with an arbitrary verb is handled just like a GET, so almost anything other than those two verbs gets results without having to authenticate.
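The fix is simply to drop the <Limit> wrapper (and use an absolute path for the password file) so the requirement applies to every method:

AuthUserFile /home/username/.htpasswd
AuthName "Protected Area"
AuthType Basic

require valid-user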

WordPress

One of the most popular pieces of software our clients use is WordPress, followed by Joomla. Both have numerous updates throughout the year, bundling features with security patches, which makes upgrading somewhat painful. A recent patch rolled in with 3.4 and 3.4.1 fixed security issues but broke a number of themes, causing a bit of pain for clients. Joomla isn’t immune either, having shipped multiple security upgrades bundled with features that break backwards compatibility.

If you run multiple WordPress sites on a single machine, take a look at this post which contains code to upgrade WordPress sites from the command line.

Plugins are the second issue. WordPress and some plugins bundle third-party code, but when that bundled code has a security update, the plugin that bundled it often doesn’t get updated. timthumb.php comes to mind: we found it bundled in numerous plugins and had to do a little grep/awk magic to replace every copy. One plugin updated, but overwrote timthumb.php with an exploitable version – causing all sorts of discontent.
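A rough sketch of that hunt – assuming the bundled copies keep timthumb’s VERSION define near the top of the file:

# list every bundled timthumb.php and the version each one claims
find /var/www -name timthumb.php -exec grep -H "VERSION" {} \;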

SetUID versus www-data

I’ve got a much longer post regarding this. Suffice it to say that no one will ever agree one way or the other, but I feel that limiting what a site can write to on disk beats letting the webserver write over the entire site. In the case of a compromised WordPress site running as www-data, the attacker may only be able to write to the theme or upload directories, eliminating the ability to damage most of the rest of the site. Joomla, however, leaves the FTP password in the clear in its config file: once a Joomla site running in www-data mode has been hacked, that FTP password is exposed and can be used to modify the rest of the site.

Ok, so the site has been hacked, now what?

Restore backups and keep on running? Regrettably, that is what most hosting companies do. If you have done any of the above, particularly the Suhosin patch, you can look through the logs to see what was executed. Even code run through Zend Optimizer will trigger a syslog entry, so you can at least see what may be happening. If you run in www-data mode, new .php files owned by www-data are prime suspects, especially in directories that shouldn’t contain executable code. Over the years, I’ve attempted to get WordPress to disable .php execution in the uploads directory, to no avail. Since a remote exploit may allow attackers to save a file, wp-content/uploads is a common dumping ground for a later remote shell. Sometimes they’ll get creative and name the file /wp-content/uploads/2012/05/blueberry.jpg.php if you have a file named blueberry.jpg.
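You can do it yourself with an .htaccess dropped into wp-content/uploads – a sketch assuming Apache 2.2 with mod_php:

# refuse to serve (and thus execute) any .php file in the upload tree
<FilesMatch "\.php$">
Order allow,deny
Deny from all
</FilesMatch>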

There are a number of things to look for: modified times (though hackers sometimes reset file timestamps to match the other files in a directory), .php files where they shouldn’t be, and so on. There are other tricks – sometimes they will modify .htaccess to run a .jpeg as .php – so you can’t always depend on extensions. If they can’t upload a remote shell directly, they may be able to upload a .jpg file that actually contains PHP code, which can then be remotely included from another exploited site. Since PHP doesn’t check the MIME type on an include, exploit code could be sitting on your server.

Even mime type validation of uploads isn’t always enough.
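When hunting, a couple of find invocations cover the common cases – recently changed PHP files, and PHP files sitting in the upload tree (adjust the paths and the time window to taste):

find /var/www -name '*.php' -mtime -14 -ls
find /var/www -path '*wp-content/uploads*' -name '*.php' -ls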

Javascript exploits are some of the toughest to ferret out of a site. Viewing the source of a document usually shows the pre-rendered version, and if the exploit was snuck onto the end of jquery.1.7.min.js, the page can be modified after it has loaded. Firefox has a ‘View Generated Source’ option which makes this a little easier to track down. Be prepared for some obfuscation, as the code will often be lightly encrypted. A Live HTTP Headers plugin can also give you a hostname to grep the contents of the site for.
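Once you have that hostname, grep is the fastest way to find the injection point – the hostname here is obviously a placeholder:

grep -rn "badhost.example.com" /var/www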

One of the trickier WordPress exploits used wp_options _transient_feed_ variables to store exploit code. Cleaning that up was tricky to say the least: the code was gzipped, base64 encoded, and stored as an option variable that was pulled in by a plugin very cleanly.

Some of the code found in the template:

$z = get_option("_transient_feed_8130f985e7c4c2eda46e2cc91c38468d");
$z = base64_decode(str_rot13($z));
if (strpos($z, "2A0BBFB0") !== false) {

and in wp_options:

| 2537 | 0 | _transient_feed_8130f985e7c4c2eda46e2cc91c38468d | s:50396:"nJLbVJEyMzyhMJDbWmWOZRWPExVjWlxcrj0XVPNtVROypaWipy9lMKOipaEcozpbZPx7VROcozyspzImqT9lMFtvp2SzMI9go2E
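Since the payloads are huge compared to legitimate feed caches, listing the _transient_feed_ rows by size is a quick way to spot them – a sketch against a standard wp_options table:

SELECT option_id, option_name, LENGTH(option_value) AS bytes
FROM wp_options
WHERE option_name LIKE '\_transient\_feed\_%'
ORDER BY bytes DESC;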

The trick to a cleanup is to be meticulous in your analysis. File dates, file ownership, what was changed, and what logging exists can all provide clues. Sometimes you won’t find the original exploit on the first run-through and will have to install a bit more logging.

If you still don’t have a good idea, you can do something like this:

.htaccess (assuming Apache with mod_php, where ini values are set via php_value):

php_value include_path ".:/var/www/php"
php_value auto_prepend_file postlog.php

postlog.php:

<?php

// Log the details of every POST request to a per-day, per-script file
// so there is a forensic trail if the site is exploited again.
if ( (isset( $HTTP_RAW_POST_DATA ) || !empty( $_POST )) &&
     (strpos( $_SERVER['REMOTE_ADDR'], '11.222.' ) === FALSE) ) {
    // block out local IP addresses (monitoring, etc.) – 11.222. stands in
    // for your own network prefix

    if ( !isset( $HTTP_RAW_POST_DATA ) ) {
        $HTTP_RAW_POST_DATA = file_get_contents( 'php://input' );
    }

    $buffer = "Date: " . date('M/d/Y H:i') . "\nSite: {$_SERVER['HTTP_HOST']}\nURL: {$_SERVER['REQUEST_URI']}\nPOST request from: {$_SERVER['REMOTE_ADDR']}\n\nPOST DATA:: " . print_r( $_POST, 1 ) . "\nCOOKIES: " . print_r( $_COOKIE, 1 ) . "\nHTTP_RAW_POST_DATA: $HTTP_RAW_POST_DATA\nSERVER Data:" . print_r( $_SERVER, 1 ) . "\n----------\n";

    // one log file per day and per script name
    $tmp = explode( '/', $_SERVER['SCRIPT_NAME'] );
    $fn  = array_pop( $tmp );

    $fh = fopen( '/home/username/postlog/' . date('Ymd') . '.' . $fn, 'a+' );
    fwrite( $fh, $buffer );
    fclose( $fh );
}
?>

This logs every POST request that comes in to a file. If the site gets exploited again, you have a forensic log to go back through.

The single biggest thing I can say is: keep your applications updated. I know most webhosts don’t pay attention to what is running on their machines, but it is usually easier to prevent things from breaking than to fix them after they’ve broken.

My competitor? They’re on hour 72 of the cleanup, since they don’t maintain two generations of weekly backups.
