Archive for the ‘Web Security’ Category

Making things a little more difficult to run exploits on compromised WordPress sites

Monday, September 30th, 2013

I was called in to fix a number of WordPress sites that had been hacked. Many were running older versions of WordPress and thankfully weren't running SetUID, so the damage was limited to exploit scripts running in some of the world-writeable directories.

After cleaning up the sites, upgrading them to the latest version of WordPress and scanning for additional exploits, I added a number of rules to each of the Apache VirtualHost configs on the server.

<Directory /var/www/>
AllowOverride none
RemoveHandler .cgi .pl .py
<FilesMatch "\.(php|p?html?)$">
  SetHandler none
</FilesMatch>
</Directory>

<Directory /var/www/>
AllowOverride none
RemoveHandler .cgi .pl .py
<FilesMatch "\.(php|p?html?)$">
  SetHandler none
</FilesMatch>
</Directory>

These rules need to be placed in the VirtualHost configuration and prevent PHP, CGI, Perl and Python files from being executed in the two directories that WordPress is allowed to write to. To prevent other tampering, we disallow overrides, which stops a hacker from creating a directory with their own .htaccess that would re-enable PHP or CGI parsing.

Since making these changes, we’ve seen a few files dropped into the uploads directory, but, none have been executable.
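Blocking execution doesn't stop files from being dropped, so it's worth sweeping the writeable directories for script files periodically. A minimal sketch, using a temporary directory as a stand-in for wp-content/uploads (the file names are illustrative):

```shell
# create a stand-in for a world-writeable uploads directory
UPLOADS=$(mktemp -d)
touch "$UPLOADS/image.jpg" "$UPLOADS/c99.php"

# report any script files that have no business in an uploads tree
find "$UPLOADS" -type f \( -name '*.php' -o -name '*.cgi' -o -name '*.pl' \) -print

rm -rf "$UPLOADS"
```

In production you'd point find at the real uploads paths, most likely from cron.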

DNS Authoritative Server, disabled recursive lookups and analyzed the logs

Thursday, March 8th, 2012

We operate a number of DNS servers, one of which we left open for recursive lookups so clients could see their sites before they'd moved their DNS. We normally handle that through a proxy server we maintain, but DNS was a secondary method that was sometimes easier than having a client use a proxy.

Today, after 60 days notice, we shut off external recursive lookups at 6pm EST. In the last two hours, we’ve received a number of ‘lookups’ from a few ‘high volume’ IP Addresses. The list is surprising.
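For reference, on a BIND server the change amounts to something like the following in named.conf (the ACL values here are illustrative, not our actual configuration). Queries refused under this policy show up in syslog as 'denied', which is what the grep below counts:

```
options {
    // answer recursive queries only for our own hosts;
    // everyone else gets REFUSED and a 'denied' log entry
    recursion yes;
    allow-recursion { localhost; localnets; };
};
```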

Output from the command below (annotated with whois info):

grep -E 'denied$' /var/log/syslog|cut -f 8 -d ' '|cut -f 1 -d '#'|sort|uniq -c|sort -nr|head -n 30
 506452  (GTS Telecom Romania Operations)
  38444  (Cloudflare EU)
  10490  (Cloudflare EU)
  10236  (Softlayer)
   4784  (OneandOne AG)
   1277  (Godaddy)
    617 2001:2060:ffff:a01::53 (Sonera, Finland)
    528  (Cyber Technology BVBA) 
    528  (Cyber Technology BVBA)

What are we seeing?

GTS Telecom Romania Operations:
# grep '<source ip>' /var/log/syslog|cut -f 2 -d "'"|sort|uniq -c|sort -nr|more

CloudFlare EU:
# grep '<source ip>' /var/log/syslog|cut -f 2 -d "'"|sort|uniq -c|sort -nr|more

# grep '<source ip>' /var/log/syslog|cut -f 2 -d "'"|sort|uniq -c|sort -nr|more

# grep '<source ip>' /var/log/syslog|cut -f 2 -d "'"|sort|uniq -c|sort -nr|more

OneandOne AG:
# grep '<source ip>' /var/log/syslog|cut -f 2 -d "'"|sort|uniq -c|sort -nr|more

# grep '<source ip>' /var/log/syslog|cut -f 2 -d "'"|sort|uniq -c|sort -nr|more

Lumped into that was an IPv6 resolver that actually pointed out an interesting issue. The domains 2001:2060:ffff:a01::53 is looking up should be hitting our server for an authoritative answer, but it appears to be doing recursive lookups. I've made a minor change to our configs to see if I can log a bit more data, as it isn't a continuous stream.

It is interesting to see the number of hits that occurred in a two-hour period, which explains why the network PPS rate on that server has been higher than normal.

This traffic is part of a DNS reflection DDoS attack using spoofed UDP packets with the source address set to the targets above. Why they've chosen those particular targets, I don't know. Since UDP is a connectionless protocol, there is no SYN/ACK handshake, which makes DNS susceptible to spoofed packets. Our DNS resolver, which was previously publicly available, was answering based on the spoofed source IP in the UDP packet, and the attackers chose a particular group of servers to send those responses to.
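The economics of the attack come from amplification: each small spoofed query elicits a much larger response aimed at the victim. A back-of-the-envelope sketch (both byte counts are illustrative assumptions, not measurements from this incident):

```shell
QUERY_BYTES=64        # a small spoofed DNS query
RESPONSE_BYTES=3000   # a large recursive answer
echo "amplification: $((RESPONSE_BYTES / QUERY_BYTES))x"
```

At half a million queries in two hours, even a modest amplification factor turns a trickle of spoofed packets into real traffic at the target.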

Oddly, I did see some TCP traffic from one of Cloudflare's EU servers, which should have been impossible as they use anycast. Shortly after we disabled recursive lookups, the attack stopped. This suggests the attackers are watching the servers involved in the attack: when they saw ours was no longer responding, they stopped sending spoofed traffic.

We are in the process of upgrading our DNS servers and swapping things around, and this was the first step in redesigning our DNS infrastructure. It was only by chance that we noticed our resolver being used in the DNS reflection attack, as it was only sending out a few Mb/sec more traffic than it should have been.

My apologies to the servers that were receiving DDOS traffic from our resolvers.

Anandtech uses Trackback spam on my blog to increase traffic?

Monday, February 20th, 2012

I’ve been dealing with a rather persistent comment spam issue with my blog. Akismet seems to ignore this particular type of trackback spam and has let hundreds slip through over the last few months. Since my blog really isn’t that busy, it is very easy to identify the spam. I can’t turn off trackbacks as I have gotten trackbacks from other people referencing my site showing how they solved a problem.

A couple of weeks ago I installed Simple Trackback Validation with Topsy Blocker, which has done a good job of catching the ones Akismet seems to have problems detecting. As I run a rather complex setup for testing my plugins, I submitted a bug fix which the author promptly installed. I don't know why Akismet doesn't detect 'keyword keyword…' as potential spam, but almost every trackback Akismet has missed follows that pattern, and so far the other plugin has caught every single one.

This morning, I noticed two comments had been posted in the last seven hours, and when I looked, I saw the same trackback spam pattern again.

While my site isn’t that busy, it ranks fairly well on some search phrases and social circles. The most popular post on my blog was written over two years ago and was for Mac OS/X Snow Leopard. Other posts have been more popular over a thirty day period, but, that post has stood the test of time even though it is now two generations of Operating Systems behind.

But I do have sites that send trackback spam to that particular page quite frequently, and Akismet misses them about 85% of the time. You wouldn't know it from the Akismet graphs; they claim a much higher success rate, almost the inverse of the fail rate.

I know when I get links from them, I do often check to make sure they are linking, and in a few spot checks in the past they were, but those links appear to have been cleaned from their forums and the two posts where I remember them being listed. Today, however, the trackback was on a very new article which had no relation to the page it linked to, and, obviously, no link from their site pointed to mine.

I looked back through the approved comments and found 71 other trackbacks, investigated a number of pages, and, as you might suspect, my link wasn’t present anywhere.

This is where the analysis turns a bit sinister. Why did they pick a page that detailed an issue with Varnish and gzip-compressed pages to link to an article Anandtech wrote yesterday? The age of the post? The original post was from Dec 2009, though it is the outlier in the stats.

Upon looking at thirty of the trackbacks, a curious pattern emerged. Since adding the social media buttons for Google+, Twitter and Facebook, I’ve had a quick metric to gauge post popularity, and, lo and behold, Anandtech is targeting posts that have high tweet counts with the exception of the original outlier.

The original outlier links to a post that deals with Social Game Design, which is also a popular post. There may have been a trackback on that page which I deleted ages ago, and their trackback bot just spidered it.

Or, it could be completely dumb and have just taken the URLs, if they looked far enough back through my history, except for the original outlier.

But Anandtech, Welcome to the Comment Blacklist.

285 WordPress Sites, upgraded in 11 minutes – and they weren’t MultiSite

Sunday, February 12th, 2012

A number of our clients run WordPress, but, for some reason, keeping them updated is a problem. Sites are uploaded and run on autopilot and are forgotten… until they are hacked. Last week a client asked why his WordPress 2.8 site was hacked. WordPress 2.8 was released in June 2009 with 25 WordPress releases since. We checked a few of his sites and found a few different versions running, but, how many other clients were running old WordPress versions? The results were shocking.

Finding WordPress sites on shared storage

First, we need to find each of the individual client’s WordPress installations.

find /var/www -type f -wholename \*wp-includes/version.php|awk '{ print "grep -H \"wp_version =\" " $1 }' | sh > /var/tmp/wpversions

From this, with a little cut and sort trickery, we end up with the following histogram:

35 3.1
29 2.8.5
25 3.2.1
24 2.9.2
20 2.8.4
19 3.0.1
16 2.8.2
16 2.1
15 2.6
13 2.7.1
8 3.3.1
6 2.9.1
6 2.8
6 2.3
5 3.3
5 2.7
4 3.1.2
4 2.8.1
3 3.1.1
3 3.0.5
3 3.0.4
3 2.8.6
2 3.1.3
2 3.0
2 2.0.3
1 3.1.4
1 2.9
1 2.6.3
1 2.5.1
1 2.3.3
1 2.3.2
1 2.2.2
1 2.2.1
1 2.2
1 2.1.3
1 2.0.5
1 2.0.4

Yes, we have two 2.0.3 installations in production use out of 285 sites. Of them, 8, or less than 3%, are running the current version, 3.3.1.
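The "cut and sort trickery" amounts to splitting each wpversions line on the single quotes around the version string; a sketch with sample data (the paths are illustrative):

```shell
# sample lines in the shape of /var/tmp/wpversions
WPV=$(mktemp)
cat > "$WPV" <<'EOF'
/var/www/a/wp-includes/version.php:$wp_version = '3.3.1';
/var/www/b/wp-includes/version.php:$wp_version = '2.8.5';
/var/www/c/wp-includes/version.php:$wp_version = '3.3.1';
EOF

# field 2 between the quotes is the version; count and rank
cut -d "'" -f 2 "$WPV" | sort | uniq -c | sort -nr

rm -f "$WPV"
```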

Clearly this is a problem.

We have a few options, one of which is to utilize the upgrade process inside WordPress which requires us to communicate with each client, or, write a quick script to give us admin privileges to do the upgrade. Or, we could use bash.

The magic

Our filesystem structure is set up so that each user has their own UID/GID, and the paths where the domains are located are fairly static. The script takes the path of the wp-includes/version.php file, strips off the correct pieces, copies the unpacked WordPress release over the site, and changes ownership from root to the user that owns the directory.

There are two variables that need to be set:

WORDPRESS_TMPDIR – set this to the directory where you have untarred and ungzipped the WordPress archive

BASE_PATH – set this to the machine’s web root path.

The script


#!/bin/bash
# cd34, 20120212
# find /var/www -type f -wholename \*wp-includes/version.php|awk '{ print "grep -H \"wp_version =\" " $1 }' | sh > /var/tmp/wpversions
# if you want to really save time:
# awk < /var/tmp/wpversions '{ print "/path/to/ " $1 }' | sh -x

# set this to match your temporary directory location for WordPress
# wget -O /var/tmp
# cd /var/tmp
# tar xzf latest.tar.gz
WORDPRESS_TMPDIR=/var/tmp/wordpress

# set this to signify the base path of your machine's web root
BASE_PATH=/var/www

if [ "X" == "$1X" ]; then
  echo "Needs a pathname for the version.php file"
  echo "$0 /var/www/"
  echo "You can include data after version.php, i.e. :\$version from find command"
  exit 1
fi

# strip the wp-includes/version.php suffix (and any :$version data appended by find)
WP_PATH=${1%%/wp-includes/version.php*}
DOMAIN=${WP_PATH#$BASE_PATH/}

# recover the uid/gid of the user that owns the site
DUID=`stat -c %u $WP_PATH`
DGID=`stat -c %g $WP_PATH`

# overlay the new WordPress files, hand ownership back to the site owner,
# then hit the upgrade script to run the database migration
cp -a $WORDPRESS_TMPDIR/. $WP_PATH/
chown -R --from=root $DUID:$DGID $WP_PATH
/usr/bin/wget -q -O /dev/null "http://$DOMAIN/wp-admin/upgrade.php?step=1"
echo "Upgraded: http://$DOMAIN"

The code for this and a few other tools that I’ve written can be found in the cd34-tools repository.

DDOS Packet Logger rough cut

Thursday, January 5th, 2012

I believe this is ready for a little external testing. While I am not entirely happy with the compression used, it does give about a 15% reduction in space with very little CPU impact.

I do intend to write my own streaming compression, which should get me closer to a 55% compression ratio based on some simple testing. I still need to add features to select the ethernet port to watch and to rotate the logs daily, but it does do the originally intended job.

Thank you for any feedback.

Note: you don’t need to be under a DDOS to test it; it just logs packets going to ports 25 and 80 to a logfile for later processing.
