Incubate Miami Demo Day, May 6, 2011

May 7th, 2011

From the Incubate Miami Demo Day announcement:

The event marks the official launch of the next wave of new technology companies from the Incubate Miami 12-week Technology Accelerator Program. Each company will give a demo presentation, showcasing technologies which range from advances in video and television, to healthcare and mobile applications.

Apologies for missing a picture of Flomio.com; they work with NFC and gave a pretty good presentation as well.

I met quite a few great people from South Florida and talked with a number of the people involved in the startups.

Weekend, WRT54GS, OpenWRT, IPv6 through tunnelbroker.net

April 30th, 2011

Since we’ve been doing a lot of work recently with IPv6, I decided to see if I could reconfigure an older Linksys WRT54GS to run OpenWRT, so that I could use it to route IPv6 to the machines at the house rather than terminating the entire /64 on my MacBook. This also lets me run IPv6 on the other machines at the house.

First I ran into some issues flashing OpenWRT, which were fixed by upgrading the router to the latest firmware supplied by Cisco/Linksys and then flashing the OpenWRT build from http://downloads.openwrt.org/snapshots/trunk/brcm47xx/.

Once you’ve done that, telnet to 192.168.1.1, type passwd to set a new root password, log out, then ssh root@192.168.1.1 using the new password and you’re set.
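
The whole sequence is roughly:

telnet 192.168.1.1       # first login; OpenWRT ships with no password set
passwd                   # set the root password
exit
ssh root@192.168.1.1     # once a password is set, telnet is disabled and ssh takes over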

Configuring wireless was simple enough, though I couldn’t get WEP to work and had to move over to WPA2-PSK (psk2 in the OpenWRT config). With WEP configured, using multiple different suggested configurations, OpenWRT would always respond with:

Configuration file: /var/run/hostapd-phy0.conf
Could not set WEP encryption.
Interface initialization failed
wlan0: Unable to setup interface.
rmdir[ctrl_interface]: No such file or directory
Failed to start hostapd for phy0

Changing the encryption type to psk2 and setting the key let the wifi command recognize the configuration, though a warning pops up:

root@OpenWrt:/etc/config# wifi
Configuration file: /var/run/hostapd-phy0.conf
Using interface wlan0 with hwaddr 00:12:17:3a:c6:4a and ssid 'ipv6'
random: Cannot read from /dev/random: Resource temporarily unavailable
random: Only 0/20 bytes of strong random data available from /dev/random
random: Not enough entropy pool available for secure operations
WPA: Not enough entropy in random pool for secure operations - update keys later when the first station connects
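
For reference, the psk2 setup lives in /etc/config/wireless; the relevant stanza ends up looking something like this (the key is a placeholder, and the device name varies by build):

config wifi-iface
	option device		'radio0'
	option network		'lan'
	option mode		'ap'
	option ssid		'ipv6'
	option encryption	'psk2'
	option key		'some-passphrase'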

I set up a separate network and left the existing router online with its current config; that way I’m not disrupting the main router and can keep testing on a dedicated wireless LAN. At this point, I’ve set 192.168.6.0/24 as the IPv4 network for the IPv6 wireless router, connected to it as my preferred wireless LAN, and am able to surf the internet.
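
For completeness, the 192.168.6.0/24 side is just the stock LAN stanza in /etc/config/network with the address changed (the ifname here is typical for this hardware, but may differ):

config interface lan
	option type		bridge
	option ifname		eth0.0
	option proto		static
	option ipaddr		192.168.6.1
	option netmask		255.255.255.0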

Next, we need to set up our IPv6 configuration from http://www.tunnelbroker.net/, a free service provided by Hurricane Electric.

We need to install the IPv6 kernel modules and then activate IPv6 (alternatively, you can power cycle the router and the modules will be loaded automatically):

opkg install kmod-ipv6    # the IPv6 kernel module package
insmod ipv6               # load the module without rebooting
opkg install 6in4         # 6in4 (protocol 41) tunnel support

We can verify that ipv6 is working by typing:

root@OpenWrt:/etc# ifconfig br-lan
br-lan    Link encap:Ethernet  HWaddr 00:12:17:3A:C6:48  
          inet addr:192.168.6.1  Bcast:192.168.6.255  Mask:255.255.255.0
          inet6 addr: fe80::212:17ff:fe3a:c648/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5338 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4690 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:574933 (561.4 KiB)  TX bytes:2397889 (2.2 MiB)

and we can see that the inet6 addr is set with only a link-local (fe80::/64), unrouteable address. For troubleshooting, we’ll install tcptraceroute6.

opkg install tcptraceroute6

From this thread, we take the script listed and name it /etc/init.d/ipv6:

NOTE: I’ve made minor changes, altering br0 to br-lan, as the original script targets the White Russian release of OpenWRT and we’re using Kamikaze.

#!/bin/sh /etc/rc.common

# Information from the "Tunnel Details" page
SERVER_v4=	# Server IPv4 Address
SERVER_v6=	# Server IPv6 Address

CLIENT_v4=	# Client IPv4 Address
CLIENT_v6=	# Client IPv6 Address

# Uncomment and fill in if you have a /48
#ROUTED_48=	# your /48 netblock's gateway address, e.g. 2001:a:b::1
ROUTED_64=	# your /64 netblock's gateway address, e.g. 2001:a:b:c::1

START=50

start() {
	echo "Starting he.net IPv6 tunnel: "
	ip tunnel add henet mode sit remote $SERVER_v4 local $CLIENT_v4 ttl 255
	ip link set henet up

	ip -6 addr add $CLIENT_v6/64 dev henet
	ip -6 ro add default via $SERVER_v6 dev henet

	ip -6 addr add $ROUTED_64/64 dev br-lan
	# Uncomment if you have a /48
	#ip -6 addr add $ROUTED_48/48 dev br-lan
	ip -f inet6 addr

	echo "Done."
}
stop() {
	echo -n "Stopping he.net IPv6 tunnel: "
	ip link set henet down
	ip tunnel del henet

	ip -6 addr delete $ROUTED_64/64 dev br-lan
	# Uncomment if you have a /48
	#ip -6 addr delete $ROUTED_48/48 dev br-lan

	echo "Done."
}
restart() {
	stop
	start
}

We fill in the information available to us from the tunnelbroker.net admin page, which lists your existing tunnel configurations, make the script executable, and start it:

chmod +x /etc/init.d/ipv6
/etc/init.d/ipv6 start

root@OpenWrt:/etc/init.d# ping6 -c 5 ipv6.google.com
PING ipv6.google.com (2001:4860:8003::63): 56 data bytes
64 bytes from 2001:4860:8003::63: seq=0 ttl=55 time=89.572 ms
64 bytes from 2001:4860:8003::63: seq=1 ttl=55 time=88.701 ms
64 bytes from 2001:4860:8003::63: seq=2 ttl=55 time=121.524 ms
64 bytes from 2001:4860:8003::63: seq=3 ttl=55 time=87.989 ms
64 bytes from 2001:4860:8003::63: seq=4 ttl=55 time=88.010 ms

--- ipv6.google.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 87.989/95.159/121.524 ms
root@OpenWrt:/etc/init.d#

And we have IPv6 routing on the router. After we’re sure things are working, we can do:

/etc/init.d/ipv6 enable

which will configure the router to run our script on startup. All that’s left is a slight configuration change on the laptop, using an address out of the routed /64.
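
A minimal manual version of that change on the Mac, assuming en1 is the wireless interface and using placeholder addresses from the routed /64 (running radvd on the router would automate this instead):

sudo ifconfig en1 inet6 2001:a:b:c::2 prefixlen 64
sudo route -n add -inet6 default 2001:a:b:c::1

With that in place: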

tsavo:~ mcd$ ping6 -c 5 ipv6.google.com
PING6(56=40+8+8 bytes) 2001:470:4:590::cd34 --> 2001:4860:8007::67
16 bytes from 2001:4860:8007::67, icmp_seq=0 hlim=54 time=91.914 ms
16 bytes from 2001:4860:8007::67, icmp_seq=1 hlim=54 time=90.727 ms
16 bytes from 2001:4860:8007::67, icmp_seq=2 hlim=54 time=91.214 ms
16 bytes from 2001:4860:8007::67, icmp_seq=3 hlim=54 time=94.121 ms
16 bytes from 2001:4860:8007::67, icmp_seq=4 hlim=54 time=90.975 ms

--- ipv6.l.google.com ping6 statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 90.727/91.790/94.121/1.231 ms
tsavo:~ mcd$

Compared to the tunnel script running directly on the Mac, I’ve shaved about 51ms off each ping, which seems to indicate that the gif0 interface on the Mac is a little resource-heavy, since I’m routing through the WRT54GS via a WRT160Nv2 and still getting better ping times.

At this point, it would be wise to install ip6tables, shorewall6-lite or one of the other IPv6-capable firewalls. Configuring those is as easy as it would be on a normal machine, with Shorewall probably being the easiest of them to configure.
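
A minimal sketch with ip6tables, assuming the IPv6 conntrack modules are available (illustrative rules, not a complete ruleset):

opkg install ip6tables
# default-deny forwarded traffic, allow the LAN out through the tunnel
ip6tables -P FORWARD DROP
ip6tables -A FORWARD -i br-lan -o henet -j ACCEPT
ip6tables -A FORWARD -i henet -m state --state ESTABLISHED,RELATED -j ACCEPT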

Not bad for about 40 minutes of work – and now I can add other machines on my network and utilize IPv6.

Conversion to Dual Stack IPv6

April 25th, 2011

Over the weekend we took a large step and converted 25% of our network over to dual-stack IPv4/IPv6. We aren’t just saying that we’re ready for IPv6; we are actually using IPv6 internally and starting to move some public-facing sites so that they can serve both IPv4 and IPv6 enabled web surfers. We primarily run Debian, but have a few Windows, FreeBSD and other OSs; our current effort is switching the Debian machines over, since they comprise 95% of our network. We run 2.6.38 with a few config changes rather than the default Debian kernel, but much of the testing was done with a stock Debian kernel.

We ran into a few minor issues, but backups for the first group of machines that we converted are now running over IPv6, and email handled by our MX servers is handed off to those machines over IPv6. Currently, only the machines themselves have IPv6 addresses, so we don’t have many public-facing sites running dual-stack; still, the few that announce both IPv4 and IPv6 amount to almost 4% of our traffic. Clients that access the machines directly for SSH, FTP, POP3, IMAP or SMTP will use IPv6 if they are able. Most clients don’t use the actual device name for FTP/POP3/IMAP/SMTP, so most won’t use IPv6 until their public-facing site is IPv6 enabled.

Our network is relatively flat, which makes our deployment a little easier; the basic structure is:

edge router -> chassis switch
                        chassis switch
                        chassis switch
                        ...
edge router -> chassis switch

We use VRRP to announce an x:x::1/64; each machine gets a /128 from that /64, and then, using static routes, we route a /64 to each machine. Due to an issue with OSPFv3 on our current network, we had to fall back to static routes. Each machine is allocated a /128 from our main network and a /64 for the client. On virtual webhost machines, we might allocate /80s to each virtual client out of that /64, but we haven’t made a firm decision on that. We’ve also cheated and run IPv6 over its own connections to the chassis switch to make traffic and flow monitoring a little easier.

Our basic network now consists of every machine in a single /64, which cuts down on ARP/neighbor-discovery traffic and VLAN issues, but requires a slightly different configuration than our existing IPv4 network, which used VLANs.

When we configure a machine, we add the admin IP to it and push the config changes using our management software. We haven’t automated putting the initial IP address on each machine, as it requires route entries on our edge routers. Once OSPFv3 is fixed later this week, I expect the process to be more automated.

The first step is to take a /128 out of the /64 for the ‘device’ network and assign it to the machine:

ifconfig eth0 add xxxx:xxxx::c:1/64
route -A inet6 add default gw xxxx:xxxx::1

We opted to use ::ad:xxxx for admin machines and ::c:xxxx for client servers. Since the addresses are hexadecimal, you could assign cabinet numbers or switch numbers to make identifying a machine’s location a little quicker; perhaps an identifier for the building, the cabinet/rack, the switch it is connected to, etc. For now, we’re using :c: and :ad: to signify client and admin. Our primary storage server is :ad:bac1, our development machine is :ad:de, etc. Our admin network is unlikely to exceed 65,536 machines, but there is a lot of flexibility if you want to get creative.

Once we’ve added the initial IP, our management software inserts the following into /etc/network/interfaces:

iface eth0 inet6 static
        address xxxx:xxxx::c:1
        netmask 64
        endpoint any
        gateway xxxx:xxxx::1

At this point, the AAAA record for the device is published, and we can access the machine over ssh using IPv6.
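
The AAAA record itself is a single line in the zone file (hypothetical hostname, placeholder address matching the examples above):

host1.example.net.	IN	AAAA	xxxx:xxxx::c:1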

For Postfix, we needed to add the following to /etc/postfix/main.cf:

inet_protocols = ipv4, ipv6

Additionally, we needed to modify /etc/postfix/mynetworks.txt to add:

[xxxx:xxxx::/64]

which allows machines on our local network to communicate with the server and ‘relay’. It is possible that the mynetworks line on your installation doesn’t refer to a config file and is instead specified directly in /etc/postfix/main.cf:

mynetworks =
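
in which case the IPv6 network is appended to the existing list, e.g. (placeholder addresses):

mynetworks = 127.0.0.0/8, xx.xx.xx.0/24, [xxxx:xxxx::/64]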

Dovecot required changes to /etc/dovecot/dovecot.conf:

listen=[::], *

Pure-FTPD had problems with IPv6 reverse lookups:

echo yes > /etc/pure-ftpd/conf/DontResolve; /etc/init.d/pure-ftpd restart

And of course, /etc/resolv.conf:

nameserver xx.xx.xx.xx
nameserver xx.xx.xx.xx
nameserver xxxx:xxxx::ad:1
nameserver xxxx:xxxx::ad:2

Customer impact has been minor: we bounced (and lost) one email during the conversions due to the missing mynetworks parameter in Postfix. Debian’s version of Dovecot doesn’t listen on both interfaces with listen=[::] as one might imagine from reading the documentation, but that was caught on a test machine and didn’t affect any clients.

Many config files, such as Apache’s and Varnish’s, require [] around IPv6 addresses; it’s something to remember when you need to specify Listen directives on machines where multiple services listen on port 80 on separate IPs.
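
For example, in Apache (placeholder addresses):

Listen xx.xx.xx.xx:80
Listen [xxxx:xxxx::c:1]:80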

Most of the server software we’ve run across hasn’t had any issues. However, client software that uses char(15) to store IP addresses probably needs to be fixed.
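
The textual form of an IPv6 address can run to 39 characters (45 for IPv4-mapped addresses), so a minimal MySQL fix, shown here against a hypothetical table, is simply widening the column:

ALTER TABLE access_log MODIFY ip_address VARCHAR(45) NOT NULL;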

So far, I think we’ll be ready for World IPv6 Day, with over 98% of our machines running dual-stack, and we’re shooting for 20% of our clients’ public-facing sites to have IPv6 support by June 8, 2011.

We have two machines running Tux that are stuck on 2.4.37 and, regrettably, Tux appears to segfault when it receives IPv6 traffic. It is a shame, since Varnish and Nginx are still outclassed by a webserver written twelve years ago.

So far, the conversion process has been quite straightforward, with the minor issues you would expect when introducing IPv6 to applications and server stacks that weren’t written to handle it. I suspect we’ll have 98% of our machines dual-stack by May 7, which will give us a month to get 20% of our client base educated and convinced to turn on IPv6.

The Path to IPv6 from a webhosting perspective

March 29th, 2011

My goal in June 2010 was to be completely IPv4/IPv6 dual-stack by the end of 2010. This started a long, arduous process that required reworking portions of our network, upgrading the software on our border routers, increasing the memory on our border routers for the larger BGP table, removing a provider that refused to handle IPv6 in the data center we were located in, adding a separate provider so that we could have redundant IPv6 feeds, and a number of other things. In the 7 days since we turned up IPv6 and started announcing two /48s, we’ve gotten 25% of our network configured for IPv6 and expect to transition the remaining 75% in the next 15 days.

Of course, with IPv6 comes a new kernel, as the kernel we had been using didn’t have IPv6 support. 2.6.38 also comes with automatic process grouping, which in early testing has had a positive impact on several machines with different workloads, so we have an additional reason to deploy new kernels on every machine.

Some of the issues we ran into:

* Router
** Initial problem with IPv6 and the OS on the router
** Current minor issue with OSPFv3
* Route Performance Control Box
** appears to ignore IPv6 traffic
* Aggregate Network
** OSPFv3 support; altered the network design to re-flatten it (after unflattening it a few years back)
* Nameservers
** Currently using bind9, no issues, switching to PowerDNS for other reasons
** Glue records at the registrar required manual entry (the webform didn’t accept : in an IP address)
* MX Servers
** Postfix, no issues, added inet_protocols=ipv4, ipv6, restarted
** Some anti-spam software that depended on IP addressing acts a little odd
** The antivirus daemon appears to only listen on an IPv4 socket, but since that is an internal milter, it doesn’t cause any real problems for now.
** First 7 days, 247k emails processed, 2 from IPv6
* Webservers
* Load Balancers
** very odd issue with the new kernel, udev, and the SSD drives, not network/ipv6 related
* Cluster
** No issues, GFS, DRBD, Apache, Dovecot, etc all recognized IPv6
* General Machine issues
** Firewall software on each machine requires separate rulesets for IPv6. Not a huge problem, but one to consider.
* Client applications
** char(15) columns in MySQL used to store IP addresses
** parsing of Apache CLF doesn’t understand IPv6 addresses

One person testing with a Teredo tunnel was able to ping the site over IPv6 but wasn’t able to access it in a browser. After reading through a number of pages on the web, opening:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Dnscache\Parameters

and adding a DWORD value:

AddrConfigControl = 0

fixed the issue. For my home connection, I used TunnelBroker along with the script mentioned on the page Enable IPv6 on Mac OS X, the tunnelbroker.net way.
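
That script boils down to a handful of gif tunnel commands; roughly, with shell-variable placeholders for the endpoints from the Tunnel Details page:

sudo ifconfig gif0 tunnel $CLIENT_V4 $SERVER_V4
sudo ifconfig gif0 inet6 $CLIENT_V6 $SERVER_V6 prefixlen 128
sudo route -n add -inet6 default $SERVER_V6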

After receiving this tweet, I decided that running this site with a separate hostname for IPv6 was probably not a great test and put the AAAA records in DNS. So far, one person has mentioned difficulties reaching the site, but that was a problem with their ISP and transit: their ISP appears to be blocking protocol 41 packets, and switching to a tunnel fixed that problem.

All in all, most of the issues are very minor from a networking standpoint, but web applications are going to have the most trouble.

We’re working hard to make sure everything is dual-stack enabled by World IPv6 Day (June 8, 2011), but I suspect it will be 2020-2030 before we can deploy IPv6-only services.

IPv6 Readiness test

March 26th, 2011

According to the site test-ipv6.com, my laptop at the data center, using our DNS, appears to be working correctly.

Still some prep work left, but I converted another dozen or so machines this evening.
