Thursday, November 17, 2011

IPv6 Transition Dangers

IPv6 has been right around the corner for a long time (13 years by my count).  I have just started embracing IPv6 for the home environment this year.  I bought a Buffalo router running DD-WRT to replace the Linksys system we had been using for our home firewall/router.  You can shell into the underlying embedded Linux system and configure things directly.

I was able to configure a 6to4 tunnel on the router and get our home environment operational with IPv6.  I was motivated to do this to complete some testing for a client.
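
For reference, the tunnel configuration amounted to something like the following (a minimal sketch; 192.0.2.1 stands in for the router's real public IPv4 address, from which the 2002::/48 prefix is derived):

# 6to4 carries IPv6 inside IPv4 protocol 41; "local" is the router's public IPv4 address (example value)
ip tunnel add tun6to4 mode sit remote any local 192.0.2.1 ttl 64
ip link set dev tun6to4 up
# the 6to4 prefix embeds the IPv4 address: 192.0.2.1 becomes 2002:c000:201::/48
ip -6 addr add 2002:c000:201::1/16 dev tun6to4
# send the rest of the IPv6 Internet through the 6to4 anycast relay
ip -6 route add 2000::/3 via ::192.88.99.1 dev tun6to4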

The downside is that ip6tables does not come with the version of DD-WRT I'm using.  You need to build the kernel module.  I got about halfway through that and got distracted by other things, so I turned off our tunnel.  Perhaps I can get that set up over Thanksgiving break.
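
Once the module is built, the first rules I'd want are roughly these (a sketch, assuming IPv6 connection tracking is available and that br0 and tun6to4 are the LAN bridge and tunnel interfaces):

# default to dropping forwarded IPv6 traffic
ip6tables -P FORWARD DROP
# allow return traffic for connections initiated from inside
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# let the LAN initiate connections out through the tunnel
ip6tables -A FORWARD -i br0 -o tun6to4 -j ACCEPT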

Former Cisco colleagues Darrin Miller and Sean Convery wrote an excellent threat analysis of IPv6 a few years back, which you can find at Sean's IPv6 page.  Darrin was gracious enough to come and visit my class several years back, and he gave an excellent presentation of the issues.  One of the big IPv6 issues he identified was tunneled transition mechanisms.  Even though my home router isn't providing IPv6 at the moment, my Windows 7 system seems to have set up a 6to4 tunnel for me on its own.  If traffic goes past the border firewall inside a tunnel, the firewall cannot make even the most basic checks that most home routers do to prevent or limit connections initiated from the outside.  Seems like that threat is coming true as predicted.
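
If you don't want unsanctioned tunnels slipping past the border, the blunt countermeasures look something like this (a sketch; the router rule assumes an iptables-based firewall):

# on the router, refuse to forward IPv6-in-IPv4 tunnel traffic (IP protocol 41) from hosts behind it
iptables -A FORWARD -p 41 -j DROP
# on the Windows 7 machine itself, turn off the automatic 6to4 interface
netsh interface 6to4 set state disabled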

Old blog

I had been sporadically keeping a blog a few years back.  My husband loves to set up Movable Type blogs, but they tend to disappear after a while.  When I went to update my LinkedIn entry, I saw that the old blog is still out there at http://thought-mesh.net/twt/

Another Learning Experience Being in the Middle of the Network

As I noted in my earlier blog, I've been working on a project that involves transparently intercepting network packets on a bridged/routed Linux box.  I'm intercepting more packets than I really need, and processing the first few bytes of the session to determine whether the packets are part of a protocol I care about.  If I determine this isn't the target protocol, the program mindlessly forwards the rest of the packets in the session.

This strategy seemed to work reasonably well.  The program has been in various states of test for about a year and seemed to mostly work as advertised.  I finally got paired with a more dedicated engineer from the customer company to finish up the testing, and we deployed in the office environment.  Slingbox from the office stopped working.  I read up on the Slingbox protocol, took packet captures from both interfaces, and stared at them.

Finally, I noticed that the first packet of the TCP session on the incoming interface was N bytes, but two packets of size O and P (such that O + P = N) appeared on the outgoing interface.  My program was passing along the first O bytes before deciding this wasn't the right protocol, and then passing along the rest of the packet.

The Slingbox program "should" have been smart enough to keep reading if it didn't get enough bytes on the first read, but evidently this particular Slingbox implementation didn't.  So, as the program in the middle, my program adapted and buffered its write, and then all was well with Slingbox.

I noticed a few days later that I was getting fewer computers communicating via the desired protocol with my program in the middle than when my program was out of the picture.  I saw that I was splitting the first packet in the protocol-match case too.  Evidently some implementations of the target protocol also don't continue reading on the TCP socket.  I fixed the program to buffer the first packet in all cases, and the difference in the number of communicators went away.

Two lessons I learned: 1. Never make assumptions about how the network communication "should" be implemented.  2. A dedicated engineer driving testing can uncover a lot of interesting issues.

Pictures of Chickens

There can never be enough pictures of chickens on the Internet.

Two girls from our original flock of five last summer waiting to get in. So far only one has made it in. She got really confused and stayed on the door mat squawking until I shooed her out.


The girls from last summer taking a dust bath in our fire pit


At the beginning of this summer, we had the unfortunate chicken massacre. 4 of our 5 chickens were eaten by some unseen predator. Based on how the predator had to access the chickens, we suspect a raccoon. We reinforced the coop and waited until the end of the summer to get two new hens from friends. Below are Brownie and one of the originals, Chooksie.



The other new chicken is below, Blackie.


Tuesday, November 15, 2011

Fun with fragments and brouters

I've been working on a project that involves transparently intercepting traffic from a Linux bridge and manipulating the traffic.  I've learned all kinds of things about tproxy, iptables, policy routing, and other exciting technologies.  This week's life lesson has involved IP fragmentation.

Tproxy requires that packets be routed to the loopback interface, so that packets not really addressed to the intercepting machine are delivered to a process on that machine (thus implementing the "transparent" proxy).  This means that my bridged machine must route some of the traffic.  No problem: the broute table in ebtables lets us identify traffic to move from the bridge path to the routed path.
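
The routing half of the tproxy setup looks roughly like this (a sketch of the standard recipe; the mark value, table number, listening port 3128, and the TCP port range are stand-ins for my actual configuration):

# mark matching traffic and divert it to the local proxy port without rewriting the destination address
iptables -t mangle -A PREROUTING -p tcp --dport 1024:65535 -j TPROXY --on-port 3128 --tproxy-mark 0x1/0x1
# policy routing: marked packets consult a table whose only route says "deliver everything locally"
ip rule add fwmark 0x1/0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100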

In my case, the traffic I'm interested in is not well defined by a particular port or small set of ports; I have to consider all high-port traffic.  So I have the following rules in ebtables (in the BROUTING chain, the redirect target of DROP actually means "drop the frame from the bridge path and hand it to the routing path"):

ebtables -t broute -A BROUTING -p IPv4 --ip-protocol udp  --ip-source-port 1024:65535 --ip-destination-port 1024:65535 -j redirect --redirect-target DROP

ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp  --ip-source-port 1024:65535 --ip-destination-port 1024:65535 -j redirect --redirect-target DROP

All was well with the world until we encountered an environment with SIP and TFTP.  Some packets appeared on one interface and didn't get passed to the other interface.  However, ICMP time-exceeded messages (fragment reassembly time exceeded) did appear.  Then I noticed that the missing packets were all fragmented.  Finally, it hit me that the broute table is evaluated before IP defragmentation is even considered.  So the first fragment contains the port information and gets passed to the routing path, but the subsequent fragments do not contain port information and so stay on the bridge path.  The later fragments never appear along the routed path, so reassembly never completes and the first fragment is never forwarded.

Once I changed the ebtables rules to not check the ports but instead send all TCP and UDP traffic to the routing path, the packets started flowing again.
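
The revised rules end up looking roughly like this (reconstructed; the same redirect, just without the port matches):

ebtables -t broute -A BROUTING -p IPv4 --ip-protocol udp -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp -j redirect --redirect-target DROP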

It seems that with path MTU discovery it is fairly rare to find IP fragments of TCP streams these days, but with UDP traffic it is the responsibility of the application writer to create small enough packets to avoid fragmentation.  Thus, I didn't see this problem until we had UDP-based SIP and TFTP services in the environment.