Sunday, February 24, 2013

Off to ApacheCon NA

Weather permitting, Alan is heading out to ApacheCon NA 2013 tomorrow. He will be giving a presentation on working with transparency in Apache Traffic Server (ATS). Alan implemented most of the transparency support in ATS, so he knows what he is talking about.

He's spent the past month verifying his memory and working up a master setup script for transparent Apache Traffic Server. You can access his slides and scripts at ATS setup on the Network Geographics web site.

Thursday, December 27, 2012

End of teaching

After 8 years, I'm now officially done with teaching security courses at the University of Illinois. It has been an interesting run, but I needed to do something else for a while. I think academia has it right with the 7-year sabbatical cycle. For me, the itch to change things up hits after 6 to 8 years.

Earlier this fall I gave a talk in the ITI's Trust and Security Seminar series reviewing how the security education program at UIUC has changed during my time there. Here's a video of the talk; I forgot to turn on the mike, so the first 2 minutes and 30 seconds are silent.

Secure in the cloud

Over this summer and fall, I've jumped into the cloud while working with SafelyFiled. As a security person, you must make the obligatory snort about the innate untrustworthiness of the cloud. But after coming through this experience, I think the cloud offers a couple of security benefits.

The problem with the cloud is that it is "out there". I cannot physically secure the server. And coming from a traditional security background, I really want to physically secure things. I cannot be lazy and say it is OK to leave network traffic unencrypted because I can see where the wire runs. Once you commit to using infrastructure you don't own, you can no longer be lazy in your security analysis.

Truthfully, this analysis has been necessary for quite some time. If you are a small organization, you have been using third-party data centers for years now, and disks wander off in such shared data center environments. If you are part of a large organization, you must worry about the trustworthiness of other parts of your organization. But with the growth of virtualization and the press coverage around it, the need to trust nothing becomes more and more apparent. So one good security outcome of the move to the cloud is the (hopefully) increased security paranoia when designing your system architecture.

Another benefit was pointed out by a presenter at the recent AWS developers' conference. This person was presenting on the Virtual Private Cloud (VPC). He walked through the network ACLs, routing tables, firewall rules, and MAC check rules that VPC provides, and observed that for someone seeking to verify that a network is set up securely, it is much easier in VPC than it would be in a real physical network with a variety of enforcing devices. Of course this assumes that the VPC is correctly enforcing the rules. But most people with a moderately complex network will have a much weaker understanding of their security stance in a physical network than they would in a VPC environment.

So the cloud is by no means making securing the world easier. And a security-ignorant individual is going to be just as security-ignorant in a cloud environment. But for the security savvy, the cloud has its good points.

Tuesday, November 20, 2012

Saved the day with the netfilter arcane

My husband was setting up an experiment to test his fixes on Apache Traffic Server (ATS).  He had set up his iptables rules, his routing tables, and his processes, but the packets still were not making it past ATS.

I managed to save the day by spending 5 minutes staring at his rules and channeling all those hours of staring fruitlessly at similar rules setting up my own torrent proxy experiments.  The word "rp_filter" popped out of my mouth.  And for once, that was it!  Sometimes the seemingly useless and arcane is just what is needed.

Thursday, November 17, 2011

IPv6 Transition Dangers

IPv6 has been right around the corner for a long time (13 years by my count).  I have just started embracing IPv6 for the home environment this year.  I bought a Buffalo router running DD-WRT to replace the Linksys box we had been using as our home firewall/router.  You can shell into the underlying embedded Linux system and configure things directly.

I was able to configure a 6to4 tunnel on the router and get our home environment operational with IPv6.  I was motivated to do this to complete some testing for a client.
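
For background, 6to4 (RFC 3056) derives the site's entire IPv6 prefix from the router's public IPv4 address: a.b.c.d maps to 2002:aabb:ccdd::/48, with the octets hex-encoded. A small illustrative sketch of that mapping (not the actual DD-WRT script):

```python
import ipaddress

def sixtofour_prefix(ipv4: str) -> str:
    """Map a public IPv4 address to its 6to4 /48 prefix (RFC 3056).

    a.b.c.d -> 2002:aabb:ccdd::/48, where aabb and ccdd are the hex
    encodings of the four address octets.
    """
    packed = ipaddress.IPv4Address(ipv4).packed  # the 4 raw octets
    return "2002:{:02x}{:02x}:{:02x}{:02x}::/48".format(*packed)
```

The router then encapsulates IPv6 traffic for that prefix inside IPv4 (IP protocol 41), typically sending it to the anycast relay at 192.88.99.1.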

The downside is that ip6tables does not come with the version of DD-WRT I'm using; you need to build the kernel module yourself.  I got about halfway through that and got distracted by other things, so I turned off our tunnel.  Perhaps I can get it set up over Thanksgiving break.

Former Cisco colleagues Darrin Miller and Sean Convery wrote an excellent threat analysis of IPv6 a few years back, which you can find at Sean's IPv6 page.  Darrin was gracious enough to visit my class several years ago, and he gave an excellent presentation of the issues.  One of the big IPv6 issues he identified was tunnel transition.  Even though I never asked for a tunnel, my Windows 7 system seems to have set up a 6to4 tunnel for me.  If traffic is tunneled going past the border firewall, the firewall cannot make even the most basic checks that most home routers perform to prevent or limit connections initiated from the outside.  Seems like that threat is coming true as predicted.
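
The reason a border device can at least notice this traffic is that 6to4 (and other 6in4 tunnels) rides inside IPv4 as IP protocol number 41, a single byte at offset 9 of the IPv4 header. A toy sketch of the check (function name is mine):

```python
def is_ipv6_tunnel(packet: bytes) -> bool:
    """Check whether a raw IPv4 packet carries tunneled IPv6.

    6to4 and other 6in4 tunnels use IP protocol number 41; the
    protocol field is byte offset 9 of the IPv4 header.
    """
    if len(packet) < 20 or packet[0] >> 4 != 4:
        return False  # too short for an IPv4 header, or not IPv4
    return packet[9] == 41
```

A home firewall that is not ready to filter IPv6 can simply drop such packets outright (e.g., an iptables rule matching protocol 41), which is the blunt fix for the bypass Darrin predicted.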

Old blog

I had been keeping a blog sporadically a few years back.  My husband loves to set up Movable Type blogs, but they tend to disappear after a while.  When I went to update my LinkedIn entry, I saw that the old blog is still out there at

Another Learning Experience Being in the Middle of the Network

As I noted in my earlier blog, I've been working on a project that transparently intercepts network packets on a bridged/routed Linux box.  I'm intercepting more packets than I really need, and processing the first few bytes of each session to determine whether it belongs to a protocol I care about.  If it isn't the target protocol, the program mindlessly forwards the rest of the packets in the session.

This strategy seemed to work reasonably well.  The program had been in various states of test for about a year and seemed to mostly work as advertised.  I finally got paired with a more dedicated engineer from the customer company to finish up the testing, and we deployed in the office environment.  Slingbox from the office stopped working.  I read up on the Slingbox protocol, took packet captures from both interfaces, and stared at them.

Finally, I noticed that the first packet of the TCP session on the incoming interface was N bytes, but two packets of sizes O and P (such that O + P = N) appeared on the outgoing interface.  My program was passing along the first O bytes before deciding this wasn't the right protocol, then passing along the rest of the packet.

The Slingbox program "should" have been smart enough to keep reading if it didn't get enough bytes on the first read, but evidently this particular Slingbox implementation didn't.  So my program, as the intermediary, adapted by buffering its write, and then all was well with Slingbox.

I noticed a few days later that fewer computers were communicating over the desired protocol with my program in the middle than with my program out of the picture.  I saw that I was splitting the first packet in the protocol-match case too.  Evidently some implementations of the target protocol also don't continue reading on the TCP socket.  I fixed the program to buffer the first packet in all cases, and the difference in the number of communicators went away.
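
Both fixes reduce to the same pattern: accumulate bytes until there are enough to classify the session, and only then flush the buffered prefix in a single write, so a peer that never re-reads still sees one intact first message. A minimal sketch of the idea, with a made-up MYPROTO preamble standing in for the real protocol's signature:

```python
MAGIC = b"MYPROTO"  # hypothetical protocol preamble, for illustration
NEED = len(MAGIC)   # bytes required before we can classify the session

class Classifier:
    """Hold early reads until a decision is possible, then flush the
    buffered prefix downstream as one write."""

    def __init__(self):
        self.buf = b""
        self.decision = None  # None = undecided; True/False once classified

    def feed(self, chunk: bytes) -> bytes:
        """Return the bytes that are safe to forward for this read."""
        if self.decision is not None:
            return chunk  # already classified: pass everything through
        self.buf += chunk
        if len(self.buf) < NEED:
            return b""  # not enough to decide: hold, don't split the packet
        self.decision = self.buf.startswith(MAGIC)
        out, self.buf = self.buf, b""
        return out  # the whole buffered prefix goes out in one write
```

Note that the decision and the flush happen together in all cases, matching the final fix: the match and no-match paths both get the coalesced first write.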

I learned two lessons.  1. Never make assumptions about how network communication "should" be implemented.  2. A dedicated engineer driving testing can uncover a lot of interesting issues.