I had blogged about this some time ago. The configuration I described in that post worked fine on my laptop, which runs Debian, but when I tried it on my desktop, where I use Gentoo, it wouldn’t work.

It took me *3 days* of ‘debugging’ to find out why that happened!

I tried various changes to the iptables and iproute2 configuration, giving more hints to both utilities so that they would use the correct routing table, mark the packets correctly, etc., but it still wouldn’t work.

After a lot of time tweaking the configuration without results, I noticed that, although ping -I eth0 ${VPN_SERVER} didn’t ‘work’ (with openvpn running, and tap0 configured with the correct address/netmask), I could see with tcpdump the ECHO REPLY packets sent by the VPN server, with the correct source and destination addresses.

After stracing the ping command, I saw that when ping issued a recvmsg syscall, recvmsg returned with -EAGAIN. So now I knew that the packets did arrive at the interface with the correct addresses, but they couldn’t ‘reach’ the upper layers of the kernel network stack.
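
For reference, the kind of commands I used to figure this out were roughly the following (the exact invocations are reconstructed from memory; adjust the interface names to your setup):

# watch for the ICMP echo replies arriving on the interface
tcpdump -n -i tap0 icmp
# trace ping's network-related syscalls (recvmsg kept returning -EAGAIN)
strace -e trace=network ping -I eth0 ${VPN_SERVER}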

The catch was that both machines were running vanilla kernels, so I couldn’t blame any Debian- or Gentoo-specific patches. But since I knew that the problem was in the kernel, I tried to see if any kernel .config options regarding NETFILTER and multiple routing tables differed between the two configs. But I couldn’t find anything that could cause that ‘bug’.

So, since the kernel sources were the same, and I couldn’t find anything in the .configs that could cause the problem, I tried tweaking some /proc/sys/net ‘files’, although I couldn’t see why these would differ between the two machines. And then I noticed some /proc/sys/net/ipv4/ files on Gentoo that didn’t show up on Debian (/proc/sys/net/ipv4/cipso*).

I googled to find out what cipso is, and I finally found out that it is part of the NetLabel project. CIPSO (Common IP Security Option) is an IETF draft (it’s quite old actually) and is implemented as a ‘security module’ in the Linux kernel, and it was what caused the problem, probably because it tried to do some verification on the inbound packets, which failed, and therefore the packets were ‘silently’ dropped. LWN has an article with more information about packet labeling and CIPSO, and there’s also related documentation in the Linux kernel.

make defconfig enables NetLabel, but Debian’s default configuration had it disabled, and that’s why the OpenVPN/iproute2/iptables configuration worked on Debian, but failed on Gentoo.

Instead of compiling a new kernel, one can just do

echo 0 > /proc/sys/net/ipv4/cipso_rbm_strictvalid

and disable CIPSO verification on inbound packets, so that multiple routing tables and packet marking work as expected.
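
To make the change persistent across reboots, the same setting should also work through sysctl (assuming the standard /etc/sysctl.conf mechanism, and that the sysctl name matches the /proc entry):

# /etc/sysctl.conf
net.ipv4.cipso_rbm_strictvalid = 0

# or, at runtime
sysctl -w net.ipv4.cipso_rbm_strictvalid=0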

A couple of days ago, we did some presentations about DNS at a FOSS NTUA meeting.

I prepared a presentation about DNS tunneling and how to bypass Captive Portals at Wifi Hotspots that require authentication.
(We want to do another presentation, to test ICMP/ping tunnel too ;)).

I had blogged on that topic some time ago.
It was about time for a test-drive. :P

I set up iodine, a DNS tunneling server (and client), and I was ready to test it, since I would be travelling with Minoan Lines the next day.
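
For reference, the setup was roughly the following (the domain, password and tunnel subnet here are made up, not the ones we actually used):

# on the server: needs a subdomain (e.g. t.example.com) whose NS record points to it
iodined -f -P somepassword 172.16.0.1 t.example.com
# on the client, behind the captive portal
iodine -f -P somepassword t.example.com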

I first did some tests from my home 24Mbps ADSL connection, and the results weren’t very encouraging. Although the tunnel did work, and I could route all of my traffic through the DNS tunnel and over a second, secure OpenVPN tunnel, the bandwidth dropped to ~30Kbps when downloading from the NTUA FTP server through the DNS tunnel.
(The tunnel also worked with the NTUA Wifi Captive Portal, although at first we had some ‘technical issues’, i.e. I hadn’t set up NAT on the server to masquerade and forward the traffic coming from the tunnel :P).
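
The NAT part on the server is just the usual masquerading setup, something like the following (assuming the made-up 172.16.0.0/24 tunnel subnet from above, and that eth0 is the server’s Internet-facing interface):

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o eth0 -j MASQUERADE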

The problem is that the bandwidth of the Minoan Lines Wifi (actually Forthnet ‘runs’ it, afaik) to anything outside the ‘local’ network was ~30Kbps (terrible, I know), without using DNS tunneling. So, I wasn’t very optimistic. (I think they have some satellite connection, or something like that, from the Wifi to the Internet).

When I was on the ship, I tried to test it. At first, I encountered another technical issue (the local DNS server had an IP inside the Wifi local network, and due to NAT the IP our server was ‘seeing’ was different from the IP of the DNS packets, so we had to run iodined with the -c flag). Luckily, FOSS NTUA members (who had root access on the computer running iodined) are 1337 and fixed that in no time. :P

And at last, I had a ‘working’ DNS tunnel, but with extremely high ping times (2sec RTT) to the other end of the tunnel, and when I tried to route all traffic through the tunnel I had a ridiculous 22sec RTT to ntua.gr. Of course, even browsing the Web was impossible, since all the HTTP requests timed out before an answer could reach my laptop. :P

However, because I am a Forthnet customer (for my ADSL connection), I was able to use the username/password of my home ADSL connection and get free access to the Internet from their hotspot (with the amazing bandwidth of ~30Kbps :P). At least they do the authentication over SSL. :P

Although DNS tunneling didn’t really work in this case (the tunnel itself worked, but with the bandwidth being so low, I didn’t have a ‘usable’ connection to the Internet), I think that at other hotspots that provide better bandwidth/connectivity it can be a very effective way to bypass the authentication and use them for free. ;)

Probably, there’ll be a Part 3, with results from bandwidth benchmarks inside the NTUA Wifi, and maybe some ICMP tunneling stuff.

Cheers! :)

So, this one is only for Greeks, or people from other countries who have travelled with Minoan Lines… :P

If you have ever travelled from Athens to Heraklion (or vice-versa :P) with a Minoan Lines ship, maybe you’ll have noticed that there’s a Wifi Hotspot, owned by Forthnet. If you try to use it, you’ll be presented with a Captive Portal.

In order to get access to the Internet, you have to pay some money (extremely overpriced, considering the speed/bandwidth, although … you are on a ship :P).

I suppose Forthnet has many other hotspots like this one, and I guess the prices are pretty much the same. Unless you are already a Forthnet customer (like I am). Then, you have free access.

But, even if you are a Forthnet customer, I think it’s fun to find out if/how you can bypass this captive portal.

A month ago, I was travelling to Crete, so I tried some things, but everything phailed. :P

So, I googled a bit, and I found some interesting things.

Apparently, the best, if not the only, way to bypass the captive portal is DNS Tunneling.

However, the connection was awful, so SSHing to my server and setting up the “customized” DNS server was impossible.

So, I did all the preparations (DNS server modifications, etc…) while I was in Crete, and hoped I could test it when I’d travel back to Athens.

But the Wifi Hotspot (specifically the Captive Portal “server”, I think) was ‘down’ when I was travelling, so I couldn’t test DNS tunneling.

Maybe, next time.

Anyway, if anyone has tried it, let me know.

Although I think the bandwidth/speed will be terrible, considering the DNS tunneling overhead.

Btw, tricks like MAC/IP spoofing, ARP poisoning, hacking a poor unpatched Windoze user (etc etc) and setting up a NAT are out of the question, since I wanted to ‘hack’ the hotspot/portal, and not the (l)users. :P

Ch(b)eers!

(to the hotspot admins! :P)

RFC mania

January 22, 2010

I had to do an SNMP-related exercise for the Network Management Lab. We had to write a MIB (Management Information Base) for a firewall, to describe the filters and the rules of the firewall.
The MIB had to be written in SNMPv2 SMI, so I read some RFCs.
I never liked the RFCs, and now I think that they’re even more disgusting. :P
Actually, I think that people who are involved with the whole process of the RFCs have serious personal problems(just kidding :P).
And to prove that I have a point, a friend of mine reminded me of an epic RFC.
RFC 1149, or IP over Avian Carriers!!!!
And that’s not the worst part. There’s more!
Some people did an actual implementation of the RFC!
I knew about the RFC but not that there was an “implementation”. :P
About 10 years ago.
Link 1 and Link 2.
The highlight (besides the pigeons of course) was the source code, and the ping times.

Ok, in fact, I would say that the whole thing was fun and maybe interesting, but the RFCs are still disgusting. :P
Except for RFCs like these :P
:D

OpenVPN/iptables/iproute2

January 19, 2010

Here’s the deal.
We have an OpenVPN server which is part of a network, for instance the network 10.0.0.0/24, with the server’s IP being 10.0.0.15.
We connect to the OpenVPN server, using *UDP*, and a virtual *tap* interface, let’s say, tap0.
After we’ve connected successfully to the VPN server, we run a DHCP client on the tap0 interface, and get an IP inside the 10.0.0.0/24 network, let’s say 10.0.0.55.
Along with the IP assignment, a new route will be added to the routing table: a route to the network 10.0.0.0/24, with no gateway (i.e. on-link), through the tap0 interface.
However, now we can’t contact the OpenVPN server. After the new route is added, all of our packets to the VPN server, including the VPN packets themselves, will be routed through the tap0 interface, and therefore the VPN will stop working.
So, we add to the routing table a route to the VPN server (10.0.0.15), via our local gateway (for instance 192.168.1.1), through our physical network interface (for instance eth0).
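With the example addresses above, that would be something like:
ip route add 10.0.0.15/32 via 192.168.1.1 dev eth0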
Now, we can communicate with every other host inside the 10.0.0.0/24 network over a VPN encrypted channel. But all of our connections to the VPN server itself will go through the unencrypted channel (the 192.168.1.1/eth0 route, bypassing the VPN/tap0 interface).
But that’s not what we actually want.
Actually, we want to communicate with the VPN server over the VPN ‘tunnel’ (and through tap0) for all the connections we make, except for the VPN connection itself.
That’s possible if we use iptables and iproute2.
We’ll mark the packets of the VPN connection using iptables (i.e. the packets using UDP, with destination address the VPN server, and destination port the port on which the server listens, most likely port 1194).
iptables -t mangle -A OUTPUT -p udp -d 10.0.0.15 --dport 1194 -j MARK --set-mark 1
Now, we’ll create a rule with iproute2, which will route the marked packets using a different routing table.
First we create the new table.
echo 200 vpn.out >> /etc/iproute2/rt_tables
We add the rule.
ip rule add fwmark 1 lookup vpn.out
And we add the route for the vpn server to the vpn.out table.
ip route add 10.0.0.15 via 192.168.1.1 dev eth0 table vpn.out
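To double-check that the rule and the route are in place, something like this should do (standard iproute2 commands):
ip rule show
ip route show table vpn.out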
One last thing.
With this configuration, there’s a problem in the selection of the source address for the VPN packets to the VPN server. Because the marking and the change of route happen later, OpenVPN will see the “10.0.0.0/24, no gateway, dev tap0” route in the main routing table, and will select the tap0 IP as the source address, which is obviously wrong, since we want to be routed through the eth0 interface (with IP 192.168.1.2, for instance). This is fixed if we add the ‘local 192.168.1.2’ option to our VPN client configuration file, so that OpenVPN binds to that address and correctly selects it as the source address.
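So the relevant part of the OpenVPN client configuration would look roughly like this (addresses/port taken from the example above; the rest of the options depend on the actual setup):
dev tap0
proto udp
remote 10.0.0.15 1194
# bind to the eth0 address, so that it is selected as the source address
local 192.168.1.2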
That’s it!
We send only the VPN packets through the 192.168.1.1/eth0 route, and everything else, including all other connections to the VPN server, is sent over the VPN.
This ‘trick’ is very useful when you want to be able to SSH to the VPN server, but want to prohibit SSH connections from IPs outside the local network.

sshd + reverse DNS lookup

October 19, 2009

This post is mainly for ‘self reference’, in case something like this happens again.

According to the sshd man page, by default, sshd will perform a reverse DNS lookup, based on the client’s IP, for various reasons.

A reverse DNS lookup is used in order to add the hostname to the utmp file, which keeps track of the logins/logouts on the system. One way to ‘switch it off’ is by using the -u0 option when starting sshd. The -u option is used to specify the size of the field of the utmp structure that holds the remote host name.

A reverse lookup is also performed when the configuration (or the authentication mechanism used) requires such a lookup. The HostbasedAuthentication auth mechanism, a “from=hostname” option in the authorized_keys file, or an AllowUsers/DenyUsers option in sshd_config that includes hostnames, all require a reverse DNS lookup.

Btw, the UseDNS option in sshd_config, which I think is enabled by default, will not prevent sshd from doing a reverse lookup, for the above-mentioned reasons. However, if this option is set to ‘no’, sshd will not try to verify that the resolved hostname maps back to the same IP that the client provided (a check that adds an extra ‘layer’ of security).
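
For example, these are the kind of options I mean (the hostname patterns are what force the reverse lookups; hostnames and the key below are just placeholders):

# sshd_config
UseDNS no
HostbasedAuthentication yes
AllowUsers alice@*.example.com
# ~/.ssh/authorized_keys
from="*.example.com" ssh-rsa AAAA... alice@laptop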

So, the point is that if for some reason the ‘primary’ nameserver in resolv.conf is not responding, you’ll experience a lag when trying to log in using ssh, which can be confusing if you don’t know the whole reverse DNS story.

Another thing that I hadn’t thought of before I learned about sshd reverse lookups is that a DNS problem can easily ‘lock you out’ of a computer, if you use hostname-based patterns with TCP wrappers (hosts.allow, hosts.deny). And maybe this can explain some “Connection closed by remote host” errors when trying to log in to a remote computer. :P
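
E.g. with something like this in the TCP wrappers files (hostnames made up), a failing reverse lookup is enough to lock you out:

# /etc/hosts.allow
sshd: .example.com
# /etc/hosts.deny
sshd: ALL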
