A couple of days ago, we gave some presentations about DNS at a FOSS NTUA meeting.

I prepared a presentation about DNS tunneling, and how it can be used to bypass the Captive Portals of Wifi hotspots that require authentication.
(We want to do another presentation to test ICMP/ping tunneling too ;)).

I had blogged on that topic some time ago.
It was about time for a test-drive. :P

I set up iodine, a DNS tunneling server (and client), and I was ready to test it, since I would be travelling with Minoan Lines the next day.

I first did some tests from my home 24Mbps ADSL connection, and the results weren’t very encouraging. The tunnel did work, and I could route all of my traffic through the DNS tunnel (and over a second, secure OpenVPN tunnel), but the bandwidth dropped to ~30Kbps when downloading from the NTUA FTP server through the DNS tunnel.
(The tunnel also worked with the NTUA Wifi Captive Portal, although at first we had some ‘technical issues’, i.e. I hadn’t set up NAT on the server to masquerade and forward the traffic coming from the tunnel. :P)

The problem is that the bandwidth of the Minoan Lines Wifi (actually, Forthnet ‘runs’ it, afaik) to the outside world (not inside the ‘local’ network, of course) was ~30Kbps (terrible, I know) even without DNS tunneling, so I wasn’t very optimistic. (I think they have a satellite connection, or something like that, from the Wifi to the Internet.)

When I was on the ship, I tried to test it. At first, I encountered another technical issue: the local DNS server had an IP inside the Wifi local network, and due to NAT, the IP our server was ‘seeing’ was different from the source IP of the DNS packets, so we had to run iodined with the -c flag. Luckily, the FOSS NTUA members (who had root access on the computer running iodined) are 1337 and fixed that in no time. :P

And at last, I had a ‘working’ DNS tunnel, but with extremely high ping times (a 2sec RTT) to the other end of the tunnel, and when I tried to route all my traffic through the tunnel, I got a ridiculous 22sec RTT to ntua.gr. Of course, even browsing the Web was impossible, since all the HTTP requests timed out before an answer could reach my laptop. :P

However, because I am a Forthnet customer (for my ADSL connection), I was able to use the username/password of my home ADSL connection to get free access to the Internet from their hotspot (with the amazing bandwidth of ~30Kbps :P). At least they do the authentication over SSL. :P

Although DNS tunneling didn’t really work in this case (the tunnel itself worked, but the bandwidth was so low that I didn’t have a ‘usable’ connection to the Internet), I think that at other hotspots, which provide better bandwidth, it can be a very effective way to bypass the authentication and use them for free. ;)

There’ll probably be a Part 3, with results from bandwidth benchmarks inside the NTUA Wifi, and maybe some ICMP tunneling stuff.

Cheers! :)

Thanks to “Understanding the Linux Kernel”, I spent several hours trying to understand the APM code used to ‘call’ the APM Protected Mode 32-bit Interface Connect.
Of course, APM has been deprecated for ages, but I was curious. :P

As the APM 1.2 specification states:

The APM BIOS 32-bit protected mode interface requires 3 consecutive
selector/segment descriptors for use as 32-bit code, 16-bit code, and data segments,
respectively. Both 32-bit and 16-bit code segment descriptors are necessary so the
APM BIOS 32-bit interface can call other BIOS routines in a 16-bit code segment if
necessary. The caller must initialize these descriptors using the segment base and
length information returned from this call to the APM BIOS. These selectors may
either be in the GDT or LDT, but must be valid when the APM BIOS is called in
protected mode.

So, at boot time, Linux queries the APM BIOS for the base and length of the segments that the APM code uses (query_apm_bios()):

	/* "32-bit connect": INT 0x15, AH=0x53 (APM), AL=0x03 */
	ireg.al = 0x03;
	intcall(0x15, &ireg, &oreg);

	boot_params.apm_bios_info.cseg        = oreg.ax;
	boot_params.apm_bios_info.offset      = oreg.ebx;
	boot_params.apm_bios_info.cseg_16     = oreg.cx;
	boot_params.apm_bios_info.dseg        = oreg.dx;
	boot_params.apm_bios_info.cseg_len    = oreg.si;
	boot_params.apm_bios_info.cseg_16_len = oreg.hsi;
	boot_params.apm_bios_info.dseg_len    = oreg.di;

These are the values that Linux will use to set up the appropriate segments in the Global Descriptor Table (GDT):

	/*
	 * Set up the long jump entry point to the APM BIOS, which is called
	 * from inline assembly.
	 */
	apm_bios_entry.offset = apm_info.bios.offset;
	apm_bios_entry.segment = APM_CS;
	/*
	 * The APM 1.1 BIOS is supposed to provide limit information that it
	 * recognizes.  Many machines do this correctly, but many others do
	 * not restrict themselves to their claimed limit.  When this happens,
	 * they will cause a segmentation violation in the kernel at boot time.
	 * Most BIOS's, however, will respect a 64k limit, so we use that.
	 *
	 * Note we only set APM segments on CPU zero, since we pin the APM
	 * code to that CPU.
	 */
	gdt = get_cpu_gdt_table(0);
	set_desc_base(&gdt[APM_CS >> 3],
		 (unsigned long)__va((unsigned long)apm_info.bios.cseg << 4));
	set_desc_base(&gdt[APM_CS_16 >> 3],
		 (unsigned long)__va((unsigned long)apm_info.bios.cseg_16 << 4));
	set_desc_base(&gdt[APM_DS >> 3],
		 (unsigned long)__va((unsigned long)apm_info.bios.dseg << 4));

So, now the APM segments (well, their descriptors) are ready to use. Two details worth noting: APM_CS >> 3 converts the segment selector into its index in the GDT (the low three bits of a selector hold the requested privilege level and the table indicator bit), and since the BIOS reports real-mode-style segment values, they are shifted left by 4 bits to get the physical base address, which __va() then translates into a kernel virtual address.
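As a side note, set_desc_base() has to scatter the 32-bit base address across three non-contiguous fields of the 8-byte descriptor. Here is a minimal sketch of the idea (the struct and field names are mine, for illustration; the kernel’s struct desc_struct uses bitfields, but the byte layout, per the Intel manuals, is the same):

#include <stdint.h>

struct seg_desc {
	uint16_t limit0;       /* limit bits 0-15 */
	uint16_t base0;        /* base bits 0-15 */
	uint8_t  base1;        /* base bits 16-23 */
	uint8_t  access;       /* type, S, DPL, present */
	uint8_t  limit1_flags; /* limit bits 16-19 + flags */
	uint8_t  base2;        /* base bits 24-31 */
};

/* What set_desc_base() boils down to */
static void sketch_set_desc_base(struct seg_desc *d, uint32_t base)
{
	d->base0 = base & 0xffff;
	d->base1 = (base >> 16) & 0xff;
	d->base2 = (base >> 24) & 0xff;
}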

The code used to make an APM BIOS call looks like this:

static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
					u32 *eax, u32 *ebx, u32 *ecx,
					u32 *edx, u32 *esi)
{
	/*
	 * N.B. We do NOT need a cld after the BIOS call
	 * because we always save and restore the flags.
	 */
	__asm__ __volatile__(APM_DO_ZERO_SEGS
		"pushl %%edi\n\t"
		"pushl %%ebp\n\t"
		"lcall *%%cs:apm_bios_entry\n\t"
		"setc %%al\n\t"
		"popl %%ebp\n\t"
		"popl %%edi\n\t"
		APM_DO_POP_SEGS
		: "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx),
		  "=S" (*esi)
		: "a" (func), "b" (ebx_in), "c" (ecx_in)
		: "memory", "cc");
}

It took me a while to figure out the far (‘long’) call

lcall *%%cs:apm_bios_entry

because apm_bios_entry is defined as:

static struct {
        unsigned long   offset;
        unsigned short  segment;
} apm_bios_entry;

At first I thought that the struct should be defined the other way around (first the segment, and then the offset).
I experimented a bit with inline asm, and after lots of segmentation faults, and some time going over the Intel x86 manuals’ description of the ljmp instruction, I figured it out.

Well, I think it took much, much longer than it should have to understand what was going on. :S

The ljmp (like the lcall) expects a mem16:mem32 operand, where mem16 is the segment and mem32 the offset.
In memory, though, the 32-bit offset is stored first (at the lower address), followed by the 16-bit segment, and that's exactly how struct apm_bios_entry is laid out.
However, as I 'read' mem16:mem32, I thought that mem16 should be stored before mem32. :S
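A quick standalone program (my own sketch, not kernel code) makes the layout visible; on a little-endian x86 machine, it prints the offset bytes first and the segment bytes last:

#include <stdio.h>
#include <string.h>

/* Same shape as apm_bios_entry: in an m16:32 far call/jump operand,
 * the 32-bit offset lives at the lower addresses, and the 16-bit
 * segment selector comes right after it. */
struct far_ptr {
	unsigned int   offset;  /* bytes 0-3 */
	unsigned short segment; /* bytes 4-5 */
};

int main(void)
{
	struct far_ptr fp = { .offset = 0xAABBCCDD, .segment = 0x1234 };
	unsigned char bytes[6];

	memcpy(bytes, &fp, sizeof(bytes));
	/* Prints: dd cc bb aa 34 12 */
	for (int i = 0; i < 6; i++)
		printf("%02x ", bytes[i]);
	printf("\n");
	return 0;
}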

And thus I lost several hours writing and experimenting with segfaulting C code. :P
For something pretty obvious...
