Here are some things I learned while reading the Linux kernel source code (some of which took me a couple of hours of googling and searching through documentation, git commit messages, threads on lkml, etc. :P).

1) You cannot write extended inline assembly at the top level, i.e. when you want to use extended inline assembly to pass the values of some C variables or constants to the asm, you can only do it inside a function. And as I found out, someone had even filed a bug about this at the GCC bugzilla. So something like this

static const char foo[] = "Hello, world!";
enum { bar = 17 };
asm(".pushsection baz; .long %c0, %c1, %c2; .popsection"
    : : "i" (foo), "i" (sizeof(foo)), "i" (bar));

won’t work.
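
The workaround (which, as we’ll see below, is exactly what the kernel does) is to wrap the asm inside a function. Here’s a minimal sketch, with the same snippet moved into a dummy function (the function name is just illustrative):

static const char foo[] = "Hello, world!";
enum { bar = 17 };

/* The same extended asm statement is accepted once it lives inside a
 * function body. */
static void emit_baz(void)
{
	asm(".pushsection baz; .long %c0, %c1, %c2; .popsection"
	    : : "i" (foo), "i" (sizeof(foo)), "i" (bar));
}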

2) I didn’t dig very deep into the inline asm documentation, but I couldn’t find what the difference between %c0 and %0 is. It’s used in the example code above, and in a kernel macro I saw. I understood that it had to do with some ‘constant casting’, but I couldn’t find the exact difference spelled out anywhere. So I wrote a simple piece of code to clarify it:

int main(void) {
	asm("movl %0, %%eax; movl %c0, %%eax"
		:: "i" (0xff) );
	return 0;
}

and after

gcc -S foo.c

I get:

movl $255, %eax
movl 255, %eax

So %0 is used when we want an integer constant to be used as an immediate value in instructions like mov, add etc., which means it gets prefixed with $, while %c0 is used when we want the bare number itself, for directives like .long, .size etc. which demand an absolute expression/value as their ‘argument’.

3) When using the section attribute on a variable in order to change the section it belongs to, you cannot make the section’s type nobits; it will be progbits by default. progbits means that the section will actually get space allocated inside the executable (like the text and data sections), in contrast to nobits sections, like bss for example.
I.e. you can’t do this:

static char foo __attribute__((section("bar", nobits)));
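
What you can do is pick the section name and live with the fact that it will be @progbits. A minimal sketch (the section and variable names are made up; ‘used’ just keeps the otherwise unreferenced variable from being optimized away):

/* Legal, but the resulting section is @progbits, so these 4096 bytes
 * actually occupy space in the output file (you can check the section
 * type and size with readelf -S). */
static char reserved_area[4096]
	__attribute__((section(".my_reserved"), used));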

4) I also found out about the pushsection and popsection asm directives, which manipulate the ELF section stack and seem to be very useful on certain occasions. pushsection pushes the current section onto the section stack and replaces it with the section passed as an argument to the directive, while popsection replaces the current section with the one on top of the section stack.
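
For example, this (made-up) file-scope snippet emits some data into a custom section and then drops back to whatever section was active before, without having to know which section that was:

/* Basic (no-operand) asm is allowed at file scope, unlike the extended
 * asm from point 1.  .pushsection saves the currently active section,
 * switches to .build_tags, and .popsection restores the saved one. */
asm(".pushsection .build_tags, \"a\"\n"
    ".ascii \"built with gcc\"\n"
    ".popsection\n");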

5) Finally, the ‘used’ attribute, which indicates that the symbol (a function in our case) is actually used/called/referenced even if the compiler can’t ‘see’ it (otherwise, I think, the compiler’s optimizations would omit generating code for that function).
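
A minimal sketch (the function name is made up): without the attribute, a static function that nothing in the translation unit references would normally be dropped by the optimizer; with it, the code is emitted anyway:

/* Never called from C code, but perhaps referenced from hand-written
 * asm or a linker script, so tell gcc not to throw it away. */
static void __attribute__((used)) keep_me(void)
{
	asm("nop");
}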

And now a kernel macro which includes all of the above:

/*
 * Reserve space in the brk section.  The name must be unique within
 * the file, and somewhat descriptive.  The size is in bytes.  Must be
 * used at file scope.
 *
 * (This uses a temp function to wrap the asm so we can pass it the
 * size parameter; otherwise we wouldn't be able to.  We can't use a
 * "section" attribute on a normal variable because it always ends up
 * being @progbits, which ends up allocating space in the vmlinux
 * executable.)
 */
#define RESERVE_BRK(name,sz)						\
	static void __section(.discard.text) __used			\
	__brk_reservation_fn_##name##__(void) {				\
		asm volatile (						\
			".pushsection .brk_reservation,\"aw\",@nobits;" \
			".brk." #name ":"				\
			" 1:.skip %c0;"					\
			" .size .brk." #name ", . - 1b;"		\
			" .popsection"					\
			: : "i" (sz));					\
	}
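
Usage is just a file-scope invocation of the macro; a made-up example reserving 4096 bytes would look like this:

/* Hypothetical usage: reserve 4096 bytes in the brk area under the
 * (illustrative) name early_buf; it expands to the dummy .discard.text
 * function shown above. */
RESERVE_BRK(early_buf, 4096);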

And a somewhat more detailed explanation from the git commit message:

The C definition of RESERVE_BRK() ends up being more complex than
one would expect to work around a cluster of gcc infelicities:

The first attempt was to simply try putting __section(.brk_reservation)
on a variable. This doesn’t work because it ends up making it a
@progbits section, which gets actual space allocated in the vmlinux
executable.

The second attempt was to emit the space into a section using asm,
but gcc doesn’t allow arguments to be passed to file-level asm()
statements, making it hard to pass in the size.

The final attempt is to wrap the asm() in a function to allow
it to have arguments, and put the function itself into the
.discard section, which vmlinux*.lds drops entirely from the
emitted vmlinux.

Another thing to notice is that the wrapper function is put in the .discard.text section, which, according to vmlinux.lds (the linker script used to generate/link the vmlinux executable), will be discarded and thus not included in the final executable.
From scripts/module-common.lds:

/*
 * Common module linker script, always used when linking a module.
 * Archs are free to supply their own linker scripts.  ld will
 * combine them automatically.
 */
SECTIONS {
	/DISCARD/ : { *(.discard) }
}

The purpose of the RESERVE_BRK macro, and of the brk-like allocator used for very early memory allocations during the kernel boot process, is an interesting story too (which means another post is coming soon)! ;)

Coolest hack/trick ever!

February 17, 2011

Some time ago, I wrote about lguest, a minimal x86 hypervisor for the Linux Kernel, which is mainly used for experimentation and for learning about hypervisors, operating systems, and even computer architecture/ISAs (x86 in particular).

Today I cloned the git repo for the lguest64 port, and I started browsing through the documentation and the code. In the launcher code (the program that initializes/sets up and launches a new guest kernel), I saw the coolest programming hack/trick I’ve seen in a long time. :P

/*L:170 Prepare to be SHOCKED and AMAZED.  And possibly a trifle nauseated.
 *
 * We know that CONFIG_PAGE_OFFSET sets what virtual address the kernel expects
 * to be.  We don't know what that option was, but we can figure it out
 * approximately by looking at the addresses in the code.  I chose the common
 * case of reading a memory location into the %eax register:
 *
 *  movl <some-address>, %eax
 *
 * This gets encoded as five bytes: "0xA1 <4-byte-address>".  For example,
 * "0xA1 0x18 0x60 0x47 0xC0" reads the address 0xC0476018 into %eax.
 *
 * In this example we can guess that the kernel was compiled with
 * CONFIG_PAGE_OFFSET set to 0xC0000000 (it's always a round number).  If the
 * kernel were larger than 16MB, we might see 0xC1 addresses show up, but our
 * kernel isn't that bloated yet.
 *
 * Unfortunately, x86 has variable-length instructions, so finding this
 * particular instruction properly involves writing a disassembler.  Instead,
 * we rely on statistics.  We look for "0xA1" and tally the different bytes
 * which occur 4 bytes later (the "0xC0" in our example above).  When one of
 * those bytes appears three times, we can be reasonably confident that it
 * forms the start of CONFIG_PAGE_OFFSET.
 *
 * This is amazingly reliable. */
static unsigned long intuit_page_offset(unsigned char *img, unsigned long len)
{
	unsigned int i, possibilities[256] = { 0 };

	for (i = 0; i + 4 < len; i++) {
		/* mov 0xXXXXXXXX,%eax */
		if (img[i] == 0xA1 && ++possibilities[img[i+4]] > 3)
			return (unsigned long)img[i+4] << 24;
	}
	errx(1, "could not determine page offset");
}

It’s very well commented, so I don’t think there’s anything I could explain better.
Very very nice trick!
‘Prepare to be shocked and amazed!’ ;)
