[PATCH] x86_64: prefetch the mmap_sem in the fault path
author Arjan van de Ven <arjan@intel.linux.com>
Sat, 25 Mar 2006 15:30:10 +0000 (16:30 +0100)
committer Linus Torvalds <torvalds@g5.osdl.org>
Sat, 25 Mar 2006 17:10:54 +0000 (09:10 -0800)
In a micro-benchmark that stresses the pagefault path, the down_read_trylock
on the mmap_sem showed up quite high on the profile. Turns out this lock is
bouncing between cpus quite a bit and thus is cache-cold a lot. This patch
prefetches the lock (for write) as early as possible (and before some other
somewhat expensive operations). With this patch, the down_read_trylock
basically fell out of the top of the profile.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
arch/x86_64/mm/fault.c

index de91e17daf6f21e5f6893e73f862324d00f5bcd2..316c53de47bd8574cd530b8fd76e7405bc1f6df0 100644
@@ -314,11 +314,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
        unsigned long flags;
        siginfo_t info;
 
+       tsk = current;
+       mm = tsk->mm;
+       prefetchw(&mm->mmap_sem);
+
        /* get the address */
        __asm__("movq %%cr2,%0":"=r" (address));
 
-       tsk = current;
-       mm = tsk->mm;
        info.si_code = SEGV_MAPERR;
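
The idea generalizes to any contended lock whose cache line bounces between CPUs:
issue a prefetch-for-write hint as soon as the lock's address is known, then do the
remaining setup work, so the line has already arrived by the time the atomic operation
needs it. The prefetch is for write even though the fault path only takes mmap_sem for
reading, because down_read_trylock still updates the semaphore's count atomically and
therefore needs the cache line in an exclusive state. What follows is a minimal
user-space sketch of the pattern, assuming GCC/Clang's __builtin_prefetch and a pthread
rwlock standing in for mmap_sem; the names (mm_like, handle_fault) are illustrative
only and are not the kernel code.

/*
 * Sketch: prefetch a contended lock for write before doing other work,
 * mirroring the prefetchw(&mm->mmap_sem) in the patch above.
 */
#include <pthread.h>
#include <stdio.h>

struct mm_like {
	pthread_rwlock_t mmap_lock;	/* stands in for mm->mmap_sem */
	/* ... other per-mm state ... */
};

static int handle_fault(struct mm_like *mm, unsigned long address)
{
	/* Request the lock's cache line for write (rw = 1) as early as possible. */
	__builtin_prefetch(&mm->mmap_lock, 1);

	(void)address;	/* the real handler reads the faulting address from CR2 */

	/* ... other setup that runs while the cache line is in flight ... */

	if (pthread_rwlock_tryrdlock(&mm->mmap_lock) != 0)
		return -1;	/* the real path would retry or sleep here */

	/* ... look up the VMA and service the fault ... */

	pthread_rwlock_unlock(&mm->mmap_lock);
	return 0;
}

int main(void)
{
	struct mm_like mm;

	pthread_rwlock_init(&mm.mmap_lock, NULL);
	printf("fault handled: %s\n",
	       handle_fault(&mm, 0x1000) == 0 ? "yes" : "no");
	pthread_rwlock_destroy(&mm.mmap_lock);
	return 0;
}

Build with: cc -pthread sketch.c. The hint is cheap: if the line is already local the
prefetch does essentially nothing, and if it is remote the miss latency overlaps with
the work done before the trylock.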