[PATCH] holepunch: fix shmem_truncate_range punch locking
author     Hugh Dickins <hugh@veritas.com>
           Thu, 29 Mar 2007 08:20:36 +0000 (01:20 -0700)
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>
           Thu, 29 Mar 2007 15:22:25 +0000 (08:22 -0700)
commit     1ae7000630e3c05b6f7e3dfc76472f1bca6c1788
tree       805d97820dae82a5141f0b1aefc1383bd794e956
parent     a2646d1e6c8d2239d8054a7d342eb9775a1d273a
[PATCH] holepunch: fix shmem_truncate_range punch locking

Miklos Szeredi observes that during truncation of shmem page directories,
info->lock is released to improve latency (after lowering i_size and
next_index to exclude races); but this is quite wrong for holepunching, which
receives no such protection from i_size or next_index, and is left vulnerable
to races with shmem_unuse, shmem_getpage and shmem_writepage.
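
As a minimal illustration of that pattern (not the actual mm/shmem.c code;
the function name is invented and the next_index field is assumed from the
2.6-era shmem_inode_info), truncation lowers the limit under info->lock and
then drops the lock for the long scan, which is only safe because racers
respect the lowered limit:

	static void shmem_truncate_sketch(struct shmem_inode_info *info,
					  unsigned long idx)
	{
		spin_lock(&info->lock);
		info->next_index = idx;	/* racers stop at the lowered limit */
		spin_unlock(&info->lock);

		/*
		 * The long directory scan then runs without info->lock: fine
		 * for truncation, but a hole punched below i_size leaves this
		 * window open to shmem_unuse, shmem_getpage, shmem_writepage.
		 */
		/* ... scan and free directory pages here ... */
	}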

Hold info->lock throughout when holepunching?  No, any user could prevent
rescheduling for far too long.  Instead take info->lock just when needed: in
shmem_free_swp when removing the swap entries, and whenever removing a
directory page from the level above.  But so long as we remove before
scanning, we can safely skip taking the lock at the lower levels, except at
misaligned start and end of the hole.
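
A sketch of that rule, assuming the 2.6-era shmem_free_swp() interface (the
punch_lock parameter and the exact control flow are illustrative, not
necessarily the code as merged): only the holepunch path passes info->lock
in, and it is taken just before the first entry in a directory page is
removed, then held while the rest of that page is cleared.

	static int shmem_free_swp(swp_entry_t *dir, swp_entry_t *edir,
				  spinlock_t *punch_lock)
	{
		spinlock_t *punch_unlock = NULL;
		swp_entry_t *ptr;
		int freed = 0;

		for (ptr = dir; ptr < edir; ptr++) {
			if (ptr->val) {
				if (unlikely(punch_lock)) {
					/* holepunch only: take the lock once,
					 * recheck this entry under it */
					punch_unlock = punch_lock;
					punch_lock = NULL;
					spin_lock(punch_unlock);
					if (!ptr->val)
						continue;
				}
				free_swap_and_cache(*ptr);
				*ptr = (swp_entry_t){0};
				freed++;
			}
		}
		if (punch_unlock)
			spin_unlock(punch_unlock);
		return freed;
	}

Truncation would pass NULL for punch_lock, so its fast path stays unlocked;
the same idea applies one level up, where a directory page is removed from
its parent under info->lock before its contents are scanned.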

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Miklos Szeredi <mszeredi@suse.cz>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/shmem.c