From: Paul Mundt
Date: Sat, 6 Jan 2007 00:36:30 +0000 (-0800)
Subject: [PATCH] Sanely size hash tables when using large base pages
X-Git-Tag: v2.6.20-rc4~65
X-Git-Url: https://err.no/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9ab37b8f21b4dfe256d736c13738d20c88a1f3ad;p=linux-2.6

[PATCH] Sanely size hash tables when using large base pages

At the moment the inode and dentry cache hash tables (both sized through
alloc_large_system_hash()) are incorrectly sized by their respective
detection logic when we attempt to use large base pages on systems with
little memory. This results in odd behaviour when using a 64kB PAGE_SIZE,
such as:

	Dentry cache hash table entries: 8192 (order: -1, 32768 bytes)
	Inode-cache hash table entries: 4096 (order: -2, 16384 bytes)

The mount cache hash table is seemingly the only one that gets this right,
by taking PAGE_SIZE directly into account.

The following patch catches these bogus values and rounds the allocation
up to at least 0-order.

Signed-off-by: Paul Mundt
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c1a116875..4a9a83fc1b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3321,6 +3321,10 @@ void *__init alloc_large_system_hash(const char *tablename,
 			numentries >>= (scale - PAGE_SHIFT);
 		else
 			numentries <<= (PAGE_SHIFT - scale);
+
+		/* Make sure we've got at least a 0-order allocation.. */
+		if (unlikely((numentries * bucketsize) < PAGE_SIZE))
+			numentries = PAGE_SIZE / bucketsize;
 	}
 	numentries = roundup_pow_of_two(numentries);
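
For context (not part of the original patch): the negative orders in the boot
log arise because the reported order is effectively ilog2(size) - PAGE_SHIFT,
which goes negative whenever the table is smaller than one base page. The
userspace sketch below reproduces that arithmetic and the effect of the fix;
the bucketsize and entry count are assumptions chosen to match the dentry
line quoted above, not values taken from the kernel itself.

/*
 * Minimal userspace sketch of the sizing math around the fix.
 * PAGE_SHIFT = 16 models a 64kB base page; bucketsize = 4 and
 * numentries = 8192 are assumed values that reproduce the
 * "order: -1, 32768 bytes" dentry-cache line from the boot log.
 */
#include <stdio.h>

#define PAGE_SHIFT	16			/* 64kB base pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Integer log2, as used for the order printed at boot. */
static int ilog2(unsigned long n)
{
	int log = -1;

	while (n) {
		n >>= 1;
		log++;
	}
	return log;
}

int main(void)
{
	unsigned long bucketsize = 4;		/* assumed bucket size */
	unsigned long numentries = 8192;	/* entry count from the log */
	unsigned long size;

	size = numentries * bucketsize;		/* 32768 bytes, sub-page */
	printf("before: %lu entries (order: %d, %lu bytes)\n",
	       numentries, ilog2(size) - PAGE_SHIFT, size);

	/* The fix: never size the table below one full base page. */
	if (size < PAGE_SIZE)
		numentries = PAGE_SIZE / bucketsize;

	size = numentries * bucketsize;		/* now exactly PAGE_SIZE */
	printf("after:  %lu entries (order: %d, %lu bytes)\n",
	       numentries, ilog2(size) - PAGE_SHIFT, size);
	return 0;
}

Under these assumptions the unfixed path prints "8192 entries (order: -1,
32768 bytes)", matching the boot log, while the guarded path rounds up to
PAGE_SIZE / bucketsize = 16384 entries and a well-formed 0-order allocation.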