hugetlb: fix dynamic pool resize failure case
author Adam Litke <agl@us.ibm.com>
Tue, 16 Oct 2007 08:26:25 +0000 (01:26 -0700)
committer Linus Torvalds <torvalds@woody.linux-foundation.org>
Tue, 16 Oct 2007 16:43:03 +0000 (09:43 -0700)
When gather_surplus_pages() fails to allocate enough huge pages to satisfy
the requested reservation, it frees the pages it did allocate back to the
buddy allocator.  put_page() should be called instead of update_and_free_page()
so that the pool counters are updated appropriately and the page's refcount
is decremented.
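
For illustration only (not part of the original commit), below is a minimal
userspace C sketch of the locking pattern the hunk adopts: pages beyond the
needed count are released through a put-style call while the pool lock is
temporarily dropped, because the release path takes the same lock itself.
All names here (pool_lock, fake_page, fake_put_page, fake_enqueue) are
hypothetical stand-ins for hugetlb_lock, struct page, put_page() and
enqueue_huge_page(); this is not kernel code.

/*
 * Illustrative sketch only -- not kernel code.  Demonstrates the pattern
 * from the hunk below: a lock protecting a pool must be dropped before
 * calling a release function that re-acquires the same lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static int free_pages_in_pool;       /* stand-in for the pool's free count */

struct fake_page { int id; };

/* Stand-in for free_huge_page(): takes pool_lock before touching counters. */
static void fake_put_page(struct fake_page *page)
{
	pthread_mutex_lock(&pool_lock);
	printf("releasing page %d back to the allocator\n", page->id);
	pthread_mutex_unlock(&pool_lock);
	free(page);
}

/* Stand-in for enqueue_huge_page(): caller must already hold pool_lock. */
static void fake_enqueue(struct fake_page *page)
{
	free_pages_in_pool++;
	printf("page %d kept in the pool (free=%d)\n", page->id,
	       free_pages_in_pool);
	free(page);   /* a real pool would keep the page on a list instead */
}

int main(void)
{
	int needed = 2;     /* pages the reservation still needs */
	int allocated = 4;  /* pages we actually managed to allocate */
	int i;

	pthread_mutex_lock(&pool_lock);
	for (i = 0; i < allocated; i++) {
		struct fake_page *page = malloc(sizeof(*page));
		if (!page)
			break;
		page->id = i;
		if (--needed >= 0) {
			fake_enqueue(page);
		} else {
			/*
			 * Drop the lock before the put, as the hunk below
			 * does, because the release path takes the same
			 * lock to decide how to free the page.
			 */
			pthread_mutex_unlock(&pool_lock);
			fake_put_page(page);
			pthread_mutex_lock(&pool_lock);
		}
	}
	pthread_mutex_unlock(&pool_lock);
	return 0;
}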

Signed-off-by: Adam Litke <agl@us.ibm.com>
Acked-by: Dave Hansen <haveblue@us.ibm.com>
Cc: David Gibson <hermes@gibson.dropbear.id.au>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Ken Chen <kenchen@google.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/hugetlb.c

index 82efecbab96fab90f9e35d019a8d6284ba5a0133..ae2959bb59cbb785c70ffa99677c6194b1d4910d 100644 (file)
@@ -302,8 +302,17 @@ free:
                list_del(&page->lru);
                if ((--needed) >= 0)
                        enqueue_huge_page(page);
-               else
-                       update_and_free_page(page);
+               else {
+                       /*
+                        * Decrement the refcount and free the page using its
+                        * destructor.  This must be done with hugetlb_lock
+                        * unlocked which is safe because free_huge_page takes
+                        * hugetlb_lock before deciding how to free the page.
+                        */
+                       spin_unlock(&hugetlb_lock);
+                       put_page(page);
+                       spin_lock(&hugetlb_lock);
+               }
        }
 
        return ret;