mm: fix page allocation for larger I/O segments
author Mel Gorman <mel@csn.ul.ie>
Tue, 18 Dec 2007 00:20:05 +0000 (16:20 -0800)
committer Linus Torvalds <torvalds@woody.linux-foundation.org>
Tue, 18 Dec 2007 03:28:16 +0000 (19:28 -0800)
In some cases the IO subsystem is able to merge requests if the pages
are adjacent in physical memory.  This was achieved in the allocator by
having expand() return pages in physically contiguous order in
situations where a large buddy was split.  However, list-based
anti-fragmentation changed the order in which pages were returned, to
avoid searching in buffered_rmqueue() for a page of the appropriate
migrate type.

This patch restores the behaviour of rmqueue_bulk(), preserving the
physical order of pages returned by the allocator without incurring
increased search costs for anti-fragmentation.
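
A minimal, self-contained userspace sketch of the idiom the patch
relies on (not kernel code: fake_page, its pfn field, and the list
helpers below are illustrative stand-ins for struct page and
<linux/list.h>).  list_add() inserts the new entry immediately after
the head it is given, so repeatedly adding at a fixed head reverses
order, while advancing the anchor to the entry just added preserves
insertion order:

	#include <stdio.h>
	#include <stddef.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	static void INIT_LIST_HEAD(struct list_head *h)
	{
		h->next = h;
		h->prev = h;
	}

	/* Same semantics as the kernel's list_add(): insert after 'head'. */
	static void list_add(struct list_head *new, struct list_head *head)
	{
		new->prev = head;
		new->next = head->next;
		head->next->prev = new;
		head->next = new;
	}

	struct fake_page {
		unsigned long pfn;	/* stand-in for the physical page number */
		struct list_head lru;
	};

	int main(void)
	{
		struct fake_page pages[4] = {
			{ .pfn = 100 }, { .pfn = 101 }, { .pfn = 102 }, { .pfn = 103 }
		};
		struct list_head head, *list = &head, *pos;
		int i;

		INIT_LIST_HEAD(&head);

		/* Mirror the patched rmqueue_bulk() loop: add, then advance
		 * the anchor to the page just added. */
		for (i = 0; i < 4; i++) {
			list_add(&pages[i].lru, list);
			list = &pages[i].lru;
		}

		/* Prints pfn 100..103 -- the order the pages were added in.
		 * With a fixed anchor (no "list = ..." line) it would print
		 * 103..100 instead. */
		for (pos = head.next; pos != &head; pos = pos->next) {
			struct fake_page *p = (struct fake_page *)
				((char *)pos - offsetof(struct fake_page, lru));
			printf("pfn %lu\n", p->pfn);
		}
		return 0;
	}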

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Mark Lord <mlord@pobox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c

index b5a58d476c1a66a7cc6ce94adaedfc9e2aff0a06..d73bfad1c32f2e2254aaa1f47de3bda7db0b8b88 100644 (file)
@@ -847,8 +847,19 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                struct page *page = __rmqueue(zone, order, migratetype);
                if (unlikely(page == NULL))
                        break;
+
+               /*
+                * Split buddy pages returned by expand() are received here
+                * in physical page order. The page is added to the caller's
+                * list and the list head then moves forward. From the caller's
+                * perspective, the linked list is ordered by page number in
+                * some conditions. This is useful for IO devices that can
+                * merge IO requests if the physical pages are ordered
+                * properly.
+                */
                list_add(&page->lru, list);
                set_page_private(page, migratetype);
+               list = &page->lru;
        }
        spin_unlock(&zone->lock);
        return i;
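
The crucial addition in the hunk is the "list = &page->lru;" line:
because list_add() inserts immediately after the anchor it is given,
advancing the anchor to the page just added places each subsequent page
behind its predecessor.  The caller's list therefore comes out in
ascending physical order when expand() split a large buddy, rather than
the reversed order that repeated list_add() at a fixed head produces.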