From: Chen, Kenneth W
Date: Wed, 4 Oct 2006 09:15:24 +0000 (-0700)
Subject: [PATCH] enforce proper tlb flush in unmap_hugepage_range
X-Git-Tag: v2.6.19-rc1~158
X-Git-Url: https://err.no/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=fe1668ae5bf0145014c71797febd9ad5670d5d05;p=linux-2.6

[PATCH] enforce proper tlb flush in unmap_hugepage_range

Hugh spotted that a hugetlb page is freed back to the global pool before
any TLB flush is performed in unmap_hugepage_range().  This potentially
allows threads to abuse the free-alloc race condition.

The generic tlb gather code is unsuitable for use by hugetlb, so I
open-coded a page gathering list and delayed put_page until the tlb
flush is performed.

Cc: Hugh Dickins
Signed-off-by: Ken Chen
Acked-by: William Irwin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7c7d03dbf7..1d709ff528 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -364,6 +364,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 	pte_t *ptep;
 	pte_t pte;
 	struct page *page;
+	struct page *tmp;
+	LIST_HEAD(page_list);
 
 	WARN_ON(!is_vm_hugetlb_page(vma));
 	BUG_ON(start & ~HPAGE_MASK);
@@ -384,12 +386,16 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			continue;
 
 		page = pte_page(pte);
-		put_page(page);
+		list_add(&page->lru, &page_list);
 		add_mm_counter(mm, file_rss, (int) -(HPAGE_SIZE / PAGE_SIZE));
 	}
 
 	spin_unlock(&mm->page_table_lock);
 	flush_tlb_range(vma, start, end);
+	list_for_each_entry_safe(page, tmp, &page_list, lru) {
+		list_del(&page->lru);
+		put_page(page);
+	}
 }
 
 static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
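
The hunks above are the whole fix: the unmap loop no longer drops its page
references inline; it queues each page on a local list and only calls
put_page() after flush_tlb_range() has run, so no page can be reallocated
while stale TLB entries might still map it.  Below is a minimal userspace
sketch of that deferred-free pattern.  It is illustrative only: fake_page,
tlb_flush_stub(), and the hand-rolled singly linked list are hypothetical
stand-ins for struct page, flush_tlb_range(), and the kernel's list_head
machinery, not kernel APIs.

/*
 * Sketch of the deferred-free pattern from the patch above: objects
 * removed from a mapping are queued on a local list, and only freed
 * after the "TLB flush" has completed.  All names are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_page {
	int id;
	struct fake_page *next;	/* plays the role of page->lru */
};

/* Hypothetical stand-in for flush_tlb_range(). */
static void tlb_flush_stub(void)
{
	printf("TLB flushed\n");
}

int main(void)
{
	struct fake_page *page_list = NULL;	/* LIST_HEAD(page_list) */
	int i;

	/* Unmap loop: clear mappings, but only queue pages for freeing. */
	for (i = 0; i < 4; i++) {
		struct fake_page *page = malloc(sizeof(*page));
		if (!page)
			return 1;
		page->id = i;
		page->next = page_list;	/* list_add(&page->lru, &page_list) */
		page_list = page;
	}

	/*
	 * Flush before freeing: up to this point no page has been given
	 * back, so a stale TLB entry can never alias a reallocated page.
	 */
	tlb_flush_stub();

	/* Deferred put_page(): only now is it safe to release the pages. */
	while (page_list) {
		struct fake_page *tmp = page_list;	/* _safe iteration */
		page_list = page_list->next;
		printf("put_page(page %d)\n", tmp->id);
		free(tmp);
	}
	return 0;
}

Note the _safe iteration in both versions (list_for_each_entry_safe with
the tmp cursor in the patch, the tmp pointer in the sketch): each entry is
unlinked and freed while the list is being walked, so a plain iterator
would dereference freed memory.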