The 32-bit map_page() function is used internally by the mm code
for early MMU mappings and for ioremap. It should never be called
for an address that already has a valid PTE or hash entry, so we
add a BUG_ON for that case and remove the now-useless flush_HPTE() call.
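For context, callers such as the early MMU setup and the ioremap path map
one page at a time through map_page(). The sketch below is illustrative
only and not part of this patch: map_io_range() is a made-up helper name,
and the map_page() signature shown is an assumption based on the
pgtable_32.c of this era.

/*
 * Illustrative only, not part of this patch.  A caller mapping a
 * physical range one page at a time, the way early MMU / ioremap
 * code drives map_page().  map_io_range() is a made-up name; the
 * assumed signature is int map_page(unsigned long va, phys_addr_t pa,
 * int flags).
 */
static int map_io_range(unsigned long va, phys_addr_t pa,
			unsigned long size, int flags)
{
	unsigned long off;
	int err;

	for (off = 0; off < size; off += PAGE_SIZE) {
		/* With this patch, map_page() BUGs if the PTE is already
		 * set, so callers must only map fresh addresses. */
		err = map_page(va + off, pa + off, flags);
		if (err)
			return err;
	}
	return 0;
}

With the new BUG_ON, a caller that re-maps an address which already has a
valid PTE or hash entry now trips immediately instead of silently relying
on the old flush_HPTE() call.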
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/mm/pgtable_32.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
 	pg = pte_alloc_kernel(pd, va);
 	if (pg != 0) {
 		err = 0;
-		set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, __pgprot(flags)));
-		if (mem_init_done)
-			flush_HPTE(0, va, pmd_val(*pd));
+		/* The PTE should never be already set nor present in the
+		 * hash table
+		 */
+		BUG_ON(pte_val(*pg) & (_PAGE_PRESENT | _PAGE_HASHPTE));
+		set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT,
+						     __pgprot(flags)));
 	}
 	return err;
 }
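For reference, with the hunk above applied map_page() reads roughly as
follows. This is a sketch: the lines outside the hunk (function signature,
pmd lookup, error setup) are reconstructed from the surrounding file of
this era and may differ slightly from the actual source.

/* Sketch of map_page() with this patch applied; lines outside the
 * hunk above are reconstructed and may differ from the real file. */
int map_page(unsigned long va, phys_addr_t pa, int flags)
{
	pmd_t *pd;
	pte_t *pg;
	int err = -ENOMEM;

	/* Walk/allocate the kernel page tables for 'va' */
	pd = pmd_offset(pgd_offset_k(va), va);
	pg = pte_alloc_kernel(pd, va);
	if (pg != 0) {
		err = 0;
		/* The PTE should never be already set nor present in the
		 * hash table */
		BUG_ON(pte_val(*pg) & (_PAGE_PRESENT | _PAGE_HASHPTE));
		set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT,
						     __pgprot(flags)));
	}
	return err;
}

Since a hash entry can only exist for a PTE that was valid at some point,
and the BUG_ON guarantees neither is the case here, the old conditional
flush_HPTE() call had nothing left to flush, which is why it can simply be
removed.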