[PATCH] x86/x86_64: mark rodata section read-only: make some datastructures const
Mark some key kernel data structures read-only. This patch was previously
posted on Jun 28th, but was not merged back then because nothing was enforcing
rodata anyway; well, that has changed now :)
Patch by Christoph Lameter <christoph@lameter.com> and Dave Jones
<davej@redhat.com>
Signed-off-by: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] x86/x86_64: mark rodata section read only: generic x86-64 bugfix
Bug fix required for the .rodata work on x86-64:
when change_page_attr() and friends need to break up a 2MB page into 4KB
pages, they always set the NX bit on the PMD, which causes the CPU to consider
the entire 2MB region to be NX regardless of the actual PTE permissions. This
is fine in general, with one big exception: the 2MB page that covers the last
part of the kernel .text! The fix is to not invent a new permission for the
new PMD entry, but to just inherit the existing one minus the PSE bit.
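Illustratively, the shape of the fix is roughly the following (a hedged sketch, not the actual change_page_attr() diff; the variable names are assumptions):

    /*
     * Sketch only: when splitting the 2MB mapping into 4KB PTEs, derive the
     * protection bits from the existing large-page entry and clear just the
     * PSE (large page) bit, rather than hard-coding a new protection with NX.
     */
    pgprot_t ref_prot = pte_pgprot(*(pte_t *)kpte);   /* inherit existing perms */
    pgprot_val(ref_prot) &= ~_PAGE_PSE;               /* drop the 2MB/PSE bit */
    /* ... populate the new 4KB page table using ref_prot ... */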
Signed-off-by: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] x86/x86_64: mark rodata section read only: generic infrastructure
Generic prep-work for marking the .rodata section read-only:
* Align the rodata section on a 4KB boundary
* Call the mark_rodata_ro() function when available
Signed-off-by: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Adrian Bunk <bunk@stusta.de> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Zachary Amsden [Fri, 6 Jan 2006 08:11:59 +0000 (00:11 -0800)]
[PATCH] x86: Deprecate useless bug
Remove the "temporary debugging check" which has managed to live for quite
some time, and is clearly unneeded. The mm can never be live at this point,
so clearly checking the LDT in the mm->context is redundant as well.
Signed-off-by: Zachary Amsden <zach@vmware.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Zachary Amsden [Fri, 6 Jan 2006 08:11:56 +0000 (00:11 -0800)]
[PATCH] x86: Fixed pnp bios limits
PnP BIOS data, code, and 32-bit entry segments all have fixed limits as well;
set them in the GDT rather than adding more code. It would be nice to add
these fixups to the boot GDT rather than setting the GDT for each CPU; perhaps
I can wiggle this in later, but getting it in before the subsys init looks
tricky.
Also, make some progress on deprecating the ugly Q_SET_SEL macros.
Signed-off-by: Zachary Amsden <zach@vmware.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Zachary Amsden [Fri, 6 Jan 2006 08:11:55 +0000 (00:11 -0800)]
[PATCH] x86: Pnp byte granularity
The one remaining caller of set_limit, the PnP BIOS code, calls into the PnP
BIOS, passing kernel parameters in and out. These parameters may be passed
from arbitrary kernel virtual memory, so they deserve strict protection to
stop a bad BIOS from smashing beyond the object size.
Unfortunately, the use of set_limit was badly botching this by setting the
limit in terms of pages, when it really should have byte granularity.
When doing this, I discovered my BIOS had the buggy code during the "get
system device node" call:
mov ax, es:[bx]
Which is harmless, but has a trivial workaround.
Signed-off-by: Zachary Amsden <zach@vmware.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Zachary Amsden [Fri, 6 Jan 2006 08:11:53 +0000 (00:11 -0800)]
[PATCH] x86: Always relax segments
APM BIOSes have many bugs regarding proper representation of the appropriate
segment limits for calling the BIOS. By default, APM_RELAX_SEGMENTS is always
turned on to support running the APM BIOS on these buggy machines. Keeping
64k limits poses very little danger to the kernel, because the pages where the
APM BIOS is located will always be in low physical memory BIOS areas, which
should already be marked reserved, and only buggy BIOSes would possibly
overstep the segment bounds with writes to data anyway.
Since forcing stricter limits breaks many machines and is not default
behavior, it seems reasonable to deprecate the older code which may cause APM
BIOS to fault.
If you really have a badly enough broken APM BIOS that you have to turn off
APM_RELAX_SEGMENTS, it seems the best recourse would be to disable the APM
BIOS and/or not compile it into your kernel to begin with, and/or add your
system to the known-bad list.
The reason I want to deprecate this code is there is underlying brokenness
with the set_limit macros, and getting rid of many of the call sites rather
than rewriting them seems to be the simplest and most correct course of
action.
Signed-off-by: Zachary Amsden <zach@vmware.com> Acked-by: "Seth, Rohit" <rohit.seth@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Zachary Amsden [Fri, 6 Jan 2006 08:11:50 +0000 (00:11 -0800)]
[PATCH] x86: Cr4 is valid on some 486s
Some 486 processors do have a CR4 register. Allow them to present it in
register dumps by using the old fault technique rather than testing the
processor family.
Thanks to Maciej for noticing this.
Signed-off-by: Zachary Amsden <zach@vmware.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Jan Beulich [Fri, 6 Jan 2006 08:11:48 +0000 (00:11 -0800)]
[PATCH] i386: don't blindly enable interrupts in die()
Rather than blindly re-enabling interrupts in die(), save their state
upon entry and then restore that state.
If the kernel is in really bad condition and faults with interrupts disabled,
re-enabling them in die() may cause even more trouble, implying more chances
of data corruption.
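The pattern being described is essentially this (a minimal sketch, not the exact traps.c change):

    unsigned long flags;

    local_irq_save(flags);      /* remember the state and disable, instead of a blind sti() */
    /* ... print the oops, dump registers, walk the stack ... */
    local_irq_restore(flags);   /* put the interrupt flag back the way it was on entry */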
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Zachary Amsden [Fri, 6 Jan 2006 08:11:47 +0000 (00:11 -0800)]
[PATCH] x86: GDT alignment fix
Make GDT page aligned and page padded to support running inside of a
hypervisor. This prevents false sharing of the GDT page with other hot
data, which is not allowed in Xen, and causes performance problems in
VMware.
Rather than go back to the old method of statically allocating the GDT
(which wastes unneeded space for non-present CPUs), the GDT for APs is
allocated dynamically.
David Howells [Fri, 6 Jan 2006 08:11:45 +0000 (00:11 -0800)]
[PATCH] frv: improve signal handling
The attached patch improves the signal handling:
(1) It makes do_signal() static as it isn't called from anywhere outside of
the arch code.
(2) It removes the regs argument to all the static functions within that file,
using __frame instead (which is the same thing held in a global register).
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Howells [Fri, 6 Jan 2006 08:11:44 +0000 (00:11 -0800)]
[PATCH] frv: fix signal handling
The attached patch makes FRV signal handling work properly:
(1) After do_notify_resume() has been called, the work flags must be checked
again (there may be another signal to deliver or the process might require
rescheduling for instance).
(2) After the signal frame is set up on the userspace stack, ptrace() should
be given an opportunity to single-step into the signal handler.
(3) The error state from setting up a signal frame should be passed back up
the call chain.
(4) The segfault handler shouldn't be preemptively reset in the arch if we
fail to deliver a SEGV signal: force_sig() will take care of that.
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Howells [Fri, 6 Jan 2006 08:11:44 +0000 (00:11 -0800)]
[PATCH] FRV: Make futex code compilable on nommu [try #2]
Make the futex code compilable and usable on NOMMU by making the attempt to
handle page faults conditional on CONFIG_MMU. If this is not enabled, then
we can assume that EFAULT returned from futex_atomic_op_inuser() is not
recoverable, and that the address lies outside of valid memory.
On NOMMU, handle_mm_fault() is made to BUG without attempting to invoke the
actual handler (__handle_mm_fault).
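The shape of the change is roughly the following (a hedged sketch; the fixup helper name is hypothetical, not the real kernel/futex.c code):

    if (unlikely(op_ret == -EFAULT)) {
    #ifdef CONFIG_MMU
            /* With an MMU the fault may be resolvable: try to fix it up and retry. */
            if (attempt_futex_fault_fixup(uaddr) == 0)      /* hypothetical helper */
                    goto retry;
    #endif
            /* On NOMMU the address is simply outside valid memory. */
            return op_ret;
    }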
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Howells [Fri, 6 Jan 2006 08:11:43 +0000 (00:11 -0800)]
[PATCH] FRV: Implement futex operations for FRV
The attached patch implements futex operations for the FRV architecture. The
operations are applicable to both MMU and no-MMU modes; though the EFAULT
handling will be a little bit of wasted space on the latter.
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Howells [Fri, 6 Jan 2006 08:11:42 +0000 (00:11 -0800)]
[PATCH] NOMMU: Make SYSV IPC SHM use ramfs facilities on NOMMU
The attached patch makes the SYSV IPC shared memory facilities use the new
ramfs facilities on a no-MMU kernel.
The following changes are made:
(1) There are now shmem_mmap() and shmem_get_unmapped_area() functions to
allow the IPC SHM facilities to commune with the tiny-shmem and shmem
code.
(2) ramfs files now need resizing using do_truncate() rather than by modifying
the inode size directly (see shmem_file_setup()). This causes ramfs to
attempt to bind a block of pages of sufficient size to the inode.
(3) CONFIG_SYSVIPC is no longer contingent on CONFIG_MMU.
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Howells [Fri, 6 Jan 2006 08:11:41 +0000 (00:11 -0800)]
[PATCH] NOMMU: Provide shared-writable mmap support on ramfs
The attached patch makes ramfs support shared-writable mmaps by:
(1) Attempting to perform a contiguous block allocation to the requested size
when truncate attempts to increase the file from zero size, such as
happens when:
(2) Permitting any shared-writable mapping over any contiguous set of extant
pages. get_unmapped_area() will return the address into the actual ramfs
pages. The mapping may start anywhere and be of any size, but may not go
over the end of file. Multiple mappings may overlap in any way.
(3) Not permitting a file to be shrunk if it would truncate any shared
mappings (private mappings are copied).
Thus this patch provides support for POSIX shared memory on NOMMU kernels,
subject to certain limitations: a large enough contiguous block of pages must
be available to back the allocation, and it only works on directly mappable
filesystems.
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Ben Collins [Fri, 6 Jan 2006 08:11:40 +0000 (00:11 -0800)]
[PATCH] therm_adt746x: Quiet fan speed change messages
Only output the messages about fan speed changes with a verbose=1 module
param.
Signed-off-by: Fabio M. Di Nitto <fabbione@ubuntu.com> Signed-off-by: Ben Collins <bcollins@ubuntu.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Sylvain Munaut [Fri, 6 Jan 2006 08:11:36 +0000 (00:11 -0800)]
[PATCH] ppc32: Fix MPC52xx configuration space access
This patch works around an erratum of the MPC5200 by avoiding 32-bit accesses
in type 1 configuration accesses. All other accesses are still 32 bits wide.
It also adds some mb() calls, since the simple out_be(...) accesses are not
sufficient in this case.
Sylvain Munaut [Fri, 6 Jan 2006 08:11:35 +0000 (00:11 -0800)]
[PATCH] ppc32: Remove __init qualifier from mpc52xx pci resources fixups
mpc52xx_pci_fixup_resources() is not only called at init time but also on PCI
hotplug, such as when a CardBus card is plugged in, so the function is needed
after init too.
Thanks to Asier Llano Palacios for reporting this.
Sylvain Munaut [Fri, 6 Jan 2006 08:11:35 +0000 (00:11 -0800)]
[PATCH] ppc32: Modify Freescale MPC52xx IRQ mapping to _not_ use irq 0
AFAIK IRQ number 0 is a perfectly valid IRQ number. But it seems there are
numerous places where it's considered an invalid or "no irq" value. Since
that value is problematic, the IRQ mapping is changed to not use it.
Sylvain Munaut [Fri, 6 Jan 2006 08:11:34 +0000 (00:11 -0800)]
[PATCH] ppc32: Fix static IO mapping for Freescale MPC52xx
The current iomapping used MBAR_SIZE for the size argument of
io_block_mapping, resulting in a call to setbat with a size argument of 64k
which is invalid.
This patch corrects this and maps the whole 0xf0000000->0xffffffff range so
that devices on the local bus are also included in the BAT mapping.
Thanks to Bernhard Kuhn from Metrowerks for pointing this out.
Sylvain Munaut [Fri, 6 Jan 2006 08:11:32 +0000 (00:11 -0800)]
[PATCH] ppc32/serial: Change mpc52xx_uart.c to use the Low Density Serial port major
Before this patch we were just using the "classic" /dev/ttySx devices.
However, when another driver that uses those is loaded on the system (like
drivers for serial PCMCIA cards), that creates a conflict over the minors.
Therefore, we now use /dev/ttyPSC[0:5] (note the 0-based numbering!) with some
minors we've been assigned in the "Low Density Serial port" major.
Arthur Othieno [Fri, 6 Jan 2006 08:11:29 +0000 (00:11 -0800)]
[PATCH] macintosh: don't store i2c_add_driver() return if no further processing done
therm_pm72.c and windfarm_lm75_sensor.c both store the return from
i2c_add_driver() but do no further processing on the result. Simply return
what i2c_add_driver() did, instead.
Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nicolas Kaiser [Fri, 6 Jan 2006 08:11:22 +0000 (00:11 -0800)]
[PATCH] selinux: ARRAY_SIZE cleanups
Use ARRAY_SIZE macro instead of sizeof(x)/sizeof(x[0]).
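A standalone illustration of the cleanup (the table name below is made up for the example, not one of the actual SELinux arrays):

    #include <stdio.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    static const char *example_class_names[] = { "file", "dir", "socket" };

    int main(void)
    {
            /* before: sizeof(example_class_names) / sizeof(example_class_names[0]) */
            /* after:  ARRAY_SIZE(example_class_names)                              */
            printf("%zu entries\n", ARRAY_SIZE(example_class_names));
            return 0;
    }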
Signed-off-by: Nicolas Kaiser <nikai@nikai.net> Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov> Acked-by: James Morris <jmorris@namei.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin [Fri, 6 Jan 2006 08:11:20 +0000 (00:11 -0800)]
[PATCH] mm: page_state opt
Optimise page_state manipulations by introducing interrupt unsafe accessors
to page_state fields. Callers must provide their own locking (either
disable interrupts or not update from interrupt context).
Switch over the hot callsites that can easily be moved under interrupts off
sections.
Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Several counters already have the need to use 64-bit atomic variables on
64-bit platforms (see mm_counter_t in sched.h). We have to do ugly ifdefs to
fall back to 32-bit atomics on 32-bit platforms.
The VM statistics patch that I am working on will also make more extensive
use of atomic64.
This patch introduces a new type, atomic_long_t, by providing definitions in
asm-generic/atomic.h that work like the C "long" type: it is 32 bits on
32-bit platforms and 64 bits on 64-bit platforms.
Also cleans up the determination of the mm_counter_t in sched.h.
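The idea looks roughly like this (a simplified sketch of the asm-generic definitions; the real header provides a fuller set of operations):

    #if BITS_PER_LONG == 64
    typedef atomic64_t atomic_long_t;
    #define ATOMIC_LONG_INIT(i)     ATOMIC64_INIT(i)
    #define atomic_long_read(l)     atomic64_read(l)
    #define atomic_long_inc(l)      atomic64_inc(l)
    #else
    typedef atomic_t atomic_long_t;
    #define ATOMIC_LONG_INIT(i)     ATOMIC_INIT(i)
    #define atomic_long_read(l)     atomic_read(l)
    #define atomic_long_inc(l)      atomic_inc(l)
    #endif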
Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The use of k in the inner loop means that the highest zone nr is always used
if any zone of a node is populated. This means that the policy zone is not
correctly determined on arches that do not use HIGHMEM, like ia64.
Change the loop to decrement k, which also simplifies the BUG_ON.
Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] mm: move determination of policy_zone into page allocator
Currently the function to build a zonelist for a BIND policy has the side
effect of setting policy_zone. This seems a bit strange. policy_zone does not
seem to be initialized elsewhere and is therefore 0. Do we police ZONE_DMA if
no bind policy has been used yet?
This patch moves the determination of the zone to apply policies to into
the page allocator. We determine the zone while building the zonelist for
nodes.
Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andrew Morton [Fri, 6 Jan 2006 08:11:14 +0000 (00:11 -0800)]
[PATCH] vmscan: balancing fix
Revert a patch which went into 2.6.8-rc1. The changelog for that patch was:
The shrink_zone() logic can, under some circumstances, cause far too many
pages to be reclaimed. Say, we're scanning at high priority and suddenly
hit a large number of reclaimable pages on the LRU.
Change things so we bale out when SWAP_CLUSTER_MAX pages have been
reclaimed.
Problem is, this change caused significant imbalance in inter-zone scan
balancing by truncating scans of larger zones.
Suppose, for example, ZONE_HIGHMEM is 10x the size of ZONE_NORMAL. The zone
balancing algorithm would require that if we're scanning 100 pages of
ZONE_HIGHMEM, we should scan 10 pages of ZONE_NORMAL. But this logic will
cause the scanning of ZONE_HIGHMEM to bale out after only 32 pages have been
reclaimed, thus effectively causing smaller zones to be scanned relatively
harder than large ones.
Now I need to remember what the workload was which caused me to write this
patch originally, then fix it up in a different way...
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin [Fri, 6 Jan 2006 08:11:13 +0000 (00:11 -0800)]
[PATCH] mm: pfault optimisation
This atomic operation is superfluous: the pte will be added with the
referenced bit set, and the page will be referenced through this mapping after
the page fault handler returns anyway.
Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin [Fri, 6 Jan 2006 08:11:11 +0000 (00:11 -0800)]
[PATCH] mm: bad_page optimisation
Cut down size slightly by not passing bad_page() the function name (it can be
determined from dump_stack()), and cut down the number of printks in
bad_page().
Also, cut down some branching in the destroy_compound_page path.
Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
As find_lock_page() already checks with TestSetPageLocked() that the page is
locked, there is no need to call lock_page(), which would try-lock the page
again (the chances of the page being unlocked in between are small). Call
__lock_page() directly; this saves one atomic operation.
Also, mark the truncate-while-slept path as unlikely while we are here.
(akpm: ug. But this is actually a common path for normal old read()s against
a page which is under readahead I/O so ho-hum.)
David Howells [Fri, 6 Jan 2006 08:11:08 +0000 (00:11 -0800)]
[PATCH] FRV: Clean up bootmem allocator's page freeing algorithm
The attached patch cleans up the way the bootmem allocator frees pages.
A new function, __free_pages_bootmem(), is provided in mm/page_alloc.c that is
called from mm/bootmem.c to turn pages over to the main allocator. All the
bits of code to initialise pages (clearing PG_reserved and setting the page
count) are moved to here. The checks on page validity are removed, on the
assumption that the struct page arrays will have been prepared correctly.
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] Cleanup bootmem allocator and fix alloc_bootmem_low
This patch cleans up the alloc_bootmem fix for swiotlb. It removes the
alloc_bootmem_*_limit api and fixes the alloc_boot_*low api to do the right
thing -- allocate from low32 memory.
Hugh Dickins [Fri, 6 Jan 2006 08:10:55 +0000 (00:10 -0800)]
[PATCH] mm: free_pages_and_swap_cache opt
Minor optimization (though it doesn't help in the PREEMPT case, severely
constrained by small ZAP_BLOCK_SIZE). free_pages_and_swap_cache works in
chunks of 16, calling release_pages which works in chunks of PAGEVEC_SIZE.
But PAGEVEC_SIZE was dropped from 16 to 14 in 2.6.10, so we're now doing more
spin_lock_irq'ing than necessary: use PAGEVEC_SIZE throughout.
Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andy Whitcroft [Fri, 6 Jan 2006 08:10:54 +0000 (00:10 -0800)]
[PATCH] sparsemem: provide pfn_to_nid
Before SPARSEMEM is initialised we cannot provide an efficient pfn_to_nid()
implementation; before initialisation is complete we use early_pfn_to_nid()
to provide location information. Until recently there was no non-init user
of this functionality. Provide a post-init pfn_to_nid() implementation.
Note that this implementation assumes that the pfn passed has been validated
with pfn_valid(). The current single user of this function already has
this check.
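A hedged sketch of such a post-init implementation (it relies on the pfn already having passed pfn_valid(), as noted above):

    /* Once the mem_map is initialised, the node is recoverable from the
     * struct page itself. */
    static inline int pfn_to_nid(unsigned long pfn)
    {
            return page_to_nid(pfn_to_page(pfn));
    }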
Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andy Whitcroft [Fri, 6 Jan 2006 08:10:53 +0000 (00:10 -0800)]
[PATCH] flatmem split out memory model
There are three places we define pfn_to_nid(). Two in linux/mmzone.h and one
in asm/mmzone.h. These in essence represent the three memory models. The
definition in linux/mmzone.h under !NEED_MULTIPLE_NODES is both the FLATMEM
definition and the optimisation for single NUMA nodes; the one under SPARSEMEM
is the NUMA sparsemem one; the one in asm/mmzone.h under DISCONTIGMEM is the
discontigmem one. This is not in the least bit obvious, particularly the
connection between the non-NUMA optimisations and the memory models.
Two patches:
flatmem-split-out-memory-model: simplifies the selection of pfn_to_nid()
implementations. The selection is based primarily off the memory model
selected. Optimisations for non-NUMA are applied where needed.
sparse-provide-pfn_to_nid: implement pfn_to_nid() for SPARSEMEM
This patch:
pfn_to_nid is memory model specific
The pfn_to_nid() call is memory model specific. It represents the locality
identifier for the memory passed. Classically this would be a NUMA node,
but not a chunk of memory under DISCONTIGMEM.
The SPARSEMEM and FLATMEM memory model non-NUMA versions of pfn_to_nid()
are folded together under NEED_MULTIPLE_NODES, while DISCONTIGMEM has its
own optimisation. This is all very confusing.
This patch splits out each implementation of pfn_to_nid() so that we can
see them and the optimisations to each.
Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Russell King [Fri, 6 Jan 2006 08:10:52 +0000 (00:10 -0800)]
[PATCH] Shut up warnings in ipc/shm.c
Fix two warnings in ipc/shm.c
ipc/shm.c:122: warning: statement with no effect
ipc/shm.c:560: warning: statement with no effect
by converting the macros to empty inline functions. For safety, let's do
all three. This also has the advantage that typechecking gets performed
even without CONFIG_SHMEM enabled.
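A standalone illustration of the technique (the hook name is hypothetical, not one of the actual ipc/shm.c macros):

    #include <stdio.h>

    struct shmid_kernel;    /* opaque for this demo */

    /* old style: expands to a bare value, so using it as a statement gives
     * "statement with no effect" and the argument is never type-checked */
    #define shm_hook_macro(shp)     (0)

    /* new style: an empty static inline -- no warning, and the argument type
     * is still checked even when the real code is compiled out */
    static inline void shm_hook_inline(struct shmid_kernel *shp) { (void)shp; }

    int main(void)
    {
            shm_hook_inline(NULL);
            printf("compiles cleanly\n");
            return 0;
    }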
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Manfred Spraul <manfred@colorfullife.com> Cc: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The NODES_SPAN_OTHER_NODES config option was created so that DISCONTIGMEM
could handle pSeries numa layouts. However, support for DISCONTIGMEM has
been replaced by SPARSEMEM on powerpc. As a result, this config option and
its supporting code are no longer needed.
I have already sent a patch to Paul that removes the option from powerpc
specific code. This removes the arch independent piece. Doesn't really
matter which is applied first.
Signed-off-by: Mike Kravetz <kravetz@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] hugepages: fold find_or_alloc_pages into huge_no_page()
The number of parameters for find_or_alloc_page increases significantly after
policy support is added to huge pages. Simplify the code by folding
find_or_alloc_huge_page() into hugetlb_no_page().
Adam Litke objected to this piece in an earlier patch but I think this is a
good simplification. Diffstat shows that we can get rid of almost half of the
lines of find_or_alloc_page(). If we can find no consensus then let's simply
drop this patch.
Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: Andi Kleen <ak@muc.de> Acked-by: William Lee Irwin III <wli@holomorphy.com> Cc: Adam Litke <agl@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] Remove old node based policy interface from mempolicy.c
mempolicy.c contains provisional interface for huge page allocation based on
node numbers. This is in use in SLES9 but was never used (AFAIK) in upstream
versions of Linux.
Huge page allocations now use zonelists to figure out where to allocate pages.
The use of zonelists allows us to find the closest huge page, taking the NUMA
distance into consideration for huge page allocations.
Remove the obsolete functions.
Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: Andi Kleen <ak@muc.de> Acked-by: William Lee Irwin III <wli@holomorphy.com> Cc: Adam Litke <agl@us.ibm.com> Acked-by: Paul Jackson <pj@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The huge_zonelist() function in the memory policy layer provides an list of
zones ordered by NUMA distance. The hugetlb layer will walk that list looking
for a zone that has available huge pages but is also in the nodeset of the
current cpuset.
This patch does not contain the folding of find_or_alloc_huge_page() that was
controversial in the earlier discussion.
Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: Andi Kleen <ak@muc.de> Acked-by: William Lee Irwin III <wli@holomorphy.com> Cc: Adam Litke <agl@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This was discussed at
http://marc.theaimsgroup.com/?l=linux-kernel&m=113166526217117&w=2
This patch changes the dequeueing to select a huge page near the node the
process is executing on, instead of always beginning the check for free nodes
at node 0. This will result in placement of the huge pages near the executing
processor, improving performance.
The existing implementation can place the huge pages far away from the
executing processor, causing significant degradation of performance. The
search starting from zero also means that the lower zones quickly run out
of memory. Selecting a huge page near the process distributes the huge
pages better.
Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: Adam Litke <agl@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Gibson [Fri, 6 Jan 2006 08:10:44 +0000 (00:10 -0800)]
[PATCH] Hugetlb: Copy on Write support
Implement copy-on-write support for hugetlb mappings so MAP_PRIVATE can be
supported. This helps us to safely use hugetlb pages in many more
applications. The patch makes the following changes. If needed, I also have
it broken out according to the following paragraphs.
1. Add a pair of functions to set/clear write access on huge ptes. The
writable check in make_huge_pte is moved out to the caller for use by COW
later.
2. Hugetlb copy-on-write requires special case handling in the following
situations:
- copy_hugetlb_page_range() - Copied pages must be write protected so
a COW fault will be triggered (if necessary) if those pages are written
to.
- find_or_alloc_huge_page() - Only MAP_SHARED pages are added to the
page cache. MAP_PRIVATE pages still need to be locked however.
3. Provide hugetlb_cow() and calls from hugetlb_fault() and
hugetlb_no_page() which handles the COW fault by making the actual copy.
4. Remove the check in hugetlbfs_file_map() so that MAP_PRIVATE mmaps
will be allowed. Make MAP_HUGETLB exempt from the deprecated VM_RESERVED
mapping check.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Adam Litke <agl@us.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adam Litke [Fri, 6 Jan 2006 08:10:43 +0000 (00:10 -0800)]
[PATCH] Hugetlb: Reorganize hugetlb_fault to prepare for COW
This patch splits the "no_page()" type activity into its own function,
hugetlb_no_page(). hugetlb_fault() becomes the entry point for hugetlb faults
and delegates to the appropriate handler depending on the type of fault.
Right now we still have only hugetlb_no_page() but a later patch introduces a
COW fault.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Adam Litke <agl@us.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adam Litke [Fri, 6 Jan 2006 08:10:42 +0000 (00:10 -0800)]
[PATCH] Hugetlb: Rename find_lock_page to find_or_alloc_huge_page
find_lock_huge_page() isn't a great name, since it does extra things not
analogous to find_lock_page(). Rename it find_or_alloc_huge_page(), which is
closer to the mark.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Adam Litke <agl@us.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adam Litke [Fri, 6 Jan 2006 08:10:40 +0000 (00:10 -0800)]
[PATCH] Hugetlb: Remove duplicate i_size check
cleanup
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Adam Litke <agl@us.ibm.com> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: "Seth, Rohit" <rohit.seth@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] madvise(MADV_REMOVE): remove pages from tmpfs shm backing store
Here is the patch to implement madvise(MADV_REMOVE) - which frees up a
given range of pages & its associated backing store. Current
implementation supports only shmfs/tmpfs and other filesystems return
-ENOSYS.
"Some app allocates large tmpfs files, then when some task quits and some
client disconnect, some memory can be released. However the only way to
release tmpfs-swap is to MADV_REMOVE". - Andrea Arcangeli
Databases want to use this feature to drop a section of their bufferpool
(shared memory segments) - without writing back to disk/swap space.
This feature is also useful for supporting hot-plug memory on UML.
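A minimal userspace sketch of how an application might use the new advice on a tmpfs-backed segment (illustrative only; names and error handling are abbreviated):

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
            long pg = sysconf(_SC_PAGESIZE);
            int fd = shm_open("/madv_remove_demo", O_CREAT | O_RDWR, 0600);

            if (fd < 0 || ftruncate(fd, 4 * pg) < 0)
                    return 1;

            char *p = mmap(NULL, 4 * pg, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;
            memset(p, 'x', 4 * pg);

            /* Free pages 1-2 and their tmpfs backing store; the start address
             * must be page aligned.  Unsupported filesystems return ENOSYS. */
            if (madvise(p + pg, 2 * pg, MADV_REMOVE) < 0)
                    perror("madvise(MADV_REMOVE)");

            shm_unlink("/madv_remove_demo");
            return 0;
    }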
Concerns raised by Andrew Morton:
- "We have no plan for holepunching! If we _do_ have such a plan (or
might in the future) then what would the API look like? I think
sys_holepunch(fd, start, len), so we should start out with that."
- Using madvise is very weird, because people will ask "why do I need to
mmap my file before I can stick a hole in it?"
- None of the other madvise operations call into the filesystem in this
manner. A broad question is: is this capability an MM operation or a
filesystem operation? truncate, for example, is a filesystem operation
which sometimes has MM side-effects. madvise is an mm operation and with
this patch, it gains FS side-effects, only they're really, really
significant ones."
Comments:
- Andrea suggested the fs operation too but then it's more efficient to
have it as a mm operation with fs side effects, because they don't
immediately know the fd and physical offset of the range. It's possible to
fixup in userland and to use the fs operation but it's more expensive,
the vmas are already in the kernel and we can use them.
Short term plan & Future Direction:
- We seem to need this interface only for shmfs/tmpfs files in the short
term. We have to add hooks into the filesystem for correctness and
completeness. This is what this patch does.
- In the future, the plan is to support both fs and mmap APIs as well. This
also involves (other) filesystem specific functions to be implemented.
- Current patch doesn't support VM_NONLINEAR - which can be addressed in
the future.
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: Andrea Arcangeli <andrea@suse.de> Cc: Michael Kerrisk <mtk-manpages@gmx.net> Cc: Ulrich Drepper <drepper@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch creates truncate_inode_pages_range() from truncate_inode_pages();
truncate_inode_pages() becomes a one-line call to truncate_inode_pages_range().
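The resulting wrapper is essentially a one-liner, roughly:

    void truncate_inode_pages(struct address_space *mapping, loff_t lstart)
    {
            /* truncate everything from lstart to the end of the file */
            truncate_inode_pages_range(mapping, lstart, (loff_t)-1);
    }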
Reiser4 needs truncate_inode_pages_range() because it tries to keep a
correspondence between the existence of metadata pointing to data pages and
the pages those metadata point to. So, when the metadata for a certain part
of a file is removed from the filesystem tree, only the pages of the
corresponding range are truncated.
(Needed by the madvise(MADV_REMOVE) patch)
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andy Whitcroft [Fri, 6 Jan 2006 08:10:35 +0000 (00:10 -0800)]
[PATCH] memhotplug: register_memory should be global
register_memory is global and declared so in linux/memory.h. Update the
HOTPLUG specific definition to match. This fixes a compile warning when
HOTPLUG is enabled.
Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andy Whitcroft [Fri, 6 Jan 2006 08:10:35 +0000 (00:10 -0800)]
[PATCH] memhotplug: register_ and unregister_memory_notifier should be global
Both register_memory_notifier and unregister_memory_notifier are global and
declared so in linux/memory.h. Update the HOTPLUG specific definitions to
match. This fixes a compile warning when HOTPLUG is enabled.
Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Two changes to the setting of the ALLOC_CPUSET flag in
mm/page_alloc.c:__alloc_pages()
- A bug fix: the "ignoring mins" case should not be honoring ALLOC_CPUSET.
This case, of all cases, since it is handling a request that will free up
more memory than is asked for (exiting tasks, e.g.), should be allowed to
escape cpuset constraints when memory is tight.
- A logic change to make it simpler. Honor cpusets even on GFP_ATOMIC
(!wait) requests. With this, cpuset confinement applies to all requests
except ALLOC_NO_WATERMARKS, so that in a subsequent cleanup patch, I can
remove the ALLOC_CPUSET flag entirely. Since I don't know any real reason
this logic has to be either way, I am choosing the path of the simplest
code.
Signed-off-by: Paul Jackson <pj@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Fri, 6 Jan 2006 08:09:49 +0000 (00:09 -0800)]
[PATCH] knfsd: fix hash function for IP addresses on 64bit little-endian machines.
The hash.h hash_long function, when used on a 64-bit machine, ignores many
of the middle-order bits. (The prime chosen is too bit-sparse.)
IP addresses for clients of an NFS server are very likely to differ only in
the low-order bits. As addresses are stored in network-byte-order, these
bits become middle-order bits in a little-endian 64bit 'long', and so do
not contribute to the hash. Thus you can have the situation where all
clients appear on one hash chain.
So, until hash_long is fixed (or maybe forever), use a hash function that
works well on IP addresses - xor the bytes together.
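A self-contained sketch of such a hash (illustrative; not the exact sunrpc code):

    #include <stdio.h>

    /* XOR the four address bytes together, then fold into the table size.
     * Every byte of the address contributes to the bucket, unlike
     * hash_long() on a 64-bit little-endian machine. */
    static unsigned int ip_addr_hash(const unsigned char addr[4], unsigned int bits)
    {
            unsigned int h = addr[0] ^ addr[1] ^ addr[2] ^ addr[3];
            return h & ((1u << bits) - 1);
    }

    int main(void)
    {
            unsigned char a[4] = { 10, 0, 0, 1 };
            unsigned char b[4] = { 10, 0, 0, 2 };   /* differs only in the low-order bits */

            printf("%u %u\n", ip_addr_hash(a, 5), ip_addr_hash(b, 5));
            return 0;
    }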
Thanks to "Iozone" <capps@iozone.org> for identifying this problem.
Cc: "Iozone" <capps@iozone.org> Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Herbert Xu [Fri, 6 Jan 2006 08:09:47 +0000 (00:09 -0800)]
[PATCH] nbd: fix TX/RX race condition
Janos Haar of First NetCenter Bt. reported numerous crashes involving the
NBD driver. With his help, this was tracked down to bogus bio vectors
which in turn was the result of a race condition between the
receive/transmit routines in the NBD driver.
The bug manifests itself like this:
CPU0 CPU1
do_nbd_request
add req to queuelist
nbd_send_request
send req head
for each bio
kmap
send
nbd_read_stat
nbd_find_request
nbd_end_request
kunmap
When CPU1 finishes nbd_end_request, the request and all its associated
bio's are freed. So when CPU0 calls kunmap whose argument is derived from
the last bio, it may crash.
Under normal circumstances, the race occurs only on the last bio. However,
if an error is encountered on the remote NBD server (such as an incorrect
magic number in the request), or if there were a bug in the server, it is
possible for the nbd_end_request to occur any time after the request's
addition to the queuelist.
The following patch fixes this problem by making sure that requests are not
added to the queuelist until after they have completed transmission.
In order for the receiving side to be ready for responses involving
requests still being transmitted, the patch introduces the concept of the
active request.
When a response matches the current active request, its processing is
delayed until after the transmission has come to a stop.
This has been tested by Janos and it has been successful in curing this
race condition.
From: Herbert Xu <herbert@gondor.apana.org.au>
Here is an updated patch which removes the active_req wait in
nbd_clear_queue and the associated memory barrier.
I've also clarified this in the comment.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Cc: <djani22@dynamicweb.hu> Cc: Paul Clements <Paul.Clements@SteelEye.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Joshua Kwan [Fri, 6 Jan 2006 08:09:45 +0000 (00:09 -0800)]
[PATCH] hfsplus oops fix
nls_utf8 is available, and the check in hfsplus_fill_super checks the wrong
pointer for NULLness (it checks the saved nls, not the new one that it
needs to use.)
Signed-off-by: Joshua Kwan <joshk@triplehelix.org> Cc: Roman Zippel <zippel@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Chuck Ebbert [Fri, 6 Jan 2006 04:11:29 +0000 (23:11 -0500)]
[PATCH] i386: PTRACE_POKEUSR: allow changing RF bit in EFLAGS register.
Setting RF (resume flag) allows a debugger to resume execution after a
code breakpoint without tripping the breakpoint again. It is reset by
the CPU after execution of one instruction.
Requested by Stephane Eranian:
"I am trying to the user HW debug registers on i386 and I am running
into a problem with ptrace() not allowing access to EFLAGS_RF for
POKEUSER (see FLAG_MASK). [ ... ] It avoids the need to remove the
breakpoint, single step, and reinstall. The equivalent functionality
exists on IA-64 and is allowed by ptrace()"
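A hedged userspace sketch of how a debugger might use this (error handling omitted; RF is bit 16 of EFLAGS):

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <stddef.h>

    #define EFLAGS_RF 0x00010000UL          /* resume flag, bit 16 */

    static void resume_past_breakpoint(pid_t child)
    {
            long off = offsetof(struct user, regs.eflags);
            long eflags = ptrace(PTRACE_PEEKUSER, child, (void *)off, NULL);

            /* Before this change, FLAG_MASK caused POKEUSER to drop the RF bit. */
            ptrace(PTRACE_POKEUSER, child, (void *)off, (void *)(eflags | EFLAGS_RF));
            ptrace(PTRACE_CONT, child, NULL, NULL);
    }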