Mike Anderson [Mon, 27 Mar 2006 09:17:54 +0000 (01:17 -0800)]
[PATCH] dm table: store md
Store an up-pointer to the owning struct mapped_device in every table when it
is created.
Access it with:
struct mapped_device *dm_table_get_md(struct dm_table *t)
Tables linked to md must be destroyed before the md itself.
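A minimal sketch of the accessor this adds (whether the real version also takes a reference on the md is not shown here):

    struct mapped_device *dm_table_get_md(struct dm_table *t)
    {
            return t->md;   /* up-pointer stored at table creation */
    }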
Signed-off-by: Mike Anderson <andmike@us.ibm.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Mike Anderson [Mon, 27 Mar 2006 09:17:52 +0000 (01:17 -0800)]
[PATCH] dm: store md name
The patch stores a printable device number in struct mapped_device for use in
warning messages and with a proposed netlink interface.
Signed-off-by: Mike Anderson <andmike@us.ibm.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Jun'ichi Nomura [Mon, 27 Mar 2006 09:17:51 +0000 (01:17 -0800)]
[PATCH] dm flush queue EINTR
If dm_suspend() is cancelled, bios already added to the deferred list need to
be submitted. Otherwise they remain 'in limbo' until there's a dm_resume().
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Before removing a snapshot, wait for the completion of any kcopyd jobs using
it.
Do this by maintaining a count (nr_jobs) of how many outstanding jobs each
kcopyd_client has.
The snapshot destructor first unregisters the snapshot so that no new kcopyd
jobs (created by writes to the origin) will reference that particular
snapshot. kcopyd_client_destroy() is now run next to wait for the completion
of any outstanding jobs before the snapshot exception structures (that those
jobs reference) are freed.
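A hedged sketch of the accounting this implies; nr_jobs is named in the text, while destroy_wait and the helper are assumptions:

    static void job_complete(struct kcopyd_client *kc)
    {
            /* nr_jobs was incremented when the job was created */
            if (atomic_dec_and_test(&kc->nr_jobs))
                    wake_up(&kc->destroy_wait);
    }

    void kcopyd_client_destroy(struct kcopyd_client *kc)
    {
            /* block until no job can still reference the snapshot's
             * exception structures */
            wait_event(kc->destroy_wait, !atomic_read(&kc->nr_jobs));
            /* ... free the client's resources ... */
    }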
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
NeilBrown [Mon, 27 Mar 2006 09:17:49 +0000 (01:17 -0800)]
[PATCH] dm: make sure QUEUE_FLAG_CLUSTER is set properly
This flag should be set for a virtual device iff it is set for all
underlying devices.
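An illustrative sketch of the rule (the helper name is hypothetical):

    static void combine_cluster_flag(struct request_queue *q,
                                     struct request_queue *underlying)
    {
            /* the virtual device keeps QUEUE_FLAG_CLUSTER only if every
             * underlying queue has it set */
            if (!test_bit(QUEUE_FLAG_CLUSTER, &underlying->queue_flags))
                    clear_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags);
    }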
Signed-off-by: Neil Brown <neilb@suse.de>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andrew Morton [Mon, 27 Mar 2006 09:17:48 +0000 (01:17 -0800)]
[PATCH] dm: remove SECTOR_FORMAT
We don't know what type sector_t has. Sometimes it's unsigned long, sometimes
it's unsigned long long. For example on ppc64 it's unsigned long with
CONFIG_LBD=n and on x86_64 it's unsigned long long with CONFIG_LBD=n.
The way to handle all of this is to always use unsigned long long and to
always typecast the sector_t when printing it.
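In practice that means the %llu format plus an explicit cast wherever a sector_t is printed, for example:

    /* correct whether sector_t is unsigned long or unsigned long long */
    printk(KERN_DEBUG "sector %llu\n", (unsigned long long)sector);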
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Jun'ichi Nomura [Mon, 27 Mar 2006 09:17:47 +0000 (01:17 -0800)]
[PATCH] drivers/md/dm-raid1.c: Fix inconsistent mirroring after interrupted recovery
dm-mirror has a potential data corruption problem: while the on-disk log shows
that all disk contents are in sync, the actual contents of the disks are not
synchronized. The problem occurs if the initial recovery (syncing) is
interrupted and resumed.
The attached patch fixes this problem.
Background:
rh_dec() changes the region state from RH_NOSYNC (out-of-sync) to RH_CLEAN
(in-sync), which results in the corresponding bit of clean_bits being set.
This is harmful if an on-disk log is used and the map is removed/suspended
before the initial sync is completed. The clean_bits bitmap is written to
the on-disk log when the map is removed and, upon resume, is read back and
copied to sync_bits. Since the recovery process consults sync_bits to find
regions to be recovered, a region whose state was changed from RH_NOSYNC to
RH_CLEAN is no longer recovered.
If you haven't applied dm-raid1-read-balancing.patch, proposed on dm-devel
some time ago, the contents of the mirrored disks silently become corrupted.
If you have, balanced reads may return bogus data from out-of-sync disks.
The patch keeps the RH_NOSYNC state unchanged when I/O on the region
completes. The state will change to RH_RECOVERING when recovery starts on
the region and the entry is reclaimed when the recovery completes, so the
region hash entry is not leaked.
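A simplified model of the fixed transition (an illustration of the state machine described above, not the verbatim driver code):

    enum region_state { RH_CLEAN, RH_DIRTY, RH_NOSYNC, RH_RECOVERING };

    struct region {
            enum region_state state;
            int pending;            /* outstanding writes on the region */
    };

    static void rh_dec(struct region *reg)
    {
            if (--reg->pending)
                    return;

            /* Last write completed.  Only RH_DIRTY becomes RH_CLEAN;
             * RH_NOSYNC stays as it is (the bug was demoting it to
             * RH_CLEAN here), so recovery can still find the region,
             * move it to RH_RECOVERING and reclaim it when done. */
            if (reg->state == RH_DIRTY)
                    reg->state = RH_CLEAN;
    }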
Alasdair said:
I've analysed the relevant part of the state machine and I believe that
the patch is correct.
(Further work on this code is still needed - this patch has the
side-effect of holding onto memory unnecessarily for long periods of time
under certain workloads - but better that than corrupting data.)
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When a snapshot becomes invalid, s->valid is set to 0. In this state, a
snapshot can no longer be accessed.
When s->lock is acquired, before doing anything else, s->valid must be checked
to ensure the snapshot remains valid.
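The pattern the added checks follow, sketched (the exact locking primitive and error handling vary by call site):

    down_write(&s->lock);
    if (!s->valid) {
            /* the snapshot was invalidated; don't touch it further */
            up_write(&s->lock);
            return -EIO;
    }
    /* ... the snapshot is safe to use here ... */
    up_write(&s->lock);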
This patch eliminates some races (that may cause panics) by adding some
missing checks. At the same time, some unnecessary levels of indentation are
removed and snapshot invalidation is moved into a single function that always
generates a device-mapper event.
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
[PATCH] device-mapper snapshot: replace sibling list
The siblings "list" is used unsafely at the moment.
Firstly, only the element of the list being changed gets locked (via the
snapshot lock); the next and previous elements, whose pointers are also
being changed, are not locked.
Secondly, if you have two or more snapshots and you write to the same chunk a
second time before every snapshot has finished making its private copy of the
data, then, if you're unlucky, __origin_write() could attempt its list_merge()
and dereference a 'last' pointer to a pending_exception structure that has
just been freed.
Analysis reveals that the list is actually only there for reference counting.
If 5 pending_exceptions are needed in origin_write, then the 5 are joined
together into a 5-element list - without a separate list head because there's
nowhere suitable to store it. As the pending_exceptions complete, they are
removed from the list one-by-one and any contents of origin_bios get moved
across to one of the remaining pending_exceptions on the list. Whichever one
is last is detected because list_empty() is then true and the origin_bios get
submitted.
The fix proposed here uses an alternative reference counting mechanism by
choosing one of the pending_exceptions as primary and maintaining an atomic
counter there.
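A sketch of the replacement mechanism; the field names (primary_pe, ref_count) and the flush helper follow the description above and are otherwise assumptions:

    struct pending_exception {
            /* group leader chosen for an origin write; the leader
             * points to itself */
            struct pending_exception *primary_pe;
            /* meaningful only on the primary: number of group members
             * still outstanding */
            atomic_t ref_count;
            /* ... */
    };

    static void pending_complete(struct pending_exception *pe)
    {
            struct pending_exception *primary = pe->primary_pe;

            /* last member to complete submits the held-back origin bios */
            if (atomic_dec_and_test(&primary->ref_count))
                    flush_origin_bios(primary);     /* hypothetical helper */
    }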
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Say you have several snapshots of the same origin and then you issue a write
to some place in the origin for the first time.
Before the device-mapper snapshot target lets the write go through to the
underlying device, it needs to make a copy of the data that is about to be
overwritten. Each snapshot is independent, so it makes one copy for each
snapshot.
__origin_write() loops through each snapshot and checks to see whether a copy
is needed for that snapshot. (A copy is only needed the first time that data
changes.)
If a copy is needed, the code allocates a 'pending_exception' structure
holding the details. It links these together for all the snapshots, then
works its way through this list and submits the copying requests to the kcopyd
thread by calling start_copy(). When each request is completed, the original
pending_exception structure gets freed in pending_complete().
If you're very unlucky, this structure can get freed *before* the submission
process has finished walking the list.
This patch:
1) Creates a new temporary list pe_queue to hold the pending exception
structures;
2) Does all the bookkeeping up-front, then walks through the new list
safely and calls start_copy() for each pending_exception that needed it;
3) Avoids attempting to add pe->siblings to the list if it's already
connected.
[NB This does not fix all the races in this code. More patches will follow.]
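A hedged sketch of the reworked flow; find_pending_exception() here is a hypothetical helper standing in for the up-front bookkeeping step:

    static int __origin_write(struct list_head *snapshots, struct bio *bio)
    {
            LIST_HEAD(pe_queue);            /* temporary local list */
            struct dm_snapshot *snap;
            struct pending_exception *pe, *next_pe;

            /* Phase 1: all bookkeeping up-front; queue each snapshot's
             * pending_exception on our private list. */
            list_for_each_entry(snap, snapshots, list) {
                    pe = find_pending_exception(snap, bio);
                    if (pe)
                            list_add_tail(&pe->list, &pe_queue);
            }

            /* Phase 2: nothing can free these entries while they sit on
             * pe_queue, so submission is now safe. */
            list_for_each_entry_safe(pe, next_pe, &pe_queue, list)
                    start_copy(pe);

            return 0;
    }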
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Olaf Hering [Mon, 27 Mar 2006 09:17:38 +0000 (01:17 -0800)]
[PATCH] fbdev: add modeline for 1680x1050@60
Add a modeline for the Philips 200W display. aty128fb does not do DDC, so it
picks 1920x1440 or similar; the display works OK with nvidiafb because that
driver can ask for DDC data.
Alan Curry [Mon, 27 Mar 2006 09:17:30 +0000 (01:17 -0800)]
[PATCH] framebuffer: cmap-setting return values
A set of 3 small bugfixes, all of which are related to bogus return values
of fb colormap-setting functions.
First, fb_alloc_cmap returns -1 if memory allocation fails. This is a hard
condition to reproduce since you'd have to be really low on memory, but from
studying the contexts in which it is called, I think this function should be
returning a negative errno, and the -1 will be seen as an EPERM. Switching it
to -ENOMEM makes sense.
Second, the store_cmap function which is called for writes to
/sys/class/graphics/fb0/color_map returns 0 for success, but it should be
returning the count of bytes written since its return value ends up in
userspace as the result of the write() syscall.
Third, radeonfb returns 1 instead of a negative errno when FBIOPUTCMAP is
called with an oversized colormap. This is seen in userspace as a return
value of 1 from the ioctl() syscall with errno left unchanged. A more
useful return value would be -EINVAL.
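For the second item, the convention being adopted looks like this sketch (signature simplified; the parsing helper is hypothetical):

    static ssize_t store_cmap(struct device *dev,
                              struct device_attribute *attr,
                              const char *buf, size_t count)
    {
            int err = fb_parse_and_set_cmap(dev, buf, count); /* hypothetical */

            /* on success return the bytes consumed: this value becomes
             * the result of the userspace write() syscall */
            return err ? err : count;
    }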
Signed-off-by: Alan Curry <pacman@TheWorld.com>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Jean Delvare [Mon, 27 Mar 2006 09:17:26 +0000 (01:17 -0800)]
[PATCH] matrox maven: memory allocation and other cleanups
A few cleanups which were done to almost all i2c drivers some time
ago, but matroxfb_maven was forgotten:
* Don't allocate two different structures at once.
* Use kzalloc instead of kmalloc+memset.
* Use strlcpy instead of strcpy.
* Drop duplicate error message on client deregistration failure.
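An illustrative before/after for two of the items (the structure and string here are placeholders, not the driver's real code):

    /* before: two calls, unbounded copy */
    md = kmalloc(sizeof(*md), GFP_KERNEL);
    memset(md, 0, sizeof(*md));
    strcpy(client->name, "maven");

    /* after: zeroed allocation, bounded copy */
    md = kzalloc(sizeof(*md), GFP_KERNEL);
    strlcpy(client->name, "maven", sizeof(client->name));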
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Petr Vandrovec <vandrove@vc.cvut.cz>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Arthur Othieno [Mon, 27 Mar 2006 09:17:24 +0000 (01:17 -0800)]
[PATCH] matroxfb: simply return what i2c_add_driver() does
insmod will tell us when the module failed to load. We do no further
processing on the return from i2c_add_driver(), so just return what
i2c_add_driver() did, instead of storing it.
Add __init/__exit annotations while we're at it.
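The resulting shape, sketched (the driver struct and function names are assumptions):

    static int __init matroxfb_dh_maven_init(void)
    {
            /* hand i2c_add_driver()'s result straight back to insmod */
            return i2c_add_driver(&maven_driver);
    }

    static void __exit matroxfb_dh_maven_exit(void)
    {
            i2c_del_driver(&maven_driver);
    }

    module_init(matroxfb_dh_maven_init);
    module_exit(matroxfb_dh_maven_exit);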
Signed-off-by: Arthur Othieno <apgo@patchbomb.org>
Acked-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Petr Vandrovec <petr@vandrovec.name>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
David Vrabel [Mon, 27 Mar 2006 09:17:23 +0000 (01:17 -0800)]
[PATCH] fbdev: framebuffer driver for Geode GX
A framebuffer driver for the display controller in AMD Geode GX processors
(Geode GX533, Geode GX500 etc.). Tested at 640x480, 800x600, 1024x768 and
1280x1024 at 8, 16, and 24 bpp with both CRT and TFT. No accelerated features
currently implemented and compression remains disabled.
This driver requires that the BIOS (or the SoftVG/Firmbase code in the BIOS)
has created an appropriate virtual PCI header.
Signed-off-by: David Vrabel <dvrabel@arcom.com>
Signed-off-by: Antonino Daplas <adaplas@pol.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add suspend and resume hooks to make software suspend more reliable. Resuming
from standby should generally work. Resuming from mem and from disk requires
that the GPU is disabled. Adding these to the suspend script...
fbset -accel false -a
/* suspend here */
fbset -accel true -a
... should generally work. In addition, resuming from mem requires that the
video card has to be POSTed by the BIOS or some other utility.
The scrollback buffer of the VGA console is located in VGA RAM. This RAM
is fixed in size and is very small. To make the scrollback buffer larger,
it must be placed instead in System RAM.
This patch adds this feature. Both the feature and the size of the buffer
are made kernel config options. Besides consuming kernel memory, the feature
will slow down the console by approximately 20%.
Samuel Thibault [Mon, 27 Mar 2006 09:17:19 +0000 (01:17 -0800)]
[PATCH] vgacon: fix EGA cursor resize function
This corrects cursor resizing on EGA boards: the registers are write-only, so
we shouldn't even try to read them. Also, on EGA, 31/30 produces a flat
cursor; using 31/31 is better, since except with 32-pixel-high fonts it
shouldn't show up.
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove the assumption that pnp_register_driver() returns the number of devices
claimed. Returning the count is unreliable because devices may be hot-plugged
in the future.
This changes the convention to "zero for success, or a negative error value,"
which matches pci_register_driver(), acpi_bus_register_driver(), and
platform_driver_register().
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Adam Belay <ambx1@neo.rr.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This series of patches removes the assumption that pnp_register_driver()
returns the number of devices claimed. Returning the count is unreliable
because devices may be hot-plugged in the future. (Many devices don't support
hot-plug, of course, but PNP in general does.)
This changes the convention to "zero for success, or a negative error value,"
which matches pci_register_driver(), acpi_bus_register_driver(), and
platform_driver_register().
If drivers need to know the number of devices, they can count calls to their
.probe() methods.
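If a driver does want the count, a sketch of the suggested approach (names are illustrative):

    static atomic_t devices_claimed = ATOMIC_INIT(0);

    static int foo_pnp_probe(struct pnp_dev *dev,
                             const struct pnp_device_id *dev_id)
    {
            atomic_inc(&devices_claimed);   /* one .probe() call per device */
            return 0;
    }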
This patch:
Remove the assumption that pnp_register_driver() returns the number of devices
claimed.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Adam Belay <ambx1@neo.rr.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove the assumption that pnp_register_driver() returns the number of devices
claimed.
parport_pc_init() does nothing with "count", so remove it. Then nobody uses
the return value of parport_pc_find_ports(), so make it void. Finally, update
pnp_register_driver() usage.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Adam Belay <ambx1@neo.rr.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Alessandro Zummo [Mon, 27 Mar 2006 09:16:37 +0000 (01:16 -0800)]
[PATCH] RTC subsystem: class
Add the basic RTC subsystem infrastructure to the kernel.
rtc/class.c - registration facilities for RTC drivers
rtc/interface.c - kernel/rtc interface functions
rtc/hctosys.c - snippet of code that copies hw clock to sw clock
at bootup, if configured to do so.
Alessandro Zummo [Mon, 27 Mar 2006 09:16:35 +0000 (01:16 -0800)]
[PATCH] RTC subsystem: ARM cleanup
This patch removes from the ARM subsystem some of the rtc-related functions
that have been included in the RTC subsystem. It also fixes some naming
collisions.
Alan Stern [Mon, 27 Mar 2006 09:16:30 +0000 (01:16 -0800)]
[PATCH] Notifier chain update: API changes
The kernel's implementation of notifier chains is unsafe. There is no
protection against entries being added to or removed from a chain while the
chain is in use. The issues were discussed in this thread:
We noticed that notifier chains in the kernel fall into two basic usage
classes:
"Blocking" chains are always called from a process context
and the callout routines are allowed to sleep;
"Atomic" chains can be called from an atomic context and
the callout routines are not allowed to sleep.
We decided to codify this distinction and make it part of the API. Therefore
this set of patches introduces three new, parallel APIs: one for blocking
notifiers, one for atomic notifiers, and one for "raw" notifiers (which is
really just the old API under a new name). New kinds of data structures are
used for the heads of the chains, and new routines are defined for
registration, unregistration, and calling a chain. The three APIs are
explained in include/linux/notifier.h and their implementation is in
kernel/sys.c.
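For example, usage of the blocking variant looks roughly like this (the chain and callback names are illustrative):

    static BLOCKING_NOTIFIER_HEAD(my_chain);

    static int my_callback(struct notifier_block *nb,
                           unsigned long action, void *data)
    {
            /* blocking chains run in process context, so this may sleep */
            return NOTIFY_OK;
    }

    static struct notifier_block my_nb = {
            .notifier_call  = my_callback,
    };

    static int __init my_init(void)
    {
            /* registration must happen in process context */
            blocking_notifier_chain_register(&my_chain, &my_nb);
            blocking_notifier_call_chain(&my_chain, 0, NULL);
            return 0;
    }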
With atomic and blocking chains, the implementation guarantees that the chain
links will not be corrupted and that chain callers will not get messed up by
entries being added or removed. For raw chains the implementation provides no
guarantees at all; users of this API must provide their own protections. (The
idea was that situations may come up where the assumptions of the atomic and
blocking APIs are not appropriate, so it should be possible for users to
handle these things in their own way.)
There are some limitations, which should not be too hard to live with. For
atomic/blocking chains, registration and unregistration must always be done in
a process context since the chain is protected by a mutex/rwsem. Also, a
callout routine for a non-raw chain must not try to register or unregister
entries on its own chain. (This did happen in a couple of places and the code
had to be changed to avoid it.)
Since atomic chains may be called from within an NMI handler, they cannot use
spinlocks for synchronization. Instead we use RCU. The overhead falls almost
entirely in the unregister routine, which is okay since unregistration is much
less frequent than calling a chain.
Here is the list of chains that we adjusted and their classifications. None
of them use the raw API, so for the moment it is only a placeholder.
It's possible that some of these classifications are wrong. If they are,
please let us know or submit a patch to fix them. Note that any chain that
gets called very frequently should be atomic, because the rwsem read-locking
used for blocking chains is very likely to incur cache misses on SMP systems.
(However, if the chain's callout routines may sleep then the chain cannot be
atomic.)
The patch set was written by Alan Stern and Chandra Seetharaman, incorporating
material written by Keith Owens and suggestions from Paul McKenney and Andrew
Morton.
[jes@sgi.com: restructure the notifier chain initialization macros]
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Ingo Molnar [Mon, 27 Mar 2006 09:16:22 +0000 (01:16 -0800)]
[PATCH] lightweight robust futexes: core
Add the core infrastructure for robust futexes: structure definitions, the new
syscalls and the do_exit() based cleanup mechanism.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Acked-by: Ulrich Drepper <drepper@redhat.com>
Cc: Michael Kerrisk <mtk-manpages@gmx.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Ingo Molnar [Mon, 27 Mar 2006 09:16:21 +0000 (01:16 -0800)]
[PATCH] lightweight robust futexes: arch defaults
This patchset provides a new (written from scratch) implementation of robust
futexes, called "lightweight robust futexes". We believe this new
implementation is faster and simpler than the vma-based robust futex solutions
presented before, and we'd like this patchset to be adopted in the upstream
kernel. This is version 1 of the patchset.
Background
----------
What are robust futexes? To answer that, we first need to understand what
futexes are: normal futexes are special types of locks that in the
noncontended case can be acquired/released from userspace without having to
enter the kernel.
A futex is in essence a user-space address, e.g. a 32-bit lock variable
field. If userspace notices contention (the lock is already owned and someone
else wants to grab it too) then the lock is marked with a value that says
"there's a waiter pending", and the sys_futex(FUTEX_WAIT) syscall is used to
wait for the other guy to release it. The kernel creates a 'futex queue'
internally, so that it can later on match up the waiter with the waker -
without them having to know about each other. When the owner thread releases
the futex, it notices (via the variable value) that there were waiter(s)
pending, and does the sys_futex(FUTEX_WAKE) syscall to wake them up. Once all
waiters have taken and released the lock, the futex is again back to
'uncontended' state, and there's no in-kernel state associated with it. The
kernel completely forgets that there ever was a futex at that address. This
method makes futexes very lightweight and scalable.
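In rough pseudocode, the fast/slow path split described above (illustrative only; not glibc's actual code, and mark_waiter_pending() is a hypothetical helper):

    void futex_lock(int *uaddr)
    {
            /* fast path: acquire a free lock entirely in userspace */
            if (atomic_cmpxchg(uaddr, 0, gettid()) == 0)
                    return;

            /* slow path: mark the word "waiter pending", then sleep in
             * the kernel; the owner's unlock does FUTEX_WAKE */
            int val = mark_waiter_pending(uaddr);   /* hypothetical */
            sys_futex(uaddr, FUTEX_WAIT, val, NULL);
    }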
"Robustness" is about dealing with crashes while holding a lock: if a process
exits prematurely while holding a pthread_mutex_t lock that is also shared
with some other process (e.g. yum segfaults while holding a pthread_mutex_t,
or yum is kill -9-ed), then waiters for that lock need to be notified that the
last owner of the lock exited in some irregular way.
To solve such types of problems, "robust mutex" userspace APIs were created:
pthread_mutex_lock() returns an error value if the owner exits prematurely -
and the new owner can decide whether the data protected by the lock can be
recovered safely.
There is a big conceptual problem with futex based mutexes though: it is the
kernel that destroys the owner task (e.g. due to a SEGFAULT), but the kernel
cannot help with the cleanup: if there is no 'futex queue' (and in most cases
there is none, futexes being fast lightweight locks) then the kernel has no
information to clean up after the held lock! Userspace has no chance to clean
up after the lock either - userspace is the one that crashes, so it has no
opportunity to clean up. Catch-22.
In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot is
needed to release that futex-based lock. This is one of the leading
bug reports against yum.
To solve this problem, 'Robust Futex' patches were created and presented on
lkml: the one written by Todd Kneisel and David Singleton is the most advanced
at the moment. These patches all tried to extend the futex abstraction by
registering futex-based locks in the kernel - and thus give the kernel a
chance to clean up.
E.g. in David Singleton's robust-futex-6.patch, there are 3 new syscall
variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and FUTEX_RECOVER.
The kernel attaches such robust futexes to vmas (via
vma->vm_file->f_mapping->robust_head), and at do_exit() time, all vmas are
searched to see whether they have a robust_head set.
Lots of work went into the vma-based robust-futex patch, and recently it has
improved significantly, but unfortunately it still has two fundamental
problems left:
- they have quite complex locking and race scenarios. The vma-based
patches had been pending for years, but they are still not completely
reliable.
- they have to scan _every_ vma at sys_exit() time, per thread!
The second disadvantage is a real killer: pthread_exit() takes around 1
microsecond on Linux, but with thousands (or tens of thousands) of vmas every
pthread_exit() takes a millisecond or more, also totally destroying the CPU's
L1 and L2 caches!
This is very much noticeable even for normal process sys_exit_group() calls:
the kernel has to do the vma scanning unconditionally! (this is because the
kernel has no knowledge about how many robust futexes there are to be cleaned
up, because a robust futex might have been registered in another task, and the
futex variable might have been simply mmap()-ed into this process's address
space).
This huge overhead forced the creation of CONFIG_FUTEX_ROBUST, but worse than
that: the overhead makes robust futexes impractical for any type of generic
Linux distribution.
So it became clear to us that something had to be done. Last week, when Thomas
Gleixner tried to fix up the vma-based robust futex patch in the -rt tree, he
found a handful of new races and we were talking about it and analyzing the
situation. At that point a fundamentally different solution occurred to me.
This patchset (written in the past couple of days) implements that new
solution. Be warned though - the patchset does things we normally don't do in
Linux, so some might find the approach disturbing. Parental advice
recommended ;-)
New approach to robust futexes
------------------------------
At the heart of this new approach there is a per-thread private list of robust
locks that userspace is holding (maintained by glibc) - which userspace list
is registered with the kernel via a new syscall [this registration happens at
most once per thread lifetime]. At do_exit() time, the kernel checks this
user-space list: are there any robust futex locks to be cleaned up?
In the common case, at do_exit() time, there is no list registered, so the
cost of robust futexes is just a simple current->robust_list != NULL
comparison. If the thread has registered a list, then normally the list is
empty. If the thread/process crashed or terminated in some incorrect way then
the list might be non-empty: in this case the kernel carefully walks the list
[not trusting it], marks all locks that are owned by this thread with the
FUTEX_OWNER_DIED bit, and wakes up one waiter (if any).
The list is guaranteed to be private and per-thread, so it's lockless. There
is one race possible though: since adding to and removing from the list is
done after the futex is acquired by glibc, there is a window of a few
instructions in which the thread (or process) could die, leaving the futex
hung. To protect
against this possibility, userspace (glibc) also maintains a simple per-thread
'list_op_pending' field, to allow the kernel to clean up if the thread dies
after acquiring the lock, but just before it could have added itself to the
list. Glibc sets this list_op_pending field before it tries to acquire the
futex, and clears it after the list-add (or list-remove) has finished.
That's all that is needed - all the rest of robust-futex cleanup is done in
userspace [just like with the previous patches].
Ulrich Drepper has implemented the necessary glibc support for this new
mechanism, which fully enables robust mutexes. (Ulrich plans to commit these
changes to glibc-HEAD later today.)
Key differences of this userspace-list based approach, compared to the vma
based method:
- it's much, much faster: at thread exit time, there's no need to loop
over every vma (!), which the VM-based method has to do. Only a very
simple 'is the list empty' op is done.
- no VM changes are needed - 'struct address_space' is left alone.
- no registration of individual locks is needed: robust mutexes don't need
any extra per-lock syscalls. Robust mutexes thus become a very lightweight
primitive - so they don't force the application designer into a hard choice
between performance and robustness - robust mutexes are just as fast.
- no per-lock kernel allocation happens.
- no resource limits are needed.
- no kernel-space recovery call (FUTEX_RECOVER) is needed.
- the implementation and the locking is "obvious", and there are no
interactions with the VM.
Performance
-----------
I have benchmarked the time needed for the kernel to process a list of 1
million (!) held locks, using the new method [on a 2GHz CPU]:
- with FUTEX_WAIT set [contended mutex]: 130 msecs
- without FUTEX_WAIT set [uncontended mutex]: 30 msecs
I have also measured an approach where glibc does the lock notification [which
it currently does for !pshared robust mutexes], and that took 256 msecs -
clearly slower, due to the 1 million FUTEX_WAKE syscalls userspace had to do.
(1 million held locks are unheard of - we expect at most a handful of locks to
be held at a time. Nevertheless it's nice to know that this approach scales
nicely.)
Implementation details
----------------------
The patch adds two new syscalls: one to register the userspace list, and one
to query the registered list pointer:
asmlinkage long
sys_set_robust_list(struct robust_list_head __user *head,
size_t len);
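The companion query syscall, which the paragraph above mentions but the excerpt does not show, has this shape (treat the details as an assumption here):

    asmlinkage long
    sys_get_robust_list(int pid,
                        struct robust_list_head __user * __user *head_ptr,
                        size_t __user *len_ptr);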
List registration is very fast: the pointer is simply stored in
current->robust_list. [Note that in the future, if robust futexes become
widespread, we could extend sys_clone() to register a robust-list head for new
threads, without the need of another syscall.]
So there is virtually zero overhead for tasks not using robust futexes, and
even for robust futex users, there is only one extra syscall per thread
lifetime, and the cleanup operation, if it happens, is fast and
straightforward. The kernel doesn't have any internal distinction between
robust and normal futexes.
If a futex is found to be held at exit time, the kernel sets the
FUTEX_OWNER_DIED bit of the futex word:
#define FUTEX_OWNER_DIED 0x40000000
and wakes up the next futex waiter (if any). User-space does the rest of
the cleanup.
Otherwise, robust futexes are acquired by glibc by putting the TID into the
futex field atomically. Waiters set the FUTEX_WAITERS bit:
#define FUTEX_WAITERS 0x80000000
and the remaining bits are for the TID.
Testing, architecture support
-----------------------------
I've tested the new syscalls on x86 and x86_64, and have made sure the parsing
of the userspace list is robust [ ;-) ] even if the list is deliberately
corrupted.
i386 and x86_64 syscalls are wired up at the moment, and Ulrich has tested the
new glibc code (on x86_64 and i386), and it works for his robust-mutex
testcases.
All other architectures should build just fine too - but they won't have the
new syscalls yet.
Architectures need to implement the new futex_atomic_cmpxchg_inuser() inline
function before wiring up the syscalls (that function returns -ENOSYS right
now).
This patch:
Add placeholder futex_atomic_cmpxchg_inuser() implementations to every
architecture that supports futexes. It returns -ENOSYS.
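The per-architecture placeholder therefore amounts to this sketch (using the name the text gives; the argument types are an assumption and the final in-tree name may differ):

    static inline int
    futex_atomic_cmpxchg_inuser(int __user *uaddr, int oldval, int newval)
    {
            return -ENOSYS; /* no architecture implementation yet */
    }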
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Acked-by: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Ingo Molnar [Mon, 27 Mar 2006 09:16:09 +0000 (01:16 -0800)]
[PATCH] s390: add ptr_to_compat()
Add ptr_to_compat() to s390 - needed by the new robust-futex code.
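On 64-bit architectures whose compat pointers are 31/32 bits wide, such a helper typically looks like this (a sketch only; the 0x7fffffffUL masking is exactly what the CHECKME note below questions):

    static inline compat_uptr_t ptr_to_compat(void __user *uptr)
    {
            /* s390 compat addresses are 31-bit, hence the mask */
            return (u32)(unsigned long)uptr & 0x7fffffffUL;
    }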
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Untested. CHECKME: am I right about the 0x7fffffffUL masking?
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Dave Hansen [Mon, 27 Mar 2006 09:16:04 +0000 (01:16 -0800)]
[PATCH] unify PFN_* macros
Just about every architecture defines some macros to do operations on pfns.
They're all virtually identical. This patch consolidates all of them.
One minor glitch is that at least i386 uses them in a very skeletal header
file. To keep away from #include dependency hell, I stuck the new
definitions in a new, isolated header.
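The consolidated definitions are tiny; they amount to the following (modulo exact spelling in the new header):

    #define PFN_ALIGN(x)    (((unsigned long)(x) + (PAGE_SIZE - 1)) & PAGE_MASK)
    #define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
    #define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)
    #define PFN_PHYS(x)     ((x) << PAGE_SHIFT)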
Of all of the implementations, sh64 is the only one that varied by a bit.
It used some masks to ensure that any sign-extension got ripped away before
the arithmetic is done. This has been posted to the sh64 maintainers and
the development list.
Compiles on x86, x86_64, ia64 and ppc64.
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The helper functions for for_each_online_pgdat()/for_each_zone() look too big
to be inlined. The speed of these helpers themselves is not very important
(the inner loops tend to do more work than they do).
This patch makes the helper functions out-of-line.
This patch defines for_each_online_pgdat() as a replacement for
for_each_pgdat().
Now, online nodes are managed by node_online_map, but for_each_pgdat() uses
pgdat_link to iterate over all nodes (pgdats). This means the management
structure for online pgdats is duplicated.
I think using node_online_map for for_each_pgdat() is simpler and saner than
pgdat_link; hence the new macro is named for_each_online_pgdat(). A following
patch will fix the callers of for_each_pgdat().
The bootmem allocator uses for_each_pgdat() before pgdat initialization. I
don't think that's sane; a following patch will fix it.
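The new macro then iterates via the out-of-lined helpers, roughly (the helper names follow the description and are otherwise assumptions):

    #define for_each_online_pgdat(pgdat)            \
            for (pgdat = first_online_pgdat();      \
                 pgdat;                             \
                 pgdat = next_online_pgdat(pgdat))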
pfn_to_page() uses the pgdat; page_to_pfn() uses the zone and is the only
user of zone_mem_map. page_to_pfn() can use the pgdat instead of the zone;
by modifying it, we can remove zone_mem_map.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>