* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched: (61 commits)
sched: refine negative nice level granularity
sched: fix update_stats_enqueue() reniced codepath
sched: round a bit better
sched: make the multiplication table more accurate
sched: optimize update_rq_clock() calls in the load-balancer
sched: optimize activate_task()
sched: clean up set_curr_task_fair()
sched: remove __update_rq_clock() call from entity_tick()
sched: move the __update_rq_clock() call to scheduler_tick()
sched debug: remove the 'u64 now' parameter from print_task()/_rq()
sched: remove the 'u64 now' local variables
sched: remove the 'u64 now' parameter from deactivate_task()
sched: remove the 'u64 now' parameter from dequeue_task()
sched: remove the 'u64 now' parameter from enqueue_task()
sched: remove the 'u64 now' parameter from dec_nr_running()
sched: remove the 'u64 now' parameter from inc_nr_running()
sched: remove the 'u64 now' parameter from dec_load()
sched: remove the 'u64 now' parameter from inc_load()
sched: remove the 'u64 now' parameter from update_curr_load()
sched: remove the 'u64 now' parameter from ->task_new()
...
lguest: avoid shared libraries mapped over guest memory
Some versions of ld.so mmap the shared libraries right over guest
memory, so compile lguest statically by default.
[ FC7 maps shared libraries very low, where the Launcher maps the
guest's physical memory. The quick fix is to link the Launcher
static; the real fix is for 2.6.24. ]
-static is a simple fix. I expect this problem will be more common than we'd
like, as different distros make different "improvements" to ld.so.
Signed-off-by: Ronald G. Minnich <rminnich@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rusty Russell [Thu, 9 Aug 2007 10:57:13 +0000 (20:57 +1000)]
lguest: Fix Malicious Guest GDT Host Crash
If a Guest makes a hypercall which sets a GDT entry to not present, we
currently set any segment registers using that GDT entry to 0.
Unfortunately, this is not sufficient: there are other ways of
altering GDT entries which will cause a fault.
The correct solution is to do what Linux does: let them set any GDT value
they want and handle the #GP when popping causes a fault. This has
the added benefit of making our Switcher slightly more robust in the
case of any other bugs which cause it to fault.
We kill the Guest if it causes a fault in the Switcher: it's the
Guest's responsibility to make sure it's not using segments when it
changes them.
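For reference, the "present" state in question is the P bit, bit 47 of
the 8-byte x86 GDT descriptor; a standalone C sketch of that encoding
(illustrative only, not lguest code):

    #include <stdint.h>
    #include <stdio.h>

    /* Bit 47 of an 8-byte GDT descriptor is the P (present) flag. */
    static int gdt_entry_present(uint64_t desc)
    {
        return (int)((desc >> 47) & 1);
    }

    int main(void)
    {
        uint64_t present = 1ULL << 47;  /* P bit set */
        uint64_t cleared = 0;           /* P bit clear: loading it faults */

        printf("present: %d\n", gdt_entry_present(present));  /* 1 */
        printf("cleared: %d\n", gdt_entry_present(cleared));  /* 0 */
        return 0;
    }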
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rusty Russell [Thu, 9 Aug 2007 10:52:35 +0000 (20:52 +1000)]
Fix non-TSC guest clocksource lockup
lguest uses a host-supplied wallclock-based clocksource when the TSC
is not reliable. As this is already in nanoseconds, I naively used a
multiplier of 1 and a shift of 0.
But update_wall_time() in its infinite wisdom decides to adjust the
clock a little (where does it think it's getting a more accurate time
from?)
It will happily tweak the multiplier... to 0, then -1.
So the "fix" is to use a shift of 22 like everyone else, and a
multiplier of 1 << 22.
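To see why the naive parameters lock up: a clocksource converts cycles
to nanoseconds as ns = (cycles * mult) >> shift, and the timekeeping
code nudges mult up and down to steer the clock. With mult = 1 and
shift = 0, a -1 nudge gives mult = 0 and the clock stops; with
mult = 1 << 22 and shift = 22 the conversion is still an identity, but
a +/-1 nudge changes the rate by only one part in ~4 million. A
standalone sketch of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    /* The standard clocksource conversion: ns = (cycles * mult) >> shift */
    static uint64_t cyc2ns(uint64_t cycles, uint32_t mult, uint32_t shift)
    {
        return (cycles * mult) >> shift;
    }

    int main(void)
    {
        uint64_t t = 1000000;   /* lguest's counter is already in ns */

        printf("mult=1, shift=0:           %llu\n",
               (unsigned long long)cyc2ns(t, 1, 0));
        printf("mult=0 (after a -1 tweak): %llu\n",
               (unsigned long long)cyc2ns(t, 0, 0));  /* clock stops */
        printf("mult=1<<22, shift=22:      %llu\n",
               (unsigned long long)cyc2ns(t, 1u << 22, 22));
        printf("mult=(1<<22)-1:            %llu\n",
               (unsigned long long)cyc2ns(t, (1u << 22) - 1, 22));
        return 0;
    }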
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Thu, 9 Aug 2007 15:10:16 +0000 (08:10 -0700)]
Revert "genirq: temporary fix for level-triggered IRQ resend"
This reverts commit 0fc4969b866671dfe39b1a9119d0fdc7ea0f63e5. It was
always meant to be temporary, but it's generating more useless noise
than anything else, and we probably should never have done it in the
generic kernel (we should have had only the people involved test it on
their own).
Ingo Molnar [Thu, 9 Aug 2007 09:16:52 +0000 (11:16 +0200)]
sched: refine negative nice level granularity
refine the granularity of negative nice level tasks: let them
reschedule more often to offset the effect of them consuming
their wait_runtime proportionately slower. (This makes nice-0
task scheduling smoother in the presence of negatively
reniced tasks.)
Ingo Molnar [Thu, 9 Aug 2007 09:16:51 +0000 (11:16 +0200)]
sched: optimize update_rq_clock() calls in the load-balancer
optimize update_rq_clock() calls in the load-balancer: update them
right after locking the runqueue(s) so that the pull functions do
not have to call it.
Ingo Molnar [Thu, 9 Aug 2007 09:16:51 +0000 (11:16 +0200)]
sched: optimize activate_task()
optimize activate_task() by removing update_rq_clock() from it.
(and add update_rq_clock() to all callsites of activate_task() that
did not have it before.)
Ingo Molnar [Thu, 9 Aug 2007 09:16:47 +0000 (11:16 +0200)]
sched: remove 'now' use from assignments
change all 'now' timestamp uses in assignments to rq->clock.
(this is an identity transformation that causes no functionality change:
every such new rq->clock use is necessarily preceded by an
update_rq_clock() call.)
Ingo Molnar [Thu, 9 Aug 2007 09:16:46 +0000 (11:16 +0200)]
sched: add [__]update_rq_clock(rq)
add the [__]update_rq_clock(rq) functions. (No change in functionality,
just reorganization to prepare for elimination of the heavy 64-bit
timestamp-passing in the scheduler.)
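A toy model of the split (an assumed shape for illustration, not the
kernel's exact code): __update_rq_clock() advances the cached clock
from a raw counter, update_rq_clock() is the public wrapper, and
callers then read rq->clock instead of passing u64 timestamps around:

    #include <stdint.h>
    #include <stdio.h>

    struct rq {
        uint64_t clock;     /* cached per-runqueue timestamp */
        uint64_t prev_raw;  /* raw counter at the last update */
    };

    static uint64_t fake_sched_clock;  /* stands in for sched_clock() */

    static void __update_rq_clock(struct rq *rq)
    {
        uint64_t now = fake_sched_clock;

        rq->clock += now - rq->prev_raw;
        rq->prev_raw = now;
    }

    static void update_rq_clock(struct rq *rq)
    {
        __update_rq_clock(rq);
    }

    int main(void)
    {
        struct rq rq = { 0, 0 };

        fake_sched_clock = 100;
        update_rq_clock(&rq);
        printf("clock=%llu\n", (unsigned long long)rq.clock);  /* 100 */

        fake_sched_clock = 250;
        update_rq_clock(&rq);
        printf("clock=%llu\n", (unsigned long long)rq.clock);  /* 250 */
        return 0;
    }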
Peter Williams [Thu, 9 Aug 2007 09:16:46 +0000 (11:16 +0200)]
sched: fix bug in balance_tasks()
There are two problems with balance_tasks() and how it is used:
1. The variables best_prio and best_prio_seen (inherited from the old
move_tasks()) were only required to handle problems caused by the
active/expired arrays, the order in which they were processed and the
possibility that the task with the highest priority could be on either.
These issues are no longer present and the extra overhead associated
with their use is unnecessary (and possibly wrong).
2. In the absence of CONFIG_FAIR_GROUP_SCHED being set, the same
this_best_prio variable needs to be used by all scheduling classes or
there is a risk of moving too much load. E.g. if the highest priority
task on this_rq at the beginning is a fairly low priority task and the
rt class migrates a task (during its turn) then that moved task becomes
the new highest priority task on this_rq. But when the sched_fair class
initializes its copy of this_best_prio it will get the priority of the
original highest priority task because, due to the run queue locks being
held, the reschedule triggered by pull_task() will not have taken place.
This could result in inappropriate overriding of skip_for_load and
excessive load being moved.
The attached patch addresses these problems by deleting all reference to
best_prio and best_prio_seen and making this_best_prio a reference
parameter to the various functions involved.
load_balance_fair() has also been modified so that this_best_prio is
only reset (in the loop) if CONFIG_FAIR_GROUP_SCHED is set. This should
preserve the effect of helping spread groups' higher priority tasks
around the available CPUs while improving system performance when
CONFIG_FAIR_GROUP_SCHED isn't set.
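A minimal toy of the by-reference idea (illustrative names and
priorities, not the scheduler's signatures): all classes update one
shared value, so the fair class sees what the rt class already moved:

    #include <stdio.h>

    /* Lower value = higher priority, as in the kernel. */
    static void class_move(const char *class, int moved_prio,
                           int *this_best_prio)
    {
        if (moved_prio < *this_best_prio)
            *this_best_prio = moved_prio;
        printf("%s: this_best_prio=%d\n", class, *this_best_prio);
    }

    int main(void)
    {
        int this_best_prio = 140;  /* start at "worst" */

        class_move("rt", 10, &this_best_prio);
        class_move("fair", 120, &this_best_prio);  /* sees rt's 10 */
        return 0;
    }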
Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Josh Triplett [Thu, 9 Aug 2007 09:16:46 +0000 (11:16 +0200)]
sched: mark print_cfs_stats static
sched_fair.c defines print_cfs_stats, and sched_debug.c uses it, but sched.c
includes both sched_fair.c and sched_debug.c, so all the references to
print_cfs_stats occur in the same compilation unit. Thus, mark
print_cfs_stats static.
Eliminates a sparse warning:
warning: symbol 'print_cfs_stats' was not declared. Should it be static?
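A self-contained illustration of why static works here (conceptual
layout; the comments mark what sched.c effectively pastes together):

    #include <stdio.h>

    /* --- as if from sched_fair.c --- */
    static void print_cfs_stats(void)  /* static: same compilation unit */
    {
        printf("cfs stats\n");
    }

    /* --- as if from sched_debug.c --- */
    static void sched_debug_show(void)
    {
        print_cfs_stats();  /* still resolves: one translation unit */
    }

    /* --- as if from sched.c, which #includes both --- */
    int main(void)
    {
        sched_debug_show();
        return 0;
    }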
Ulrich Drepper [Thu, 9 Aug 2007 09:16:46 +0000 (11:16 +0200)]
sched: clean up sched_getaffinity()
Here's another tiny cleanup. The generated code is not affected (gcc is
smart enough), but for people looking over the code it is just irritating
to have the extra conditional.
Peter Williams [Thu, 9 Aug 2007 09:16:46 +0000 (11:16 +0200)]
sched: simplify move_tasks()
The move_tasks() function is currently multiplexed with two distinct
capabilities:
1. attempt to move a specified amount of weighted load from one run
queue to another; and
2. attempt to move a specified number of tasks from one run queue to
another.
The first of these capabilities is used in two places, load_balance()
and load_balance_idle(), and in both of these cases the return value of
move_tasks() is used purely to decide if tasks/load were moved and no
notice of the actual number of tasks moved is taken.
The second capability is used in exactly one place,
active_load_balance(), to attempt to move exactly one task and, as
before, the return value is only used as an indicator of success or failure.
This multiplexing of move_tasks() was introduced, by me, as part of the
smpnice patches and was motivated by the fact that the alternative, one
function to move specified load and one to move a single task, would
have led to two functions of roughly the same complexity as the old
move_tasks() (or the new balance_tasks()). However, the new modular
design of the new CFS scheduler allows a simpler solution to be adopted
and this patch addresses that solution by:
1. adding a new function, move_one_task(), to be used by
active_load_balance(); and
2. making move_tasks() a single purpose function that tries to move a
specified weighted load and returns 1 for success and 0 for failure.
One of the consequences of these changes is that neither move_one_task()
nor the new move_tasks() cares how many tasks sched_class.load_balance()
moves and this enables its interface to be simplified by returning the
amount of load moved as its result and removing the load_moved pointer
from the argument list. This helps simplify the new move_tasks() and
slightly reduces the amount of work done in each of
sched_class.load_balance()'s implementations.
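A toy model of the resulting interface (illustrative, with made-up task
weights; not the kernel's actual signatures): the class balancer returns
the load it moved, and move_tasks() collapses that to success/failure:

    #include <stdio.h>

    /* Pretend each movable task weighs 10 units of load. */
    static unsigned long class_load_balance(unsigned long max_load_move)
    {
        unsigned long moved = 0;

        while (moved + 10 <= max_load_move)
            moved += 10;
        return moved;  /* amount moved: no load_moved pointer */
    }

    static int move_tasks(unsigned long max_load_move)
    {
        return class_load_balance(max_load_move) > 0;
    }

    int main(void)
    {
        printf("%d\n", move_tasks(35));  /* 1: moved 30 of 35 */
        printf("%d\n", move_tasks(5));   /* 0: nothing fits   */
        return 0;
    }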
Further simplification, e.g. changes to balance_tasks(), are possible
but (slightly) complicated by the special needs of load_balance_fair()
so I've left them to a later patch (if this one gets accepted).
NB: Since move_tasks() gets called with two run queue locks held, even
small reductions in overhead are worthwhile.
[ mingo@elte.hu ]
this change also reduces code size nicely:
text data bss dec hex filename
39216 3618 24 42858 a76a sched.o.before
39173 3618 24 42815 a73f sched.o.after
Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar [Thu, 9 Aug 2007 09:16:45 +0000 (11:16 +0200)]
sched: reorder update_cpu_load(rq) with the ->task_tick() call
Peter Williams suggested flipping the order of update_cpu_load(rq) and
the ->task_tick() call. This is a NOP for the current scheduler (the
two functions are independent of each other), but ->task_tick() might
create some state for update_cpu_load() in the future (or in PlugSched).
Al Viro [Tue, 7 Aug 2007 23:01:46 +0000 (00:01 +0100)]
fix oops in __audit_signal_info()
The check for audit_signals is misplaced and the check for
audit_dummy_context() is missing; as a result, if we send a signal to
auditd from a task with a NULL ->audit_context while audit_signals != 0,
we end up with an oops.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> >> Looks like memset() is zeroing wrong nr of bytes.
> >
> > Good catch, however, I think we can just remove this memset altogether
> > since the memory gets allocated via kzalloc.
>
> Correct, that memset() is superfluous.
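The pattern in question, sketched with a hypothetical struct (kzalloc()
is the real API; struct foo and alloc_foo() are illustrative):

    #include <linux/slab.h>

    struct foo {            /* hypothetical example structure */
        int a;
        char buf[32];
    };

    static struct foo *alloc_foo(void)
    {
        struct foo *p = kzalloc(sizeof(*p), GFP_KERNEL);

        if (!p)
            return NULL;
        /* memset(p, 0, sizeof(*p)) would be redundant here:
         * kzalloc() already returns zeroed memory. */
        return p;
    }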
Paul Mundt [Wed, 1 Aug 2007 06:48:55 +0000 (15:48 +0900)]
net: smc91x: Build fixes for general sh boards.
SH boards in general only wire this up in 8 or 16-bit mode, and
as we never had the wrappers for 32-bit mode defined, SMC_CAN_USE_32BIT
caused build failures for the non-Solution Engine boards. This gets it
building again.
Also kill off the straggling set_irq_type() definition; this is leftover
cruft that was missed when the rest of it switched to IRQ flags.
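For context, smc91x.h selects the bus width per platform with macros
like these (the macro names are the driver's; the values shown are what
an SH board without 32-bit wrappers would use):

    #define SMC_CAN_USE_8BIT    1
    #define SMC_CAN_USE_16BIT   1
    #define SMC_CAN_USE_32BIT   0   /* no 32-bit wrappers on these boards */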
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Rusty Russell [Mon, 6 Aug 2007 00:48:18 +0000 (10:48 +1000)]
Enable lguest drivers in Kconfig
Lguest drivers need to default to "Y", otherwise they're never selected
for new builds. (We don't bother prompting, because they're less than
4k combined, and are implied by selecting lguest support.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Avi Kivity [Sun, 5 Aug 2007 07:16:11 +0000 (10:16 +0300)]
KVM: x86 emulator: fix debug reg mov instructions
More fallout from the writeback fixes: debug register transfer
instructions do their own writeback and thus need to disable the general
writeback mechanism.
This fixes oopses and some guest failures on AMD machines (the Intel
variant decodes the instruction in hardware and thus does not need
emulation).
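The shape of the fix is roughly the following (a sketch only: the no_wb
flag name is an assumption about that era's x86_emulate.c, not quoted
from the patch):

    case 0x21: /* mov from dr */
        /* ...emulate the debug-register transfer directly... */
        no_wb = 1;  /* assumed flag: this insn did its own writeback,
                     * so skip the generic writeback pass */
        break;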
Cc: Alistair John Strachan <alistair@devzero.co.uk>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Tue, 7 Aug 2007 00:52:56 +0000 (17:52 -0700)]
Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
[NETFILTER]: Add xt_statistic.h to the header list for usermode programs
[BNX2]: Fix suspend/resume problem.
[TG3]: Fix suspend/resume problem.
Dave Airlie [Mon, 6 Aug 2007 23:09:51 +0000 (09:09 +1000)]
drm/i915: Fix i965 secured batchbuffer usage
The 965G and above chipsets moved the batch buffer non-secure bits to
another place. This means that previous DRMs allowed insecure batchbuffers
to be submitted to the hardware by non-privileged users who are logged
into X and have access to direct rendering.
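The generation-dependent part looks roughly like this (a sketch:
IS_I965G() and the MI_BATCH_NON_SECURE* macros are the driver's real
names; the surrounding ring emission is abbreviated):

    if (IS_I965G(dev)) {
        /* 965+: the non-secure bit moved into the command dword */
        OUT_RING(MI_BATCH_BUFFER_START | (2 << 6) |
                 MI_BATCH_NON_SECURE_I965);
        OUT_RING(batch->start);
    } else {
        OUT_RING(MI_BATCH_BUFFER_START | (2 << 6));
        /* pre-965: the non-secure bit lives in the address dword */
        OUT_RING(batch->start | MI_BATCH_NON_SECURE);
    }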
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Francois Romieu [Wed, 1 Aug 2007 22:00:48 +0000 (00:00 +0200)]
r8169: avoid needless NAPI poll scheduling
Theory: though needless, it should not have hurt.
Practice: it does not play nice with DEBUG_SHIRQ + LOCKDEP + UP
(see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=242572).
The patch makes sense in itself, but I should dig into why it has an
effect on #242572 (assuming that NAPI does not change in the near future).
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Cc: Edward Hsu <edward_hsu@realtek.com.tw>
Michael Buesch [Tue, 31 Jul 2007 18:41:04 +0000 (20:41 +0200)]
[PATCH] softmac: Fix deadlock of wx_set_essid with assoc work
The essid wireless extension does deadlock against the assoc mutex,
as we don't unlock the assoc mutex when flushing the workqueue, which
also holds the lock.
Signed-off-by: Michael Buesch <mb@bu3sch.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
While filling the control set, the driver tests for a PSPOLL frame.
But it tested only the subtype of the packet; the full type needs
to be tested to identify those packets reliably.
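A sketch of the corrected test (the IEEE80211_* masks are the stack's
real constants; the helper function is illustrative): both the type and
subtype fields must match, since other frame types can carry the same
subtype bits:

    #include <net/ieee80211.h>

    static int is_pspoll(__le16 frame_control)
    {
        u16 fc = le16_to_cpu(frame_control);

        /* match type AND subtype, not the subtype alone */
        return (fc & (IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE)) ==
               (IEEE80211_FTYPE_CTL | IEEE80211_STYPE_PSPOLL);
    }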
[dsd@gentoo.org: backport to mainline]
Signed-off-by: Ulrich Kunitz <kune@deine-taler.de>
Signed-off-by: Daniel Drake <dsd@gentoo.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Michael Wu [Mon, 16 Jul 2007 00:09:55 +0000 (17:09 -0700)]
[PATCH] rtl8187: ensure priv->hwaddr is always valid
conf->mac_addr is not guaranteed to be set. This ensures priv->hwaddr is
always set to a valid mac address. Thanks to Johannes Berg
<johannes@sipsolutions.net> for finding this problem.
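The fallback pattern, sketched (illustrative, not the exact driver
change): honour conf->mac_addr only when the stack provided one,
otherwise keep the default already in priv->hwaddr:

    if (conf->mac_addr)
        memcpy(priv->hwaddr, conf->mac_addr, ETH_ALEN);
    /* else: priv->hwaddr keeps the default read from the EEPROM */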
Signed-off-by: Michael Wu <flamingice@sourmilk.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Russell King [Mon, 6 Aug 2007 15:10:54 +0000 (16:10 +0100)]
[ARM] pata_icside: fix the FIXMEs
Alan Cox suggested that the solution to the FIXMEs in pata_icside is
to use a private postreset method to detect the lack of devices on a
port, and in such a case, disable the interrupt for the port.
This patch implements such a method, and removes the hard coded
disable of port 0. Tested as working.
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[CRYPTO] api: fix writing into unallocated memory in setkey_aligned
setkey_unaligned(), committed in ca7c39385ce1a7b44894a4b225a4608624e90730,
overwrites unallocated memory in the following memset() because
I used the wrong buffer length.
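The buggy shape, per the commit message (a sketch with assumed variable
names): the allocation is absize bytes, but the aligned copy starts
partway into it, so wiping absize bytes from the aligned pointer runs
off the end of the allocation; only keylen bytes were ever written there:

    buffer = kmalloc(absize, GFP_ATOMIC);
    if (!buffer)
        return -ENOMEM;
    alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
    memcpy(alignbuffer, key, keylen);
    ret = setkey(tfm, alignbuffer, keylen);
    memset(alignbuffer, 0, keylen); /* was absize: wrote past the buffer */
    kfree(buffer);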
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>