err.no Git - linux-2.6/log
16 years agosched, futex: detach sched.h and futex.h
Alexey Dobriyan [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
sched, futex: detach sched.h and futex.h

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: fix: don't take a mutex from interrupt context
Peter Zijlstra [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
sched: fix: don't take a mutex from interrupt context

print_cfs_stats is callable from interrupt context (sysrq), hence it should
not take mutexes. Change it to use RCU since the task group data is RCU
freed anyway.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: print backtrace of running tasks too
Nick Piggin [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
sched: print backtrace of running tasks too

The attached patch is something really simple that can sometimes help
in getting more info out of a hung system.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoprintk: use ktime_get()
Ingo Molnar [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
printk: use ktime_get()

printk timestamps: use ktime_get().

Some platforms have a functioning clocksource only after they are done
with early bootup, so delay this until we are out of the SYSTEM_BOOTING
state.

It's also inherently safe now, as any bugs in this area will be
caught by the printk recursion checks.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosoftlockup: fix signedness
Ingo Molnar [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
softlockup: fix signedness

fix softlockup tunables signedness.

mark tunables read-mostly.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: latencytop support
Arjan van de Ven [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
sched: latencytop support

LatencyTOP kernel infrastructure; it measures latencies in the
scheduler and tracks them system-wide and per-process.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: fix goto retry in pick_next_task_rt()
Dmitry Adamushko [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
sched: fix goto retry in pick_next_task_rt()

looking at it one more time:

(1) it looks to me that there is no need to call
sched_rt_ratio_exceeded() from pick_next_rt_entity()

- [ for CONFIG_FAIR_GROUP_SCHED ] queues with rt_rq->rt_throttled are
not within this 'tree-like hierarchy' (or whatever we should call it
:-)

- there is also no need to re-check 'rt_rq->rt_time > ratio' at this
point, as 'rt_rq->rt_time' couldn't have been increased since the last
call to update_curr_rt() (which obviously calls
sched_rt_ratio_exceeded()). Well, it might be that 'ratio' for this
rt_rq has been re-configured (and the period over which this rt_rq was
active has not yet finished)... but I don't think we should really take
this into account.

(2) now pick_next_rt_entity() must never return NULL, so let's change
pick_next_task_rt() accordingly.

Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agotimers: don't #error on higher HZ values
Pavel Machek [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
timers: don't #error on higher HZ values

For some crazy reason (trying to work around hw problem in i810) I wanted
to use HZ around 4000.

Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: monitor clock underflows in /proc/sched_debug
Guillaume Chazarain [Fri, 25 Jan 2008 20:08:34 +0000 (21:08 +0100)]
sched: monitor clock underflows in /proc/sched_debug

We monitor clock overflows, let's also monitor clock underflows.

Signed-off-by: Guillaume Chazarain <guichaz@yahoo.fr>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: fix rq->clock warps on frequency changes
Guillaume Chazarain [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
sched: fix rq->clock warps on frequency changes

Fix 2bacec8c318ca0418c0ee9ac662ee44207765dd4
(sched: touch softlockup watchdog after idling) that reintroduced warps
on frequency changes. touch_softlockup_watchdog() calls __update_rq_clock
that checks rq->clock for warps, so call it after adjusting rq->clock.

Signed-off-by: Guillaume Chazarain <guichaz@yahoo.fr>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: fix, always create kernel threads with normal priority
Michal Schmidt [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
sched: fix, always create kernel threads with normal priority

Ensure that the kernel threads are created with the usual nice level
and affinity even if kthreadd's properties were changed from the
default by root.

Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agodebug: clean up kernel/profile.c
Paolo Ciarrocchi [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
debug: clean up kernel/profile.c

 Before:
 total: 25 errors, 13 warnings, 602 lines checked

 After:
 total: 0 errors, 2 warnings, 601 lines checked

No code changed:

kernel/profile.o:
   text    data     bss     dec     hex filename
   3048     236      24    3308     cec profile.o.before
   3048     236      24    3308     cec profile.o.after
 md5:
   2501d64748a4d350dffb11203e2a5182  profile.o.before.asm
   2501d64748a4d350dffb11203e2a5182  profile.o.after.asm

Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove the !PREEMPT_BKL code
Ingo Molnar [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
sched: remove the !PREEMPT_BKL code

remove the !PREEMPT_BKL code.

this removes 160 lines of legacy code.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: make PREEMPT_BKL the default
Ingo Molnar [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
sched: make PREEMPT_BKL the default

make PREEMPT_BKL the default.

precursor to removal of the !PREEMPT_BKL code.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agodebug: track and print last unloaded module in the oops trace
Arjan van de Ven [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
debug: track and print last unloaded module in the oops trace

Based on a suggestion from Andi:

 In various cases, the unload of a module may leave some bad state around
 that causes a kernel crash AFTER a module is unloaded; and it's then hard
 to find which module caused that.

This patch tracks the last unloaded module, and prints this as part of the
module list in the oops trace.

Right now, only the last unloaded module is tracked; I expect that this is
enough for the vast majority of cases where this information matters; if it
turns out that tracking more is important, we can always extend it later.
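
A minimal sketch of the idea (identifiers are illustrative, not necessarily
the exact ones in the patch):

  /* sketch only -- a static buffer updated on every successful unload */
  static char last_unloaded_module[MODULE_NAME_LEN + 1];

  /* in the module unload path: */
  strlcpy(last_unloaded_module, mod->name, sizeof(last_unloaded_module));

  /* and when the module list is printed in an oops: */
  printk(" [last unloaded: %s]", last_unloaded_module);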

[ mingo@elte.hu: build fix ]

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agodebug: show being-loaded/being-unloaded indicator for modules
Arjan van de Ven [Fri, 25 Jan 2008 20:08:33 +0000 (21:08 +0100)]
debug: show being-loaded/being-unloaded indicator for modules

It's rather common that an oops/WARN_ON/BUG happens during the load or
unload of a module. Unfortunately, it's not always easy to see directly
which module is being loaded/unloaded from the oops itself. Worse,
it's not even always possible to ask the bug reporter, since there
are so many components (udev etc) that auto-load modules that there's
a good chance that even the reporter doesn't know which module this is.

This patch extends the existing "show if it's tainting" print code,
which is used as part of printing the modules in the oops/BUG/WARN_ON
to include a "+" for "being loaded" and a "-" for "being unloaded".

As a result of this extension, the "taint_flags()" function is renamed to
"module_flags()" (and takes a module struct as its argument rather than a
taint-flags int).
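
Roughly, the renamed helper could look like this simplified sketch of the
described behaviour (not the patch itself; the taint handling is reduced to
a single letter for brevity):

  static char *module_flags(struct module *mod, char *buf)
  {
          int bx = 0;

          if (mod->taints)
                  buf[bx++] = 'P';                /* taint letters, as before */
          if (mod->state == MODULE_STATE_GOING)
                  buf[bx++] = '-';                /* being unloaded */
          if (mod->state == MODULE_STATE_COMING)
                  buf[bx++] = '+';                /* being loaded */
          buf[bx] = '\0';
          return buf;
  }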

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: rt-watchdog: fix .rlim_max = RLIM_INFINITY
Peter Zijlstra [Fri, 25 Jan 2008 20:08:32 +0000 (21:08 +0100)]
sched: rt-watchdog: fix .rlim_max = RLIM_INFINITY

Remove the curious logic to set it_sched_expires in the future. It is useless
because rt.timeout wouldn't be incremented anyway.

Explicitly check for RLIM_INFINITY, as a test program that had a 1s soft limit
and an infinite hard limit would SIGKILL at 1s. This is because RLIM_INFINITY+d-1
is d-2.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Michal Schmidt <mschmidt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: rt-group: reduce rescheduling
Peter Zijlstra [Fri, 25 Jan 2008 20:08:32 +0000 (21:08 +0100)]
sched: rt-group: reduce rescheduling

Only reschedule if the new group has a higher prio task.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agohrtimer: unlock hrtimer_wakeup
Peter Zijlstra [Fri, 25 Jan 2008 20:08:32 +0000 (21:08 +0100)]
hrtimer: unlock hrtimer_wakeup

hrtimer_wakeup creates a

  base->lock
    rq->lock

lock dependency. Avoid this by switching to HRTIMER_CB_IRQSAFE_NO_SOFTIRQ
which doesn't hold base->lock.

This fully untangles hrtimer locks from the scheduler locks, and allows
hrtimer usage in the scheduler proper.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agohrtimer: fixup the HRTIMER_CB_IRQSAFE_NO_SOFTIRQ fallback
Peter Zijlstra [Fri, 25 Jan 2008 20:08:31 +0000 (21:08 +0100)]
hrtimer: fixup the HRTIMER_CB_IRQSAFE_NO_SOFTIRQ fallback

Currently all highres=off timers are run from softirq context, but
HRTIMER_CB_IRQSAFE_NO_SOFTIRQ timers expect to run from irq context.

Fix this up by splitting it similar to the highres=on case.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agohrtimer: clean up cpu->base locking tricks
Peter Zijlstra [Fri, 25 Jan 2008 20:08:31 +0000 (21:08 +0100)]
hrtimer: clean up cpu->base locking tricks

In order to more easily allow for the scheduler to use timers, clean up
the locking a bit.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: rt throttling vs no_hz
Peter Zijlstra [Fri, 25 Jan 2008 20:08:31 +0000 (21:08 +0100)]
sched: rt throttling vs no_hz

We need to teach no_hz about rt throttling because it is tick-driven.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: pull_rt_task() cleanup
Mike Galbraith [Fri, 25 Jan 2008 20:08:30 +0000 (21:08 +0100)]
sched: pull_rt_task() cleanup

"goto out" is an odd way to spell "skip".

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: rt group scheduling
Peter Zijlstra [Fri, 25 Jan 2008 20:08:30 +0000 (21:08 +0100)]
sched: rt group scheduling

Extend group scheduling to also cover the realtime classes. It uses the time
limiting introduced by the previous patch to allow multiple realtime groups.

The hard time limit is required to keep behaviour deterministic.

The algorithms used make the realtime scheduler O(tg), i.e. scaling linearly
with the number of task groups. This is the worst-case behaviour I can't seem
to get rid of; the average case of the algorithms can be improved, but I
focused on correctness and the worst case.

[ akpm@linux-foundation.org: move side-effects out of BUG_ON(). ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: rt time limit
Peter Zijlstra [Fri, 25 Jan 2008 20:08:29 +0000 (21:08 +0100)]
sched: rt time limit

Very simple time limit on the realtime scheduling classes.
Allow the rq's realtime class to consume sched_rt_ratio of every
sched_rt_period slice. If the class exceeds this quota the fair class
will preempt the realtime class.
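
Conceptually the check boils down to something like the following sketch
(field and variable names are approximate; 'quota' stands for sched_rt_ratio
of one sched_rt_period slice):

  /* sketch: called on each rt runtime update */
  if (rt_rq->rt_time > quota)
          rt_rq->rt_throttled = 1;        /* the fair class now preempts us */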

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: high-res preemption tick
Peter Zijlstra [Fri, 25 Jan 2008 20:08:29 +0000 (21:08 +0100)]
sched: high-res preemption tick

Use HR-timers (when available) to deliver an accurate preemption tick.

The regular scheduler tick that runs at 1/HZ can be too coarse when nice
levels are used. The fairness system will still keep the cpu utilisation 'fair'
by then delaying the task that got an excessive amount of CPU time, but we try
to minimize this by delivering preemption points spot-on.

The average frequency of this extra interrupt is sched_latency / nr_latency.
This need not be higher than 1/HZ; it's just that the distribution within the
sched_latency period is important.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: do not do cond_resched() when CONFIG_PREEMPT
Herbert Xu [Fri, 25 Jan 2008 20:08:28 +0000 (21:08 +0100)]
sched: do not do cond_resched() when CONFIG_PREEMPT

Why do we even have cond_resched when real preemption
is on? It seems to be a waste of space and time.

remove cond_resched with CONFIG_PREEMPT on.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: documentation, whitespace fixes
Ingo Molnar [Fri, 25 Jan 2008 20:08:28 +0000 (21:08 +0100)]
sched: documentation, whitespace fixes

whitespace fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: SCHED_FIFO/SCHED_RR watchdog timer
Peter Zijlstra [Fri, 25 Jan 2008 20:08:27 +0000 (21:08 +0100)]
sched: SCHED_FIFO/SCHED_RR watchdog timer

Introduce a new rlimit that allows the user to set a runtime timeout on a
real-time task's slice. Once this limit is exceeded the task will receive
SIGXCPU.

So it measures runtime since the last sleep.
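
From userspace the new limit is set like any other rlimit; a small example
(assuming the resource ends up named RLIMIT_RTTIME and is specified in
microseconds):

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
          struct rlimit rl = {
                  .rlim_cur = 500000,     /* SIGXCPU after 0.5s of rt runtime */
                  .rlim_max = 1000000,    /* hard limit */
          };

          if (setrlimit(RLIMIT_RTTIME, &rl))
                  perror("setrlimit");
          return 0;
  }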

Input and ideas by Thomas Gleixner and Lennart Poettering.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Lennart Poettering <mzxreary@0pointer.de>
CC: Michael Kerrisk <mtk.manpages@googlemail.com>
CC: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: sched_rt_entity
Peter Zijlstra [Fri, 25 Jan 2008 20:08:27 +0000 (21:08 +0100)]
sched: sched_rt_entity

Move the task_struct members specific to rt scheduling together.
A future optimization could be to put sched_entity and sched_rt_entity
into a union.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agouids: merge multiple error paths in alloc_uid() into one
Pavel Emelyanov [Fri, 25 Jan 2008 20:08:26 +0000 (21:08 +0100)]
uids: merge multiple error paths in alloc_uid() into one

There are already 4 error paths in alloc_uid() that do incremental rollbacks.
I think it's time to merge them.  This costs us 8 lines of code :)
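
The result is the classic single unwinding error path; an illustrative sketch
(the step/undo helpers are made up for the example, not the exact alloc_uid()
internals):

  static struct user_struct *alloc_uid_sketch(uid_t uid)
  {
          struct user_struct *up = kmem_cache_zalloc(uid_cachep, GFP_KERNEL);

          if (!up)
                  goto out;
          if (do_step_one(up))
                  goto out_free;
          if (do_step_two(up))
                  goto out_undo_one;
          return up;

  out_undo_one:
          undo_step_one(up);
  out_free:
          kmem_cache_free(uid_cachep, up);
  out:
          return NULL;
  }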

Maybe it would be better to merge this patch with the previous one, but I
remember that some time ago I sent a similar patch (fixing the error path and
cleaning it), but I was told to make two patches in such cases.

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: dynamically update the root-domain span/online maps
Gregory Haskins [Fri, 25 Jan 2008 20:08:26 +0000 (21:08 +0100)]
sched: dynamically update the root-domain span/online maps

The baseline code statically builds the span maps when the domain is formed.
Previous attempts at dynamically updating the maps caused a suspend-to-ram
regression, which should now be fixed.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoPreempt-RCU: update RCU Documentation.
Paul E. McKenney [Fri, 25 Jan 2008 20:08:25 +0000 (21:08 +0100)]
Preempt-RCU: update RCU Documentation.

This patch updates the RCU documentation to reflect preemptible RCU as
well as recent publications.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoPreempt-RCU: CPU Hotplug handling
Paul E. McKenney [Fri, 25 Jan 2008 20:08:25 +0000 (21:08 +0100)]
Preempt-RCU: CPU Hotplug handling

This patch allows preemptible RCU to tolerate CPU-hotplug operations.
It accomplishes this by maintaining a local copy of a map of online
CPUs, which it accesses under its own lock.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoPreempt-RCU: implementation
Paul E. McKenney [Fri, 25 Jan 2008 20:08:24 +0000 (21:08 +0100)]
Preempt-RCU: implementation

This patch implements a new version of RCU which allows its read-side
critical sections to be preempted. It uses a set of counter pairs
to keep track of the read-side critical sections and flips them
when all tasks exit their read-side critical sections. The details
of this implementation can be found in this paper -

http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf

and the article-

http://lwn.net/Articles/253651/

This patch was developed as a part of the -rt kernel development and
meant to provide better latencies when read-side critical sections of
RCU don't disable preemption.  As a consequence of keeping track of RCU
readers, the readers have a slight overhead (optimizations in the paper).
This implementation co-exists with the "classic" RCU implementation
and can be selected at compile time.

Also includes RCU tracing summarized in debugfs.

[ akpm@linux-foundation.org: build fixes on non-preempt architectures ]

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoPreempt-RCU: fix rcu_barrier for preemptive environment.
Paul E. McKenney [Fri, 25 Jan 2008 20:08:24 +0000 (21:08 +0100)]
Preempt-RCU: fix rcu_barrier for preemptive environment.

Fix rcu_barrier() to work properly in a preemptive kernel environment.
Also, the ordering of callbacks must be preserved while moving
callbacks to another CPU during CPU hotplug.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoPreempt-RCU: reorganize RCU code into rcuclassic.c and rcupdate.c
Paul E. McKenney [Fri, 25 Jan 2008 20:08:24 +0000 (21:08 +0100)]
Preempt-RCU: reorganize RCU code into rcuclassic.c and rcupdate.c

This patch re-organizes the RCU code to enable multiple implementations
of RCU. Users of RCU continue to include rcupdate.h and the
RCU interfaces remain the same. This is in preparation for
subsequently merging the preemptible RCU implementation.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agoPreempt-RCU: Use softirq instead of tasklets for RCU
Dipankar Sarma [Fri, 25 Jan 2008 20:08:23 +0000 (21:08 +0100)]
Preempt-RCU: Use softirq instead of tasklets for RCU

This patch makes RCU use softirq instead of tasklets.

It also adds a memory barrier after raising the softirq
in order to ensure that the cpu sees the most recently updated
value of rcu->cur while processing callbacks.
The discussion of the related theoretical race pointed out
by James Huang can be found here --> http://lkml.org/lkml/2007/11/20/603

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove some old cpuset logic
Gregory Haskins [Fri, 25 Jan 2008 20:08:23 +0000 (21:08 +0100)]
sched: remove some old cpuset logic

We had support for overlapping cpuset based rto logic in early
prototypes that is no longer used, so remove it.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance, only adjust overload state when changing
Gregory Haskins [Fri, 25 Jan 2008 20:08:23 +0000 (21:08 +0100)]
sched: RT-balance, only adjust overload state when changing

The overload set/clears were originally idempotent when this logic was first
implemented.  But that is no longer true due to the addition of the atomic
counter and this logic was never updated to work properly with that change.
So only adjust the overload state if it is actually changing to avoid
getting out of sync.
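
In other words, the update becomes conditional on an actual transition,
roughly as in this sketch (the 'should_be_overloaded' predicate and field
names are illustrative, not the literal patch):

  if (should_be_overloaded && !rq->rt.overloaded) {
          rt_set_overload(rq);            /* bumps the atomic counter */
          rq->rt.overloaded = 1;
  } else if (!should_be_overloaded && rq->rt.overloaded) {
          rt_clear_overload(rq);
          rq->rt.overloaded = 0;
  }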

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance, add new methods to sched_class
Steven Rostedt [Fri, 25 Jan 2008 20:08:22 +0000 (21:08 +0100)]
sched: RT-balance, add new methods to sched_class

Dmitry Adamushko found that the current implementation of the RT
balancing code left out changes to the sched_setscheduler and
rt_mutex_setprio.

This patch addresses this issue by adding methods to the scheduling classes
to handle being switched out of (switched_from) and being switched into
(switched_to) a sched_class. A method for priority changes (prio_changed)
is also added.

This patch also removes some duplicate logic between rt_mutex_setprio and
sched_setscheduler.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance, replace hooks with pre/post schedule and wakeup methods
Steven Rostedt [Fri, 25 Jan 2008 20:08:22 +0000 (21:08 +0100)]
sched: RT-balance, replace hooks with pre/post schedule and wakeup methods

To make the main sched.c code more agnostic to the scheduling classes,
the RT-class-specific balancing hooks in the schedule code are replaced
with pre_schedule, post_schedule and task_wake_up methods. These methods
may be used by any of the classes but currently, only the sched_rt class
implements them.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove do_div() from __sched_slice()
Peter Zijlstra [Fri, 25 Jan 2008 20:08:21 +0000 (21:08 +0100)]
sched: remove do_div() from __sched_slice()

Yanmin Zhang noticed a nice optimization:

  p = l * nr / nl, nl = l/g -> p = g * nr

which eliminates a do_div() from __sched_period().
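
Spelled out (using the commit's own symbols): since nl = l/g, the slice
p = l * nr / nl = l * nr / (l/g) = g * nr, so the division by nl is replaced
with a plain multiplication by g.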

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: get rid of 'new_cpu' in try_to_wake_up()
Dmitry Adamushko [Fri, 25 Jan 2008 20:08:21 +0000 (21:08 +0100)]
sched: get rid of 'new_cpu' in try_to_wake_up()

Clean-up try_to_wake_up().

Get rid of the 'new_cpu' variable in try_to_wake_up() [ that is, one
#ifdef section less ].  Also remove a few redundant blank lines.

Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: no need for 'affine wakeup' balancing
Dmitry Adamushko [Fri, 25 Jan 2008 20:08:21 +0000 (21:08 +0100)]
sched: no need for 'affine wakeup' balancing

No need to do a check for 'affine wakeup and passive balancing possibilities'
in select_task_rq_fair() when task_cpu(p) == this_cpu.

I guess, this part got missed upon introduction of per-sched_class
select_task_rq() in try_to_wake_up().

Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: whitespace cleanups in topology.h
Ingo Molnar [Fri, 25 Jan 2008 20:08:20 +0000 (21:08 +0100)]
sched: whitespace cleanups in topology.h

whitespace cleanups in topology.h.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: reactivate fork balancing
Ingo Molnar [Fri, 25 Jan 2008 20:08:20 +0000 (21:08 +0100)]
sched: reactivate fork balancing

reactivate fork balancing.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: add credits for RT balancing improvements
Ingo Molnar [Fri, 25 Jan 2008 20:08:19 +0000 (21:08 +0100)]
sched: add credits for RT balancing improvements

add credits for RT balancing improvements.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: style cleanup, #2
Ingo Molnar [Fri, 25 Jan 2008 20:08:19 +0000 (21:08 +0100)]
sched: style cleanup, #2

style cleanup of various changes that were done recently.

no code changed:

      text    data     bss     dec     hex filename
     26399    2578      48   29025    7161 sched.o.before
     26399    2578      48   29025    7161 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove unused JIFFIES_TO_NS() macro
Ingo Molnar [Fri, 25 Jan 2008 20:08:19 +0000 (21:08 +0100)]
sched: remove unused JIFFIES_TO_NS() macro

remove unused JIFFIES_TO_NS() macro.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: fix sched_rt.c:join/leave_domain
Ingo Molnar [Fri, 25 Jan 2008 20:08:18 +0000 (21:08 +0100)]
sched: fix sched_rt.c:join/leave_domain

fix build bug in sched_rt.c:join/leave_domain and make them
included only on SMP builds.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: only balance our RT tasks within our domain
Gregory Haskins [Fri, 25 Jan 2008 20:08:18 +0000 (21:08 +0100)]
sched: only balance our RT tasks within our domain

We move the rt-overload data, as the first global, to per-domain
reclassification.  This limits the scope of overload-related cache-line
bouncing to stay within a specified partition instead of affecting all
cpus in the system.

Finally, we limit the scope of find_lowest_cpu searches to the domain
instead of the entire system.  Note that we would always respect domain
boundaries even without this patch, but we first would scan potentially
all cpus before whittling the list down.  Now we can avoid looking at
RQs that are out of scope, again reducing cache-line hits.

Note: In some cases, task->cpus_allowed will effectively reduce our search
to within our domain.  However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution.  If it can be optimized later, so be it.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: add sched-domain roots
Gregory Haskins [Fri, 25 Jan 2008 20:08:18 +0000 (21:08 +0100)]
sched: add sched-domain roots

We add the notion of a root-domain which will be used later to rescope
global variables to per-domain variables.  Each exclusive cpuset
essentially defines an island domain by fully partitioning the member cpus
from any other cpuset.  However, we currently still maintain some
policy/state as global variables which transcend all cpusets.  Consider,
for instance, rt-overload state.

Whenever a new exclusive cpuset is created, we also create a new
root-domain object and move each cpu member to the root-domain's span.
By default the system creates a single root-domain with all cpus as
members (mimicking the global state we have today).

We add some plumbing for storing class specific data in our root-domain.
Whenever a RQ is switching root-domains (because of repartitioning) we
give each sched_class the opportunity to remove any state from its old
domain and add state to the new one.  This logic doesn't have any clients
yet but it will later in the series.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Christoph Lameter <clameter@sgi.com>
CC: Paul Jackson <pj@sgi.com>
CC: Simon Derr <simon.derr@bull.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up schedule_balance_rt()
Ingo Molnar [Fri, 25 Jan 2008 20:08:17 +0000 (21:08 +0100)]
sched: clean up schedule_balance_rt()

clean up schedule_balance_rt().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up pull_rt_task()
Ingo Molnar [Fri, 25 Jan 2008 20:08:17 +0000 (21:08 +0100)]
sched: clean up pull_rt_task()

clean up pull_rt_task().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove leftover debugging
Ingo Molnar [Fri, 25 Jan 2008 20:08:16 +0000 (21:08 +0100)]
sched: remove leftover debugging

remove leftover debugging.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove rt_overload()
Ingo Molnar [Fri, 25 Jan 2008 20:08:16 +0000 (21:08 +0100)]
sched: remove rt_overload()

remove rt_overload() - it's an unnecessary indirection.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up kernel/sched_rt.c
Ingo Molnar [Fri, 25 Jan 2008 20:08:15 +0000 (21:08 +0100)]
sched: clean up kernel/sched_rt.c

clean up whitespace damage and missing comments in kernel/sched_rt.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up overlong line in kernel/sched_debug.c
Ingo Molnar [Fri, 25 Jan 2008 20:08:15 +0000 (21:08 +0100)]
sched: clean up overlong line in kernel/sched_debug.c

clean up overlong line in kernel/sched_debug.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up find_lock_lowest_rq()
Ingo Molnar [Fri, 25 Jan 2008 20:08:15 +0000 (21:08 +0100)]
sched: clean up find_lock_lowest_rq()

clean up find_lock_lowest_rq().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up pick_next_highest_task_rt()
Ingo Molnar [Fri, 25 Jan 2008 20:08:14 +0000 (21:08 +0100)]
sched: clean up pick_next_highest_task_rt()

clean up pick_next_highest_task_rt().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance on new task
Steven Rostedt [Fri, 25 Jan 2008 20:08:14 +0000 (21:08 +0100)]
sched: RT-balance on new task

rt-balance when creating new tasks.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance, optimize cpu search
Steven Rostedt [Fri, 25 Jan 2008 20:08:13 +0000 (21:08 +0100)]
sched: RT-balance, optimize cpu search

This patch removes several cpumask operations by keeping track
of the first of the CPUs that is of the lowest priority. When
the search for the lowest-priority runqueue is completed, all
the bits up to the first CPU with the lowest-priority runqueue
are cleared.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance, optimize
Gregory Haskins [Fri, 25 Jan 2008 20:08:13 +0000 (21:08 +0100)]
sched: RT-balance, optimize

We can cheaply track the number of bits set in the cpumask for the lowest
priority CPUs.  Therefore, compute the mask's weight and use it to skip
the optimal domain search logic when there is only one CPU available.
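
The shortcut amounts to roughly this (a sketch using the cpumask API of the
time; 'lowest_mask' and the exact names are approximate):

  /* lowest_mask holds the cpus whose runqueues are at the lowest priority */
  if (cpus_weight(lowest_mask) == 1)
          return first_cpu(lowest_mask);  /* skip the sched-domain walk */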

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: break out early if RT task cannot be migrated
Gregory Haskins [Fri, 25 Jan 2008 20:08:13 +0000 (21:08 +0100)]
sched: break out early if RT task cannot be migrated

We don't need to bother searching if the task cannot be migrated

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT-balance, avoid overloading
Steven Rostedt [Fri, 25 Jan 2008 20:08:12 +0000 (21:08 +0100)]
sched: RT-balance, avoid overloading

This patch changes the searching for a run queue by a waking RT task
to try to pick another runqueue if the currently running task
is an RT task.

The reason is that RT tasks behave differently from normal
tasks. Preempting a normal task to run an RT task to keep
its cache hot is fine, because the preempted non-RT task
may wait on that same runqueue to run again unless the
migration thread comes along and pulls it off.

RT tasks behave differently. If one is preempted, it makes
an active effort to continue to run. So by having a high
priority task preempt a lower priority RT task, that lower
RT task will then quickly try to run on another runqueue.
This will cause that lower RT task to replace its nice
hot cache (and TLB) with a completely cold one, all in
the hope that the new high priority RT task will keep
its cache hot.

Remember that this high priority RT task was just woken up.
So it has likely been sleeping for several milliseconds,
and will end up with a cold cache anyway. RT tasks run until
they voluntarily stop, or are preempted by a higher priority
task. This means that it is unlikely that the woken RT task
will have a hot cache to wake up to. So pushing off a lower
RT task is just killing its cache for no good reason.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: wake-balance fixes
Gregory Haskins [Fri, 25 Jan 2008 20:08:12 +0000 (21:08 +0100)]
sched: wake-balance fixes

We have logic to detect whether the system has migratable tasks, but we are
not using it when deciding whether to push tasks away.  So we add support
for considering this new information.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: optimize RT affinity
Gregory Haskins [Fri, 25 Jan 2008 20:08:11 +0000 (21:08 +0100)]
sched: optimize RT affinity

The current code base assumes a relatively flat CPU/core topology and will
route RT tasks to any CPU fairly equally.  In the real world, there are
various topologies and affinities that govern where a task is best suited to
run with the smallest amount of overhead.  NUMA and multi-core CPUs are
prime examples of topologies that can impact cache performance.

Fortunately, linux is already structured to represent these topologies via
the sched_domains interface.  So we change our RT router to consult a
combination of topology and affinity policy to best place tasks during
migration.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: pre-route RT tasks on wakeup
Gregory Haskins [Fri, 25 Jan 2008 20:08:10 +0000 (21:08 +0100)]
sched: pre-route RT tasks on wakeup

In the original patch series that Steven Rostedt and I worked on together,
we both took different approaches to low-priority wakeup path.  I utilized
"pre-routing" (push the task away to a less important RQ before activating)
approach, while Steve utilized a "post-routing" approach.  The advantage of
my approach is that you avoid the overhead of a wasted activate/deactivate
cycle and peripherally related burdens.  The advantage of Steve's method is
that it neatly solves an issue preventing a "pull" optimization from being
deployed.

In the end, we ended up deploying Steve's idea.  But it later dawned on me
that we could get the best of both worlds by deploying both ideas together,
albeit slightly modified.

The idea is simple:  Use a "light-weight" lookup for pre-routing, since we
only need to approximate a good home for the task.  And we also retain the
post-routing push logic to clean up any inaccuracies caused by a condition
of "priority mistargeting" caused by the lightweight lookup.  Most of the
time, the pre-routing should work and yield lower overhead.  In the cases
where it doesnt, the post-router will bat cleanup.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: RT balancing: include current CPU
Gregory Haskins [Fri, 25 Jan 2008 20:08:10 +0000 (21:08 +0100)]
sched: RT balancing: include current CPU

It doesn't hurt if we allow the current CPU to be included in the
search.  We will just simply skip it later if the current CPU turns out
to be the lowest.

We will use this later in the series.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: break out search for RT tasks
Gregory Haskins [Fri, 25 Jan 2008 20:08:10 +0000 (21:08 +0100)]
sched: break out search for RT tasks

Isolate the search logic into a function so that it can be used later
in places other than find_lock_lowest_rq().

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: de-SCHED_OTHER-ize the RT path
Gregory Haskins [Fri, 25 Jan 2008 20:08:09 +0000 (21:08 +0100)]
sched: de-SCHED_OTHER-ize the RT path

The current wake-up code path tries to determine if it can optimize the
wake-up to "this_cpu" by computing load calculations.  The problem is that
these calculations are only relevant to SCHED_OTHER tasks where load is king.
For RT tasks, priority is king.  So the load calculation is completely wasted
bandwidth.

Therefore, we create a new sched_class interface to help with
pre-wakeup routing decisions and move the load calculation into the
CFS class's implementation of it.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: clean up this_rq use in kernel/sched_rt.c
Gregory Haskins [Fri, 25 Jan 2008 20:08:09 +0000 (21:08 +0100)]
sched: clean up this_rq use in kernel/sched_rt.c

"this_rq" is normally used to denote the RQ on the current cpu
(i.e. "cpu_rq(this_cpu)").  So clean up the usage of this_rq to be
more consistent with the rest of the code.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: add RT-balance cpu-weight
Gregory Haskins [Fri, 25 Jan 2008 20:08:07 +0000 (21:08 +0100)]
sched: add RT-balance cpu-weight

Some RT tasks (particularly kthreads) are bound to one specific CPU.
It is fairly common for two or more bound tasks to get queued up at the
same time.  Consider, for instance, softirq_timer and softirq_sched.  A
timer goes off in an ISR which schedules softirq_thread to run at RT50.
Then the timer handler determines that it's time to smp-rebalance the
system so it schedules softirq_sched to run.  So we are in a situation
where we have two RT50 tasks queued, and the system will go into
rt-overload condition to request other CPUs for help.

This causes two problems in the current code:

1) If a high-priority bound task and a low-priority unbounded task queue
   up behind the running task, we will fail to ever relocate the unbounded
   task because we terminate the search on the first unmovable task.

2) We spend precious futile cycles in the fast-path trying to pull
   overloaded tasks over.  It is therefore optimal to strive to avoid the
   overhead altogether if we can cheaply detect the condition before
   overload even occurs.

This patch tries to achieve this optimization by utilizing the Hamming
weight of the task->cpus_allowed mask.  A weight of 1 indicates that
the task cannot be migrated.  We will then utilize this information to
skip non-migratable tasks and to eliminate unnecessary rebalance attempts.

We introduce a per-rq variable to count the number of migratable tasks
that are currently running.  We only go into overload if we have more
than one rt task, AND at least one of them is migratable.

In addition, we introduce a per-task variable to cache the cpus_allowed
weight, since the hamming calculation is probably relatively expensive.
We only update the cached value when the mask is updated which should be
relatively infrequent, especially compared to scheduling frequency
in the fast path.
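
The cached weight then only needs refreshing in the affinity-update path,
along these lines (a simplified sketch; the hook and field names are
approximate and the rest of the bookkeeping is omitted):

  /* rt-class hook invoked when the affinity mask changes */
  static void set_cpus_allowed_rt(struct task_struct *p,
                                  const cpumask_t *new_mask)
  {
          p->rt.nr_cpus_allowed = cpus_weight(*new_mask);
          /* a weight of 1 means the task cannot be migrated */
  }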

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: disable standard balancer for RT tasks
Steven Rostedt [Fri, 25 Jan 2008 20:08:07 +0000 (21:08 +0100)]
sched: disable standard balancer for RT tasks

Since we now take an active approach to load balancing, we don't need to
balance RT tasks via the normal task balancer. In fact, this code was
found to pull RT tasks away from the CPUs that the active movement had
placed them on, resulting in large latencies.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: push RT tasks from overloaded CPUs
Steven Rostedt [Fri, 25 Jan 2008 20:08:07 +0000 (21:08 +0100)]
sched: push RT tasks from overloaded CPUs

This patch adds pushing of overloaded RT tasks from a runqueue that is
having tasks (most likely RT tasks) added to the run queue.

TODO: We don't cover the case of waking of new RT tasks (yet).

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: pull RT tasks from overloaded runqueues
Steven Rostedt [Fri, 25 Jan 2008 20:08:07 +0000 (21:08 +0100)]
sched: pull RT tasks from overloaded runqueues

This patch adds the algorithm to pull tasks from RT overloaded runqueues.

When an RT pull is initiated, all overloaded runqueues are examined for
an RT task that is higher in prio than the highest-prio task queued on the
target runqueue. If such a task is found on another runqueue, it is pulled
to the target runqueue.
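
The core of the comparison looks roughly like this sketch (helper names and
arguments are approximate; remember that a lower prio value means a more
important task):

  /* for each overloaded src_rq: */
  if (src_rq->rt.highest_prio < this_rq->rt.highest_prio) {
          p = pick_next_highest_task_rt(src_rq, this_cpu);
          if (p && p->prio < this_rq->rt.highest_prio) {
                  deactivate_task(src_rq, p, 0);
                  set_task_cpu(p, this_cpu);
                  activate_task(this_rq, p, 0);
          }
  }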

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: add rt-overload tracking
Steven Rostedt [Fri, 25 Jan 2008 20:08:06 +0000 (21:08 +0100)]
sched: add rt-overload tracking

This patch adds an RT overload accounting system. When a runqueue has
more than one RT task queued, it is marked as overloaded. That is, it
is a candidate to have RT tasks pulled from it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: add RT task pushing
Steven Rostedt [Fri, 25 Jan 2008 20:08:05 +0000 (21:08 +0100)]
sched: add RT task pushing

This patch adds an algorithm to push extra RT tasks off a run queue to
other CPU runqueues.

When more than one RT task is added to a run queue, this algorithm takes
an assertive approach to push the RT tasks that are not running onto other
run queues that have lower priority.  The way this works is that the highest
RT task that is not running is looked at and we examine the runqueues of
the CPUs in that task's affinity mask. We find the runqueue with the lowest
prio in the CPU affinity of the picked task, and if it is lower in prio than
the picked task, we push the task onto that CPU runqueue.

We continue pushing RT tasks off the current runqueue until we don't push any
more.  The algorithm stops when the next highest RT task can't preempt any
other processes on other CPUs.
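
One push attempt can be outlined as follows (a sketch; helper names and
arguments are approximate and the locking is omitted):

  /* pick the highest-prio queued-but-not-running rt task on this rq */
  next_task = pick_next_highest_task_rt(rq, -1);
  lowest_rq = find_lock_lowest_rq(next_task, rq);

  /* push only if the target runs something of lower priority */
  if (lowest_rq && lowest_rq->rt.highest_prio > next_task->prio) {
          deactivate_task(rq, next_task, 0);
          set_task_cpu(next_task, lowest_rq->cpu);
          activate_task(lowest_rq, next_task, 0);
          resched_task(lowest_rq->curr);
  }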

TODO: The algorithm may stop when there are still RT tasks that can be
 migrated. Specifically, if the highest non-running RT task's CPU affinity
 is restricted to CPUs that are running higher priority tasks, there may
 be a lower priority task queued that has an affinity with a CPU running
 a lower priority task to which it could be migrated.  This
 patch set does not address this issue.

Note: checkpatch reveals two over 80 character instances. I'm not sure
 that breaking them up will help visually, so I left them as is.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: track highest prio task queued
Steven Rostedt [Fri, 25 Jan 2008 20:08:04 +0000 (21:08 +0100)]
sched: track highest prio task queued

This patch adds accounting to each runqueue to keep track of the
highest prio task queued on the run queue. We only care about
RT tasks, so if the run queue does not contain any active RT tasks
its priority will be considered MAX_RT_PRIO.
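
The bookkeeping is tiny; roughly (a sketch, remembering that a lower numeric
prio means a more important task):

  /* on enqueue of an rt task */
  if (p->prio < rq->rt.highest_prio)
          rq->rt.highest_prio = p->prio;

  /* on dequeue the value is recomputed, or reset to MAX_RT_PRIO
     when no rt tasks remain */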

This information will be used for later patches.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: count # of queued RT tasks
Steven Rostedt [Fri, 25 Jan 2008 20:08:03 +0000 (21:08 +0100)]
sched: count # of queued RT tasks

This patch adds accounting to keep track of the number of RT tasks running
on a runqueue. This information will be used in later patches.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosoftlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks
Ingo Molnar [Fri, 25 Jan 2008 20:08:02 +0000 (21:08 +0100)]
softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks

this patch extends the soft-lockup detector to automatically
detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
printed the following way:

 ------------------>
 INFO: task prctl:3042 blocked for more than 120 seconds.
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
 prctl         D fd5e3793     0  3042   2997
        f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
        f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
        f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
 Call Trace:
  [<c04883a5>] schedule_timeout+0x6d/0x8b
  [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
  [<c0133a76>] msleep+0x10/0x16
  [<c0138974>] sys_prctl+0x30/0x1e2
  [<c0104c52>] sysenter_past_esp+0x5f/0xa5
  =======================
 2 locks held by prctl/3042:
 #0:  (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
 #1:  (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
 <------------------

the current default timeout is 120 seconds. Such messages are printed
up to 10 times per bootup. If the system has crashed already then the
messages are not printed.

if lockdep is enabled then all held locks are printed as well.

this feature is a natural extension to the softlockup-detector (kernel
locked up without scheduling) and to the NMI watchdog (kernel locked up
with IRQs disabled).

[ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
[ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
16 years agocpu-hotplug: fix build on !CONFIG_SMP
Ingo Molnar [Fri, 25 Jan 2008 20:08:02 +0000 (21:08 +0100)]
cpu-hotplug: fix build on !CONFIG_SMP

fix build on !CONFIG_SMP.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agocpu-hotplug: replace per-subsystem mutexes with get_online_cpus()
Gautham R Shenoy [Fri, 25 Jan 2008 20:08:02 +0000 (21:08 +0100)]
cpu-hotplug: replace per-subsystem mutexes with get_online_cpus()

This patch converts the known per-subsystem mutexes to
get_online_cpus/put_online_cpus. It also eliminates the CPU_LOCK_ACQUIRE and
CPU_LOCK_RELEASE hotplug notification events.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agocpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus()
Gautham R Shenoy [Fri, 25 Jan 2008 20:08:02 +0000 (21:08 +0100)]
cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus()

Remove all lock_cpu_hotplug/unlock_cpu_hotplug calls from the kernel and use
get_online_cpus and put_online_cpus instead, as this highlights the
refcount semantics in these operations.

The new API guarantees protection against the cpu-hotplug operation, but
it doesn't guarantee serialized access to any of the local data
structures. Hence the changes need to be reviewed.
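
A typical converted reader then looks like this minimal sketch
(do_something() is just a placeholder):

  get_online_cpus();
  for_each_online_cpu(cpu)
          do_something(cpu);      /* the online map cannot change here */
  put_online_cpus();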

In case of pseries_add_processor/pseries_remove_processor, use
cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
cpu_present_map there.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agocpu-hotplug: refcount based cpu hotplug
Gautham R Shenoy [Fri, 25 Jan 2008 20:08:01 +0000 (21:08 +0100)]
cpu-hotplug: refcount based cpu hotplug

This patch implements a Refcount + Waitqueue based model for
cpu-hotplug.

Now, a thread which wants to prevent cpu-hotplug will bump up a global
refcount and the thread which wants to perform a cpu-hotplug operation
will block till the global refcount goes to zero.

The readers, if any, during an ongoing cpu-hotplug operation are blocked
until the cpu-hotplug operation is over.
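
The reader side of the model is essentially the following much-simplified
sketch of the described scheme (not the exact implementation; the structure
and field names are approximate):

  void get_online_cpus(void)
  {
          might_sleep();
          mutex_lock(&cpu_hotplug.lock);
          cpu_hotplug.refcount++;         /* blocks any hotplug writer */
          mutex_unlock(&cpu_hotplug.lock);
  }

  /* the hotplug writer waits until the refcount drops to zero, and new
     readers block on the mutex while the operation is in progress */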

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Paul Jackson <pj@sgi.com> [For !CONFIG_HOTPLUG_CPU ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: group scheduler, fix fairness of cpu bandwidth allocation for task groups
Srivatsa Vaddagiri [Fri, 25 Jan 2008 20:08:00 +0000 (21:08 +0100)]
sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups

The current load balancing scheme isn't good enough for precise
group fairness.

For example: on a 8-cpu system, I created 3 groups as under:

a = 8 tasks (cpu.shares = 1024)
b = 4 tasks (cpu.shares = 1024)
c = 3 tasks (cpu.shares = 1024)

a, b and c are task groups that have equal weight. We would expect each
of the groups to receive 33.33% of cpu bandwidth under a fair scheduler.

This is what I get with the latest scheduler git tree:

--------------------------------------------------------------------------------
Col1  | Col2    | Col3  |  Col4
------|---------|-------|-------------------------------------------------------
a     | 277.676 | 57.8% | 54.1%  54.1%  54.1%  54.2%  56.7%  62.2%  62.8% 64.5%
b     | 116.108 | 24.2% | 47.4%  48.1%  48.7%  49.3%
c     |  86.326 | 18.0% | 47.5%  47.9%  48.5%
--------------------------------------------------------------------------------

Explanation of o/p:

Col1 -> Group name
Col2 -> Cumulative execution time (in seconds) received by all tasks of that
group in a 60sec window across 8 cpus
Col3 -> CPU bandwidth received by the group in the 60sec window, expressed in
        percentage. Col3 data is derived as:
Col3 = 100 * Col2 / (NR_CPUS * 60)
Col4 -> CPU bandwidth received by each individual task of the group.
Col4 = 100 * cpu_time_recd_by_task / 60

[I can share the test case that produces a similar o/p if reqd]

The deviation from desired group fairness is as below:

a = +24.47%
b = -9.13%
c = -15.33%

which is quite high.

After the patch below is applied, here are the results:

--------------------------------------------------------------------------------
Col1  | Col2    | Col3  |  Col4
------|---------|-------|-------------------------------------------------------
a     | 163.112 | 34.0% | 33.2%  33.4%  33.5%  33.5%  33.7%  34.4%  34.8% 35.3%
b     | 156.220 | 32.5% | 63.3%  64.5%  66.1%  66.5%
c     | 160.653 | 33.5% | 85.8%  90.6%  91.4%
--------------------------------------------------------------------------------

Deviation from desired group fairness is as below:

a = +0.67%
b = -0.83%
c = +0.17%

which is far better IMO. Most of other runs have yielded a deviation within
+-2% at the most, which is good.

Why do we see bad (group) fairness with the current scheduler?
=========================================================

Currently a cpu's weight is just the summation of individual task weights.
This can yield incorrect results. For example, consider three groups as below
on a 2-cpu system:

CPU0      CPU1
---------------------------
A (10)    B(5)
          C(5)
---------------------------

Group A has 10 tasks, all on CPU0, Group B and C have 5 tasks each all
of which are on CPU1. Each task has the same weight (NICE_0_LOAD =
1024).

The current scheme would yield a cpu weight of 10240 (10*1024) for each cpu and
the load balancer will think both CPUs are perfectly balanced and won't
move around any tasks. This, however, would yield this bandwidth:

A = 50%
B = 25%
C = 25%

which is not the desired result.

What's changing in the patch?
=============================

- How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is
  defined (see below)
- API Change
- Two tunables introduced in sysfs (under SCHED_DEBUG) to
  control the frequency at which the load balance monitor
  thread runs.

The basic change made in this patch is how cpu weight (rq->load.weight) is
calculated. It is now calculated as the summation of group weights on a cpu,
rather than summation of task weights. Weight exerted by a group on a
cpu is dependent on the shares allocated to it and also the number of
tasks the group has on that cpu compared to the total number of
(runnable) tasks the group has in the system.

Let,
W(K,i)  = Weight of group K on cpu i
T(K,i)  = Task load present in group K's cfs_rq on cpu i
T(K)    = Total task load of group K across various cpus
S(K)  = Shares allocated to group K
NRCPUS = Number of online cpus in the scheduler domain to
    which group K is assigned.

Then,
W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)
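
For illustration, applying this to the earlier 2-cpu example (S(K) = 1024 for
every group, NRCPUS = 2): group A has all of its load on CPU0, so
W(A,0) = 1024 * 2 * 1 = 2048, while groups B and C each contribute 2048 on
CPU1. The load balancer now sees 2048 on CPU0 versus 4096 on CPU1, instead of
the two "perfectly balanced" weights of 10240 it saw before, and will move
tasks accordingly.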

A load balance monitor thread is created at bootup, which periodically
runs and adjusts group's weight on each cpu. To avoid its overhead, two
min/max tunables are introduced (under SCHED_DEBUG) to control the rate
at which it runs.

Fixes from: Peter Zijlstra <a.p.zijlstra@chello.nl>

- don't start the load_balance_monitor when there is only a single cpu.
- rename the kthread because its name is currently longer than TASK_COMM_LEN

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: introduce a mutex and corresponding API to serialize access to doms_cur[] array
Srivatsa Vaddagiri [Fri, 25 Jan 2008 20:08:00 +0000 (21:08 +0100)]
sched: introduce a mutex and corresponding API to serialize access to doms_cur[] array

doms_cur[] array represents various scheduling domains which are
mutually exclusive. Currently cpusets code can modify this array (by
calling partition_sched_domains()) as a result of user modifying
sched_load_balance flag for various cpusets.

This patch introduces a mutex and corresponding API (only when
CONFIG_FAIR_GROUP_SCHED is defined) which allows a reader to safely read
the doms_cur[] array without worrying about concurrent modifications to
the array.

The fair group scheduler code (introduced in the next patch of this series)
makes use of this mutex to walk through the doms_cur[] array while
rebalancing shares of task groups across cpus.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: group scheduling, change how cpu load is calculated
Srivatsa Vaddagiri [Fri, 25 Jan 2008 20:08:00 +0000 (21:08 +0100)]
sched: group scheduling, change how cpu load is calculated

This patch changes how the cpu load exerted by fair_sched_class tasks
is calculated. Load exerted by fair_sched_class tasks on a cpu is now
a summation of the group weights, rather than summation of task weights.
Weight exerted by a group on a cpu is dependent on the shares allocated
to it.

This version of the patch has a minor impact on code size, but should have
no runtime/functional impact for !CONFIG_FAIR_GROUP_SCHED.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: group scheduling, minor fixes
Srivatsa Vaddagiri [Fri, 25 Jan 2008 20:07:59 +0000 (21:07 +0100)]
sched: group scheduling, minor fixes

Minor bug fixes for the group scheduler:

- Use a mutex to serialize add/remove of task groups and also when
  changing shares of a task group. Use the same mutex when printing
  cfs_rq debugging stats for various task groups.

- Use list_for_each_entry_rcu in for_each_leaf_cfs_rq macro (when
  walking task group list)

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: group scheduling code cleanup
Srivatsa Vaddagiri [Fri, 25 Jan 2008 20:07:59 +0000 (21:07 +0100)]
sched: group scheduling code cleanup

Minor cleanups:

- Fix coding style
- remove obsolete comment

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove printk_clock references from ia64
Ingo Molnar [Fri, 25 Jan 2008 20:07:59 +0000 (21:07 +0100)]
sched: remove printk_clock references from ia64

remove remaining printk_clock references from ia64.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: remove printk_clock()
Ingo Molnar [Fri, 25 Jan 2008 20:07:59 +0000 (21:07 +0100)]
sched: remove printk_clock()

printk_clock() is obsolete - it has been replaced with cpu_clock().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years agosched: fix CONFIG_PRINT_TIME's reliance on sched_clock()
Ingo Molnar [Fri, 25 Jan 2008 20:07:58 +0000 (21:07 +0100)]
sched: fix CONFIG_PRINTK_TIME's reliance on sched_clock()

Stefano Brivio reported weird printk timestamp behavior during
CPU frequency changes:

  http://bugzilla.kernel.org/show_bug.cgi?id=9475

fix CONFIG_PRINTK_TIME's reliance on sched_clock() and use cpu_clock()
instead.
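
A sketch of the timestamping path with cpu_clock(); the surrounding variable
names follow kernel/printk.c conventions but are shown here as assumptions.

/* inside vprintk(), when printk timestamps are enabled (sketch) */
unsigned long long t;
unsigned long nanosec_rem;
char tbuf[50];

t = cpu_clock(printk_cpu);	/* stays sane across cpu frequency changes,
                                 * unlike raw sched_clock() */
nanosec_rem = do_div(t, 1000000000);
sprintf(tbuf, "[%5lu.%06lu] ", (unsigned long)t, nanosec_rem / 1000);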

Reported-and-bisected-by: Stefano Brivio <stefano.brivio@polimi.it>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago printk: make printk more robust by not allowing recursion
Ingo Molnar [Fri, 25 Jan 2008 20:07:58 +0000 (21:07 +0100)]
printk: make printk more robust by not allowing recursion

make printk more robust by allowing recursion only if there's a crash
going on. Also add recursion detection.

I've tested it with an artificially injected printk recursion - instead
of a lockup, a spontaneous reboot or some other crash, the result was this
well-controlled output:

[   41.057335] SysRq : <2>BUG: recent printk recursion!
[   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full kIll saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks Unmount shoW-blocked-tasks

also do all this printk-debug logic with irqs disabled.
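
The guard could be structured roughly as below: detect same-CPU re-entry,
allow it only while an oops is in progress, and remember the event so a
one-line notice can be printed later. This is a sketch of the pattern; the
symbol names and the label are assumptions.

static int printk_recursion_bug;
/* cpu currently inside printk(), holding the log buffer lock */
static volatile unsigned int printk_cpu = UINT_MAX;

        /* inside vprintk(), with local irqs already disabled (sketch) */
        int this_cpu = smp_processor_id();

        if (unlikely(printk_cpu == this_cpu)) {
                if (!oops_in_progress) {
                        /* no crash in progress: refuse the recursion and
                         * note it so a "BUG: recent printk recursion!" line
                         * can be emitted once printing works again */
                        printk_recursion_bug = 1;
                        goto out_restore_irqs;	/* label is an assumption */
                }
                /* a crash is in progress: let the recursion through */
        }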

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Reviewed-by: Nick Piggin <npiggin@suse.de>
16 years ago Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris...
Linus Torvalds [Fri, 25 Jan 2008 16:44:29 +0000 (08:44 -0800)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
  selinux: make mls_compute_sid always polyinstantiate
  security/selinux: constify function pointer tables and fields
  security: add a secctx_to_secid() hook
  security: call security_file_permission from rw_verify_area
  security: remove security_sb_post_mountroot hook
  Security: remove security.h include from mm.h
  Security: remove security_file_mmap hook sparse-warnings (NULL as 0).
  Security: add get, set, and cloning of superblock security information
  security/selinux: Add missing "space"

16 years ago Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen...
Linus Torvalds [Fri, 25 Jan 2008 16:40:02 +0000 (08:40 -0800)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6:
  [AVR32] extint: Set initial irq type to low level
  [AVR32] extint: change set_irq_type() handling
  [AVR32] NMI debugging
  [AVR32] constify function pointer tables
  [AVR32] ATNGW100: Update defconfig
  [AVR32] ATSTK1002: Update defconfig
  [AVR32] Kconfig: Choose daughterboard instead of CPU
  [AVR32] Add support for ATSTK1003 and ATSTK1004
  [AVR32] Clean up external DAC setup code
  [AVR32] ATSTK1000: Move gpio-leds setup to setup.c
  [AVR32] Add support for AT32AP7001 and AT32AP7002
  [AVR32] Provide more CPU information in /proc/cpuinfo and dmesg
  [AVR32] Oprofile support
  [AVR32] Include instrumentation menu
  Disable VGA text console for AVR32 architecture
  [AVR32] Enable debugging only when needed
  ptrace: Call arch_ptrace_attach() when request=PTRACE_TRACEME
  [AVR32] Remove redundant try_to_freeze() call from do_signal()
  [AVR32] Drop GFP_COMP for DMA memory allocations

16 years ago Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-nmw
Linus Torvalds [Fri, 25 Jan 2008 16:39:18 +0000 (08:39 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-nmw

* git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-nmw: (56 commits)
  [GFS2] Allow journal recovery on read-only mount
  [GFS2] Lockup on error
  [GFS2] Fix page_mkwrite truncation race path
  [GFS2] Fix typo
  [GFS2] Fix write alloc required shortcut calculation
  [GFS2] gfs2_alloc_required performance
  [GFS2] Remove unneeded i_spin
  [GFS2] Reduce inode size by moving i_alloc out of line
  [GFS2] Fix assert in log code
  [GFS2] Fix problems relating to execution of files on GFS2
  [GFS2] Initialize extent_list earlier
  [GFS2] Allow page migration for writeback and ordered pages
  [GFS2] Remove unused variable
  [GFS2] Fix log block mapper
  [GFS2] Minor correction
  [GFS2] Eliminate the no longer needed sd_statfs_mutex
  [GFS2] Incremental patch to fix compiler warning
  [GFS2] Function meta_read optimization
  [GFS2] Only fetch the dinode once in block_map
  [GFS2] Reorganize function gfs2_glmutex_lock
  ...

16 years ago Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Linus Torvalds [Fri, 25 Jan 2008 16:38:25 +0000 (08:38 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (125 commits)
  [CRYPTO] twofish: Merge common glue code
  [CRYPTO] hifn_795x: Fixup container_of() usage
  [CRYPTO] cast6: inline bloat--
  [CRYPTO] api: Set default CRYPTO_MINALIGN to unsigned long long
  [CRYPTO] tcrypt: Make xcbc available as a standalone test
  [CRYPTO] xcbc: Remove bogus hash/cipher test
  [CRYPTO] xcbc: Fix algorithm leak when block size check fails
  [CRYPTO] tcrypt: Zero axbuf in the right function
  [CRYPTO] padlock: Only reset the key once for each CBC and ECB operation
  [CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h
  [CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
  [CRYPTO] tcrypt: Add select of AEAD
  [CRYPTO] salsa20: Add x86-64 assembly version
  [CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
  [CRYPTO] gcm: Introduce rfc4106
  [CRYPTO] api: Show async type
  [CRYPTO] chainiv: Avoid lock spinning where possible
  [CRYPTO] seqiv: Add select AEAD in Kconfig
  [CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
  [CRYPTO] null: Allow setkey on digest_null
  ...

16 years ago Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-2.6
Linus Torvalds [Fri, 25 Jan 2008 16:34:42 +0000 (08:34 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-2.6

This can be broken down into these major areas:
 - Documentation updates (language translations and fixes, as
   well as kobject and kset documentation updates.)
 - major kset/kobject/ktype rework and fixes.  This cleans up the
   kset and kobject and ktype relationship and architecture,
   making sense of things now, and good documentation and samples
   are provided for others to use.  Also the attributes for
   kobjects are much easier to handle now.  This cleaned up a LOT
   of code all through the kernel, making kobjects easier to use
   if you want to (a short usage sketch follows after this list).
 - struct bus_type has been reworked to now handle the lifetime
   rules properly, as the kobject is properly dynamic.
 - struct driver has also been reworked, and now the lifetime
   issues are resolved.
 - the block subsystem has been converted to use struct device
   now, and not "raw" kobjects.  This patch has been in the -mm
   tree for over a year now, and finally all the issues are
   worked out with it.  Older distros now properly work with new
   kernels, and no userspace updates are needed at all.
 - nozomi driver is added.  This has also been in -mm for a long
   time, and many people have asked for it to go in.  It is now
   in good enough shape to do so.
 - lots of class_device conversions to use struct device instead.
   The tree is almost all cleaned up now; only SCSI and IB remain
   to be fixed up...
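
As a flavour of the simpler kobject handling this series aims for (a sketch
in the spirit of the added sample code; treat the helper names as
assumptions about this tree):

#include <linux/kobject.h>
#include <linux/module.h>

static struct kobject *example_kobj;

static int __init example_init(void)
{
        /* create /sys/kernel/example and take a reference on it */
        example_kobj = kobject_create_and_add("example", kernel_kobj);
        if (!example_kobj)
                return -ENOMEM;
        return 0;
}

static void __exit example_exit(void)
{
        kobject_put(example_kobj);	/* drop the ref, removing the sysfs entry */
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");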

* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-2.6: (196 commits)
  Driver core: coding style fixes
  Kobject: fix coding style issues in kobject c files
  Kobject: fix coding style issues in kobject.h
  Driver core: fix coding style issues in device.h
  spi: use class iteration api
  scsi: use class iteration api
  rtc: use class iteration api
  power supply : use class iteration api
  ieee1394: use class iteration api
  Driver Core: add class iteration api
  Driver core: Cleanup get_device_parent() in device_add() and device_move()
  UIO: constify function pointer tables
  Driver Core: constify the name passed to platform_device_register_simple
  driver core: fix build with SYSFS=n
  sysfs: make SYSFS_DEPRECATED depend on SYSFS
  Driver core: use LIST_HEAD instead of call to INIT_LIST_HEAD in __init
  kobject: add sample code for how to use ksets/ktypes/kobjects
  kobject: add sample code for how to use kobjects in a simple manner.
  kobject: update the kobject/kset documentation
  kobject: remove old, outdated documentation.
  ...