Al Viro [Thu, 12 Jan 2006 09:05:34 +0000 (01:05 -0800)]
[PATCH] missing helper - task_stack_page()
Patchset annotates arch/* uses of ->thread_info. Ones that really are about
access to the thread_info of a given process are simply switched to
task_thread_info(task); ones that deal with access to objects on stack are
switched to new helper - task_stack_page(). A _lot_ of the latter are
actually open-coded instances of "find where pt_regs are"; those are
consolidated into task_pt_regs(task) (many architectures actually have such
helper already).
Note that these annotations are not mandatory - any code not converted to
these helpers still works. However, they clean up a lot of places and have
actually caught a number of bugs, so converting out-of-tree ports would be a
good idea...
As an example of breakage caught by that stuff, see the i386 pt_regs mess - we
used to have it open-coded in a bunch of places, and when Stas fixed a bug in
copy_thread() back in April, the rest were left out of sync. That required two
followup patches (the latest just before 2.6.15) _and_ still left the
/proc/*/stat eip field broken. Try ps -eo eip on i386 and watch the junk...
This patch:
new helper - task_stack_page(task). Returns a pointer to the memory object
containing the task's stack; usually the thread_info of the task sits at the
beginning of that object.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
akpm@osdl.org [Thu, 12 Jan 2006 09:05:32 +0000 (01:05 -0800)]
[PATCH] sched: filter affine wakeups
From: Nick Piggin <nickpiggin@yahoo.com.au>
Track the last waker CPU, and only consider wakeup-balancing if there's a
match between current waker CPU and the previous waker CPU. This ensures
that there is some correlation between two subsequent wakeup events before
we move the task. Should help random-wakeup workloads on large SMP
systems, by reducing the migration attempts by a factor of nr_cpus.
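A sketch of the idea (field and label names here are illustrative, not
necessarily those of the actual patch):

        /* in try_to_wake_up(): only consider wake balancing when the
         * previous wakeup of p came from this same CPU, i.e. when two
         * subsequent wakeup events are correlated */
        if (p->last_waker_cpu != this_cpu) {
                p->last_waker_cpu = this_cpu;
                goto out_set_cpu;       /* skip the affine-wakeup path */
        }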
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
akpm@osdl.org [Thu, 12 Jan 2006 09:05:30 +0000 (01:05 -0800)]
[PATCH] scheduler cache-hot-autodetect
From: Ingo Molnar <mingo@elte.hu>
This is the latest version of the scheduler cache-hot-auto-tune patch.
The first problem was that detection time scaled with O(N^2), which is
unacceptable on larger SMP and NUMA systems. To solve this:
- I've added a 'domain distance' function, which is used to cache
measurement results. Each distance is only measured once. This means
that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
distances 0 and 1, and on SMP distance 0 is measured. The code walks
the domain tree to determine the distance, so it automatically follows
whatever hierarchy an architecture sets up. This cuts down on the boot
time significantly and removes the O(N^2) limit. The only assumption
is that migration costs can be expressed as a function of domain
distance - this covers the overwhelming majority of existing systems,
and is a good guess even for more asymmetric systems.
[ People hacking systems that have asymmetries that break this
assumption (e.g. different CPU speeds) should experiment a bit with
the cpu_distance() function. Adding a ->migration_distance factor to
the domain structure would be one possible solution - but let's first
see the problem systems, if they exist at all. Let's not overdesign. ]
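A sketch of such a walk (close in spirit to what cpu_distance() does; the
details here are assumed):

        /* climb cpu1's sched-domain hierarchy until a domain spans cpu2;
         * the number of levels climbed is the 'domain distance' */
        static int domain_distance(int cpu1, int cpu2)
        {
                struct sched_domain *sd;
                int distance = 0;

                for_each_domain(cpu1, sd) {
                        if (cpu_isset(cpu2, sd->span))
                                return distance;
                        distance++;
                }
                return distance;        /* cpu2 unreachable: maximal distance */
        }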
Another problem was that only a single cache-size was used for measuring
the cost of migration, and most architectures didn't set that variable
up. Furthermore, a single cache-size does not fit NUMA hierarchies with
L3 caches and does not fit HT setups, where different CPUs will often
have different 'effective cache sizes'. To solve this problem:
- Instead of relying on a single cache-size provided by the platform and
sticking to it, the code now auto-detects the 'effective migration
cost' between two measured CPUs, by iterating through a wide range of
cache sizes. The code searches for the maximum migration cost, which
occurs when the working set of the test workload falls just below the
'effective cache size'. I.e. a real-life, optimized search is done for
the maximum migration cost, between two real CPUs.
This, amongst other things, has the positive effect that if e.g. two
CPUs share an L2/L3 cache, a different (and accurate) migration cost
will be found than between two CPUs on the same system that don't share
any caches.
(The reliable measurement of migration costs is tricky - see the source
for details.)
Furthermore, I've added various boot-time options to override/tune
migration behavior.
Firstly, there's a blanket override for autodetection:
migration_cost=1000,2000,3000
will override the depth 0/1/2 values with 1msec/2msec/3msec values.
Secondly, there's a global factor that can be used to increase (or
decrease) the autodetected values:
migration_factor=120
will increase the autodetected values by 20%. This option is useful to
tune things in a workload-dependent way - e.g. if a workload is
cache-insensitive then CPU utilization can be maximized by specifying
migration_factor=0.
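Applying the factor is a plain percentage scaling, along these lines
(a sketch):

        /* migration_factor is a percentage: 100 leaves the autodetected
         * cost unchanged, 120 adds 20%, 0 zeroes it out entirely */
        cost = cost * migration_factor / 100;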
I've tested the autodetection code quite extensively on x86, on 3
P3/Xeon/2MB systems, and the autodetected values look pretty good. They
show that there is no migration cost between two HT siblings (CPU#0/2 and
CPU#1/3 are separate physical CPUs), and that a fast memory system makes
inter-physical-CPU migration pretty cheap: 0.4 msecs.
Tejun Heo [Thu, 12 Jan 2006 14:39:26 +0000 (15:39 +0100)]
[PATCH] fix queue stalling while barrier sequencing
If ordered tag isn't supported, request ordering for barrier
sequencing is performed by queue draining, which basically hangs the
request queue until elv_completed_request() reports completion of all
previous fs requests.
The condition check in elv_completed_request() was only performed for
fs requests. If a special request is queued between the last
to-be-drained request and the barrier sequence, draining is never
completed and the queue is stalled forever.
This patch moves the end-of-draining condition check such that it's
performed for all requests.
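In outline, the check moves out of the fs-request-only branch, roughly like
this (a sketch; queue_is_draining() and drain_completed() are illustrative
names, not the actual block-layer API):

        void elv_completed_request(request_queue_t *q, struct request *rq)
        {
                if (blk_fs_request(rq)) {
                        q->in_flight--;
                        /* ... fs-request completion accounting ... */
                }

                /* moved out of the fs-only branch above: a completing
                 * special request must also be able to finish the drain */
                if (queue_is_draining(q) && !q->in_flight)
                        drain_completed(q);
        }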
o This patch fixes a problem with secondary CPU boot-up. The situation
  arises when the kernel is built for non-default load locations such as
  16MB and onwards. In this configuration, only the primary CPU (BP) comes
  up and the secondary CPUs don't boot.
o The problem occurs because in the trampoline code, lgdt is not able to
  load the GDT, which happens to be situated beyond 16MB. This is because
  the CPU is still in real mode and the default operand size is 16 bits.
o This patch uses lgdtl instead of lgdt to force a 32-bit operand size
  instead of 16-bit.
Andi Kleen [Wed, 11 Jan 2006 21:46:57 +0000 (22:46 +0100)]
[PATCH] x86_64: Allow kernel page tables upto the end of memory
Previously they would only be allocated before the kernel text at
1MB. This limited the maximum supported memory to 128GB.
Now allow the e820 allocator to put them everywhere. Try
to put them beyond any DMA zones to avoid filling them up.
This should free some GFP_DMA memory compared to earlier kernels.
Andi Kleen [Wed, 11 Jan 2006 21:46:54 +0000 (22:46 +0100)]
[PATCH] x86_64: Use safe_smp_processor_id in MCE handler
hard_smp_processor_id would return the local APIC id instead
of the Linux processor id. On big systems they are often
not identical. safe_smp_processor_id is just a wrapper
around it that does the necessary conversions.
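Roughly, the wrapper inverts the CPU -> APIC-id table (a sketch; the real
function handles a few more corner cases):

        /* map the hardware APIC id back to a Linux CPU number */
        static int safe_smp_processor_id(void)
        {
                int apicid = hard_smp_processor_id();
                int cpu;

                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        if (x86_cpu_to_apicid[cpu] == apicid)
                                return cpu;

                return 0;       /* not found: fall back to the boot CPU */
        }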
Andi Kleen [Wed, 11 Jan 2006 21:46:51 +0000 (22:46 +0100)]
[PATCH] x86_64: Some housekeeping in local APIC code
Remove support for obsolete hardware and clean up:
- Remove checks for non integrated APICs
- Replace apic_write_around with apic_write.
- Remove apic_read_around
- Remove APIC version reads used by old workarounds
- Remove old workaround for Simics
- Fix indentation
Jan Beulich [Wed, 11 Jan 2006 21:46:48 +0000 (22:46 +0100)]
[PATCH] x86_64: Display meaningful part of filename during BUG()
When building in a separate objtree, file names produced by BUG() & Co. can
get fairly long; printing only the first 50 characters may thus result in
(almost) no useful information. The following change makes it print the
last 50 characters of the filename instead.
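The idea in outline (an illustrative sketch, not the exact patch):

        /* print the informative tail of a long path, not its head */
        const char *file = __FILE__;
        size_t len = strlen(file);

        if (len > 50)
                file += len - 50;       /* keep the last 50 characters */
        printk(KERN_EMERG "Kernel BUG at %.50s:%d\n", file, __LINE__);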
Jan Beulich [Wed, 11 Jan 2006 21:46:45 +0000 (22:46 +0100)]
[PATCH] x86_64: Reduce screen space needed by stack trace
Especially under Xen, where the console cannot be adjusted to more than 25
lines, it is fairly important that the information displayed during a panic
is as compact as possible. The adjustments below work towards that.
Jan Beulich [Wed, 11 Jan 2006 21:46:42 +0000 (22:46 +0100)]
[PATCH] x86_64: Fix get_cmos_time()
Due to a broken condition, the body of the loop that is intended to wait for
the Update-In-Progress bit to get set and then cleared again was never
entered; in fact, the entire loop was optimized out by the compiler. Here is
a change to fix the condition (and to also move the initialization of locals
out of the spinlock-protected region).
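The intended logic, roughly (a sketch; the real code reads the RTC under
rtc_lock):

        /* wait for the Update-In-Progress bit to go high and then low
         * again, so the clock registers are read in a stable window */
        unsigned int i, uip_seen = 0;

        for (i = 0; i < 1000000; i++) {
                unsigned char uip = CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP;

                if (uip)
                        uip_seen = 1;
                else if (uip_seen)
                        break;          /* UIP was set and has cleared */
        }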
Andi Kleen [Wed, 11 Jan 2006 21:46:36 +0000 (22:46 +0100)]
[PATCH] x86_64: Remove unused AMD K8 C stepping flag
X86_FEATURE_K8_C was a synthetic Linux CPUID flag that was used for some
code optimizations in Opteron C stepping or later. But support for pre C
stepping optimizations has been removed, so this isn't needed anymore.
Vivek Goyal [Wed, 11 Jan 2006 21:46:21 +0000 (22:46 +0100)]
[PATCH] x86_64: ioapic virtual wire mode fix
o Currently, during kexec reboot, IOAPIC is re-programmed back to virtual
wire mode if there was an i8259 connected to it. This enables getting
timer interrupts in the second kernel in legacy mode.
o After being put into virtual wire mode, the IOAPIC delivers the i8259
  interrupts to CPU0. This works well for kexec but not for kdump, as we
  might crash on a different CPU and the second kernel will not see timer
  interrupts.
o This patch modifies the redirection table entry to deliver the timer
  interrupts to the CPU we are rebooting on (instead of hardcoding zero).
  This ensures that the second kernel receives timer interrupts even on a
  non-boot CPU.
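The essence of the change (a sketch, assuming physical destination mode):

        /* when restoring virtual wire mode for the timer pin, target the
         * APIC of the CPU performing the reboot, not a hardcoded CPU0 */
        entry.dest.physical.physical_dest = GET_APIC_ID(apic_read(APIC_ID));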
[PATCH] x86_64: Inclusion of ScaleMP vSMP architecture patches - vsmp_arch
Introduce vSMP arch to the kernel.
This patch:
1. Adds CONFIG_X86_VSMP
2. Adds machine specific macros for local_irq_disabled, local_irq_enabled
and irqs_disabled
3. Writes to the vSMP CTL device to indicate kernel compiled with CONFIG_VSMP
[PATCH] x86_64: Inclusion of ScaleMP vSMP architecture patches - vsmp_align
vSMP specific alignment patch to
1. Define INTERNODE_CACHE_SHIFT for vSMP
2. Use this for alignment of critical structures
3. Use INTERNODE_CACHE_SHIFT for ARCH_MIN_TASKALIGN,
and let the slab align task_struct allocations to the internode cacheline size
4. Introduce and use ARCH_MIN_MMSTRUCT_ALIGN for mm_struct slab allocations.
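In outline (a sketch; the 4kB internode line size here is an assumption,
not a value taken from the patch):

        #ifdef CONFIG_X86_VSMP
        #define INTERNODE_CACHE_SHIFT   12      /* assumed: 4kB line */
        #else
        #define INTERNODE_CACHE_SHIFT   L1_CACHE_SHIFT
        #endif

        /* let the slab align these structures to the internode line */
        #define ARCH_MIN_TASKALIGN      (1 << INTERNODE_CACHE_SHIFT)
        #define ARCH_MIN_MMSTRUCT_ALIGN (1 << INTERNODE_CACHE_SHIFT)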
Andi Kleen [Wed, 11 Jan 2006 21:46:12 +0000 (22:46 +0100)]
[PATCH] x86_64: Make sure BITS_PER_ATOMIC is defined in asm-generic/atomic.h
Fixes
CC fs/nfsctl.o
In file included from include2/asm/atomic.h:427,
from /home/lsrc/quilt/linux/include/linux/file.h:8,
from /home/lsrc/quilt/linux/fs/nfsctl.c:8:
/home/lsrc/quilt/linux/include/asm-generic/atomic.h:20:5: warning: "BITS_PER_LONG" is not defined
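The fix presumably amounts to pulling in the header that defines
BITS_PER_LONG before it is tested (sketch):

        /* at the top of include/asm-generic/atomic.h */
        #include <asm/types.h>  /* provides BITS_PER_LONG */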
[PATCH] x86_64: Memorize location of i8259 for reboots.
Currently we attempt to restore virtual wire mode on reboot, which only
works if we can figure out where the i8259 is connected. This is very
useful when we kexec another kernel, and likely helpful with a peculiar
BIOS that makes assumptions about how the system is set up.
Since the ACPI MADT table does not provide the location where the i8259 is
connected, we have to look at the hardware to figure it out.
Most systems have the i8259 connected to the local APIC of the CPU, so they
won't be affected, but people running Opteron and some ServerWorks chipsets
should be able to use kexec now.
In addition this patch removes the hard-coded assumption that the io_apic
that delivers ISA interrupts is always known to the kernel as io_apic 0.
There does not appear to be anything to guarantee that assumption is true.
And from: Vivek Goyal <vgoyal@in.ibm.com>
A minor fix to the patch that remembers where the i8259 is connected: the
counter i has been replaced by apic, since i held a junk value that led to
non-detection of an i8259 connected to an IOAPIC.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Chuck Ebbert [Wed, 11 Jan 2006 21:46:03 +0000 (22:46 +0100)]
[PATCH] x86_64: allow setting RF in EFLAGS
Setting RF (resume flag) allows a debugger to resume execution after a code
breakpoint without tripping the breakpoint again. It is reset by the CPU
after executing one instruction.
Fixes sparse warnings:
arch/x86_64/kernel/mce_amd.c:321:29: warning: Using plain integer as NULL pointer
arch/x86_64/kernel/mce_amd.c:410:41: warning: Using plain integer as NULL pointer
Andi Kleen [Wed, 11 Jan 2006 21:45:45 +0000 (22:45 +0100)]
[PATCH] x86_64: Fix warning in nmi.c on uniprocessor kernels
Fix
CC arch/x86_64/kernel/nmi.o
linux/arch/x86_64/kernel/nmi.c: In function 'check_nmi_watchdog':
linux/arch/x86_64/kernel/nmi.c:155: warning: statement with no effect
The patch uses a static PDA array early at boot and reallocates each
processor's PDA with node-local memory once kmalloc is ready, just before
pda_init.
The boot_cpu_pda is needed since the cpu_pda is used even before pda_init
for that CPU is called (to set up the static per-cpu area offset table,
etc.).
[PATCH] x86_64: Early initialization of cpu_to_node
Patch enables early initialization of cpu_to_node.
apicid_to_node is built by reading the SRAT table, from acpi_numa_init with
ACPI_NUMA and k8_scan_nodes with K8_NUMA.
x86_cpu_to_apicid is built by parsing the ACPI MADT table, from acpi_boot_init.
We combine these two tables and setup cpu_to_node.
Early initialization helps the static per_cpu_areas in getting pages from
the correct node.
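Combining the two tables is straightforward (a sketch of an
init_cpu_to_node()-style loop; details assumed):

        /* cpu -> apicid comes from the MADT, apicid -> node from the
         * SRAT/K8 scan; compose the two to get cpu -> node */
        static void __init init_cpu_to_node(void)
        {
                int cpu;

                for (cpu = 0; cpu < NR_CPUS; cpu++) {
                        u8 apicid = x86_cpu_to_apicid[cpu];

                        if (apicid == BAD_APICID)
                                continue;
                        if (apicid_to_node[apicid] == NUMA_NO_NODE)
                                continue;
                        numa_set_node(cpu, apicid_to_node[apicid]);
                }
        }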
Change since last release:
Do not initialize cpu_to_node early for the fake-node cases.
Patch tested on TYAN dual core 4P board with K8 only, ACPI_NUMA.
Tested on EM64T NUMA. Also tested with numa=off, numa=fake, and by running
a kernel compiled with NUMA on a regular EM64T 2-way SMP system.
Andi Kleen [Wed, 11 Jan 2006 21:45:27 +0000 (22:45 +0100)]
[PATCH] i386: Replace broken serialize_cpu in microcode driver with correct sync_core
Passing random input values in eax to cpuid is not a good idea
because the CPU will GPF for unknown ones.
Use the correct x86-64 version, which has existed for a long time.
This also adds a memory barrier to prevent the optimizer from
reordering.
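For reference, the x86-64 sync_core() serializes with a cpuid on a fixed,
architecturally defined leaf (a sketch matching the long-standing version):

        /* leaf 1 always exists, so this cannot fault; the "memory"
         * clobber also keeps the compiler from reordering around it */
        static inline void sync_core(void)
        {
                int tmp;

                asm volatile("cpuid"
                             : "=a" (tmp)
                             : "0" (1)
                             : "ebx", "ecx", "edx", "memory");
        }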
Andi Kleen [Wed, 11 Jan 2006 21:45:21 +0000 (22:45 +0100)]
[PATCH] x86_64: Support alternative() in vsyscalls
The real vsyscall .text addresses are not mapped when the alternative()
replacement runs early, so use some black magic to access them using
the direct mapping.
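The trick, roughly (a sketch; symbol names approximate):

        /* the vsyscall fixmap is not usable this early, so patch the
         * same bytes through the kernel's direct mapping instead */
        if (instr >= (u8 *)VSYSCALL_START && instr < (u8 *)VSYSCALL_END)
                instr = __va(instr - (u8 *)VSYSCALL_START
                             + (u8 *)__pa_symbol(&__vsyscall_0));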
Andi Kleen [Wed, 11 Jan 2006 21:44:45 +0000 (22:44 +0100)]
[PATCH] x86_64: Clean up copy_*_user
- Remove optimization for old B stepping Opteron
- Make the fast path for copies whose length is a multiple of eight faster.
- Minor instruction rearrangement to hopefully avoid a pipeline
stall or two.
- Add comment about errata to consider.
Muli Ben-Yehuda [Wed, 11 Jan 2006 21:44:42 +0000 (22:44 +0100)]
[PATCH] x86_64: Use function pointers to call DMA mapping functions
AK: I hacked Muli's original patch a lot and there were a lot
of changes - all bugs are probably to blame on me now.
There were also some changes in the fallback behaviour
for swiotlb - in particular it doesn't try to use GFP_DMA
anymore. Also all DMA mapping operations use the
same core dma_alloc_coherent code with proper fallbacks now.
And various other changes and cleanups.
Known problems: iommu=force swiotlb=force together breaks;
needs more testing.
This patch cleans up x86_64's DMA mapping dispatching code. Right now
we have three possible IOMMU types: AGP GART, swiotlb and nommu, and
in the future we will also have Xen's x86_64 swiotlb and other HW
IOMMUs for x86_64. In order to support all of them cleanly, this
patch:
- introduces a struct dma_mapping_ops with function pointers for each
of the DMA mapping operations of gart (AMD HW IOMMU), swiotlb
(software IOMMU) and nommu (no IOMMU).
- gets rid of:
if (swiotlb)
return swiotlb_xxx();
- PCI_DMA_BUS_IS_PHYS is now checked against the dma_ops being set
This makes swiotlb faster by avoiding double copying in some cases.
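The shape of the dispatch structure, abridged (a sketch; the real struct
carries more hooks than shown here):

        struct dma_mapping_ops {
                void *(*alloc_coherent)(struct device *dev, size_t size,
                                        dma_addr_t *dma_handle, gfp_t gfp);
                void (*free_coherent)(struct device *dev, size_t size,
                                      void *vaddr, dma_addr_t dma_handle);
                dma_addr_t (*map_single)(struct device *hwdev, void *ptr,
                                         size_t size, int direction);
                void (*unmap_single)(struct device *dev, dma_addr_t addr,
                                     size_t size, int direction);
                int (*map_sg)(struct device *hwdev, struct scatterlist *sg,
                              int nents, int direction);
                /* ... */
        };

        extern struct dma_mapping_ops *dma_ops;

        /* callers dispatch indirectly instead of testing for swiotlb: */
        static inline dma_addr_t dma_map_single(struct device *hwdev,
                                                void *ptr, size_t size,
                                                int direction)
        {
                return dma_ops->map_single(hwdev, ptr, size, direction);
        }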
Signed-off-by: Muli Ben-Yehuda <mulix@mulix.org>
Signed-off-by: Jon D. Mason <jdmason@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andi Kleen [Wed, 11 Jan 2006 21:44:39 +0000 (22:44 +0100)]
[PATCH] x86_64: Reject SRAT tables that don't cover all memory
A broken BIOS on Iwill 8-way systems reports such tables, and this causes
the bootmem allocator to crash. Add a sanity check whether all the PXMs in the
SRAT table cover all memory as reported by e820. If the sanity
check fails the SRAT is rejected and the code will fall back
to discover the NUMA topology using the K8 northbridge registers
when applicable.
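In outline (a sketch; the helper names are illustrative):

        /* compare e820 RAM against the union of the SRAT node spans; if
         * any RAM falls outside every PXM, the table is unusable */
        if (!srat_covers_all_e820_memory()) {
                bad_srat();     /* reject SRAT, fall back to K8 scan */
                return;
        }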
Andi Kleen [Wed, 11 Jan 2006 21:44:36 +0000 (22:44 +0100)]
[PATCH] x86_64: Add idle notifiers
This adds a new notifier chain that is called with IDLE_START
when a CPU goes idle and IDLE_END when it goes out of idle.
The context can be the idle thread or interrupt context.
Since we cannot rely on MONITOR/MWAIT existing, the idle-end check
currently has to be done in all interrupt handlers.
They were originally inspired by the similar s390 implementation.
They have a variety of applications:
- They will be needed for CONFIG_NO_IDLE_HZ
- They can be used for oprofile to fix up the missing time
in idle when performance counters don't tick.
- They can be used for better C state management in ACPI
- They could be used for microstate accounting.
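The mechanism in miniature (a sketch using the 2.6-era notifier API; the
per-CPU idle-state bookkeeping of the real code is omitted):

        static struct notifier_block *idle_notifier;    /* chain head */

        void enter_idle(void)
        {
                notifier_call_chain(&idle_notifier, IDLE_START, NULL);
        }

        /* called from the idle loop and from interrupt handlers, since
         * without MONITOR/MWAIT that is where idle visibly ends */
        void exit_idle(void)
        {
                notifier_call_chain(&idle_notifier, IDLE_END, NULL);
        }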
[PATCH] x86_64: Handle missing local APIC timer interrupts on C3 state
Whenever we see that a CPU is capable of C3 (during ACPI C-state init), we
disable the local APIC timer and switch to using a broadcast from the
external timer interrupt (IRQ 0).
[PATCH] i386: Handle missing local APIC timer interrupts on C3 state
Whenever we see that a CPU is capable of C3 (during ACPI C-state init), we
disable the local APIC timer and switch to using a broadcast from the
external timer interrupt (IRQ 0). This is needed because Intel CPUs stop
the local APIC timer in C3. This is currently only enabled for Intel CPUs.
The patch below adds the code for i386 and also the ACPI hunk.
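Schematically (a sketch; the helper name is illustrative, not the exact
API added by the patch):

        /* during ACPI C-state setup: if this CPU can enter C3, stop
         * relying on its local APIC timer and let IRQ0 broadcasts
         * drive its ticks instead */
        if (cx->type == ACPI_STATE_C3)
                switch_apic_timer_to_ipi_broadcast(smp_processor_id());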
[PATCH] i386/x86-64: Remove sub jiffy profile timer support
Remove the finer control of the local APIC timer. We cannot provide sub-jiffy
control like this when we use a broadcast from the external timer in place of
the local APIC. Instead of removing this only on systems that may end up using
the broadcast (due to C3), I am going the "I'm feeling lucky" way and removing
it fully. Basically, I am not sure about the usefulness of this code today, and
few other architectures seem to support it either.
If you are using profiling and fine grained control and don't like this going
away in normal case, yell at me right now.
John Blackwood [Wed, 11 Jan 2006 21:44:15 +0000 (22:44 +0100)]
[PATCH] x86_64: Report hardware breakpoints in user space when triggered by the kernel
I would like to throw out a suggestion for a possible change in the way that
the debug register traps are handled in do_debug() when the trap occurs
in kernel-mode.
In the x86_64 version of do_debug(), the code will skip around sending
a SIGTRAP to the current task if the trap occurred while in kernel mode.
On the i386-side of things, if the access happens to occur in kernel mode
(say during a read(2) of user's buffer that matches the address of a
debug register trap), then the do_debug() routine for i386 will go ahead
and call send_sigtrap() and send the SIGTRAP signal. The send_sigtrap()
code will also set the info.si_addr to NULL in this case (even though I
don't understand why, since the SIGTRAP siginfo processing doesn't use
the si_addr field...).
So I would like to suggest that the x86_64 do_debug() routine also
follow this type of behavior and have it go ahead and send the
SIGTRAP signal to the current task, even if the debug register trap
happens to have occurred in kernel mode. I have taken a stab at
a patch for this change below. (It includes the i386-ish change
for setting si_addr to NULL when the trap occurred in kernel mode.)
It seems like a useful feature to be able to 'watch' a user location that
might also be modified in the kernel via a system service call, and have the
debugger report that information back to the user, rather than to just
silently ignore the trap.
Additionally, I realize that users that pull in a kernel debugger such as
KGDB into their kernel might want to remove this change below when they add
in KGDB support. However, they could alternatively look at the current
task's thread.debugreg[] values to see if the trap occurred due to KGDB
or instead because of a user-space debugger trap, and still honor the
user SIGTRAP processing (instead of the KGDB breakpoint processing)
if the trap matches up with the thread.debugreg[] registers.
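The behavioral change amounts to something like this at the tail of the
x86_64 do_debug() (a sketch, following the i386 convention for si_addr):

        /* deliver SIGTRAP even for traps taken in kernel mode, so the
         * debugger sees watchpoints hit via e.g. read(2) of a watched
         * user buffer; si_addr is NULL in that case, as on i386 */
        info.si_signo = SIGTRAP;
        info.si_errno = 0;
        info.si_code  = TRAP_BRKPT;
        info.si_addr  = user_mode(regs) ? (void __user *)regs->rip : NULL;
        force_sig_info(SIGTRAP, &info, tsk);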
Andi Kleen [Wed, 11 Jan 2006 21:43:54 +0000 (22:43 +0100)]
[PATCH] x86_64: Allow compilation on a 32bit biarch toolchain
This might help on distributions that use a 32bit biarch compiler.
First, pass -m64 by default.
Secondly, add some more .code32s because at least the Ubuntu biarch
32-bit 'as', as invoked by gcc, doesn't seem to handle the -m64 -m32
combination generated by the Makefile without such assistance.
And finally make sure the linker script can be preprocessed
with a 32bit cpp.
Ross Biro [Wed, 11 Jan 2006 21:43:51 +0000 (22:43 +0100)]
[PATCH] x86_64: Make udelay more accurate
The attempt to avoid overflow in __delay caused varying precision
on different CPUs depending on differences in the CPU speed.
We should be able to do this multiplication without overflowing, provided
the CPU is running at less than about 128 GHz: xloops < 20000 * 0x10c6, and
loops_per_jiffy * HZ <= cpu_clock_speed, so as long as the CPU clock speed
is < 2^64 / (20000 * 0x10c6) = 2^64 / 0x51E6CC0 < 2^64 / 2^27 = 2^37 = 128G,
the calculation will not overflow.
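For concreteness, the resulting code looks roughly like this (a sketch of
the 2.6-era x86-64 delay path, not the verbatim patch):

        /* 0x10c6 = 2^32 / 10^6 (rounded): converts microseconds into
         * 2^-32-second units, so one 64-bit multiply plus a shift gives
         * the loop count without precision varying across CPU speeds */
        inline void __const_udelay(unsigned long xloops)
        {
                __delay(((xloops * HZ * loops_per_jiffy) >> 32) + 1);
        }

        void __udelay(unsigned long usecs)
        {
                __const_udelay(usecs * 0x000010c6);
        }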