Randy Dunlap [Tue, 1 Apr 2008 22:44:01 +0000 (15:44 -0700)]
x86: fix VisualWS and Voyager kexec build failures
without this patch:
VOYAGER:
kernel/built-in.o: In function `crash_kexec':
(.text+0x28588): undefined reference to `machine_crash_shutdown'
VISWS:
kernel/built-in.o: In function `crash_kexec':
/next-20080401/kernel/kexec.c:1074: undefined reference to `machine_crash_shutdown'
make[1]: *** [.tmp_vmlinux1] Error 1
because arch/x86/kernel/reboot.c isn't built since CONFIG_X86_BIOS_REBOOT=n,
so machine_crash_shutdown() isn't available.
This patch does seem a bit odd, since the KEXEC help text says that
kexec is independent of the system firmware.
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
I've now noticed that the machine I call MPENTIUM4 for 32-bit kernels
is called MPSC for 64-bit kernels, and in that case it still doesn't
get the P6 NOPs it ought to. hpa explains that MK8 should still be
excluded, so it's just a matter of including MPSC along with MPENTIUM4.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86: debug Store - call kfree only if we really need it
We should call kfree() only if we really need it.
Although it is safe to call kfree() with a NULL pointer,
in this code we have already tested the pointer and can
eliminate the redundant call.
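As an illustration of the pattern (a hedged sketch using malloc()/free() stand-ins, not the actual Debug Store code):

#include <stdlib.h>

/*
 * Illustrative sketch only.  The point: once the allocation has already
 * been tested, the path that knows the pointer is NULL does not need to
 * call free()/kfree() at all.
 */
static void *setup_buffer(size_t size)
{
        void *buf = malloc(size);

        if (!buf)
                return NULL;    /* pointer already tested: no free() needed here */

        /* ... initialize the buffer ... */
        return buf;             /* the only path that still owns an allocation */
}

int main(void)
{
        void *buf = setup_buffer(64);

        if (buf)
                free(buf);      /* free only when we really allocated something */
        return 0;
}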
x86: extend the scheduled bzImage symlinks removal
Use of the bzImage symlinks in developer scripts is still widespread,
so let's extend the removal period by 2 years. These symlinks cost
us next to nothing.
Jack Steiner [Fri, 28 Mar 2008 19:12:16 +0000 (14:12 -0500)]
x86: support for new UV apic
UV supports really big systems. So big, in fact, that the APICID register
does not contain enough bits to hold an APICID that is unique across all
cpus.
The UV BIOS supports 3 APICID modes:
- legacy mode. This mode uses the old APIC mode where
APICID is in bits [31:24] of the APICID register.
- x2apic mode. This mode is whitebox-compatible. APICIDs
are unique across all cpus. Standard x2apic APIC operations
(Intel-defined) can be used for IPIs. The node identifier
fits within the Intel-defined portion of the APICID register.
- x2apic-uv mode. In this mode, the APICIDs on each node are
unique, but IDs on different nodes are not. For example,
if each node has 32 cpus, the APICIDs on each node might be
0 - 31. Every node has the same set of IDs.
The UV hub is used to route IPIs/interrupts to the correct node.
Traditional APIC operations WILL NOT WORK.
In x2apic-uv mode, the ACPI tables all contain a full unique ID (note:
exact bit layout still changing but the following is close):
nnnnnnnnnnlc0cch
n = unique node number
l = socket number on board
c = core
h = hyperthread
Only the "lc0cch" bits are written to the APICID register. The remaining bits are
supplied by having the get_apic_id() function "OR" the extra bits into the value
read from the APICID register. (Hmmm.. why not keep the ENTIRE APICID register
in per-cpu data....)
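A minimal sketch of that reconstruction; the shift, the sample values and the helper names below are illustrative assumptions, not the real UV register layout (which, as noted, was still changing):

#include <stdio.h>

/* Hypothetical stand-ins -- not the real UV code or register layout. */
static unsigned int uv_node_extra_bits;         /* the "n..." node bits, cached per cpu */

static unsigned int read_apicid_register(void)
{
        return 0x2b;                            /* pretend hardware value: the "lc0cch" bits */
}

static unsigned int uv_get_apic_id(void)
{
        /* OR the extra node bits into the value read from the APICID register */
        return uv_node_extra_bits | read_apicid_register();
}

int main(void)
{
        uv_node_extra_bits = 3u << 6;           /* pretend this cpu sits on node 3 */
        printf("full apicid: 0x%x\n", uv_get_apic_id());
        return 0;
}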
The x2apic-uv mode is recognized by the MADT table containing:
oem_id = "SGI"
oem_table_id = "UV-X"
Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Jack Steiner [Fri, 28 Mar 2008 19:12:14 +0000 (14:12 -0500)]
x86: define the macros and tables for blade functions
Add UV macros for converting between cpu numbers, blade numbers
and node numbers. Note that these are used ONLY within x86_64 UV
modules, and are not for general kernel use.
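A hedged sketch of what lookup-table-backed conversions of this kind look like; the table names, sizes and contents are made up for illustration and are not the actual UV definitions:

#include <stdio.h>

/* Hypothetical example tables: 8 cpus, 2 cpus per blade, one blade per node. */
#define EX_NR_CPUS   8
#define EX_NR_BLADES 4

static int ex_cpu_to_blade[EX_NR_CPUS]    = { 0, 0, 1, 1, 2, 2, 3, 3 };
static int ex_blade_to_node[EX_NR_BLADES] = { 0, 1, 2, 3 };

/* Conversions are simple table lookups kept local to the UV code. */
#define ex_cpu_to_node(cpu)   (ex_blade_to_node[ex_cpu_to_blade[(cpu)]])

int main(void)
{
        int cpu;

        for (cpu = 0; cpu < EX_NR_CPUS; cpu++)
                printf("cpu %d -> blade %d -> node %d\n",
                       cpu, ex_cpu_to_blade[cpu], ex_cpu_to_node(cpu));
        return 0;
}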
Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Jack Steiner [Fri, 28 Mar 2008 19:12:09 +0000 (14:12 -0500)]
x86: parsing for ACPI "SAPIC" table
Add kernel support for new ACPI "sapic" tables that contain 16-bit APICIDs.
This patch simply adds parsing of an optional SAPIC table if present.
Otherwise, the traditional local APIC table is used.
Note: the SAPIC table is not a new ACPI table - it exists on other architectures
but is not currently recognized by x86_64.
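A rough sketch of the fallback described here, with hypothetical helpers (parse_sapic_entries(), parse_lapic_entries()) standing in for the real ACPI table parsing:

#include <stdio.h>

/* Hypothetical helpers -- stand-ins for the real ACPI MADT parsing. */
static int parse_sapic_entries(void)
{
        return 0;       /* pretend: no 16-bit SAPIC entries found */
}

static void parse_lapic_entries(void)
{
        printf("falling back to traditional local APIC entries\n");
}

int main(void)
{
        /* Use the optional SAPIC entries if present, otherwise the LAPIC table. */
        if (parse_sapic_entries() <= 0)
                parse_lapic_entries();
        return 0;
}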
Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Jack Steiner [Fri, 28 Mar 2008 19:12:08 +0000 (14:12 -0500)]
x86: increase size of APICID
Increase the number of bits in an apicid from 8 to 32.
By default, MP_processor_info() gets the APICID from the
mpc_config_processor structure. However, this structure limits
the size of APICID to 8 bits. This patch allows the caller of
MP_processor_info() to optionally pass a larger APICID that will
be used instead of the one in the mpc_config_processor struct.
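A hedged sketch of the "caller may pass a wider APICID" idea; the struct and function below are simplified stand-ins, not the real mpc_config_processor or MP_processor_info() definitions:

#include <stdio.h>

/* Simplified stand-in for the MP-table processor entry (8-bit apicid field). */
struct ex_mpc_processor {
        unsigned char apicid;
};

/*
 * If the caller passes a non-negative 'apicid' it is used instead of the
 * 8-bit value from the MP-table entry (sketch of the idea only).
 */
static void ex_processor_info(struct ex_mpc_processor *m, long apicid)
{
        unsigned int id = (apicid >= 0) ? (unsigned int)apicid : m->apicid;

        printf("registering cpu with apicid 0x%x\n", id);
}

int main(void)
{
        struct ex_mpc_processor m = { .apicid = 0x1f };

        ex_processor_info(&m, -1);       /* use the 8-bit value from the table */
        ex_processor_info(&m, 0x1234);   /* use the wider value supplied by the caller */
        return 0;
}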
Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Jack Steiner [Fri, 28 Mar 2008 19:12:06 +0000 (14:12 -0500)]
x86: add functions to determine if platform is a UV platform
Add functions that can be used to determine if an x86_64
system is an SGI "UV" system. UV systems come in 3 types and
are identified by the OEM ID in the MADT.
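A minimal sketch of an OEM-id check of this kind, using plain string comparisons and assuming the ids have already been copied out of the MADT header; it is not the actual detection code:

#include <stdio.h>
#include <string.h>

/* Hypothetical cached values, as if copied out of the MADT header at boot. */
static const char *madt_oem_id       = "SGI";
static const char *madt_oem_table_id = "UV-X";

static int example_is_uv_system(void)
{
        /* The UV platform is identified purely by the OEM ids in the MADT. */
        return strcmp(madt_oem_id, "SGI") == 0 &&
               strncmp(madt_oem_table_id, "UV", 2) == 0;
}

int main(void)
{
        printf("is UV system: %d\n", example_is_uv_system());
        return 0;
}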
Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch renames VM_MASK to X86_VM_MASK (which
in turn is defined as an alias of X86_EFLAGS_VM) to better
distinguish it from virtual memory flags. We can't just
use X86_EFLAGS_VM instead because it is also used
for conditional compilation.
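In concrete terms (the EFLAGS.VM bit value shown is the standard x86 one; the alias itself is what the rename introduces):

#include <stdio.h>

/* EFLAGS.VM -- the virtual-8086 mode flag, bit 17 of EFLAGS. */
#define X86_EFLAGS_VM  0x00020000

/*
 * X86_VM_MASK is simply an alias for the EFLAGS bit; the X86_ prefix keeps
 * it from being confused with virtual-memory (VM_*) flags.
 */
#define X86_VM_MASK    X86_EFLAGS_VM

int main(void)
{
        printf("X86_VM_MASK = 0x%08x\n", X86_VM_MASK);
        return 0;
}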
x86: paravirt_ops: don't steal memory resources in paravirt_disable_iospace
The memory resource is also used for main memory, and we need it to
allocate physical addresses for memory hotplug. Knobbling io space is
enough to get the job done anyway.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
A 1G section size makes memory hotplug too coarse in a virtual
environment. Reduce it by a factor of 2, to 512M. I would have liked
to make it smaller, but then we run out of reserved bits in the page flags.
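In sparsemem terms the change amounts to shaving one bit off the section size. A toy illustration; the macro names below are illustrative, and the 30/29-bit values are inferred from the 1G/512M sizes mentioned above, not read from the patch:

#include <stdio.h>

/* Sparsemem section size is 2^SECTION_SIZE_BITS bytes. */
#define OLD_SECTION_SIZE_BITS  30      /* 1 GB sections   */
#define NEW_SECTION_SIZE_BITS  29      /* 512 MB sections */

int main(void)
{
        printf("old section size: %lu MB\n", (1UL << OLD_SECTION_SIZE_BITS) >> 20);
        printf("new section size: %lu MB\n", (1UL << NEW_SECTION_SIZE_BITS) >> 20);
        return 0;
}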
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Glauber Costa [Thu, 27 Mar 2008 17:06:03 +0000 (14:06 -0300)]
x86: change naming of cpu_initialized_mask for xen
Xen does not use the global cpu_initialized mask, but rather
a specific one of its own. So rename Xen's mask so it won't conflict with the
upcoming move of cpu_initialized_mask from smp_64.h to smp_32.h.
Signed-off-by: Glauber Costa <gcosta@redhat.com>
CC: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Glauber Costa [Thu, 27 Mar 2008 17:06:01 +0000 (14:06 -0300)]
x86: split safe_smp_processor_id
The x86_64 implementation is clean and consistent, but we
sacrifice it for the sake of matching i386 (since going the other
way around would be harder).
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Glauber Costa [Thu, 27 Mar 2008 17:05:59 +0000 (14:05 -0300)]
x86: surround apic headers in apic definitions
Those constants are always defined on x86_64,
so the effect is just to include the headers
in the very way we did before; still, I'm doing this in a separate
patch to be conservative and avoid surprises.
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Glauber Costa [Thu, 27 Mar 2008 17:05:58 +0000 (14:05 -0300)]
x86: merge hard/logical_smp_processor_id
The code is now the same between i386 and x86_64. We already
know what happens when it reaches this point: it goes away
from the arch-specific headers and appears in the common
header.
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Glauber Costa [Thu, 27 Mar 2008 17:05:57 +0000 (14:05 -0300)]
x86: provide bogus hard_smp_processor_id
We provide a bogus macro for x86_64 in case CONFIG_X86_LOCAL_APIC
is not set. It will always be set for x86_64, so the effect
is just to make the code equal to i386.
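Roughly what such a fallback looks like; a sketch of the idea, the exact definition and guard in the real header may differ:

#include <stdio.h>

/*
 * Sketch of the fallback: without a local APIC there is only one possible
 * hardware cpu id, so the "bogus" definition simply evaluates to 0.
 * (Whether CONFIG_X86_LOCAL_APIC is set comes from Kconfig in the kernel.)
 */
#ifndef CONFIG_X86_LOCAL_APIC
#define hard_smp_processor_id()  0
#endif

int main(void)
{
        printf("hard_smp_processor_id() = %d\n", hard_smp_processor_id());
        return 0;
}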
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arjan van de Ven [Thu, 17 Apr 2008 15:41:31 +0000 (17:41 +0200)]
x86: add comments to describe the new api's in cacheflush.h
The new cacheflush.h APIs didn't yet have any comments describing
how they are to be used and the conventions around these functions.
This patch adds comments to this effect; in order for that to be
a logical series, some prototypes had to move around.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Jesper Juhl [Wed, 26 Mar 2008 01:16:15 +0000 (02:16 +0100)]
x86 floppy: kill off the 'register' keyword from header
When compilers became generally better at optimizing code than humans, the
register keyword became mostly useless. For the floppy driver it certainly
is, since it is so slow compared to the rest of the system that optimizing
access to a single variable or two isn't going to make any real difference.
So let's just leave it to the compiler - it'll do a better job anyway.
This patch does away with a few register keywords in the x86 floppy driver.
Andi Kleen [Wed, 12 Mar 2008 02:53:32 +0000 (03:53 +0100)]
x86: split large page mapping for AMD TSEG
On AMD SMM protected memory is part of the address map, but handled
internally like an MTRR. That leads to large pages getting split
internally which has some performance implications. Check for the
AMD TSEG MSR and split the large page mapping on that area
explicitly if it is part of the direct mapping.
There is also SMM ASEG, but it is in the first 1MB and already covered by
the earlier split first page patch.
The idea for this came from an earlier patch by Andreas Herrmann.
On a RevF dual Socket Opteron system kernbench shows a clear
improvement from this:
(together with the earlier patches in this series, especially the
split first 2MB patch)
[lower is better]
                 no split    stddev       split      stddev      delta
Elapsed Time       87.146   (0.727516)    84.296    (1.09098)    -3.2%
User Time         274.537   (4.05226)    273.692    (3.34344)    -0.3%
System Time        34.907   (0.42492)     34.508    (0.26832)    -1.1%
Percent CPU       322.5    (38.3007)     326.5     (44.5128)     +1.2%
=> About 3.2% improvement in elapsed time for kernbench.
With GB pages on AMD Fam10h the impact of splitting is much higher of course,
since it would split two full GB pages (together with the first
1MB split patch) instead of two 2MB pages. I could not benchmark
a clear difference in kernbench on gbpages, so I kept it disabled
for that case.
That was only limited benchmarking of course, so if someone
is interested in running more tests for the gbpages case,
that could be revisited (contributions welcome).
I didn't bother implementing this for 32bit because it is very
unlikely the 32bit lowmem mapping overlaps into the TSEG near 4GB
and the 2MB low split is already handled for both.
[ mingo@elte.hu: do it on gbpages kernels too, there's no clear reason
why it shouldn't help there. ]
Andi Kleen [Wed, 12 Mar 2008 02:53:30 +0000 (03:53 +0100)]
x86: don't use large pages to map the first 2/4MB of memory
Intel recommends not using large pages for the first 1MB
of physical memory because there are fixed-size MTRRs there
which cause splitups in the TLBs.
On AMD doing so is also a good idea.
The implementation is a little different between 32bit and 64bit.
On 32bit I just taught the initial page table setup about this,
because it was very simple to do. This also has the advantage
that the risk of a prefetch ever seeing the large page, even
if it only exists for a short time, is minimized.
On 64bit that is not quite possible, so use set_memory_4k() a little
later (in check_bugs) instead.
Andi Kleen [Wed, 12 Mar 2008 02:53:29 +0000 (03:53 +0100)]
x86: add set_memory_4k to pageattr.c
Add a new function to force split large pages into 4k pages.
This is needed for some followup optimizations.
I had to add a new field to cpa_data to pass down the information
that try_preserve_large_page should not run.
There is no set_page_4k() right now because I didn't need it, and all the
specialized users I have in mind would be more comfortable with
pure addresses. I also didn't export it because it's unlikely that
external code needs it.
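As a usage sketch, with a stub standing in for the real pageattr implementation; the address-plus-page-count shape mirrors the other set_memory_*() helpers, but treat it as an assumption rather than the exact kernel prototype:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Stub standing in for the new pageattr helper: force the mapping of the
 * given range to be backed by 4k pages (no large-page preservation).
 */
static int set_memory_4k(unsigned long addr, int numpages)
{
        printf("forcing 4k mappings for %d page(s) starting at 0x%lx\n",
               numpages, addr);
        return 0;
}

int main(void)
{
        /* e.g. split the mapping of the first 2MB back into 4k pages */
        return set_memory_4k(0, (2UL << 20) / PAGE_SIZE);
}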
Andi Kleen [Wed, 12 Mar 2008 02:53:27 +0000 (03:53 +0100)]
x86: implement true end_pfn_mapped for 32bit
Even on 32bit 2MB pages can map more memory than is in the true
max_low_pfn if end_pfn is not highmem and not aligned to 2MB.
Add an end_pfn_map similar to x86-64 that accounts for this
fact. This is important for code that really needs to know about
all mapping aliases.
Andi Kleen [Tue, 11 Mar 2008 01:23:21 +0000 (02:23 +0100)]
x86: move early exception handlers into init.text
Currently they are in .text.head because that is where the rest of head_64.S lives.
.text.head is not discarded like init data, but the early exception handlers
should be, because they are not needed after early boot of the BP.
So move them over.
Andi Kleen [Tue, 11 Mar 2008 01:23:22 +0000 (02:23 +0100)]
x86: replace early exception setup macro recursion with loop
The early exception handlers are currently set up using a macro
recursion. There is only one user left. Replace the macro with a
standard loop in place.
* Ingo Molnar (mingo@elte.hu) wrote:
>
> * Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> wrote:
>
> > The shadow vmap for DEBUG_RODATA kernel text modification uses
> > virt_to_page to get the pages from the pointer address.
> >
> > However, I think vmalloc_to_page would be required in case the page is
> > used for modules.
> >
> > Since only the core kernel text is marked read-only, use
> > kernel_text_address() to make sure we only shadow map the core kernel
> > text, not modules.
>
> actually, i think we should mark module text readonly too.
>
Yes, but in the meantime, the x86 tree would need this patch to make
kprobes work correctly on modules.
I suspect that without this fix, with the enhanced hotplug and kprobes
patch, kprobes will use text_poke to insert breakpoints in modules
(vmalloced pages used), which will map the wrong pages and corrupt
random kernel locations instead of updating the correct page.
Work that would write protect the module pages should clearly be done,
but it can come in a later time. We have to make sure we interact
correctly with the page allocation debugging, as an example.
Here is the patch against x86.git 2.6.25-rc5:
The shadow vmap for DEBUG_RODATA kernel text modification uses virt_to_page to
get the pages from the pointer address.
However, I think vmalloc_to_page would be required in case the page is used for
modules.
Since only the core kernel text is marked read-only, use kernel_text_address()
to make sure we only shadow map the core kernel text, not modules.
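A rough sketch of the intended check, with hypothetical ex_*-prefixed stubs standing in for kernel_text_address(), virt_to_page() and vmalloc_to_page():

#include <stdio.h>

/* Hypothetical stand-ins for the real helpers and address ranges. */
#define EX_TEXT_START 0x1000UL
#define EX_TEXT_END   0x9000UL

static int ex_kernel_text_address(unsigned long addr)
{
        return addr >= EX_TEXT_START && addr < EX_TEXT_END;
}

/*
 * Only core kernel text is shadow-mapped via the virt_to_page() path;
 * module (vmalloc'ed) addresses are refused instead of being translated
 * to the wrong pages.
 */
static int ex_shadow_map(unsigned long addr)
{
        if (!ex_kernel_text_address(addr))
                return -1;      /* module text: would need vmalloc_to_page() */

        printf("shadow mapping core text at 0x%lx\n", addr);
        return 0;
}

int main(void)
{
        ex_shadow_map(0x2000UL);        /* core text: mapped */
        ex_shadow_map(0xffff0000UL);    /* module/vmalloc range: refused */
        return 0;
}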
vSMP detection: access PCI config space early in boot to detect whether the
system is a vSMPowered box, and cache the result in a flag, so that
is_vsmp_box() can always just return the value of the flag.
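A small sketch of the cached-flag pattern being described; the probe function here is a stub, whereas the real code reads PCI config space early in boot:

#include <stdio.h>

static int is_vsmp = -1;                /* -1: not probed yet */

/* Stub for the early-boot probe that really reads PCI config space. */
static int probe_vsmp_hardware(void)
{
        return 0;                       /* pretend: not a vSMPowered box */
}

static void vsmp_init(void)
{
        is_vsmp = probe_vsmp_hardware();        /* done once, early in boot */
}

static int is_vsmp_box(void)
{
        return is_vsmp == 1;            /* later callers just read the cached flag */
}

int main(void)
{
        vsmp_init();
        printf("is_vsmp_box(): %d\n", is_vsmp_box());
        return 0;
}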