From: Hugh Dickins
Date: Tue, 31 Oct 2006 18:41:51 +0000 (+0000)
Subject: [POWERPC] Make mmiowb's io_sync preempt safe
X-Git-Tag: v2.6.19-rc5~75^2~2
X-Git-Url: https://err.no/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=292f86f005e3867277b2126c2399eea3e773a4fc;p=linux-2.6

[POWERPC] Make mmiowb's io_sync preempt safe

If mmiowb() is always used prior to releasing a spinlock, as the
documentation suggests, then it is safe against preemption; but I'm not
convinced that's always the case.

If preemption occurs between the sync and get_paca()->io_sync = 0, I
believe there's no problem.  But in the unlikely event that gcc does the
store relative to a register other than r13 (as it did with current),
there is a small danger of setting another cpu's io_sync to 0 just after
that cpu had set it to 1.  Rewrite the ppc64 mmiowb to prevent that.

The remaining io_sync assignments in io.h all set get_paca()->io_sync to 1,
which is harmless even if preempted to the wrong cpu (the context switch
itself syncs); and those in spinlock.h run with preemption disabled.

Signed-off-by: Hugh Dickins
Signed-off-by: Paul Mackerras
---

diff --git a/include/asm-powerpc/io.h b/include/asm-powerpc/io.h
index 3baff8b0fd..c2c5f14b5f 100644
--- a/include/asm-powerpc/io.h
+++ b/include/asm-powerpc/io.h
@@ -163,8 +163,11 @@ extern void _outsl_ns(volatile u32 __iomem *port, const void *buf, long count);
 
 static inline void mmiowb(void)
 {
-	__asm__ __volatile__ ("sync" : : : "memory");
-	get_paca()->io_sync = 0;
+	unsigned long tmp;
+
+	__asm__ __volatile__("sync; li %0,0; stb %0,%1(13)"
+	: "=&r" (tmp) : "i" (offsetof(struct paca_struct, io_sync))
+	: "memory");
 }
 
 /*
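
For context, here is a minimal sketch of the race the rewrite closes,
assuming gcc spills the paca pointer out of the fixed register r13 into a
temporary; the register numbers and the IO_SYNC offset label below are
illustrative only, not taken from the patch:

	# Original mmiowb() as gcc might compile it:
	mr	r9,r13		# copy paca pointer out of r13
	sync
				# <-- preemption/migration can occur here:
				#     r13 now names the new cpu's paca, but
				#     r9 still points at the old cpu's
	li	r0,0
	stb	r0,IO_SYNC(r9)	# clears io_sync in the *old* cpu's paca

	# Rewritten mmiowb(): the clear is emitted inside one asm statement
	# and addressed directly off r13, so it can only touch the paca of
	# the cpu the task is running on when the store executes, never a
	# stale paca of another cpu:
	sync
	li	r0,0
	stb	r0,IO_SYNC(r13)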