From: Paul E. McKenney
Date: Sun, 1 May 2005 15:59:04 +0000 (-0700)
Subject: [PATCH] Deprecate synchronize_kernel, GPL replacement
X-Git-Tag: v2.6.12-rc4~136^2~91
X-Git-Url: https://err.no/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9b06e818985d139fd9e82c28297f7744e1b484e1;p=linux-2.6

[PATCH] Deprecate synchronize_kernel, GPL replacement

The synchronize_kernel() primitive is used for quite a few different
purposes: waiting for RCU readers, waiting for NMIs, waiting for
interrupts, and so on.  This makes RCU code harder to read, since
synchronize_kernel() might or might not have matching rcu_read_lock()s.

This patch creates a new synchronize_rcu() that is to be used for RCU
readers and a new synchronize_sched() that is used for the rest.  These
two new primitives currently have the same implementation, but this
might well change with additional real-time support.  Both new
primitives are GPL-only; the old primitive is deprecated.

Signed-off-by: Paul E. McKenney
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 4d74743391..fd276adf0f 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -157,9 +157,9 @@ static inline int rcu_pending(int cpu)
 /**
  * rcu_read_lock - mark the beginning of an RCU read-side critical section.
  *
- * When synchronize_kernel() is invoked on one CPU while other CPUs
+ * When synchronize_rcu() is invoked on one CPU while other CPUs
  * are within RCU read-side critical sections, then the
- * synchronize_kernel() is guaranteed to block until after all the other
+ * synchronize_rcu() is guaranteed to block until after all the other
  * CPUs exit their critical sections. Similarly, if call_rcu() is invoked
  * on one CPU while other CPUs are within RCU read-side critical
  * sections, invocation of the corresponding RCU callback is deferred
@@ -256,6 +256,21 @@ static inline int rcu_pending(int cpu)
 		(p) = (v); \
 })
 
+/**
+ * synchronize_sched - block until all CPUs have exited any non-preemptive
+ * kernel code sequences.
+ *
+ * This means that all preempt_disable code sequences, including NMI and
+ * hardware-interrupt handlers, in progress on entry will have completed
+ * before this primitive returns.  However, this does not guarantee that
+ * softirq handlers will have completed, since in some kernels
+ *
+ * This primitive provides the guarantees made by the (deprecated)
+ * synchronize_kernel() API.  In contrast, synchronize_rcu() only
+ * guarantees that rcu_read_lock() sections will have completed.
+ */
+#define synchronize_sched() synchronize_rcu()
+
 extern void rcu_init(void);
 extern void rcu_check_callbacks(int cpu, int user);
 extern void rcu_restart_cpu(int cpu);
@@ -265,7 +280,9 @@ extern void FASTCALL(call_rcu(struct rcu_head *head,
 				void (*func)(struct rcu_head *head)));
 extern void FASTCALL(call_rcu_bh(struct rcu_head *head,
 				void (*func)(struct rcu_head *head)));
-extern void synchronize_kernel(void);
+extern __deprecated_for_modules void synchronize_kernel(void);
+extern void synchronize_rcu(void);
+void synchronize_idle(void);
 
 #endif /* __KERNEL__ */
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
index ad497722f0..f436993bd5 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcupdate.c
@@ -444,15 +444,18 @@ static void wakeme_after_rcu(struct rcu_head *head)
 }
 
 /**
- * synchronize_kernel - wait until a grace period has elapsed.
+ * synchronize_rcu - wait until a grace period has elapsed.
  *
  * Control will return to the caller some time after a full grace
  * period has elapsed, in other words after all currently executing RCU
  * read-side critical sections have completed.  RCU read-side critical
  * sections are delimited by rcu_read_lock() and rcu_read_unlock(),
  * and may be nested.
+ *
+ * If your read-side code is not protected by rcu_read_lock(), do -not-
+ * use synchronize_rcu().
  */
-void synchronize_kernel(void)
+void synchronize_rcu(void)
 {
 	struct rcu_synchronize rcu;
 
@@ -464,7 +467,16 @@ void synchronize_kernel(void)
 	wait_for_completion(&rcu.completion);
 }
 
+/*
+ * Deprecated, use synchronize_rcu() or synchronize_sched() instead.
+ */
+void synchronize_kernel(void)
+{
+	synchronize_rcu();
+}
+
 module_param(maxbatch, int, 0);
 EXPORT_SYMBOL(call_rcu);  /* WARNING: GPL-only in April 2006. */
 EXPORT_SYMBOL(call_rcu_bh);  /* WARNING: GPL-only in April 2006. */
+EXPORT_SYMBOL_GPL(synchronize_rcu);
 EXPORT_SYMBOL(synchronize_kernel);  /* WARNING: GPL-only in April 2006. */
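
Usage sketch of the distinction this patch introduces (the struct foo, global_foo,
foo_lock, and function names below are illustrative only and are not part of the
patch): readers that run under rcu_read_lock() pair with synchronize_rcu(); readers
that rely only on preempt_disable(), hardware-interrupt, or NMI context pair with
synchronize_sched().

	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/errno.h>

	/* Illustrative RCU-protected structure. */
	struct foo {
		int a;
	};

	static struct foo *global_foo;
	static DEFINE_SPINLOCK(foo_lock);

	/*
	 * Read side: protected by rcu_read_lock(), so the matching updater
	 * must wait with synchronize_rcu() (or defer with call_rcu()).
	 */
	static int foo_read_a(void)
	{
		struct foo *p;
		int a = -1;

		rcu_read_lock();
		p = rcu_dereference(global_foo);
		if (p)
			a = p->a;
		rcu_read_unlock();
		return a;
	}

	/*
	 * Update side: publish the new version, wait for all pre-existing
	 * rcu_read_lock() sections to complete, then free the old version.
	 */
	static int foo_update_a(int new_a)
	{
		struct foo *new_p, *old_p;

		new_p = kmalloc(sizeof(*new_p), GFP_KERNEL);
		if (!new_p)
			return -ENOMEM;
		new_p->a = new_a;

		spin_lock(&foo_lock);
		old_p = global_foo;
		rcu_assign_pointer(global_foo, new_p);
		spin_unlock(&foo_lock);

		synchronize_rcu();	/* waits only for rcu_read_lock() readers */
		kfree(old_p);
		return 0;
	}

If the read side instead relies on preempt_disable(), interrupt, or NMI context
rather than on rcu_read_lock(), the updater would call synchronize_sched() in place
of synchronize_rcu() above, preserving the stronger guarantee that the deprecated
synchronize_kernel() used to provide.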