sched: Remove rq->lock from the first half of ttwu()
author     Peter Zijlstra <a.p.zijlstra@chello.nl>
           Tue, 5 Apr 2011 15:23:54 +0000 (17:23 +0200)
committer  Ingo Molnar <mingo@elte.hu>
           Thu, 14 Apr 2011 06:52:39 +0000 (08:52 +0200)
commit     e4a52bcb9a18142d79e231b6733cabdbf2e67c1f
tree       fcf29647bb6416d826237b90f233b34a169953ab
parent     8f42ced974df7d5af2de4cf5ea21fe978c7e4478
sched: Remove rq->lock from the first half of ttwu()

Currently ttwu() takes rq->lock twice: first on the task's old rq,
holding it over the p->state fiddling and the load-balance pass, and
then, after dropping the old rq->lock, on the new rq in order to queue
the task there.
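
For illustration, the old first half looked roughly like this (a
condensed sketch, not the literal kernel/sched.c code; fast paths and
error handling are omitted):

  raw_spin_lock_irqsave(&p->pi_lock, flags);
  rq = __task_rq_lock(p);                 /* rq->lock #1: the old rq */
  if (!(p->state & state))
          goto out;
  p->state = TASK_WAKING;
  cpu = select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
  if (cpu != task_cpu(p))
          set_task_cpu(p, cpu);
  __task_rq_unlock(rq);                   /* drop the old rq->lock ... */

  rq = cpu_rq(cpu);
  raw_spin_lock(&rq->lock);               /* rq->lock #2: the new rq */
  activate_task(rq, p, ENQUEUE_WAKEUP);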

By having serialized ttwu(), p->sched_class and p->cpus_allowed with
p->pi_lock, we can now drop the whole first rq->lock acquisition.
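
With that, the first half only needs p->pi_lock; sketched (again
simplified, with ttwu_queue() standing in for "take the target
rq->lock and activate the task"):

  raw_spin_lock_irqsave(&p->pi_lock, flags);
  if (!(p->state & state))
          goto out;
  p->state = TASK_WAKING;
  cpu = select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
  if (task_cpu(p) != cpu)
          set_task_cpu(p, cpu);           /* only p->pi_lock held */
  ttwu_queue(p, cpu);                     /* takes only the target rq->lock */
 out:
  raw_spin_unlock_irqrestore(&p->pi_lock, flags);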

The p->pi_lock serialization of concurrent ttwu() calls protects
p->state, which we set to TASK_WAKING to bridge possible p->pi_lock to
rq->lock gaps and to serialize set_task_cpu() calls against
task_rq_lock().
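
This relies on task_rq_lock() taking p->pi_lock before rq->lock and
re-checking the task's rq, which the earlier patches in this series
put in place; roughly (sketch):

  for (;;) {
          raw_spin_lock_irqsave(&p->pi_lock, *flags);
          rq = task_rq(p);
          raw_spin_lock(&rq->lock);
          if (likely(rq == task_rq(p)))
                  return rq;              /* both locks held, rq stable */
          raw_spin_unlock(&rq->lock);
          raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
  }

Anybody going through task_rq_lock() is thus excluded for as long as
ttwu() holds p->pi_lock, which is what makes the rq->lock-less
set_task_cpu() above safe.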

The p->pi_lock serialization of p->sched_class allows us to call
scheduling class methods without holding the rq->lock, and the
serialization of p->cpus_allowed allows us to do the load-balancing
bits without races.
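
Concretely, the wake-up path can then do something like the following
with only p->pi_lock held (sketch; the rq-less task_waking() signature
is assumed, as converted earlier in this series):

  /* p->sched_class cannot change under p->pi_lock */
  if (p->sched_class->task_waking)
          p->sched_class->task_waking(p);

  /* p->cpus_allowed cannot change either, so the balance pass is safe */
  cpu = select_task_rq(p, SD_BALANCE_WAKE, wake_flags);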

Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110405152729.354401150@chello.nl
kernel/sched.c