sched/numa: Avoid selecting oneself as swap target
authorPeter Zijlstra <peterz@infradead.org>
Mon, 10 Nov 2014 09:54:35 +0000 (10:54 +0100)
committerIngo Molnar <mingo@kernel.org>
Sun, 16 Nov 2014 09:04:17 +0000 (10:04 +0100)
Because the whole NUMA task selection stuff runs with preemption
enabled (it's long and expensive) we can get migrated around and end
up selecting ourselves as a swap target. This doesn't work out well --
we end up trying to acquire the same lock twice for the swap migrate
-- so avoid this.

Reported-and-Tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20141110100328.GF29390@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index 34baa60f8a7bd11f03ccb1f754a08eb1029c46be..3af3d1e7df9b728dd3ab69d0ffd481d48a9f158b 100644 (file)
@@ -1179,6 +1179,13 @@ static void task_numa_compare(struct task_numa_env *env,
                cur = NULL;
        raw_spin_unlock_irq(&dst_rq->lock);
 
+       /*
+        * Because we have preemption enabled we can get migrated around and
+        * end up trying to select ourselves (current == env->p) as a swap
+        * candidate.
+        */
+       if (cur == env->p)
+               goto unlock;
+
        /*
         * "imp" is the fault differential for the source task between the
         * source and destination node. Calculate the total differential for
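The failure mode the guard prevents can be sketched in user space: each task carries its own non-recursive lock, and a swap that takes both sides' locks deadlocks (or, with raw spinlocks, recursively spins) when both sides are the same task. This is a hypothetical analogue using pthread mutexes, not the kernel code; `struct task`, `swap_nodes`, and the `node` field are illustrative names. The early `a == b` check mirrors the `if (cur == env->p) goto unlock;` added to task_numa_compare():

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative stand-in for a schedulable task: its own lock plus the
 * NUMA node it currently sits on. */
struct task {
	pthread_mutex_t lock;
	int node;
};

/* Swap the nodes of two tasks under both locks.  Without the self-check,
 * swap_nodes(t, t) would lock t->lock twice and deadlock on a
 * non-recursive mutex -- the same double-acquire the commit avoids. */
static int swap_nodes(struct task *a, struct task *b)
{
	if (a == b)		/* selected ourselves as swap target: bail */
		return -1;

	pthread_mutex_lock(&a->lock);
	pthread_mutex_lock(&b->lock);
	int tmp = a->node;
	a->node = b->node;
	b->node = tmp;
	pthread_mutex_unlock(&b->lock);
	pthread_mutex_unlock(&a->lock);
	return 0;
}
```

Note the sketch leaves out lock ordering between two distinct tasks; the kernel side handles that separately, while the self-swap case is cheapest to reject before any lock is taken.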