workqueue: allow work_on_cpu() to be called recursively
author    Lai Jiangshan <laijs@cn.fujitsu.com>
          Wed, 24 Jul 2013 10:31:42 +0000 (18:31 +0800)
committer Tejun Heo <tj@kernel.org>
          Wed, 24 Jul 2013 16:24:25 +0000 (12:24 -0400)
If @fn calls work_on_cpu() again, lockdep will complain:

> [ INFO: possible recursive locking detected ]
> 3.11.0-rc1-lockdep-fix-a #6 Not tainted
> ---------------------------------------------
> kworker/0:1/142 is trying to acquire lock:
>  ((&wfc.work)){+.+.+.}, at: [<ffffffff81077100>] flush_work+0x0/0xb0
>
> but task is already holding lock:
>  ((&wfc.work)){+.+.+.}, at: [<ffffffff81075dd9>] process_one_work+0x169/0x610
>
> other info that might help us debug this:
>  Possible unsafe locking scenario:
>
>        CPU0
>        ----
>   lock((&wfc.work));
>   lock((&wfc.work));
>
>  *** DEADLOCK ***

This is a false-positive lockdep report. In this situation,
the two "wfc"s of the two work_on_cpu() calls are different;
both are on stack, so flush_work() can't deadlock.
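
For illustration, the nesting that triggers the report looks roughly
like the following (a minimal sketch with hypothetical callback names,
based on the work_on_cpu() signature in the diff below, not the
reporters' actual callers):

	static long inner_fn(void *arg)
	{
		/* runs in the inner work_on_cpu()'s on-stack wfc.work */
		return 0;
	}

	static long outer_fn(void *arg)
	{
		/*
		 * Nested work_on_cpu(): uses a second, independent
		 * on-stack wfc, so there is no real deadlock, but both
		 * flushes take the same lockdep class.
		 */
		return work_on_cpu(1, inner_fn, NULL);
	}

	/* caller */
	long ret = work_on_cpu(0, outer_fn, NULL);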

To fix this, we need to avoid the lockdep checking in this case,
so we introduce an internal __flush_work() which skips the lockdep
annotations.

tj: Minor comment adjustment.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reported-by: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Reported-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
kernel/workqueue.c

index f02c4a4a0c3c8925f61c29982d87b6edcef7dd95..55f5f0afcd0d966f137293c262c1621b3ebda730 100644 (file)
@@ -2817,6 +2817,19 @@ already_gone:
        return false;
 }
 
+static bool __flush_work(struct work_struct *work)
+{
+       struct wq_barrier barr;
+
+       if (start_flush_work(work, &barr)) {
+               wait_for_completion(&barr.done);
+               destroy_work_on_stack(&barr.work);
+               return true;
+       } else {
+               return false;
+       }
+}
+
 /**
  * flush_work - wait for a work to finish executing the last queueing instance
  * @work: the work to flush
@@ -2830,18 +2843,10 @@ already_gone:
  */
 bool flush_work(struct work_struct *work)
 {
-       struct wq_barrier barr;
-
        lock_map_acquire(&work->lockdep_map);
        lock_map_release(&work->lockdep_map);
 
-       if (start_flush_work(work, &barr)) {
-               wait_for_completion(&barr.done);
-               destroy_work_on_stack(&barr.work);
-               return true;
-       } else {
-               return false;
-       }
+       return __flush_work(work);
 }
 EXPORT_SYMBOL_GPL(flush_work);
 
@@ -4756,7 +4761,14 @@ long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
 
        INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
        schedule_work_on(cpu, &wfc.work);
-       flush_work(&wfc.work);
+
+       /*
+        * The work item is on-stack and can't lead to deadlock through
+        * flushing.  Use __flush_work() to avoid spurious lockdep warnings
+        * when work_on_cpu()s are nested.
+        */
+       __flush_work(&wfc.work);
+
        return wfc.ret;
 }
 EXPORT_SYMBOL_GPL(work_on_cpu);