ARM: 7658/1: mm: fix race updating mm->context.id on ASID rollover
author	Will Deacon <will.deacon@arm.com>
Thu, 28 Feb 2013 16:47:20 +0000 (17:47 +0100)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 14 Mar 2013 18:26:23 +0000 (11:26 -0700)
commit 37f47e3d62533c931b04cb409f2eb299e6342331 upstream.

If a thread triggers an ASID rollover, other threads of the same process
must be made to wait until the mm->context.id for the shared mm_struct
has been updated to the new generation and the associated book-keeping
(e.g. TLB invalidation) has been performed.

However, there is a *tiny* window where both mm->context.id and the
relevant active_asids entry are updated to the new generation, but the
TLB flush has not been performed, which could allow another thread to
return to userspace with a dirty TLB, potentially leading to data
corruption. In reality this will never occur, because one CPU would need
to perform a context-switch in the time it takes another to do a couple
of atomic test/set operations, but we should plug the race anyway.
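
To make the window concrete: before this patch, the slow path published
the new ASID before performing the pending TLB flush. A simplified
sketch of the old ordering (reconstructed from the removed lines in the
hunk below; surrounding code omitted):

        /* Old ordering: the new ASID is made visible to other CPUs first... */
        atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
        cpumask_set_cpu(cpu, mm_cpumask(mm));

        /* ...leaving a window here in which stale TLB entries are still live. */
        if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
                local_flush_tlb_all();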

This patch moves the active_asids and mm_cpumask updates so that they
take place after the potential TLB flush on context-switch.
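
With this change applied, the slow path of check_and_switch_context()
reads roughly as follows (a simplified sketch paraphrased from
arch/arm/mm/context.c; the fast path and surrounding code are omitted):

        raw_spin_lock_irqsave(&cpu_asid_lock, flags);

        /* Allocate a new ASID for this mm if its generation is stale. */
        if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
                new_context(mm, cpu);

        /* Perform any pending TLB invalidation for this CPU first... */
        if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
                local_flush_tlb_all();

        /* ...and only then publish the new ASID and mark the mm as live here. */
        atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
        cpumask_set_cpu(cpu, mm_cpumask(mm));

        raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);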

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm/mm/context.c

index bc4a5e9ebb78261c9c3058cf94374309286446f4..6a6b8bf2958f799ddaa28975b0b08086532feb75 100644
@@ -204,11 +204,11 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
        if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
                new_context(mm, cpu);
 
-       atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
-       cpumask_set_cpu(cpu, mm_cpumask(mm));
-
        if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
                local_flush_tlb_all();
+
+       atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
+       cpumask_set_cpu(cpu, mm_cpumask(mm));
        raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath: