x86/amd/numa: Fix northbridge quirk to assign correct NUMA node
author Daniel J Blueman <daniel@numascale.com>
	Thu, 13 Mar 2014 11:43:01 +0000 (19:43 +0800)
committer Ingo Molnar <mingo@kernel.org>
	Fri, 14 Mar 2014 10:05:36 +0000 (11:05 +0100)
commit 847d7970defb45540735b3fb4e88471c27cacd85
tree e43e960fcde8f7fb0cee11d0ba3dde47d8389d8c
parent 731bd6a93a6e9172094a2322bd0ee964bb1f4d63
x86/amd/numa: Fix northbridge quirk to assign correct NUMA node

For systems with multiple servers and a routed fabric, all
northbridges get assigned to the first server. Fix this by also
using the node reported from the PCI bus. For single-fabric
systems, the northbridges are on PCI bus 0 by definition, which
is on NUMA node 0 by definition, so this is invariant on most
systems.
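
A minimal sketch of the adjusted quirk, in the spirit of the existing
quirk_amd_nb_node() in arch/x86/kernel/quirks.c; the register offset and
surrounding structure here are illustrative rather than the literal patch.
The point is that the HT node ID read from the northbridge's config space
is combined with pcibus_to_node() instead of being used on its own:

	#include <linux/pci.h>
	#include <linux/topology.h>
	#include <linux/nodemask.h>

	/*
	 * Illustrative sketch: derive the NUMA node for an AMD
	 * northbridge from both the PCI bus it sits on and the HT
	 * node ID in its config space, not the HT node ID alone.
	 */
	static void quirk_amd_nb_node(struct pci_dev *dev)
	{
		struct pci_dev *nb_ht;
		unsigned int devfn;
		u32 node;
		u32 val;

		/* Function 0 of the same slot carries the HT configuration. */
		devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 0);
		nb_ht = pci_get_slot(dev->bus, devfn);
		if (!nb_ht)
			return;

		pci_read_config_dword(nb_ht, 0x60, &val);

		/*
		 * Combine the node of the PCI bus (correct across a routed,
		 * multi-server fabric) with the HT node ID in the low bits.
		 * On single-fabric systems pcibus_to_node() is 0, so the
		 * result is unchanged there.
		 */
		node = pcibus_to_node(dev->bus) | (val & 7);

		if (node_online(node))
			set_dev_node(&dev->dev, node);

		pci_dev_put(nb_ht);
	}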

Tested on fam10h and fam15h single- and multi-fabric systems;
candidate for stable.

Signed-off-by: Daniel J Blueman <daniel@numascale.com>
Acked-by: Steffen Persvold <sp@numascale.com>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/1394710981-3596-1-git-send-email-daniel@numascale.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/kernel/quirks.c