slub: use raw_cpu_inc for incrementing statistics
author Christoph Lameter <cl@linux.com>
Mon, 7 Apr 2014 22:39:42 +0000 (15:39 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Mon, 7 Apr 2014 23:36:14 +0000 (16:36 -0700)
Statistics are not critical to the operation of the allocator, but they
should also not cause too much overhead.

When __this_cpu_inc() is altered to check whether preemption is disabled,
that check triggers here.  Use raw_cpu_inc() to avoid the checks.  Using
this_cpu ops could cause interrupt disable/enable sequences on various
arches, which could significantly impact allocator performance.

[akpm@linux-foundation.org: add comment]
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slub.c

index e7451861f95d55f49dfdd253bb313a57d5421b30..f620bbf4054aa0c1cc771873a8fe3463856f2fcc 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -224,7 +224,11 @@ static inline void memcg_propagate_slab_attrs(struct kmem_cache *s) { }
 static inline void stat(const struct kmem_cache *s, enum stat_item si)
 {
 #ifdef CONFIG_SLUB_STATS
-       __this_cpu_inc(s->cpu_slab->stat[si]);
+       /*
+        * The rmw is racy on a preemptible kernel but this is acceptable, so
+        * avoid this_cpu_add()'s irq-disable overhead.
+        */
+       raw_cpu_inc(s->cpu_slab->stat[si]);
 #endif
 }