pmem: add proper fencing to pmem_rw_page()
author Ross Zwisler <ross.zwisler@linux.intel.com>
Wed, 16 Sep 2015 20:52:21 +0000 (14:52 -0600)
committer Dan Williams <dan.j.williams@intel.com>
Thu, 17 Sep 2015 15:49:28 +0000 (11:49 -0400)
pmem_rw_page() needs to call wmb_pmem() on writes to make sure that the
newly written data is durable.  This flow was added to pmem_rw_bytes()
and pmem_make_request() with this commit:

commit 61031952f4c8 ("arch, x86: pmem api for ensuring durability of
persistent memory updates")

...but the pmem_rw_page() path was missed.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
drivers/nvdimm/pmem.c

index b9525385c0dc35532ceaed6087c3f36ea2697ae0..0ba6a978f227e76a4b0c27046e2736d43cf37e45 100644
@@ -92,6 +92,8 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
        struct pmem_device *pmem = bdev->bd_disk->private_data;
 
        pmem_do_bvec(pmem, page, PAGE_CACHE_SIZE, 0, rw, sector);
+       if (rw & WRITE)
+               wmb_pmem();
        page_endio(page, rw & WRITE, 0);
 
        return 0;