svcrdma: handle rdma read with a non-zero initial page offset
authorSteve Wise <swise@opengridcomputing.com>
Mon, 28 Sep 2015 21:46:06 +0000 (16:46 -0500)
committerJ. Bruce Fields <bfields@redhat.com>
Tue, 29 Sep 2015 16:55:44 +0000 (12:55 -0400)
commitc91aed9896946721bb30705ea2904edb3725dd61
tree85ffa36824c43392ef66206e858afec69f9470ac
parent9ffecb10283508260936b96022d4ee43a7798b4c
svcrdma: handle rdma read with a non-zero initial page offset

The server rdma_read_chunk_lcl() and rdma_read_chunk_frmr() functions
were not taking into account the initial page_offset when determining
the rdma read length.  This resulted in a read whose starting address
and length exceeded the base/bounds of the frmr.
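
To illustrate the arithmetic (this is only a sketch, not the actual
patch): if nents pages are mapped but the read starts page_offset bytes
into the first page, then only (nents << PAGE_SHIFT) - page_offset
bytes are addressable, so the read length must be clamped against that
value rather than against nents << PAGE_SHIFT alone.  The helper below
is a hypothetical user-space model of that clamp; read_len, and the
PAGE_SHIFT value of 12, are assumptions for the example only.

    #include <stdio.h>

    #define PAGE_SHIFT 12   /* stand-in for the kernel constant */

    /*
     * Hypothetical sketch of the read-length clamp: cap the RDMA read
     * at the bytes actually covered by the mapped pages, accounting
     * for the offset into the first page.
     */
    static int read_len(int nents, int page_offset, int rs_length)
    {
            int cap = (nents << PAGE_SHIFT) - page_offset;

            return cap < rs_length ? cap : rs_length;
    }

    int main(void)
    {
            /* 2 mapped pages, read starting 256 bytes into the first:
             * only 7936 bytes fit, even though rs_length asks for more. */
            printf("%d\n", read_len(2, 256, 16384));
            return 0;
    }

Without the page_offset term, the same example would allow an 8192-byte
read whose end falls past the registered region, which is exactly the
base/bounds violation described above.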

The server gets an async error from the rdma device and kills the
connection, and the client then reconnects and resends.  This repeats
indefinitely, and the application hangs.

Most workloads apparently don't trip this bug, but one test hit it
every time: building the Linux kernel on a 16-core node with 'make -j
16 O=/mnt/0', where /mnt/0 is a ramdisk mounted via NFSRDMA.

This bug seems to be tripped only by devices with small fastreg page
list depths.  I didn't see it with mlx4, for instance.

Fixes: 0bf4828983df ("svcrdma: refactor marshalling logic")
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c