arm64: Fix DMA range invalidation for cache line unaligned buffers
authorCatalin Marinas <catalin.marinas@arm.com>
Tue, 1 Apr 2014 17:32:55 +0000 (18:32 +0100)
committerCatalin Marinas <catalin.marinas@arm.com>
Tue, 8 Apr 2014 10:45:08 +0000 (11:45 +0100)
commitebf81a938dade3b450eb11c57fa744cfac4b523f
treefb7c10fd49f9ee3a71a7d5c0620d3a6745a082ea
parentd253b4406df69fa7a74231769d6f6ad80dc33063
arm64: Fix DMA range invalidation for cache line unaligned buffers

If the buffer needing cache invalidation for inbound DMA does not start
or end on a cache line aligned address, we need to use the
non-destructive clean&invalidate operation. This issue was introduced by
commit 7363590d2c46 (arm64: Implement coherent DMA API based on
swiotlb).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
arch/arm64/mm/cache.S