diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 1af0f2d50220..805db4b2cba6 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -33,7 +33,9 @@ pci_alloc_consistent(struct pci_dev *dev, size_t size,

 Consistent memory is memory for which a write by either the device or
 the processor can immediately be read by the processor or device
-without having to worry about caching effects.
+without having to worry about caching effects. (You may, however, need
+to make sure to flush the processor's write buffers before telling
+devices to read that memory.)

 This routine allocates a region of <size> bytes of consistent memory.
 It also returns a <dma_handle> which may be cast to an unsigned
@@ -75,7 +77,7 @@ To get this part of the dma_ API, you must #include <linux/dmapool.h>

 Many drivers need lots of small dma-coherent memory regions for DMA
 descriptors or I/O buffers. Rather than allocating in units of a page
 or more using dma_alloc_coherent(), you can use DMA pools. These work
-much like a kmem_cache_t, except that they use the dma-coherent allocator
+much like a struct kmem_cache, except that they use the dma-coherent allocator,
 not __get_free_pages(). Also, they understand common hardware constraints
 for alignment, like queue heads needing to be aligned on N-byte boundaries.
@@ -92,7 +94,7 @@
 The pool create() routines initialize a pool of dma-coherent buffers
 for use with a given device. It must be called in a context which
 can sleep.

-The "name" is for diagnostics (like a kmem_cache_t name); dev and size
+The "name" is for diagnostics (like a struct kmem_cache name); dev and size
 are like what you'd pass to dma_alloc_coherent(). The device's hardware
 alignment requirement for this type of data is "align" (which is expressed
 in bytes, and must be a power of two). If your device has no boundary
@@ -304,12 +306,12 @@ dma address with dma_mapping_error(). A non-zero return value means
 the mapping could not be created and the driver should take appropriate
 action (e.g. reduce current DMA mapping usage or delay and try again
 later).

-int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-           enum dma_data_direction direction)
-int
-pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-           int nents, int direction)
+	int
+	dma_map_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction direction)
+	int
+	pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+		int nents, int direction)

 Maps a scatter/gather list from the block layer.
@@ -327,12 +329,33 @@ critical that the driver do something; in the case of a block driver,
 aborting the request or even oopsing is better than doing nothing and
 corrupting the filesystem.

-void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-             enum dma_data_direction direction)
-void
-pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
-             int nents, int direction)
+With scatterlists, you use the resulting mapping like this:
+
+	int i, count = dma_map_sg(dev, sglist, nents, direction);
+	struct scatterlist *sg;
+
+	for (i = 0, sg = sglist; i < count; i++, sg++) {
+		hw_address[i] = sg_dma_address(sg);
+		hw_len[i] = sg_dma_len(sg);
+	}
+
+where nents is the number of entries in the sglist.
+
+The implementation is free to merge several consecutive sglist entries
+into one (e.g. with an IOMMU, or if several pages just happen to be
+physically contiguous) and returns the actual number of sg entries it
+mapped them to. On failure, 0 is returned.
+
+Then you should loop count times (note: this can be less than nents times)
+and use the sg_dma_address() and sg_dma_len() macros where you previously
+accessed sg->address and sg->length as shown above.
+
+	void
+	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nhwentries, enum dma_data_direction direction)
+	void
+	pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+		int nents, int direction)

 Unmap the previously mapped scatter/gather list. All the parameters
 must be the same as those passed in to the scatter/gather mapping
@@ -408,10 +431,10 @@ be identical to those passed in (and returned by
 dma_alloc_noncoherent()).

 int
-dma_is_consistent(dma_addr_t dma_handle)
+dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

-returns true if the memory pointed to by the dma_handle is actually
-consistent.
+Returns true if the device dev is performing consistent DMA on the memory
+area pointed to by the dma_handle.

 int
 dma_get_cache_alignment(void)
@@ -436,7 +459,7 @@ anything like this. You must also be extra careful about accessing memory you
 intend to sync partially.

 void
-dma_cache_sync(void *vaddr, size_t size,
+dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	       enum dma_data_direction direction)

 Do a partial sync of memory that was allocated by
@@ -466,7 +489,7 @@ size is the size of the area (must be a multiple of PAGE_SIZE).

 flags can be or'd together and are

 DMA_MEMORY_MAP - request that the memory returned from
-dma_alloc_coherent() be directly writeable.
+dma_alloc_coherent() be directly writable.

 DMA_MEMORY_IO - request that the memory returned from
 dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
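
A minimal sketch (not part of the patch itself) of how the dma_pool
calls discussed in the hunks above fit together. The device pointer
dev, the pool name "mydev_desc", the 64-byte descriptor size, and the
16-byte alignment are all illustrative assumptions, not values from
the document:

	struct dma_pool *pool;
	void *desc;
	dma_addr_t desc_dma;

	/* 64-byte blocks, 16-byte alignment, no boundary-crossing
	 * restriction (the final 0) */
	pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* the CPU fills in the descriptor through desc; the device is
	 * handed desc_dma as the bus address of the same memory */

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);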
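Likewise, the scatterlist loop added by the patch can be rounded out
into a full map/use/unmap cycle. The failure check and the unmap rule
follow the surrounding prose; hw_address[] and hw_len[] are the
patch's own placeholder names for device-specific state, and
DMA_TO_DEVICE is one example direction:

	int i, count;
	struct scatterlist *sg;

	count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
	if (count == 0)
		return -EIO;	/* the mapping could not be created */

	/* count may be less than nents if entries were merged */
	for (i = 0, sg = sglist; i < count; i++, sg++) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

	/* ... the device performs the transfer ... */

	/* unmap with the same parameters passed to dma_map_sg():
	 * the original nents, not the returned count */
	dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);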
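Finally, a sketch of the new dma_cache_sync() signature in use with
memory obtained from dma_alloc_noncoherent(), per the partial-sync
discussion above. The PAGE_SIZE buffer and the 64-byte partial sync
are invented for illustration, assuming the dma_alloc_noncoherent()
and dma_free_noncoherent() interfaces of this kernel generation:

	dma_addr_t handle;
	void *buf;

	buf = dma_alloc_noncoherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* the CPU touches only the first 64 bytes ... */
	memset(buf, 0, 64);

	/* ... so only that region needs syncing before the device
	 * reads it; note the struct device argument the patch adds */
	dma_cache_sync(dev, buf, 64, DMA_TO_DEVICE);

	/* ... point the device at "handle" and start the DMA ... */

	dma_free_noncoherent(dev, PAGE_SIZE, buf, handle);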