S: Stanford, California 94305
S: USA
+N: Carlos Chinea
+E: carlos.chinea@nokia.com
+E: cch.devel@gmail.com
+D: Author of HSI Subsystem
+
N: Randolph Chung
E: tausq@debian.org
D: Linux/PA-RISC hacker
as root before you can use this. You'll probably also want to
get the user-space microcode_ctl utility to use with this.
-Powertweak
-----------
-
-If you are running v0.1.17 or earlier, you should upgrade to
-version v0.99.0 or higher. Running old versions may cause problems
-with programs using shared memory.
-
udev
----
udev is a userspace application for populating /dev dynamically with
------------------
o <http://www.urbanmyth.org/microcode/>
-Powertweak
-----------
-o <http://powertweak.sourceforge.net/>
-
udev
----
o <http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev.html>
</sect1>
<sect1><title>Wait queues and Wake events</title>
!Iinclude/linux/wait.h
-!Ekernel/wait.c
+!Ekernel/sched/wait.c
</sect1>
<sect1><title>High-resolution timers</title>
!Iinclude/linux/ktime.h
format. For the single-planar API, applications must set <structfield> plane
</structfield> to zero. Additional flags may be posted in the <structfield>
flags </structfield> field. Refer to the manual of open() for details.
-Currently only O_CLOEXEC is supported. All other fields must be set to zero.
+Currently only O_CLOEXEC, O_RDONLY, O_WRONLY, and O_RDWR are supported. All
+other fields must be set to zero.
In the case of multi-planar API, every plane is exported separately using
multiple <constant> VIDIOC_EXPBUF </constant> calls. </para>
<entry>__u32</entry>
<entry><structfield>flags</structfield></entry>
<entry>Flags for the newly created file, currently only <constant>
-O_CLOEXEC </constant> is supported, refer to the manual of open() for more
-details.</entry>
+O_CLOEXEC </constant>, <constant>O_RDONLY</constant>, <constant>O_WRONLY
+</constant>, and <constant>O_RDWR</constant> are supported; refer to the manual
+of open() for more details.</entry>
</row>
<row>
<entry>__s32</entry>
--- /dev/null
+ ========================================
+ GENERIC ASSOCIATIVE ARRAY IMPLEMENTATION
+ ========================================
+
+Contents:
+
+ - Overview.
+
+ - The public API.
+ - Edit script.
+ - Operations table.
+ - Manipulation functions.
+ - Access functions.
+ - Index key form.
+
+ - Internal workings.
+ - Basic internal tree layout.
+ - Shortcuts.
+ - Splitting and collapsing nodes.
+ - Non-recursive iteration.
+ - Simultaneous alteration and iteration.
+
+
+========
+OVERVIEW
+========
+
+This associative array implementation is an object container with the following
+properties:
+
+ (1) Objects are opaque pointers. The implementation does not care where they
+ point (if anywhere) or what they point to (if anything).
+
+ [!] NOTE: Pointers to objects _must_ be zero in the least significant bit.
+
+ (2) Objects do not need to contain linkage blocks for use by the array.
+ Instead, the array is made up of metadata blocks that point to the
+ objects. This permits an object to be located in multiple arrays
+ simultaneously.
+
+ (3) Objects require index keys to locate them within the array.
+
+ (4) Index keys must be unique. Inserting an object with the same key as one
+ already in the array will replace the old object.
+
+ (5) Index keys can be of any length and can be of different lengths.
+
+ (6) Index keys should encode the length early on, before any variation due to
+ length is seen.
+
+ (7) Index keys can include a hash to scatter objects throughout the array.
+
+ (8) The array can be iterated over. The objects will not necessarily come out in
+ key order.
+
+ (9) The array can be iterated over whilst it is being modified, provided the
+ RCU read lock is being held by the iterator. Note, however, that under these
+ circumstances, some objects may be seen more than once. If this is a
+ problem, the iterator should lock against modification. Objects will not
+ be missed, however, unless deleted.
+
+(10) Objects in the array can be looked up by means of their index key.
+
+(11) Objects can be looked up whilst the array is being modified, provided the
+ RCU read lock is being held by the thread doing the look-up.
+
+The implementation uses a tree of 16-pointer nodes internally that are indexed
+on each level by nibbles from the index key in the same manner as in a radix
+tree. To improve memory efficiency, shortcuts can be emplaced to skip over
+what would otherwise be a series of single-occupancy nodes. Further, nodes
+pack leaf object pointers into spare space in the node rather than making an
+extra branch until such time as an object needs to be added to a full node.
+
+
+==============
+THE PUBLIC API
+==============
+
+The public API can be found in <linux/assoc_array.h>. The associative array is
+rooted on the following structure:
+
+ struct assoc_array {
+ ...
+ };
+
+The code is selected by enabling CONFIG_ASSOCIATIVE_ARRAY.
+
+
+EDIT SCRIPT
+-----------
+
+The insertion and deletion functions produce an 'edit script' that can later be
+applied to effect the changes without risking ENOMEM. This retains the
+preallocated metadata blocks that will be installed in the internal tree and
+keeps track of the metadata blocks that will be removed from the tree when the
+script is applied.
+
+This is also used to keep track of dead blocks and dead objects after the
+script has been applied so that they can be freed later. The freeing is done
+after an RCU grace period has passed - thus allowing access functions to
+proceed under the RCU read lock.
+
+The script appears outside of the API as a pointer of the type:
+
+ struct assoc_array_edit;
+
+There are two functions for dealing with the script:
+
+ (1) Apply an edit script.
+
+ void assoc_array_apply_edit(struct assoc_array_edit *edit);
+
+ This will perform the edit functions, interpolating various write barriers
+ to permit accesses under the RCU read lock to continue. The edit script
+ will then be passed to call_rcu() to free it and any dead stuff it points
+ to.
+
+ (2) Cancel an edit script.
+
+ void assoc_array_cancel_edit(struct assoc_array_edit *edit);
+
+ This frees the edit script and all preallocated memory immediately. If
+ this was for insertion, the new object is _not_ released by this function,
+ but must rather be released by the caller.
+
+These functions are guaranteed not to fail.
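+
+A minimal usage sketch, using assoc_array_insert() (described below); my_array,
+my_ops and my_lock are assumptions for illustration, and the caller must hold
+its own exclusive lock against other modifiers:
+
+	struct assoc_array_edit *edit;
+
+	mutex_lock(&my_lock);
+	edit = assoc_array_insert(&my_array, &my_ops, index_key, object);
+	if (IS_ERR(edit)) {
+		mutex_unlock(&my_lock);
+		return PTR_ERR(edit);		/* typically -ENOMEM */
+	}
+	assoc_array_apply_edit(edit);		/* guaranteed not to fail */
+	mutex_unlock(&my_lock);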
+
+
+OPERATIONS TABLE
+----------------
+
+Various functions take a table of operations:
+
+ struct assoc_array_ops {
+ ...
+ };
+
+This points to a number of methods, all of which need to be provided:
+
+ (1) Get a chunk of index key from caller data:
+
+ unsigned long (*get_key_chunk)(const void *index_key, int level);
+
+ This should return a chunk of caller-supplied index key starting at the
+ *bit* position given by the level argument. The level argument will be a
+ multiple of ASSOC_ARRAY_KEY_CHUNK_SIZE and the function should return
+ ASSOC_ARRAY_KEY_CHUNK_SIZE bits. No error is possible.
+
+
+ (2) Get a chunk of an object's index key.
+
+ unsigned long (*get_object_key_chunk)(const void *object, int level);
+
+ As the previous function, but gets its data from an object in the array
+ rather than from a caller-supplied index key.
+
+
+ (3) See if this is the object we're looking for.
+
+ bool (*compare_object)(const void *object, const void *index_key);
+
+ Compare the object against an index key and return true if it matches and
+ false if it doesn't.
+
+
+ (4) Diff the index key of an object against a caller-supplied index key.
+
+ int (*diff_objects)(const void *object, const void *index_key);
+
+ Return the bit position at which the index key of the specified object
+ differs from the given index key or -1 if they are the same.
+
+
+ (5) Free an object.
+
+ void (*free_object)(void *object);
+
+ Free the specified object. Note that this may be called an RCU grace
+ period after assoc_array_apply_edit() was called, so synchronize_rcu() may
+ be necessary on module unloading.
+
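+As an illustration, here is a sketch of an ops table for a hypothetical object
+type keyed by a single unsigned long (struct my_obj and the my_*() functions
+are assumptions, not part of the API; <linux/bitops.h> and <linux/slab.h> are
+assumed to be included):
+
+	struct my_obj {
+		unsigned long	key;
+		/* ... payload ... */
+	};
+
+	static unsigned long my_get_key_chunk(const void *index_key, int level)
+	{
+		/* The whole key fits in the first chunk; later chunks are 0. */
+		return level == 0 ? *(const unsigned long *)index_key : 0;
+	}
+
+	static unsigned long my_get_object_key_chunk(const void *object,
+						     int level)
+	{
+		const struct my_obj *obj = object;
+		return level == 0 ? obj->key : 0;
+	}
+
+	static bool my_compare_object(const void *object, const void *index_key)
+	{
+		const struct my_obj *obj = object;
+		return obj->key == *(const unsigned long *)index_key;
+	}
+
+	static int my_diff_objects(const void *object, const void *index_key)
+	{
+		const struct my_obj *obj = object;
+		unsigned long diff = obj->key ^ *(const unsigned long *)index_key;
+		/* First differing bit, or -1 if the keys are identical. */
+		return diff ? __ffs(diff) : -1;
+	}
+
+	static void my_free_object(void *object)
+	{
+		kfree(object);
+	}
+
+	static const struct assoc_array_ops my_ops = {
+		.get_key_chunk		= my_get_key_chunk,
+		.get_object_key_chunk	= my_get_object_key_chunk,
+		.compare_object		= my_compare_object,
+		.diff_objects		= my_diff_objects,
+		.free_object		= my_free_object,
+	};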
+
+MANIPULATION FUNCTIONS
+----------------------
+
+There are a number of functions for manipulating an associative array:
+
+ (1) Initialise an associative array.
+
+ void assoc_array_init(struct assoc_array *array);
+
+ This initialises the base structure for an associative array. It can't
+ fail.
+
+
+ (2) Insert/replace an object in an associative array.
+
+ struct assoc_array_edit *
+ assoc_array_insert(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key,
+ void *object);
+
+ This inserts the given object into the array. Note that the least
+ significant bit of the pointer must be zero as it's used to type-mark
+ pointers internally.
+
+ If an object already exists for that key then it will be replaced with the
+ new object and the old one will be freed automatically.
+
+ The index_key argument should hold index key information and is
+ passed to the methods in the ops table when they are called.
+
+ This function makes no alteration to the array itself, but rather returns
+ an edit script that must be applied. -ENOMEM is returned in the case of
+ an out-of-memory error.
+
+ The caller should lock exclusively against other modifiers of the array.
+
+
+ (3) Delete an object from an associative array.
+
+ struct assoc_array_edit *
+ assoc_array_delete(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key);
+
+ This deletes an object that matches the specified data from the array.
+
+ The index_key argument should hold index key information and is
+ passed to the methods in the ops table when they are called.
+
+ This function makes no alteration to the array itself, but rather returns
+ an edit script that must be applied. -ENOMEM is returned in the case of
+ an out-of-memory error. NULL will be returned if the specified object is
+ not found within the array.
+
+ The caller should lock exclusively against other modifiers of the array.
+
+
+ (4) Delete all objects from an associative array.
+
+ struct assoc_array_edit *
+ assoc_array_clear(struct assoc_array *array,
+ const struct assoc_array_ops *ops);
+
+ This deletes all the objects from an associative array and leaves it
+ completely empty.
+
+ This function makes no alteration to the array itself, but rather returns
+ an edit script that must be applied. -ENOMEM is returned in the case of
+ an out-of-memory error.
+
+ The caller should lock exclusively against other modifiers of the array.
+
+
+ (5) Destroy an associative array, deleting all objects.
+
+ void assoc_array_destroy(struct assoc_array *array,
+ const struct assoc_array_ops *ops);
+
+ This destroys the contents of the associative array and leaves it
+ completely empty. It is not permitted for another thread to be traversing
+ the array under the RCU read lock at the same time as this function is
+ destroying it as no RCU deferral is performed on memory release -
+ something that would require memory to be allocated.
+
+ The caller should lock exclusively against other modifiers and accessors
+ of the array.
+
+
+ (6) Garbage collect an associative array.
+
+ int assoc_array_gc(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ bool (*iterator)(void *object, void *iterator_data),
+ void *iterator_data);
+
+ This iterates over the objects in an associative array and passes each one
+ to iterator(). If iterator() returns true, the object is kept. If it
+ returns false, the object will be freed. If the iterator() function
+ returns true, it must perform any appropriate refcount incrementing on the
+ object before returning.
+
+ The internal tree will be packed down if possible as part of the iteration
+ to reduce the number of nodes in it.
+
+ The iterator_data is passed directly to iterator() and is otherwise
+ ignored by the function.
+
+ The function will return 0 if successful and -ENOMEM if there wasn't
+ enough memory.
+
+ It is possible for other threads to iterate over or search the array under
+ the RCU read lock whilst this function is in progress. The caller should
+ lock exclusively against other modifiers of the array.
+
+
+ACCESS FUNCTIONS
+----------------
+
+There are two functions for accessing an associative array:
+
+ (1) Iterate over all the objects in an associative array.
+
+ int assoc_array_iterate(const struct assoc_array *array,
+ int (*iterator)(const void *object,
+ void *iterator_data),
+ void *iterator_data);
+
+ This passes each object in the array to the iterator callback function.
+ iterator_data is private data for that function.
+
+ This may be used on an array at the same time as the array is being
+ modified, provided the RCU read lock is held. Under such circumstances,
+ it is possible for the iteration function to see some objects twice. If
+ this is a problem, then modification should be locked against. The
+ iteration algorithm should not, however, miss any objects.
+
+ The function will return 0 if no objects were in the array or else it will
+ return the result of the last iterator function called. Iteration stops
+ immediately if any call to the iteration function results in a non-zero
+ return.
+
+
+ (2) Find an object in an associative array.
+
+ void *assoc_array_find(const struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key);
+
+ This walks through the array's internal tree directly to the object
+ specified by the index key.
+
+ This may be used on an array at the same time as the array is being
+ modified, provided the RCU read lock is held.
+
+ The function will return the object if found or NULL if the object was not
+ found.
+
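+A sketch of read-side usage under the RCU read lock; my_array, my_ops and
+count_object() are assumptions for illustration:
+
+	static int count_object(const void *object, void *iterator_data)
+	{
+		(*(size_t *)iterator_data)++;
+		return 0;	/* a non-zero return would stop the iteration */
+	}
+
+	size_t count = 0;
+	void *obj;
+
+	rcu_read_lock();
+	assoc_array_iterate(&my_array, count_object, &count);
+	obj = assoc_array_find(&my_array, &my_ops, index_key);
+	rcu_read_unlock();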
+
+INDEX KEY FORM
+--------------
+
+The index key can be of any form, but since the algorithms aren't told how long
+the key is, it is strongly recommended that the index key includes its length
+very early on before any variation due to the length would have an effect on
+comparisons.
+
+This will cause leaves with different length keys to scatter away from each
+other - and those with the same length keys to cluster together.
+
+It is also recommended that the index key begin with a hash of the rest of the
+key to maximise scattering throughout keyspace.
+
+The better the scattering, the wider and lower the internal tree will be.
+
+Poor scattering isn't too much of a problem as there are shortcuts and nodes
+can contain mixtures of leaves and metadata pointers.
+
+The index key is read in machine-word-sized chunks. Each chunk is subdivided into
+one nibble (4 bits) per level, so on a 32-bit CPU this is good for 8 levels and
+on a 64-bit CPU, 16 levels. Unless the scattering is really poor, it is
+unlikely that more than one word of any particular index key will have to be
+used.
+
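+For example, a caller might lay out its index keys like this (a sketch, not a
+prescribed format; the field names are assumptions):
+
+	struct my_index_key {
+		u32	hash;		/* hash of the payload - comes first */
+		u16	desc_len;	/* the length, encoded early */
+		char	desc[];		/* the variable-length part */
+	};
+
+with get_key_chunk() reading the structure as a sequence of machine words, so
+that keys of different lengths diverge as soon as the length field is compared.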
+
+=================
+INTERNAL WORKINGS
+=================
+
+The associative array data structure has an internal tree. This tree is
+constructed of two types of metadata blocks: nodes and shortcuts.
+
+A node is an array of slots. Each slot can contain one of four things:
+
+ (*) A NULL pointer, indicating that the slot is empty.
+
+ (*) A pointer to an object (a leaf).
+
+ (*) A pointer to a node at the next level.
+
+ (*) A pointer to a shortcut.
+
+
+BASIC INTERNAL TREE LAYOUT
+--------------------------
+
+Ignoring shortcuts for the moment, the nodes form a multilevel tree. The index
+key space is strictly subdivided by the nodes in the tree and nodes occur on
+fixed levels. For example:
+
+ Level: 0 1 2 3
+ =============== =============== =============== ===============
+ NODE D
+ NODE B NODE C +------>+---+
+ +------>+---+ +------>+---+ | | 0 |
+ NODE A | | 0 | | | 0 | | +---+
+ +---+ | +---+ | +---+ | : :
+ | 0 | | : : | : : | +---+
+ +---+ | +---+ | +---+ | | f |
+ | 1 |---+ | 3 |---+ | 7 |---+ +---+
+ +---+ +---+ +---+
+ : : : : | 8 |---+
+ +---+ +---+ +---+ | NODE E
+ | e |---+ | f | : : +------>+---+
+ +---+ | +---+ +---+ | 0 |
+ | f | | | f | +---+
+ +---+ | +---+ : :
+ | NODE F +---+
+ +------>+---+ | f |
+ | 0 | NODE G +---+
+ +---+ +------>+---+
+ : : | | 0 |
+ +---+ | +---+
+ | 6 |---+ : :
+ +---+ +---+
+ : : | f |
+ +---+ +---+
+ | f |
+ +---+
+
+In the above example, there are 7 nodes (A-G), each with 16 slots (0-f).
+Assuming no other metadata nodes in the tree, the key space is divided as follows:
+
+ KEY PREFIX NODE
+ ========== ====
+ 137* D
+ 138* E
+ 13[0-69-f]* C
+ 1[0-24-f]* B
+ e6* G
+ e[0-57-f]* F
+ [02-df]* A
+
+So, for instance, keys with the following example index keys will be found in
+the appropriate nodes:
+
+ INDEX KEY PREFIX NODE
+ =============== ======= ====
+ 13694892892489 13 C
+ 13795289025897 137 D
+ 13889dde88793 138 E
+ 138bbb89003093 138 E
+ 1394879524789 13 C
+ 1458952489 1 B
+ 9431809de993ba - A
+ b4542910809cd - A
+ e5284310def98 e F
+ e68428974237 e6 G
+ e7fffcbd443 e F
+ f3842239082 - A
+
+To save memory, if a node can hold all the leaves in its portion of keyspace,
+then the node will have all those leaves in it and will not have any metadata
+pointers - even if some of those leaves would like to be in the same slot.
+
+A node can contain a heterogeneous mix of leaves and metadata pointers.
+Metadata pointers must be in the slots that match their subdivisions of key
+space. The leaves can be in any slot not occupied by a metadata pointer. It
+is guaranteed that none of the leaves in a node will match a slot occupied by a
+metadata pointer. If the metadata pointer is there, any leaf whose key matches
+the metadata key prefix must be in the subtree that the metadata pointer points
+to.
+
+In the above example list of index keys, node A will contain:
+
+ SLOT CONTENT INDEX KEY (PREFIX)
+ ==== =============== ==================
+ 1 PTR TO NODE B 1*
+ any LEAF 9431809de993ba
+ any LEAF b4542910809cd
+ e PTR TO NODE F e*
+ any LEAF f3842239082
+
+and node B:
+
+ 3 PTR TO NODE C 13*
+ any LEAF 1458952489
+
+
+SHORTCUTS
+---------
+
+Shortcuts are metadata records that jump over a piece of keyspace. A shortcut
+is a replacement for a series of single-occupancy nodes ascending through the
+levels. Shortcuts exist to save memory and to speed up traversal.
+
+It is possible for the root of the tree to be a shortcut - say, for example,
+the tree contains at least 17 objects all with key prefix '1111'. The
+insertion algorithm will insert a shortcut to skip over the '1111' keyspace in
+a single bound and get to the fourth level, where the keys actually begin to
+differ.
+
+
+SPLITTING AND COLLAPSING NODES
+------------------------------
+
+Each node has a maximum capacity of 16 leaves and metadata pointers. If the
+insertion algorithm finds that it is trying to insert a 17th object into a
+node, that node will be split such that at least two leaves that have a common
+key segment at that level end up in a separate node rooted on that slot for
+that common key segment.
+
+If the leaves in a full node and the leaf that is being inserted are
+sufficiently similar, then a shortcut will be inserted into the tree.
+
+When the number of objects in the subtree rooted at a node falls to 16 or
+fewer, then the subtree will be collapsed down to a single node - and this will
+ripple towards the root if possible.
+
+
+NON-RECURSIVE ITERATION
+-----------------------
+
+Each node and shortcut contains a back pointer to its parent and the number of
+the slot in that parent that points to it. Non-recursive iteration uses these
+to proceed rootwards through the tree, moving on to slot N + 1 in the parent
+node, so that progress is made without the need for a stack.
+
+The backpointers, however, make simultaneous alteration and iteration tricky.
+
+
+SIMULTANEOUS ALTERATION AND ITERATION
+-------------------------------------
+
+There are a number of cases to consider:
+
+ (1) Simple insert/replace. This involves simply replacing a NULL or old
+ matching leaf pointer with the pointer to the new leaf after a barrier.
+ The metadata blocks don't change otherwise. An old leaf won't be freed
+ until after the RCU grace period.
+
+ (2) Simple delete. This involves just clearing an old matching leaf. The
+ metadata blocks don't change otherwise. The old leaf won't be freed until
+ after the RCU grace period.
+
+ (3) Insertion replacing part of a subtree that we haven't yet entered. This
+ may involve replacement of part of that subtree - but that won't affect
+ the iteration as we won't have reached the pointer to it yet and the
+ ancestry blocks are not replaced (the layout of those does not change).
+
+ (4) Insertion replacing nodes that we're actively processing. This isn't a
+ problem as we've passed the anchoring pointer and won't switch onto the
+ new layout until we follow the back pointers - at which point we've
+ already examined the leaves in the replaced node (we iterate over all the
+ leaves in a node before following any of its metadata pointers).
+
+ We might, however, re-see some leaves that have been split out into a new
+ branch that's in a slot further along than we were at.
+
+ (5) Insertion replacing nodes that we're processing a dependent branch of.
+ This won't affect us until we follow the back pointers. Similar to (4).
+
+ (6) Deletion collapsing a branch under us. This doesn't affect us because the
+ back pointers will get us back to the parent of the new node before we
+ could see the new node. The entire collapsed subtree is thrown away
+ unchanged - and will still be rooted on the same slot, so we shouldn't
+ process it a second time as we'll go back to slot + 1.
+
+Note:
+
+ (*) Under some circumstances, we need to simultaneously change the parent
+ pointer and the parent slot pointer on a node (say, for example, we
+ inserted another node before it and moved it up a level). We cannot do
+ this without locking against a read - so we have to replace that node too.
+
+ However, when we're changing a shortcut into a node this isn't a problem
+ as shortcuts only have one slot and so the parent slot number isn't used
+ when traversing backwards over one. This means that it's okay to change
+ the slot number first - provided suitable barriers are used to make sure
+ the parent slot number is read after the back pointer.
+
+Obsolete blocks and leaves are freed up after an RCU grace period has passed,
+so as long as anyone doing walking or iteration holds the RCU read lock, the
+old superstructure should not go away on them.
--- /dev/null
+Null block device driver
+================================================================================
+
+I. Overview
+
+The null block device (/dev/nullb*) is used for benchmarking the various
+block-layer implementations. It emulates a block device of X gigabytes in size.
+The following instances are possible:
+
+ Single-queue block-layer
+ - Request-based.
+ - Single submission queue per device.
+ - Implements IO scheduling algorithms (CFQ, Deadline, noop).
+ Multi-queue block-layer
+ - Request-based.
+ - Configurable submission queues per device.
+ No block-layer (known as bio-based)
+ - Bio-based. IO requests are submitted directly to the device driver.
+ - Directly accepts bio data structure and returns them.
+
+All of them have a completion queue for each core in the system.
+
+II. Module parameters applicable for all instances:
+
+queue_mode=[0-2]: Default: 2 (multi-queue)
+ Selects which block-layer the module should instantiate with.
+
+ 0: Bio-based.
+ 1: Single-queue.
+ 2: Multi-queue.
+
+home_node=[0..nr_nodes]: Default: NUMA_NO_NODE
+ Selects what CPU node the data structures are allocated from.
+
+gb=[Size in GB]: Default: 250GB
+ The size of the device reported to the system.
+
+bs=[Block size (in bytes)]: Default: 512 bytes
+ The block size reported to the system.
+
+nr_devices=[Number of devices]: Default: 2
+ Number of block devices instantiated. They are instantiated as /dev/nullb0,
+ etc.
+
+irq_mode=[0-2]: Default: 1 (soft-irq)
+ The completion mode used for completing IOs to the block-layer.
+
+ 0: None.
+ 1: Soft-irq. Uses IPI to complete IOs across CPU nodes. Simulates the overhead
+ when IOs are issued from a CPU node other than the home node the device is
+ connected to.
+ 2: Timer: Waits a specific period (completion_nsec) for each IO before
+ completion.
+
+completion_nsec=[ns]: Default: 10,000ns
+ Used with irq_mode=2 (timer); the time each completion event must wait.
+
+submit_queues=[0..nr_cpus]:
+ The number of submission queues attached to the device driver. If unset, it
+ defaults to 1 on single-queue and bio-based instances. For multi-queue,
+ it is ignored when the use_per_node_hctx module parameter is 1.
+
+hw_queue_depth=[0..qdepth]: Default: 64
+ The hardware queue depth of the device.
+
+III: Multi-queue specific parameters
+
+use_per_node_hctx=[0/1]: Default: 0
+ 0: The number of submit queues is set to the value of the submit_queues
+ parameter.
+ 1: The multi-queue block layer is instantiated with a hardware dispatch
+ queue for each CPU node in the system.
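+
+Example of usage (parameter values chosen purely for illustration):
+
+ % modprobe null_blk queue_mode=2 nr_devices=4 gb=4 bs=4096
+
+This instantiates four 4GB multi-queue devices, /dev/nullb0 to /dev/nullb3,
+each reporting a 4096-byte block size.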
Invalidation is removing an entry from the cache without writing it
back. Cache blocks can be invalidated via the invalidate_cblocks
message, which takes an arbitrary number of cblock ranges. Each cblock
-must be expressed as a decimal value, in the future a variant message
-that takes cblock ranges expressed in hexidecimal may be needed to
-better support efficient invalidation of larger caches. The cache must
-be in passthrough mode when invalidate_cblocks is used.
+range's end value is "one past the end", meaning 5-10 expresses a range
+of values from 5 to 9. Each cblock must be expressed as a decimal
+value; in the future, a variant message that takes cblock ranges
+expressed in hexadecimal may be needed to better support efficient
+invalidation of larger caches. The cache must be in passthrough mode
+when invalidate_cblocks is used.
invalidate_cblocks [<cblock>|<cblock begin>-<cblock end>]*
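+
+For example (the device name and cblock values are for illustration only):
+
+ dmsetup message my_cache 0 invalidate_cblocks 2345 3300-3330
+
+invalidates cblock 2345 and the range 3300 to 3329.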
Required properties:
- compatible : Should be "ti,omap3-mpu" for OMAP3
Should be "ti,omap4-mpu" for OMAP4
+ Should be "ti,omap5-mpu" for OMAP5
- ti,hwmods: "mpu"
Examples:
+- For an OMAP5 SMP system:
+
+mpu {
+ compatible = "ti,omap5-mpu";
+ ti,hwmods = "mpu";
+};
+
- For an OMAP4 SMP system:
mpu {
Required properties:
- compatible : should be one of
+ "arm,armv8-pmuv3"
"arm,cortex-a15-pmu"
"arm,cortex-a9-pmu"
"arm,cortex-a8-pmu"
/* NTC thermistor is a hwmon device */
ncp15wb473@0 {
compatible = "ntc,ncp15wb473";
- pullup-uV = <1800000>;
+ pullup-uv = <1800000>;
pullup-ohm = <47000>;
pulldown-ohm = <0>;
io-channels = <&adc 4>;
Required Properties:
-- comptible: should be one of the following.
+- compatible: should be one of the following.
- "samsung,exynos4210-clock" - controller compatible with Exynos4210 SoC.
- "samsung,exynos4412-clock" - controller compatible with Exynos4412 SoC.
Required Properties:
-- comptible: should be one of the following.
+- compatible: should be one of the following.
- "samsung,exynos5250-clock" - controller compatible with Exynos5250 SoC.
- reg: physical base address of the controller and length of memory mapped
Required Properties:
-- comptible: should be one of the following.
+- compatible: should be one of the following.
- "samsung,exynos5420-clock" - controller compatible with Exynos5420 SoC.
- reg: physical base address of the controller and length of memory mapped
Required Properties:
-- comptible: should be "samsung,exynos5440-clock".
+- compatible: should be "samsung,exynos5440-clock".
- reg: physical base address of the controller and length of memory mapped
region.
dependent:
- bit 7-0: peripheral identifier for the hardware handshaking interface. The
identifier can be different for tx and rx.
- - bit 11-8: FIFO configuration. 0 for half FIFO, 1 for ALAP, 1 for ASAP.
+ - bit 11-8: FIFO configuration. 0 for half FIFO, 1 for ALAP, 2 for ASAP.
Example:
Every GPIO controller node must have #gpio-cells property defined,
this information will be used to translate gpio-specifiers.
+See bindings/gpio/gpio.txt for details of how to specify GPIO
+information for devices.
+
+The GPIO module usually is connected to the SoC's internal interrupt
+controller, see bindings/interrupt-controller/interrupts.txt (the
+interrupt client nodes section) for details how to specify this GPIO
+module's interrupt.
+
+The GPIO module may serve as another interrupt controller (cascaded to
+the SoC's internal interrupt controller). See the interrupt controller
+nodes section in bindings/interrupt-controller/interrupts.txt for
+details.
Required properties:
-- compatible : "fsl,<CHIP>-gpio" followed by "fsl,mpc8349-gpio" for
- 83xx, "fsl,mpc8572-gpio" for 85xx and "fsl,mpc8610-gpio" for 86xx.
-- #gpio-cells : Should be two. The first cell is the pin number and the
- second cell is used to specify optional parameters (currently unused).
- - interrupts : Interrupt mapping for GPIO IRQ.
- - interrupt-parent : Phandle for the interrupt controller that
- services interrupts for this device.
-- gpio-controller : Marks the port as GPIO controller.
+- compatible: "fsl,<chip>-gpio" followed by "fsl,mpc8349-gpio"
+ for 83xx, "fsl,mpc8572-gpio" for 85xx, or
+ "fsl,mpc8610-gpio" for 86xx.
+- #gpio-cells: Should be two. The first cell is the pin number
+ and the second cell is used to specify optional
+ parameters (currently unused).
+- interrupt-parent: Phandle for the interrupt controller that
+ services interrupts for this device.
+- interrupts: Interrupt mapping for GPIO IRQ.
+- gpio-controller: Marks the port as GPIO controller.
+
+Optional properties:
+- interrupt-controller: Empty boolean property which marks the GPIO
+ module as an IRQ controller.
+- #interrupt-cells: Should be two. Defines the number of integer
+ cells required to specify an interrupt within
+ this interrupt controller. The first cell
+ defines the pin number, the second cell
+ defines additional flags (trigger type,
+ trigger polarity). Note that the available
+ set of trigger conditions supported by the
+ GPIO module depends on the actual SoC.
Example of gpio-controller nodes for a MPC8347 SoC:
#gpio-cells = <2>;
compatible = "fsl,mpc8347-gpio", "fsl,mpc8349-gpio";
reg = <0xc00 0x100>;
- interrupts = <74 0x8>;
interrupt-parent = <&ipic>;
+ interrupts = <74 0x8>;
gpio-controller;
+ interrupt-controller;
+ #interrupt-cells = <2>;
};
gpio2: gpio-controller@d00 {
#gpio-cells = <2>;
compatible = "fsl,mpc8347-gpio", "fsl,mpc8349-gpio";
reg = <0xd00 0x100>;
- interrupts = <75 0x8>;
interrupt-parent = <&ipic>;
+ interrupts = <75 0x8>;
gpio-controller;
};
-See booting-without-of.txt for details of how to specify GPIO
-information for devices.
-
-To use GPIO pins as interrupt sources for peripherals, specify the
-GPIO controller as the interrupt parent and define GPIO number +
-trigger mode using the interrupts property, which is defined like
-this:
-
-interrupts = <number trigger>, where:
- - number: GPIO pin (0..31)
- - trigger: trigger mode:
- 2 = trigger on falling edge
- 3 = trigger on both edges
-
-Example of device using this is:
+Example of a peripheral using the GPIO module as an IRQ controller:
funkyfpga@0 {
compatible = "funky-fpga";
...
- interrupts = <4 3>;
interrupt-parent = <&gpio1>;
+ interrupts = <4 3>;
};
I2C for OMAP platforms
Required properties :
-- compatible : Must be "ti,omap3-i2c" or "ti,omap4-i2c"
+- compatible : Must be "ti,omap2420-i2c", "ti,omap2430-i2c", "ti,omap3-i2c"
+ or "ti,omap4-i2c"
- ti,hwmods : Must be "i2c<n>", n being the instance number (1-based)
- #address-cells = <1>;
- #size-cells = <0>;
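+
+Example (a hypothetical OMAP4 instance; the unit address is illustrative):
+
+i2c1: i2c@48070000 {
+	compatible = "ti,omap4-i2c";
+	ti,hwmods = "i2c1";
+	#address-cells = <1>;
+	#size-cells = <0>;
+};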
adt7461 +/-1C TDM Extended Temp Range I.C
at,24c08 i2c serial eeprom (24cxx)
atmel,24c02 i2c serial eeprom (24cxx)
+atmel,at97sc3204t i2c trusted platform module (TPM)
catalyst,24c32 i2c serial eeprom
dallas,ds1307 64 x 8, Serial, I2C Real-Time Clock
dallas,ds1338 I2C RTC with 56-Byte NV RAM
fsl,mma8450 MMA8450Q: Xtrinsic Low-power, 3-axis Xtrinsic Accelerometer
fsl,mpr121 MPR121: Proximity Capacitive Touch Sensor Controller
fsl,sgtl5000 SGTL5000: Ultra Low-Power Audio Codec
+gmt,g751 G751: Digital Temperature Sensor and Thermal Watchdog with Two-Wire Interface
infineon,slb9635tt Infineon SLB9635 (Soft-) I2C TPM (old protocol, max 100khz)
infineon,slb9645tt Infineon SLB9645 I2C TPM (new protocol, max 400khz)
maxim,ds1050 5 Bit Programmable, Pulse-Width Modulator
national,lm75 I2C TEMP SENSOR
national,lm80 Serial Interface ACPI-Compatible Microprocessor System Hardware Monitor
national,lm92 ±0.33°C Accurate, 12-Bit + Sign Temperature Sensor and Thermal Window Comparator with Two-Wire Interface
+nuvoton,npct501 i2c trusted platform module (TPM)
nxp,pca9556 Octal SMBus and I2C registered interface
nxp,pca9557 8-bit I2C-bus and SMBus I/O port with reset
nxp,pcf8563 Real-time clock/calendar
ti,tsc2003 I2C Touch-Screen Controller
ti,tmp102 Low Power Digital Temperature Sensor with SMBUS/Two Wire Serial Interface
ti,tmp275 Digital Temperature Sensor
+winbond,wpct301 i2c trusted platform module (TPM)
--- /dev/null
+* TI MMC host controller for OMAP1 and 2420
+
+The MMC Host Controller on TI OMAP1 and 2420 family provides
+an interface for MMC, SD, and SDIO types of memory cards.
+
+This file documents differences between the core properties described
+by mmc.txt and the properties used by the omap mmc driver.
+
+Note that this driver will not work with omap2430 or later omaps,
+please see the omap hsmmc driver for the current omaps.
+
+Required properties:
+- compatible: Must be "ti,omap2420-mmc", for OMAP2420 controllers
+- ti,hwmods: For 2420, must be "msdi<n>", where n is the controller
+ instance number, starting at 1
+
+Examples:
+
+ msdi1: mmc@4809c000 {
+ compatible = "ti,omap2420-mmc";
+ ti,hwmods = "msdi1";
+ reg = <0x4809c000 0x80>;
+ interrupts = <83>;
+ dmas = <&sdma 61 &sdma 62>;
+ dma-names = "tx", "rx";
+ };
+
for the davinci_emac interface contains.
Required properties:
-- compatible: "ti,davinci-dm6467-emac";
+- compatible: "ti,davinci-dm6467-emac" or "ti,am3517-emac"
- reg: Offset and length of the register set for the device
- ti,davinci-ctrl-reg-offset: offset to control register
- ti,davinci-ctrl-mod-reg-offset: offset to control module register
only if property "phy-reset-gpios" is available. Missing the property
will have the duration be 1 millisecond. Numbers greater than 1000 are
invalid and 1 millisecond will be used instead.
+- phy-supply: regulator that powers the Ethernet PHY.
Example:
phy-mode = "mii";
phy-reset-gpios = <&gpio2 14 0>; /* GPIO2_14 */
local-mac-address = [00 04 9F 01 1B B9];
+ phy-supply = <&reg_fec_supply>;
};
Optional properties:
- phy-device : phandle to Ethernet phy
- local-mac-address : Ethernet mac address to use
+- reg-io-width : Mask of sizes (in bytes) of the IO accesses that
+ are supported on the device. Valid value for SMSC LAN91c111 are
+ 1, 2 or 4. If it's omitted or invalid, the size would be 2 meaning
+ 16-bit access only.
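+
+Example (a hypothetical node; the address values are for illustration only):
+
+ethernet@10000000 {
+	compatible = "smsc,lan91c111";
+	reg = <0x10000000 0x100>;
+	reg-io-width = <4>;	/* device wired for 32-bit accesses */
+};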
-* Freescale 83xx DMA Controller
+* Freescale DMA Controllers
-Freescale PowerPC 83xx have on chip general purpose DMA controllers.
+** Freescale Elo DMA Controller
+ This is a little-endian 4-channel DMA controller, used in Freescale mpc83xx
+ series chips such as mpc8315, mpc8349, mpc8379 etc.
Required properties:
-- compatible : compatible list, contains 2 entries, first is
- "fsl,CHIP-dma", where CHIP is the processor
- (mpc8349, mpc8360, etc.) and the second is
- "fsl,elo-dma"
-- reg : <registers mapping for DMA general status reg>
-- ranges : Should be defined as specified in 1) to describe the
- DMA controller channels.
+- compatible : must include "fsl,elo-dma"
+- reg : DMA General Status Register, i.e. DGSR which contains
+ status for all the 4 DMA channels
+- ranges : describes the mapping between the address space of the
+ DMA channels and the address space of the DMA controller
- cell-index : controller index. 0 for controller @ 0x8100
-- interrupts : <interrupt mapping for DMA IRQ>
+- interrupts : interrupt specifier for DMA IRQ
- interrupt-parent : optional, if needed for interrupt mapping
-
- DMA channel nodes:
- - compatible : compatible list, contains 2 entries, first is
- "fsl,CHIP-dma-channel", where CHIP is the processor
- (mpc8349, mpc8350, etc.) and the second is
- "fsl,elo-dma-channel". However, see note below.
- - reg : <registers mapping for channel>
- - cell-index : dma channel index starts at 0.
+ - compatible : must include "fsl,elo-dma-channel"
+ However, see note below.
+ - reg : DMA channel specific registers
+ - cell-index : DMA channel index starts at 0.
Optional properties:
- - interrupts : <interrupt mapping for DMA channel IRQ>
- (on 83xx this is expected to be identical to
- the interrupts property of the parent node)
+ - interrupts : interrupt specifier for DMA channel IRQ
+ (on 83xx this is expected to be identical to
+ the interrupts property of the parent node)
- interrupt-parent : optional, if needed for interrupt mapping
Example:
};
};
-* Freescale 85xx/86xx DMA Controller
-
-Freescale PowerPC 85xx/86xx have on chip general purpose DMA controllers.
+** Freescale EloPlus DMA Controller
+ This is a 4-channel DMA controller with extended addresses and chaining,
+ mainly used in Freescale mpc85xx/86xx, Pxxx and BSC series chips, such as
+ mpc8540, mpc8641, p4080, bsc9131, etc.
Required properties:
-- compatible : compatible list, contains 2 entries, first is
- "fsl,CHIP-dma", where CHIP is the processor
- (mpc8540, mpc8540, etc.) and the second is
- "fsl,eloplus-dma"
-- reg : <registers mapping for DMA general status reg>
+- compatible : must include "fsl,eloplus-dma"
+- reg : DMA General Status Register, i.e. DGSR which contains
+ status for all the 4 DMA channels
- cell-index : controller index. 0 for controller @ 0x21000,
1 for controller @ 0xc000
-- ranges : Should be defined as specified in 1) to describe the
- DMA controller channels.
+- ranges : describes the mapping between the address space of the
+ DMA channels and the address space of the DMA controller
- DMA channel nodes:
- - compatible : compatible list, contains 2 entries, first is
- "fsl,CHIP-dma-channel", where CHIP is the processor
- (mpc8540, mpc8560, etc.) and the second is
- "fsl,eloplus-dma-channel". However, see note below.
- - cell-index : dma channel index starts at 0.
- - reg : <registers mapping for channel>
- - interrupts : <interrupt mapping for DMA channel IRQ>
+ - compatible : must include "fsl,eloplus-dma-channel"
+ However, see note below.
+ - cell-index : DMA channel index starts at 0.
+ - reg : DMA channel specific registers
+ - interrupts : interrupt specifier for DMA channel IRQ
- interrupt-parent : optional, if needed for interrupt mapping
Example:
};
};
+** Freescale Elo3 DMA Controller
+ This is a DMA controller with the same function as EloPlus except that Elo3
+ has 8 channels while EloPlus has only 4. It is used in Freescale Txxx and
+ Bxxx series chips, such as t1040, t4240 and b4860.
+
+Required properties:
+
+- compatible : must include "fsl,elo3-dma"
+- reg : contains two entries for DMA General Status Registers,
+ i.e. DGSR0 which includes status for channels 1~4, and
+ DGSR1 for channels 5~8
+- ranges : describes the mapping between the address space of the
+ DMA channels and the address space of the DMA controller
+
+- DMA channel nodes:
+ - compatible : must include "fsl,eloplus-dma-channel"
+ - reg : DMA channel specific registers
+ - interrupts : interrupt specifier for DMA channel IRQ
+ - interrupt-parent : optional, if needed for interrupt mapping
+
+Example:
+dma@100300 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "fsl,elo3-dma";
+ reg = <0x100300 0x4>,
+ <0x100600 0x4>;
+ ranges = <0x0 0x100100 0x500>;
+ dma-channel@0 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x0 0x80>;
+ interrupts = <28 2 0 0>;
+ };
+ dma-channel@80 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x80 0x80>;
+ interrupts = <29 2 0 0>;
+ };
+ dma-channel@100 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x100 0x80>;
+ interrupts = <30 2 0 0>;
+ };
+ dma-channel@180 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x180 0x80>;
+ interrupts = <31 2 0 0>;
+ };
+ dma-channel@300 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x300 0x80>;
+ interrupts = <76 2 0 0>;
+ };
+ dma-channel@380 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x380 0x80>;
+ interrupts = <77 2 0 0>;
+ };
+ dma-channel@400 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x400 0x80>;
+ interrupts = <78 2 0 0>;
+ };
+ dma-channel@480 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x480 0x80>;
+ interrupts = <79 2 0 0>;
+ };
+};
+
Note on DMA channel compatible properties: The compatible property must say
"fsl,elo-dma-channel" or "fsl,eloplus-dma-channel" to be used by the Elo DMA
driver (fsldma). Any DMA channel used by fsldma cannot be used by another
--- /dev/null
+Qualcomm MSM pseudo-random number generator.
+
+Required properties:
+
+- compatible : should be "qcom,prng"
+- reg : specifies base physical address and size of the registers map
+- clocks : phandle to clock-controller plus clock-specifier pair
+- clock-names : must be "core"; this clock feeds all registers, the FIFO and circuits in the PRNG IP block
+
+Example:
+
+ rng@f9bff000 {
+ compatible = "qcom,prng";
+ reg = <0xf9bff000 0x200>;
+ clocks = <&clock GCC_PRNG_AHB_CLK>;
+ clock-names = "core";
+ };
+++ /dev/null
-NVIDIA Tegra 2 SPI device
-
-Required properties:
-- compatible : should be "nvidia,tegra20-spi".
-- gpios : should specify GPIOs used for chipselect.
fsl Freescale Semiconductor
GEFanuc GE Fanuc Intelligent Platforms Embedded Systems, Inc.
gef GE Fanuc Intelligent Platforms Embedded Systems, Inc.
+gmt Global Mixed-mode Technology, Inc.
hisilicon Hisilicon Limited.
hp Hewlett Packard
ibm International Business Machines (IBM)
idt Integrated Device Technologies, Inc.
img Imagination Technologies Ltd.
intercontrol Inter Control Group
+lg LG Corporation
linux Linux-specific binding
lsi LSI Corp. (LSI Logic)
marvell Marvell Technology Group Ltd.
Part 2 - When dmatest is built as a module...
-After mounting debugfs and loading the module, the /sys/kernel/debug/dmatest
-folder with nodes will be created. There are two important files located. First
-is the 'run' node that controls run and stop phases of the test, and the second
-one, 'results', is used to get the test case results.
-
-Note that in this case test will not run on load automatically.
-
Example of usage:
+ % modprobe dmatest channel=dma0chan0 timeout=2000 iterations=1 run=1
+
+...or:
+ % modprobe dmatest
% echo dma0chan0 > /sys/module/dmatest/parameters/channel
% echo 2000 > /sys/module/dmatest/parameters/timeout
% echo 1 > /sys/module/dmatest/parameters/iterations
- % echo 1 > /sys/kernel/debug/dmatest/run
+ % echo 1 > /sys/module/dmatest/parameters/run
+
+...or on the kernel command line:
+
+ dmatest.channel=dma0chan0 dmatest.timeout=2000 dmatest.iterations=1 dmatest.run=1
Hint: available channel list could be extracted by running the following
command:
% ls -1 /sys/class/dma/
-After a while you will start to get messages about current status or error like
-in the original code.
+Once started, a message like "dmatest: Started 1 threads using dma0chan0" is
+emitted. After that, only test failure messages are reported until the test
+stops.
Note that running a new test will not stop any in progress test.
-The following command should return actual state of the test.
- % cat /sys/kernel/debug/dmatest/run
-
-To wait for test done the user may perform a busy loop that checks the state.
-
- % while [ $(cat /sys/kernel/debug/dmatest/run) = "Y" ]
- > do
- > echo -n "."
- > sleep 1
- > done
- > echo
+The following command returns the state of the test.
+ % cat /sys/module/dmatest/parameters/run
+
+To wait for test completion, userspace can poll 'run' until it is false, or use
+the wait parameter. Specifying 'wait=1' when loading the module causes module
+initialization to pause until a test run has completed, while reading
+/sys/module/dmatest/parameters/wait waits for any running test to complete
+before returning. For example, the following scripts wait for 42 tests
+to complete before exiting. Note that if 'iterations' is set to 'infinite' then
+waiting is disabled.
+
+Example:
+ % modprobe dmatest run=1 iterations=42 wait=1
+ % modprobe -r dmatest
+...or:
+ % modprobe dmatest run=1 iterations=42
+ % cat /sys/module/dmatest/parameters/wait
+ % modprobe -r dmatest
Part 3 - When built-in in the kernel...
Part 4 - Gathering the test results
-The module provides a storage for the test results in the memory. The gathered
-data could be used after test is done.
+Test results are printed to the kernel log buffer with the format:
-The special file 'results' in the debugfs represents gathered data of the in
-progress test. The messages collected are printed to the kernel log as well.
+"dmatest: result <channel>: <test id>: '<error msg>' with src_off=<val> dst_off=<val> len=<val> (<err code>)"
Example of output:
- % cat /sys/kernel/debug/dmatest/results
- dma0chan0-copy0: #1: No errors with src_off=0x7bf dst_off=0x8ad len=0x3fea (0)
+ % dmesg | tail -n 1
+ dmatest: result dma0chan0-copy0: #1: No errors with src_off=0x7bf dst_off=0x8ad len=0x3fea (0)
The message format is unified across the different types of errors. A number in
the parens represents additional information, e.g. error code, error counter,
-or status.
+or status. A test thread also emits a summary line at completion listing the
+number of tests executed, the number that failed, and a result code.
-Comparison between buffers is stored to the dedicated structure.
+Example:
+ % dmesg | tail -n 1
+ dmatest: dma0chan0-copy0: summary 1 test, 0 failures 1000 iops 100000 KB/s (0)
-Note that the verify result is now accessible only via file 'results' in the
-debugfs.
+The details of a data miscompare error are also emitted, but do not follow the
+above format.
See comments at the top of fs/btrfs/check-integrity.c for more info.
+ commit=<seconds>
+ Set the interval of periodic commit, 30 seconds by default. Higher
+ values defer syncing data to permanent storage, with obvious
+ consequences should the system crash. The upper bound is not enforced,
+ but a warning is printed if it's more than 300 seconds (5 minutes).
+
compress
compress=<type>
compress-force
Currently this scans a list of several previous tree roots and tries to
use the first readable.
- skip_balance
+ rescan_uuid_tree
+ Force check and rebuild procedure of the UUID tree. This should not
+ normally be needed.
+
+ skip_balance
Skip automatic resume of interrupted balance operation after mount.
May be resumed with "btrfs balance resume."
These include the following tools:
-mkfs.btrfs: create a filesystem
-
-btrfsctl: control program to create snapshots and subvolumes:
+* mkfs.btrfs: create a filesystem
- mount /dev/sda2 /mnt
- btrfsctl -s new_subvol_name /mnt
- btrfsctl -s snapshot_of_default /mnt/default
- btrfsctl -s snapshot_of_new_subvol /mnt/new_subvol_name
- btrfsctl -s snapshot_of_a_snapshot /mnt/snapshot_of_new_subvol
- ls /mnt
- default snapshot_of_a_snapshot snapshot_of_new_subvol
- new_subvol_name snapshot_of_default
+* btrfs: a single tool to manage the filesystems; refer to the manpage for more details
- Snapshots and subvolumes cannot be deleted right now, but you can
- rm -rf all the files and directories inside them.
+* 'btrfsck' or 'btrfs check': do a consistency check of the filesystem
-btrfsck: do a limited check of the FS extent trees.
+Other tools for specific tasks:
-btrfs-debug-tree: print all of the FS metadata in text form. Example:
+* btrfs-convert: in-place conversion from ext2/3/4 filesystems
- btrfs-debug-tree /dev/sda2 >& big_output_file
+* btrfs-image: dump filesystem metadata for debugging
+++ /dev/null
-GPIO Interfaces
-
-This provides an overview of GPIO access conventions on Linux.
-
-These calls use the gpio_* naming prefix. No other calls should use that
-prefix, or the related __gpio_* prefix.
-
-
-What is a GPIO?
-===============
-A "General Purpose Input/Output" (GPIO) is a flexible software-controlled
-digital signal. They are provided from many kinds of chip, and are familiar
-to Linux developers working with embedded and custom hardware. Each GPIO
-represents a bit connected to a particular pin, or "ball" on Ball Grid Array
-(BGA) packages. Board schematics show which external hardware connects to
-which GPIOs. Drivers can be written generically, so that board setup code
-passes such pin configuration data to drivers.
-
-System-on-Chip (SOC) processors heavily rely on GPIOs. In some cases, every
-non-dedicated pin can be configured as a GPIO; and most chips have at least
-several dozen of them. Programmable logic devices (like FPGAs) can easily
-provide GPIOs; multifunction chips like power managers, and audio codecs
-often have a few such pins to help with pin scarcity on SOCs; and there are
-also "GPIO Expander" chips that connect using the I2C or SPI serial busses.
-Most PC southbridges have a few dozen GPIO-capable pins (with only the BIOS
-firmware knowing how they're used).
-
-The exact capabilities of GPIOs vary between systems. Common options:
-
- - Output values are writable (high=1, low=0). Some chips also have
- options about how that value is driven, so that for example only one
- value might be driven ... supporting "wire-OR" and similar schemes
- for the other value (notably, "open drain" signaling).
-
- - Input values are likewise readable (1, 0). Some chips support readback
- of pins configured as "output", which is very useful in such "wire-OR"
- cases (to support bidirectional signaling). GPIO controllers may have
- input de-glitch/debounce logic, sometimes with software controls.
-
- - Inputs can often be used as IRQ signals, often edge triggered but
- sometimes level triggered. Such IRQs may be configurable as system
- wakeup events, to wake the system from a low power state.
-
- - Usually a GPIO will be configurable as either input or output, as needed
- by different product boards; single direction ones exist too.
-
- - Most GPIOs can be accessed while holding spinlocks, but those accessed
- through a serial bus normally can't. Some systems support both types.
-
-On a given board each GPIO is used for one specific purpose like monitoring
-MMC/SD card insertion/removal, detecting card writeprotect status, driving
-a LED, configuring a transceiver, bitbanging a serial bus, poking a hardware
-watchdog, sensing a switch, and so on.
-
-
-GPIO conventions
-================
-Note that this is called a "convention" because you don't need to do it this
-way, and it's no crime if you don't. There **are** cases where portability
-is not the main issue; GPIOs are often used for the kind of board-specific
-glue logic that may even change between board revisions, and can't ever be
-used on a board that's wired differently. Only least-common-denominator
-functionality can be very portable. Other features are platform-specific,
-and that can be critical for glue logic.
-
-Plus, this doesn't require any implementation framework, just an interface.
-One platform might implement it as simple inline functions accessing chip
-registers; another might implement it by delegating through abstractions
-used for several very different kinds of GPIO controller. (There is some
-optional code supporting such an implementation strategy, described later
-in this document, but drivers acting as clients to the GPIO interface must
-not care how it's implemented.)
-
-That said, if the convention is supported on their platform, drivers should
-use it when possible. Platforms must select ARCH_REQUIRE_GPIOLIB or
-ARCH_WANT_OPTIONAL_GPIOLIB in their Kconfig. Drivers that can't work without
-standard GPIO calls should have Kconfig entries which depend on GPIOLIB. The
-GPIO calls are available, either as "real code" or as optimized-away stubs,
-when drivers use the include file:
-
- #include <linux/gpio.h>
-
-If you stick to this convention then it'll be easier for other developers to
-see what your code is doing, and help maintain it.
-
-Note that these operations include I/O barriers on platforms which need to
-use them; drivers don't need to add them explicitly.
-
-
-Identifying GPIOs
------------------
-GPIOs are identified by unsigned integers in the range 0..MAX_INT. That
-reserves "negative" numbers for other purposes like marking signals as
-"not available on this board", or indicating faults. Code that doesn't
-touch the underlying hardware treats these integers as opaque cookies.
-
-Platforms define how they use those integers, and usually #define symbols
-for the GPIO lines so that board-specific setup code directly corresponds
-to the relevant schematics. In contrast, drivers should only use GPIO
-numbers passed to them from that setup code, using platform_data to hold
-board-specific pin configuration data (along with other board specific
-data they need). That avoids portability problems.
-
-So for example one platform uses numbers 32-159 for GPIOs; while another
-uses numbers 0..63 with one set of GPIO controllers, 64-79 with another
-type of GPIO controller, and on one particular board 80-95 with an FPGA.
-The numbers need not be contiguous; either of those platforms could also
-use numbers 2000-2063 to identify GPIOs in a bank of I2C GPIO expanders.
-
-If you want to initialize a structure with an invalid GPIO number, use
-some negative number (perhaps "-EINVAL"); that will never be valid. To
-test if such number from such a structure could reference a GPIO, you
-may use this predicate:
-
- int gpio_is_valid(int number);
-
-A number that's not valid will be rejected by calls which may request
-or free GPIOs (see below). Other numbers may also be rejected; for
-example, a number might be valid but temporarily unused on a given board.
-
-Whether a platform supports multiple GPIO controllers is a platform-specific
-implementation issue, as are whether that support can leave "holes" in the space
-of GPIO numbers, and whether new controllers can be added at runtime. Such issues
-can affect things including whether adjacent GPIO numbers are both valid.
-
-Using GPIOs
------------
-The first thing a system should do with a GPIO is allocate it, using
-the gpio_request() call; see later.
-
-One of the next things to do with a GPIO, often in board setup code when
-setting up a platform_device using the GPIO, is mark its direction:
-
- /* set as input or output, returning 0 or negative errno */
- int gpio_direction_input(unsigned gpio);
- int gpio_direction_output(unsigned gpio, int value);
-
-The return value is zero for success, else a negative errno. It should
-be checked, since the get/set calls don't have error returns and since
-misconfiguration is possible. You should normally issue these calls from
-a task context. However, for spinlock-safe GPIOs it's OK to use them
-before tasking is enabled, as part of early board setup.
-
-For output GPIOs, the value provided becomes the initial output value.
-This helps avoid signal glitching during system startup.
-
-For compatibility with legacy interfaces to GPIOs, setting the direction
-of a GPIO implicitly requests that GPIO (see below) if it has not been
-requested already. That compatibility is being removed from the optional
-gpiolib framework.
-
-Setting the direction can fail if the GPIO number is invalid, or when
-that particular GPIO can't be used in that mode. It's generally a bad
-idea to rely on boot firmware to have set the direction correctly, since
-it probably wasn't validated to do more than boot Linux. (Similarly,
-that board setup code probably needs to multiplex that pin as a GPIO,
-and configure pullups/pulldowns appropriately.)
-
-
-Spinlock-Safe GPIO access
--------------------------
-Most GPIO controllers can be accessed with memory read/write instructions.
-Those don't need to sleep, and can safely be done from inside hard
-(nonthreaded) IRQ handlers and similar contexts.
-
-Use the following calls to access such GPIOs,
-for which gpio_cansleep() will always return false (see below):
-
- /* GPIO INPUT: return zero or nonzero */
- int gpio_get_value(unsigned gpio);
-
- /* GPIO OUTPUT */
- void gpio_set_value(unsigned gpio, int value);
-
-The values are boolean, zero for low, nonzero for high. When reading the
-value of an output pin, the value returned should be what's seen on the
-pin ... that won't always match the specified output value, because of
-issues including open-drain signaling and output latencies.
-
-The get/set calls have no error returns because "invalid GPIO" should have
-been reported earlier from gpio_direction_*(). However, note that not all
-platforms can read the value of output pins; those that can't should always
-return zero. Also, using these calls for GPIOs that can't safely be accessed
-without sleeping (see below) is an error.
-
-Platform-specific implementations are encouraged to optimize the two
-calls to access the GPIO value in cases where the GPIO number (and for
-output, value) are constant. It's normal for them to need only a couple
-of instructions in such cases (reading or writing a hardware register),
-and not to need spinlocks. Such optimized calls can make bitbanging
-applications a lot more efficient (in both space and time) than spending
-dozens of instructions on subroutine calls.
-
-
-GPIO access that may sleep
---------------------------
-Some GPIO controllers must be accessed using message based busses like I2C
-or SPI. Commands to read or write those GPIO values require waiting to
-get to the head of a queue to transmit a command and get its response.
-This requires sleeping, which can't be done from inside IRQ handlers.
-
-Platforms that support this type of GPIO distinguish them from other GPIOs
-by returning nonzero from this call (which requires a valid GPIO number,
-which should have been previously allocated with gpio_request):
-
- int gpio_cansleep(unsigned gpio);
-
-To access such GPIOs, a different set of accessors is defined:
-
- /* GPIO INPUT: return zero or nonzero, might sleep */
- int gpio_get_value_cansleep(unsigned gpio);
-
- /* GPIO OUTPUT, might sleep */
- void gpio_set_value_cansleep(unsigned gpio, int value);
-
-
-Accessing such GPIOs requires a context which may sleep, for example
-a threaded IRQ handler, and those accessors must be used instead of
-spinlock-safe accessors without the cansleep() name suffix.
-
-Other than the fact that these accessors might sleep, and will work
-on GPIOs that can't be accessed from hardIRQ handlers, these calls act
-the same as the spinlock-safe calls.
-
- ** IN ADDITION ** calls to setup and configure such GPIOs must be made
-from contexts which may sleep, since they may need to access the GPIO
-controller chip too: (These setup calls are usually made from board
-setup or driver probe/teardown code, so this is an easy constraint.)
-
- gpio_direction_input()
- gpio_direction_output()
- gpio_request()
-
-	gpio_request_one()
-	gpio_request_array()
-	gpio_free_array()
-
- gpio_free()
- gpio_set_debounce()
-
-
-
-Claiming and Releasing GPIOs
-----------------------------
-To help catch system configuration errors, two calls are defined.
-
- /* request GPIO, returning 0 or negative errno.
- * non-null labels may be useful for diagnostics.
- */
- int gpio_request(unsigned gpio, const char *label);
-
- /* release previously-claimed GPIO */
- void gpio_free(unsigned gpio);
-
-Passing invalid GPIO numbers to gpio_request() will fail, as will requesting
-GPIOs that have already been claimed with that call. The return value of
-gpio_request() must be checked. You should normally issue these calls from
-a task context. However, for spinlock-safe GPIOs it's OK to request GPIOs
-before tasking is enabled, as part of early board setup.
-
-These calls serve two basic purposes. One is marking the signals which
-are actually in use as GPIOs, for better diagnostics; systems may have
-several hundred potential GPIOs, but often only a dozen are used on any
-given board. Another is to catch conflicts, identifying errors when
-(a) two or more drivers wrongly think they have exclusive use of that
-signal, or (b) something wrongly believes it's safe to remove drivers
-needed to manage a signal that's in active use. That is, requesting a
-GPIO can serve as a kind of lock.
-
-Some platforms may also use knowledge about what GPIOs are active for
-power management, such as by powering down unused chip sectors and, more
-easily, gating off unused clocks.
-
-For GPIOs that use pins known to the pinctrl subsystem, that subsystem should
-be informed of their use; a gpiolib driver's .request() operation may call
-pinctrl_request_gpio(), and a gpiolib driver's .free() operation may call
-pinctrl_free_gpio(). The pinctrl subsystem allows a pinctrl_request_gpio()
-to succeed concurrently with a pin or pingroup being "owned" by a device for
-pin multiplexing.
-
-Any programming of pin multiplexing hardware that is needed to route the
-GPIO signal to the appropriate pin should occur within a GPIO driver's
-.direction_input() or .direction_output() operations, and occur after any
-setup of an output GPIO's value. This allows a glitch-free migration from a
-pin's special function to GPIO. This is sometimes required when using a GPIO
-to implement a workaround on signals typically driven by a non-GPIO HW block.
-
-Some platforms allow some or all GPIO signals to be routed to different pins.
-Similarly, other aspects of the GPIO or pin may need to be configured, such as
-pullup/pulldown. Platform software should arrange that any such details are
-configured prior to gpio_request() being called for those GPIOs, e.g. using
-the pinctrl subsystem's mapping table, so that GPIO users need not be aware
-of these details.
-
-Also note that it's your responsibility to have stopped using a GPIO
-before you free it.
-
-Considering that in most cases GPIOs are actually configured right after
-they are claimed, three additional calls are defined:
-
- /* request a single GPIO, with initial configuration specified by
- * 'flags', identical to gpio_request() wrt other arguments and
- * return value
- */
- int gpio_request_one(unsigned gpio, unsigned long flags, const char *label);
-
- /* request multiple GPIOs in a single call
- */
- int gpio_request_array(struct gpio *array, size_t num);
-
- /* release multiple GPIOs in a single call
- */
- void gpio_free_array(struct gpio *array, size_t num);
-
-where 'flags' is currently defined to specify the following properties:
-
- * GPIOF_DIR_IN - to configure direction as input
- * GPIOF_DIR_OUT - to configure direction as output
-
- * GPIOF_INIT_LOW - as output, set initial level to LOW
- * GPIOF_INIT_HIGH - as output, set initial level to HIGH
- * GPIOF_OPEN_DRAIN - gpio pin is open drain type.
- * GPIOF_OPEN_SOURCE - gpio pin is open source type.
-
- * GPIOF_EXPORT_DIR_FIXED - export gpio to sysfs, keep direction
- * GPIOF_EXPORT_DIR_CHANGEABLE - also export, allow changing direction
-
-Since the GPIOF_INIT_* flags are only valid when the GPIO is configured
-as output, the valid combinations are grouped as:
-
- * GPIOF_IN - configure as input
- * GPIOF_OUT_INIT_LOW - configured as output, initial level LOW
- * GPIOF_OUT_INIT_HIGH - configured as output, initial level HIGH
-
-Setting the GPIOF_OPEN_DRAIN flag marks the pin as open drain. Such pins
-will not be driven to 1 in output mode, so an external pull-up must be
-connected. With this flag enabled, gpiolib switches the pin to input mode
-when asked to output a value of 1, letting the pull-up take the pin HIGH;
-the pin is taken LOW by driving a value of 0 in output mode.
-
-Setting the GPIOF_OPEN_SOURCE flag marks the pin as open source. Such pins
-will not be driven to 0 in output mode, so an external pull-down must be
-connected. With this flag enabled, gpiolib switches the pin to input mode
-when asked to output a value of 0, letting the pull-down take the pin LOW;
-the pin is taken HIGH by driving a value of 1 in output mode.
-
-In the future, these flags can be extended to support more properties.
-
-Furthermore, to ease claiming and releasing multiple GPIOs, 'struct gpio'
-is introduced to encapsulate all three fields:
-
- struct gpio {
- unsigned gpio;
- unsigned long flags;
- const char *label;
- };
-
-A typical example of usage:
-
- static struct gpio leds_gpios[] = {
- { 32, GPIOF_OUT_INIT_HIGH, "Power LED" }, /* default to ON */
- { 33, GPIOF_OUT_INIT_LOW, "Green LED" }, /* default to OFF */
- { 34, GPIOF_OUT_INIT_LOW, "Red LED" }, /* default to OFF */
- { 35, GPIOF_OUT_INIT_LOW, "Blue LED" }, /* default to OFF */
- { ... },
- };
-
- err = gpio_request_one(31, GPIOF_IN, "Reset Button");
- if (err)
- ...
-
- err = gpio_request_array(leds_gpios, ARRAY_SIZE(leds_gpios));
- if (err)
- ...
-
- gpio_free_array(leds_gpios, ARRAY_SIZE(leds_gpios));
-
-
-GPIOs mapped to IRQs
---------------------
-GPIO numbers are unsigned integers; so are IRQ numbers. These make up
-two logically distinct namespaces (GPIO 0 need not use IRQ 0). You can
-map between them using calls like:
-
- /* map GPIO numbers to IRQ numbers */
- int gpio_to_irq(unsigned gpio);
-
- /* map IRQ numbers to GPIO numbers (avoid using this) */
- int irq_to_gpio(unsigned irq);
-
-Those return either the corresponding number in the other namespace, or
-else a negative errno code if the mapping can't be done. (For example,
-some GPIOs can't be used as IRQs.) It is an unchecked error to use a GPIO
-number that wasn't set up as an input using gpio_direction_input(), or
-to use an IRQ number that didn't originally come from gpio_to_irq().
-
-These two mapping calls are expected to cost on the order of a single
-addition or subtraction. They're not allowed to sleep.
-
-Non-error values returned from gpio_to_irq() can be passed to request_irq()
-or free_irq(). They will often be stored into IRQ resources for platform
-devices, by the board-specific initialization code. Note that IRQ trigger
-options are part of the IRQ interface, e.g. IRQF_TRIGGER_FALLING, as are
-system wakeup capabilities.
-
-Non-error values returned from irq_to_gpio() would most commonly be used
-with gpio_get_value(), for example to initialize or update driver state
-when the IRQ is edge-triggered. Note that some platforms don't support
-this reverse mapping, so you should avoid using it.
-
-
-Emulating Open Drain Signals
-----------------------------
-Sometimes shared signals need to use "open drain" signaling, where only the
-low signal level is actually driven. (That term applies to CMOS transistors;
-"open collector" is used for TTL.) A pullup resistor causes the high signal
-level. This is sometimes called a "wire-AND"; or more practically, from the
-negative logic (low=true) perspective this is a "wire-OR".
-
-One common example of an open drain signal is a shared active-low IRQ line.
-Also, bidirectional data bus signals sometimes use open drain signals.
-
-Some GPIO controllers directly support open drain outputs; many don't. When
-you need open drain signaling but your hardware doesn't directly support it,
-there's a common idiom you can use to emulate it with any GPIO pin that can
-be used as either an input or an output:
-
- LOW: gpio_direction_output(gpio, 0) ... this drives the signal
- and overrides the pullup.
-
- HIGH: gpio_direction_input(gpio) ... this turns off the output,
- so the pullup (or some other device) controls the signal.
-
-If you are "driving" the signal high but gpio_get_value(gpio) reports a low
-value (after the appropriate rise time passes), you know some other component
-is driving the shared signal low. That's not necessarily an error. As one
-common example, that's how I2C clocks are stretched: a slave that needs a
-slower clock delays the rising edge of SCK, and the I2C master adjusts its
-signaling rate accordingly.
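-
-As a minimal sketch of that idiom, assuming the GPIO has already been
-claimed with gpio_request() (the helper name here is hypothetical):
-
-	/* drive the shared line low, or release it so the pullup wins */
-	static int set_open_drain_value(unsigned gpio, int value)
-	{
-		if (value)
-			/* stop driving; the pullup raises the line */
-			return gpio_direction_input(gpio);
-		/* actively drive the line low, overriding the pullup */
-		return gpio_direction_output(gpio, 0);
-	}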
-
-
-GPIO controllers and the pinctrl subsystem
-------------------------------------------
-
-A GPIO controller on a SOC might be tightly coupled with the pinctrl
-subsystem, in the sense that the pins can be used by other functions
-together with an optional gpio feature. We have already covered the
-case where e.g. a GPIO controller needs to reserve a pin or set the
-direction of a pin by calling any of:
-
-pinctrl_request_gpio()
-pinctrl_free_gpio()
-pinctrl_gpio_direction_input()
-pinctrl_gpio_direction_output()
-
-But how does the pin control subsystem cross-correlate the GPIO
-numbers (which are a global business) to a certain pin on a certain
-pin controller?
-
-This is done by registering "ranges" of pins, which are essentially
-cross-reference tables. These are described in
-Documentation/pinctrl.txt
-
-While pin allocation is wholly managed by the pinctrl subsystem, the
-GPIOs themselves (under gpiolib) are still maintained by gpio drivers. It
-may happen that different pin ranges in a SoC are managed by different
-gpio drivers.
-
-This makes it logical to let gpio drivers announce their pin ranges to
-the pinctrl subsystem before it calls 'pinctrl_request_gpio', so that the
-corresponding pin is prepared by the pinctrl subsystem before any gpio
-usage.
-
-For this, the gpio controller can register its pin range with pinctrl
-subsystem. There are two ways of doing it currently: with or without DT.
-
-For DT support, refer to Documentation/devicetree/bindings/gpio/gpio.txt.
-
-For non-DT support, the driver can call gpiochip_add_pin_range() with
-appropriate parameters to register a range of gpio pins with a pinctrl
-driver. For this to work, the exact name string of the pinctrl device has
-to be passed as one of the arguments to this routine.
-
-
-What do these conventions omit?
-===============================
-One of the biggest things these conventions omit is pin multiplexing, since
-this is highly chip-specific and nonportable. One platform might not need
-explicit multiplexing; another might have just two options for use of any
-given pin; another might have eight options per pin; another might be able
-to route a given GPIO to any one of several pins. (Yes, those examples all
-come from systems that run Linux today.)
-
-Related to multiplexing is configuration and enabling of the pullups or
-pulldowns integrated on some platforms. Not all platforms support them,
-or support them in the same way; and any given board might use external
-pullups (or pulldowns) so that the on-chip ones should not be used.
-(When a circuit needs 5 kOhm, on-chip 100 kOhm resistors won't do.)
-Likewise drive strength (2 mA vs 20 mA) and voltage (1.8V vs 3.3V) is a
-platform-specific issue, as are models like (not) having a one-to-one
-correspondence between configurable pins and GPIOs.
-
-There are other system-specific mechanisms that are not specified here,
-like the aforementioned options for input de-glitching and wire-OR output.
-Hardware may support reading or writing GPIOs in gangs, but that's usually
-configuration dependent: for GPIOs sharing the same bank. (GPIOs are
-commonly grouped in banks of 16 or 32, with a given SOC having several such
-banks.) Some systems can trigger IRQs from output GPIOs, or read values
-from pins not managed as GPIOs. Code relying on such mechanisms will
-necessarily be nonportable.
-
-Dynamic definition of GPIOs is not currently standard; for example, as
-a side effect of configuring an add-on board with some GPIO expanders.
-
-
-GPIO implementor's framework (OPTIONAL)
-=======================================
-As noted earlier, there is an optional implementation framework making it
-easier for platforms to support different kinds of GPIO controller using
-the same programming interface. This framework is called "gpiolib".
-
-As a debugging aid, if debugfs is available a /sys/kernel/debug/gpio file
-will be found there. That will list all the controllers registered through
-this framework, and the state of the GPIOs currently in use.
-
-
-Controller Drivers: gpio_chip
------------------------------
-In this framework each GPIO controller is packaged as a "struct gpio_chip"
-with information common to each controller of that type:
-
- - methods to establish GPIO direction
- - methods used to access GPIO values
- - flag saying whether calls to its methods may sleep
- - optional debugfs dump method (showing extra state like pullup config)
- - label for diagnostics
-
-There is also per-instance data, which may come from device.platform_data:
-the number of its first GPIO, and how many GPIOs it exposes.
-
-The code implementing a gpio_chip should support multiple instances of the
-controller, possibly using the driver model. That code will configure each
-gpio_chip and issue gpiochip_add(). Removing a GPIO controller should be
-rare; use gpiochip_remove() when it is unavoidable.
-
-Most often a gpio_chip is part of an instance-specific structure with state
-not exposed by the GPIO interfaces, such as addressing, power management,
-and more. Chips such as codecs will have complex non-GPIO state.
-
-Any debugfs dump method should normally ignore signals which haven't been
-requested as GPIOs. They can use gpiochip_is_requested(), which returns
-either NULL or the label associated with that GPIO when it was requested.
-
-
-Platform Support
-----------------
-To support this framework, a platform's Kconfig will "select" either
-ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB
-and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines
-three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep().
-
-It may also provide a custom value for ARCH_NR_GPIOS, so that it better
-reflects the number of GPIOs in actual use on that platform, without
-wasting static table space. (It should count both built-in/SoC GPIOs and
-also ones on GPIO expanders.)
-
-ARCH_REQUIRE_GPIOLIB means that the gpiolib code will always get compiled
-into the kernel on that architecture.
-
-ARCH_WANT_OPTIONAL_GPIOLIB means the gpiolib code defaults to off and the user
-can enable it and build it into the kernel optionally.
-
-If neither of these options is selected, the platform does not support
-GPIOs through gpiolib and the code cannot be enabled by the user.
-
-Trivial implementations of those functions can directly use framework
-code, which always dispatches through the gpio_chip:
-
- #define gpio_get_value __gpio_get_value
- #define gpio_set_value __gpio_set_value
- #define gpio_cansleep __gpio_cansleep
-
-Fancier implementations could instead define those as inline functions with
-logic optimizing access to specific SOC-based GPIOs. For example, if the
-referenced GPIO is the constant "12", getting or setting its value could
-cost as little as two or three instructions, never sleeping. When such an
-optimization is not possible those calls must delegate to the framework
-code, costing at least a few dozen instructions. For bitbanged I/O, such
-instruction savings can be significant.
-
-For SOCs, platform-specific code defines and registers gpio_chip instances
-for each bank of on-chip GPIOs. Those GPIOs should be numbered/labeled to
-match chip vendor documentation, and directly match board schematics. They
-may well start at zero and go up to a platform-specific limit. Such GPIOs
-are normally integrated into platform initialization to make them always be
-available, from arch_initcall() or earlier; they can often serve as IRQs.
-
-
-Board Support
--------------
-For external GPIO controllers -- such as I2C or SPI expanders, ASICs, multi
-function devices, FPGAs or CPLDs -- most often board-specific code handles
-registering controller devices and ensures that their drivers know what GPIO
-numbers to use with gpiochip_add(). Their numbers often start right after
-platform-specific GPIOs.
-
-For example, board setup code could create structures identifying the range
-of GPIOs that chip will expose, and pass them to each GPIO expander chip
-using platform_data. Then the chip driver's probe() routine could pass that
-data to gpiochip_add().
-
-Initialization order can be important. For example, when a device relies on
-an I2C-based GPIO, its probe() routine should only be called after that GPIO
-becomes available. That may mean the device should not be registered until
-calls for that GPIO can work. One way to address such dependencies is for
-such gpio_chip controllers to provide setup() and teardown() callbacks to
-board specific code; those board specific callbacks would register devices
-once all the necessary resources are available, and remove them later when
-the GPIO controller device becomes unavailable.
-
-
-Sysfs Interface for Userspace (OPTIONAL)
-========================================
-Platforms which use the "gpiolib" implementors framework may choose to
-configure a sysfs user interface to GPIOs. This is different from the
-debugfs interface, since it provides control over GPIO direction and
-value instead of just showing a gpio state summary. Plus, it could be
-present on production systems without debugging support.
-
-Given appropriate hardware documentation for the system, userspace could
-know for example that GPIO #23 controls the write protect line used to
-protect boot loader segments in flash memory. System upgrade procedures
-may need to temporarily remove that protection, first importing a GPIO,
-then changing its output state, then updating the code before re-enabling
-the write protection. In normal use, GPIO #23 would never be touched,
-and the kernel would have no need to know about it.
-
-Again depending on appropriate hardware documentation, on some systems
-userspace GPIO can be used to determine system configuration data that
-standard kernels won't know about. And for some tasks, simple userspace
-GPIO drivers could be all that the system really needs.
-
-Note that standard kernel drivers exist for common "LEDs and Buttons"
-GPIO tasks: "leds-gpio" and "gpio_keys", respectively. Use those
-instead of talking directly to the GPIOs; they integrate with kernel
-frameworks better than your userspace code could.
-
-
-Paths in Sysfs
---------------
-There are three kinds of entry in /sys/class/gpio:
-
- - Control interfaces used to get userspace control over GPIOs;
-
- - GPIOs themselves; and
-
- - GPIO controllers ("gpio_chip" instances).
-
-That's in addition to standard files including the "device" symlink.
-
-The control interfaces are write-only:
-
- /sys/class/gpio/
-
- "export" ... Userspace may ask the kernel to export control of
- a GPIO to userspace by writing its number to this file.
-
- Example: "echo 19 > export" will create a "gpio19" node
- for GPIO #19, if that's not requested by kernel code.
-
- "unexport" ... Reverses the effect of exporting to userspace.
-
- Example: "echo 19 > unexport" will remove a "gpio19"
- node exported using the "export" file.
-
-GPIO signals have paths like /sys/class/gpio/gpio42/ (for GPIO #42)
-and have the following read/write attributes:
-
- /sys/class/gpio/gpioN/
-
- "direction" ... reads as either "in" or "out". This value may
- normally be written. Writing as "out" defaults to
- initializing the value as low. To ensure glitch free
- operation, values "low" and "high" may be written to
- configure the GPIO as an output with that initial value.
-
- Note that this attribute *will not exist* if the kernel
- doesn't support changing the direction of a GPIO, or
- it was exported by kernel code that didn't explicitly
- allow userspace to reconfigure this GPIO's direction.
-
- "value" ... reads as either 0 (low) or 1 (high). If the GPIO
- is configured as an output, this value may be written;
- any nonzero value is treated as high.
-
-		If the pin can be configured as an interrupt-generating
-		input and if it has been configured to generate interrupts
-		(see the description of "edge"), you can poll(2) on that
-		file and poll(2) will return whenever the interrupt was
-		triggered. If you use poll(2), set the events POLLPRI and
-		POLLERR. If you use select(2), set the file descriptor in
-		exceptfds. After poll(2) returns, either lseek(2) to the
-		beginning of the sysfs file and read the new value or
-		close the file and re-open it to read the value (see the
-		sketch after this list).
-
- "edge" ... reads as either "none", "rising", "falling", or
- "both". Write these strings to select the signal edge(s)
- that will make poll(2) on the "value" file return.
-
- This file exists only if the pin can be configured as an
- interrupt generating input pin.
-
-	"active_low" ... reads as either 0 (false) or 1 (true). Write
-		any nonzero value to invert the value attribute both
-		for reading and writing. Any poll(2) support configured
-		via the edge attribute for "rising" and "falling" edges,
-		whether already in effect or set up later, will follow
-		this setting.
-
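-A minimal userspace sketch of that poll(2) pattern (the path and the
-omitted error handling are illustrative only):
-
-	#include <fcntl.h>
-	#include <poll.h>
-	#include <unistd.h>
-
-	/* blocks until the configured edge fires, then returns 0 or 1 */
-	int wait_for_edge(void)
-	{
-		char buf[2];
-		struct pollfd pfd;
-
-		pfd.fd = open("/sys/class/gpio/gpio19/value", O_RDONLY);
-		pfd.events = POLLPRI | POLLERR;
-
-		read(pfd.fd, buf, sizeof(buf));	/* consume the current state */
-		poll(&pfd, 1, -1);		/* sleep until an edge triggers */
-		lseek(pfd.fd, 0, SEEK_SET);	/* rewind ... */
-		read(pfd.fd, buf, sizeof(buf));	/* ... and read the new value */
-		close(pfd.fd);
-		return buf[0] - '0';
-	}
-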
-GPIO controllers have paths like /sys/class/gpio/gpiochip42/ (for the
-controller implementing GPIOs starting at #42) and have the following
-read-only attributes:
-
- /sys/class/gpio/gpiochipN/
-
- "base" ... same as N, the first GPIO managed by this chip
-
- "label" ... provided for diagnostics (not always unique)
-
-	"ngpio" ... how many GPIOs this manages (N to N + ngpio - 1)
-
-Board documentation should in most cases cover what GPIOs are used for
-what purposes. However, those numbers are not always stable; GPIOs on
-a daughtercard might be different depending on the base board being used,
-or other cards in the stack. In such cases, you may need to use the
-gpiochip nodes (possibly in conjunction with schematics) to determine
-the correct GPIO number to use for a given signal.
-
-
-Exporting from Kernel code
---------------------------
-Kernel code can explicitly manage exports of GPIOs which have already been
-requested using gpio_request():
-
- /* export the GPIO to userspace */
- int gpio_export(unsigned gpio, bool direction_may_change);
-
- /* reverse gpio_export() */
-	void gpio_unexport(unsigned gpio);
-
- /* create a sysfs link to an exported GPIO node */
- int gpio_export_link(struct device *dev, const char *name,
-			     unsigned gpio);
-
- /* change the polarity of a GPIO node in sysfs */
- int gpio_sysfs_set_active_low(unsigned gpio, int value);
-
-After a kernel driver requests a GPIO, it may only be made available in
-the sysfs interface by gpio_export(). The driver can control whether the
-signal direction may change. This helps drivers prevent userspace code
-from accidentally clobbering important system state.
-
-This explicit exporting can help with debugging (by making some kinds
-of experiments easier), or can provide an always-there interface that's
-suitable for documenting as part of a board support package.
-
-After the GPIO has been exported, gpio_export_link() allows creating
-symlinks from elsewhere in sysfs to the GPIO sysfs node. Drivers can
-use this to provide the interface under their own device in sysfs with
-a descriptive name.
-
-Drivers can use gpio_sysfs_set_active_low() to hide GPIO line polarity
-differences between boards from user space. This only affects the
-sysfs interface. Polarity change can be done both before and after
-gpio_export(), and previously enabled poll(2) support for either
-rising or falling edge will be reconfigured to follow this setting.
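-
-As a rough sketch of the flow (the GPIO number, labels, and device pointer
-are illustrative only), a driver's probe() might do:
-
-	err = gpio_request(gpio, "write_protect");	/* claim it first */
-	if (err)
-		return err;
-	err = gpio_direction_output(gpio, 0);
-	if (err)
-		return err;
-	gpio_export(gpio, false);	/* visible in sysfs, direction fixed */
-	gpio_export_link(dev, "wp", gpio);	/* friendly link under the device */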
--- /dev/null
+00-INDEX
+ - This file
+gpio.txt
+ - Introduction to GPIOs and their kernel interfaces
+consumer.txt
+ - How to obtain and use GPIOs in a driver
+driver.txt
+ - How to write a GPIO driver
+board.txt
+ - How to assign GPIOs to a consumer device and a function
+sysfs.txt
+ - Information about the GPIO sysfs interface
+gpio-legacy.txt
+ - Historical documentation of the deprecated GPIO integer interface
--- /dev/null
+GPIO Mappings
+=============
+
+This document explains how GPIOs can be assigned to given devices and functions.
+Note that it only applies to the new descriptor-based interface. For a
+description of the deprecated integer-based GPIO interface please refer to
+gpio-legacy.txt (actually, there is no real mapping possible with the old
+interface; you just fetch an integer from somewhere and request the
+corresponding GPIO).
+
+Platforms that make use of GPIOs must select ARCH_REQUIRE_GPIOLIB (if GPIO usage
+is mandatory) or ARCH_WANT_OPTIONAL_GPIOLIB (if GPIO support can be omitted) in
+their Kconfig. Then, how GPIOs are mapped depends on what the platform uses to
+describe its hardware layout. Currently, mappings can be defined through device
+tree, ACPI, and platform data.
+
+Device Tree
+-----------
+GPIOs can easily be mapped to devices and functions in the device tree. The
+exact way to do it depends on the GPIO controller providing the GPIOs, see the
+device tree bindings for your controller.
+
+GPIO mappings are defined in the consumer device's node, in a property named
+<function>-gpios, where <function> is the function the driver will request
+through gpiod_get(). For example:
+
+ foo_device {
+ compatible = "acme,foo";
+ ...
+ led-gpios = <&gpio 15 GPIO_ACTIVE_HIGH>, /* red */
+ <&gpio 16 GPIO_ACTIVE_HIGH>, /* green */
+ <&gpio 17 GPIO_ACTIVE_HIGH>; /* blue */
+
+		power-gpios = <&gpio 1 GPIO_ACTIVE_LOW>;
+ };
+
+This property will make GPIOs 15, 16 and 17 available to the driver under the
+"led" function, and GPIO 1 as the "power" GPIO:
+
+ struct gpio_desc *red, *green, *blue, *power;
+
+ red = gpiod_get_index(dev, "led", 0);
+ green = gpiod_get_index(dev, "led", 1);
+ blue = gpiod_get_index(dev, "led", 2);
+
+ power = gpiod_get(dev, "power");
+
+The led GPIOs will be active-high, while the power GPIO will be active-low (i.e.
+gpiod_is_active_low(power) will be true).
+
+ACPI
+----
+ACPI does not support function names for GPIOs. Therefore, only the "idx"
+argument of gpiod_get_index() is useful to discriminate between GPIOs assigned
+to a device. The "con_id" argument can still be set for debugging purposes (it
+will appear in error messages as well as in debug and sysfs nodes).
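+
+For example (a sketch only; the "shutdown" label is an arbitrary choice made
+for debugging, not something ACPI interprets):
+
+	struct gpio_desc *gpiod;
+
+	/* second GPIO listed in the device's ACPI resources */
+	gpiod = gpiod_get_index(dev, "shutdown", 1);
+	if (IS_ERR(gpiod))
+		return PTR_ERR(gpiod);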
+
+Platform Data
+-------------
+Finally, GPIOs can be bound to devices and functions using platform data. Board
+files that desire to do so need to include the following header:
+
+ #include <linux/gpio/driver.h>
+
+GPIOs are mapped by means of lookup tables containing instances of the
+gpiod_lookup structure. Two macros are defined to help declare such mappings:
+
+ GPIO_LOOKUP(chip_label, chip_hwnum, dev_id, con_id, flags)
+ GPIO_LOOKUP_IDX(chip_label, chip_hwnum, dev_id, con_id, idx, flags)
+
+where
+
+ - chip_label is the label of the gpiod_chip instance providing the GPIO
+ - chip_hwnum is the hardware number of the GPIO within the chip
+ - dev_id is the identifier of the device that will make use of this GPIO. If
+ NULL, the GPIO will be available to all devices.
+ - con_id is the name of the GPIO function from the device point of view. It
+ can be NULL.
+ - idx is the index of the GPIO within the function.
+ - flags is defined to specify the following properties:
+ * GPIOF_ACTIVE_LOW - to configure the GPIO as active-low
+ * GPIOF_OPEN_DRAIN - GPIO pin is open drain type.
+ * GPIOF_OPEN_SOURCE - GPIO pin is open source type.
+
+In the future, these flags might be extended to support more properties.
+
+Note that GPIO_LOOKUP() is just a shortcut to GPIO_LOOKUP_IDX() where idx = 0.
+
+A lookup table can then be defined as follows:
+
+ struct gpiod_lookup gpios_table[] = {
+ GPIO_LOOKUP_IDX("gpio.0", 15, "foo.0", "led", 0, GPIO_ACTIVE_HIGH),
+ GPIO_LOOKUP_IDX("gpio.0", 16, "foo.0", "led", 1, GPIO_ACTIVE_HIGH),
+ GPIO_LOOKUP_IDX("gpio.0", 17, "foo.0", "led", 2, GPIO_ACTIVE_HIGH),
+ GPIO_LOOKUP("gpio.0", 1, "foo.0", "power", GPIO_ACTIVE_LOW),
+ };
+
+And the table can be added by the board code as follows:
+
+ gpiod_add_table(gpios_table, ARRAY_SIZE(gpios_table));
+
+The driver controlling "foo.0" will then be able to obtain its GPIOs as follows:
+
+ struct gpio_desc *red, *green, *blue, *power;
+
+ red = gpiod_get_index(dev, "led", 0);
+ green = gpiod_get_index(dev, "led", 1);
+ blue = gpiod_get_index(dev, "led", 2);
+
+ power = gpiod_get(dev, "power");
+ gpiod_direction_output(power, 1);
+
+Since the "power" GPIO is mapped as active-low, its actual signal will be 0
+after this code. Contrary to the legacy integer GPIO interface, the active-low
+property is handled during mapping and is thus transparent to GPIO consumers.
--- /dev/null
+GPIO Descriptor Consumer Interface
+==================================
+
+This document describes the consumer interface of the GPIO framework. Note that
+it describes the new descriptor-based interface. For a description of the
+deprecated integer-based GPIO interface please refer to gpio-legacy.txt.
+
+
+Guidelines for GPIO consumers
+==============================
+
+Drivers that can't work without standard GPIO calls should have Kconfig entries
+that depend on GPIOLIB. The functions that allow a driver to obtain and use
+GPIOs are available by including the following file:
+
+ #include <linux/gpio/consumer.h>
+
+All the functions that work with the descriptor-based GPIO interface are
+prefixed with gpiod_. The gpio_ prefix is used for the legacy interface. No
+other function in the kernel should use these prefixes.
+
+
+Obtaining and Disposing GPIOs
+=============================
+
+With the descriptor-based interface, GPIOs are identified with an opaque,
+non-forgeable handle that must be obtained through a call to one of the
+gpiod_get() functions. Like many other kernel subsystems, gpiod_get() takes the
+device that will use the GPIO and the function the requested GPIO is supposed to
+fulfill:
+
+ struct gpio_desc *gpiod_get(struct device *dev, const char *con_id)
+
+If a function is implemented by using several GPIOs together (e.g. a simple LED
+device that displays digits), an additional index argument can be specified:
+
+ struct gpio_desc *gpiod_get_index(struct device *dev,
+ const char *con_id, unsigned int idx)
+
+Both functions return either a valid GPIO descriptor, or an error code checkable
+with IS_ERR(). They will never return a NULL pointer.
+
+Device-managed variants of these functions are also defined:
+
+ struct gpio_desc *devm_gpiod_get(struct device *dev, const char *con_id)
+
+ struct gpio_desc *devm_gpiod_get_index(struct device *dev,
+ const char *con_id,
+ unsigned int idx)
+
+A GPIO descriptor can be disposed of using the gpiod_put() function:
+
+ void gpiod_put(struct gpio_desc *desc)
+
+It is strictly forbidden to use a descriptor after calling this function. The
+device-managed variant is, unsurprisingly:
+
+ void devm_gpiod_put(struct device *dev, struct gpio_desc *desc)
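+
+As a minimal sketch of that lifecycle (the "enable" function name is an
+assumption, not a standard binding):
+
+	struct gpio_desc *enable;
+
+	enable = gpiod_get(dev, "enable");
+	if (IS_ERR(enable))		/* never NULL: check with IS_ERR() */
+		return PTR_ERR(enable);
+
+	/* ... use the GPIO ... */
+
+	gpiod_put(enable);		/* the descriptor is dead after this */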
+
+
+Using GPIOs
+===========
+
+Setting Direction
+-----------------
+The first thing a driver must do with a GPIO is to set its direction. This is
+done by invoking one of the gpiod_direction_*() functions:
+
+ int gpiod_direction_input(struct gpio_desc *desc)
+ int gpiod_direction_output(struct gpio_desc *desc, int value)
+
+The return value is zero for success, else a negative errno. It should be
+checked, since the get/set calls don't return errors and since misconfiguration
+is possible. You should normally issue these calls from a task context. However,
+for spinlock-safe GPIOs it is OK to use them before tasking is enabled, as part
+of early board setup.
+
+For output GPIOs, the value provided becomes the initial output value. This
+helps avoid signal glitching during system startup.
+
+A driver can also query the current direction of a GPIO:
+
+ int gpiod_get_direction(const struct gpio_desc *desc)
+
+This function will return either GPIOF_DIR_IN or GPIOF_DIR_OUT.
+
+Be aware that there is no default direction for GPIOs. Therefore, **using a GPIO
+without setting its direction first is illegal and will result in undefined
+behavior!**
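+
+For example, a driver might configure a previously obtained (hypothetical)
+"reset" descriptor as an output that starts deasserted; a minimal sketch:
+
+	int err;
+
+	/* drive the line to its inactive level from the very first moment */
+	err = gpiod_direction_output(reset, 0);
+	if (err)
+		return err;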
+
+
+Spinlock-Safe GPIO Access
+-------------------------
+Most GPIO controllers can be accessed with memory read/write instructions. Those
+don't need to sleep, and can safely be done from inside hard (non-threaded) IRQ
+handlers and similar contexts.
+
+Use the following calls to access GPIOs from an atomic context:
+
+ int gpiod_get_value(const struct gpio_desc *desc);
+ void gpiod_set_value(struct gpio_desc *desc, int value);
+
+The values are boolean, zero for low, nonzero for high. When reading the value
+of an output pin, the value returned should be what's seen on the pin. That
+won't always match the specified output value, because of issues including
+open-drain signaling and output latencies.
+
+The get/set calls do not return errors because "invalid GPIO" should have been
+reported earlier from gpiod_direction_*(). However, note that not all platforms
+can read the value of output pins; those that can't should always return zero.
+Also, using these calls for GPIOs that can't safely be accessed without sleeping
+(see below) is an error.
+
+
+GPIO Access That May Sleep
+--------------------------
+Some GPIO controllers must be accessed using message based buses like I2C or
+SPI. Commands to read or write those GPIO values require waiting to get to the
+head of a queue to transmit a command and get its response. This requires
+sleeping, which can't be done from inside IRQ handlers.
+
+Platforms that support this type of GPIO distinguish them from other GPIOs by
+returning nonzero from this call:
+
+ int gpiod_cansleep(const struct gpio_desc *desc)
+
+To access such GPIOs, a different set of accessors is defined:
+
+ int gpiod_get_value_cansleep(const struct gpio_desc *desc)
+ void gpiod_set_value_cansleep(struct gpio_desc *desc, int value)
+
+Accessing such GPIOs requires a context which may sleep, for example a threaded
+IRQ handler, and those accessors must be used instead of spinlock-safe
+accessors without the cansleep() name suffix.
+
+Other than the fact that these accessors might sleep, and will work on GPIOs
+that can't be accessed from hardIRQ handlers, these calls act the same as the
+spinlock-safe calls.
+
+
+Active-low State and Raw GPIO Values
+------------------------------------
+Device drivers like to manage the logical state of a GPIO, i.e. the value their
+device will actually receive, no matter what lies between it and the GPIO line.
+In some cases, it might make sense to control the actual GPIO line value. The
+following set of calls ignore the active-low property of a GPIO and work on the
+raw line value:
+
+ int gpiod_get_raw_value(const struct gpio_desc *desc)
+ void gpiod_set_raw_value(struct gpio_desc *desc, int value)
+ int gpiod_get_raw_value_cansleep(const struct gpio_desc *desc)
+ void gpiod_set_raw_value_cansleep(struct gpio_desc *desc, int value)
+
+The active-low state of a GPIO can also be queried using the following call:
+
+ int gpiod_is_active_low(const struct gpio_desc *desc)
+
+Note that these functions should only be used with great moderation; a driver
+should not have to care about the physical line level.
+
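+As an illustration, assume desc was mapped with the active-low flag set:
+
+	gpiod_set_value(desc, 1);	/* logical 1: the line is driven low */
+	gpiod_set_raw_value(desc, 1);	/* raw 1: the line is driven high */
+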
+GPIOs mapped to IRQs
+--------------------
+GPIO lines can quite often be used as IRQs. You can get the IRQ number
+corresponding to a given GPIO using the following call:
+
+ int gpiod_to_irq(const struct gpio_desc *desc)
+
+It will return an IRQ number, or a negative errno code if the mapping can't be
+done (most likely because that particular GPIO cannot be used as IRQ). It is an
+unchecked error to use a GPIO that wasn't set up as an input using
+gpiod_direction_input(), or to use an IRQ number that didn't originally come
+from gpiod_to_irq(). gpiod_to_irq() is not allowed to sleep.
+
+Non-error values returned from gpiod_to_irq() can be passed to request_irq() or
+free_irq(). They will often be stored into IRQ resources for platform devices,
+by the board-specific initialization code. Note that IRQ trigger options are
+part of the IRQ interface, e.g. IRQF_TRIGGER_FALLING, as are system wakeup
+capabilities.
+
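+As a sketch (foo_isr, the label, and the foo cookie are hypothetical):
+
+	int irq = gpiod_to_irq(desc);	/* desc was set up as an input */
+
+	if (irq < 0)
+		return irq;
+	err = request_irq(irq, foo_isr, IRQF_TRIGGER_FALLING,
+			  "foo-detect", foo);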
+
+Interacting With the Legacy GPIO Subsystem
+==========================================
+Many kernel subsystems still handle GPIOs using the legacy integer-based
+interface. Although it is strongly encouraged to upgrade them to the safer
+descriptor-based API, the following two functions allow you to convert a GPIO
+descriptor into the GPIO integer namespace and vice-versa:
+
+ int desc_to_gpio(const struct gpio_desc *desc)
+ struct gpio_desc *gpio_to_desc(unsigned gpio)
+
+The GPIO number returned by desc_to_gpio() can be safely used as long as the
+GPIO descriptor has not been freed. All the same, a GPIO number passed to
+gpio_to_desc() must have been properly acquired, and usage of the returned GPIO
+descriptor is only possible after the GPIO number has been released.
+
+Freeing a GPIO obtained by one API with the other API is forbidden and an
+unchecked error.
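+
+For instance, a driver holding a descriptor can hand the GPIO to a
+legacy-only subsystem like this sketch:
+
+	int gpio = desc_to_gpio(desc);	/* valid until gpiod_put(desc) */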
--- /dev/null
+GPIO Descriptor Driver Interface
+================================
+
+This document serves as a guide for writers of GPIO chip drivers. Note that it
+describes the new descriptor-based interface. For a description of the
+deprecated integer-based GPIO interface please refer to gpio-legacy.txt.
+
+Each GPIO controller driver needs to include the following header, which defines
+the structures used to define a GPIO driver:
+
+ #include <linux/gpio/driver.h>
+
+
+Internal Representation of GPIOs
+================================
+
+Inside a GPIO driver, individual GPIOs are identified by their hardware number,
+which is a unique number between 0 and n-1, n being the number of GPIOs managed
+the chip. This number is purely internal: the hardware number of a particular
+GPIO descriptor is never made visible outside of the driver.
+
+On top of this internal number, each GPIO also needs to have a global number in
+the integer GPIO namespace so that it can be used with the legacy GPIO
+interface. Each chip must thus have a "base" number (which can be automatically
+assigned), and for each GPIO the global number will be (base + hardware number).
+Although the integer representation is considered deprecated, it still has many
+users and thus needs to be maintained.
+
+So for example one platform could use numbers 32-159 for GPIOs, with a
+controller defining 128 GPIOs at a "base" of 32; while another platform uses
+numbers 0..63 with one set of GPIO controllers, 64-79 with another type of GPIO
+controller, and on one particular board 80-95 with an FPGA. The numbers need not
+be contiguous; either of those platforms could also use numbers 2000-2063 to
+identify GPIOs in a bank of I2C GPIO expanders.
+
+
+Controller Drivers: gpio_chip
+=============================
+
+In the gpiolib framework each GPIO controller is packaged as a "struct
+gpio_chip" (see linux/gpio/driver.h for its complete definition) with members
+common to each controller of that type:
+
+ - methods to establish GPIO direction
+ - methods used to access GPIO values
+ - method to return the IRQ number associated with a given GPIO
+ - flag saying whether calls to its methods may sleep
+ - optional debugfs dump method (showing extra state like pullup config)
+ - optional base number (will be automatically assigned if omitted)
+ - label for diagnostics and GPIOs mapping using platform data
+
+The code implementing a gpio_chip should support multiple instances of the
+controller, possibly using the driver model. That code will configure each
+gpio_chip and issue gpiochip_add(). Removing a GPIO controller should be rare;
+use gpiochip_remove() when it is unavoidable.
+
+Most often a gpio_chip is part of an instance-specific structure with state not
+exposed by the GPIO interfaces, such as addressing, power management, and more.
+Chips such as codecs will have complex non-GPIO state.
+
+Any debugfs dump method should normally ignore signals which haven't been
+requested as GPIOs. They can use gpiochip_is_requested(), which returns either
+NULL or the label associated with that GPIO when it was requested.
+
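+As a bare-bones sketch (the foo_* callbacks are hypothetical), a driver
+might register its controller like this:
+
+	static struct gpio_chip foo_chip = {
+		.label			= "foo",
+		.owner			= THIS_MODULE,
+		.base			= -1,	/* let gpiolib assign the base */
+		.ngpio			= 16,
+		.direction_input	= foo_direction_input,
+		.direction_output	= foo_direction_output,
+		.get			= foo_get,
+		.set			= foo_set,
+	};
+
+	err = gpiochip_add(&foo_chip);
+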
+Locking IRQ usage
+-----------------
+Input GPIOs can be used as IRQ signals. When this happens, a driver is requested
+to mark the GPIO as being used as an IRQ:
+
+ int gpiod_lock_as_irq(struct gpio_desc *desc)
+
+This will prevent the use of non-irq related GPIO APIs until the GPIO IRQ lock
+is released:
+
+ void gpiod_unlock_as_irq(struct gpio_desc *desc)
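+
+A sketch of how a driver might bracket IRQ usage (the surrounding irqchip
+plumbing is omitted):
+
+	err = gpiod_lock_as_irq(desc);	/* e.g. when the IRQ is set up */
+	if (err)
+		return err;
+	/* ... the line now only serves as an IRQ ... */
+	gpiod_unlock_as_irq(desc);	/* ordinary GPIO calls allowed again */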
--- /dev/null
+GPIO Interfaces
+
+This provides an overview of GPIO access conventions on Linux.
+
+These calls use the gpio_* naming prefix. No other calls should use that
+prefix, or the related __gpio_* prefix.
+
+
+What is a GPIO?
+===============
+A "General Purpose Input/Output" (GPIO) is a flexible software-controlled
+digital signal. They are provided from many kinds of chip, and are familiar
+to Linux developers working with embedded and custom hardware. Each GPIO
+represents a bit connected to a particular pin, or "ball" on Ball Grid Array
+(BGA) packages. Board schematics show which external hardware connects to
+which GPIOs. Drivers can be written generically, so that board setup code
+passes such pin configuration data to drivers.
+
+System-on-Chip (SOC) processors heavily rely on GPIOs. In some cases, every
+non-dedicated pin can be configured as a GPIO; and most chips have at least
+several dozen of them. Programmable logic devices (like FPGAs) can easily
+provide GPIOs; multifunction chips like power managers and audio codecs
+often have a few such pins to help with pin scarcity on SOCs; and there are
+also "GPIO Expander" chips that connect using the I2C or SPI serial busses.
+Most PC southbridges have a few dozen GPIO-capable pins (with only the BIOS
+firmware knowing how they're used).
+
+The exact capabilities of GPIOs vary between systems. Common options:
+
+ - Output values are writable (high=1, low=0). Some chips also have
+ options about how that value is driven, so that for example only one
+ value might be driven ... supporting "wire-OR" and similar schemes
+ for the other value (notably, "open drain" signaling).
+
+ - Input values are likewise readable (1, 0). Some chips support readback
+ of pins configured as "output", which is very useful in such "wire-OR"
+ cases (to support bidirectional signaling). GPIO controllers may have
+ input de-glitch/debounce logic, sometimes with software controls.
+
+ - Inputs can often be used as IRQ signals, often edge triggered but
+ sometimes level triggered. Such IRQs may be configurable as system
+ wakeup events, to wake the system from a low power state.
+
+ - Usually a GPIO will be configurable as either input or output, as needed
+ by different product boards; single direction ones exist too.
+
+ - Most GPIOs can be accessed while holding spinlocks, but those accessed
+ through a serial bus normally can't. Some systems support both types.
+
+On a given board each GPIO is used for one specific purpose like monitoring
+MMC/SD card insertion/removal, detecting card writeprotect status, driving
+a LED, configuring a transceiver, bitbanging a serial bus, poking a hardware
+watchdog, sensing a switch, and so on.
+
+
+GPIO conventions
+================
+Note that this is called a "convention" because you don't need to do it this
+way, and it's no crime if you don't. There **are** cases where portability
+is not the main issue; GPIOs are often used for the kind of board-specific
+glue logic that may even change between board revisions, and can't ever be
+used on a board that's wired differently. Only least-common-denominator
+functionality can be very portable. Other features are platform-specific,
+and that can be critical for glue logic.
+
+Plus, this doesn't require any implementation framework, just an interface.
+One platform might implement it as simple inline functions accessing chip
+registers; another might implement it by delegating through abstractions
+used for several very different kinds of GPIO controller. (There is some
+optional code supporting such an implementation strategy, described later
+in this document, but drivers acting as clients to the GPIO interface must
+not care how it's implemented.)
+
+That said, if the convention is supported on their platform, drivers should
+use it when possible. Platforms must select ARCH_REQUIRE_GPIOLIB or
+ARCH_WANT_OPTIONAL_GPIOLIB in their Kconfig. Drivers that can't work without
+standard GPIO calls should have Kconfig entries which depend on GPIOLIB. The
+GPIO calls are available, either as "real code" or as optimized-away stubs,
+when drivers use the include file:
+
+ #include <linux/gpio.h>
+
+If you stick to this convention then it'll be easier for other developers to
+see what your code is doing, and help maintain it.
+
+Note that these operations include I/O barriers on platforms which need to
+use them; drivers don't need to add them explicitly.
+
+
+Identifying GPIOs
+-----------------
+GPIOs are identified by unsigned integers in the range 0..MAX_INT. That
+reserves "negative" numbers for other purposes like marking signals as
+"not available on this board", or indicating faults. Code that doesn't
+touch the underlying hardware treats these integers as opaque cookies.
+
+Platforms define how they use those integers, and usually #define symbols
+for the GPIO lines so that board-specific setup code directly corresponds
+to the relevant schematics. In contrast, drivers should only use GPIO
+numbers passed to them from that setup code, using platform_data to hold
+board-specific pin configuration data (along with other board specific
+data they need). That avoids portability problems.
+
+So for example one platform uses numbers 32-159 for GPIOs; while another
+uses numbers 0..63 with one set of GPIO controllers, 64-79 with another
+type of GPIO controller, and on one particular board 80-95 with an FPGA.
+The numbers need not be contiguous; either of those platforms could also
+use numbers 2000-2063 to identify GPIOs in a bank of I2C GPIO expanders.
+
+If you want to initialize a structure with an invalid GPIO number, use
+some negative number (perhaps "-EINVAL"); that will never be valid. To
+test if such a number from such a structure could reference a GPIO, you
+may use this predicate:
+
+ int gpio_is_valid(int number);
+
+A number that's not valid will be rejected by calls which may request
+or free GPIOs (see below). Other numbers may also be rejected; for
+example, a number might be valid but temporarily unused on a given board.
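+
+For example (pdata and its wp_gpio field are hypothetical board data):
+
+	/* wp_gpio may hold a real GPIO number or a negative "not wired" value */
+	if (gpio_is_valid(pdata->wp_gpio))
+		err = gpio_request(pdata->wp_gpio, "wp");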
+
+Whether a platform supports multiple GPIO controllers is a platform-specific
+implementation issue, as are whether that support can leave "holes" in the space
+of GPIO numbers, and whether new controllers can be added at runtime. Such issues
+can affect things including whether adjacent GPIO numbers are both valid.
+
+Using GPIOs
+-----------
+The first thing a system should do with a GPIO is allocate it, using
+the gpio_request() call; see later.
+
+One of the next things to do with a GPIO, often in board setup code when
+setting up a platform_device using the GPIO, is mark its direction:
+
+ /* set as input or output, returning 0 or negative errno */
+ int gpio_direction_input(unsigned gpio);
+ int gpio_direction_output(unsigned gpio, int value);
+
+The return value is zero for success, else a negative errno. It should
+be checked, since the get/set calls don't have error returns and since
+misconfiguration is possible. You should normally issue these calls from
+a task context. However, for spinlock-safe GPIOs it's OK to use them
+before tasking is enabled, as part of early board setup.
+
+For output GPIOs, the value provided becomes the initial output value.
+This helps avoid signal glitching during system startup.
+
+For compatibility with legacy interfaces to GPIOs, setting the direction
+of a GPIO implicitly requests that GPIO (see below) if it has not been
+requested already. That compatibility is being removed from the optional
+gpiolib framework.
+
+Setting the direction can fail if the GPIO number is invalid, or when
+that particular GPIO can't be used in that mode. It's generally a bad
+idea to rely on boot firmware to have set the direction correctly, since
+it probably wasn't validated to do more than boot Linux. (Similarly,
+that board setup code probably needs to multiplex that pin as a GPIO,
+and configure pullups/pulldowns appropriately.)
+
+
+Spinlock-Safe GPIO access
+-------------------------
+Most GPIO controllers can be accessed with memory read/write instructions.
+Those don't need to sleep, and can safely be done from inside hard
+(nonthreaded) IRQ handlers and similar contexts.
+
+Use the following calls to access such GPIOs,
+for which gpio_cansleep() will always return false (see below):
+
+ /* GPIO INPUT: return zero or nonzero */
+ int gpio_get_value(unsigned gpio);
+
+ /* GPIO OUTPUT */
+ void gpio_set_value(unsigned gpio, int value);
+
+The values are boolean, zero for low, nonzero for high. When reading the
+value of an output pin, the value returned should be what's seen on the
+pin ... that won't always match the specified output value, because of
+issues including open-drain signaling and output latencies.
+
+The get/set calls have no error returns because "invalid GPIO" should have
+been reported earlier from gpio_direction_*(). However, note that not all
+platforms can read the value of output pins; those that can't should always
+return zero. Also, using these calls for GPIOs that can't safely be accessed
+without sleeping (see below) is an error.
+
+Platform-specific implementations are encouraged to optimize the two
+calls to access the GPIO value in cases where the GPIO number (and for
+output, value) are constant. It's normal for them to need only a couple
+of instructions in such cases (reading or writing a hardware register),
+and not to need spinlocks. Such optimized calls can make bitbanging
+applications a lot more efficient (in both space and time) than spending
+dozens of instructions on subroutine calls.
+
+
+GPIO access that may sleep
+--------------------------
+Some GPIO controllers must be accessed using message based busses like I2C
+or SPI. Commands to read or write those GPIO values require waiting to
+get to the head of a queue to transmit a command and get its response.
+This requires sleeping, which can't be done from inside IRQ handlers.
+
+Platforms that support this type of GPIO distinguish them from other GPIOs
+by returning nonzero from this call (which requires a valid GPIO number,
+which should have been previously allocated with gpio_request):
+
+ int gpio_cansleep(unsigned gpio);
+
+To access such GPIOs, a different set of accessors is defined:
+
+ /* GPIO INPUT: return zero or nonzero, might sleep */
+ int gpio_get_value_cansleep(unsigned gpio);
+
+ /* GPIO OUTPUT, might sleep */
+ void gpio_set_value_cansleep(unsigned gpio, int value);
+
+
+Accessing such GPIOs requires a context which may sleep, for example
+a threaded IRQ handler, and those accessors must be used instead of
+spinlock-safe accessors without the cansleep() name suffix.
+
+Other than the fact that these accessors might sleep, and will work
+on GPIOs that can't be accessed from hardIRQ handlers, these calls act
+the same as the spinlock-safe calls.
+
+ ** IN ADDITION ** calls to setup and configure such GPIOs must be made
+from contexts which may sleep, since they may need to access the GPIO
+controller chip too: (These setup calls are usually made from board
+setup or driver probe/teardown code, so this is an easy constraint.)
+
+ gpio_direction_input()
+ gpio_direction_output()
+ gpio_request()
+
+	gpio_request_one()
+	gpio_request_array()
+	gpio_free_array()
+
+ gpio_free()
+ gpio_set_debounce()
+
+
+
+Claiming and Releasing GPIOs
+----------------------------
+To help catch system configuration errors, two calls are defined.
+
+ /* request GPIO, returning 0 or negative errno.
+ * non-null labels may be useful for diagnostics.
+ */
+ int gpio_request(unsigned gpio, const char *label);
+
+ /* release previously-claimed GPIO */
+ void gpio_free(unsigned gpio);
+
+Passing invalid GPIO numbers to gpio_request() will fail, as will requesting
+GPIOs that have already been claimed with that call. The return value of
+gpio_request() must be checked. You should normally issue these calls from
+a task context. However, for spinlock-safe GPIOs it's OK to request GPIOs
+before tasking is enabled, as part of early board setup.
+
+These calls serve two basic purposes. One is marking the signals which
+are actually in use as GPIOs, for better diagnostics; systems may have
+several hundred potential GPIOs, but often only a dozen are used on any
+given board. Another is to catch conflicts, identifying errors when
+(a) two or more drivers wrongly think they have exclusive use of that
+signal, or (b) something wrongly believes it's safe to remove drivers
+needed to manage a signal that's in active use. That is, requesting a
+GPIO can serve as a kind of lock.
+
+Some platforms may also use knowledge about what GPIOs are active for
+power management, such as by powering down unused chip sectors and, more
+easily, gating off unused clocks.
+
+For GPIOs that use pins known to the pinctrl subsystem, that subsystem should
+be informed of their use; a gpiolib driver's .request() operation may call
+pinctrl_request_gpio(), and a gpiolib driver's .free() operation may call
+pinctrl_free_gpio(). The pinctrl subsystem allows a pinctrl_request_gpio()
+to succeed concurrently with a pin or pingroup being "owned" by a device for
+pin multiplexing.
+
+Any programming of pin multiplexing hardware that is needed to route the
+GPIO signal to the appropriate pin should occur within a GPIO driver's
+.direction_input() or .direction_output() operations, and occur after any
+setup of an output GPIO's value. This allows a glitch-free migration from a
+pin's special function to GPIO. This is sometimes required when using a GPIO
+to implement a workaround on signals typically driven by a non-GPIO HW block.
+
+Some platforms allow some or all GPIO signals to be routed to different pins.
+Similarly, other aspects of the GPIO or pin may need to be configured, such as
+pullup/pulldown. Platform software should arrange that any such details are
+configured prior to gpio_request() being called for those GPIOs, e.g. using
+the pinctrl subsystem's mapping table, so that GPIO users need not be aware
+of these details.
+
+Also note that it's your responsibility to have stopped using a GPIO
+before you free it.
+
+Considering that in most cases GPIOs are actually configured right after
+they are claimed, three additional calls are defined:
+
+ /* request a single GPIO, with initial configuration specified by
+ * 'flags', identical to gpio_request() wrt other arguments and
+ * return value
+ */
+ int gpio_request_one(unsigned gpio, unsigned long flags, const char *label);
+
+ /* request multiple GPIOs in a single call
+ */
+ int gpio_request_array(struct gpio *array, size_t num);
+
+ /* release multiple GPIOs in a single call
+ */
+ void gpio_free_array(struct gpio *array, size_t num);
+
+where 'flags' is currently defined to specify the following properties:
+
+ * GPIOF_DIR_IN - to configure direction as input
+ * GPIOF_DIR_OUT - to configure direction as output
+
+ * GPIOF_INIT_LOW - as output, set initial level to LOW
+ * GPIOF_INIT_HIGH - as output, set initial level to HIGH
+ * GPIOF_OPEN_DRAIN - gpio pin is open drain type.
+ * GPIOF_OPEN_SOURCE - gpio pin is open source type.
+
+ * GPIOF_EXPORT_DIR_FIXED - export gpio to sysfs, keep direction
+ * GPIOF_EXPORT_DIR_CHANGEABLE - also export, allow changing direction
+
+Since the GPIOF_INIT_* flags are only valid when the direction is
+configured as output, the valid combinations are grouped as:
+
+ * GPIOF_IN - configure as input
+ * GPIOF_OUT_INIT_LOW - configured as output, initial level LOW
+ * GPIOF_OUT_INIT_HIGH - configured as output, initial level HIGH
+
+When the GPIOF_OPEN_DRAIN flag is set, the pin is assumed to be of the
+open drain type. Such pins will not be driven to 1 in output mode and
+require an external pull-up. With this flag enabled, gpiolib will switch
+the direction to input when asked to set a value of 1 in output mode,
+letting the pull-up take the pin HIGH. The pin is made LOW by driving a
+value of 0 in output mode.
+
+When the GPIOF_OPEN_SOURCE flag is set, the pin is assumed to be of the
+open source type. Such pins will not be driven to 0 in output mode and
+require an external pull-down. With this flag enabled, gpiolib will switch
+the direction to input when asked to set a value of 0 in output mode,
+letting the pull-down take the pin LOW. The pin is made HIGH by driving a
+value of 1 in output mode.
+
+In the future, these flags can be extended to support more properties.
+
+Furthermore, to ease the claim/release of multiple GPIOs, 'struct gpio' is
+introduced to encapsulate all three fields as:
+
+ struct gpio {
+ unsigned gpio;
+ unsigned long flags;
+ const char *label;
+ };
+
+A typical example of usage:
+
+ static struct gpio leds_gpios[] = {
+ { 32, GPIOF_OUT_INIT_HIGH, "Power LED" }, /* default to ON */
+ { 33, GPIOF_OUT_INIT_LOW, "Green LED" }, /* default to OFF */
+ { 34, GPIOF_OUT_INIT_LOW, "Red LED" }, /* default to OFF */
+ { 35, GPIOF_OUT_INIT_LOW, "Blue LED" }, /* default to OFF */
+ { ... },
+ };
+
+ err = gpio_request_one(31, GPIOF_IN, "Reset Button");
+ if (err)
+ ...
+
+ err = gpio_request_array(leds_gpios, ARRAY_SIZE(leds_gpios));
+ if (err)
+ ...
+
+ gpio_free_array(leds_gpios, ARRAY_SIZE(leds_gpios));
+
+
+GPIOs mapped to IRQs
+--------------------
+GPIO numbers are unsigned integers; so are IRQ numbers. These make up
+two logically distinct namespaces (GPIO 0 need not use IRQ 0). You can
+map between them using calls like:
+
+ /* map GPIO numbers to IRQ numbers */
+ int gpio_to_irq(unsigned gpio);
+
+ /* map IRQ numbers to GPIO numbers (avoid using this) */
+ int irq_to_gpio(unsigned irq);
+
+Those return either the corresponding number in the other namespace, or
+else a negative errno code if the mapping can't be done. (For example,
+some GPIOs can't be used as IRQs.) It is an unchecked error to use a GPIO
+number that wasn't set up as an input using gpio_direction_input(), or
+to use an IRQ number that didn't originally come from gpio_to_irq().
+
+These two mapping calls are expected to cost on the order of a single
+addition or subtraction. They're not allowed to sleep.
+
+Non-error values returned from gpio_to_irq() can be passed to request_irq()
+or free_irq(). They will often be stored into IRQ resources for platform
+devices, by the board-specific initialization code. Note that IRQ trigger
+options are part of the IRQ interface, e.g. IRQF_TRIGGER_FALLING, as are
+system wakeup capabilities.
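+
+For example (a sketch; the handler and names are illustrative):
+
+	int irq = gpio_to_irq(gpio);
+
+	if (irq < 0)
+		return irq;
+	err = request_irq(irq, card_detect_isr, IRQF_TRIGGER_FALLING,
+			  "card-detect", slot);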
+
+Non-error values returned from irq_to_gpio() would most commonly be used
+with gpio_get_value(), for example to initialize or update driver state
+when the IRQ is edge-triggered. Note that some platforms don't support
+this reverse mapping, so you should avoid using it.
+
+
+Emulating Open Drain Signals
+----------------------------
+Sometimes shared signals need to use "open drain" signaling, where only the
+low signal level is actually driven. (That term applies to CMOS transistors;
+"open collector" is used for TTL.) A pullup resistor causes the high signal
+level. This is sometimes called a "wire-AND"; or more practically, from the
+negative logic (low=true) perspective this is a "wire-OR".
+
+One common example of an open drain signal is a shared active-low IRQ line.
+Also, bidirectional data bus signals sometimes use open drain signals.
+
+Some GPIO controllers directly support open drain outputs; many don't. When
+you need open drain signaling but your hardware doesn't directly support it,
+there's a common idiom you can use to emulate it with any GPIO pin that can
+be used as either an input or an output:
+
+ LOW: gpio_direction_output(gpio, 0) ... this drives the signal
+ and overrides the pullup.
+
+ HIGH: gpio_direction_input(gpio) ... this turns off the output,
+ so the pullup (or some other device) controls the signal.
+
+If you are "driving" the signal high but gpio_get_value(gpio) reports a low
+value (after the appropriate rise time passes), you know some other component
+is driving the shared signal low. That's not necessarily an error. As one
+common example, that's how I2C clocks are stretched: a slave that needs a
+slower clock delays the rising edge of SCK, and the I2C master adjusts its
+signaling rate accordingly.
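+
+A sketch of this idiom wrapped in a helper (the function name is
+illustrative):
+
+	/* emulate an open drain output on an ordinary GPIO */
+	static void foo_set_open_drain(unsigned gpio, int value)
+	{
+		if (value)
+			gpio_direction_input(gpio);	/* pullup gives HIGH */
+		else
+			gpio_direction_output(gpio, 0);	/* drive LOW */
+	}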
+
+
+GPIO controllers and the pinctrl subsystem
+------------------------------------------
+
+A GPIO controller on a SOC might be tightly coupled with the pinctrl
+subsystem, in the sense that the pins can be used by other functions
+together with an optional gpio feature. We have already covered the
+case where e.g. a GPIO controller needs to reserve a pin or set the
+direction of a pin by calling any of:
+
+pinctrl_request_gpio()
+pinctrl_free_gpio()
+pinctrl_gpio_direction_input()
+pinctrl_gpio_direction_output()
+
+But how does the pin control subsystem cross-correlate the GPIO
+numbers (which are a global business) to a certain pin on a certain
+pin controller?
+
+This is done by registering "ranges" of pins, which are essentially
+cross-reference tables. These are described in
+Documentation/pinctrl.txt
+
+While the pin allocation is totally managed by the pinctrl subsystem, the
+GPIOs (under gpiolib) are still maintained by gpio drivers. It may happen
+that different pin ranges in a SoC are managed by different gpio drivers.
+
+This makes it logical to let gpio drivers announce their pin ranges to
+the pinctrl subsystem before 'pinctrl_request_gpio' is called, so that
+the corresponding pins can be prepared by the pinctrl subsystem before
+any gpio usage.
+
+For this, the gpio controller can register its pin range with pinctrl
+subsystem. There are two ways of doing it currently: with or without DT.
+
+For DT support, refer to Documentation/devicetree/bindings/gpio/gpio.txt.
+
+For non-DT support, the user can call gpiochip_add_pin_range() with
+appropriate parameters to register a range of gpio pins with a pinctrl
+driver. For this, the exact name string of the pinctrl device has to be
+passed as one of the arguments to this routine.
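+
+For example (a sketch; the chip and pinctrl device names are
+illustrative):
+
+	ret = gpiochip_add_pin_range(&foo_gpio_chip,
+				     "pinctrl-foo",	/* pinctrl dev name */
+				     0,			/* gpio offset in chip */
+				     32,		/* pin base in pinctrl */
+				     foo_gpio_chip.ngpio);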
+
+
+What do these conventions omit?
+===============================
+One of the biggest things these conventions omit is pin multiplexing, since
+this is highly chip-specific and nonportable. One platform might not need
+explicit multiplexing; another might have just two options for use of any
+given pin; another might have eight options per pin; another might be able
+to route a given GPIO to any one of several pins. (Yes, those examples all
+come from systems that run Linux today.)
+
+Related to multiplexing is configuration and enabling of the pullups or
+pulldowns integrated on some platforms. Not all platforms support them,
+or support them in the same way; and any given board might use external
+pullups (or pulldowns) so that the on-chip ones should not be used.
+(When a circuit needs 5 kOhm, on-chip 100 kOhm resistors won't do.)
+Likewise drive strength (2 mA vs 20 mA) and voltage (1.8V vs 3.3V) is a
+platform-specific issue, as are models like (not) having a one-to-one
+correspondence between configurable pins and GPIOs.
+
+There are other system-specific mechanisms that are not specified here,
+like the aforementioned options for input de-glitching and wire-OR output.
+Hardware may support reading or writing GPIOs in gangs, but that's usually
+configuration dependent and limited to GPIOs sharing the same bank. (GPIOs are
+commonly grouped in banks of 16 or 32, with a given SOC having several such
+banks.) Some systems can trigger IRQs from output GPIOs, or read values
+from pins not managed as GPIOs. Code relying on such mechanisms will
+necessarily be nonportable.
+
+Dynamic definition of GPIOs is not currently standard; for example, as
+a side effect of configuring an add-on board with some GPIO expanders.
+
+
+GPIO implementor's framework (OPTIONAL)
+=======================================
+As noted earlier, there is an optional implementation framework making it
+easier for platforms to support different kinds of GPIO controller using
+the same programming interface. This framework is called "gpiolib".
+
+As a debugging aid, if debugfs is available a /sys/kernel/debug/gpio file
+will be found there. That will list all the controllers registered through
+this framework, and the state of the GPIOs currently in use.
+
+
+Controller Drivers: gpio_chip
+-----------------------------
+In this framework each GPIO controller is packaged as a "struct gpio_chip"
+with information common to each controller of that type:
+
+ - methods to establish GPIO direction
+ - methods used to access GPIO values
+ - flag saying whether calls to its methods may sleep
+ - optional debugfs dump method (showing extra state like pullup config)
+ - label for diagnostics
+
+There is also per-instance data, which may come from device.platform_data:
+the number of its first GPIO, and how many GPIOs it exposes.
+
+The code implementing a gpio_chip should support multiple instances of the
+controller, possibly using the driver model. That code will configure each
+gpio_chip and issue gpiochip_add(). Removing a GPIO controller should be
+rare; use gpiochip_remove() when it is unavoidable.
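+
+A minimal sketch of such a registration (all names are illustrative):
+
+	static struct gpio_chip foo_chip = {
+		.label			= "foo",
+		.owner			= THIS_MODULE,
+		.direction_input	= foo_direction_input,
+		.get			= foo_get,
+		.direction_output	= foo_direction_output,
+		.set			= foo_set,
+		.base			= FOO_GPIO_BASE,
+		.ngpio			= 32,
+	};
+
+	err = gpiochip_add(&foo_chip);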
+
+Most often a gpio_chip is part of an instance-specific structure with state
+not exposed by the GPIO interfaces, such as addressing, power management,
+and more. Chips such as codecs will have complex non-GPIO state.
+
+Any debugfs dump method should normally ignore signals which haven't been
+requested as GPIOs. They can use gpiochip_is_requested(), which returns
+either NULL or the label associated with that GPIO when it was requested.
+
+
+Platform Support
+----------------
+To support this framework, a platform's Kconfig will "select" either
+ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB
+and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines
+three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep().
+
+It may also provide a custom value for ARCH_NR_GPIOS, so that it better
+reflects the number of GPIOs in actual use on that platform, without
+wasting static table space. (It should count both built-in/SoC GPIOs and
+also ones on GPIO expanders.)
+
+ARCH_REQUIRE_GPIOLIB means that the gpiolib code will always get compiled
+into the kernel on that architecture.
+
+ARCH_WANT_OPTIONAL_GPIOLIB means the gpiolib code defaults to off and the user
+can enable it and build it into the kernel optionally.
+
+If neither of these options are selected, the platform does not support
+GPIOs through GPIO-lib and the code cannot be enabled by the user.
+
+Trivial implementations of those functions can directly use framework
+code, which always dispatches through the gpio_chip:
+
+ #define gpio_get_value __gpio_get_value
+ #define gpio_set_value __gpio_set_value
+ #define gpio_cansleep __gpio_cansleep
+
+Fancier implementations could instead define those as inline functions with
+logic optimizing access to specific SOC-based GPIOs. For example, if the
+referenced GPIO is the constant "12", getting or setting its value could
+cost as little as two or three instructions, never sleeping. When such an
+optimization is not possible those calls must delegate to the framework
+code, costing at least a few dozen instructions. For bitbanged I/O, such
+instruction savings can be significant.
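+
+A sketch of such an optimization (the SOC register helper and limit are
+hypothetical):
+
+	static inline int gpio_get_value(unsigned gpio)
+	{
+		if (__builtin_constant_p(gpio) && gpio < SOC_NR_BUILTIN_GPIOS)
+			return soc_gpio_read_pin(gpio);	/* a few instructions */
+		else
+			return __gpio_get_value(gpio);	/* framework fallback */
+	}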
+
+For SOCs, platform-specific code defines and registers gpio_chip instances
+for each bank of on-chip GPIOs. Those GPIOs should be numbered/labeled to
+match chip vendor documentation, and directly match board schematics. They
+may well start at zero and go up to a platform-specific limit. Such GPIOs
+are normally integrated into platform initialization to make them always be
+available, from arch_initcall() or earlier; they can often serve as IRQs.
+
+
+Board Support
+-------------
+For external GPIO controllers -- such as I2C or SPI expanders, ASICs, multi
+function devices, FPGAs or CPLDs -- most often board-specific code handles
+registering controller devices and ensures that their drivers know what GPIO
+numbers to use with gpiochip_add(). Their numbers often start right after
+platform-specific GPIOs.
+
+For example, board setup code could create structures identifying the range
+of GPIOs that each chip will expose, and pass them to the GPIO expander chips
+using platform_data. Then the chip driver's probe() routine could pass that
+data to gpiochip_add().
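+
+A sketch of such board code (the expander driver name and its platform
+data structure are hypothetical):
+
+	static struct foo_expander_platform_data expander_pdata = {
+		.gpio_base	= 64,	/* numbers start after the SOC GPIOs */
+	};
+
+	static struct i2c_board_info foo_i2c_devices[] __initdata = {
+		{
+			I2C_BOARD_INFO("foo-expander", 0x20),
+			.platform_data = &expander_pdata,
+		},
+	};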
+
+Initialization order can be important. For example, when a device relies on
+an I2C-based GPIO, its probe() routine should only be called after that GPIO
+becomes available. That may mean the device should not be registered until
+calls for that GPIO can work. One way to address such dependencies is for
+such gpio_chip controllers to provide setup() and teardown() callbacks to
+board specific code; those board specific callbacks would register devices
+once all the necessary resources are available, and remove them later when
+the GPIO controller device becomes unavailable.
+
+
+Sysfs Interface for Userspace (OPTIONAL)
+========================================
+Platforms which use the "gpiolib" implementor's framework may choose to
+configure a sysfs user interface to GPIOs. This is different from the
+debugfs interface, since it provides control over GPIO direction and
+value instead of just showing a gpio state summary. Plus, it could be
+present on production systems without debugging support.
+
+Given appropriate hardware documentation for the system, userspace could
+know for example that GPIO #23 controls the write protect line used to
+protect boot loader segments in flash memory. System upgrade procedures
+may need to temporarily remove that protection, first exporting a GPIO,
+then changing its output state, then updating the code before re-enabling
+the write protection. In normal use, GPIO #23 would never be touched,
+and the kernel would have no need to know about it.
+
+Again depending on appropriate hardware documentation, on some systems
+userspace GPIO can be used to determine system configuration data that
+standard kernels won't know about. And for some tasks, simple userspace
+GPIO drivers could be all that the system really needs.
+
+Note that standard kernel drivers exist for common "LEDs and Buttons"
+GPIO tasks: "leds-gpio" and "gpio_keys", respectively. Use those
+instead of talking directly to the GPIOs; they integrate with kernel
+frameworks better than your userspace code could.
+
+
+Paths in Sysfs
+--------------
+There are three kinds of entry in /sys/class/gpio:
+
+ - Control interfaces used to get userspace control over GPIOs;
+
+ - GPIOs themselves; and
+
+ - GPIO controllers ("gpio_chip" instances).
+
+That's in addition to standard files including the "device" symlink.
+
+The control interfaces are write-only:
+
+ /sys/class/gpio/
+
+ "export" ... Userspace may ask the kernel to export control of
+ a GPIO to userspace by writing its number to this file.
+
+ Example: "echo 19 > export" will create a "gpio19" node
+ for GPIO #19, if that's not requested by kernel code.
+
+ "unexport" ... Reverses the effect of exporting to userspace.
+
+ Example: "echo 19 > unexport" will remove a "gpio19"
+ node exported using the "export" file.
+
+GPIO signals have paths like /sys/class/gpio/gpio42/ (for GPIO #42)
+and have the following read/write attributes:
+
+ /sys/class/gpio/gpioN/
+
+ "direction" ... reads as either "in" or "out". This value may
+ normally be written. Writing as "out" defaults to
+ initializing the value as low. To ensure glitch free
+ operation, values "low" and "high" may be written to
+ configure the GPIO as an output with that initial value.
+
+ Note that this attribute *will not exist* if the kernel
+ doesn't support changing the direction of a GPIO, or
+ it was exported by kernel code that didn't explicitly
+ allow userspace to reconfigure this GPIO's direction.
+
+ "value" ... reads as either 0 (low) or 1 (high). If the GPIO
+ is configured as an output, this value may be written;
+ any nonzero value is treated as high.
+
+		If the pin can be configured as an interrupt-generating input
+		and if it has been configured to generate interrupts (see the
+		description of "edge"), you can poll(2) on that file and
+		poll(2) will return whenever the interrupt was triggered. If
+		you use poll(2), set the events POLLPRI and POLLERR. If you
+		use select(2), set the file descriptor in exceptfds. After
+		poll(2) returns, either lseek(2) to the beginning of the sysfs
+		file and read the new value or close the file and re-open it
+		to read the value. (A sketch of this sequence appears after
+		this list.)
+
+ "edge" ... reads as either "none", "rising", "falling", or
+ "both". Write these strings to select the signal edge(s)
+ that will make poll(2) on the "value" file return.
+
+ This file exists only if the pin can be configured as an
+ interrupt generating input pin.
+
+ "active_low" ... reads as either 0 (false) or 1 (true). Write
+ any nonzero value to invert the value attribute both
+ for reading and writing. Existing and subsequent
+		poll(2) support configured via the "edge" attribute
+ for "rising" and "falling" edges will follow this
+ setting.
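+
+A userspace sketch of the poll(2) sequence described under "value"
+above (the GPIO number is illustrative):
+
+	#include <fcntl.h>
+	#include <poll.h>
+	#include <unistd.h>
+
+	int fd = open("/sys/class/gpio/gpio19/value", O_RDONLY);
+	struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
+	char buf[2];
+
+	poll(&pfd, 1, -1);		/* blocks until a configured edge */
+	lseek(fd, 0, SEEK_SET);		/* rewind before reading new value */
+	read(fd, buf, sizeof(buf));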
+
+GPIO controllers have paths like /sys/class/gpio/gpiochip42/ (for the
+controller implementing GPIOs starting at #42) and have the following
+read-only attributes:
+
+ /sys/class/gpio/gpiochipN/
+
+ "base" ... same as N, the first GPIO managed by this chip
+
+ "label" ... provided for diagnostics (not always unique)
+
+	"ngpio" ... how many GPIOs this manages (N to N + ngpio - 1)
+
+Board documentation should in most cases cover what GPIOs are used for
+what purposes. However, those numbers are not always stable; GPIOs on
+a daughtercard might be different depending on the base board being used,
+or other cards in the stack. In such cases, you may need to use the
+gpiochip nodes (possibly in conjunction with schematics) to determine
+the correct GPIO number to use for a given signal.
+
+
+Exporting from Kernel code
+--------------------------
+Kernel code can explicitly manage exports of GPIOs which have already been
+requested using gpio_request():
+
+ /* export the GPIO to userspace */
+ int gpio_export(unsigned gpio, bool direction_may_change);
+
+ /* reverse gpio_export() */
+	void gpio_unexport(unsigned gpio);
+
+ /* create a sysfs link to an exported GPIO node */
+ int gpio_export_link(struct device *dev, const char *name,
+				unsigned gpio);
+
+ /* change the polarity of a GPIO node in sysfs */
+ int gpio_sysfs_set_active_low(unsigned gpio, int value);
+
+After a kernel driver requests a GPIO, it may only be made available in
+the sysfs interface by gpio_export(). The driver can control whether the
+signal direction may change. This helps drivers prevent userspace code
+from accidentally clobbering important system state.
+
+This explicit exporting can help with debugging (by making some kinds
+of experiments easier), or can provide an always-there interface that's
+suitable for documenting as part of a board support package.
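+
+For example (a sketch, run after a successful gpio_request(); the link
+name is illustrative):
+
+	err = gpio_export(gpio, false);		/* direction may not change */
+	if (err == 0)
+		err = gpio_export_link(dev, "reset", gpio);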
+
+After the GPIO has been exported, gpio_export_link() allows creating
+symlinks from elsewhere in sysfs to the GPIO sysfs node. Drivers can
+use this to provide the interface under their own device in sysfs with
+a descriptive name.
+
+Drivers can use gpio_sysfs_set_active_low() to hide GPIO line polarity
+differences between boards from user space. This only affects the
+sysfs interface. Polarity change can be done both before and after
+gpio_export(), and previously enabled poll(2) support for either
+rising or falling edge will be reconfigured to follow this setting.
--- /dev/null
+GPIO Interfaces
+===============
+
+The documents in this directory give detailed instructions on how to access
+GPIOs in drivers, and how to write a driver for a device that provides GPIOs
+itself.
+
+Due to the history of GPIO interfaces in the kernel, there are two different
+ways to obtain and use GPIOs:
+
+ - The descriptor-based interface is the preferred way to manipulate GPIOs,
+and is described by all the files in this directory except gpio-legacy.txt.
+ - The legacy integer-based interface which is considered deprecated (but still
+usable for compatibility reasons) is documented in gpio-legacy.txt.
+
+The remainder of this document applies to the new descriptor-based interface.
+gpio-legacy.txt contains the same information applied to the legacy
+integer-based interface.
+
+
+What is a GPIO?
+===============
+
+A "General Purpose Input/Output" (GPIO) is a flexible software-controlled
+digital signal. They are provided by many kinds of chips, and are familiar
+to Linux developers working with embedded and custom hardware. Each GPIO
+represents a bit connected to a particular pin, or "ball" on Ball Grid Array
+(BGA) packages. Board schematics show which external hardware connects to
+which GPIOs. Drivers can be written generically, so that board setup code
+passes such pin configuration data to drivers.
+
+System-on-Chip (SOC) processors heavily rely on GPIOs. In some cases, every
+non-dedicated pin can be configured as a GPIO; and most chips have at least
+several dozen of them. Programmable logic devices (like FPGAs) can easily
+provide GPIOs; multifunction chips like power managers, and audio codecs
+often have a few such pins to help with pin scarcity on SOCs; and there are
+also "GPIO Expander" chips that connect using the I2C or SPI serial buses.
+Most PC southbridges have a few dozen GPIO-capable pins (with only the BIOS
+firmware knowing how they're used).
+
+The exact capabilities of GPIOs vary between systems. Common options:
+
+ - Output values are writable (high=1, low=0). Some chips also have
+ options about how that value is driven, so that for example only one
+ value might be driven, supporting "wire-OR" and similar schemes for the
+ other value (notably, "open drain" signaling).
+
+ - Input values are likewise readable (1, 0). Some chips support readback
+ of pins configured as "output", which is very useful in such "wire-OR"
+ cases (to support bidirectional signaling). GPIO controllers may have
+ input de-glitch/debounce logic, sometimes with software controls.
+
+ - Inputs can often be used as IRQ signals, often edge triggered but
+ sometimes level triggered. Such IRQs may be configurable as system
+ wakeup events, to wake the system from a low power state.
+
+ - Usually a GPIO will be configurable as either input or output, as needed
+ by different product boards; single direction ones exist too.
+
+ - Most GPIOs can be accessed while holding spinlocks, but those accessed
+ through a serial bus normally can't. Some systems support both types.
+
+On a given board each GPIO is used for one specific purpose like monitoring
+MMC/SD card insertion/removal, detecting card write-protect status, driving
+a LED, configuring a transceiver, bit-banging a serial bus, poking a hardware
+watchdog, sensing a switch, and so on.
+
+
+Common GPIO Properties
+======================
+
+These properties are referred to throughout the other documents of the GPIO
+interface and it is useful to understand them, especially if you need to
+define GPIO mappings.
+
+Active-High and Active-Low
+--------------------------
+It is natural to assume that a GPIO is "active" when its output signal is 1
+("high"), and inactive when it is 0 ("low"). However in practice the signal of a
+GPIO may be inverted before it reaches its destination, or a device could decide
+to have different conventions about what "active" means. Such decisions should
+be transparent to device drivers, therefore it is possible to define a GPIO as
+being either active-high ("1" means "active", the default) or active-low ("0"
+means "active") so that drivers only need to worry about the logical signal and
+not about what happens at the line level.
+
+Open Drain and Open Source
+--------------------------
+Sometimes shared signals need to use "open drain" (where only the low signal
+level is actually driven), or "open source" (where only the high signal level is
+driven) signaling. That term applies to CMOS transistors; "open collector" is
+used for TTL. A pullup or pulldown resistor causes the high or low signal level.
+This is sometimes called a "wire-AND"; or more practically, from the negative
+logic (low=true) perspective this is a "wire-OR".
+
+One common example of an open drain signal is a shared active-low IRQ line.
+Also, bidirectional data bus signals sometimes use open drain signals.
+
+Some GPIO controllers directly support open drain and open source outputs; many
+don't. When you need open drain signaling but your hardware doesn't directly
+support it, there's a common idiom you can use to emulate it with any GPIO pin
+that can be used as either an input or an output:
+
+ LOW: gpiod_direction_output(gpio, 0) ... this drives the signal and overrides
+ the pullup.
+
+ HIGH: gpiod_direction_input(gpio) ... this turns off the output, so the pullup
+ (or some other device) controls the signal.
+
+The same logic can be applied to emulate open source signaling, by driving the
+high signal and configuring the GPIO as input for low. This open drain/open
+source emulation can be handled transparently by the GPIO framework.
+
+If you are "driving" the signal high but gpiod_get_value(gpio) reports a low
+value (after the appropriate rise time passes), you know some other component is
+driving the shared signal low. That's not necessarily an error. As one common
+example, that's how I2C clocks are stretched: a slave that needs a slower clock
+delays the rising edge of SCK, and the I2C master adjusts its signaling rate
+accordingly.
--- /dev/null
+GPIO Sysfs Interface for Userspace
+==================================
+
+Platforms which use the "gpiolib" implementor's framework may choose to
+configure a sysfs user interface to GPIOs. This is different from the
+debugfs interface, since it provides control over GPIO direction and
+value instead of just showing a gpio state summary. Plus, it could be
+present on production systems without debugging support.
+
+Given appropriate hardware documentation for the system, userspace could
+know for example that GPIO #23 controls the write protect line used to
+protect boot loader segments in flash memory. System upgrade procedures
+may need to temporarily remove that protection, first exporting a GPIO,
+then changing its output state, then updating the code before re-enabling
+the write protection. In normal use, GPIO #23 would never be touched,
+and the kernel would have no need to know about it.
+
+Again depending on appropriate hardware documentation, on some systems
+userspace GPIO can be used to determine system configuration data that
+standard kernels won't know about. And for some tasks, simple userspace
+GPIO drivers could be all that the system really needs.
+
+Note that standard kernel drivers exist for common "LEDs and Buttons"
+GPIO tasks: "leds-gpio" and "gpio_keys", respectively. Use those
+instead of talking directly to the GPIOs; they integrate with kernel
+frameworks better than your userspace code could.
+
+
+Paths in Sysfs
+--------------
+There are three kinds of entry in /sys/class/gpio:
+
+ - Control interfaces used to get userspace control over GPIOs;
+
+ - GPIOs themselves; and
+
+ - GPIO controllers ("gpio_chip" instances).
+
+That's in addition to standard files including the "device" symlink.
+
+The control interfaces are write-only:
+
+ /sys/class/gpio/
+
+ "export" ... Userspace may ask the kernel to export control of
+ a GPIO to userspace by writing its number to this file.
+
+ Example: "echo 19 > export" will create a "gpio19" node
+ for GPIO #19, if that's not requested by kernel code.
+
+ "unexport" ... Reverses the effect of exporting to userspace.
+
+ Example: "echo 19 > unexport" will remove a "gpio19"
+ node exported using the "export" file.
+
+GPIO signals have paths like /sys/class/gpio/gpio42/ (for GPIO #42)
+and have the following read/write attributes:
+
+ /sys/class/gpio/gpioN/
+
+ "direction" ... reads as either "in" or "out". This value may
+ normally be written. Writing as "out" defaults to
+ initializing the value as low. To ensure glitch free
+ operation, values "low" and "high" may be written to
+ configure the GPIO as an output with that initial value.
+
+ Note that this attribute *will not exist* if the kernel
+ doesn't support changing the direction of a GPIO, or
+ it was exported by kernel code that didn't explicitly
+ allow userspace to reconfigure this GPIO's direction.
+
+ "value" ... reads as either 0 (low) or 1 (high). If the GPIO
+ is configured as an output, this value may be written;
+ any nonzero value is treated as high.
+
+		If the pin can be configured as an interrupt-generating input
+ and if it has been configured to generate interrupts (see the
+ description of "edge"), you can poll(2) on that file and
+ poll(2) will return whenever the interrupt was triggered. If
+ you use poll(2), set the events POLLPRI and POLLERR. If you
+ use select(2), set the file descriptor in exceptfds. After
+ poll(2) returns, either lseek(2) to the beginning of the sysfs
+ file and read the new value or close the file and re-open it
+ to read the value.
+
+ "edge" ... reads as either "none", "rising", "falling", or
+ "both". Write these strings to select the signal edge(s)
+ that will make poll(2) on the "value" file return.
+
+ This file exists only if the pin can be configured as an
+ interrupt generating input pin.
+
+ "active_low" ... reads as either 0 (false) or 1 (true). Write
+ any nonzero value to invert the value attribute both
+ for reading and writing. Existing and subsequent
+		poll(2) support configured via the "edge" attribute
+ for "rising" and "falling" edges will follow this
+ setting.
+
+GPIO controllers have paths like /sys/class/gpio/gpiochip42/ (for the
+controller implementing GPIOs starting at #42) and have the following
+read-only attributes:
+
+ /sys/class/gpio/gpiochipN/
+
+ "base" ... same as N, the first GPIO managed by this chip
+
+ "label" ... provided for diagnostics (not always unique)
+
+	"ngpio" ... how many GPIOs this manages (N to N + ngpio - 1)
+
+Board documentation should in most cases cover what GPIOs are used for
+what purposes. However, those numbers are not always stable; GPIOs on
+a daughtercard might be different depending on the base board being used,
+or other cards in the stack. In such cases, you may need to use the
+gpiochip nodes (possibly in conjunction with schematics) to determine
+the correct GPIO number to use for a given signal.
+
+
+Exporting from Kernel code
+--------------------------
+Kernel code can explicitly manage exports of GPIOs whose descriptors it
+already holds, e.g. obtained using gpiod_get():
+
+ /* export the GPIO to userspace */
+ int gpiod_export(struct gpio_desc *desc, bool direction_may_change);
+
+	/* reverse gpiod_export() */
+ void gpiod_unexport(struct gpio_desc *desc);
+
+ /* create a sysfs link to an exported GPIO node */
+ int gpiod_export_link(struct device *dev, const char *name,
+ struct gpio_desc *desc);
+
+ /* change the polarity of a GPIO node in sysfs */
+ int gpiod_sysfs_set_active_low(struct gpio_desc *desc, int value);
+
+After a kernel driver requests a GPIO, it may only be made available in
+the sysfs interface by gpiod_export(). The driver can control whether the
+signal direction may change. This helps drivers prevent userspace code
+from accidentally clobbering important system state.
+
+This explicit exporting can help with debugging (by making some kinds
+of experiments easier), or can provide an always-there interface that's
+suitable for documenting as part of a board support package.
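+
+For example (a sketch; the link name is illustrative):
+
+	err = gpiod_export(desc, false);	/* direction may not change */
+	if (err == 0)
+		err = gpiod_export_link(dev, "reset", desc);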
+
+After the GPIO has been exported, gpiod_export_link() allows creating
+symlinks from elsewhere in sysfs to the GPIO sysfs node. Drivers can
+use this to provide the interface under their own device in sysfs with
+a descriptive name.
+
+Drivers can use gpiod_sysfs_set_active_low() to hide GPIO line polarity
+differences between boards from user space. Polarity change can be done both
+before and after gpiod_export(), and previously enabled poll(2) support for
+either rising or falling edge will be reconfigured to follow this setting.
owned by uid=0.
ima_hash= [IMA]
- Format: { "sha1" | "md5" }
+ Format: { md5 | sha1 | rmd160 | sha256 | sha384
+ | sha512 | ... }
default: "sha1"
+ The list of supported hash algorithms is defined
+ in crypto/hash_info.h.
+
ima_tcb [IMA]
Load a policy which meets the needs of the Trusted
Computing Base. This means IMA will measure all
programs exec'd, files mmap'd for exec, and all files
opened for read by uid=0.
+ ima_template= [IMA]
+ Select one of defined IMA measurements template formats.
+ Formats: { "ima" | "ima-ng" }
+ Default: "ima-ng"
+
init= [KNL]
Format: <full_path>
Run specified binary instead of /sbin/init as init
* atapi_dmadir: Enable ATAPI DMADIR bridge support
+ * disable: Disable this device.
+
If there are multiple matching configurations changing
the same attribute, the last one is used.
int i;
void *dp = get_dp(mic, type);
- for (i = mic_aligned_size(struct mic_bootparam); i < PAGE_SIZE;
+ for (i = sizeof(struct mic_bootparam); i < PAGE_SIZE;
i += mic_total_desc_size(d)) {
d = dp + i;
__func__, mic->name, vr0->va, vr0->info, vr_size,
vring_size(MIC_VRING_ENTRIES, MIC_VIRTIO_RING_ALIGN));
mpsslog("magic 0x%x expected 0x%x\n",
- vr0->info->magic, MIC_MAGIC + type);
- assert(vr0->info->magic == MIC_MAGIC + type);
+ le32toh(vr0->info->magic), MIC_MAGIC + type);
+ assert(le32toh(vr0->info->magic) == MIC_MAGIC + type);
if (vr1) {
vr1->va = (struct mic_vring *)
&va[MIC_DEVICE_PAGE_END + vr_size];
__func__, mic->name, vr1->va, vr1->info, vr_size,
vring_size(MIC_VRING_ENTRIES, MIC_VIRTIO_RING_ALIGN));
mpsslog("magic 0x%x expected 0x%x\n",
- vr1->info->magic, MIC_MAGIC + type + 1);
- assert(vr1->info->magic == MIC_MAGIC + type + 1);
+ le32toh(vr1->info->magic), MIC_MAGIC + type + 1);
+ assert(le32toh(vr1->info->magic) == MIC_MAGIC + type + 1);
}
done:
return va;
virtio_net(void *arg)
{
static __u8 vnet_hdr[2][sizeof(struct virtio_net_hdr)];
- static __u8 vnet_buf[2][MAX_NET_PKT_SIZE] __aligned(64);
+ static __u8 vnet_buf[2][MAX_NET_PKT_SIZE] __attribute__ ((aligned(64)));
struct iovec vnet_iov[2][2] = {
{ { .iov_base = vnet_hdr[0], .iov_len = sizeof(vnet_hdr[0]) },
{ .iov_base = vnet_buf[0], .iov_len = sizeof(vnet_buf[0]) } },
}
do {
+ ret = lseek(fd, 0, SEEK_SET);
+ if (ret < 0) {
+ mpsslog("%s: Failed to seek to file start '%s': %s\n",
+ mic->name, pathname, strerror(errno));
+ goto close_error1;
+ }
ret = read(fd, value, sizeof(value));
if (ret < 0) {
mpsslog("%s: Failed to read sysfs entry '%s': %s\n",
--- /dev/null
+ ==============================
+ KERNEL MODULE SIGNING FACILITY
+ ==============================
+
+CONTENTS
+
+ - Overview.
+ - Configuring module signing.
+ - Generating signing keys.
+ - Public keys in the kernel.
+ - Manually signing modules.
+ - Signed modules and stripping.
+ - Loading signed modules.
+ - Non-valid signatures and unsigned modules.
+ - Administering/protecting the private key.
+
+
+========
+OVERVIEW
+========
+
+The kernel module signing facility cryptographically signs modules during
+installation and then checks the signature upon loading the module. This
+allows increased kernel security by disallowing the loading of unsigned modules
+or modules signed with an invalid key. Module signing increases security by
+making it harder to load a malicious module into the kernel. The module
+signature checking is done by the kernel so that it is not necessary to have
+trusted userspace bits.
+
+This facility uses X.509 ITU-T standard certificates to encode the public keys
+involved. The signatures are not themselves encoded in any industry-standard
+format. The facility currently only supports the RSA public key encryption
+standard (though it is pluggable and permits others to be used). The possible
+hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
+SHA-512 (the algorithm is selected by data in the signature).
+
+
+==========================
+CONFIGURING MODULE SIGNING
+==========================
+
+The module signing facility is enabled by going to the "Enable Loadable Module
+Support" section of the kernel configuration and turning on
+
+ CONFIG_MODULE_SIG "Module signature verification"
+
+This has a number of options available:
+
+ (1) "Require modules to be validly signed" (CONFIG_MODULE_SIG_FORCE)
+
+ This specifies how the kernel should deal with a module that has a
+ signature for which the key is not known or a module that is unsigned.
+
+ If this is off (ie. "permissive"), then modules for which the key is not
+ available and modules that are unsigned are permitted, but the kernel will
+ be marked as being tainted.
+
+ If this is on (ie. "restrictive"), only modules that have a valid
+ signature that can be verified by a public key in the kernel's possession
+ will be loaded. All other modules will generate an error.
+
+ Irrespective of the setting here, if the module has a signature block that
+ cannot be parsed, it will be rejected out of hand.
+
+
+ (2) "Automatically sign all modules" (CONFIG_MODULE_SIG_ALL)
+
+ If this is on then modules will be automatically signed during the
+ modules_install phase of a build. If this is off, then the modules must
+ be signed manually using:
+
+ scripts/sign-file
+
+
+ (3) "Which hash algorithm should modules be signed with?"
+
+ This presents a choice of which hash algorithm the installation phase will
+ sign the modules with:
+
+ CONFIG_SIG_SHA1 "Sign modules with SHA-1"
+ CONFIG_SIG_SHA224 "Sign modules with SHA-224"
+ CONFIG_SIG_SHA256 "Sign modules with SHA-256"
+ CONFIG_SIG_SHA384 "Sign modules with SHA-384"
+ CONFIG_SIG_SHA512 "Sign modules with SHA-512"
+
+ The algorithm selected here will also be built into the kernel (rather
+ than being a module) so that modules signed with that algorithm can have
+ their signatures checked without causing a dependency loop.
+
+
+=======================
+GENERATING SIGNING KEYS
+=======================
+
+Cryptographic keypairs are required to generate and check signatures. A
+private key is used to generate a signature and the corresponding public key is
+used to check it. The private key is only needed during the build, after which
+it can be deleted or stored securely. The public key gets built into the
+kernel so that it can be used to check the signatures as the modules are
+loaded.
+
+Under normal conditions, the kernel build will automatically generate a new
+keypair using openssl if one does not exist in the files:
+
+ signing_key.priv
+ signing_key.x509
+
+during the building of vmlinux (the public part of the key needs to be built
+into vmlinux) using parameters in the:
+
+ x509.genkey
+
+file (which is also generated if it does not already exist).
+
+It is strongly recommended that you provide your own x509.genkey file.
+
+Most notably, in the x509.genkey file, the req_distinguished_name section
+should be altered from the default:
+
+ [ req_distinguished_name ]
+ O = Magrathea
+ CN = Glacier signing key
+ emailAddress = slartibartfast@magrathea.h2g2
+
+The generated RSA key size can also be set with:
+
+ [ req ]
+ default_bits = 4096
+
+
+It is also possible to manually generate the key private/public files using the
+x509.genkey key generation configuration file in the root node of the Linux
+kernel sources tree and the openssl command. The following is an example to
+generate the public/private key files:
+
+ openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 \
+ -config x509.genkey -outform DER -out signing_key.x509 \
+ -keyout signing_key.priv
+
+
+=========================
+PUBLIC KEYS IN THE KERNEL
+=========================
+
+The kernel contains a ring of public keys that can be viewed by root. They're
+in a keyring called ".system_keyring" that can be seen by:
+
+ [root@deneb ~]# cat /proc/keys
+ ...
+ 223c7853 I------ 1 perm 1f030000 0 0 keyring .system_keyring: 1
+ 302d2d52 I------ 1 perm 1f010000 0 0 asymmetri Fedora kernel signing key: d69a84e6bce3d216b979e9505b3e3ef9a7118079: X509.RSA a7118079 []
+ ...
+
+Beyond the public key generated specifically for module signing, any file
+placed in the kernel source root directory or the kernel build root directory
+whose name is suffixed with ".x509" will be assumed to be an X.509 public key
+and will be added to the keyring.
+
+Further, the architecture code may take public keys from a hardware store and
+add those in also (e.g. from the UEFI key database).
+
+Finally, it is possible to add additional public keys by doing:
+
+ keyctl padd asymmetric "" [.system_keyring-ID] <[key-file]
+
+e.g.:
+
+ keyctl padd asymmetric "" 0x223c7853 <my_public_key.x509
+
+Note, however, that the kernel will only permit keys to be added to
+.system_keyring _if_ the new key's X.509 wrapper is validly signed by a key
+that is already resident in the .system_keyring at the time the key was added.
+
+
+=========================
+MANUALLY SIGNING MODULES
+=========================
+
+To manually sign a module, use the scripts/sign-file tool available in
+the Linux kernel source tree. The script requires 4 arguments:
+
+ 1. The hash algorithm (e.g., sha256)
+ 2. The private key filename
+ 3. The public key filename
+ 4. The kernel module to be signed
+
+The following is an example to sign a kernel module:
+
+ scripts/sign-file sha512 kernel-signkey.priv \
+ kernel-signkey.x509 module.ko
+
+The hash algorithm used does not have to match the one configured, but if it
+doesn't, you should make sure that hash algorithm is either built into the
+kernel or can be loaded without requiring itself.
+
+
+============================
+SIGNED MODULES AND STRIPPING
+============================
+
+A signed module has a digital signature simply appended at the end. The string
+"~Module signature appended~." at the end of the module's file confirms that a
+signature is present but it does not confirm that the signature is valid!
+
+Signed modules are BRITTLE as the signature is outside of the defined ELF
+container. Thus they MAY NOT be stripped once the signature is computed and
+attached. Note the entire module is the signed payload, including any and all
+debug information present at the time of signing.
+
+
+======================
+LOADING SIGNED MODULES
+======================
+
+Modules are loaded with insmod, modprobe, init_module() or finit_module(),
+exactly as for unsigned modules as no processing is done in userspace. The
+signature checking is all done within the kernel.
+
+
+=========================================
+NON-VALID SIGNATURES AND UNSIGNED MODULES
+=========================================
+
+If CONFIG_MODULE_SIG_FORCE is enabled or module.sig_enforce=1 is supplied on
+the kernel command line, the kernel will only load validly signed modules
+for which it has a public key. Otherwise, it will also load modules that are
+unsigned. Any module for which the kernel has a key, but which proves to have
+a signature mismatch will not be permitted to load.
+
+Any module that has an unparseable signature will be rejected.
+
+
+=========================================
+ADMINISTERING/PROTECTING THE PRIVATE KEY
+=========================================
+
+Since the private key is used to sign modules, viruses and malware could use
+the private key to sign modules and compromise the operating system. The
+private key must be either destroyed or moved to a secure location and not kept
+in the root node of the kernel source tree.
Default: 64 (as recommended by RFC1700)
ip_no_pmtu_disc - BOOLEAN
- Disable Path MTU Discovery.
- default FALSE
+ Disable Path MTU Discovery. If enabled and a
+ fragmentation-required ICMP is received, the PMTU to this
+ destination will be set to min_pmtu (see below). You will need
+ to raise min_pmtu to the smallest interface MTU on your system
+ manually if you want to avoid locally generated fragments.
+ Default: FALSE
min_pmtu - INTEGER
default 552 - minimum discovered Path MTU
[shutdown] close() --------> destruction of the transmission socket and
deallocation of all associated resources.
+Socket creation and destruction is also straightforward, and is done
+the same way as for capturing, described in the previous paragraph:
+
+ int fd = socket(PF_PACKET, mode, 0);
+
+The protocol can optionally be 0 in case we only want to transmit
+via this socket, which avoids an expensive call to packet_rcv().
+In this case, you also need to bind(2) the TX_RING with sll_protocol = 0
+set. Otherwise, use htons(ETH_P_ALL) or any other protocol, for example.
+
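+A sketch of such a TX-only bind (assumes <linux/if_packet.h> and
+<net/if.h>; the interface name is illustrative):
+
+ struct sockaddr_ll ll = {
+	.sll_family	= AF_PACKET,
+	.sll_protocol	= 0,	/* TX-only: skips packet_rcv() */
+	.sll_ifindex	= if_nametoindex("eth0"),
+ };
+
+ bind(fd, (struct sockaddr *)&ll, sizeof(ll));
+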
Binding the socket to your network interface is mandatory (with zero copy) to
know the header size of frames used in the circular buffer.
should be used. Of course, for this purpose the device's runtime PM has to be
enabled earlier by calling pm_runtime_enable().
-If the device bus type's or driver's ->probe() callback runs
-pm_runtime_suspend() or pm_runtime_idle() or their asynchronous counterparts,
-they will fail returning -EAGAIN, because the device's usage counter is
-incremented by the driver core before executing ->probe(). Still, it may be
-desirable to suspend the device as soon as ->probe() has finished, so the driver
-core uses pm_runtime_put_sync() to invoke the subsystem-level idle callback for
-the device at that time.
+It may be desirable to suspend the device once ->probe() has finished.
+Therefore the driver core uses the asynchronous pm_request_idle() to submit a
+request to execute the subsystem-level idle callback for the device at that
+time. A driver that makes use of the runtime autosuspend feature may want to
+update the last busy mark before returning from ->probe().
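+
+A sketch of the latter (the driver is hypothetical):
+
+	static int foo_probe(struct platform_device *pdev)
+	{
+		/* ... device setup ... */
+		pm_runtime_mark_last_busy(&pdev->dev);
+		return 0;
+	}
+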
Moreover, the driver core prevents runtime PM callbacks from racing with the bus
notifier callback in __device_release_driver(), which is necessary, because the
__pm_runtime_disable() with 'false' as the second argument for every device
right before executing the subsystem-level .suspend_late() callback for it.
- * During system resume it calls pm_runtime_enable() and pm_runtime_put_sync()
+ * During system resume it calls pm_runtime_enable() and pm_runtime_put()
for every device right after executing the subsystem-level .resume_early()
callback and right after executing the subsystem-level .resume() callback
for it, respectively.
- description of the kernel key retention service.
tomoyo.txt
- documentation on the TOMOYO Linux Security Module.
+IMA-templates.txt
+ - documentation on the template management mechanism for IMA.
--- /dev/null
+ IMA Template Management Mechanism
+
+
+==== INTRODUCTION ====
+
+The original 'ima' template is fixed length, containing the filedata hash
+and pathname. The filedata hash is limited to 20 bytes (md5/sha1).
+The pathname is a null terminated string, limited to 255 characters.
+To overcome these limitations and to add additional file metadata, it is
+necessary to extend the current version of IMA by defining additional
+templates. For example, information that could possibly be reported
+includes the inode UID/GID or the LSM labels of the inode and of the
+process that is accessing it.
+
+However, the main problem with introducing this feature is that, each time
+a new template is defined, the functions that generate and display
+the measurement list would need code for handling the new format
+and, thus, would significantly grow over time.
+
+The proposed solution solves this problem by separating the template
+management from the remaining IMA code. The core of this solution is the
+definition of two new data structures: a template descriptor, to determine
+which information should be included in the measurement list; a template
+field, to generate and display data of a given type.
+
+Managing templates with these structures is very simple. To support
+a new data type, developers define the field identifier and implement
+two functions, init() and show(), respectively to generate and display
+measurement entries. Defining a new template descriptor requires
+specifying the template format, a string of field identifiers separated
+by the '|' character. While in the current implementation it is possible
+to define new template descriptors only by adding their definition in the
+template specific code (ima_template.c), in a future version it will be
+possible to register a new template on a running kernel by supplying to IMA
+the desired format string. In this version, IMA initializes at boot time
+all defined template descriptors by translating the format into an array
+of template fields structures taken from the set of the supported ones.
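+
+A hedged sketch of what such a field definition might look like (the
+field identifier and function names here are illustrative, not taken
+from the actual IMA sources):
+
+	static struct ima_template_field field_uid = {
+		.field_id	= "uid",
+		.field_init	= ima_eventuid_init,	 /* generate the data */
+		.field_show	= ima_show_template_uid, /* display the data */
+	};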
+
+After the initialization step, IMA will call ima_alloc_init_template()
+(new function defined within the patches for the new template management
+mechanism) to generate a new measurement entry by using the template
+descriptor chosen through the kernel configuration or through the newly
+introduced 'ima_template=' kernel command line parameter. It is during this
+phase that the advantages of the new architecture are clearly shown:
+the latter function does not contain specific code to handle a given template
+but, instead, simply calls the init() method of the template fields
+associated with the chosen template descriptor and stores the result (pointer
+to allocated data and data length) in the measurement entry structure.
+
+The same mechanism is employed to display measurement entries.
+The functions ima[_ascii]_measurements_show() retrieve, for each entry,
+the template descriptor used to produce that entry and call the show()
+method for each item of the array of template fields structures.
+
+
+
+==== SUPPORTED TEMPLATE FIELDS AND DESCRIPTORS ====
+
+In the following, there is the list of supported template fields
+('<identifier>': description), that can be used to define new template
+descriptors by adding their identifier to the format string
+(support for more data types will be added later):
+
+ - 'd': the digest of the event (i.e. the digest of a measured file),
+ calculated with the SHA1 or MD5 hash algorithm;
+ - 'n': the name of the event (i.e. the file name), with size up to 255 bytes;
+ - 'd-ng': the digest of the event, calculated with an arbitrary hash
+ algorithm (field format: [<hash algo>:]digest, where the digest
+ prefix is shown only if the hash algorithm is not SHA1 or MD5);
+ - 'n-ng': the name of the event, without size limitations.
+
+
+Below, there is the list of defined template descriptors:
+ - "ima": its format is 'd|n';
+ - "ima-ng" (default): its format is 'd-ng|n-ng'.
+
+
+
+==== USE ====
+
+To specify the template descriptor to be used to generate measurement entries,
+currently the following methods are supported:
+
+ - select a template descriptor among those supported in the kernel
+ configuration ('ima-ng' is the default choice);
+ - specify a template descriptor name from the kernel command line through
+ the 'ima_template=' parameter.
calling processes has a searchable link to the key from one of its
keyrings. There are three functions for dealing with these:
- key_ref_t make_key_ref(const struct key *key,
- unsigned long possession);
+ key_ref_t make_key_ref(const struct key *key, bool possession);
struct key *key_ref_to_ptr(const key_ref_t key_ref);
- unsigned long is_key_possessed(const key_ref_t key_ref);
+ bool is_key_possessed(const key_ref_t key_ref);
The first function constructs a key reference from a key pointer and
- possession information (which must be 0 or 1 and not any other value).
+ possession information (which must be true or false).
The second function retrieves the key pointer from a reference and the
third retrieves the possession flag.
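+
+	For example (a sketch):
+
+		key_ref_t kref = make_key_ref(key, true);
+		struct key *k = key_ref_to_ptr(kref);
+		bool poss = is_key_possessed(kref);
+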
the argument will not be parsed.
-(*) Extra references can be made to a key by calling the following function:
+(*) Extra references can be made to a key by calling one of the following
+ functions:
+ struct key *__key_get(struct key *key);
struct key *key_get(struct key *key);
- These need to be disposed of by calling key_put() when they've been
- finished with. The key pointer passed in will be returned. If the pointer
- is NULL or CONFIG_KEYS is not set then the key will not be dereferenced and
- no increment will take place.
+    Keys so referenced will need to be disposed of by calling key_put() when
+ they've been finished with. The key pointer passed in will be returned.
+
+ In the case of key_get(), if the pointer is NULL or CONFIG_KEYS is not set
+ then the key will not be dereferenced and no increment will take place.
(*) A key's serial number can be obtained by calling:
buf += " /*\n"
buf += " * Setup default attribute lists for various fabric->tf_cit_tmpl\n"
buf += " */\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = " + fabric_mod_name + "_wwn_attrs;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;\n"
- buf += " TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = " + fabric_mod_name + "_wwn_attrs;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;\n"
+ buf += " fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;\n"
buf += " /*\n"
buf += " * Register the fabric for use within TCM\n"
buf += " */\n"
PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table
allocation and pgtable_pmd_page_dtor() on freeing.
-Allocation usually happens in pmd_alloc_one(), freeing in pmd_free(), but
-make sure you cover all PMD table allocation / freeing paths: i.e X86_PAE
-preallocate few PMDs on pgd_alloc().
+Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
+pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
+paths: e.g., X86_PAE preallocates a few PMDs in pgd_alloc().
With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
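As an illustrative sketch only (not lifted from any particular architecture),
the ctor/dtor pairing described above would look like this in an arch's PMD
allocation helpers:

	static pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0);

		if (!page)
			return NULL;
		if (!pgtable_pmd_page_ctor(page)) {	/* init split ptlock */
			__free_pages(page, 0);
			return NULL;
		}
		return (pmd_t *)page_address(page);
	}

	static void pmd_free(struct mm_struct *mm, pmd_t *pmd)
	{
		pgtable_pmd_page_dtor(virt_to_page(pmd));	/* release it */
		free_pages((unsigned long)pmd, 0);
	}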
F: arch/arm/boot/dts/sama*.dtsi
ARM/CALXEDA HIGHBANK ARCHITECTURE
-M: Rob Herring <rob.herring@calxeda.com>
+M: Rob Herring <robh@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-highbank/
F: arch/arm/mach-footbridge/
ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
+M: Shawn Guo <shawn.guo@linaro.org>
M: Sascha Hauer <kernel@pengutronix.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://git.pengutronix.de/git/imx/linux-2.6.git
+T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
F: arch/arm/mach-imx/
+F: arch/arm/boot/dts/imx*
F: arch/arm/configs/imx*_defconfig
-ARM/FREESCALE IMX6
-M: Shawn Guo <shawn.guo@linaro.org>
-L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
-T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
-F: arch/arm/mach-imx/*imx6*
-
ARM/FREESCALE MXS ARM ARCHITECTURE
M: Shawn Guo <shawn.guo@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-keystone/
+F: drivers/clk/keystone/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone.git
ARM/LOGICPD PXA270 MACHINE SUPPORT
M: Lennert Buytenhek <kernel@wantstofly.org>
S: Supported
F: arch/arm/mach-zynq/
F: drivers/cpuidle/cpuidle-zynq.c
+N: zynq
+N: xilinx
+F: drivers/clocksource/cadence_ttc_timer.c
ARM SMMU DRIVER
M: Will Deacon <will.deacon@arm.com>
F: drivers/gpio/gpio-bt8xx.c
BTRFS FILE SYSTEM
-M: Chris Mason <chris.mason@fusionio.com>
+M: Chris Mason <clm@fb.com>
+M: Josef Bacik <jbacik@fb.com>
L: linux-btrfs@vger.kernel.org
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
F: Documentation/zh_CN/
CHIPIDEA USB HIGH SPEED DUAL ROLE CONTROLLER
-M: Alexander Shishkin <alexander.shishkin@linux.intel.com>
+M: Peter Chen <Peter.Chen@freescale.com>
+T: git://github.com/hzpeterchen/linux-usb.git
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/chipidea/
+CHROME HARDWARE PLATFORM SUPPORT
+M: Olof Johansson <olof@lixom.net>
+S: Maintained
+F: drivers/platform/chrome/
+
CISCO VIC ETHERNET NIC DRIVER
M: Christian Benvenuti <benve@cisco.com>
M: Sujith Sankar <ssujith@cisco.com>
GPIO SUBSYSTEM
M: Linus Walleij <linus.walleij@linaro.org>
-S: Maintained
+M: Alexandre Courbot <gnurou@gmail.com>
L: linux-gpio@vger.kernel.org
-F: Documentation/gpio.txt
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
+S: Maintained
+F: Documentation/gpio/
F: drivers/gpio/
F: include/linux/gpio*
F: include/asm-generic/gpio.h
S: Maintained
F: drivers/media/usb/gspca/
+GUID PARTITION TABLE (GPT)
+M: Davidlohr Bueso <davidlohr@hp.com>
+L: linux-efi@vger.kernel.org
+S: Maintained
+F: block/partitions/efi.*
+
STK1160 USB VIDEO CAPTURE DRIVER
M: Ezequiel Garcia <elezegarcia@gmail.com>
L: linux-media@vger.kernel.org
S: Maintained
F: fs/hpfs/
+HSI SUBSYSTEM
+M: Sebastian Reichel <sre@debian.org>
+S: Maintained
+F: Documentation/ABI/testing/sysfs-bus-hsi
+F: drivers/hsi/
+F: include/linux/hsi/
+F: include/uapi/linux/hsi/
+
HSO 3G MODEM DRIVER
M: Jan Dumon <j.dumon@option.com>
W: http://www.pharscape.org
S: Maintained
F: drivers/net/usb/hso.c
+HSR NETWORK PROTOCOL
+M: Arvid Brodin <arvid.brodin@alten.se>
+L: netdev@vger.kernel.org
+S: Maintained
+F: net/hsr/
+
HTCPEN TOUCHSCREEN DRIVER
M: Pau Oliva Fora <pof@eslack.org>
L: linux-input@vger.kernel.org
F: arch/x86/kernel/cpu/mshyperv.c
F: drivers/hid/hid-hyperv.c
F: drivers/hv/
+F: drivers/input/serio/hyperv-keyboard.c
F: drivers/net/hyperv/
F: drivers/scsi/storvsc_drv.c
F: drivers/video/hyperv_fb.c
M: Carolyn Wyborny <carolyn.wyborny@intel.com>
M: Don Skidmore <donald.c.skidmore@intel.com>
M: Greg Rose <gregory.v.rose@intel.com>
-M: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
M: Alex Duyck <alexander.h.duyck@intel.com>
M: John Ronciak <john.ronciak@intel.com>
-M: Tushar Dave <tushar.n.dave@intel.com>
L: e1000-devel@lists.sourceforge.net
W: http://www.intel.com/support/feedback.htm
W: http://e1000.sourceforge.net/
F: Documentation/lockdep*.txt
F: Documentation/lockstat.txt
F: include/linux/lockdep.h
-F: kernel/lockdep*
+F: kernel/locking/
LOGICAL DISK MANAGER SUPPORT (LDM, Windows 2000/XP/Vista Dynamic Disks)
M: "Richard Russon (FlatCap)" <ldm@flatcap.org>
M: Herbert Xu <herbert@gondor.apana.org.au>
M: "David S. Miller" <davem@davemloft.net>
L: netdev@vger.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git
S: Maintained
F: net/xfrm/
F: net/key/
F: net/ipv4/xfrm*
+F: net/ipv4/esp4.c
+F: net/ipv4/ah4.c
+F: net/ipv4/ipcomp.c
+F: net/ipv4/ip_vti.c
F: net/ipv6/xfrm*
+F: net/ipv6/esp6.c
+F: net/ipv6/ah6.c
+F: net/ipv6/ipcomp6.c
+F: net/ipv6/ip6_vti.c
F: include/uapi/linux/xfrm.h
F: include/net/xfrm.h
F: include/linux/platform_data/pn544.h
NFS, SUNRPC, AND LOCKD CLIENTS
-M: Trond Myklebust <Trond.Myklebust@netapp.com>
+M: Trond Myklebust <trond.myklebust@primarydata.com>
L: linux-nfs@vger.kernel.org
W: http://client.linux-nfs.org
-T: git git://git.linux-nfs.org/pub/linux/nfs-2.6.git
+T: git git://git.linux-nfs.org/projects/trondmy/linux-nfs.git
S: Maintained
F: fs/lockd/
F: fs/nfs/
OPEN FIRMWARE AND FLATTENED DEVICE TREE
M: Grant Likely <grant.likely@linaro.org>
-M: Rob Herring <rob.herring@calxeda.com>
+M: Rob Herring <robh+dt@kernel.org>
L: devicetree@vger.kernel.org
W: http://fdt.secretlab.ca
T: git git://git.secretlab.ca/git/linux-2.6.git
K: of_match_table
OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS
-M: Rob Herring <rob.herring@calxeda.com>
+M: Rob Herring <robh+dt@kernel.org>
M: Pawel Moll <pawel.moll@arm.com>
M: Mark Rutland <mark.rutland@arm.com>
-M: Stephen Warren <swarren@wwwdotorg.org>
M: Ian Campbell <ijc+devicetree@hellion.org.uk>
+M: Kumar Gala <galak@codeaurora.org>
L: devicetree@vger.kernel.org
S: Maintained
F: Documentation/devicetree/
F: include/linux/pci*
F: arch/x86/pci/
+PCI DRIVER FOR IMX6
+M: Richard Zhu <r65037@freescale.com>
+M: Shawn Guo <shawn.guo@linaro.org>
+L: linux-pci@vger.kernel.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: drivers/pci/host/*imx6*
+
+PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
+M: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+M: Jason Cooper <jason@lakedaemon.net>
+L: linux-pci@vger.kernel.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: drivers/pci/host/*mvebu*
+
PCI DRIVER FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
L: linux-tegra@vger.kernel.org
+L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
F: drivers/pci/host/pci-tegra.c
+PCI DRIVER FOR RENESAS R-CAR
+M: Simon Horman <horms@verge.net.au>
+L: linux-pci@vger.kernel.org
+L: linux-sh@vger.kernel.org
+S: Maintained
+F: drivers/pci/host/*rcar*
+
PCI DRIVER FOR SAMSUNG EXYNOS
M: Jingoo Han <jg1.han@samsung.com>
L: linux-pci@vger.kernel.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/pci-exynos.c
+PCI DRIVER FOR SYNOPSYS DESIGNWARE
+M: Mohit Kumar <mohit.kumar@st.com>
+M: Jingoo Han <jg1.han@samsung.com>
+L: linux-pci@vger.kernel.org
+S: Maintained
+F: drivers/pci/host/*designware*
+
PCMCIA SUBSYSTEM
P: Linux PCMCIA Team
L: linux-pcmcia@lists.infradead.org
F: kernel/sched/
F: include/linux/sched.h
F: include/uapi/linux/sched.h
-F: kernel/wait.c
F: include/linux/wait.h
SCORE ARCHITECTURE
M: Stephen Smalley <sds@tycho.nsa.gov>
M: James Morris <james.l.morris@oracle.com>
M: Eric Paris <eparis@parisplace.org>
+M: Paul Moore <paul@paul-moore.com>
L: selinux@tycho.nsa.gov (subscribers-only, general discussion)
W: http://selinuxproject.org
-T: git git://git.infradead.org/users/eparis/selinux.git
+T: git git://git.infradead.org/users/pcmoore/selinux
S: Supported
F: include/linux/selinux*
F: security/selinux/
TPM DEVICE DRIVER
M: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com>
M: Ashley Lai <ashley@ashleylai.com>
+M: Peter Huewe <peterhuewe@gmx.de>
M: Rajiv Andrade <mail@srajiv.net>
W: http://tpmdd.sourceforge.net
M: Marcel Selhorst <tpmdd@selhorst.net>
XFS FILESYSTEM
P: Silicon Graphics Inc
+M: Dave Chinner <david@fromorbit.com>
M: Ben Myers <bpm@sgi.com>
-M: Alex Elder <elder@kernel.org>
M: xfs@oss.sgi.com
L: xfs@oss.sgi.com
W: http://oss.sgi.com/projects/xfs
VERSION = 3
-PATCHLEVEL = 12
+PATCHLEVEL = 13
SUBLEVEL = 0
-EXTRAVERSION =
+EXTRAVERSION = -rc7
NAME = One Giant Leap for Frogkind
# *DOCUMENTATION*
# Select initial ramdisk compression format, default is gzip(1).
# This shall be used by the dracut(8) tool while creating an initramfs image.
#
-INITRD_COMPRESS=gzip
-ifeq ($(CONFIG_RD_BZIP2), y)
- INITRD_COMPRESS=bzip2
-else ifeq ($(CONFIG_RD_LZMA), y)
- INITRD_COMPRESS=lzma
-else ifeq ($(CONFIG_RD_XZ), y)
- INITRD_COMPRESS=xz
-else ifeq ($(CONFIG_RD_LZO), y)
- INITRD_COMPRESS=lzo
-else ifeq ($(CONFIG_RD_LZ4), y)
- INITRD_COMPRESS=lz4
-endif
-export INITRD_COMPRESS
+INITRD_COMPRESS-y := gzip
+INITRD_COMPRESS-$(CONFIG_RD_BZIP2) := bzip2
+INITRD_COMPRESS-$(CONFIG_RD_LZMA) := lzma
+INITRD_COMPRESS-$(CONFIG_RD_XZ) := xz
+INITRD_COMPRESS-$(CONFIG_RD_LZO) := lzo
+INITRD_COMPRESS-$(CONFIG_RD_LZ4) := lz4
+# do not export INITRD_COMPRESS, since we didn't actually
+# choose a sane default compression above.
+# export INITRD_COMPRESS := $(INITRD_COMPRESS-y)
ifdef CONFIG_MODULE_SIG_ALL
MODSECKEY = ./signing_key.priv
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+ select GENERIC_CLOCKEVENTS
select GENERIC_SMP_IDLE_THREAD
- select GENERIC_CMOS_UPDATE
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select HAVE_MOD_ARCH_SPECIFIC
which always have multiple hoses, and whose consoles support it.
+config ALPHA_QEMU
+ bool "Run under QEMU emulation"
+ depends on !ALPHA_GENERIC
+ ---help---
+ Assume the presence of special features supported by QEMU PALcode
+ that reduce the overhead of system emulation.
+
+ Generic kernels will auto-detect QEMU. But when building a
+ system-specific kernel, the assumption is that we want to
+	  eliminate as many runtime tests as possible.
+
+ If unsure, say N.
+
+
config ALPHA_SRM
bool "Use SRM as bootloader" if ALPHA_CABRIOLET || ALPHA_AVANTI_CH || ALPHA_EB64P || ALPHA_PC164 || ALPHA_TAKARA || ALPHA_EB164 || ALPHA_ALCOR || ALPHA_MIATA || ALPHA_LX164 || ALPHA_SX164 || ALPHA_NAUTILUS || ALPHA_NONAME
depends on TTY
Access). This option is for configuring high-end multiprocessor
server machines. If in doubt, say N.
+config ALPHA_WTINT
+ bool "Use WTINT" if ALPHA_SRM || ALPHA_GENERIC
+ default y if ALPHA_QEMU
+ default n if ALPHA_EV5 || ALPHA_EV56 || (ALPHA_EV4 && !ALPHA_LCA)
+ default n if !ALPHA_SRM && !ALPHA_GENERIC
+ default y if SMP
+ ---help---
+	  The Wait for Interrupt (WTINT) PALcall attempts to put the CPU
+	  to sleep until the next interrupt.  This may reduce the power
+	  consumed and the heat produced by the computer.  However, it has
+ the side effect of making the cycle counter unreliable as a timing
+ device across the sleep.
+
+ For emulation under QEMU, definitely say Y here, as we have other
+ mechanisms for measuring time than the cycle counter.
+
+ For EV4 (but not LCA), EV5 and EV56 systems, or for systems running
+ MILO, sleep mode is not supported so you might as well say N here.
+
+ For SMP systems we cannot use the cycle counter for timing anyway,
+ so you might as well say Y here.
+
+ If unsure, say N.
+
config NODES_SHIFT
int
default "7"
Take the default (1) unless you want more control or more info.
+choice
+ prompt "Timer interrupt frequency (HZ)?"
+ default HZ_128 if ALPHA_QEMU
+ default HZ_1200 if ALPHA_RAWHIDE
+ default HZ_1024
+ ---help---
+ The frequency at which timer interrupts occur. A high frequency
+	  minimizes latency, whereas a low frequency minimizes the overhead
+	  of process accounting.  The latter effect is especially significant
+	  when running under QEMU.
+
+ Note that some Alpha hardware cannot change the interrupt frequency
+ of the timer. If unsure, say 1024 (or 1200 for Rawhide).
+
+ config HZ_32
+ bool "32 Hz"
+ config HZ_64
+ bool "64 Hz"
+ config HZ_128
+ bool "128 Hz"
+ config HZ_256
+ bool "256 Hz"
+ config HZ_1024
+ bool "1024 Hz"
+ config HZ_1200
+ bool "1200 Hz"
+endchoice
+
config HZ
- int
- default 1200 if ALPHA_RAWHIDE
+ int
+ default 32 if HZ_32
+ default 64 if HZ_64
+ default 128 if HZ_128
+ default 256 if HZ_256
+ default 1200 if HZ_1200
default 1024
source "drivers/pci/Kconfig"
int nr_irqs;
int rtc_port;
+ int rtc_boot_cpu_only;
unsigned int max_asn;
unsigned long max_isa_dma_address;
unsigned long irq_probe_mask;
struct _alpha_agp_info *(*agp_info)(void);
- unsigned int (*rtc_get_time)(struct rtc_time *);
- int (*rtc_set_time)(struct rtc_time *);
-
const char *vector_name;
/* NUMA information */
#ifdef CONFIG_ALPHA_GENERIC
extern int alpha_using_srm;
+extern int alpha_using_qemu;
#else
-#ifdef CONFIG_ALPHA_SRM
-#define alpha_using_srm 1
-#else
-#define alpha_using_srm 0
-#endif
+# ifdef CONFIG_ALPHA_SRM
+# define alpha_using_srm 1
+# else
+# define alpha_using_srm 0
+# endif
+# ifdef CONFIG_ALPHA_QEMU
+# define alpha_using_qemu 1
+# else
+# define alpha_using_qemu 0
+# endif
#endif /* GENERIC */
-#endif
+#endif /* __KERNEL__ */
#endif /* __ALPHA_MACHVEC_H */
__CALL_PAL_RW2(wrperfmon, unsigned long, unsigned long, unsigned long);
__CALL_PAL_W1(wrusp, unsigned long);
__CALL_PAL_W1(wrvptptr, unsigned long);
+__CALL_PAL_RW1(wtint, unsigned long, unsigned long);
/*
* TB routines..
#define tbiap() __tbi(-1, /* no second argument */)
#define tbia() __tbi(-2, /* no second argument */)
+/*
+ * QEMU Cserv routines..
+ */
+
+static inline unsigned long
+qemu_get_walltime(void)
+{
+ register unsigned long v0 __asm__("$0");
+ register unsigned long a0 __asm__("$16") = 3;
+
+ asm("call_pal %2 # cserve get_time"
+ : "=r"(v0), "+r"(a0)
+ : "i"(PAL_cserve)
+ : "$17", "$18", "$19", "$20", "$21");
+
+ return v0;
+}
+
+static inline unsigned long
+qemu_get_alarm(void)
+{
+ register unsigned long v0 __asm__("$0");
+ register unsigned long a0 __asm__("$16") = 4;
+
+ asm("call_pal %2 # cserve get_alarm"
+ : "=r"(v0), "+r"(a0)
+ : "i"(PAL_cserve)
+ : "$17", "$18", "$19", "$20", "$21");
+
+ return v0;
+}
+
+static inline void
+qemu_set_alarm_rel(unsigned long expire)
+{
+ register unsigned long a0 __asm__("$16") = 5;
+ register unsigned long a1 __asm__("$17") = expire;
+
+ asm volatile("call_pal %2 # cserve set_alarm_rel"
+ : "+r"(a0), "+r"(a1)
+ : "i"(PAL_cserve)
+ : "$0", "$18", "$19", "$20", "$21");
+}
+
+static inline void
+qemu_set_alarm_abs(unsigned long expire)
+{
+ register unsigned long a0 __asm__("$16") = 6;
+ register unsigned long a1 __asm__("$17") = expire;
+
+ asm volatile("call_pal %2 # cserve set_alarm_abs"
+ : "+r"(a0), "+r"(a1)
+ : "i"(PAL_cserve)
+ : "$0", "$18", "$19", "$20", "$21");
+}
+
+static inline unsigned long
+qemu_get_vmtime(void)
+{
+ register unsigned long v0 __asm__("$0");
+ register unsigned long a0 __asm__("$16") = 7;
+
+ asm("call_pal %2 # cserve get_time"
+ : "=r"(v0), "+r"(a0)
+ : "i"(PAL_cserve)
+ : "$17", "$18", "$19", "$20", "$21");
+
+ return v0;
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* __ALPHA_PAL_H */
-#ifndef _ALPHA_RTC_H
-#define _ALPHA_RTC_H
-
-#if defined(CONFIG_ALPHA_MARVEL) && defined(CONFIG_SMP) \
- || defined(CONFIG_ALPHA_GENERIC)
-# define get_rtc_time alpha_mv.rtc_get_time
-# define set_rtc_time alpha_mv.rtc_set_time
-#endif
-
#include <asm-generic/rtc.h>
-
-#endif
#define __HAVE_ARCH_MEMSET
extern void * __constant_c_memset(void *, unsigned long, size_t);
+extern void * ___memset(void *, int, size_t);
extern void * __memset(void *, int, size_t);
extern void * memset(void *, int, size_t);
-#define memset(s, c, n) \
-(__builtin_constant_p(c) \
- ? (__builtin_constant_p(n) && (c) == 0 \
- ? __builtin_memset((s),0,(n)) \
- : __constant_c_memset((s),0x0101010101010101UL*(unsigned char)(c),(n))) \
- : __memset((s),(c),(n)))
+/* For gcc 3.x, we cannot have the inline function named "memset" because
+ the __builtin_memset will attempt to resolve to the inline as well,
+ leading to a "sorry" about unimplemented recursive inlining. */
+extern inline void *__memset(void *s, int c, size_t n)
+{
+ if (__builtin_constant_p(c)) {
+ if (__builtin_constant_p(n)) {
+ return __builtin_memset(s, c, n);
+ } else {
+ unsigned long c8 = (c & 0xff) * 0x0101010101010101UL;
+ return __constant_c_memset(s, c8, n);
+ }
+ }
+ return ___memset(s, c, n);
+}
+
+#define memset __memset
#define __HAVE_ARCH_STRCPY
extern char * strcpy(char *,const char *);
#define PAL_rdusp 58
#define PAL_whami 60
#define PAL_retsys 61
+#define PAL_wtint 62
#define PAL_rti 63
obj-$(CONFIG_SRM_ENV) += srm_env.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_PERF_EVENTS) += perf_event.o
+obj-$(CONFIG_RTC_DRV_ALPHA) += rtc.o
ifdef CONFIG_ALPHA_GENERIC
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(__memcpy);
EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(___memset);
EXPORT_SYMBOL(__memsetw);
EXPORT_SYMBOL(__constant_c_memset);
EXPORT_SYMBOL(copy_page);
break;
case 1:
old_regs = set_irq_regs(regs);
-#ifdef CONFIG_SMP
- {
- long cpu;
-
- smp_percpu_timer_interrupt(regs);
- cpu = smp_processor_id();
- if (cpu != boot_cpuid) {
- kstat_incr_irqs_this_cpu(RTC_IRQ, irq_to_desc(RTC_IRQ));
- } else {
- handle_irq(RTC_IRQ);
- }
- }
-#else
handle_irq(RTC_IRQ);
-#endif
set_irq_regs(old_regs);
return;
case 2:
*/
struct irqaction timer_irqaction = {
- .handler = timer_interrupt,
+ .handler = rtc_timer_interrupt,
.name = "timer",
};
#define CAT1(x,y) x##y
#define CAT(x,y) CAT1(x,y)
-#define DO_DEFAULT_RTC \
- .rtc_port = 0x70, \
- .rtc_get_time = common_get_rtc_time, \
- .rtc_set_time = common_set_rtc_time
+#define DO_DEFAULT_RTC .rtc_port = 0x70
#define DO_EV4_MMU \
.max_asn = EV4_MAX_ASN, \
long pmc_left[3];
/* Subroutine for allocation of PMCs. Enforces constraints. */
int (*check_constraints)(struct perf_event **, unsigned long *, int);
+ /* Subroutine for checking validity of a raw event for this PMU. */
+ int (*raw_event_valid)(u64 config);
};
/*
}
+static int ev67_raw_event_valid(u64 config)
+{
+ return config >= EV67_CYCLES && config < EV67_LAST_ET;
+}
+
static const struct alpha_pmu_t ev67_pmu = {
.event_map = ev67_perfmon_event_map,
.max_events = ARRAY_SIZE(ev67_perfmon_event_map),
.pmc_count_mask = {EV67_PCTR_0_COUNT_MASK, EV67_PCTR_1_COUNT_MASK, 0},
.pmc_max_period = {(1UL<<20) - 1, (1UL<<20) - 1, 0},
.pmc_left = {16, 4, 0},
- .check_constraints = ev67_check_constraints
+ .check_constraints = ev67_check_constraints,
+ .raw_event_valid = ev67_raw_event_valid,
};
} else if (attr->type == PERF_TYPE_HW_CACHE) {
return -EOPNOTSUPP;
} else if (attr->type == PERF_TYPE_RAW) {
- ev = attr->config & 0xff;
+ if (!alpha_pmu->raw_event_valid(attr->config))
+ return -EINVAL;
+ ev = attr->config;
} else {
return -EOPNOTSUPP;
}
void (*pm_power_off)(void) = machine_power_off;
EXPORT_SYMBOL(pm_power_off);
+#ifdef CONFIG_ALPHA_WTINT
+/*
+ * Sleep the CPU.
+ * EV6, LCA45 and QEMU know how to power down, skipping N timer interrupts.
+ */
+void arch_cpu_idle(void)
+{
+ wtint(0);
+ local_irq_enable();
+}
+
+void arch_cpu_idle_dead(void)
+{
+ wtint(INT_MAX);
+}
+#endif /* ALPHA_WTINT */
+
struct halt_info {
int mode;
char *restart_cmd;
/* smp.c */
extern void setup_smp(void);
extern void handle_ipi(struct pt_regs *);
-extern void smp_percpu_timer_interrupt(struct pt_regs *);
/* bios32.c */
/* extern void reset_for_srm(void); */
/* time.c */
-extern irqreturn_t timer_interrupt(int irq, void *dev);
+extern irqreturn_t rtc_timer_interrupt(int irq, void *dev);
+extern void init_clockevent(void);
extern void common_init_rtc(void);
extern unsigned long est_cycle_freq;
-extern unsigned int common_get_rtc_time(struct rtc_time *time);
-extern int common_set_rtc_time(struct rtc_time *time);
/* smc37c93x.c */
extern void SMC93x_Init(void);
--- /dev/null
+/*
+ * linux/arch/alpha/kernel/rtc.c
+ *
+ * Copyright (C) 1991, 1992, 1995, 1999, 2000 Linus Torvalds
+ *
+ * This file contains date handling.
+ */
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/mc146818rtc.h>
+#include <linux/bcd.h>
+#include <linux/rtc.h>
+#include <linux/platform_device.h>
+
+#include <asm/rtc.h>
+
+#include "proto.h"
+
+
+/*
+ * Support for the RTC device.
+ *
+ * We don't want to use the rtc-cmos driver, because we don't want to support
+ * alarms, as that would be indistinguishable from timer interrupts.
+ *
+ * Further, generic code is really, really tied to a 1900 epoch. This is
+ * true in __get_rtc_time as well as the users of struct rtc_time e.g.
+ * rtc_tm_to_time. Thankfully all of the other epochs in use are later
+ * than 1900, and so it's easy to adjust.
+ */
+
+static unsigned long rtc_epoch;
+
+static int __init
+specify_epoch(char *str)
+{
+ unsigned long epoch = simple_strtoul(str, NULL, 0);
+ if (epoch < 1900)
+		printk(KERN_WARNING "Ignoring invalid user-specified epoch %lu\n", epoch);
+ else
+ rtc_epoch = epoch;
+ return 1;
+}
+__setup("epoch=", specify_epoch);
+
+static void __init
+init_rtc_epoch(void)
+{
+ int epoch, year, ctrl;
+
+ if (rtc_epoch != 0) {
+ /* The epoch was specified on the command-line. */
+ return;
+ }
+
+ /* Detect the epoch in use on this computer. */
+ ctrl = CMOS_READ(RTC_CONTROL);
+ year = CMOS_READ(RTC_YEAR);
+ if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ year = bcd2bin(year);
+
+ /* PC-like is standard; used for year >= 70 */
+ epoch = 1900;
+ if (year < 20) {
+ epoch = 2000;
+ } else if (year >= 20 && year < 48) {
+ /* NT epoch */
+ epoch = 1980;
+ } else if (year >= 48 && year < 70) {
+ /* Digital UNIX epoch */
+ epoch = 1952;
+ }
+ rtc_epoch = epoch;
+
+ printk(KERN_INFO "Using epoch %d for rtc year %d\n", epoch, year);
+}
+
+static int
+alpha_rtc_read_time(struct device *dev, struct rtc_time *tm)
+{
+ __get_rtc_time(tm);
+
+ /* Adjust for non-default epochs. It's easier to depend on the
+ generic __get_rtc_time and adjust the epoch here than create
+ a copy of __get_rtc_time with the edits we need. */
+ if (rtc_epoch != 1900) {
+ int year = tm->tm_year;
+ /* Undo the century adjustment made in __get_rtc_time. */
+ if (year >= 100)
+ year -= 100;
+ year += rtc_epoch - 1900;
+ /* Redo the century adjustment with the epoch in place. */
+ if (year <= 69)
+ year += 100;
+ tm->tm_year = year;
+ }
+
+ return rtc_valid_tm(tm);
+}
+
+static int
+alpha_rtc_set_time(struct device *dev, struct rtc_time *tm)
+{
+ struct rtc_time xtm;
+
+ if (rtc_epoch != 1900) {
+ xtm = *tm;
+ xtm.tm_year -= rtc_epoch - 1900;
+ tm = &xtm;
+ }
+
+ return __set_rtc_time(tm);
+}
+
+static int
+alpha_rtc_set_mmss(struct device *dev, unsigned long nowtime)
+{
+ int retval = 0;
+ int real_seconds, real_minutes, cmos_minutes;
+ unsigned char save_control, save_freq_select;
+
+ /* Note: This code only updates minutes and seconds. Comments
+ indicate this was to avoid messing with unknown time zones,
+ and with the epoch nonsense described above. In order for
+ this to work, the existing clock cannot be off by more than
+ 15 minutes.
+
+	   ??? This choice may be out of date.  The x86 port does
+	   not have problems with timezones, and the epoch processing has
+	   now been fixed in alpha_rtc_set_time.
+
+ In either case, one can always force a full rtc update with
+ the userland hwclock program, so surely 15 minute accuracy
+ is no real burden. */
+
+ /* In order to set the CMOS clock precisely, we have to be called
+ 500 ms after the second nowtime has started, because when
+ nowtime is written into the registers of the CMOS clock, it will
+ jump to the next second precisely 500 ms later. Check the Motorola
+ MC146818A or Dallas DS12887 data sheet for details. */
+
+ /* irq are locally disabled here */
+ spin_lock(&rtc_lock);
+ /* Tell the clock it's being set */
+ save_control = CMOS_READ(RTC_CONTROL);
+ CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+
+ /* Stop and reset prescaler */
+ save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
+ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+
+ cmos_minutes = CMOS_READ(RTC_MINUTES);
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ cmos_minutes = bcd2bin(cmos_minutes);
+
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - cmos_minutes) + 15) / 30) & 1) {
+ /* correct for half hour time zone */
+ real_minutes += 30;
+ }
+ real_minutes %= 60;
+
+ if (abs(real_minutes - cmos_minutes) < 30) {
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ real_seconds = bin2bcd(real_seconds);
+ real_minutes = bin2bcd(real_minutes);
+ }
+ CMOS_WRITE(real_seconds,RTC_SECONDS);
+ CMOS_WRITE(real_minutes,RTC_MINUTES);
+ } else {
+ printk_once(KERN_NOTICE
+ "set_rtc_mmss: can't update from %d to %d\n",
+ cmos_minutes, real_minutes);
+ retval = -1;
+ }
+
+ /* The following flags have to be released exactly in this order,
+ * otherwise the DS12887 (popular MC146818A clone with integrated
+ * battery and quartz) will not reset the oscillator and will not
+ * update precisely 500 ms later. You won't find this mentioned in
+ * the Dallas Semiconductor data sheets, but who believes data
+ * sheets anyway ... -- Markus Kuhn
+ */
+ CMOS_WRITE(save_control, RTC_CONTROL);
+ CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+ spin_unlock(&rtc_lock);
+
+ return retval;
+}
+
+static int
+alpha_rtc_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case RTC_EPOCH_READ:
+ return put_user(rtc_epoch, (unsigned long __user *)arg);
+ case RTC_EPOCH_SET:
+ if (arg < 1900)
+ return -EINVAL;
+ rtc_epoch = arg;
+ return 0;
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+
+static const struct rtc_class_ops alpha_rtc_ops = {
+ .read_time = alpha_rtc_read_time,
+ .set_time = alpha_rtc_set_time,
+ .set_mmss = alpha_rtc_set_mmss,
+ .ioctl = alpha_rtc_ioctl,
+};
+
+/*
+ * Similarly, except do the actual CMOS access on the boot cpu only.
+ * This requires marshalling the data across an interprocessor call.
+ */
+
+#if defined(CONFIG_SMP) && \
+ (defined(CONFIG_ALPHA_GENERIC) || defined(CONFIG_ALPHA_MARVEL))
+# define HAVE_REMOTE_RTC 1
+
+union remote_data {
+ struct rtc_time *tm;
+ unsigned long now;
+ long retval;
+};
+
+static void
+do_remote_read(void *data)
+{
+ union remote_data *x = data;
+ x->retval = alpha_rtc_read_time(NULL, x->tm);
+}
+
+static int
+remote_read_time(struct device *dev, struct rtc_time *tm)
+{
+ union remote_data x;
+ if (smp_processor_id() != boot_cpuid) {
+ x.tm = tm;
+ smp_call_function_single(boot_cpuid, do_remote_read, &x, 1);
+ return x.retval;
+ }
+ return alpha_rtc_read_time(NULL, tm);
+}
+
+static void
+do_remote_set(void *data)
+{
+ union remote_data *x = data;
+ x->retval = alpha_rtc_set_time(NULL, x->tm);
+}
+
+static int
+remote_set_time(struct device *dev, struct rtc_time *tm)
+{
+ union remote_data x;
+ if (smp_processor_id() != boot_cpuid) {
+ x.tm = tm;
+ smp_call_function_single(boot_cpuid, do_remote_set, &x, 1);
+ return x.retval;
+ }
+ return alpha_rtc_set_time(NULL, tm);
+}
+
+static void
+do_remote_mmss(void *data)
+{
+ union remote_data *x = data;
+ x->retval = alpha_rtc_set_mmss(NULL, x->now);
+}
+
+static int
+remote_set_mmss(struct device *dev, unsigned long now)
+{
+ union remote_data x;
+ if (smp_processor_id() != boot_cpuid) {
+ x.now = now;
+ smp_call_function_single(boot_cpuid, do_remote_mmss, &x, 1);
+ return x.retval;
+ }
+ return alpha_rtc_set_mmss(NULL, now);
+}
+
+static const struct rtc_class_ops remote_rtc_ops = {
+ .read_time = remote_read_time,
+ .set_time = remote_set_time,
+ .set_mmss = remote_set_mmss,
+ .ioctl = alpha_rtc_ioctl,
+};
+#endif
+
+static int __init
+alpha_rtc_init(void)
+{
+ const struct rtc_class_ops *ops;
+ struct platform_device *pdev;
+ struct rtc_device *rtc;
+ const char *name;
+
+ init_rtc_epoch();
+ name = "rtc-alpha";
+ ops = &alpha_rtc_ops;
+
+#ifdef HAVE_REMOTE_RTC
+ if (alpha_mv.rtc_boot_cpu_only)
+ ops = &remote_rtc_ops;
+#endif
+
+	pdev = platform_device_register_simple(name, -1, NULL, 0);
+	if (IS_ERR(pdev))
+		return PTR_ERR(pdev);
+ rtc = devm_rtc_device_register(&pdev->dev, name, ops, THIS_MODULE);
+ if (IS_ERR(rtc))
+ return PTR_ERR(rtc);
+
+ platform_set_drvdata(pdev, rtc);
+ return 0;
+}
+device_initcall(alpha_rtc_init);
#ifdef CONFIG_ALPHA_GENERIC
struct alpha_machine_vector alpha_mv;
+#endif
+
+#ifndef alpha_using_srm
int alpha_using_srm;
EXPORT_SYMBOL(alpha_using_srm);
#endif
+#ifndef alpha_using_qemu
+int alpha_using_qemu;
+#endif
+
static struct alpha_machine_vector *get_sysvec(unsigned long, unsigned long,
unsigned long);
static struct alpha_machine_vector *get_sysvec_byname(const char *);
atomic_notifier_chain_register(&panic_notifier_list,
&alpha_panic_block);
-#ifdef CONFIG_ALPHA_GENERIC
+#ifndef alpha_using_srm
/* Assume that we've booted from SRM if we haven't booted from MILO.
Detect the later by looking for "MILO" in the system serial nr. */
alpha_using_srm = strncmp((const char *)hwrpb->ssn, "MILO", 4) != 0;
#endif
+#ifndef alpha_using_qemu
+ /* Similarly, look for QEMU. */
+ alpha_using_qemu = strstr((const char *)hwrpb->ssn, "QEMU") != 0;
+#endif
/* If we are using SRM, we want to allow callbacks
as early as possible, so do this NOW, and then
char *systype_name;
char *sysvariation_name;
int nr_processors;
+ unsigned long timer_freq;
cpu_index = (unsigned) (cpu->type - 1);
cpu_name = "Unknown";
nr_processors = get_nr_processors(cpu, hwrpb->nr_processors);
+#if CONFIG_HZ == 1024 || CONFIG_HZ == 1200
+ timer_freq = (100UL * hwrpb->intr_freq) / 4096;
+#else
+ timer_freq = 100UL * CONFIG_HZ;
+#endif
+
seq_printf(f, "cpu\t\t\t: Alpha\n"
"cpu model\t\t: %s\n"
"cpu variation\t\t: %ld\n"
(char*)hwrpb->ssn,
est_cycle_freq ? : hwrpb->cycle_freq,
est_cycle_freq ? "est." : "",
- hwrpb->intr_freq / 4096,
- (100 * hwrpb->intr_freq / 4096) % 100,
+ timer_freq / 100, timer_freq % 100,
hwrpb->pagesize,
hwrpb->pa_bits,
hwrpb->max_asn,
/* Get our local ticker going. */
smp_setup_percpu_timer(cpuid);
+ init_clockevent();
/* Call platform-specific callin, if specified */
- if (alpha_mv.smp_callin) alpha_mv.smp_callin();
+ if (alpha_mv.smp_callin)
+ alpha_mv.smp_callin();
/* All kernel threads share the same mm context. */
atomic_inc(&init_mm.mm_count);
((bogosum + 2500) / (5000/HZ)) % 100);
}
-\f
-void
-smp_percpu_timer_interrupt(struct pt_regs *regs)
-{
- struct pt_regs *old_regs;
- int cpu = smp_processor_id();
- unsigned long user = user_mode(regs);
- struct cpuinfo_alpha *data = &cpu_data[cpu];
-
- old_regs = set_irq_regs(regs);
-
- /* Record kernel PC. */
- profile_tick(CPU_PROFILING);
-
- if (!--data->prof_counter) {
- /* We need to make like a normal interrupt -- otherwise
- timer interrupts ignore the global interrupt lock,
- which would be a Bad Thing. */
- irq_enter();
-
- update_process_times(user);
-
- data->prof_counter = data->prof_multiplier;
-
- irq_exit();
- }
- set_irq_regs(old_regs);
-}
-
int
setup_profiling_timer(unsigned int multiplier)
{
.machine_check = jensen_machine_check,
.max_isa_dma_address = ALPHA_MAX_ISA_DMA_ADDRESS,
.rtc_port = 0x170,
- .rtc_get_time = common_get_rtc_time,
- .rtc_set_time = common_set_rtc_time,
.nr_irqs = 16,
.device_interrupt = jensen_device_interrupt,
#include <asm/hwrpb.h>
#include <asm/tlbflush.h>
#include <asm/vga.h>
-#include <asm/rtc.h>
#include "proto.h"
#include "err_impl.h"
init_rtc_irq();
}
-struct marvel_rtc_time {
- struct rtc_time *time;
- int retval;
-};
-
-#ifdef CONFIG_SMP
-static void
-smp_get_rtc_time(void *data)
-{
- struct marvel_rtc_time *mrt = data;
- mrt->retval = __get_rtc_time(mrt->time);
-}
-
-static void
-smp_set_rtc_time(void *data)
-{
- struct marvel_rtc_time *mrt = data;
- mrt->retval = __set_rtc_time(mrt->time);
-}
-#endif
-
-static unsigned int
-marvel_get_rtc_time(struct rtc_time *time)
-{
-#ifdef CONFIG_SMP
- struct marvel_rtc_time mrt;
-
- if (smp_processor_id() != boot_cpuid) {
- mrt.time = time;
- smp_call_function_single(boot_cpuid, smp_get_rtc_time, &mrt, 1);
- return mrt.retval;
- }
-#endif
- return __get_rtc_time(time);
-}
-
-static int
-marvel_set_rtc_time(struct rtc_time *time)
-{
-#ifdef CONFIG_SMP
- struct marvel_rtc_time mrt;
-
- if (smp_processor_id() != boot_cpuid) {
- mrt.time = time;
- smp_call_function_single(boot_cpuid, smp_set_rtc_time, &mrt, 1);
- return mrt.retval;
- }
-#endif
- return __set_rtc_time(time);
-}
-
static void
marvel_smp_callin(void)
{
.vector_name = "MARVEL/EV7",
DO_EV7_MMU,
.rtc_port = 0x70,
- .rtc_get_time = marvel_get_rtc_time,
- .rtc_set_time = marvel_set_rtc_time,
+ .rtc_boot_cpu_only = 1,
DO_MARVEL_IO,
.machine_check = marvel_machine_check,
.max_isa_dma_address = ALPHA_MAX_ISA_DMA_ADDRESS,
*
* Copyright (C) 1991, 1992, 1995, 1999, 2000 Linus Torvalds
*
- * This file contains the PC-specific time handling details:
- * reading the RTC at bootup, etc..
- * 1994-07-02 Alan Modra
- * fixed set_rtc_mmss, fixed time.year for >= 2000, new mktime
- * 1995-03-26 Markus Kuhn
- * fixed 500 ms bug at call to set_rtc_mmss, fixed DS12887
- * precision CMOS clock update
+ * This file contains the clocksource time handling.
* 1997-09-10 Updated NTP code according to technical memorandum Jan '96
* "A Kernel Model for Precision Timekeeping" by Dave Mills
* 1997-01-09 Adrian Sun
* 1999-04-16 Thorsten Kranzkowski (dl8bcu@gmx.net)
* fixed algorithm in do_gettimeofday() for calculating the precise time
* from processor cycle counter (now taking lost_ticks into account)
- * 2000-08-13 Jan-Benedict Glaw <jbglaw@lug-owl.de>
- * Fixed time_init to be aware of epoches != 1900. This prevents
- * booting up in 2048 for me;) Code is stolen from rtc.c.
* 2003-06-03 R. Scott Bailey <scott.bailey@eds.com>
* Tighten sanity in time_init from 1% (10,000 PPM) to 250 PPM
*/
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/hwrpb.h>
-#include <asm/rtc.h>
#include <linux/mc146818rtc.h>
#include <linux/time.h>
#include <linux/timex.h>
#include <linux/clocksource.h>
+#include <linux/clockchips.h>
#include "proto.h"
#include "irq_impl.h"
-static int set_rtc_mmss(unsigned long);
-
DEFINE_SPINLOCK(rtc_lock);
EXPORT_SYMBOL(rtc_lock);
-#define TICK_SIZE (tick_nsec / 1000)
-
-/*
- * Shift amount by which scaled_ticks_per_cycle is scaled. Shifting
- * by 48 gives us 16 bits for HZ while keeping the accuracy good even
- * for large CPU clock rates.
- */
-#define FIX_SHIFT 48
-
-/* lump static variables together for more efficient access: */
-static struct {
- /* cycle counter last time it got invoked */
- __u32 last_time;
- /* ticks/cycle * 2^48 */
- unsigned long scaled_ticks_per_cycle;
- /* partial unused tick */
- unsigned long partial_tick;
-} state;
-
unsigned long est_cycle_freq;
#ifdef CONFIG_IRQ_WORK
return __builtin_alpha_rpcc();
}
-int update_persistent_clock(struct timespec now)
-{
- return set_rtc_mmss(now.tv_sec);
-}
-void read_persistent_clock(struct timespec *ts)
+\f
+/*
+ * The RTC as a clock_event_device primitive.
+ */
+
+static DEFINE_PER_CPU(struct clock_event_device, cpu_ce);
+
+irqreturn_t
+rtc_timer_interrupt(int irq, void *dev)
{
- unsigned int year, mon, day, hour, min, sec, epoch;
-
- sec = CMOS_READ(RTC_SECONDS);
- min = CMOS_READ(RTC_MINUTES);
- hour = CMOS_READ(RTC_HOURS);
- day = CMOS_READ(RTC_DAY_OF_MONTH);
- mon = CMOS_READ(RTC_MONTH);
- year = CMOS_READ(RTC_YEAR);
-
- if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
- sec = bcd2bin(sec);
- min = bcd2bin(min);
- hour = bcd2bin(hour);
- day = bcd2bin(day);
- mon = bcd2bin(mon);
- year = bcd2bin(year);
- }
+ int cpu = smp_processor_id();
+ struct clock_event_device *ce = &per_cpu(cpu_ce, cpu);
- /* PC-like is standard; used for year >= 70 */
- epoch = 1900;
- if (year < 20)
- epoch = 2000;
- else if (year >= 20 && year < 48)
- /* NT epoch */
- epoch = 1980;
- else if (year >= 48 && year < 70)
- /* Digital UNIX epoch */
- epoch = 1952;
+ /* Don't run the hook for UNUSED or SHUTDOWN. */
+ if (likely(ce->mode == CLOCK_EVT_MODE_PERIODIC))
+ ce->event_handler(ce);
- printk(KERN_INFO "Using epoch = %d\n", epoch);
+ if (test_irq_work_pending()) {
+ clear_irq_work_pending();
+ irq_work_run();
+ }
- if ((year += epoch) < 1970)
- year += 100;
+ return IRQ_HANDLED;
+}
- ts->tv_sec = mktime(year, mon, day, hour, min, sec);
- ts->tv_nsec = 0;
+static void
+rtc_ce_set_mode(enum clock_event_mode mode, struct clock_event_device *ce)
+{
+ /* The mode member of CE is updated in generic code.
+ Since we only support periodic events, nothing to do. */
+}
+
+static int
+rtc_ce_set_next_event(unsigned long evt, struct clock_event_device *ce)
+{
+ /* This hook is for oneshot mode, which we don't support. */
+ return -EINVAL;
}
+static void __init
+init_rtc_clockevent(void)
+{
+ int cpu = smp_processor_id();
+ struct clock_event_device *ce = &per_cpu(cpu_ce, cpu);
+
+ *ce = (struct clock_event_device){
+ .name = "rtc",
+ .features = CLOCK_EVT_FEAT_PERIODIC,
+ .rating = 100,
+ .cpumask = cpumask_of(cpu),
+ .set_mode = rtc_ce_set_mode,
+ .set_next_event = rtc_ce_set_next_event,
+ };
+ clockevents_config_and_register(ce, CONFIG_HZ, 0, 0);
+}
+\f
/*
- * timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "xtime_update()" routine every clocktick
+ * The QEMU clock as a clocksource primitive.
*/
-irqreturn_t timer_interrupt(int irq, void *dev)
+
+static cycle_t
+qemu_cs_read(struct clocksource *cs)
{
- unsigned long delta;
- __u32 now;
- long nticks;
+ return qemu_get_vmtime();
+}
-#ifndef CONFIG_SMP
- /* Not SMP, do kernel PC profiling here. */
- profile_tick(CPU_PROFILING);
-#endif
+static struct clocksource qemu_cs = {
+ .name = "qemu",
+ .rating = 400,
+ .read = qemu_cs_read,
+ .mask = CLOCKSOURCE_MASK(64),
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+ .max_idle_ns = LONG_MAX
+};
- /*
- * Calculate how many ticks have passed since the last update,
- * including any previous partial leftover. Save any resulting
- * fraction for the next pass.
- */
- now = rpcc();
- delta = now - state.last_time;
- state.last_time = now;
- delta = delta * state.scaled_ticks_per_cycle + state.partial_tick;
- state.partial_tick = delta & ((1UL << FIX_SHIFT) - 1);
- nticks = delta >> FIX_SHIFT;
- if (nticks)
- xtime_update(nticks);
+/*
+ * The QEMU alarm as a clock_event_device primitive.
+ */
- if (test_irq_work_pending()) {
- clear_irq_work_pending();
- irq_work_run();
- }
+static void
+qemu_ce_set_mode(enum clock_event_mode mode, struct clock_event_device *ce)
+{
+ /* The mode member of CE is updated for us in generic code.
+ Just make sure that the event is disabled. */
+ qemu_set_alarm_abs(0);
+}
-#ifndef CONFIG_SMP
- while (nticks--)
- update_process_times(user_mode(get_irq_regs()));
-#endif
+static int
+qemu_ce_set_next_event(unsigned long evt, struct clock_event_device *ce)
+{
+ qemu_set_alarm_rel(evt);
+ return 0;
+}
+static irqreturn_t
+qemu_timer_interrupt(int irq, void *dev)
+{
+ int cpu = smp_processor_id();
+ struct clock_event_device *ce = &per_cpu(cpu_ce, cpu);
+
+ ce->event_handler(ce);
return IRQ_HANDLED;
}
+static void __init
+init_qemu_clockevent(void)
+{
+ int cpu = smp_processor_id();
+ struct clock_event_device *ce = &per_cpu(cpu_ce, cpu);
+
+ *ce = (struct clock_event_device){
+ .name = "qemu",
+ .features = CLOCK_EVT_FEAT_ONESHOT,
+ .rating = 400,
+ .cpumask = cpumask_of(cpu),
+ .set_mode = qemu_ce_set_mode,
+ .set_next_event = qemu_ce_set_next_event,
+ };
+
+ clockevents_config_and_register(ce, NSEC_PER_SEC, 1000, LONG_MAX);
+}
+
+\f
void __init
common_init_rtc(void)
{
- unsigned char x;
+ unsigned char x, sel = 0;
/* Reset periodic interrupt frequency. */
- x = CMOS_READ(RTC_FREQ_SELECT) & 0x3f;
- /* Test includes known working values on various platforms
- where 0x26 is wrong; we refuse to change those. */
- if (x != 0x26 && x != 0x25 && x != 0x19 && x != 0x06) {
- printk("Setting RTC_FREQ to 1024 Hz (%x)\n", x);
- CMOS_WRITE(0x26, RTC_FREQ_SELECT);
+#if CONFIG_HZ == 1024 || CONFIG_HZ == 1200
+ x = CMOS_READ(RTC_FREQ_SELECT) & 0x3f;
+ /* Test includes known working values on various platforms
+ where 0x26 is wrong; we refuse to change those. */
+ if (x != 0x26 && x != 0x25 && x != 0x19 && x != 0x06) {
+ sel = RTC_REF_CLCK_32KHZ + 6;
}
+#elif CONFIG_HZ == 256 || CONFIG_HZ == 128 || CONFIG_HZ == 64 || CONFIG_HZ == 32
+ sel = RTC_REF_CLCK_32KHZ + __builtin_ffs(32768 / CONFIG_HZ);
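+	/* e.g. CONFIG_HZ=128: 32768/128 = 256 and __builtin_ffs(256) = 9,
+	   so the MC146818 periodic rate becomes 32768 / 2^(9-1) = 128 Hz. */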
+#else
+# error "Unknown HZ from arch/alpha/Kconfig"
+#endif
+ if (sel) {
+ printk(KERN_INFO "Setting RTC_FREQ to %d Hz (%x)\n",
+ CONFIG_HZ, sel);
+ CMOS_WRITE(sel, RTC_FREQ_SELECT);
+ }
/* Turn on periodic interrupts. */
x = CMOS_READ(RTC_CONTROL);
init_rtc_irq();
}
-unsigned int common_get_rtc_time(struct rtc_time *time)
-{
- return __get_rtc_time(time);
-}
+\f
+#ifndef CONFIG_ALPHA_WTINT
+/*
+ * The RPCC as a clocksource primitive.
+ *
+ * While we have free-running timecounters running on all CPUs, and we make
+ * a half-hearted attempt in init_rtc_rpcc_info to sync the timecounter
+ * with the wall clock, that initialization isn't kept up-to-date across
+ * different time counters in SMP mode. Therefore we can only use this
+ * method when there's only one CPU enabled.
+ *
+ * When using the WTINT PALcall, the RPCC may shift to a lower frequency,
+ * or stop altogether, while waiting for the interrupt. Therefore we cannot
+ * use this method when WTINT is in use.
+ */
-int common_set_rtc_time(struct rtc_time *time)
+static cycle_t read_rpcc(struct clocksource *cs)
{
- return __set_rtc_time(time);
+ return rpcc();
}
+static struct clocksource clocksource_rpcc = {
+ .name = "rpcc",
+ .rating = 300,
+ .read = read_rpcc,
+ .mask = CLOCKSOURCE_MASK(32),
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS
+};
+#endif /* ALPHA_WTINT */
+
+\f
/* Validate a computed cycle counter result against the known bounds for
the given processor core. There's too much brokenness in the way of
timing hardware for any one method to work everywhere. :-(
return rpcc();
}
-#ifndef CONFIG_SMP
-/* Until and unless we figure out how to get cpu cycle counters
- in sync and keep them there, we can't use the rpcc. */
-static cycle_t read_rpcc(struct clocksource *cs)
-{
- cycle_t ret = (cycle_t)rpcc();
- return ret;
-}
-
-static struct clocksource clocksource_rpcc = {
- .name = "rpcc",
- .rating = 300,
- .read = read_rpcc,
- .mask = CLOCKSOURCE_MASK(32),
- .flags = CLOCK_SOURCE_IS_CONTINUOUS
-};
-
-static inline void register_rpcc_clocksource(long cycle_freq)
-{
- clocksource_register_hz(&clocksource_rpcc, cycle_freq);
-}
-#else /* !CONFIG_SMP */
-static inline void register_rpcc_clocksource(long cycle_freq)
-{
-}
-#endif /* !CONFIG_SMP */
-
void __init
time_init(void)
{
unsigned long cycle_freq, tolerance;
long diff;
+ if (alpha_using_qemu) {
+ clocksource_register_hz(&qemu_cs, NSEC_PER_SEC);
+ init_qemu_clockevent();
+
+ timer_irqaction.handler = qemu_timer_interrupt;
+ init_rtc_irq();
+ return;
+ }
+
/* Calibrate CPU clock -- attempt #1. */
if (!est_cycle_freq)
est_cycle_freq = validate_cc_value(calibrate_cc_with_pit());
"and unable to estimate a proper value!\n");
}
- /* From John Bowman <bowman@math.ualberta.ca>: allow the values
- to settle, as the Update-In-Progress bit going low isn't good
- enough on some hardware. 2ms is our guess; we haven't found
- bogomips yet, but this is close on a 500Mhz box. */
- __delay(1000000);
-
-
- if (HZ > (1<<16)) {
- extern void __you_loose (void);
- __you_loose();
- }
-
- register_rpcc_clocksource(cycle_freq);
-
- state.last_time = cc1;
- state.scaled_ticks_per_cycle
- = ((unsigned long) HZ << FIX_SHIFT) / cycle_freq;
- state.partial_tick = 0L;
+ /* See above for restrictions on using clocksource_rpcc. */
+#ifndef CONFIG_ALPHA_WTINT
+ if (hwrpb->nr_processors == 1)
+ clocksource_register_hz(&clocksource_rpcc, cycle_freq);
+#endif
/* Startup the timer source. */
alpha_mv.init_rtc();
+ init_rtc_clockevent();
}
-/*
- * In order to set the CMOS clock precisely, set_rtc_mmss has to be
- * called 500 ms after the second nowtime has started, because when
- * nowtime is written into the registers of the CMOS clock, it will
- * jump to the next second precisely 500 ms later. Check the Motorola
- * MC146818A or Dallas DS12887 data sheet for details.
- *
- * BUG: This routine does not handle hour overflow properly; it just
- * sets the minutes. Usually you won't notice until after reboot!
- */
-
-
-static int
-set_rtc_mmss(unsigned long nowtime)
+/* Initialize the clock_event_device for secondary cpus. */
+#ifdef CONFIG_SMP
+void __init
+init_clockevent(void)
{
- int retval = 0;
- int real_seconds, real_minutes, cmos_minutes;
- unsigned char save_control, save_freq_select;
-
- /* irq are locally disabled here */
- spin_lock(&rtc_lock);
- /* Tell the clock it's being set */
- save_control = CMOS_READ(RTC_CONTROL);
- CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
-
- /* Stop and reset prescaler */
- save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
- CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
-
- cmos_minutes = CMOS_READ(RTC_MINUTES);
- if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
- cmos_minutes = bcd2bin(cmos_minutes);
-
- /*
- * since we're only adjusting minutes and seconds,
- * don't interfere with hour overflow. This avoids
- * messing with unknown time zones but requires your
- * RTC not to be off by more than 15 minutes
- */
- real_seconds = nowtime % 60;
- real_minutes = nowtime / 60;
- if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1) {
- /* correct for half hour time zone */
- real_minutes += 30;
- }
- real_minutes %= 60;
-
- if (abs(real_minutes - cmos_minutes) < 30) {
- if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
- real_seconds = bin2bcd(real_seconds);
- real_minutes = bin2bcd(real_minutes);
- }
- CMOS_WRITE(real_seconds,RTC_SECONDS);
- CMOS_WRITE(real_minutes,RTC_MINUTES);
- } else {
- printk_once(KERN_NOTICE
- "set_rtc_mmss: can't update from %d to %d\n",
- cmos_minutes, real_minutes);
- retval = -1;
- }
-
- /* The following flags have to be released exactly in this order,
- * otherwise the DS12887 (popular MC146818A clone with integrated
- * battery and quartz) will not reset the oscillator and will not
- * update precisely 500 ms later. You won't find this mentioned in
- * the Dallas Semiconductor data sheets, but who believes data
- * sheets anyway ... -- Markus Kuhn
- */
- CMOS_WRITE(save_control, RTC_CONTROL);
- CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
- spin_unlock(&rtc_lock);
-
- return retval;
+ if (alpha_using_qemu)
+ init_qemu_clockevent();
+ else
+ init_rtc_clockevent();
}
+#endif
(const char *)(data[1] | (long)data[2] << 32),
data[0]);
}
+#ifdef CONFIG_ALPHA_WTINT
+ if (type == 4) {
+ /* If CALL_PAL WTINT is totally unsupported by the
+ PALcode, e.g. MILO, "emulate" it by overwriting
+ the insn. */
+ unsigned int *pinsn
+ = (unsigned int *) regs->pc - 1;
+ if (*pinsn == PAL_wtint) {
+ *pinsn = 0x47e01400; /* mov 0,$0 */
+ imb();
+ regs->r0 = 0;
+ return;
+ }
+ }
+#endif /* ALPHA_WTINT */
die_if_kernel((type == 1 ? "Kernel Bug" : "Instruction fault"),
regs, type, NULL);
}
*dst = word | tmp;
checksum += carry;
}
- if (err) *errp = err;
+ if (err && errp) *errp = err;
return checksum;
}
*dst = word | tmp;
checksum += carry;
}
- if (err) *errp = err;
+ if (err && errp) *errp = err;
return checksum;
}
stq_u(partial_dest | second_dest, dst);
out:
checksum += carry;
- if (err) *errp = err;
+ if (err && errp) *errp = err;
return checksum;
}
stq_u(partial_dest | word | second_dest, dst);
checksum += carry;
}
- if (err) *errp = err;
+ if (err && errp) *errp = err;
return checksum;
}
if (len) {
if (!access_ok(VERIFY_READ, src, len)) {
- *errp = -EFAULT;
+ if (errp) *errp = -EFAULT;
memset(dst, 0, len);
return sum;
}
.set noat
.set noreorder
.text
+ .globl memset
.globl __memset
+ .globl ___memset
.globl __memsetw
.globl __constant_c_memset
- .globl memset
- .ent __memset
+ .ent ___memset
.align 5
-__memset:
+___memset:
.frame $30,0,$26,0
.prologue 0
nop
nop
ret $31,($26),1 # L0 :
- .end __memset
+ .end ___memset
/*
* This is the original body of code, prior to replication and
.end __memsetw
-memset = __memset
+memset = ___memset
+__memset = ___memset
.text
.globl memset
.globl __memset
+ .globl ___memset
.globl __memsetw
.globl __constant_c_memset
- .ent __memset
+
+ .ent ___memset
.align 5
-__memset:
+___memset:
.frame $30,0,$26,0
.prologue 0
end:
ret $31,($26),1 /* E1 */
- .end __memset
+ .end ___memset
.align 5
.ent __memsetw
.end __memsetw
-memset = __memset
+memset = ___memset
+__memset = ___memset
config ARC
def_bool y
+ select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
# ARC Busybox based initramfs absolutely relies on DEVTMPFS for /dev
select DEVTMPFS if !INITRAMFS_SOURCE=""
/******** no-legacy-syscalls-ABI *******/
+/*
+ * Non-typical guard macro to enable inclusion twice in ARCH sys.c
+ * That is how the Generic syscall wrapper generator works
+ */
+#if !defined(_UAPI_ASM_ARC_UNISTD_H) || defined(__SYSCALL)
+#define _UAPI_ASM_ARC_UNISTD_H
+
#define __ARCH_WANT_SYS_EXECVE
#define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_SYS_VFORK
/* Generic syscall (fs/filesystems.c - lost in asm-generic/unistd.h */
#define __NR_sysfs (__NR_arch_specific_syscall + 3)
__SYSCALL(__NR_sysfs, sys_sysfs)
+
+#undef __SYSCALL
+
+#endif
cache_result = (config >> 16) & 0xff;
if (cache_type >= PERF_COUNT_HW_CACHE_MAX)
return -EINVAL;
- if (cache_type >= PERF_COUNT_HW_CACHE_OP_MAX)
+ if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX)
return -EINVAL;
- if (cache_type >= PERF_COUNT_HW_CACHE_RESULT_MAX)
+ if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
return -EINVAL;
ret = arc_pmu_cache_map[cache_type][cache_op][cache_result];
select HARDIRQS_SW_RESEND
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
select HAVE_ARCH_KGDB
- select HAVE_ARCH_SECCOMP_FILTER
+ select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_TRACEHOOK
select HAVE_BPF_JIT
select HAVE_CONTEXT_TRACKING
bool "Architected timer support"
depends on CPU_V7
select ARM_ARCH_TIMER
+ select GENERIC_CLOCKEVENTS
help
This option enables support for the ARM architected timer
config OABI_COMPAT
bool "Allow old ABI binaries to run with this kernel (EXPERIMENTAL)"
depends on AEABI && !THUMB2_KERNEL
- default y
help
This option preserves the old syscall interface along with the
new (ARM EABI) one. It also provides a compatibility layer to
in memory differs between the legacy ABI and the new ARM EABI
(only for non "thumb" binaries). This option adds a tiny
overhead to all syscalls and produces a slightly larger kernel.
+
+ The seccomp filter system will not be available when this is
+ selected, since there is no way yet to sensibly distinguish
+ between calling conventions during filtering.
+
If you know you'll be using only pure EABI user space then you
can say N here. If this option is not selected and you attempt
to execute a legacy ABI binary then the result will be
UNPREDICTABLE (in fact it can be predicted that it won't work
- at all). If in doubt say Y.
+ at all). If in doubt say N.
config ARCH_HAS_HOLES_MEMORYMODEL
bool
/ {
model = "IGEP COM AM335x on AQUILA Expansion";
compatible = "isee,am335x-base0033", "isee,am335x-igep0033", "ti,am33xx";
+
+ hdmi {
+ compatible = "ti,tilcdc,slave";
+ i2c = <&i2c0>;
+ pinctrl-names = "default", "off";
+ pinctrl-0 = <&nxp_hdmi_pins>;
+ pinctrl-1 = <&nxp_hdmi_off_pins>;
+ status = "okay";
+ };
+
+ leds_base {
+ pinctrl-names = "default";
+ pinctrl-0 = <&leds_base_pins>;
+
+ compatible = "gpio-leds";
+
+ led@0 {
+ label = "base:red:user";
+ gpios = <&gpio1 21 GPIO_ACTIVE_HIGH>; /* gpio1_21 */
+ default-state = "off";
+ };
+
+ led@1 {
+ label = "base:green:user";
+ gpios = <&gpio2 0 GPIO_ACTIVE_HIGH>; /* gpio2_0 */
+ default-state = "off";
+ };
+ };
+};
+
+&am33xx_pinmux {
+ nxp_hdmi_pins: pinmux_nxp_hdmi_pins {
+ pinctrl-single,pins = <
+ 0x1b0 (PIN_OUTPUT | MUX_MODE3) /* xdma_event_intr0.clkout1 */
+ 0xa0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data0 */
+ 0xa4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data1 */
+ 0xa8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data2 */
+ 0xac (PIN_OUTPUT | MUX_MODE0) /* lcd_data3 */
+ 0xb0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data4 */
+ 0xb4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data5 */
+ 0xb8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data6 */
+ 0xbc (PIN_OUTPUT | MUX_MODE0) /* lcd_data7 */
+ 0xc0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data8 */
+ 0xc4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data9 */
+ 0xc8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data10 */
+ 0xcc (PIN_OUTPUT | MUX_MODE0) /* lcd_data11 */
+ 0xd0 (PIN_OUTPUT | MUX_MODE0) /* lcd_data12 */
+ 0xd4 (PIN_OUTPUT | MUX_MODE0) /* lcd_data13 */
+ 0xd8 (PIN_OUTPUT | MUX_MODE0) /* lcd_data14 */
+ 0xdc (PIN_OUTPUT | MUX_MODE0) /* lcd_data15 */
+ 0xe0 (PIN_OUTPUT | MUX_MODE0) /* lcd_vsync */
+ 0xe4 (PIN_OUTPUT | MUX_MODE0) /* lcd_hsync */
+ 0xe8 (PIN_OUTPUT | MUX_MODE0) /* lcd_pclk */
+ 0xec (PIN_OUTPUT | MUX_MODE0) /* lcd_ac_bias_en */
+ >;
+ };
+ nxp_hdmi_off_pins: pinmux_nxp_hdmi_off_pins {
+ pinctrl-single,pins = <
+ 0x1b0 (PIN_OUTPUT | MUX_MODE3) /* xdma_event_intr0.clkout1 */
+ >;
+ };
+
+ leds_base_pins: pinmux_leds_base_pins {
+ pinctrl-single,pins = <
+ 0x54 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a5.gpio1_21 */
+ 0x88 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_csn3.gpio2_0 */
+ >;
+ };
+};
+
+&lcdc {
+ status = "okay";
+};
+
+&i2c0 {
+ eeprom: eeprom@50 {
+ compatible = "at,24c256";
+ reg = <0x50>;
+ };
};
pinctrl-0 = <&uart0_pins>;
};
+&usb {
+ status = "okay";
+
+ control@44e10000 {
+ status = "okay";
+ };
+
+ usb-phy@47401300 {
+ status = "okay";
+ };
+
+ usb-phy@47401b00 {
+ status = "okay";
+ };
+
+ usb@47401000 {
+ status = "okay";
+ };
+
+ usb@47401800 {
+ status = "okay";
+ dr_mode = "host";
+ };
+
+ dma-controller@07402000 {
+ status = "okay";
+ };
+};
+
#include "tps65910.dtsi"
&tps {
*/
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "am3517.dtsi"
/ {
- model = "TI AM3517 EVM (AM3517/05)";
- compatible = "ti,am3517-evm", "ti,omap3";
+ model = "TI AM3517 EVM (AM3517/05 TMDSEVM3517)";
+ compatible = "ti,am3517-evm", "ti,am3517", "ti,omap3";
memory {
device_type = "memory";
--- /dev/null
+/*
+ * Device Tree Source for am3517 SoC
+ *
+ * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include "omap3.dtsi"
+
+/ {
+ aliases {
+ serial3 = &uart4;
+ };
+
+ ocp {
+ am35x_otg_hs: am35x_otg_hs@5c040000 {
+ compatible = "ti,omap3-musb";
+ ti,hwmods = "am35x_otg_hs";
+ status = "disabled";
+ reg = <0x5c040000 0x1000>;
+ interrupts = <71>;
+ interrupt-names = "mc";
+ };
+
+		davinci_emac: ethernet@5c000000 {
+ compatible = "ti,am3517-emac";
+ ti,hwmods = "davinci_emac";
+ status = "disabled";
+ reg = <0x5c000000 0x30000>;
+ interrupts = <67 68 69 70>;
+ ti,davinci-ctrl-reg-offset = <0x10000>;
+ ti,davinci-ctrl-mod-reg-offset = <0>;
+ ti,davinci-ctrl-ram-offset = <0x20000>;
+ ti,davinci-ctrl-ram-size = <0x2000>;
+ ti,davinci-rmii-en = /bits/ 8 <1>;
+ local-mac-address = [ 00 00 00 00 00 00 ];
+ };
+
+		davinci_mdio: ethernet@5c030000 {
+ compatible = "ti,davinci_mdio";
+ ti,hwmods = "davinci_mdio";
+ status = "disabled";
+ reg = <0x5c030000 0x1000>;
+ bus_freq = <1000000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ uart4: serial@4809e000 {
+ compatible = "ti,omap3-uart";
+ ti,hwmods = "uart4";
+ status = "disabled";
+ reg = <0x4809e000 0x400>;
+ interrupts = <84>;
+ dmas = <&sdma 55 &sdma 54>;
+ dma-names = "tx", "rx";
+ clock-frequency = <48000000>;
+ };
+ };
+};
spi-max-frequency = <50000000>;
};
};
+ };
- pcie-controller {
+ pcie-controller {
+ status = "okay";
+ /*
+ * The two PCIe units are accessible through
+ * both standard PCIe slots and mini-PCIe
+ * slots on the board.
+ */
+ pcie@1,0 {
+ /* Port 0, Lane 0 */
+ status = "okay";
+ };
+ pcie@2,0 {
+ /* Port 1, Lane 0 */
status = "okay";
- /*
- * The two PCIe units are accessible through
- * both standard PCIe slots and mini-PCIe
- * slots on the board.
- */
- pcie@1,0 {
- /* Port 0, Lane 0 */
- status = "okay";
- };
- pcie@2,0 {
- /* Port 1, Lane 0 */
- status = "okay";
- };
};
};
};
coherency-fabric@20200 {
compatible = "marvell,coherency-fabric";
- reg = <0x20200 0xb0>, <0x21810 0x1c>;
+ reg = <0x20200 0xb0>, <0x21010 0x1c>;
};
serial@12000 {
/*
* MV78230 has 2 PCIe units Gen2.0: One unit can be
* configured as x4 or quad x1 lanes. One unit is
- * x4/x1.
+ * x1 only.
*/
pcie-controller {
compatible = "marvell,armada-xp-pcie";
ranges =
<0x82000000 0 0x40000 MBUS_ID(0xf0, 0x01) 0x40000 0 0x00002000 /* Port 0.0 registers */
- 0x82000000 0 0x42000 MBUS_ID(0xf0, 0x01) 0x42000 0 0x00002000 /* Port 2.0 registers */
0x82000000 0 0x44000 MBUS_ID(0xf0, 0x01) 0x44000 0 0x00002000 /* Port 0.1 registers */
0x82000000 0 0x48000 MBUS_ID(0xf0, 0x01) 0x48000 0 0x00002000 /* Port 0.2 registers */
0x82000000 0 0x4c000 MBUS_ID(0xf0, 0x01) 0x4c000 0 0x00002000 /* Port 0.3 registers */
+ 0x82000000 0 0x80000 MBUS_ID(0xf0, 0x01) 0x80000 0 0x00002000 /* Port 1.0 registers */
0x82000000 0x1 0 MBUS_ID(0x04, 0xe8) 0 1 0 /* Port 0.0 MEM */
0x81000000 0x1 0 MBUS_ID(0x04, 0xe0) 0 1 0 /* Port 0.0 IO */
0x82000000 0x2 0 MBUS_ID(0x04, 0xd8) 0 1 0 /* Port 0.1 MEM */
0x81000000 0x3 0 MBUS_ID(0x04, 0xb0) 0 1 0 /* Port 0.2 IO */
0x82000000 0x4 0 MBUS_ID(0x04, 0x78) 0 1 0 /* Port 0.3 MEM */
0x81000000 0x4 0 MBUS_ID(0x04, 0x70) 0 1 0 /* Port 0.3 IO */
- 0x82000000 0x9 0 MBUS_ID(0x04, 0xf8) 0 1 0 /* Port 2.0 MEM */
- 0x81000000 0x9 0 MBUS_ID(0x04, 0xf0) 0 1 0 /* Port 2.0 IO */>;
+ 0x82000000 0x5 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
+ 0x81000000 0x5 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */>;
pcie@1,0 {
device_type = "pci";
status = "disabled";
};
- pcie@9,0 {
+ pcie@5,0 {
device_type = "pci";
- assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
- reg = <0x4800 0 0 0 0>;
+ assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
+ reg = <0x2800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
- 0x81000000 0 0 0x81000000 0x9 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x5 0 1 0
+ 0x81000000 0 0 0x81000000 0x5 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
- interrupt-map = <0 0 0 0 &mpic 99>;
- marvell,pcie-port = <2>;
+ interrupt-map = <0 0 0 0 &mpic 62>;
+ marvell,pcie-port = <1>;
marvell,pcie-lane = <0>;
- clocks = <&gateclk 26>;
+ clocks = <&gateclk 9>;
status = "disabled";
};
};
/*
* MV78260 has 3 PCIe units Gen2.0: Two units can be
* configured as x4 or quad x1 lanes. One unit is
- * x4/x1.
+ * x4 only.
*/
pcie-controller {
compatible = "marvell,armada-xp-pcie";
0x82000000 0 0x48000 MBUS_ID(0xf0, 0x01) 0x48000 0 0x00002000 /* Port 0.2 registers */
0x82000000 0 0x4c000 MBUS_ID(0xf0, 0x01) 0x4c000 0 0x00002000 /* Port 0.3 registers */
0x82000000 0 0x80000 MBUS_ID(0xf0, 0x01) 0x80000 0 0x00002000 /* Port 1.0 registers */
- 0x82000000 0 0x82000 MBUS_ID(0xf0, 0x01) 0x82000 0 0x00002000 /* Port 3.0 registers */
+ 0x82000000 0 0x84000 MBUS_ID(0xf0, 0x01) 0x84000 0 0x00002000 /* Port 1.1 registers */
+ 0x82000000 0 0x88000 MBUS_ID(0xf0, 0x01) 0x88000 0 0x00002000 /* Port 1.2 registers */
+ 0x82000000 0 0x8c000 MBUS_ID(0xf0, 0x01) 0x8c000 0 0x00002000 /* Port 1.3 registers */
0x82000000 0x1 0 MBUS_ID(0x04, 0xe8) 0 1 0 /* Port 0.0 MEM */
0x81000000 0x1 0 MBUS_ID(0x04, 0xe0) 0 1 0 /* Port 0.0 IO */
0x82000000 0x2 0 MBUS_ID(0x04, 0xd8) 0 1 0 /* Port 0.1 MEM */
0x81000000 0x3 0 MBUS_ID(0x04, 0xb0) 0 1 0 /* Port 0.2 IO */
0x82000000 0x4 0 MBUS_ID(0x04, 0x78) 0 1 0 /* Port 0.3 MEM */
0x81000000 0x4 0 MBUS_ID(0x04, 0x70) 0 1 0 /* Port 0.3 IO */
- 0x82000000 0x9 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
- 0x81000000 0x9 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */
- 0x82000000 0xa 0 MBUS_ID(0x08, 0xf8) 0 1 0 /* Port 3.0 MEM */
- 0x81000000 0xa 0 MBUS_ID(0x08, 0xf0) 0 1 0 /* Port 3.0 IO */>;
+
+ 0x82000000 0x5 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
+ 0x81000000 0x5 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */
+ 0x82000000 0x6 0 MBUS_ID(0x08, 0xd8) 0 1 0 /* Port 1.1 MEM */
+ 0x81000000 0x6 0 MBUS_ID(0x08, 0xd0) 0 1 0 /* Port 1.1 IO */
+ 0x82000000 0x7 0 MBUS_ID(0x08, 0xb8) 0 1 0 /* Port 1.2 MEM */
+ 0x81000000 0x7 0 MBUS_ID(0x08, 0xb0) 0 1 0 /* Port 1.2 IO */
+ 0x82000000 0x8 0 MBUS_ID(0x08, 0x78) 0 1 0 /* Port 1.3 MEM */
+ 0x81000000 0x8 0 MBUS_ID(0x08, 0x70) 0 1 0 /* Port 1.3 IO */
+
+ 0x82000000 0x9 0 MBUS_ID(0x04, 0xf8) 0 1 0 /* Port 2.0 MEM */
+ 0x81000000 0x9 0 MBUS_ID(0x04, 0xf0) 0 1 0 /* Port 2.0 IO */>;
pcie@1,0 {
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0x2 0 1 0
- 0x81000000 0 0 0x81000000 0x2 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x2 0 1 0
+ 0x81000000 0 0 0x81000000 0x2 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &mpic 59>;
marvell,pcie-port = <0>;
status = "disabled";
};
- pcie@9,0 {
+ pcie@5,0 {
device_type = "pci";
- assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
- reg = <0x4800 0 0 0 0>;
+ assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
+ reg = <0x2800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
- 0x81000000 0 0 0x81000000 0x9 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x5 0 1 0
+ 0x81000000 0 0 0x81000000 0x5 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
- interrupt-map = <0 0 0 0 &mpic 99>;
- marvell,pcie-port = <2>;
+ interrupt-map = <0 0 0 0 &mpic 62>;
+ marvell,pcie-port = <1>;
marvell,pcie-lane = <0>;
- clocks = <&gateclk 26>;
+ clocks = <&gateclk 9>;
status = "disabled";
};
- pcie@10,0 {
+ pcie@6,0 {
device_type = "pci";
- assigned-addresses = <0x82000800 0 0x82000 0 0x2000>;
- reg = <0x5000 0 0 0 0>;
+ assigned-addresses = <0x82000800 0 0x84000 0 0x2000>;
+ reg = <0x3000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
- ranges = <0x82000000 0 0 0x82000000 0xa 0 1 0
- 0x81000000 0 0 0x81000000 0xa 0 1 0>;
+ ranges = <0x82000000 0 0 0x82000000 0x6 0 1 0
+ 0x81000000 0 0 0x81000000 0x6 0 1 0>;
interrupt-map-mask = <0 0 0 0>;
- interrupt-map = <0 0 0 0 &mpic 103>;
- marvell,pcie-port = <3>;
+ interrupt-map = <0 0 0 0 &mpic 63>;
+ marvell,pcie-port = <1>;
+ marvell,pcie-lane = <1>;
+ clocks = <&gateclk 10>;
+ status = "disabled";
+ };
+
+ pcie@7,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x88000 0 0x2000>;
+ reg = <0x3800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ ranges = <0x82000000 0 0 0x82000000 0x7 0 1 0
+ 0x81000000 0 0 0x81000000 0x7 0 1 0>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &mpic 64>;
+ marvell,pcie-port = <1>;
+ marvell,pcie-lane = <2>;
+ clocks = <&gateclk 11>;
+ status = "disabled";
+ };
+
+ pcie@8,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x8c000 0 0x2000>;
+ reg = <0x4000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ ranges = <0x82000000 0 0 0x82000000 0x8 0 1 0
+ 0x81000000 0 0 0x81000000 0x8 0 1 0>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &mpic 65>;
+ marvell,pcie-port = <1>;
+ marvell,pcie-lane = <3>;
+ clocks = <&gateclk 12>;
+ status = "disabled";
+ };
+
+ pcie@9,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
+ reg = <0x4800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
+ 0x81000000 0 0 0x81000000 0x9 0 1 0>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &mpic 99>;
+ marvell,pcie-port = <2>;
marvell,pcie-lane = <0>;
- clocks = <&gateclk 27>;
+ clocks = <&gateclk 26>;
status = "disabled";
};
};
#include <dt-bindings/interrupt-controller/irq.h>
/ {
+ aliases {
+ serial4 = &usart3;
+ };
+
ahb {
apb {
pinctrl@fffff400 {
reg = <0x7e205000 0x1000>;
interrupts = <2 21>;
clocks = <&clk_i2c>;
+ #address-cells = <1>;
+ #size-cells = <0>;
status = "disabled";
};
reg = <0x7e804000 0x1000>;
interrupts = <2 21>;
clocks = <&clk_i2c>;
+ #address-cells = <1>;
+ #size-cells = <0>;
status = "disabled";
};
i2c2_bus: i2c2-bus {
samsung,pin-pud = <0>;
};
+
+ max77686_irq: max77686-irq {
+ samsung,pins = "gpx3-2";
+ samsung,pin-function = <0>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <0>;
+ };
};
i2c@12C60000 {
max77686@09 {
compatible = "maxim,max77686";
+ interrupt-parent = <&gpx3>;
+ interrupts = <2 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&max77686_irq>;
+ wakeup-source;
reg = <0x09>;
voltage-regulators {
clocks = <&clks 197>, <&clks 3>,
<&clks 197>, <&clks 107>,
<&clks 0>, <&clks 118>,
- <&clks 62>, <&clks 139>,
+ <&clks 0>, <&clks 139>,
<&clks 0>;
clock-names = "core", "rxtx0",
"rxtx1", "rxtx2",
gpmc,wr-access-ns = <186>;
gpmc,cycle2cycle-samecsen;
gpmc,cycle2cycle-diffcsen;
- vmmc-supply = <&vddvario>;
- vmmc_aux-supply = <&vdd33a>;
+ vddvario-supply = <&vddvario>;
+ vdd33a-supply = <&vdd33a>;
reg-io-width = <4>;
smsc,save-mac-address;
};
* they probably share the same GPIO IRQ
* REVISIT: Add timing support from slls644g.pdf
*/
- 8250@3,0 {
+ uart@3,0 {
compatible = "ns16550a";
reg = <3 0 0x100>;
bank-width = <2>;
*/
#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/pinctrl/omap.h>
#include "skeleton.dtsi"
serial0 = &uart1;
serial1 = &uart2;
serial2 = &uart3;
+ i2c0 = &i2c1;
+ i2c1 = &i2c2;
};
cpus {
ranges;
ti,hwmods = "l3_main";
+ aes: aes@480a6000 {
+ compatible = "ti,omap2-aes";
+ ti,hwmods = "aes";
+ reg = <0x480a6000 0x50>;
+ dmas = <&sdma 9 &sdma 10>;
+ dma-names = "tx", "rx";
+ };
+
+ hdq1w: 1w@480b2000 {
+ compatible = "ti,omap2420-1w";
+ ti,hwmods = "hdq1w";
+ reg = <0x480b2000 0x1000>;
+ interrupts = <58>;
+ };
+
+ mailbox: mailbox@48094000 {
+ compatible = "ti,omap2-mailbox";
+ ti,hwmods = "mailbox";
+ reg = <0x48094000 0x200>;
+ interrupts = <26>;
+ };
+
intc: interrupt-controller@1 {
compatible = "ti,omap2-intc";
interrupt-controller;
sdma: dma-controller@48056000 {
compatible = "ti,omap2430-sdma", "ti,omap2420-sdma";
+ ti,hwmods = "dma";
reg = <0x48056000 0x1000>;
interrupts = <12>,
<13>,
#dma-requests = <64>;
};
+ i2c1: i2c@48070000 {
+ compatible = "ti,omap2-i2c";
+ ti,hwmods = "i2c1";
+ reg = <0x48070000 0x80>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupts = <56>;
+ dmas = <&sdma 27 &sdma 28>;
+ dma-names = "tx", "rx";
+ };
+
+ i2c2: i2c@48072000 {
+ compatible = "ti,omap2-i2c";
+ ti,hwmods = "i2c2";
+ reg = <0x48072000 0x80>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupts = <57>;
+ dmas = <&sdma 29 &sdma 30>;
+ dma-names = "tx", "rx";
+ };
+
+ mcspi1: mcspi@48098000 {
+ compatible = "ti,omap2-mcspi";
+ ti,hwmods = "mcspi1";
+ reg = <0x48098000 0x100>;
+ interrupts = <65>;
+ dmas = <&sdma 35 &sdma 36 &sdma 37 &sdma 38
+ &sdma 39 &sdma 40 &sdma 41 &sdma 42>;
+ dma-names = "tx0", "rx0", "tx1", "rx1",
+ "tx2", "rx2", "tx3", "rx3";
+ };
+
+ mcspi2: mcspi@4809a000 {
+ compatible = "ti,omap2-mcspi";
+ ti,hwmods = "mcspi2";
+ reg = <0x4809a000 0x100>;
+ interrupts = <66>;
+ dmas = <&sdma 43 &sdma 44 &sdma 45 &sdma 46>;
+ dma-names = "tx0", "rx0", "tx1", "rx1";
+ };
+
+ rng: rng@480a0000 {
+ compatible = "ti,omap2-rng";
+ ti,hwmods = "rng";
+ reg = <0x480a0000 0x50>;
+ interrupts = <36>;
+ };
+
+ sham: sham@480a4000 {
+ compatible = "ti,omap2-sham";
+ ti,hwmods = "sham";
+ reg = <0x480a4000 0x64>;
+ interrupts = <51>;
+ dmas = <&sdma 13>;
+ dma-names = "rx";
+ };
+
uart1: serial@4806a000 {
compatible = "ti,omap2-uart";
ti,hwmods = "uart1";
+ reg = <0x4806a000 0x2000>;
+ interrupts = <72>;
+ dmas = <&sdma 49 &sdma 50>;
+ dma-names = "tx", "rx";
clock-frequency = <48000000>;
};
uart2: serial@4806c000 {
compatible = "ti,omap2-uart";
ti,hwmods = "uart2";
+ reg = <0x4806c000 0x400>;
+ interrupts = <73>;
+ dmas = <&sdma 51 &sdma 52>;
+ dma-names = "tx", "rx";
clock-frequency = <48000000>;
};
uart3: serial@4806e000 {
compatible = "ti,omap2-uart";
ti,hwmods = "uart3";
+ reg = <0x4806e000 0x400>;
+ interrupts = <74>;
+ dmas = <&sdma 53 &sdma 54>;
+ dma-names = "tx", "rx";
clock-frequency = <48000000>;
};
dma-names = "tx", "rx";
};
+ msdi1: mmc@4809c000 {
+ compatible = "ti,omap2420-mmc";
+ ti,hwmods = "msdi1";
+ reg = <0x4809c000 0x80>;
+ interrupts = <83>;
+ dmas = <&sdma 61 &sdma 62>;
+ dma-names = "tx", "rx";
+ };
+
timer1: timer@48028000 {
compatible = "ti,omap2420-timer";
reg = <0x48028000 0x400>;
ti,hwmods = "timer1";
ti,timer-alwon;
};
+
+ wd_timer2: wdt@48022000 {
+ compatible = "ti,omap2-wdt";
+ ti,hwmods = "wd_timer2";
+ reg = <0x48022000 0x80>;
+ };
};
};
+
+&i2c1 {
+ compatible = "ti,omap2420-i2c";
+};
+
+&i2c2 {
+ compatible = "ti,omap2420-i2c";
+};
dma-names = "tx", "rx";
};
+ mmc1: mmc@4809c000 {
+ compatible = "ti,omap2-hsmmc";
+ reg = <0x4809c000 0x200>;
+ interrupts = <83>;
+ ti,hwmods = "mmc1";
+ ti,dual-volt;
+ dmas = <&sdma 61>, <&sdma 62>;
+ dma-names = "tx", "rx";
+ };
+
+ mmc2: mmc@480b4000 {
+ compatible = "ti,omap2-hsmmc";
+ reg = <0x480b4000 0x200>;
+ interrupts = <86>;
+ ti,hwmods = "mmc2";
+ dmas = <&sdma 47>, <&sdma 48>;
+ dma-names = "tx", "rx";
+ };
+
timer1: timer@49018000 {
compatible = "ti,omap2420-timer";
reg = <0x49018000 0x400>;
ti,hwmods = "timer1";
ti,timer-alwon;
};
+
+ mcspi3: mcspi@480b8000 {
+ compatible = "ti,omap2-mcspi";
+ ti,hwmods = "mcspi3";
+ reg = <0x480b8000 0x100>;
+ interrupts = <91>;
+ dmas = <&sdma 15 &sdma 16 &sdma 23 &sdma 24>;
+ dma-names = "tx0", "rx0", "tx1", "rx1";
+ };
+
+ usb_otg_hs: usb_otg_hs@480ac000 {
+ compatible = "ti,omap2-musb";
+ ti,hwmods = "usb_otg_hs";
+ reg = <0x480ac000 0x1000>;
+ interrupts = <93>;
+ };
+
+ wd_timer2: wdt@49016000 {
+ compatible = "ti,omap2-wdt";
+ ti,hwmods = "wd_timer2";
+ reg = <0x49016000 0x80>;
+ };
};
};
+
+&i2c1 {
+ compatible = "ti,omap2430-i2c";
+};
+
+&i2c2 {
+ compatible = "ti,omap2430-i2c";
+};
&usbhsehci {
phys = <0 &hsusb2_phy>;
};
+
+&vaux2 {
+ regulator-name = "usb_1v8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+};
vcc-supply = <&hsusb2_power>;
};
+ sound {
+ compatible = "ti,omap-twl4030";
+ ti,model = "omap3beagle";
+
+ ti,mcbsp = <&mcbsp2>;
+ ti,codec = <&twl_audio>;
+ };
+
gpio_keys {
compatible = "gpio-keys";
reg = <0x48>;
interrupts = <7>; /* SYS_NIRQ cascaded to intc */
interrupt-parent = <&intc>;
+
+ twl_audio: audio {
+ compatible = "ti,twl4030-audio";
+ codec {
+ };
+ };
};
};
mode = <3>;
power = <50>;
};
+
+&vaux2 {
+ regulator-name = "vdd_ehci";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+};
/*
- * Device Tree Source for IGEP Technology devices
+ * Common device tree for IGEP boards based on AM/DM37x
*
* Copyright (C) 2012 Javier Martinez Canillas <javier@collabora.co.uk>
* Copyright (C) 2012 Enric Balletbo i Serra <eballetbo@gmail.com>
*/
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "omap36xx.dtsi"
/ {
memory {
ti,mcbsp = <&mcbsp2>;
ti,codec = <&twl_audio>;
};
+
+ vdd33: regulator-vdd33 {
+ compatible = "regulator-fixed";
+ regulator-name = "vdd33";
+ regulator-always-on;
+ };
+
+ lbee1usjyc_vmmc: lbee1usjyc_vmmc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lbee1usjyc_pins>;
+ compatible = "regulator-fixed";
+ regulator-name = "regulator-lbee1usjyc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio5 10 GPIO_ACTIVE_HIGH>; /* gpio_138 WIFI_PDN */
+ startup-delay-us = <10000>;
+ enable-active-high;
+ vin-supply = <&vdd33>;
+ };
};
&omap3_pmx_core {
>;
};
+ /* WiFi/BT combo */
+ lbee1usjyc_pins: pinmux_lbee1usjyc_pins {
+ pinctrl-single,pins = <
+ 0x136 (PIN_OUTPUT | MUX_MODE4) /* sdmmc2_dat5.gpio_137 */
+ 0x138 (PIN_OUTPUT | MUX_MODE4) /* sdmmc2_dat6.gpio_138 */
+ 0x13a (PIN_OUTPUT | MUX_MODE4) /* sdmmc2_dat7.gpio_139 */
+ >;
+ };
+
mcbsp2_pins: pinmux_mcbsp2_pins {
pinctrl-single,pins = <
0x10c (PIN_INPUT | MUX_MODE0) /* mcbsp2_fsx.mcbsp2_fsx */
0x11a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_dat1.sdmmc1_dat1 */
0x11c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_dat2.sdmmc1_dat2 */
0x11e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_dat3.sdmmc1_dat3 */
- 0x120 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat4.sdmmc1_dat4 */
- 0x122 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat5.sdmmc1_dat5 */
- 0x124 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat6.sdmmc1_dat6 */
- 0x126 (PIN_INPUT | MUX_MODE0) /* sdmmc1_dat7.sdmmc1_dat7 */
+ >;
+ };
+
+ mmc2_pins: pinmux_mmc2_pins {
+ pinctrl-single,pins = <
+ 0x128 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_clk.sdmmc2_clk */
+ 0x12a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_cmd.sdmmc2_cmd */
+ 0x12c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat0.sdmmc2_dat0 */
+ 0x12e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat1.sdmmc2_dat1 */
+ 0x130 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat2.sdmmc2_dat2 */
+ 0x132 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat3.sdmmc2_dat3 */
>;
};
>;
};
+ i2c1_pins: pinmux_i2c1_pins {
+ pinctrl-single,pins = <
+ 0x18a (PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
+ 0x18c (PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */
+ >;
+ };
+
+ i2c2_pins: pinmux_i2c2_pins {
+ pinctrl-single,pins = <
+ 0x18e (PIN_INPUT | MUX_MODE0) /* i2c2_scl.i2c2_scl */
+ 0x190 (PIN_INPUT | MUX_MODE0) /* i2c2_sda.i2c2_sda */
+ >;
+ };
+
+ i2c3_pins: pinmux_i2c3_pins {
+ pinctrl-single,pins = <
+ 0x192 (PIN_INPUT | MUX_MODE0) /* i2c3_scl.i2c3_scl */
+ 0x194 (PIN_INPUT | MUX_MODE0) /* i2c3_sda.i2c3_sda */
+ >;
+ };
+
leds_pins: pinmux_leds_pins { };
};
&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins>;
clock-frequency = <2600000>;
twl: twl@48 {
#include "twl4030_omap3.dtsi"
&i2c2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins>;
clock-frequency = <400000>;
};
+&i2c3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c3_pins>;
+};
+
&mcbsp2 {
pinctrl-names = "default";
pinctrl-0 = <&mcbsp2_pins>;
pinctrl-0 = <&mmc1_pins>;
vmmc-supply = <&vmmc1>;
vmmc_aux-supply = <&vsim>;
- bus-width = <8>;
+ bus-width = <4>;
};
&mmc2 {
- status = "disabled";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins>;
+ vmmc-supply = <&lbee1usjyc_vmmc>;
+ bus-width = <4>;
+ non-removable;
};
&mmc3 {
/*
- * Device Tree Source for IGEPv2 board
+ * Device Tree Source for IGEPv2 Rev. (TI OMAP AM/DM37x)
*
* Copyright (C) 2012 Javier Martinez Canillas <javier@collabora.co.uk>
* Copyright (C) 2012 Enric Balletbo i Serra <eballetbo@gmail.com>
#include "omap-gpmc-smsc911x.dtsi"
/ {
- model = "IGEPv2";
+ model = "IGEPv2 (TI OMAP AM/DM37x)";
compatible = "isee,omap3-igep0020", "ti,omap3";
leds {
pinctrl-names = "default";
pinctrl-0 = <
&hsusbb1_pins
+ &tfp410_pins
+ &dss_pins
>;
hsusbb1_pins: pinmux_hsusbb1_pins {
0x5ba (PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d7.hsusb1_data3 */
>;
};
+
+ tfp410_pins: tfp410_dvi_pins {
+ pinctrl-single,pins = <
+ 0x196 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
+ >;
+ };
+
+ dss_pins: pinmux_dss_dvi_pins {
+ pinctrl-single,pins = <
+ 0x0a4 (PIN_OUTPUT | MUX_MODE0) /* dss_pclk.dss_pclk */
+ 0x0a6 (PIN_OUTPUT | MUX_MODE0) /* dss_hsync.dss_hsync */
+ 0x0a8 (PIN_OUTPUT | MUX_MODE0) /* dss_vsync.dss_vsync */
+ 0x0aa (PIN_OUTPUT | MUX_MODE0) /* dss_acbias.dss_acbias */
+ 0x0ac (PIN_OUTPUT | MUX_MODE0) /* dss_data0.dss_data0 */
+ 0x0ae (PIN_OUTPUT | MUX_MODE0) /* dss_data1.dss_data1 */
+ 0x0b0 (PIN_OUTPUT | MUX_MODE0) /* dss_data2.dss_data2 */
+ 0x0b2 (PIN_OUTPUT | MUX_MODE0) /* dss_data3.dss_data3 */
+ 0x0b4 (PIN_OUTPUT | MUX_MODE0) /* dss_data4.dss_data4 */
+ 0x0b6 (PIN_OUTPUT | MUX_MODE0) /* dss_data5.dss_data5 */
+ 0x0b8 (PIN_OUTPUT | MUX_MODE0) /* dss_data6.dss_data6 */
+ 0x0ba (PIN_OUTPUT | MUX_MODE0) /* dss_data7.dss_data7 */
+ 0x0bc (PIN_OUTPUT | MUX_MODE0) /* dss_data8.dss_data8 */
+ 0x0be (PIN_OUTPUT | MUX_MODE0) /* dss_data9.dss_data9 */
+ 0x0c0 (PIN_OUTPUT | MUX_MODE0) /* dss_data10.dss_data10 */
+ 0x0c2 (PIN_OUTPUT | MUX_MODE0) /* dss_data11.dss_data11 */
+ 0x0c4 (PIN_OUTPUT | MUX_MODE0) /* dss_data12.dss_data12 */
+ 0x0c6 (PIN_OUTPUT | MUX_MODE0) /* dss_data13.dss_data13 */
+ 0x0c8 (PIN_OUTPUT | MUX_MODE0) /* dss_data14.dss_data14 */
+ 0x0ca (PIN_OUTPUT | MUX_MODE0) /* dss_data15.dss_data15 */
+ 0x0cc (PIN_OUTPUT | MUX_MODE0) /* dss_data16.dss_data16 */
+ 0x0ce (PIN_OUTPUT | MUX_MODE0) /* dss_data17.dss_data17 */
+ 0x0d0 (PIN_OUTPUT | MUX_MODE0) /* dss_data18.dss_data18 */
+ 0x0d2 (PIN_OUTPUT | MUX_MODE0) /* dss_data19.dss_data19 */
+ 0x0d4 (PIN_OUTPUT | MUX_MODE0) /* dss_data20.dss_data20 */
+ 0x0d6 (PIN_OUTPUT | MUX_MODE0) /* dss_data21.dss_data21 */
+ 0x0d8 (PIN_OUTPUT | MUX_MODE0) /* dss_data22.dss_data22 */
+ 0x0da (PIN_OUTPUT | MUX_MODE0) /* dss_data23.dss_data23 */
+ >;
+ };
};
&leds_pins {
&usbhsehci {
phys = <&hsusb1_phy>;
};
+
+&vpll2 {
+ /* Needed for DSS */
+ regulator-name = "vdds_dsi";
+};
/*
- * Device Tree Source for IGEP COM Module
+ * Device Tree Source for IGEP COM MODULE (TI OMAP AM/DM37x)
*
* Copyright (C) 2012 Javier Martinez Canillas <javier@collabora.co.uk>
* Copyright (C) 2012 Enric Balletbo i Serra <eballetbo@gmail.com>
#include "omap3-igep.dtsi"
/ {
- model = "IGEP COM Module";
+ model = "IGEP COM MODULE (TI OMAP AM/DM37x)";
compatible = "isee,omap3-igep0030", "ti,omap3";
leds {
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "omap34xx-hs.dtsi"
/ {
model = "Nokia N900";
>;
};
+ mmc2_pins: pinmux_mmc2_pins {
+ pinctrl-single,pins = <
+ 0x128 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_clk */
+ 0x12a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_cmd */
+ 0x12c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat0 */
+ 0x12e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat1 */
+ 0x130 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat2 */
+ 0x132 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat3 */
+ 0x134 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat4 */
+ 0x136 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat5 */
+ 0x138 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat6 */
+ 0x13a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat7 */
+ >;
+ };
+
display_pins: pinmux_display_pins {
pinctrl-single,pins = <
0x0d4 (PIN_OUTPUT | MUX_MODE4) /* RX51_LCD_RESET_GPIO */
cd-gpios = <&gpio6 0 GPIO_ACTIVE_HIGH>; /* 160 */
};
+/* Most boards use vaux3; only some old versions use vmmc2 instead. */
&mmc2 {
- status = "disabled";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins>;
+ vmmc-supply = <&vaux3>;
+ vmmc_aux-supply = <&vsim>;
+ bus-width = <8>;
+ non-removable;
};
&mmc3 {
* published by the Free Software Foundation.
*/
-#include "omap36xx.dtsi"
+#include "omap36xx-hs.dtsi"
/ {
cpus {
ranges;
ti,hwmods = "l3_main";
+ aes: aes@480c5000 {
+ compatible = "ti,omap3-aes";
+ ti,hwmods = "aes";
+ reg = <0x480c5000 0x50>;
+ interrupts = <0>;
+ };
+
counter32k: counter@48320000 {
compatible = "ti,omap-counter32k";
reg = <0x48320000 0x20>;
ti,hwmods = "i2c3";
};
+ mailbox: mailbox@48094000 {
+ compatible = "ti,omap3-mailbox";
+ ti,hwmods = "mailbox";
+ reg = <0x48094000 0x200>;
+ interrupts = <26>;
+ };
+
mcspi1: spi@48098000 {
compatible = "ti,omap2-mcspi";
reg = <0x48098000 0x100>;
dma-names = "tx", "rx";
};
+ mmu_isp: mmu@480bd400 {
+ compatible = "ti,omap3-mmu-isp";
+ ti,hwmods = "mmu_isp";
+ reg = <0x480bd400 0x80>;
+ interrupts = <8>;
+ };
+
wdt2: wdt@48314000 {
compatible = "ti,omap3-wdt";
reg = <0x48314000 0x80>;
dma-names = "tx", "rx";
};
+ sham: sham@480c3000 {
+ compatible = "ti,omap3-sham";
+ ti,hwmods = "sham";
+ reg = <0x480c3000 0x64>;
+ interrupts = <49>;
+ };
+
+ smartreflex_core: smartreflex@480cb000 {
+ compatible = "ti,omap3-smartreflex-core";
+ ti,hwmods = "smartreflex_core";
+ reg = <0x480cb000 0x400>;
+ interrupts = <19>;
+ };
+
+ smartreflex_mpu_iva: smartreflex@480c9000 {
+ compatible = "ti,omap3-smartreflex-iva";
+ ti,hwmods = "smartreflex_mpu_iva";
+ reg = <0x480c9000 0x400>;
+ interrupts = <18>;
+ };
+
timer1: timer@48318000 {
compatible = "ti,omap3430-timer";
reg = <0x48318000 0x400>;
--- /dev/null
+/* Disabled modules for secure omaps */
+
+#include "omap34xx.dtsi"
+
+/* Secure omaps have some devices inaccessible depending on the firmware */
+&aes {
+ status = "disabled";
+};
+
+&sham {
+ status = "disabled";
+};
+
+&timer12 {
+ status = "disabled";
+};
--- /dev/null
+/* Disabled modules for secure omaps */
+
+#include "omap36xx.dtsi"
+
+/* Secure omaps have some devices inaccessible depending on the firmware */
+&aes {
+ status = "disabled";
+};
+
+&sham {
+ status = "disabled";
+};
+
+&timer12 {
+ status = "disabled";
+};
0xf0 (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */
>;
};
-};
-
-&omap4_pmx_wkup {
- led_wkgpio_pins: pinmux_leds_wkpins {
- pinctrl-single,pins = <
- 0x1a (PIN_OUTPUT | MUX_MODE3) /* gpio_wk7 */
- 0x1c (PIN_OUTPUT | MUX_MODE3) /* gpio_wk8 */
- >;
- };
/*
* wl12xx GPIO outputs for WLAN_EN, BT_EN, FM_EN, BT_WAKEUP
pinctrl-single,pins = <
0x38 (PIN_INPUT | MUX_MODE3) /* gpmc_ncs2.gpio_52 */
0x3a (PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */
- 0x108 (PIN_OUTPUT | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
+ 0x108 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
0x10a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_cmd.sdmmc5_cmd */
0x10c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat0.sdmmc5_dat0 */
0x10e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat1.sdmmc5_dat1 */
};
};
+&omap4_pmx_wkup {
+ led_wkgpio_pins: pinmux_leds_wkpins {
+ pinctrl-single,pins = <
+ 0x1a (PIN_OUTPUT | MUX_MODE3) /* gpio_wk7 */
+ 0x1c (PIN_OUTPUT | MUX_MODE3) /* gpio_wk8 */
+ >;
+ };
+};
+
&i2c1 {
pinctrl-names = "default";
pinctrl-0 = <&i2c1_pins>;
wl12xx_pins: pinmux_wl12xx_pins {
pinctrl-single,pins = <
0x3a (PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */
- 0x108 (PIN_OUTPUT | MUX_MODE3) /* sdmmc5_clk.sdmmc5_clk */
- 0x10a (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_cmd.sdmmc5_cmd */
- 0x10c (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat0.sdmmc5_dat0 */
- 0x10e (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat1.sdmmc5_dat1 */
- 0x110 (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat2.sdmmc5_dat2 */
- 0x112 (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat3.sdmmc5_dat3 */
+ 0x108 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
+ 0x10a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_cmd.sdmmc5_cmd */
+ 0x10c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat0.sdmmc5_dat0 */
+ 0x10e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat1.sdmmc5_dat1 */
+ 0x110 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat2.sdmmc5_dat2 */
+ 0x112 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat3.sdmmc5_dat3 */
>;
};
};
interrupts = <1 9 0xf04>;
};
- gpio0: gpio@ffc40000 {
+ gpio0: gpio@e6050000 {
compatible = "renesas,gpio-r8a7790", "renesas,gpio-rcar";
- reg = <0 0xffc40000 0 0x2c>;
+ reg = <0 0xe6050000 0 0x50>;
interrupt-parent = <&gic>;
interrupts = <0 4 0x4>;
#gpio-cells = <2>;
interrupt-controller;
};
- gpio1: gpio@ffc41000 {
+ gpio1: gpio@e6051000 {
compatible = "renesas,gpio-r8a7790", "renesas,gpio-rcar";
- reg = <0 0xffc41000 0 0x2c>;
+ reg = <0 0xe6051000 0 0x50>;
interrupt-parent = <&gic>;
interrupts = <0 5 0x4>;
#gpio-cells = <2>;
interrupt-controller;
};
- gpio2: gpio@ffc42000 {
+ gpio2: gpio@e6052000 {
compatible = "renesas,gpio-r8a7790", "renesas,gpio-rcar";
- reg = <0 0xffc42000 0 0x2c>;
+ reg = <0 0xe6052000 0 0x50>;
interrupt-parent = <&gic>;
interrupts = <0 6 0x4>;
#gpio-cells = <2>;
interrupt-controller;
};
- gpio3: gpio@ffc43000 {
+ gpio3: gpio@e6053000 {
compatible = "renesas,gpio-r8a7790", "renesas,gpio-rcar";
- reg = <0 0xffc43000 0 0x2c>;
+ reg = <0 0xe6053000 0 0x50>;
interrupt-parent = <&gic>;
interrupts = <0 7 0x4>;
#gpio-cells = <2>;
interrupt-controller;
};
- gpio4: gpio@ffc44000 {
+ gpio4: gpio@e6054000 {
compatible = "renesas,gpio-r8a7790", "renesas,gpio-rcar";
- reg = <0 0xffc44000 0 0x2c>;
+ reg = <0 0xe6054000 0 0x50>;
interrupt-parent = <&gic>;
interrupts = <0 8 0x4>;
#gpio-cells = <2>;
interrupt-controller;
};
- gpio5: gpio@ffc45000 {
+ gpio5: gpio@e6055000 {
compatible = "renesas,gpio-r8a7790", "renesas,gpio-rcar";
- reg = <0 0xffc45000 0 0x2c>;
+ reg = <0 0xe6055000 0 0x50>;
interrupt-parent = <&gic>;
interrupts = <0 9 0x4>;
#gpio-cells = <2>;
sdhi0: sdhi@ee100000 {
compatible = "renesas,sdhi-r8a7790";
- reg = <0 0xee100000 0 0x100>;
+ reg = <0 0xee100000 0 0x200>;
interrupt-parent = <&gic>;
interrupts = <0 165 4>;
cap-sd-highspeed;
sdhi1: sdhi@ee120000 {
compatible = "renesas,sdhi-r8a7790";
- reg = <0 0xee120000 0 0x100>;
+ reg = <0 0xee120000 0 0x200>;
interrupt-parent = <&gic>;
interrupts = <0 166 4>;
cap-sd-highspeed;
mpu_periph_clk: mpu_periph_clk {
#clock-cells = <0>;
- compatible = "altr,socfpga-gate-clk";
+ compatible = "altr,socfpga-perip-clk";
clocks = <&mpuclk>;
fixed-divider = <4>;
};
mpu_l2_ram_clk: mpu_l2_ram_clk {
#clock-cells = <0>;
- compatible = "altr,socfpga-gate-clk";
+ compatible = "altr,socfpga-perip-clk";
clocks = <&mpuclk>;
fixed-divider = <2>;
};
l3_main_clk: l3_main_clk {
#clock-cells = <0>;
- compatible = "altr,socfpga-gate-clk";
+ compatible = "altr,socfpga-perip-clk";
clocks = <&mainclk>;
+ fixed-divider = <1>;
};
l3_mp_clk: l3_mp_clk {
pio: pinctrl@01c20800 {
compatible = "allwinner,sun6i-a31-pinctrl";
reg = <0x01c20800 0x400>;
- interrupts = <0 11 1>, <0 15 1>, <0 16 1>, <0 17 1>;
+ interrupts = <0 11 4>,
+ <0 15 4>,
+ <0 16 4>,
+ <0 17 4>;
clocks = <&apb1_gates 5>;
gpio-controller;
interrupt-controller;
timer@01c20c00 {
compatible = "allwinner,sun4i-timer";
reg = <0x01c20c00 0xa0>;
- interrupts = <0 18 1>,
- <0 19 1>,
- <0 20 1>,
- <0 21 1>,
- <0 22 1>;
+ interrupts = <0 18 4>,
+ <0 19 4>,
+ <0 20 4>,
+ <0 21 4>,
+ <0 22 4>;
clocks = <&osc24M>;
};
uart0: serial@01c28000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
- interrupts = <0 0 1>;
+ interrupts = <0 0 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 16>;
uart1: serial@01c28400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
- interrupts = <0 1 1>;
+ interrupts = <0 1 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 17>;
uart2: serial@01c28800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28800 0x400>;
- interrupts = <0 2 1>;
+ interrupts = <0 2 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 18>;
uart3: serial@01c28c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28c00 0x400>;
- interrupts = <0 3 1>;
+ interrupts = <0 3 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 19>;
uart4: serial@01c29000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29000 0x400>;
- interrupts = <0 4 1>;
+ interrupts = <0 4 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 20>;
uart5: serial@01c29400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29400 0x400>;
- interrupts = <0 5 1>;
+ interrupts = <0 5 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 21>;
emac: ethernet@01c0b000 {
compatible = "allwinner,sun4i-emac";
reg = <0x01c0b000 0x1000>;
- interrupts = <0 55 1>;
+ interrupts = <0 55 4>;
clocks = <&ahb_gates 17>;
status = "disabled";
};
pio: pinctrl@01c20800 {
compatible = "allwinner,sun7i-a20-pinctrl";
reg = <0x01c20800 0x400>;
- interrupts = <0 28 1>;
+ interrupts = <0 28 4>;
clocks = <&apb0_gates 5>;
gpio-controller;
interrupt-controller;
timer@01c20c00 {
compatible = "allwinner,sun4i-timer";
reg = <0x01c20c00 0x90>;
- interrupts = <0 22 1>,
- <0 23 1>,
- <0 24 1>,
- <0 25 1>,
- <0 67 1>,
- <0 68 1>;
+ interrupts = <0 22 4>,
+ <0 23 4>,
+ <0 24 4>,
+ <0 25 4>,
+ <0 67 4>,
+ <0 68 4>;
clocks = <&osc24M>;
};
uart0: serial@01c28000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
- interrupts = <0 1 1>;
+ interrupts = <0 1 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 16>;
uart1: serial@01c28400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
- interrupts = <0 2 1>;
+ interrupts = <0 2 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 17>;
uart2: serial@01c28800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28800 0x400>;
- interrupts = <0 3 1>;
+ interrupts = <0 3 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 18>;
uart3: serial@01c28c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28c00 0x400>;
- interrupts = <0 4 1>;
+ interrupts = <0 4 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 19>;
uart4: serial@01c29000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29000 0x400>;
- interrupts = <0 17 1>;
+ interrupts = <0 17 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 20>;
uart5: serial@01c29400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29400 0x400>;
- interrupts = <0 18 1>;
+ interrupts = <0 18 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 21>;
uart6: serial@01c29800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29800 0x400>;
- interrupts = <0 19 1>;
+ interrupts = <0 19 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 22>;
uart7: serial@01c29c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29c00 0x400>;
- interrupts = <0 20 1>;
+ interrupts = <0 20 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 23>;
i2c0: i2c@01c2ac00 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2ac00 0x400>;
- interrupts = <0 7 1>;
+ interrupts = <0 7 4>;
clocks = <&apb1_gates 0>;
clock-frequency = <100000>;
status = "disabled";
i2c1: i2c@01c2b000 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b000 0x400>;
- interrupts = <0 8 1>;
+ interrupts = <0 8 4>;
clocks = <&apb1_gates 1>;
clock-frequency = <100000>;
status = "disabled";
i2c2: i2c@01c2b400 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b400 0x400>;
- interrupts = <0 9 1>;
+ interrupts = <0 9 4>;
clocks = <&apb1_gates 2>;
clock-frequency = <100000>;
status = "disabled";
i2c3: i2c@01c2b800 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b800 0x400>;
- interrupts = <0 88 1>;
+ interrupts = <0 88 4>;
clocks = <&apb1_gates 3>;
clock-frequency = <100000>;
status = "disabled";
i2c4: i2c@01c2bc00 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2bc00 0x400>;
- interrupts = <0 89 1>;
+ interrupts = <0 89 4>;
clocks = <&apb1_gates 15>;
clock-frequency = <100000>;
status = "disabled";
BIT(slot));
if (edma_cc[ctlr]->intr_data[channel].callback)
edma_cc[ctlr]->intr_data[channel].callback(
- channel, DMA_COMPLETE,
+ channel, EDMA_DMA_COMPLETE,
edma_cc[ctlr]->intr_data[channel].data);
}
} while (sh_ipr);
callback) {
edma_cc[ctlr]->intr_data[k].
callback(k,
- DMA_CC_ERROR,
+ EDMA_DMA_CC_ERROR,
edma_cc[ctlr]->intr_data
[k].data);
}
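The two hunks above rename the status codes passed to EDMA callbacks so they can no longer collide with the generic dmaengine enum dma_status value DMA_COMPLETE. A minimal standalone sketch of the resulting callback protocol; the enum values and callback shape here are illustrative, not copied from the platform headers:

    #include <stdio.h>

    /* Illustrative stand-ins for the private EDMA status codes; the
     * real values live in the platform header, not here. */
    enum edma_status {
        EDMA_DMA_COMPLETE = 1,
        EDMA_DMA_CC_ERROR,
    };

    /* Callback shape used above: (channel, status, context). */
    static void demo_callback(unsigned channel, int status, void *data)
    {
        if (status == EDMA_DMA_COMPLETE)
            printf("channel %u: transfer complete\n", channel);
        else if (status == EDMA_DMA_CC_ERROR)
            printf("channel %u: channel-controller error\n", channel);
        (void)data;
    }

    int main(void)
    {
        demo_callback(3, EDMA_DMA_COMPLETE, NULL);
        return 0;
    }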
CONFIG_SMSC911X=y
CONFIG_STMMAC_ETH=y
CONFIG_MDIO_SUN4I=y
+CONFIG_TI_CPSW=y
CONFIG_KEYBOARD_SPEAR=y
CONFIG_SERIO_AMBAKMI=y
CONFIG_SERIAL_8250=y
CONFIG_USB_ISP1301=y
CONFIG_USB_MXS_PHY=y
CONFIG_MMC=y
+CONFIG_MMC_BLOCK_MINORS=16
CONFIG_MMC_ARMMMCI=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_ESDHC_IMX=y
CONFIG_MMC_SDHCI_TEGRA=y
CONFIG_MMC_SDHCI_SPEAR=y
+CONFIG_MMC_SDHCI_BCM_KONA=y
CONFIG_MMC_OMAP=y
CONFIG_MMC_OMAP_HS=y
CONFIG_EDAC=y
CONFIG_MFD_TPS65217=y
CONFIG_MFD_TPS65910=y
CONFIG_TWL6040_CORE=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_PALMAS=y
CONFIG_REGULATOR_TPS65023=y
CONFIG_REGULATOR_TPS6507X=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
CONFIG_COMMON_CLK_DEBUG=y
# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_TMPFS=y
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
CONFIG_NLS=y
+CONFIG_PRINTK_TIME=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
+CONFIG_ARM_U8500_CPUIDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
CONFIG_PM_RUNTIME=y
CONFIG_EXT3_FS=y
CONFIG_EXT4_FS=y
CONFIG_VFAT_FS=y
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_MISC_FILESYSTEMS is not set
# define VFP_ABI_FRAME 0
# define BSAES_ASM_EXTENDED_KEY
# define XTS_CHAIN_TWEAK
-# define __ARM_ARCH__ __LINUX_ARM_ARCH__
+# define __ARM_ARCH__ 7
#endif
#ifdef __thumb__
# define VFP_ABI_FRAME 0
# define BSAES_ASM_EXTENDED_KEY
# define XTS_CHAIN_TWEAK
-# define __ARM_ARCH__ __LINUX_ARM_ARCH__
+# define __ARM_ARCH__ 7
#endif
#ifdef __thumb__
return slot_cnt;
}
-static inline int iop_desc_is_pq(struct iop_adma_desc_slot *desc)
-{
- return 0;
-}
-
-static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
- struct iop_adma_chan *chan)
-{
- union iop3xx_desc hw_desc = { .ptr = desc->hw_desc, };
-
- switch (chan->device->id) {
- case DMA0_ID:
- case DMA1_ID:
- return hw_desc.dma->dest_addr;
- case AAU_ID:
- return hw_desc.aau->dest_addr;
- default:
- BUG();
- }
- return 0;
-}
-
-
-static inline u32 iop_desc_get_qdest_addr(struct iop_adma_desc_slot *desc,
- struct iop_adma_chan *chan)
-{
- BUG();
- return 0;
-}
-
static inline u32 iop_desc_get_byte_count(struct iop_adma_desc_slot *desc,
struct iop_adma_chan *chan)
{
 * @slot_cnt: total slots used in a transaction (group of operations)
* @slots_per_op: number of slots per operation
* @idx: pool index
- * @unmap_src_cnt: number of xor sources
- * @unmap_len: transaction bytecount
* @tx_list: list of descriptors that are associated with one operation
* @async_tx: support for the async_tx api
* @group_list: list of slots that make up a multi-descriptor transaction
u16 slot_cnt;
u16 slots_per_op;
u16 idx;
- u16 unmap_src_cnt;
- size_t unmap_len;
struct list_head tx_list;
struct dma_async_tx_descriptor async_tx;
union {
*/
#define ioremap(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE)
#define ioremap_nocache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE)
-#define ioremap_cached(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
+#define ioremap_cache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
#define ioremap_wc(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_WC)
#define iounmap __arm_iounmap
#define TASK_UNMAPPED_BASE UL(0x00000000)
#endif
-#ifndef PHYS_OFFSET
-#define PHYS_OFFSET UL(CONFIG_DRAM_BASE)
-#endif
-
#ifndef END_MEM
#define END_MEM (UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE)
#endif
#ifndef PAGE_OFFSET
-#define PAGE_OFFSET (PHYS_OFFSET)
+#define PAGE_OFFSET PLAT_PHYS_OFFSET
#endif
/*
* The module can be at any place in ram in nommu mode.
*/
#define MODULES_END (END_MEM)
-#define MODULES_VADDR (PHYS_OFFSET)
+#define MODULES_VADDR PAGE_OFFSET
#define XIP_VIRT_ADDR(physaddr) (physaddr)
#endif
#define ARCH_PGD_MASK ((1 << ARCH_PGD_SHIFT) - 1)
+/*
+ * PLAT_PHYS_OFFSET is the offset (from zero) of the start of physical
+ * memory. This is used for XIP and NoMMU kernels, or by kernels which
+ * have their own mach/memory.h. Assembly code must always use
+ * PLAT_PHYS_OFFSET and not PHYS_OFFSET.
+ */
+#ifndef PLAT_PHYS_OFFSET
+#define PLAT_PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
+#endif
+
#ifndef __ASSEMBLY__
/*
static inline unsigned long __phys_to_virt(phys_addr_t x)
{
unsigned long t;
- __pv_stub(x, t, "sub", __PV_BITS_31_24);
+
+ /*
+	 * The 'unsigned long' cast discards the upper word when
+	 * phys_addr_t is 64 bit, and makes sure that the inline
+	 * assembler expression receives a 32 bit argument in the
+	 * place where an 'r' 32 bit operand is expected.
+ */
+ __pv_stub((unsigned long) x, t, "sub", __PV_BITS_31_24);
return t;
}
#else
+#define PHYS_OFFSET PLAT_PHYS_OFFSET
+
static inline phys_addr_t __virt_to_phys(unsigned long x)
{
return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET;
#endif
#endif
-#endif /* __ASSEMBLY__ */
-
-#ifndef PHYS_OFFSET
-#ifdef PLAT_PHYS_OFFSET
-#define PHYS_OFFSET PLAT_PHYS_OFFSET
-#else
-#define PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
-#endif
-#endif
-
-#ifndef __ASSEMBLY__
/*
* PFNs are used to describe any physical page; this means
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
+#define virt_addr_valid(kaddr) (((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) \
+ && pfn_valid(__pa(kaddr) >> PAGE_SHIFT) )
#endif
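The non-patched fallback above is plain linear arithmetic between the kernel's virtual base and the start of RAM. A standalone sketch with illustrative constants (PAGE_OFFSET and PHYS_OFFSET differ per platform):

    #include <stdio.h>

    #define PAGE_OFFSET 0xC0000000UL /* illustrative kernel virtual base   */
    #define PHYS_OFFSET 0x80000000UL /* illustrative start of physical RAM */

    /* Mirrors the non-optimized __virt_to_phys()/__phys_to_virt() pair. */
    static unsigned long demo_virt_to_phys(unsigned long x)
    {
        return x - PAGE_OFFSET + PHYS_OFFSET;
    }

    static unsigned long demo_phys_to_virt(unsigned long x)
    {
        return x - PHYS_OFFSET + PAGE_OFFSET;
    }

    int main(void)
    {
        /* 0xc0100000 maps to 0x80100000 and back again. */
        printf("%#lx\n", demo_virt_to_phys(0xC0100000UL));
        printf("%#lx\n", demo_phys_to_virt(0x80100000UL));
        return 0;
    }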
* mapping to be mapped at. This is particularly important for
* non-high vector CPUs.
*/
-#define FIRST_USER_ADDRESS PAGE_SIZE
+#define FIRST_USER_ADDRESS (PAGE_SIZE * 2)
/*
* Use TASK_SIZE as the ceiling argument for free_pgtables() and
return __set_phys_to_machine(pfn, mfn);
}
-#define xen_remap(cookie, size) ioremap_cached((cookie), (size));
+#define xen_remap(cookie, size) ioremap_cache((cookie), (size));
#endif /* _ASM_ARM_XEN_PAGE_H */
#ifdef CONFIG_ARM_MPU
/* Calculate the size of a region covering just the kernel */
- ldr r5, =PHYS_OFFSET @ Region start: PHYS_OFFSET
+ ldr r5, =PLAT_PHYS_OFFSET @ Region start: PHYS_OFFSET
ldr r6, =(_end) @ Cover whole kernel
sub r6, r6, r5 @ Minimum size of region to map
clz r6, r6 @ Region size must be 2^N...
set_region_nr r0, #MPU_RAM_REGION
isb
/* Full access from PL0, PL1, shared for CONFIG_SMP, cacheable */
- ldr r0, =PHYS_OFFSET @ RAM starts at PHYS_OFFSET
+ ldr r0, =PLAT_PHYS_OFFSET @ RAM starts at PHYS_OFFSET
ldr r5,=(MPU_AP_PL1RW_PL0RW | MPU_RGN_NORMAL)
setup_region r0, r5, r6, MPU_DATA_SIDE @ PHYS_OFFSET, shared, enabled
sub r4, r3, r4 @ (PHYS_OFFSET - PAGE_OFFSET)
add r8, r8, r4 @ PHYS_OFFSET
#else
- ldr r8, =PHYS_OFFSET @ always constant in this case
+ ldr r8, =PLAT_PHYS_OFFSET @ always constant in this case
#endif
/*
teq r0, #0x0 @ '0' on actual UP A9 hardware
beq __fixup_smp_on_up @ So its an A9 UP
ldr r0, [r0, #4] @ read SCU Config
+ARM_BE8(rev r0, r0) @ byteswap if big endian
and r0, r0, #0x3 @ number of CPUs
teq r0, #0x0 @ is 1?
movne pc, lr
ldrcc r7, [r4], #4 @ use branch for delay slot
bcc 1b
bx lr
+#else
+#ifdef CONFIG_CPU_ENDIAN_BE8
+ moveq r0, #0x00004000 @ set bit 22, mov to mvn instruction
#else
moveq r0, #0x400000 @ set bit 22, mov to mvn instruction
+#endif
b 2f
1: ldr ip, [r7, r3]
#ifdef CONFIG_CPU_ENDIAN_BE8
tst ip, #0x000f0000 @ check the rotation field
orrne ip, ip, r6, lsl #24 @ mask in offset bits 31-24
biceq ip, ip, #0x00004000 @ clear bit 22
- orreq ip, ip, r0, lsl #24 @ mask in offset bits 7-0
+ orreq ip, ip, r0 @ mask in offset bits 7-0
#else
bic ip, ip, #0x000000ff
tst ip, #0xf00 @ check the rotation field
#include <asm/pgalloc.h>
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
+#include <asm/fncpy.h>
#include <asm/mach-types.h>
#include <asm/smp_plat.h>
#include <asm/system_misc.h>
-extern const unsigned char relocate_new_kernel[];
+extern void relocate_new_kernel(void);
extern const unsigned int relocate_new_kernel_size;
extern unsigned long kexec_start_address;
{
unsigned long page_list;
unsigned long reboot_code_buffer_phys;
+ unsigned long reboot_entry = (unsigned long)relocate_new_kernel;
+ unsigned long reboot_entry_phys;
void *reboot_code_buffer;
/*
/* copy our kernel relocation code to the control code page */
- memcpy(reboot_code_buffer,
- relocate_new_kernel, relocate_new_kernel_size);
+ reboot_entry = fncpy(reboot_code_buffer,
+ reboot_entry,
+ relocate_new_kernel_size);
+ reboot_entry_phys = (unsigned long)reboot_entry +
+ (reboot_code_buffer_phys - (unsigned long)reboot_code_buffer);
-
- flush_icache_range((unsigned long) reboot_code_buffer,
- (unsigned long) reboot_code_buffer + KEXEC_CONTROL_PAGE_SIZE);
printk(KERN_INFO "Bye!\n");
if (kexec_reinit)
kexec_reinit();
- soft_restart(reboot_code_buffer_phys);
+ soft_restart(reboot_entry_phys);
}
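The machine_kexec() change above stops taking the address of a data label and instead copies the relocation stub with fncpy(), which returns an entry address with the Thumb bit preserved; the physical entry point is then just that virtual entry plus the reboot buffer's virt-to-phys delta. A hedged sketch of that last step (names and addresses are illustrative):

    #include <stdio.h>

    static unsigned long entry_to_phys(unsigned long entry_virt,
                                       unsigned long buf_virt,
                                       unsigned long buf_phys)
    {
        /* The Thumb bit (bit 0), if set in the returned entry,
         * survives the addition because the delta is page-aligned. */
        return entry_virt + (buf_phys - buf_virt);
    }

    int main(void)
    {
        /* Stub copied to a buffer at virt 0xc0008000 / phys 0x80008000;
         * entry 0x20 bytes in, with the Thumb bit set. */
        printf("%#lx\n", entry_to_phys(0xC0008021UL, 0xC0008000UL,
                                       0x80008000UL));
        return 0;
    }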
unsigned long get_wchan(struct task_struct *p)
{
struct stackframe frame;
+ unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
frame.sp = thread_saved_sp(p);
frame.lr = 0; /* recovered from the stack */
frame.pc = thread_saved_pc(p);
+ stack_page = (unsigned long)task_stack_page(p);
do {
- int ret = unwind_frame(&frame);
- if (ret < 0)
+ if (frame.sp < stack_page ||
+ frame.sp >= stack_page + THREAD_SIZE ||
+ unwind_frame(&frame) < 0)
return 0;
if (!in_sched_functions(frame.pc))
return frame.pc;
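The get_wchan() fix above refuses to unwind from a saved stack pointer that lies outside the task's own stack area, so a corrupted value cannot walk the unwinder into unrelated memory. A minimal sketch of the bounds test (the THREAD_SIZE value is illustrative; it is arch-dependent in reality):

    #include <stdbool.h>
    #include <stdio.h>

    #define THREAD_SIZE 8192UL /* illustrative */

    static bool sp_on_stack(unsigned long sp, unsigned long stack_page)
    {
        return sp >= stack_page && sp < stack_page + THREAD_SIZE;
    }

    int main(void)
    {
        unsigned long stack_page = 0x10000;

        printf("%d\n", sp_on_stack(0x10100, stack_page)); /* 1: inside  */
        printf("%d\n", sp_on_stack(0x20000, stack_page)); /* 0: outside */
        return 0;
    }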
* relocate_kernel.S - put the kernel image in place to boot
*/
+#include <linux/linkage.h>
#include <asm/kexec.h>
- .globl relocate_new_kernel
-relocate_new_kernel:
+ .align 3 /* not needed for this code, but keeps fncpy() happy */
+
+ENTRY(relocate_new_kernel)
ldr r0,kexec_indirection_page
ldr r1,kexec_start_address
kexec_boot_atags:
.long 0x0
+ENDPROC(relocate_new_kernel)
+
relocate_new_kernel_end:
.globl relocate_new_kernel_size
machine_desc = mdesc;
machine_name = mdesc->name;
- setup_dma_zone(mdesc);
-
if (mdesc->reboot_mode != REBOOT_HARD)
reboot_mode = mdesc->reboot_mode;
sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]), meminfo_cmp, NULL);
early_paging_init(mdesc, lookup_processor_type(read_cpuid_id()));
+ setup_dma_zone(mdesc);
sanity_check_meminfo();
arm_memblock_init(&meminfo, mdesc);
* snippets.
*/
+/*
+ * In the CPU_THUMBONLY case, kernel ARM opcodes are not allowed.
+ * Note that in this case the code skips those instructions but uses
+ * the .org directive to keep the correct layout of the sigreturn_codes
+ * array.
+ */
+#ifndef CONFIG_CPU_THUMBONLY
+#define ARM_OK(code...) code
+#else
+#define ARM_OK(code...)
+#endif
+
+ .macro arm_slot n
+ .org sigreturn_codes + 12 * (\n)
+ARM_OK( .arm )
+ .endm
+
+ .macro thumb_slot n
+ .org sigreturn_codes + 12 * (\n) + 8
+ .thumb
+ .endm
+
#if __LINUX_ARM_ARCH__ <= 4
/*
* Note we manually set minimally required arch that supports
.global sigreturn_codes
.type sigreturn_codes, #object
- .arm
+ .align
sigreturn_codes:
/* ARM sigreturn syscall code snippet */
- mov r7, #(__NR_sigreturn - __NR_SYSCALL_BASE)
- swi #(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE)
+ arm_slot 0
+ARM_OK( mov r7, #(__NR_sigreturn - __NR_SYSCALL_BASE) )
+ARM_OK( swi #(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE) )
/* Thumb sigreturn syscall code snippet */
- .thumb
+ thumb_slot 0
movs r7, #(__NR_sigreturn - __NR_SYSCALL_BASE)
swi #0
/* ARM sigreturn_rt syscall code snippet */
- .arm
- mov r7, #(__NR_rt_sigreturn - __NR_SYSCALL_BASE)
- swi #(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE)
+ arm_slot 1
+ARM_OK( mov r7, #(__NR_rt_sigreturn - __NR_SYSCALL_BASE) )
+ARM_OK( swi #(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE) )
/* Thumb sigreturn_rt syscall code snippet */
- .thumb
+ thumb_slot 1
movs r7, #(__NR_rt_sigreturn - __NR_SYSCALL_BASE)
swi #0
 * it is the thumb case or not, so we need an additional
 * word after the real last entry.
*/
- .arm
+ arm_slot 2
.space 4
.size sigreturn_codes, . - sigreturn_codes
high = ALIGN(low, THREAD_SIZE);
/* check current frame pointer is within bounds */
- if (fp < (low + 12) || fp + 4 >= high)
+ if (fp < low + 12 || fp > high - 4)
return -EINVAL;
/* restore the registers from the stack frame */
#include <asm/system_misc.h>
#include <asm/opcodes.h>
-static const char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt" };
+static const char *handler[]= {
+ "prefetch abort",
+ "data abort",
+ "address exception",
+ "interrupt",
+ "undefined instruction",
+};
void *vectors_page;
__do_cache_op(unsigned long start, unsigned long end)
{
int ret;
- unsigned long chunk = PAGE_SIZE;
do {
+ unsigned long chunk = min(PAGE_SIZE, end - start);
+
if (signal_pending(current)) {
struct thread_info *ti = current_thread_info();
memcpy(vectors + 0xfe0, vectors + 0xfe8, 4);
}
#else
-static void __init kuser_init(void *vectors)
+static inline void __init kuser_init(void *vectors)
{
}
#endif
return err;
}
+static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
+{
+ if (!is_vmalloc_addr(kaddr)) {
+ BUG_ON(!virt_addr_valid(kaddr));
+ return __pa(kaddr);
+ } else {
+ return page_to_phys(vmalloc_to_page(kaddr)) +
+ offset_in_page(kaddr);
+ }
+}
+
/**
* create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
* @from: The virtual kernel start address of the range
*/
int create_hyp_mappings(void *from, void *to)
{
- unsigned long phys_addr = virt_to_phys(from);
+ phys_addr_t phys_addr;
+ unsigned long virt_addr;
unsigned long start = KERN_TO_HYP((unsigned long)from);
unsigned long end = KERN_TO_HYP((unsigned long)to);
- /* Check for a valid kernel memory mapping */
- if (!virt_addr_valid(from) || !virt_addr_valid(to - 1))
- return -EINVAL;
+ start = start & PAGE_MASK;
+ end = PAGE_ALIGN(end);
- return __create_hyp_mappings(hyp_pgd, start, end,
- __phys_to_pfn(phys_addr), PAGE_HYP);
+ for (virt_addr = start; virt_addr < end; virt_addr += PAGE_SIZE) {
+ int err;
+
+ phys_addr = kvm_kaddr_to_phys(from + virt_addr - start);
+ err = __create_hyp_mappings(hyp_pgd, virt_addr,
+ virt_addr + PAGE_SIZE,
+ __phys_to_pfn(phys_addr),
+ PAGE_HYP);
+ if (err)
+ return err;
+ }
+
+ return 0;
}
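The create_hyp_mappings() rework above maps the range one page at a time because a vmalloc'd source is virtually contiguous but physically scattered, so each page's physical address has to be looked up individually. A hedged sketch of the per-page walk; the callbacks stand in for the kernel's translation and mapping helpers:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    typedef unsigned long (*lookup_phys_fn)(unsigned long virt);
    typedef int (*map_page_fn)(unsigned long virt, unsigned long phys);

    /* Map [start, end) one page at a time, translating each page on
     * its own; mirrors the loop structure above. */
    static int map_range(unsigned long start, unsigned long end,
                         lookup_phys_fn lookup, map_page_fn map_page)
    {
        unsigned long virt;

        for (virt = start; virt < end; virt += PAGE_SIZE) {
            int err = map_page(virt, lookup(virt));
            if (err)
                return err;
        }
        return 0;
    }

    /* Toy callbacks: identity translation, always-succeeding map. */
    static unsigned long demo_lookup(unsigned long v) { return v; }
    static int demo_map(unsigned long v, unsigned long p)
    {
        printf("map %#lx -> %#lx\n", v, p);
        return 0;
    }

    int main(void)
    {
        return map_range(0x1000, 0x3000, demo_lookup, demo_map);
    }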
/**
and r3, r0, #31 @ Get bit offset
mov r0, r0, lsr #5
add r1, r1, r0, lsl #2 @ Get word offset
-#if __LINUX_ARM_ARCH__ >= 7
+#if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
.arch_extension mp
ALT_SMP(W(pldw) [r1])
ALT_UP(W(nop))
/*
* loops = r0 * HZ * loops_per_jiffy / 1000000
*/
+ .align 3
@ Delay routine
ENTRY(__loop_delay)
static struct clock_event_device clkevt = {
.name = "at91_tick",
.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
- .shift = 32,
.rating = 150,
.set_next_event = clkevt32k_next_event,
.set_mode = clkevt32k_mode,
at91_st_write(AT91_ST_RTMR, 1);
/* Setup timer clockevent, with minimum of two ticks (important!!) */
- clkevt.mult = div_sc(AT91_SLOW_CLOCK, NSEC_PER_SEC, clkevt.shift);
- clkevt.max_delta_ns = clockevent_delta2ns(AT91_ST_ALMV, &clkevt);
- clkevt.min_delta_ns = clockevent_delta2ns(2, &clkevt) + 1;
clkevt.cpumask = cpumask_of(0);
- clockevents_register_device(&clkevt);
+ clockevents_config_and_register(&clkevt, AT91_SLOW_CLOCK,
+ 2, AT91_ST_ALMV);
/* register clocksource */
clocksource_register_hz(&clk32k, AT91_SLOW_CLOCK);
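The at91 timer hunk above drops the open-coded mult/shift and delta math in favour of clockevents_config_and_register(), which derives those values internally from the clock rate and the min/max tick counts. A rough sketch of the arithmetic being replaced, for a 32768 Hz slow clock (the rounding here is illustrative, not the kernel's exact algorithm):

    #include <stdio.h>
    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000ULL

    int main(void)
    {
        uint64_t freq = 32768, shift = 32;

        /* mult such that ns * mult >> shift == ticks */
        uint64_t mult = (freq << shift) / NSEC_PER_SEC;

        /* nanoseconds corresponding to the 2-tick minimum latch */
        uint64_t min_ns = (2ULL << shift) / mult;

        printf("mult=%llu min_delta~=%lluns\n",
               (unsigned long long)mult, (unsigned long long)min_ns);
        return 0;
    }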
#include <mach/at91_ramc.h>
#include <mach/at91rm9200_sdramc.h>
+#ifdef CONFIG_PM
extern void at91_pm_set_standby(void (*at91_standby)(void));
+#else
+static inline void at91_pm_set_standby(void (*at91_standby)(void)) { }
+#endif
/*
* The AT91RM9200 goes into self-refresh mode with this command, and will
.name = "twi0_clk",
.pid = SAMA5D3_ID_TWI0,
.type = CLK_TYPE_PERIPHERAL,
- .div = AT91_PMC_PCR_DIV2,
+ .div = AT91_PMC_PCR_DIV8,
};
static struct clk twi1_clk = {
.name = "twi1_clk",
.pid = SAMA5D3_ID_TWI1,
.type = CLK_TYPE_PERIPHERAL,
- .div = AT91_PMC_PCR_DIV2,
+ .div = AT91_PMC_PCR_DIV8,
};
static struct clk twi2_clk = {
.name = "twi2_clk",
.pid = SAMA5D3_ID_TWI2,
.type = CLK_TYPE_PERIPHERAL,
- .div = AT91_PMC_PCR_DIV2,
+ .div = AT91_PMC_PCR_DIV8,
};
static struct clk mmc0_clk = {
.name = "mci0_clk",
static struct resource da830_mcasp1_resources[] = {
{
- .name = "mcasp1",
+ .name = "mpu",
.start = DAVINCI_DA830_MCASP1_REG_BASE,
.end = DAVINCI_DA830_MCASP1_REG_BASE + (SZ_1K * 12) - 1,
.flags = IORESOURCE_MEM,
static struct resource da850_mcasp_resources[] = {
{
- .name = "mcasp",
+ .name = "mpu",
.start = DAVINCI_DA8XX_MCASP0_REG_BASE,
.end = DAVINCI_DA8XX_MCASP0_REG_BASE + (SZ_1K * 12) - 1,
.flags = IORESOURCE_MEM,
static struct resource dm355_asp1_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_ASP1_BASE,
.end = DAVINCI_ASP1_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
int __init dm355_gpio_register(void)
{
return davinci_gpio_register(dm355_gpio_resources,
- sizeof(dm355_gpio_resources),
+ ARRAY_SIZE(dm355_gpio_resources),
&dm355_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
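The davinci_gpio_register() fixes above and below correct a classic count-versus-bytes mix-up: sizeof() on an array yields its size in bytes, while the callee expects the number of entries. The kernel's ARRAY_SIZE() macro divides the two, as this standalone sketch shows:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct resource {
        unsigned long start, end;
    };

    int main(void)
    {
        struct resource res[3] = { {0, 7}, {8, 15}, {16, 23} };

        /* 3 entries of 16 bytes each on a typical 64-bit host. */
        printf("sizeof=%zu bytes, ARRAY_SIZE=%zu entries\n",
               sizeof(res), ARRAY_SIZE(res));
        return 0;
    }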
int __init dm365_gpio_register(void)
{
return davinci_gpio_register(dm365_gpio_resources,
- sizeof(dm365_gpio_resources),
+ ARRAY_SIZE(dm365_gpio_resources),
&dm365_gpio_platform_data);
}
static struct resource dm365_asp_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_DM365_ASP0_BASE,
.end = DAVINCI_DM365_ASP0_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
/* DM6446 EVM uses ASP0; line-out is a pair of RCA jacks */
static struct resource dm644x_asp_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_ASP0_BASE,
.end = DAVINCI_ASP0_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
int __init dm644x_gpio_register(void)
{
return davinci_gpio_register(dm644_gpio_resources,
- sizeof(dm644_gpio_resources),
+ ARRAY_SIZE(dm644_gpio_resources),
&dm644_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
static struct resource dm646x_mcasp0_resources[] = {
{
- .name = "mcasp0",
+ .name = "mpu",
.start = DAVINCI_DM646X_MCASP0_REG_BASE,
.end = DAVINCI_DM646X_MCASP0_REG_BASE + (SZ_1K << 1) - 1,
.flags = IORESOURCE_MEM,
static struct resource dm646x_mcasp1_resources[] = {
{
- .name = "mcasp1",
+ .name = "mpu",
.start = DAVINCI_DM646X_MCASP1_REG_BASE,
.end = DAVINCI_DM646X_MCASP1_REG_BASE + (SZ_1K << 1) - 1,
.flags = IORESOURCE_MEM,
int __init dm646x_gpio_register(void)
{
return davinci_gpio_register(dm646x_gpio_resources,
- sizeof(dm646x_gpio_resources),
+ ARRAY_SIZE(dm646x_gpio_resources),
&dm646x_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
#include <linux/init.h>
#include <linux/io.h>
#include <linux/spinlock.h>
+#include <video/vga.h>
#include <asm/pgtable.h>
#include <asm/page.h>
iotable_init(ebsa285_host_io_desc, ARRAY_SIZE(ebsa285_host_io_desc));
pci_map_io_early(__phys_to_pfn(DC21285_PCI_IO));
}
+
+ vga_base = PCIMEM_BASE;
}
void footbridge_restart(enum reboot_mode mode, const char *cmd)
void __init footbridge_timer_init(void)
{
struct clock_event_device *ce = &ckevt_dc21285;
+ unsigned rate = DIV_ROUND_CLOSEST(mem_fclk_21285, 16);
- clocksource_register_hz(&cksrc_dc21285, (mem_fclk_21285 + 8) / 16);
+ clocksource_register_hz(&cksrc_dc21285, rate);
setup_irq(ce->irq, &footbridge_timer_irq);
ce->cpumask = cpumask_of(smp_processor_id());
- clockevents_config_and_register(ce, mem_fclk_21285, 0x4, 0xffffff);
+ clockevents_config_and_register(ce, rate, 0x4, 0xffffff);
}
#include <linux/irq.h>
#include <linux/io.h>
#include <linux/spinlock.h>
-#include <video/vga.h>
#include <asm/irq.h>
#include <asm/mach/pci.h>
int cfn_mode;
pcibios_min_mem = 0x81000000;
- vga_base = PCIMEM_BASE;
mem_size = (unsigned int)high_memory - PAGE_OFFSET;
for (mem_mask = 0x00100000; mem_mask < 0x10000000; mem_mask <<= 1)
const char *name;
const char *trigger;
} ebsa285_leds[] = {
- { "ebsa285:amber", "heartbeat", },
- { "ebsa285:green", "cpu0", },
+ { "ebsa285:amber", "cpu0", },
+ { "ebsa285:green", "heartbeat", },
{ "ebsa285:red",},
};
+static unsigned char hw_led_state;
+
static void ebsa285_led_set(struct led_classdev *cdev,
enum led_brightness b)
{
struct ebsa285_led *led = container_of(cdev,
struct ebsa285_led, cdev);
- if (b != LED_OFF)
- *XBUS_LEDS |= led->mask;
+ if (b == LED_OFF)
+ hw_led_state |= led->mask;
else
- *XBUS_LEDS &= ~led->mask;
+ hw_led_state &= ~led->mask;
+ *XBUS_LEDS = hw_led_state;
}
static enum led_brightness ebsa285_led_get(struct led_classdev *cdev)
struct ebsa285_led *led = container_of(cdev,
struct ebsa285_led, cdev);
- return (*XBUS_LEDS & led->mask) ? LED_FULL : LED_OFF;
+ return hw_led_state & led->mask ? LED_OFF : LED_FULL;
}
static int __init ebsa285_leds_init(void)
{
int i;
- if (machine_is_ebsa285())
+ if (!machine_is_ebsa285())
return -ENODEV;
- /* 3 LEDS All ON */
- *XBUS_LEDS |= XBUS_LED_AMBER | XBUS_LED_GREEN | XBUS_LED_RED;
+ /* 3 LEDS all off */
+ hw_led_state = XBUS_LED_AMBER | XBUS_LED_GREEN | XBUS_LED_RED;
+ *XBUS_LEDS = hw_led_state;
for (i = 0; i < ARRAY_SIZE(ebsa285_leds); i++) {
struct ebsa285_led *led;
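The ebsa285 LED hunks above fix two inverted assumptions at once: the LEDs are active-low, and the XBUS register cannot be read back, so the driver now keeps a software shadow of the last value written. A hedged sketch of that pattern; the fake register stands in for real hardware:

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t fake_reg;                      /* stands in for hardware */
    static volatile uint8_t *LED_REG = &fake_reg; /* write-only in reality  */
    static uint8_t hw_led_state;                  /* software shadow        */

    static void led_set(uint8_t mask, int on)
    {
        if (on)
            hw_led_state &= ~mask; /* active-low: cleared bit = lit */
        else
            hw_led_state |= mask;
        *LED_REG = hw_led_state;   /* always write the whole byte   */
    }

    static int led_get(uint8_t mask)
    {
        return !(hw_led_state & mask); /* read the shadow, not the reg */
    }

    int main(void)
    {
        led_set(0x01, 1);
        printf("led0 on=%d reg=%#x\n", led_get(0x01), (unsigned)fake_reg);
        return 0;
    }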
#include <linux/clkdev.h>
#include <linux/clocksource.h>
#include <linux/dma-mapping.h>
+#include <linux/input.h>
#include <linux/io.h>
#include <linux/irqchip.h>
+#include <linux/mailbox.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/of_address.h>
+#include <linux/reboot.h>
#include <linux/amba/bus.h>
#include <linux/platform_device.h>
.name = "cpuidle-calxeda",
};
+static int hb_keys_notifier(struct notifier_block *nb, unsigned long event, void *data)
+{
+ u32 key = *(u32 *)data;
+
+ if (event != 0x1000)
+ return 0;
+
+ if (key == KEY_POWER)
+ orderly_poweroff(false);
+ else if (key == 0xffff)
+ ctrl_alt_del();
+
+ return 0;
+}
+static struct notifier_block hb_keys_nb = {
+ .notifier_call = hb_keys_notifier,
+};
+
static void __init highbank_init(void)
{
struct device_node *np;
bus_register_notifier(&platform_bus_type, &highbank_platform_nb);
bus_register_notifier(&amba_bustype, &highbank_amba_nb);
+ pl320_ipc_register_notifier(&hb_keys_nb);
+
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
if (psci_ops.cpu_suspend)
#define iop_chan_pq_slot_count iop_chan_xor_slot_count
#define iop_chan_pq_zero_sum_slot_count iop_chan_xor_slot_count
-static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
- struct iop_adma_chan *chan)
-{
- struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
- return hw_desc->dest_addr;
-}
-
-static inline u32 iop_desc_get_qdest_addr(struct iop_adma_desc_slot *desc,
- struct iop_adma_chan *chan)
-{
- struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
- return hw_desc->q_dest_addr;
-}
-
static inline u32 iop_desc_get_byte_count(struct iop_adma_desc_slot *desc,
struct iop_adma_chan *chan)
{
hw_desc->desc_ctrl = u_desc_ctrl.value;
}
-static inline int iop_desc_is_pq(struct iop_adma_desc_slot *desc)
-{
- struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
- union {
- u32 value;
- struct iop13xx_adma_desc_ctrl field;
- } u_desc_ctrl;
-
- u_desc_ctrl.value = hw_desc->desc_ctrl;
- return u_desc_ctrl.field.pq_xfer_en;
-}
-
static inline void
iop_desc_init_pq_zero_sum(struct iop_adma_desc_slot *desc, int src_cnt,
unsigned long flags)
obj-$(CONFIG_ARCH_OMAP2) += $(omap-2-3-common) $(hwmod-common)
obj-$(CONFIG_ARCH_OMAP3) += $(omap-2-3-common) $(hwmod-common) $(secure-common)
-obj-$(CONFIG_ARCH_OMAP4) += prm44xx.o $(hwmod-common) $(secure-common)
+obj-$(CONFIG_ARCH_OMAP4) += $(hwmod-common) $(secure-common)
obj-$(CONFIG_SOC_AM33XX) += irq.o $(hwmod-common)
-obj-$(CONFIG_SOC_OMAP5) += prm44xx.o $(hwmod-common) $(secure-common)
+obj-$(CONFIG_SOC_OMAP5) += $(hwmod-common) $(secure-common)
obj-$(CONFIG_SOC_AM43XX) += $(hwmod-common) $(secure-common)
-obj-$(CONFIG_SOC_DRA7XX) += prm44xx.o $(hwmod-common) $(secure-common)
+obj-$(CONFIG_SOC_DRA7XX) += $(hwmod-common) $(secure-common)
ifneq ($(CONFIG_SND_OMAP_SOC_MCBSP),)
obj-y += mcbsp.o
.dt_compat = omap3_gp_boards_compat,
.restart = omap3xxx_restart,
MACHINE_END
+
+static const char *am3517_boards_compat[] __initdata = {
+ "ti,am3517",
+ NULL,
+};
+
+DT_MACHINE_START(AM3517_DT, "Generic AM3517 (Flattened Device Tree)")
+ .reserve = omap_reserve,
+ .map_io = omap3_map_io,
+ .init_early = am35xx_init_early,
+ .init_irq = omap_intc_of_init,
+ .handle_irq = omap3_intc_handle_irq,
+ .init_machine = omap_generic_init,
+ .init_late = omap3_init_late,
+ .init_time = omap3_gptimer_timer_init,
+ .dt_compat = am3517_boards_compat,
+ .restart = omap3xxx_restart,
+MACHINE_END
#endif
#ifdef CONFIG_SOC_AM33XX
static int ldp_twl_gpio_setup(struct device *dev, unsigned gpio, unsigned ngpio)
{
+ int res;
+
/* LCD enable GPIO */
ldp_lcd_pdata.enable_gpio = gpio + 7;
/* Backlight enable GPIO */
ldp_lcd_pdata.backlight_gpio = gpio + 15;
+ res = platform_device_register(&ldp_lcd_device);
+ if (res)
+ pr_err("Unable to register LCD: %d\n", res);
+
return 0;
}
static struct platform_device *ldp_devices[] __initdata = {
&ldp_gpio_keys_device,
- &ldp_lcd_device,
};
#ifdef CONFIG_OMAP_MUX
extern void omap_sdrc_init(struct omap_sdrc_params *sdrc_cs0,
struct omap_sdrc_params *sdrc_cs1);
struct omap2_hsmmc_info;
-extern int omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers);
extern void omap_reserve(void);
struct omap_hwmod;
#include "soc.h"
#include "iomap.h"
-#include "mux.h"
#include "control.h"
#include "display.h"
#include "prm.h"
{ "dss_hdmi", "omapdss_hdmi", -1 },
};
-static void __init omap4_tpd12s015_mux_pads(void)
-{
- omap_mux_init_signal("hdmi_cec",
- OMAP_PIN_INPUT_PULLUP);
- omap_mux_init_signal("hdmi_ddc_scl",
- OMAP_PIN_INPUT_PULLUP);
- omap_mux_init_signal("hdmi_ddc_sda",
- OMAP_PIN_INPUT_PULLUP);
-}
-
-static void __init omap4_hdmi_mux_pads(enum omap_hdmi_flags flags)
-{
- u32 reg;
- u16 control_i2c_1;
-
- /*
- * CONTROL_I2C_1: HDMI_DDC_SDA_PULLUPRESX (bit 28) and
- * HDMI_DDC_SCL_PULLUPRESX (bit 24) are set to disable
- * internal pull up resistor.
- */
- if (flags & OMAP_HDMI_SDA_SCL_EXTERNAL_PULLUP) {
- control_i2c_1 = OMAP4_CTRL_MODULE_PAD_CORE_CONTROL_I2C_1;
- reg = omap4_ctrl_pad_readl(control_i2c_1);
- reg |= (OMAP4_HDMI_DDC_SDA_PULLUPRESX_MASK |
- OMAP4_HDMI_DDC_SCL_PULLUPRESX_MASK);
- omap4_ctrl_pad_writel(reg, control_i2c_1);
- }
-}
-
static int omap4_dsi_mux_pads(int dsi_id, unsigned lanes)
{
u32 enable_mask, enable_shift;
return 0;
}
-int __init omap_hdmi_init(enum omap_hdmi_flags flags)
-{
- if (cpu_is_omap44xx()) {
- omap4_hdmi_mux_pads(flags);
- omap4_tpd12s015_mux_pads();
- }
-
- return 0;
-}
-
static int omap_dsi_enable_pads(int dsi_id, unsigned lane_mask)
{
if (cpu_is_omap44xx())
static struct connector_dvi_platform_data omap3_igep2_dvi_connector_pdata = {
.name = "dvi",
.source = "tfp410.0",
- .i2c_bus_num = 3,
+ .i2c_bus_num = 2,
};
static struct platform_device omap3_igep2_dvi_connector_device = {
return ret;
}
+ /*
+ * For some GPMC devices we still need to rely on the bootloader
+ * timings because the devices can be connected via FPGA. So far
+ * the list is smc91x on the omap2 SDP boards, and 8250 on zooms.
+ * REVISIT: Add timing support from slls644g.pdf and from the
+ * lan91c96 manual.
+ */
+ if (of_device_is_compatible(child, "ns16550a") ||
+ of_device_is_compatible(child, "smsc,lan91c94") ||
+ of_device_is_compatible(child, "smsc,lan91c111")) {
+ dev_warn(&pdev->dev,
+ "%s using bootloader timings on CS%d\n",
+ child->name, cs);
+ goto no_timings;
+ }
+
/*
 * FIXME: gpmc_cs_request() will map the CS to an arbitrary
* location in the gpmc address space. When booting with
gpmc_read_timings_dt(child, &gpmc_t);
gpmc_cs_set_timings(cs, &gpmc_t);
+no_timings:
if (of_platform_device_create(child, NULL, &pdev->dev))
return 0;
return ret;
}
-/*
- * REVISIT: Add timing support from slls644g.pdf
- */
-static int gpmc_probe_8250(struct platform_device *pdev,
- struct device_node *child)
-{
- struct resource res;
- unsigned long base;
- int ret, cs;
-
- if (of_property_read_u32(child, "reg", &cs) < 0) {
- dev_err(&pdev->dev, "%s has no 'reg' property\n",
- child->full_name);
- return -ENODEV;
- }
-
- if (of_address_to_resource(child, 0, &res) < 0) {
- dev_err(&pdev->dev, "%s has malformed 'reg' property\n",
- child->full_name);
- return -ENODEV;
- }
-
- ret = gpmc_cs_request(cs, resource_size(&res), &base);
- if (ret < 0) {
- dev_err(&pdev->dev, "cannot request GPMC CS %d\n", cs);
- return ret;
- }
-
- if (of_platform_device_create(child, NULL, &pdev->dev))
- return 0;
-
- dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name);
-
- return -ENODEV;
-}
-
static int gpmc_probe_dt(struct platform_device *pdev)
{
int ret;
else if (of_node_cmp(child->name, "onenand") == 0)
ret = gpmc_probe_onenand_child(pdev, child);
else if (of_node_cmp(child->name, "ethernet") == 0 ||
- of_node_cmp(child->name, "nor") == 0)
+ of_node_cmp(child->name, "nor") == 0 ||
+ of_node_cmp(child->name, "uart") == 0)
ret = gpmc_probe_generic_child(pdev, child);
- else if (of_node_cmp(child->name, "8250") == 0)
- ret = gpmc_probe_8250(pdev, child);
if (WARN(ret < 0, "%s: probing gpmc child %s failed\n",
__func__, child->full_name))
{ }
#endif
+#ifdef CONFIG_SOC_HAS_REALTIME_COUNTER
void set_cntfreq(void);
+#else
+static inline void set_cntfreq(void)
+{
+}
+#endif
+
#endif /* __ASSEMBLER__ */
#endif /* OMAP_ARCH_OMAP_SECURE_H */
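
The set_cntfreq() change applies the common ifdef-stub idiom: declare the real
prototype when CONFIG_SOC_HAS_REALTIME_COUNTER is set, and provide an empty
static inline otherwise, so call sites stay unconditional. A standalone sketch
of the idiom (FEATURE_ENABLED stands in for the Kconfig symbol):

#include <stdio.h>

#define FEATURE_ENABLED 0

#if FEATURE_ENABLED
void set_freq(void);			/* real implementation elsewhere */
#else
static inline void set_freq(void) { }	/* compiles away when disabled */
#endif

int main(void)
{
	set_freq();			/* call site needs no #ifdef */
	puts("ok");
	return 0;
}
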
#include "iomap.h"
#include "common.h"
#include "mmc.h"
-#include "hsmmc.h"
#include "prminst44xx.h"
#include "prcm_mpu44xx.h"
#include "omap4-sar-layout.h"
omap_wakeupgen_init();
irqchip_init();
}
-
-#if defined(CONFIG_MMC_OMAP_HS) || defined(CONFIG_MMC_OMAP_HS_MODULE)
-static int omap4_twl6030_hsmmc_late_init(struct device *dev)
-{
- int irq = 0;
- struct platform_device *pdev = container_of(dev,
- struct platform_device, dev);
- struct omap_mmc_platform_data *pdata = dev->platform_data;
-
- /* Setting MMC1 Card detect Irq */
- if (pdev->id == 0) {
- irq = twl6030_mmc_card_detect_config();
- if (irq < 0) {
- dev_err(dev, "%s: Error card detect config(%d)\n",
- __func__, irq);
- return irq;
- }
- pdata->slots[0].card_detect_irq = irq;
- pdata->slots[0].card_detect = twl6030_mmc_card_detect;
- }
- return 0;
-}
-
-static __init void omap4_twl6030_hsmmc_set_late_init(struct device *dev)
-{
- struct omap_mmc_platform_data *pdata;
-
- /* dev can be null if CONFIG_MMC_OMAP_HS is not set */
- if (!dev) {
- pr_err("Failed %s\n", __func__);
- return;
- }
- pdata = dev->platform_data;
- pdata->init = omap4_twl6030_hsmmc_late_init;
-}
-
-int __init omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers)
-{
- struct omap2_hsmmc_info *c;
-
- omap_hsmmc_init(controllers);
- for (c = controllers; c->mmc; c++) {
- /* pdev can be null if CONFIG_MMC_OMAP_HS is not set */
- if (!c->pdev)
- continue;
- omap4_twl6030_hsmmc_set_late_init(&c->pdev->dev);
- }
-
- return 0;
-}
-#else
-int __init omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers)
-{
- return 0;
-}
-#endif
odbfd_exit1:
kfree(hwmods);
odbfd_exit:
+	/* if the data or we are at fault, load up a fail handler */
+ if (ret)
+ pdev->dev.pm_domain = &omap_device_fail_pm_domain;
+
return ret;
}
return pm_generic_runtime_resume(dev);
}
+
+static int _od_fail_runtime_suspend(struct device *dev)
+{
+ dev_warn(dev, "%s: FIXME: missing hwmod/omap_dev info\n", __func__);
+ return -ENODEV;
+}
+
+static int _od_fail_runtime_resume(struct device *dev)
+{
+ dev_warn(dev, "%s: FIXME: missing hwmod/omap_dev info\n", __func__);
+ return -ENODEV;
+}
+
#endif
#ifdef CONFIG_SUSPEND
#define _od_resume_noirq NULL
#endif
+struct dev_pm_domain omap_device_fail_pm_domain = {
+ .ops = {
+ SET_RUNTIME_PM_OPS(_od_fail_runtime_suspend,
+ _od_fail_runtime_resume, NULL)
+ }
+};
+
struct dev_pm_domain omap_device_pm_domain = {
.ops = {
SET_RUNTIME_PM_OPS(_od_runtime_suspend, _od_runtime_resume,
#include "omap_hwmod.h"
extern struct dev_pm_domain omap_device_pm_domain;
+extern struct dev_pm_domain omap_device_fail_pm_domain;
/* omap_device._state values */
#define OMAP_DEVICE_STATE_UNKNOWN 0
}
/**
- * _set_softreset: set OCP_SYSCONFIG.CLOCKACTIVITY bits in @v
+ * _set_softreset: set OCP_SYSCONFIG.SOFTRESET bit in @v
* @oh: struct omap_hwmod *
* @v: pointer to register contents to modify
*
return 0;
}
+/**
+ * _clear_softreset: clear OCP_SYSCONFIG.SOFTRESET bit in @v
+ * @oh: struct omap_hwmod *
+ * @v: pointer to register contents to modify
+ *
+ * Clear the SOFTRESET bit in @v for hwmod @oh. Returns -EINVAL upon
+ * error or 0 upon success.
+ */
+static int _clear_softreset(struct omap_hwmod *oh, u32 *v)
+{
+ u32 softrst_mask;
+
+ if (!oh->class->sysc ||
+ !(oh->class->sysc->sysc_flags & SYSC_HAS_SOFTRESET))
+ return -EINVAL;
+
+ if (!oh->class->sysc->sysc_fields) {
+ WARN(1,
+ "omap_hwmod: %s: sysc_fields absent for sysconfig class\n",
+ oh->name);
+ return -EINVAL;
+ }
+
+ softrst_mask = (0x1 << oh->class->sysc->sysc_fields->srst_shift);
+
+ *v &= ~softrst_mask;
+
+ return 0;
+}
+
/**
* _wait_softreset_complete - wait for an OCP softreset to complete
* @oh: struct omap_hwmod * to wait on
pr_warning("omap_hwmod: %s: cannot clk_get interface_clk %s\n",
oh->name, os->clk);
ret = -EINVAL;
+ continue;
}
os->_clk = c;
/*
pr_warning("omap_hwmod: %s: cannot clk_get opt_clk %s\n",
oh->name, oc->clk);
ret = -EINVAL;
+ continue;
}
oc->_clk = c;
/*
ret = _set_softreset(oh, &v);
if (ret)
goto dis_opt_clks;
+
+ _write_sysconfig(v, oh);
+ ret = _clear_softreset(oh, &v);
+ if (ret)
+ goto dis_opt_clks;
+
_write_sysconfig(v, oh);
if (oh->class->sysc->srst_udelay)
return 0;
}
+static int of_dev_find_hwmod(struct device_node *np,
+ struct omap_hwmod *oh)
+{
+ int count, i, res;
+ const char *p;
+
+ count = of_property_count_strings(np, "ti,hwmods");
+ if (count < 1)
+ return -ENODEV;
+
+ for (i = 0; i < count; i++) {
+ res = of_property_read_string_index(np, "ti,hwmods",
+ i, &p);
+ if (res)
+ continue;
+ if (!strcmp(p, oh->name)) {
+ pr_debug("omap_hwmod: dt %s[%i] uses hwmod %s\n",
+ np->name, i, oh->name);
+ return i;
+ }
+ }
+
+ return -ENODEV;
+}
+
/**
* of_dev_hwmod_lookup - look up needed hwmod from dt blob
* @np: struct device_node *
* @oh: struct omap_hwmod *
+ * @index: index of the entry found
+ * @found: struct device_node * found or NULL
*
 * Parse the dt blob and find the needed hwmod. The function recurses
 * to handle hierarchical dt blob parsing.
- * Return: The device node on success or NULL on failure.
+ * Return: 0 on success, -ENODEV when not found.
*/
-static struct device_node *of_dev_hwmod_lookup(struct device_node *np,
- struct omap_hwmod *oh)
+static int of_dev_hwmod_lookup(struct device_node *np,
+ struct omap_hwmod *oh,
+ int *index,
+ struct device_node **found)
{
- struct device_node *np0 = NULL, *np1 = NULL;
- const char *p;
+ struct device_node *np0 = NULL;
+ int res;
+
+ res = of_dev_find_hwmod(np, oh);
+ if (res >= 0) {
+ *found = np;
+ *index = res;
+ return 0;
+ }
for_each_child_of_node(np, np0) {
- if (of_find_property(np0, "ti,hwmods", NULL)) {
- p = of_get_property(np0, "ti,hwmods", NULL);
- if (!strcmp(p, oh->name))
- return np0;
- np1 = of_dev_hwmod_lookup(np0, oh);
- if (np1)
- return np1;
+ struct device_node *fc;
+ int i;
+
+ res = of_dev_hwmod_lookup(np0, oh, &i, &fc);
+ if (res == 0) {
+ *found = fc;
+ *index = i;
+ return 0;
}
}
- return NULL;
+
+ *found = NULL;
+ *index = 0;
+
+ return -ENODEV;
}
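
of_dev_find_hwmod() returns the index of the hwmod's name within the
"ti,hwmods" string list, and _init_mpu_rt_base() later uses that index to pick
the matching "reg" entry (of_iomap(np, index + oh->mpu_rt_idx)). A userspace
model of the index lookup, assuming the property behaves like a plain string
array (-1 stands in for -ENODEV):

#include <stdio.h>
#include <string.h>

static int find_index(const char *list[], int count, const char *name)
{
	int i;

	for (i = 0; i < count; i++)
		if (!strcmp(list[i], name))
			return i;
	return -1;
}

int main(void)
{
	const char *hwmods[] = { "mmu_isp", "mmu_iva" };

	/* prints 1: the second "reg" entry would then be mapped */
	printf("index = %d\n", find_index(hwmods, 2, "mmu_iva"));
	return 0;
}
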
/**
* _init_mpu_rt_base - populate the virtual address for a hwmod
* @oh: struct omap_hwmod * to locate the virtual address
* @data: (unused, caller should pass NULL)
+ * @index: index of the reg entry iospace in device tree
* @np: struct device_node * of the IP block's device node in the DT data
*
* Cache the virtual address used by the MPU to access this IP block's
* -ENXIO on absent or invalid register target address space.
*/
static int __init _init_mpu_rt_base(struct omap_hwmod *oh, void *data,
- struct device_node *np)
+ int index, struct device_node *np)
{
struct omap_hwmod_addr_space *mem;
void __iomem *va_start = NULL;
if (!np)
return -ENXIO;
- va_start = of_iomap(np, oh->mpu_rt_idx);
+ va_start = of_iomap(np, index + oh->mpu_rt_idx);
} else {
va_start = ioremap(mem->pa_start, mem->pa_end - mem->pa_start);
}
if (!va_start) {
- pr_err("omap_hwmod: %s: Could not ioremap\n", oh->name);
+ if (mem)
+ pr_err("omap_hwmod: %s: Could not ioremap\n", oh->name);
+ else
+ pr_err("omap_hwmod: %s: Missing dt reg%i for %s\n",
+ oh->name, index, np->full_name);
return -ENXIO;
}
*/
static int __init _init(struct omap_hwmod *oh, void *data)
{
- int r;
+ int r, index;
struct device_node *np = NULL;
if (oh->_state != _HWMOD_STATE_REGISTERED)
return 0;
- if (of_have_populated_dt())
- np = of_dev_hwmod_lookup(of_find_node_by_name(NULL, "ocp"), oh);
+ if (of_have_populated_dt()) {
+ struct device_node *bus;
+
+ bus = of_find_node_by_name(NULL, "ocp");
+ if (!bus)
+ return -ENODEV;
+
+ r = of_dev_hwmod_lookup(bus, oh, &index, &np);
+ if (r)
+ pr_debug("omap_hwmod: %s missing dt data\n", oh->name);
+ else if (np && index)
+ pr_warn("omap_hwmod: %s using broken dt data from %s\n",
+ oh->name, np->name);
+ }
if (oh->class->sysc) {
- r = _init_mpu_rt_base(oh, NULL, np);
+ r = _init_mpu_rt_base(oh, NULL, index, np);
if (r < 0) {
WARN(1, "omap_hwmod: %s: doesn't have mpu register target base\n",
oh->name);
goto error;
_write_sysconfig(v, oh);
+ ret = _clear_softreset(oh, &v);
+ if (ret)
+ goto error;
+ _write_sysconfig(v, oh);
+
error:
return ret;
}
/* gpmc */
static struct omap_hwmod_irq_info omap2xxx_gpmc_irqs[] = {
- { .irq = 20 },
+ { .irq = 20 + OMAP_INTC_START, },
{ .irq = -1 }
};
};
static struct omap_hwmod_irq_info omap2_rng_mpu_irqs[] = {
- { .irq = 52 },
+ { .irq = 52 + OMAP_INTC_START, },
{ .irq = -1 }
};
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_CLOCKACTIVITY |
SYSC_HAS_SIDLEMODE | SYSC_HAS_ENAWAKEUP |
- SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
+ SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE |
+ SYSS_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
};
/*
};
static struct omap_hwmod_irq_info omap3xxx_gpmc_irqs[] = {
- { .irq = 20 },
+ { .irq = 20 + OMAP_INTC_START, },
{ .irq = -1 }
};
static struct omap_hwmod omap3xxx_mmu_isp_hwmod;
static struct omap_hwmod_irq_info omap3xxx_mmu_isp_irqs[] = {
- { .irq = 24 },
+ { .irq = 24 + OMAP_INTC_START, },
{ .irq = -1 }
};
static struct omap_hwmod omap3xxx_mmu_iva_hwmod;
static struct omap_hwmod_irq_info omap3xxx_mmu_iva_irqs[] = {
- { .irq = 28 },
+ { .irq = 28 + OMAP_INTC_START, },
{ .irq = -1 }
};
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE |
- SYSC_HAS_SOFTRESET),
+ SYSC_HAS_SOFTRESET | SYSC_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO |
MSTANDBY_SMART | MSTANDBY_SMART_WKUP),
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
};
/*
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_RESET_STATUS |
- SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
+ SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
+ SYSC_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO |
MSTANDBY_SMART | MSTANDBY_SMART_WKUP),
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
.main_clk = "l3init_60m_fclk",
.prcm = {
.omap4 = {
.class = &dra7xx_uart_hwmod_class,
.clkdm_name = "l4per_clkdm",
.main_clk = "uart1_gfclk_mux",
- .flags = HWMOD_SWSUP_SIDLE_ACT,
+ .flags = HWMOD_SWSUP_SIDLE_ACT | DEBUG_OMAP2UART1_FLAGS,
.prcm = {
.omap4 = {
.clkctrl_offs = DRA7XX_CM_L4PER_UART1_CLKCTRL_OFFSET,
static struct pdata_init pdata_quirks[] __initdata = {
#ifdef CONFIG_ARCH_OMAP3
+ { "nokia,omap3-n900", hsmmc2_internal_input_clk, },
{ "nokia,omap3-n9", hsmmc2_internal_input_clk, },
{ "nokia,omap3-n950", hsmmc2_internal_input_clk, },
{ "isee,omap3-igep0020", omap3_igep0020_legacy_init, },
* will hang the system.
*/
pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
- ret = _omap_save_secure_sram((u32 *)
+ ret = _omap_save_secure_sram((u32 *)(unsigned long)
__pa(omap3_secure_ram_storage));
pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);
/* Following is for error tracking, it should not happen */
for (i = 0; i < pwrdm->banks; i++)
pwrdm->ret_mem_off_counter[i] = 0;
- arch_pwrdm->pwrdm_wait_transition(pwrdm);
+ if (arch_pwrdm && arch_pwrdm->pwrdm_wait_transition)
+ arch_pwrdm->pwrdm_wait_transition(pwrdm);
pwrdm->state = pwrdm_read_pwrst(pwrdm);
pwrdm->state_counter[pwrdm->state] = 1;
extern u32 omap4_prm_vcvp_rmw(u32 mask, u32 bits, u8 offset);
#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \
- defined(CONFIG_SOC_DRA7XX)
+ defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX)
void omap44xx_prm_reconfigure_io_chain(void);
#else
static inline void omap44xx_prm_reconfigure_io_chain(void)
* published by the Free Software Foundation.
*/
+#include <mach/irqs.h>
+
#define LUBBOCK_ETH_PHYS PXA_CS3_PHYS
#define LUBBOCK_FPGA_PHYS PXA_CS2_PHYS
#include <mach/regs-ost.h>
#include <mach/reset.h>
+#include <mach/smemc.h>
unsigned int reset_status;
EXPORT_SYMBOL(reset_status);
writel_relaxed(OSSR_M3, OSSR);
/* ... in 100 ms */
writel_relaxed(readl_relaxed(OSCR) + 368640, OSMR3);
+ /*
+	 * SDRAM hangs on watchdog reset on Marvell PXA270 (erratum 71);
+	 * put the SDRAM into self-refresh to prevent that.
+ */
+ while (1)
+ writel_relaxed(MDREFR_SLFRSH, MDREFR);
}
void pxa_restart(enum reboot_mode mode, const char *cmd)
break;
}
}
-
* Tosa Keyboard
*/
static const uint32_t tosakbd_keymap[] = {
- KEY(0, 2, KEY_W),
- KEY(0, 6, KEY_K),
- KEY(0, 7, KEY_BACKSPACE),
- KEY(0, 8, KEY_P),
- KEY(1, 1, KEY_Q),
- KEY(1, 2, KEY_E),
- KEY(1, 3, KEY_T),
- KEY(1, 4, KEY_Y),
- KEY(1, 6, KEY_O),
- KEY(1, 7, KEY_I),
- KEY(1, 8, KEY_COMMA),
- KEY(2, 1, KEY_A),
- KEY(2, 2, KEY_D),
- KEY(2, 3, KEY_G),
- KEY(2, 4, KEY_U),
- KEY(2, 6, KEY_L),
- KEY(2, 7, KEY_ENTER),
- KEY(2, 8, KEY_DOT),
- KEY(3, 1, KEY_Z),
- KEY(3, 2, KEY_C),
- KEY(3, 3, KEY_V),
- KEY(3, 4, KEY_J),
- KEY(3, 5, TOSA_KEY_ADDRESSBOOK),
- KEY(3, 6, TOSA_KEY_CANCEL),
- KEY(3, 7, TOSA_KEY_CENTER),
- KEY(3, 8, TOSA_KEY_OK),
- KEY(3, 9, KEY_LEFTSHIFT),
- KEY(4, 1, KEY_S),
- KEY(4, 2, KEY_R),
- KEY(4, 3, KEY_B),
- KEY(4, 4, KEY_N),
- KEY(4, 5, TOSA_KEY_CALENDAR),
- KEY(4, 6, TOSA_KEY_HOMEPAGE),
- KEY(4, 7, KEY_LEFTCTRL),
- KEY(4, 8, TOSA_KEY_LIGHT),
- KEY(4, 10, KEY_RIGHTSHIFT),
- KEY(5, 1, KEY_TAB),
- KEY(5, 2, KEY_SLASH),
- KEY(5, 3, KEY_H),
- KEY(5, 4, KEY_M),
- KEY(5, 5, TOSA_KEY_MENU),
- KEY(5, 7, KEY_UP),
- KEY(5, 11, TOSA_KEY_FN),
- KEY(6, 1, KEY_X),
- KEY(6, 2, KEY_F),
- KEY(6, 3, KEY_SPACE),
- KEY(6, 4, KEY_APOSTROPHE),
- KEY(6, 5, TOSA_KEY_MAIL),
- KEY(6, 6, KEY_LEFT),
- KEY(6, 7, KEY_DOWN),
- KEY(6, 8, KEY_RIGHT),
+ KEY(0, 1, KEY_W),
+ KEY(0, 5, KEY_K),
+ KEY(0, 6, KEY_BACKSPACE),
+ KEY(0, 7, KEY_P),
+ KEY(1, 0, KEY_Q),
+ KEY(1, 1, KEY_E),
+ KEY(1, 2, KEY_T),
+ KEY(1, 3, KEY_Y),
+ KEY(1, 5, KEY_O),
+ KEY(1, 6, KEY_I),
+ KEY(1, 7, KEY_COMMA),
+ KEY(2, 0, KEY_A),
+ KEY(2, 1, KEY_D),
+ KEY(2, 2, KEY_G),
+ KEY(2, 3, KEY_U),
+ KEY(2, 5, KEY_L),
+ KEY(2, 6, KEY_ENTER),
+ KEY(2, 7, KEY_DOT),
+ KEY(3, 0, KEY_Z),
+ KEY(3, 1, KEY_C),
+ KEY(3, 2, KEY_V),
+ KEY(3, 3, KEY_J),
+ KEY(3, 4, TOSA_KEY_ADDRESSBOOK),
+ KEY(3, 5, TOSA_KEY_CANCEL),
+ KEY(3, 6, TOSA_KEY_CENTER),
+ KEY(3, 7, TOSA_KEY_OK),
+ KEY(3, 8, KEY_LEFTSHIFT),
+ KEY(4, 0, KEY_S),
+ KEY(4, 1, KEY_R),
+ KEY(4, 2, KEY_B),
+ KEY(4, 3, KEY_N),
+ KEY(4, 4, TOSA_KEY_CALENDAR),
+ KEY(4, 5, TOSA_KEY_HOMEPAGE),
+ KEY(4, 6, KEY_LEFTCTRL),
+ KEY(4, 7, TOSA_KEY_LIGHT),
+ KEY(4, 9, KEY_RIGHTSHIFT),
+ KEY(5, 0, KEY_TAB),
+ KEY(5, 1, KEY_SLASH),
+ KEY(5, 2, KEY_H),
+ KEY(5, 3, KEY_M),
+ KEY(5, 4, TOSA_KEY_MENU),
+ KEY(5, 6, KEY_UP),
+ KEY(5, 10, TOSA_KEY_FN),
+ KEY(6, 0, KEY_X),
+ KEY(6, 1, KEY_F),
+ KEY(6, 2, KEY_SPACE),
+ KEY(6, 3, KEY_APOSTROPHE),
+ KEY(6, 4, TOSA_KEY_MAIL),
+ KEY(6, 5, KEY_LEFT),
+ KEY(6, 6, KEY_DOWN),
+ KEY(6, 7, KEY_RIGHT),
};
static struct matrix_keymap_data tosakbd_keymap_data = {
* published by the Free Software Foundation.
*/
-#include <linux/clk-provider.h>
-#include <linux/irqchip.h>
#include <linux/of_platform.h>
#include <asm/mach/arch.h>
panic("SoC is not S3C64xx!");
}
-static void __init s3c64xx_dt_init_irq(void)
-{
- of_clk_init(NULL);
- samsung_wdt_reset_of_init();
- irqchip_init();
-};
-
static void __init s3c64xx_dt_init_machine(void)
{
+ samsung_wdt_reset_of_init();
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}
/* Maintainer: Tomasz Figa <tomasz.figa@gmail.com> */
.dt_compat = s3c64xx_dt_compat,
.map_io = s3c64xx_dt_map_io,
- .init_irq = s3c64xx_dt_init_irq,
.init_machine = s3c64xx_dt_init_machine,
.restart = s3c64xx_dt_restart,
MACHINE_END
REGULATOR_SUPPLY("vqmmc", "sh_mmcif"),
};
+/* Fixed 5.0V regulator used by the LCD backlight */
+static struct regulator_consumer_supply fixed5v0_power_consumers[] = {
+ REGULATOR_SUPPLY("power", "pwm-backlight.0"),
+};
+
/* Fixed 3.3V regulator to be used by SDHI0 */
static struct regulator_consumer_supply vcc_sdhi0_consumers[] = {
REGULATOR_SUPPLY("vmmc", "sh_mobile_sdhi.0"),
regulator_register_always_on(0, "fixed-3.3V", fixed3v3_power_consumers,
ARRAY_SIZE(fixed3v3_power_consumers), 3300000);
+ regulator_register_always_on(3, "fixed-5.0V", fixed5v0_power_consumers,
+ ARRAY_SIZE(fixed5v0_power_consumers), 5000000);
pinctrl_register_mappings(eva_pinctrl_map, ARRAY_SIZE(eva_pinctrl_map));
pwm_add_table(pwm_lookup, ARRAY_SIZE(pwm_lookup));
.id = i,
.data = &rsnd_card_info[i],
.size_data = sizeof(struct asoc_simple_card_info),
- .dma_mask = ~0,
+ .dma_mask = DMA_BIT_MASK(32),
};
platform_device_register_full(&cardinfo);
{
lager_add_standard_devices();
- phy_register_fixup_for_id("r8a7790-ether-ff:01", lager_ksz8041_fixup);
+ if (IS_ENABLED(CONFIG_PHYLIB))
+ phy_register_fixup_for_id("r8a7790-ether-ff:01",
+ lager_ksz8041_fixup);
}
static const char * const lager_boards_compat_dt[] __initconst = {
select GENERIC_CLOCKEVENTS
select GPIO_PL061 if GPIOLIB
select HAVE_ARM_SCU
+ select HAVE_ARM_TWD if SMP
select HAVE_SMP
select MFD_SYSCON
select SPARSE_IRQ
switch (tegra_chip_id) {
case TEGRA20:
tegra20_fuse_init_randomness();
+ break;
case TEGRA30:
case TEGRA114:
default:
tegra30_fuse_init_randomness();
+ break;
}
pr_info("Tegra Revision: %s SKU: %d CPU Process: %d Core Process: %d\n",
tegra_sku_id, tegra_cpu_process_id,
tegra_core_process_id);
}
-
-unsigned long long tegra_chip_uid(void)
-{
- unsigned long long lo, hi;
-
- lo = tegra_fuse_readl(FUSE_UID_LOW);
- hi = tegra_fuse_readl(FUSE_UID_HIGH);
- return (hi << 32ull) | lo;
-}
-EXPORT_SYMBOL(tegra_chip_uid);
/* Requires call-back bindings. */
OF_DEV_AUXDATA("arm,cortex-a9-pmu", 0, "arm-pmu", &db8500_pmu_platdata),
/* Requires DMA bindings. */
+ OF_DEV_AUXDATA("arm,pl18x", 0x80126000, "sdi0", &mop500_sdi0_data),
+ OF_DEV_AUXDATA("arm,pl18x", 0x80118000, "sdi1", &mop500_sdi1_data),
+ OF_DEV_AUXDATA("arm,pl18x", 0x80005000, "sdi2", &mop500_sdi2_data),
+ OF_DEV_AUXDATA("arm,pl18x", 0x80114000, "sdi4", &mop500_sdi4_data),
OF_DEV_AUXDATA("stericsson,ux500-msp-i2s", 0x80123000,
"ux500-msp-i2s.0", &msp0_platform_data),
OF_DEV_AUXDATA("stericsson,ux500-msp-i2s", 0x80124000,
#define A15_BX_ADDR0 0x68
#define A7_BX_ADDR0 0x78
+/* SPC CPU/cluster reset status */
+#define STANDBYWFI_STAT 0x3c
+#define STANDBYWFI_STAT_A15_CPU_MASK(cpu) (1 << (cpu))
+#define STANDBYWFI_STAT_A7_CPU_MASK(cpu) (1 << (3 + (cpu)))
+
/* SPC system config interface registers */
#define SYSCFG_WDATA 0x70
#define SYSCFG_RDATA 0x74
writel_relaxed(enable, info->baseaddr + pwdrn_reg);
}
+static u32 standbywfi_cpu_mask(u32 cpu, u32 cluster)
+{
+ return cluster_is_a15(cluster) ?
+ STANDBYWFI_STAT_A15_CPU_MASK(cpu)
+ : STANDBYWFI_STAT_A7_CPU_MASK(cpu);
+}
+
+/**
+ * ve_spc_cpu_in_wfi(u32 cpu, u32 cluster)
+ *
+ * @cpu: mpidr[7:0] bitfield describing CPU affinity level within cluster
+ * @cluster: mpidr[15:8] bitfield describing cluster affinity level
+ *
+ * @return: non-zero if and only if the specified CPU is in WFI
+ *
+ * Take care when interpreting the result of this function: a CPU might
+ * be in WFI temporarily due to idle, and is not necessarily safely
+ * parked.
+ */
+int ve_spc_cpu_in_wfi(u32 cpu, u32 cluster)
+{
+ int ret;
+ u32 mask = standbywfi_cpu_mask(cpu, cluster);
+
+ if (cluster >= MAX_CLUSTERS)
+ return 1;
+
+ ret = readl_relaxed(info->baseaddr + STANDBYWFI_STAT);
+
+ pr_debug("%s: PCFGREG[0x%X] = 0x%08X, mask = 0x%X\n",
+ __func__, STANDBYWFI_STAT, ret, mask);
+
+ return ret & mask;
+}
+
static int ve_spc_get_performance(int cluster, u32 *freq)
{
struct ve_spc_opp *opps = info->opps[cluster];
void ve_spc_cpu_wakeup_irq(u32 cluster, u32 cpu, bool set);
void ve_spc_set_resume_addr(u32 cluster, u32 cpu, u32 addr);
void ve_spc_powerdown(u32 cluster, bool enable);
+int ve_spc_cpu_in_wfi(u32 cpu, u32 cluster);
#endif
* published by the Free Software Foundation.
*/
+#include <linux/delay.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include "spc.h"
/* SCC conf registers */
+#define RESET_CTRL 0x018
+#define RESET_A15_NCORERESET(cpu) (1 << (2 + (cpu)))
+#define RESET_A7_NCORERESET(cpu) (1 << (16 + (cpu)))
+
#define A15_CONF 0x400
#define A7_CONF 0x500
#define SYS_INFO 0x700
#define SPC_BASE 0xb00
+static void __iomem *scc;
+
/*
* We can't use regular spinlocks. In the switcher case, it is possible
* for an outbound CPU to call power_down() after its inbound counterpart
tc2_pm_down(0);
}
+static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster)
+{
+ u32 mask = cluster ?
+ RESET_A7_NCORERESET(cpu)
+ : RESET_A15_NCORERESET(cpu);
+
+ return !(readl_relaxed(scc + RESET_CTRL) & mask);
+}
+
+#define POLL_MSEC 10
+#define TIMEOUT_MSEC 1000
+
+static int tc2_pm_power_down_finish(unsigned int cpu, unsigned int cluster)
+{
+ unsigned tries;
+
+ pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+ BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
+
+ for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) {
+ /*
+ * Only examine the hardware state if the target CPU has
+ * caught up at least as far as tc2_pm_down():
+ */
+ if (ACCESS_ONCE(tc2_pm_use_count[cpu][cluster]) == 0) {
+ pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n",
+ __func__, cpu, cluster,
+ readl_relaxed(scc + RESET_CTRL));
+
+ /*
+ * We need the CPU to reach WFI, but the power
+ * controller may put the cluster in reset and
+ * power it off as soon as that happens, before
+ * we have a chance to see STANDBYWFI.
+ *
+ * So we need to check for both conditions:
+ */
+ if (tc2_core_in_reset(cpu, cluster) ||
+ ve_spc_cpu_in_wfi(cpu, cluster))
+ return 0; /* success: the CPU is halted */
+ }
+
+ /* Otherwise, wait and retry: */
+ msleep(POLL_MSEC);
+ }
+
+ return -ETIMEDOUT; /* timeout */
+}
+
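
tc2_pm_power_down_finish() above is a poll-until-or-timeout loop: recheck
every POLL_MSEC, give up after TIMEOUT_MSEC, and only consult the hardware
once the target CPU's use count has dropped to zero. A userspace skeleton of
the same loop shape, where condition_met() stands in for the reset/WFI checks
and msleep() is modelled with usleep():

#include <stdio.h>
#include <unistd.h>
#include <errno.h>

#define POLL_MSEC	10
#define TIMEOUT_MSEC	1000

static int condition_met(int tries)
{
	return tries >= 3;	/* pretend the CPU halts on the 4th check */
}

static int wait_for_halt(void)
{
	int tries;

	for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) {
		if (condition_met(tries))
			return 0;
		usleep(POLL_MSEC * 1000);	/* wait and retry */
	}
	return -ETIMEDOUT;
}

int main(void)
{
	printf("wait_for_halt() = %d\n", wait_for_halt());
	return 0;
}
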
static void tc2_pm_suspend(u64 residency)
{
unsigned int mpidr, cpu, cluster;
}
static const struct mcpm_platform_ops tc2_pm_power_ops = {
- .power_up = tc2_pm_power_up,
- .power_down = tc2_pm_power_down,
- .suspend = tc2_pm_suspend,
- .powered_up = tc2_pm_powered_up,
+ .power_up = tc2_pm_power_up,
+ .power_down = tc2_pm_power_down,
+ .power_down_finish = tc2_pm_power_down_finish,
+ .suspend = tc2_pm_suspend,
+ .powered_up = tc2_pm_powered_up,
};
static bool __init tc2_pm_usage_count_init(void)
static int __init tc2_pm_init(void)
{
int ret, irq;
- void __iomem *scc;
u32 a15_cluster_id, a7_cluster_id, sys_info;
struct device_node *np;
*
* DMA uncached mapping support.
*/
+#include <linux/bootmem.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/gfp.h>
};
EXPORT_SYMBOL(arm_coherent_dma_ops);
+static int __dma_supported(struct device *dev, u64 mask, bool warn)
+{
+ unsigned long max_dma_pfn;
+
+ /*
+ * If the mask allows for more memory than we can address,
+ * and we actually have that much memory, then we must
+ * indicate that DMA to this device is not supported.
+ */
+ if (sizeof(mask) != sizeof(dma_addr_t) &&
+ mask > (dma_addr_t)~0 &&
+ dma_to_pfn(dev, ~0) < max_pfn) {
+ if (warn) {
+ dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
+ mask);
+ dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n");
+ }
+ return 0;
+ }
+
+ max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
+
+ /*
+ * Translate the device's DMA mask to a PFN limit. This
+ * PFN number includes the page which we can DMA to.
+ */
+ if (dma_to_pfn(dev, mask) < max_dma_pfn) {
+ if (warn)
+ dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
+ mask,
+ dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
+ max_dma_pfn + 1);
+ return 0;
+ }
+
+ return 1;
+}
+
static u64 get_coherent_dma_mask(struct device *dev)
{
u64 mask = (u64)DMA_BIT_MASK(32);
return 0;
}
- /*
- * If the mask allows for more memory than we can address,
- * and we actually have that much memory, then fail the
- * allocation.
- */
- if (sizeof(mask) != sizeof(dma_addr_t) &&
- mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) > arm_dma_pfn_limit) {
- dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
- mask);
- dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n");
- return 0;
- }
-
- /*
- * Now check that the mask, when translated to a PFN,
- * fits within the allowable addresses which we can
- * allocate.
- */
- if (dma_to_pfn(dev, mask) < arm_dma_pfn_limit) {
- dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
- mask,
- dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
- arm_dma_pfn_limit + 1);
+ if (!__dma_supported(dev, mask, true))
return 0;
- }
}
return mask;
*/
int dma_supported(struct device *dev, u64 mask)
{
- unsigned long limit;
-
- /*
- * If the mask allows for more memory than we can address,
- * and we actually have that much memory, then we must
- * indicate that DMA to this device is not supported.
- */
- if (sizeof(mask) != sizeof(dma_addr_t) &&
- mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) > arm_dma_pfn_limit)
- return 0;
-
- /*
- * Translate the device's DMA mask to a PFN limit. This
- * PFN number includes the page which we can DMA to.
- */
- limit = dma_to_pfn(dev, mask);
-
- if (limit < arm_dma_pfn_limit)
- return 0;
-
- return 1;
+ return __dma_supported(dev, mask, false);
}
EXPORT_SYMBOL(dma_supported);
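
__dma_supported() folds the two previously duplicated checks into one helper:
reject a mask wider than dma_addr_t when RAM actually extends beyond what
dma_addr_t can address, and reject a mask whose PFN limit falls below
min(max_pfn, arm_dma_pfn_limit). A userspace sketch of the same arithmetic,
assuming an identity dma-to-phys mapping (the real dma_to_pfn() applies a
per-device offset); all values below are made up for illustration:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

typedef uint32_t dma_addr_t;			/* pretend 32-bit dma_addr_t */

static unsigned long max_pfn = 0x100000;	/* 4 GiB of RAM */
static unsigned long arm_dma_pfn_limit = 0xf0000; /* top of the DMA zone */

static unsigned long dma_to_pfn(uint64_t addr)
{
	return (unsigned long)(addr >> PAGE_SHIFT);
}

static int dma_supported(uint64_t mask)
{
	unsigned long max_dma_pfn;

	/* mask wider than dma_addr_t while RAM extends beyond it? */
	if (sizeof(mask) != sizeof(dma_addr_t) &&
	    mask > (dma_addr_t)~0 &&
	    dma_to_pfn((dma_addr_t)~0) < max_pfn)
		return 0;

	max_dma_pfn = max_pfn < arm_dma_pfn_limit ? max_pfn : arm_dma_pfn_limit;

	/* mask must reach at least as far as the DMA zone */
	if (dma_to_pfn(mask) < max_dma_pfn)
		return 0;

	return 1;
}

int main(void)
{
	printf("64-bit mask: %d\n", dma_supported(~0ULL));	   /* 0 */
	printf("32-bit mask: %d\n", dma_supported(0xffffffffULL)); /* 1 */
	return 0;
}
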
unsigned long i;
if (cache_is_vipt_nonaliasing()) {
for (i = 0; i < (1 << compound_order(page)); i++) {
- void *addr = kmap_atomic(page);
+ void *addr = kmap_atomic(page + i);
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
kunmap_atomic(addr);
}
} else {
for (i = 0; i < (1 << compound_order(page)); i++) {
- void *addr = kmap_high_get(page);
+ void *addr = kmap_high_get(page + i);
if (addr) {
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
- kunmap_high(page);
+ kunmap_high(page + i);
}
}
}
#ifdef CONFIG_ZONE_DMA
if (mdesc->dma_zone_size) {
arm_dma_zone_size = mdesc->dma_zone_size;
- arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
+ arm_dma_limit = __pv_phys_offset + arm_dma_zone_size - 1;
} else
arm_dma_limit = 0xffffffff;
arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
- info.low_limit = PAGE_SIZE;
+ info.low_limit = FIRST_USER_ADDRESS;
info.high_limit = mm->mmap_base;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_WB;
break;
}
- printk("Memory policy: ECC %sabled, Data cache %s\n",
- ecc_mask ? "en" : "dis", cp->policy);
+ pr_info("Memory policy: %sData cache %s\n",
+ ecc_mask ? "ECC enabled, " : "", cp->policy);
for (i = 0; i < ARRAY_SIZE(mem_types); i++) {
struct mem_type *t = &mem_types[i];
#include <asm/mach/arch.h>
#include <asm/cputype.h>
#include <asm/mpu.h>
+#include <asm/procinfo.h>
#include "mm.h"
init_pud = pud_offset(init_pgd, 0);
init_pmd = pmd_offset(init_pud, 0);
init_pte = pte_offset_map(init_pmd, 0);
- set_pte_ext(new_pte, *init_pte, 0);
+ set_pte_ext(new_pte + 0, init_pte[0], 0);
+ set_pte_ext(new_pte + 1, init_pte[1], 0);
pte_unmap(init_pte);
pte_unmap(new_pte);
}
/* Suspend/resume support: derived from arch/arm/mach-s5pv210/sleep.S */
.globl cpu_v7_suspend_size
-.equ cpu_v7_suspend_size, 4 * 8
+.equ cpu_v7_suspend_size, 4 * 9
#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_v7_do_suspend)
stmfd sp!, {r4 - r10, lr}
stmia r0!, {r4 - r5}
#ifdef CONFIG_MMU
mrc p15, 0, r6, c3, c0, 0 @ Domain ID
+#ifdef CONFIG_ARM_LPAE
+ mrrc p15, 1, r5, r7, c2 @ TTB 1
+#else
mrc p15, 0, r7, c2, c0, 1 @ TTB 1
+#endif
mrc p15, 0, r11, c2, c0, 2 @ TTB control register
#endif
mrc p15, 0, r8, c1, c0, 0 @ Control register
mrc p15, 0, r9, c1, c0, 1 @ Auxiliary control register
mrc p15, 0, r10, c1, c0, 2 @ Co-processor access control
- stmia r0, {r6 - r11}
+ stmia r0, {r5 - r11}
ldmfd sp!, {r4 - r10, pc}
ENDPROC(cpu_v7_do_suspend)
ldmia r0!, {r4 - r5}
mcr p15, 0, r4, c13, c0, 0 @ FCSE/PID
mcr p15, 0, r5, c13, c0, 3 @ User r/o thread ID
- ldmia r0, {r6 - r11}
+ ldmia r0, {r5 - r11}
#ifdef CONFIG_MMU
mcr p15, 0, ip, c8, c7, 0 @ invalidate TLBs
mcr p15, 0, r6, c3, c0, 0 @ Domain ID
-#ifndef CONFIG_ARM_LPAE
+#ifdef CONFIG_ARM_LPAE
+ mcrr p15, 0, r1, ip, c2 @ TTB 0
+ mcrr p15, 1, r5, r7, c2 @ TTB 1
+#else
ALT_SMP(orr r1, r1, #TTB_FLAGS_SMP)
ALT_UP(orr r1, r1, #TTB_FLAGS_UP)
-#endif
mcr p15, 0, r1, c2, c0, 0 @ TTB 0
mcr p15, 0, r7, c2, c0, 1 @ TTB 1
+#endif
mcr p15, 0, r11, c2, c0, 2 @ TTB control register
ldr r4, =PRRR @ PRRR
ldr r5, =NMRR @ NMRR
if (timer->posted)
return;
- if (timer->errata & OMAP_TIMER_ERRATA_I103_I767)
+ if (timer->errata & OMAP_TIMER_ERRATA_I103_I767) {
+ timer->posted = OMAP_TIMER_NONPOSTED;
+ __omap_dm_timer_write(timer, OMAP_TIMER_IF_CTRL_REG, 0, 0);
return;
+ }
__omap_dm_timer_write(timer, OMAP_TIMER_IF_CTRL_REG,
OMAP_TIMER_CTRL_POSTED, 0);
struct remap_data *info = data;
struct page *page = info->pages[info->index++];
unsigned long pfn = page_to_pfn(page);
- pte_t pte = pfn_pte(pfn, info->prot);
+ pte_t pte = pte_mkspecial(pfn_pte(pfn, info->prot));
if (map_foreign_page(pfn, info->fgmfn, info->domid))
return -EFAULT;
}
if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
return 0;
- xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
+ xen_hvm_resume_frames = res.start;
xen_events_irq = irq_of_parse_and_map(node, 0);
pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
- version, xen_events_irq, xen_hvm_resume_frames);
+ version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
xen_domain_type = XEN_HVM_DOMAIN;
xen_setup_features();
struct rb_node rbnode_phys;
};
-rwlock_t p2m_lock;
+static rwlock_t p2m_lock;
struct rb_root phys_to_mach = RB_ROOT;
+EXPORT_SYMBOL_GPL(phys_to_mach);
static struct rb_root mach_to_phys = RB_ROOT;
static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
}
EXPORT_SYMBOL_GPL(__set_phys_to_machine);
-int p2m_init(void)
+static int p2m_init(void)
{
rwlock_init(&p2m_lock);
return 0;
range 2 32
depends on SMP
# These have to remain sorted largest to smallest
- default "8" if ARCH_XGENE
- default "4"
+ default "8"
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
/dts-v1/;
+/memreserve/ 0x80000000 0x00010000;
+
/ {
model = "Foundation-v8A";
compatible = "arm,foundation-aarch64", "arm,vexpress";
extern void __iounmap(volatile void __iomem *addr);
extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
-#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_DIRTY)
+#define PROT_DEFAULT (pgprot_default | PTE_DIRTY)
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL_NC))
#define PROT_NORMAL (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
#define local_fiq_enable() asm("msr daifclr, #1" : : : "memory")
#define local_fiq_disable() asm("msr daifset, #1" : : : "memory")
+#define local_async_enable() asm("msr daifclr, #4" : : : "memory")
+#define local_async_disable() asm("msr daifset, #4" : : : "memory")
+
/*
* Save the current interrupt enable state.
*/
* Section
*/
#define PMD_SECT_VALID (_AT(pmdval_t, 1) << 0)
-#define PMD_SECT_PROT_NONE (_AT(pmdval_t, 1) << 2)
+#define PMD_SECT_PROT_NONE (_AT(pmdval_t, 1) << 58)
#define PMD_SECT_USER (_AT(pmdval_t, 1) << 6) /* AP[1] */
#define PMD_SECT_RDONLY (_AT(pmdval_t, 1) << 7) /* AP[2] */
#define PMD_SECT_S (_AT(pmdval_t, 3) << 8)
* Software defined PTE bits definition.
*/
#define PTE_VALID (_AT(pteval_t, 1) << 0)
-#define PTE_PROT_NONE (_AT(pteval_t, 1) << 2) /* only when !PTE_VALID */
-#define PTE_FILE (_AT(pteval_t, 1) << 3) /* only when !pte_present() */
+#define PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !pte_present() */
#define PTE_DIRTY (_AT(pteval_t, 1) << 55)
#define PTE_SPECIAL (_AT(pteval_t, 1) << 56)
+ /* bit 57 for PMD_SECT_SPLITTING */
+#define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
/*
* VMALLOC and SPARSEMEM_VMEMMAP ranges.
#define pgprot_noncached(prot) \
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE))
#define pgprot_writecombine(prot) \
- __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_GRE))
+ __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC))
#define pgprot_dmacoherent(prot) \
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC))
#define __HAVE_PHYS_MEM_ACCESS_PROT
/*
* Encode and decode a swap entry:
- * bits 0, 2: present (must both be zero)
- * bit 3: PTE_FILE
- * bits 4-8: swap type
- * bits 9-63: swap offset
+ * bits 0-1: present (must be zero)
+ * bit 2: PTE_FILE
+ * bits 3-8: swap type
+ * bits 9-57: swap offset
*/
-#define __SWP_TYPE_SHIFT 4
+#define __SWP_TYPE_SHIFT 3
#define __SWP_TYPE_BITS 6
+#define __SWP_OFFSET_BITS 49
#define __SWP_TYPE_MASK ((1 << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+#define __SWP_OFFSET_MASK ((1UL << __SWP_OFFSET_BITS) - 1)
#define __swp_type(x) (((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
-#define __swp_offset(x) ((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_offset(x) (((x).val >> __SWP_OFFSET_SHIFT) & __SWP_OFFSET_MASK)
#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
/*
* Encode and decode a file entry:
- * bits 0, 2: present (must both be zero)
- * bit 3: PTE_FILE
- * bits 4-63: file offset / PAGE_SIZE
+ * bits 0-1: present (must be zero)
+ * bit 2: PTE_FILE
+ * bits 3-57: file offset / PAGE_SIZE
*/
#define pte_file(pte) (pte_val(pte) & PTE_FILE)
-#define pte_to_pgoff(x) (pte_val(x) >> 4)
-#define pgoff_to_pte(x) __pte(((x) << 4) | PTE_FILE)
+#define pte_to_pgoff(x) (pte_val(x) >> 3)
+#define pgoff_to_pte(x) __pte(((x) << 3) | PTE_FILE)
-#define PTE_FILE_MAX_BITS 60
+#define PTE_FILE_MAX_BITS 55
extern int kern_addr_valid(unsigned long addr);
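
With PTE_FILE moved to bit 2 and PTE_PROT_NONE to bit 58, a swap entry packs
its type into bits 3-8 and its offset into bits 9-57, leaving bits 0-1 clear
so the entry is never pte_present(). A userspace sketch mirroring the packing
macros above:

#include <stdio.h>
#include <stdint.h>

#define SWP_TYPE_SHIFT		3
#define SWP_TYPE_BITS		6
#define SWP_OFFSET_BITS		49
#define SWP_TYPE_MASK		((1 << SWP_TYPE_BITS) - 1)
#define SWP_OFFSET_SHIFT	(SWP_TYPE_BITS + SWP_TYPE_SHIFT)
#define SWP_OFFSET_MASK		((1ULL << SWP_OFFSET_BITS) - 1)

static uint64_t swp_entry(unsigned int type, uint64_t offset)
{
	return ((uint64_t)type << SWP_TYPE_SHIFT) |
	       (offset << SWP_OFFSET_SHIFT);
}

int main(void)
{
	uint64_t e = swp_entry(5, 0x1234);
	unsigned int type = (e >> SWP_TYPE_SHIFT) & SWP_TYPE_MASK;
	uint64_t off = (e >> SWP_OFFSET_SHIFT) & SWP_OFFSET_MASK;

	/* low three bits stay clear: !pte_present() and !pte_file() */
	printf("type=%u offset=0x%llx low3=%llu\n", type,
	       (unsigned long long)off, (unsigned long long)(e & 7));
	return 0;
}
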
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
- __generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
}
static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
- __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
}
static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
- __generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
}
static inline void xen_dma_sync_single_for_device(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
- __generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
}
#endif /* _ASM_ARM64_XEN_PAGE_COHERENT_H */
int aarch32_break_handler(struct pt_regs *regs)
{
siginfo_t info;
- unsigned int instr;
+ u32 arm_instr;
+ u16 thumb_instr;
bool bp = false;
void __user *pc = (void __user *)instruction_pointer(regs);
if (compat_thumb_mode(regs)) {
/* get 16-bit Thumb instruction */
- get_user(instr, (u16 __user *)pc);
- if (instr == AARCH32_BREAK_THUMB2_LO) {
+ get_user(thumb_instr, (u16 __user *)pc);
+ thumb_instr = le16_to_cpu(thumb_instr);
+ if (thumb_instr == AARCH32_BREAK_THUMB2_LO) {
/* get second half of 32-bit Thumb-2 instruction */
- get_user(instr, (u16 __user *)(pc + 2));
- bp = instr == AARCH32_BREAK_THUMB2_HI;
+ get_user(thumb_instr, (u16 __user *)(pc + 2));
+ thumb_instr = le16_to_cpu(thumb_instr);
+ bp = thumb_instr == AARCH32_BREAK_THUMB2_HI;
} else {
- bp = instr == AARCH32_BREAK_THUMB;
+ bp = thumb_instr == AARCH32_BREAK_THUMB;
}
} else {
/* 32-bit ARM instruction */
- get_user(instr, (u32 __user *)pc);
- bp = (instr & ~0xf0000000) == AARCH32_BREAK_ARM;
+ get_user(arm_instr, (u32 __user *)pc);
+ arm_instr = le32_to_cpu(arm_instr);
+ bp = (arm_instr & ~0xf0000000) == AARCH32_BREAK_ARM;
}
if (!bp)
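
The break-handler fix sizes each variable to the fetch width and byte-swaps
after the raw load: AArch32 instructions are stored little-endian in memory
regardless of the kernel's data endianness, so a raw get_user() on a
big-endian kernel would otherwise compare a swapped value. A userspace sketch
of the normalisation (0xde01 is used as an illustrative Thumb breakpoint
encoding, and my_le16_to_cpu() is a local stand-in, not the kernel helper):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint16_t my_le16_to_cpu(uint16_t v)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return (uint16_t)((v >> 8) | (v << 8));	/* swap on big-endian hosts */
#else
	return v;				/* little-endian: no-op */
#endif
}

int main(void)
{
	uint8_t mem[2] = { 0x01, 0xde };	/* instruction bytes in memory */
	uint16_t raw;

	memcpy(&raw, mem, sizeof(raw));		/* raw load, as get_user() does */
	printf("insn = 0x%04x\n", my_le16_to_cpu(raw));	/* 0xde01 on any host */
	return 0;
}
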
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_off
#endif
-#ifdef CONFIG_PREEMPT
- get_thread_info tsk
- ldr w24, [tsk, #TI_PREEMPT] // get preempt count
- add w0, w24, #1 // increment it
- str w0, [tsk, #TI_PREEMPT]
-#endif
+
irq_handler
+
#ifdef CONFIG_PREEMPT
- str w24, [tsk, #TI_PREEMPT] // restore preempt count
+ get_thread_info tsk
+ ldr w24, [tsk, #TI_PREEMPT] // restore preempt count
cbnz w24, 1f // preempt count != 0
ldr x0, [tsk, #TI_FLAGS] // get flags
tbz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling?
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_off
#endif
- get_thread_info tsk
-#ifdef CONFIG_PREEMPT
- ldr w24, [tsk, #TI_PREEMPT] // get preempt count
- add w23, w24, #1 // increment it
- str w23, [tsk, #TI_PREEMPT]
-#endif
+
irq_handler
-#ifdef CONFIG_PREEMPT
- ldr w0, [tsk, #TI_PREEMPT]
- str w24, [tsk, #TI_PREEMPT]
- cmp w0, w23
- b.eq 1f
- mov x1, #0
- str x1, [x1] // BUG
-1:
-#endif
+ get_thread_info tsk
+
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_on
#endif
* be used where CPUs are brought online dynamically by the kernel.
*/
ENTRY(secondary_entry)
- bl __calc_phys_offset // x2=phys offset
bl el2_setup // Drop to EL1
+ bl __calc_phys_offset // x24=PHYS_OFFSET, x28=PHYS_OFFSET-PAGE_OFFSET
+ bl set_cpu_boot_mode_flag
b secondary_startup
ENDPROC(secondary_entry)
{
int err, len, type, disabled = !ctrl.enabled;
- if (disabled) {
- len = 0;
- type = HW_BREAKPOINT_EMPTY;
- } else {
- err = arch_bp_generic_fields(ctrl, &len, &type);
- if (err)
- return err;
-
- switch (note_type) {
- case NT_ARM_HW_BREAK:
- if ((type & HW_BREAKPOINT_X) != type)
- return -EINVAL;
- break;
- case NT_ARM_HW_WATCH:
- if ((type & HW_BREAKPOINT_RW) != type)
- return -EINVAL;
- break;
- default:
+ attr->disabled = disabled;
+ if (disabled)
+ return 0;
+
+ err = arch_bp_generic_fields(ctrl, &len, &type);
+ if (err)
+ return err;
+
+ switch (note_type) {
+ case NT_ARM_HW_BREAK:
+ if ((type & HW_BREAKPOINT_X) != type)
return -EINVAL;
- }
+ break;
+ case NT_ARM_HW_WATCH:
+ if ((type & HW_BREAKPOINT_RW) != type)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
}
attr->bp_len = len;
attr->bp_type = type;
- attr->disabled = disabled;
return 0;
}
for (i = 0; i < num_regs; ++i) {
unsigned int idx = start + i;
- void *reg;
+ compat_ulong_t reg;
switch (idx) {
case 15:
- reg = (void *)&task_pt_regs(target)->pc;
+ reg = task_pt_regs(target)->pc;
break;
case 16:
- reg = (void *)&task_pt_regs(target)->pstate;
+ reg = task_pt_regs(target)->pstate;
break;
case 17:
- reg = (void *)&task_pt_regs(target)->orig_x0;
+ reg = task_pt_regs(target)->orig_x0;
break;
default:
- reg = (void *)&task_pt_regs(target)->regs[idx];
+ reg = task_pt_regs(target)->regs[idx];
}
- ret = copy_to_user(ubuf, reg, sizeof(compat_ulong_t));
-
+		ret = copy_to_user(ubuf, &reg, sizeof(reg));
if (ret)
break;
- else
- ubuf += sizeof(compat_ulong_t);
+
+ ubuf += sizeof(reg);
}
return ret;
for (i = 0; i < num_regs; ++i) {
unsigned int idx = start + i;
- void *reg;
+ compat_ulong_t reg;
+
+		ret = copy_from_user(&reg, ubuf, sizeof(reg));
+ if (ret)
+ return ret;
+
+ ubuf += sizeof(reg);
switch (idx) {
case 15:
- reg = (void *)&newregs.pc;
+ newregs.pc = reg;
break;
case 16:
- reg = (void *)&newregs.pstate;
+ newregs.pstate = reg;
break;
case 17:
- reg = (void *)&newregs.orig_x0;
+ newregs.orig_x0 = reg;
break;
default:
- reg = (void *)&newregs.regs[idx];
+ newregs.regs[idx] = reg;
}
- ret = copy_from_user(reg, ubuf, sizeof(compat_ulong_t));
-
- if (ret)
- goto out;
- else
- ubuf += sizeof(compat_ulong_t);
}
if (valid_user_regs(&newregs.user_regs))
else
ret = -EINVAL;
-out:
return ret;
}
void __init setup_arch(char **cmdline_p)
{
+ /*
+ * Unmask asynchronous aborts early to catch possible system errors.
+ */
+ local_async_enable();
+
setup_processor();
setup_machine_fdt(__fdt_pointer);
local_irq_enable();
local_fiq_enable();
+ local_async_enable();
/*
* OK, it's off to the idle thread for us
bl __flush_dcache_all
mov lr, x28
ic iallu // I+BTB cache invalidate
+ tlbi vmalle1is // invalidate I + D TLBs
dsb sy
mov x0, #3 << 20
msr cpacr_el1, x0 // Enable FP/ASIMD
msr mdscr_el1, xzr // Reset mdscr_el1
- tlbi vmalle1is // invalidate I + D TLBs
/*
* Memory region attributes for LPAE:
*
*/
retval = clk_round_rate(pll1,
CONFIG_BOARD_FAVR32_ABDAC_RATE * 256 * 16);
- if (retval < 0)
+ if (retval <= 0) {
+ retval = -EINVAL;
goto out_abdac;
+ }
retval = clk_set_rate(pll1, retval);
if (retval != 0)
* published by the Free Software Foundation.
*/
#include <asm/setup.h>
+#include <asm/thread_info.h>
+#include <asm/sysreg.h>
/*
* The kernel is loaded where we want it to be and all caches
.section .init.text,"ax"
.global _start
_start:
- /* Check if the boot loader actually provided a tag table */
- lddpc r0, magic_number
- cp.w r12, r0
- brne no_tag_table
-
/* Initialize .bss */
lddpc r2, bss_start_addr
lddpc r3, end_addr
cp r2, r3
brlo 1b
+ /* Initialize status register */
+ lddpc r0, init_sr
+ mtsr SYSREG_SR, r0
+
+ /* Set initial stack pointer */
+ lddpc sp, stack_addr
+ sub sp, -THREAD_SIZE
+
+#ifdef CONFIG_FRAME_POINTER
+ /* Mark last stack frame */
+ mov lr, 0
+ mov r7, 0
+#endif
+
+ /* Check if the boot loader actually provided a tag table */
+ lddpc r0, magic_number
+ cp.w r12, r0
+ brne no_tag_table
+
/*
* Save the tag table address for later use. This must be done
* _after_ .bss has been initialized...
.long __bss_start
end_addr:
.long _end
+init_sr:
+ .long 0x007f0000 /* Supervisor mode, everything masked */
+stack_addr:
+ .long init_thread_union
+panic_addr:
+ .long panic
no_tag_table:
sub r12, pc, (. - 2f)
- bral panic
+	/* panic() may be out of range for a relative branch, so jump indirectly */
+ lddpc pc, panic_addr
2: .asciz "Boot loader didn't provide correct magic number\n"
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
typedef u16 kprobe_opcode_t;
#define BREAKPOINT_INSTRUCTION 0xd673 /* breakpoint */
#define MAX_INSN_SIZE 2
+#define MAX_STACK_SIZE 64 /* 32 would probably be OK */
#define kretprobe_blacklist_size 0
kprobe_opcode_t insn[MAX_INSN_SIZE];
};
+struct prev_kprobe {
+ struct kprobe *kp;
+ unsigned int status;
+};
+
+/* per-cpu kprobe control block */
+struct kprobe_ctlblk {
+ unsigned int kprobe_status;
+ struct prev_kprobe prev_kprobe;
+ struct pt_regs jprobe_saved_regs;
+ char jprobes_stack[MAX_STACK_SIZE];
+};
+
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data);
include include/uapi/asm-generic/Kbuild.asm
header-y += auxvec.h
-header-y += bitsperlong.h
header-y += byteorder.h
header-y += cachectl.h
-header-y += errno.h
-header-y += fcntl.h
-header-y += ioctl.h
-header-y += ioctls.h
-header-y += ipcbuf.h
-header-y += kvm_para.h
-header-y += mman.h
header-y += msgbuf.h
header-y += param.h
-header-y += poll.h
header-y += posix_types.h
header-y += ptrace.h
-header-y += resource.h
header-y += sembuf.h
header-y += setup.h
header-y += shmbuf.h
header-y += sigcontext.h
-header-y += siginfo.h
header-y += signal.h
header-y += socket.h
header-y += sockios.h
header-y += stat.h
-header-y += statfs.h
header-y += swab.h
header-y += termbits.h
header-y += termios.h
header-y += types.h
header-y += unistd.h
+generic-y += bitsperlong.h
+generic-y += errno.h
+generic-y += fcntl.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += kvm_para.h
+generic-y += mman.h
generic-y += param.h
+generic-y += poll.h
+generic-y += resource.h
+generic-y += siginfo.h
+generic-y += statfs.h
-#ifndef __ASM_AVR32_AUXVEC_H
-#define __ASM_AVR32_AUXVEC_H
+#ifndef _UAPI__ASM_AVR32_AUXVEC_H
+#define _UAPI__ASM_AVR32_AUXVEC_H
-#endif /* __ASM_AVR32_AUXVEC_H */
+#endif /* _UAPI__ASM_AVR32_AUXVEC_H */
+++ /dev/null
-#include <asm-generic/bitsperlong.h>
/*
* AVR32 endian-conversion functions.
*/
-#ifndef __ASM_AVR32_BYTEORDER_H
-#define __ASM_AVR32_BYTEORDER_H
+#ifndef _UAPI__ASM_AVR32_BYTEORDER_H
+#define _UAPI__ASM_AVR32_BYTEORDER_H
#include <linux/byteorder/big_endian.h>
-#endif /* __ASM_AVR32_BYTEORDER_H */
+#endif /* _UAPI__ASM_AVR32_BYTEORDER_H */
-#ifndef __ASM_AVR32_CACHECTL_H
-#define __ASM_AVR32_CACHECTL_H
+#ifndef _UAPI__ASM_AVR32_CACHECTL_H
+#define _UAPI__ASM_AVR32_CACHECTL_H
/*
* Operations that can be performed through the cacheflush system call
/* Clean the data cache, then invalidate the icache */
#define CACHE_IFLUSH 0
-#endif /* __ASM_AVR32_CACHECTL_H */
+#endif /* _UAPI__ASM_AVR32_CACHECTL_H */
+++ /dev/null
-#ifndef __ASM_AVR32_ERRNO_H
-#define __ASM_AVR32_ERRNO_H
-
-#include <asm-generic/errno.h>
-
-#endif /* __ASM_AVR32_ERRNO_H */
+++ /dev/null
-#ifndef __ASM_AVR32_FCNTL_H
-#define __ASM_AVR32_FCNTL_H
-
-#include <asm-generic/fcntl.h>
-
-#endif /* __ASM_AVR32_FCNTL_H */
+++ /dev/null
-#ifndef __ASM_AVR32_IOCTL_H
-#define __ASM_AVR32_IOCTL_H
-
-#include <asm-generic/ioctl.h>
-
-#endif /* __ASM_AVR32_IOCTL_H */
+++ /dev/null
-#ifndef __ASM_AVR32_IOCTLS_H
-#define __ASM_AVR32_IOCTLS_H
-
-#include <asm-generic/ioctls.h>
-
-#endif /* __ASM_AVR32_IOCTLS_H */
+++ /dev/null
-#include <asm-generic/ipcbuf.h>
+++ /dev/null
-#include <asm-generic/kvm_para.h>
+++ /dev/null
-#include <asm-generic/mman.h>
-#ifndef __ASM_AVR32_MSGBUF_H
-#define __ASM_AVR32_MSGBUF_H
+#ifndef _UAPI__ASM_AVR32_MSGBUF_H
+#define _UAPI__ASM_AVR32_MSGBUF_H
/*
* The msqid64_ds structure for i386 architecture.
unsigned long __unused5;
};
-#endif /* __ASM_AVR32_MSGBUF_H */
+#endif /* _UAPI__ASM_AVR32_MSGBUF_H */
+++ /dev/null
-#include <asm-generic/poll.h>
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
-#ifndef __ASM_AVR32_POSIX_TYPES_H
-#define __ASM_AVR32_POSIX_TYPES_H
+#ifndef _UAPI__ASM_AVR32_POSIX_TYPES_H
+#define _UAPI__ASM_AVR32_POSIX_TYPES_H
/*
* This file is generally used by user-level software, so you need to
#include <asm-generic/posix_types.h>
-#endif /* __ASM_AVR32_POSIX_TYPES_H */
+#endif /* _UAPI__ASM_AVR32_POSIX_TYPES_H */
+++ /dev/null
-#ifndef __ASM_AVR32_RESOURCE_H
-#define __ASM_AVR32_RESOURCE_H
-
-#include <asm-generic/resource.h>
-
-#endif /* __ASM_AVR32_RESOURCE_H */
-#ifndef __ASM_AVR32_SEMBUF_H
-#define __ASM_AVR32_SEMBUF_H
+#ifndef _UAPI__ASM_AVR32_SEMBUF_H
+#define _UAPI__ASM_AVR32_SEMBUF_H
/*
* The semid64_ds structure for AVR32 architecture.
unsigned long __unused4;
};
-#endif /* __ASM_AVR32_SEMBUF_H */
+#endif /* _UAPI__ASM_AVR32_SEMBUF_H */
#define COMMAND_LINE_SIZE 256
-
#endif /* _UAPI__ASM_AVR32_SETUP_H__ */
-#ifndef __ASM_AVR32_SHMBUF_H
-#define __ASM_AVR32_SHMBUF_H
+#ifndef _UAPI__ASM_AVR32_SHMBUF_H
+#define _UAPI__ASM_AVR32_SHMBUF_H
/*
* The shmid64_ds structure for i386 architecture.
unsigned long __unused4;
};
-#endif /* __ASM_AVR32_SHMBUF_H */
+#endif /* _UAPI__ASM_AVR32_SHMBUF_H */
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
-#ifndef __ASM_AVR32_SIGCONTEXT_H
-#define __ASM_AVR32_SIGCONTEXT_H
+#ifndef _UAPI__ASM_AVR32_SIGCONTEXT_H
+#define _UAPI__ASM_AVR32_SIGCONTEXT_H
struct sigcontext {
unsigned long oldmask;
unsigned long r0;
};
-#endif /* __ASM_AVR32_SIGCONTEXT_H */
+#endif /* _UAPI__ASM_AVR32_SIGCONTEXT_H */
+++ /dev/null
-#ifndef _AVR32_SIGINFO_H
-#define _AVR32_SIGINFO_H
-
-#include <asm-generic/siginfo.h>
-
-#endif
size_t ss_size;
} stack_t;
-
#endif /* _UAPI__ASM_AVR32_SIGNAL_H */
-#ifndef __ASM_AVR32_SOCKET_H
-#define __ASM_AVR32_SOCKET_H
+#ifndef _UAPI__ASM_AVR32_SOCKET_H
+#define _UAPI__ASM_AVR32_SOCKET_H
#include <asm/sockios.h>
#define SO_MAX_PACING_RATE 47
-#endif /* __ASM_AVR32_SOCKET_H */
+#endif /* _UAPI__ASM_AVR32_SOCKET_H */
-#ifndef __ASM_AVR32_SOCKIOS_H
-#define __ASM_AVR32_SOCKIOS_H
+#ifndef _UAPI__ASM_AVR32_SOCKIOS_H
+#define _UAPI__ASM_AVR32_SOCKIOS_H
/* Socket-level I/O control calls. */
#define FIOSETOWN 0x8901
#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */
#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */
-#endif /* __ASM_AVR32_SOCKIOS_H */
+#endif /* _UAPI__ASM_AVR32_SOCKIOS_H */
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
-#ifndef __ASM_AVR32_STAT_H
-#define __ASM_AVR32_STAT_H
+#ifndef _UAPI__ASM_AVR32_STAT_H
+#define _UAPI__ASM_AVR32_STAT_H
struct __old_kernel_stat {
unsigned short st_dev;
unsigned long __unused2;
};
-#endif /* __ASM_AVR32_STAT_H */
+#endif /* _UAPI__ASM_AVR32_STAT_H */
+++ /dev/null
-#ifndef __ASM_AVR32_STATFS_H
-#define __ASM_AVR32_STATFS_H
-
-#include <asm-generic/statfs.h>
-
-#endif /* __ASM_AVR32_STATFS_H */
/*
* AVR32 byteswapping functions.
*/
-#ifndef __ASM_AVR32_SWAB_H
-#define __ASM_AVR32_SWAB_H
+#ifndef _UAPI__ASM_AVR32_SWAB_H
+#define _UAPI__ASM_AVR32_SWAB_H
#include <linux/types.h>
#include <linux/compiler.h>
#define __arch_swab32 __arch_swab32
#endif
-#endif /* __ASM_AVR32_SWAB_H */
+#endif /* _UAPI__ASM_AVR32_SWAB_H */
-#ifndef __ASM_AVR32_TERMBITS_H
-#define __ASM_AVR32_TERMBITS_H
+#ifndef _UAPI__ASM_AVR32_TERMBITS_H
+#define _UAPI__ASM_AVR32_TERMBITS_H
#include <linux/posix_types.h>
#define TCSADRAIN 1
#define TCSAFLUSH 2
-#endif /* __ASM_AVR32_TERMBITS_H */
+#endif /* _UAPI__ASM_AVR32_TERMBITS_H */
/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
-
#endif /* _UAPI__ASM_AVR32_TERMIOS_H */
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#ifndef _UAPI__ASM_AVR32_TYPES_H
+#define _UAPI__ASM_AVR32_TYPES_H
+
#include <asm-generic/int-ll64.h>
+
+#endif /* _UAPI__ASM_AVR32_TYPES_H */
#define __NR_eventfd 281
#define __NR_setns 283
-
#endif /* _UAPI__ASM_AVR32_UNISTD_H */
/* We should never get here... */
bad_return:
sub r12, pc, (. - 1f)
- bral panic
+ lddpc pc, 2f
.align 2
1: .asciz "Return from critical exception!"
+2: .long panic
.align 1
do_bus_error_write:
#include <linux/linkage.h>
#include <asm/page.h>
-#include <asm/thread_info.h>
-#include <asm/sysreg.h>
.section .init.text,"ax"
.global kernel_entry
kernel_entry:
- /* Initialize status register */
- lddpc r0, init_sr
- mtsr SYSREG_SR, r0
-
- /* Set initial stack pointer */
- lddpc sp, stack_addr
- sub sp, -THREAD_SIZE
-
-#ifdef CONFIG_FRAME_POINTER
- /* Mark last stack frame */
- mov lr, 0
- mov r7, 0
-#endif
-
/* Start the show */
lddpc pc, kernel_start_addr
.align 2
-init_sr:
- .long 0x007f0000 /* Supervisor mode, everything masked */
-stack_addr:
- .long init_thread_union
kernel_start_addr:
.long start_kernel
static struct irqaction timer_irqaction = {
.handler = timer_interrupt,
/* Oprofile uses the same irq as the timer, so allow it to be shared */
- .flags = IRQF_TIMER | IRQF_DISABLED | IRQF_SHARED,
+ .flags = IRQF_TIMER | IRQF_SHARED,
.name = "avr32_comparator",
};
.enter = avr32_pm_enter,
};
-static unsigned long avr32_pm_offset(void *symbol)
+static unsigned long __init avr32_pm_offset(void *symbol)
{
extern u8 pm_exception[];
if (PCI_CONTROLLER(bus)->iommu)
return;
- handle = PCI_CONTROLLER(bus)->acpi_handle;
+ handle = acpi_device_handle(PCI_CONTROLLER(bus)->companion);
if (!handle)
return;
};
struct pci_controller {
- void *acpi_handle;
+ struct acpi_device *companion;
void *iommu;
int segment;
int node; /* nearest node with memory or -1 for global allocation */
.flush = pfm_flush
};
-static int
-pfmfs_delete_dentry(const struct dentry *dentry)
-{
- return 1;
-}
-
static char *pfmfs_dname(struct dentry *dentry, char *buffer, int buflen)
{
return dynamic_dname(dentry, buffer, buflen, "pfm:[%lu]",
}
static const struct dentry_operations pfmfs_dentry_operations = {
- .d_delete = pfmfs_delete_dentry,
+ .d_delete = always_delete_dentry,
.d_dname = pfmfs_dname,
};
if (!controller)
return NULL;
- controller->acpi_handle = device->handle;
+ controller->companion = device;
- pxm = acpi_get_pxm(controller->acpi_handle);
+ pxm = acpi_get_pxm(device->handle);
#ifdef CONFIG_NUMA
if (pxm >= 0)
controller->node = pxm_to_node(pxm);
{
struct pci_controller *controller = bridge->bus->sysdata;
- ACPI_HANDLE_SET(&bridge->dev, controller->acpi_handle);
+ ACPI_COMPANION_SET(&bridge->dev, controller->companion);
return 0;
}
struct acpi_resource_vendor_typed *vendor;
- handle = PCI_CONTROLLER(bus)->acpi_handle;
+ handle = acpi_device_handle(PCI_CONTROLLER(bus)->companion);
status = acpi_get_vendor_resource(handle, METHOD_NAME__CRS,
&sn_uuid, &buffer);
if (ACPI_FAILURE(status)) {
acpi_status status;
struct acpi_buffer name_buffer = { ACPI_ALLOCATE_BUFFER, NULL };
- rootbus_handle = PCI_CONTROLLER(dev)->acpi_handle;
+ rootbus_handle = acpi_device_handle(PCI_CONTROLLER(dev)->companion);
status = acpi_evaluate_integer(rootbus_handle, METHOD_NAME__SEG, NULL,
&segment);
if (ACPI_SUCCESS(status)) {
CONFIG_IDE=y
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_NS87415=y
-CONFIG_BLK_DEV_SIIMAGE=m
+CONFIG_PATA_SIL680=m
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=y
CONFIG_MODVERSIONS=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_PA8X00=y
-CONFIG_MLONGCALLS=y
CONFIG_64BIT=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_PLATFORM=y
CONFIG_BLK_DEV_GENERIC=y
-CONFIG_BLK_DEV_SIIMAGE=y
-CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
+CONFIG_ATA=y
+CONFIG_PATA_SIL680=y
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_SAS=y
# CONFIG_KEYBOARD_ATKBD is not set
# CONFIG_KEYBOARD_HIL_OLD is not set
# CONFIG_KEYBOARD_HIL is not set
-CONFIG_MOUSE_PS2=m
+# CONFIG_MOUSE_PS2 is not set
CONFIG_INPUT_MISC=y
-CONFIG_INPUT_CM109=m
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_PARKBD=m
CONFIG_SERIO_GSCPS2=m
CONFIG_SND_AD1889=m
# CONFIG_SND_USB is not set
# CONFIG_SND_GSC is not set
-CONFIG_HID_A4TECH=m
-CONFIG_HID_APPLE=m
-CONFIG_HID_BELKIN=m
-CONFIG_HID_CHERRY=m
-CONFIG_HID_CHICONY=m
-CONFIG_HID_CYPRESS=m
-CONFIG_HID_DRAGONRISE=m
-CONFIG_HID_EZKEY=m
-CONFIG_HID_KYE=m
-CONFIG_HID_GYRATION=m
-CONFIG_HID_TWINHAN=m
-CONFIG_HID_KENSINGTON=m
-CONFIG_HID_LOGITECH=m
-CONFIG_HID_LOGITECH_DJ=m
-CONFIG_HID_MICROSOFT=m
-CONFIG_HID_MONTEREY=m
-CONFIG_HID_NTRIG=m
-CONFIG_HID_ORTEK=m
-CONFIG_HID_PANTHERLORD=m
-CONFIG_HID_PETALYNX=m
-CONFIG_HID_SAMSUNG=m
-CONFIG_HID_SUNPLUS=m
-CONFIG_HID_GREENASIA=m
-CONFIG_HID_SMARTJOYPLUS=m
-CONFIG_HID_TOPSEED=m
-CONFIG_HID_THRUSTMASTER=m
-CONFIG_HID_ZEROPLUS=m
-CONFIG_USB_HID=m
CONFIG_USB=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_STORAGE=y
CONFIG_BLK_DEV_INTEGRITY=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_PA8X00=y
-CONFIG_MLONGCALLS=y
CONFIG_64BIT=y
CONFIG_SMP=y
# CONFIG_COMPACTION is not set
CONFIG_IDE_GD_ATAPI=y
CONFIG_BLK_DEV_IDECD=m
CONFIG_BLK_DEV_NS87415=y
-CONFIG_BLK_DEV_SIIMAGE=y
# CONFIG_SCSI_PROC_FS is not set
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=y
CONFIG_SCSI_QLA_ISCSI=m
CONFIG_SCSI_DH=y
CONFIG_ATA=y
+CONFIG_PATA_SIL680=y
CONFIG_ATA_GENERIC=y
CONFIG_MD=y
CONFIG_MD_LINEAR=m
CONFIG_INPUT_EVDEV=y
# CONFIG_KEYBOARD_HIL_OLD is not set
# CONFIG_KEYBOARD_HIL is not set
-# CONFIG_INPUT_MOUSE is not set
+# CONFIG_MOUSE_PS2 is not set
CONFIG_INPUT_MISC=y
CONFIG_SERIO_SERPORT=m
# CONFIG_HP_SDC is not set
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
-CONFIG_HID=m
CONFIG_HIDRAW=y
-CONFIG_HID_DRAGONRISE=m
-CONFIG_DRAGONRISE_FF=y
-CONFIG_HID_KYE=m
-CONFIG_HID_GYRATION=m
-CONFIG_HID_TWINHAN=m
-CONFIG_LOGITECH_FF=y
-CONFIG_LOGIRUMBLEPAD2_FF=y
-CONFIG_HID_NTRIG=m
-CONFIG_HID_PANTHERLORD=m
-CONFIG_PANTHERLORD_FF=y
-CONFIG_HID_PETALYNX=m
-CONFIG_HID_SAMSUNG=m
-CONFIG_HID_SONY=m
-CONFIG_HID_SUNPLUS=m
-CONFIG_HID_GREENASIA=m
-CONFIG_GREENASIA_FF=y
-CONFIG_HID_SMARTJOYPLUS=m
-CONFIG_SMARTJOYPLUS_FF=y
-CONFIG_HID_TOPSEED=m
-CONFIG_HID_THRUSTMASTER=m
-CONFIG_THRUSTMASTER_FF=y
-CONFIG_HID_ZEROPLUS=m
-CONFIG_ZEROPLUS_FF=y
-CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
CONFIG_USB=y
CONFIG_USB_MON=m
CONFIG_USB_WUSB_CBAF=m
CONFIG_USB_XHCI_HCD=m
-CONFIG_USB_EHCI_HCD=m
-CONFIG_USB_OHCI_HCD=m
-CONFIG_USB_R8A66597_HCD=m
-CONFIG_USB_ACM=m
-CONFIG_USB_PRINTER=m
-CONFIG_USB_WDM=m
-CONFIG_USB_TMC=m
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_OHCI_HCD=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_TRIGGERS=y
* This is used for 16550-compatible UARTs
*/
#define BASE_BAUD ( 1843200 / 16 )
-
-#define SERIAL_PORT_DFNS
--- /dev/null
+#ifndef _ASM_SOCKET_H
+#define _ASM_SOCKET_H
+
+#include <uapi/asm/socket.h>
+
+/* O_NONBLOCK clashes with the bits used for socket types. Therefore we
+ * have to define SOCK_NONBLOCK to a different value here.
+ */
+#define SOCK_NONBLOCK 0x40000000
+
+#endif /* _ASM_SOCKET_H */
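
Why the override matters: socket(2) packs the SOCK_CLOEXEC/SOCK_NONBLOCK flag
bits into its type argument, so SOCK_NONBLOCK must stay clear of the low type
bits; roughly as in net/socket.c (abridged sketch, not the exact upstream
lines):

	flags = type & ~SOCK_TYPE_MASK;
	if (flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
		return -EINVAL;
	type &= SOCK_TYPE_MASK;

	/* Arch values of SOCK_NONBLOCK that differ from O_NONBLOCK are
	 * remapped internally, which is what makes the override work. */
	if (SOCK_NONBLOCK != O_NONBLOCK && (flags & SOCK_NONBLOCK))
		flags = (flags & ~SOCK_NONBLOCK) | O_NONBLOCK;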
/*
* User space memory access functions
*/
-#include <asm/processor.h>
#include <asm/page.h>
#include <asm/cache.h>
#include <asm/errno.h>
#include <asm-generic/uaccess-unaligned.h>
-#include <linux/sched.h>
-
#define VERIFY_READ 0
#define VERIFY_WRITE 1
extern int __put_kernel_bad(void);
extern int __put_user_bad(void);
-
-/*
- * Test whether a block of memory is a valid user space address.
- * Returns 0 if the range is valid, nonzero otherwise.
- */
-static inline int __range_not_ok(unsigned long addr, unsigned long size,
- unsigned long limit)
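+/*
+ * Kernel and user addresses live in separate address spaces on
+ * parisc, so a user pointer can't be validated by a range check
+ * against a kernel limit; the access itself either succeeds or
+ * faults, hence access_ok() simply claims success.
+ */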
+static inline long access_ok(int type, const void __user * addr,
+ unsigned long size)
{
- unsigned long __newaddr = addr + size;
- return (__newaddr < addr || __newaddr > limit || size > limit);
+ return 1;
}
-/**
- * access_ok: - Checks if a user space pointer is valid
- * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE. Note that
- * %VERIFY_WRITE is a superset of %VERIFY_READ - if it is safe
- * to write to a block, it is always safe to read from it.
- * @addr: User space pointer to start of block to check
- * @size: Size of block to check
- *
- * Context: User context only. This function may sleep.
- *
- * Checks if a pointer to a block of memory in user space is valid.
- *
- * Returns true (nonzero) if the memory block may be valid, false (zero)
- * if it is definitely invalid.
- *
- * Note that, depending on architecture, this function probably just
- * checks that the pointer is in the user space range - after calling
- * this function, memory access functions may still return -EFAULT.
- */
-#define access_ok(type, addr, size) \
-( __chk_user_ptr(addr), \
- !__range_not_ok((unsigned long) (__force void *) (addr), \
- size, user_addr_max()) \
-)
-
#define put_user __put_user
#define get_user __get_user
/*
* Complex access routines -- macros
*/
-#ifdef CONFIG_COMPAT
-#define user_addr_max() (TASK_SIZE)
-#else
-#define user_addr_max() (DEFAULT_TASK_SIZE)
-#endif
+#define user_addr_max() (~0UL)
#define strnlen_user lstrnlen_user
#define strlen_user(str) lstrnlen_user(str, 0x7fffffffL)
-#ifndef _ASM_SOCKET_H
-#define _ASM_SOCKET_H
+#ifndef _UAPI_ASM_SOCKET_H
+#define _UAPI_ASM_SOCKET_H
#include <asm/sockios.h>
#define SO_MAX_PACING_RATE 0x4048
-/* O_NONBLOCK clashes with the bits used for socket types. Therefore we
- * have to define SOCK_NONBLOCK to a different value here.
- */
-#define SOCK_NONBLOCK 0x40000000
-
-#endif /* _ASM_SOCKET_H */
+#endif /* _UAPI_ASM_SOCKET_H */
* HP PARISC Hardware Database
* Access to this database is only possible during bootup
* so don't reference this table after starting the init process
+ *
+ * NOTE: Product names which are listed here and end with a '?'
+ * are guessed. If you know the correct name, please let us know.
*/
static struct hp_hardware hp_hardware_list[] = {
{HPHW_NPROC,0x5DD,0x4,0x81,"Duet W2"},
{HPHW_NPROC,0x5DE,0x4,0x81,"Piccolo W+"},
{HPHW_NPROC,0x5DF,0x4,0x81,"Cantata W2"},
- {HPHW_NPROC,0x5DF,0x0,0x00,"Marcato W+? (rp5470)"},
+ {HPHW_NPROC,0x5DF,0x0,0x00,"Marcato W+ (rp5470)?"},
{HPHW_NPROC,0x5E0,0x4,0x91,"Cantata DC- W2"},
{HPHW_NPROC,0x5E1,0x4,0x91,"Crescendo DC- W2"},
{HPHW_NPROC,0x5E2,0x4,0x91,"Crescendo 650 W2"},
{HPHW_NPROC,0x888,0x4,0x91,"Storm Peak Fast DC-"},
{HPHW_NPROC,0x889,0x4,0x91,"Storm Peak Fast"},
{HPHW_NPROC,0x88A,0x4,0x91,"Crestone Peak Slow"},
+ {HPHW_NPROC,0x88B,0x4,0x91,"Crestone Peak Fast?"},
{HPHW_NPROC,0x88C,0x4,0x91,"Orca Mako+"},
{HPHW_NPROC,0x88D,0x4,0x91,"Rainier/Medel Mako+ Slow"},
{HPHW_NPROC,0x88E,0x4,0x91,"Rainier/Medel Mako+ Fast"},
+ {HPHW_NPROC,0x892,0x4,0x91,"Mt. Hamilton Slow Mako+?"},
{HPHW_NPROC,0x894,0x4,0x91,"Mt. Hamilton Fast Mako+"},
{HPHW_NPROC,0x895,0x4,0x91,"Storm Peak Slow Mako+"},
{HPHW_NPROC,0x896,0x4,0x91,"Storm Peak Fast Mako+"},
.import fault_vector_11,code /* IVA parisc 1.1 32 bit */
.import $global$ /* forward declaration */
#endif /*!CONFIG_64BIT*/
- .export _stext,data /* Kernel want it this way! */
-_stext:
-ENTRY(stext)
+ENTRY(parisc_kernel_start)
.proc
.callinfo
.procend
#endif /* CONFIG_SMP */
-ENDPROC(stext)
+ENDPROC(parisc_kernel_start)
#ifndef CONFIG_64BIT
.section .data..read_mostly
return (unsigned long) mapping >> 8;
}
-static unsigned long get_shared_area(struct address_space *mapping,
- unsigned long addr, unsigned long len, unsigned long pgoff)
+static unsigned long shared_align_offset(struct file *filp, unsigned long pgoff)
+{
+ struct address_space *mapping = filp ? filp->f_mapping : NULL;
+
+ return (get_offset(mapping) + pgoff) << PAGE_SHIFT;
+}
+
+static unsigned long get_shared_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff)
{
struct vm_unmapped_area_info info;
info.low_limit = PAGE_ALIGN(addr);
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & (SHMLBA - 1);
- info.align_offset = (get_offset(mapping) + pgoff) << PAGE_SHIFT;
+ info.align_offset = shared_align_offset(filp, pgoff);
return vm_unmapped_area(&info);
}
return -ENOMEM;
if (flags & MAP_FIXED) {
if ((flags & MAP_SHARED) &&
- (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1))
+ (addr - shared_align_offset(filp, pgoff)) & (SHMLBA - 1))
return -EINVAL;
return addr;
}
if (!addr)
addr = TASK_UNMAPPED_BASE;
- if (filp) {
- addr = get_shared_area(filp->f_mapping, addr, len, pgoff);
- } else if(flags & MAP_SHARED) {
- addr = get_shared_area(NULL, addr, len, pgoff);
- } else {
+ if (filp || (flags & MAP_SHARED))
+ addr = get_shared_area(filp, addr, len, pgoff);
+ else
addr = get_unshared_area(addr, len);
- }
+
return addr;
}
}
/* Called from setup_arch to import the kernel unwind info */
-int unwind_init(void)
+int __init unwind_init(void)
{
long start, stop;
register unsigned long gp __asm__ ("r27");
e = find_unwind_entry(info->ip);
if (e == NULL) {
unsigned long sp;
- extern char _stext[], _etext[];
dbg("Cannot find unwind entry for 0x%lx; forced unwinding\n", info->ip);
break;
info->prev_ip = tmp;
sp = info->prev_sp;
- } while (info->prev_ip < (unsigned long)_stext ||
- info->prev_ip > (unsigned long)_etext);
+ } while (!kernel_text_address(info->prev_ip));
info->rp = 0;
do {
if (unwind_once(&info) < 0 || info.ip == 0)
return 0;
- if (!__kernel_text_address(info.ip)) {
+ if (!kernel_text_address(info.ip))
return 0;
- }
} while (info.ip && level--);
return info.ip;
* Copyright (C) 2000 Michael Ang <mang with subcarrier.org>
* Copyright (C) 2002 Randolph Chung <tausq with parisc-linux.org>
* Copyright (C) 2003 James Bottomley <jejb with parisc-linux.org>
- * Copyright (C) 2006 Helge Deller <deller@gmx.de>
- *
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ * Copyright (C) 2006-2013 Helge Deller <deller@gmx.de>
+ */
+
+/*
+ * Put page table entries (swapper_pg_dir) as the first thing in .bss. This
+ * will ensure that it has .bss alignment (PAGE_SIZE).
*/
+#define BSS_FIRST_SECTIONS *(.data..vm0.pmd) \
+ *(.data..vm0.pgd) \
+ *(.data..vm0.pte)
+
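
For context, BSS_FIRST_SECTIONS is a hook expanded at the top of the generic
.bss output section; abridged from the BSS macro in
include/asm-generic/vmlinux.lds.h (a sketch, not the full upstream
definition):

	#define BSS(bss_align)						\
		. = ALIGN(bss_align);					\
		.bss : AT(ADDR(.bss) - LOAD_OFFSET) {			\
			BSS_FIRST_SECTIONS				\
			*(.bss..page_aligned)				\
			*(.bss)						\
			*(COMMON)					\
		}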
#include <asm-generic/vmlinux.lds.h>
+
/* needed for the processor specific cache alignment size */
#include <asm/cache.h>
#include <asm/page.h>
OUTPUT_ARCH(hppa:hppa2.0w)
#endif
-ENTRY(_stext)
+ENTRY(parisc_kernel_start)
#ifndef CONFIG_64BIT
jiffies = jiffies_64 + 4;
#else
{
. = KERNEL_BINARY_TEXT_START;
+ __init_begin = .;
+ HEAD_TEXT_SECTION
+ INIT_TEXT_SECTION(8)
+
+ . = ALIGN(PAGE_SIZE);
+ INIT_DATA_SECTION(PAGE_SIZE)
+ /* we have to discard exit text and such at runtime, not link time */
+ .exit.text :
+ {
+ EXIT_TEXT
+ }
+ .exit.data :
+ {
+ EXIT_DATA
+ }
+ PERCPU_SECTION(8)
+ . = ALIGN(PAGE_SIZE);
+ __init_end = .;
+ /* freed after init ends here */
+
_text = .; /* Text and read-only data */
- .head ALIGN(16) : {
- HEAD_TEXT
- } = 0
- .text ALIGN(16) : {
+ _stext = .;
+ .text ALIGN(PAGE_SIZE) : {
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
*(.lock.text) /* out-of-line lock text */
*(.gnu.warning)
}
- /* End of text section */
+ . = ALIGN(PAGE_SIZE);
_etext = .;
+ /* End of text section */
/* Start of data section */
_sdata = .;
- RODATA
+ RO_DATA_SECTION(8)
- /* writeable */
- /* Make sure this is page aligned so
- * that we can properly leave these
- * as writable
- */
- . = ALIGN(PAGE_SIZE);
- data_start = .;
+#ifdef CONFIG_64BIT
+ . = ALIGN(16);
+ /* Linkage tables */
+ .opd : {
+ *(.opd)
+ } PROVIDE (__gp = .);
+ .plt : {
+ *(.plt)
+ }
+ .dlt : {
+ *(.dlt)
+ }
+#endif
/* unwind info */
.PARISC.unwind : {
__stop___unwind = .;
}
- EXCEPTION_TABLE(16)
+ /* writeable */
+ /* Make sure this is page aligned so
+ * that we can properly leave these
+ * as writable
+ */
+ . = ALIGN(PAGE_SIZE);
+ data_start = .;
+
+ EXCEPTION_TABLE(8)
NOTES
/* Data */
_edata = .;
/* BSS */
- __bss_start = .;
- /* page table entries need to be PAGE_SIZE aligned */
- . = ALIGN(PAGE_SIZE);
- .data..vmpages : {
- *(.data..vm0.pmd)
- *(.data..vm0.pgd)
- *(.data..vm0.pte)
- }
- .bss : {
- *(.bss)
- *(COMMON)
- }
- __bss_stop = .;
-
-#ifdef CONFIG_64BIT
- . = ALIGN(16);
- /* Linkage tables */
- .opd : {
- *(.opd)
- } PROVIDE (__gp = .);
- .plt : {
- *(.plt)
- }
- .dlt : {
- *(.dlt)
- }
-#endif
+ BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 8)
- /* reserve space for interrupt stack by aligning __init* to 16k */
- . = ALIGN(16384);
- __init_begin = .;
- INIT_TEXT_SECTION(16384)
- . = ALIGN(PAGE_SIZE);
- INIT_DATA_SECTION(16)
- /* we have to discard exit text and such at runtime, not link time */
- .exit.text :
- {
- EXIT_TEXT
- }
- .exit.data :
- {
- EXIT_DATA
- }
-
- PERCPU_SECTION(L1_CACHE_BYTES)
- . = ALIGN(PAGE_SIZE);
- __init_end = .;
- /* freed after init ends here */
_end = . ;
STABS_DEBUG
/* Copy from a not-aligned src to an aligned dst, using shifts. Handles 4 words
* per loop. This code is derived from glibc.
*/
-static inline unsigned long copy_dstaligned(unsigned long dst,
+static noinline unsigned long copy_dstaligned(unsigned long dst,
unsigned long src, unsigned long len)
{
/* gcc complains that a2 and a3 may be uninitialized, but actually
/* Returns PA_MEMCPY_OK, PA_MEMCPY_LOAD_ERROR or PA_MEMCPY_STORE_ERROR.
* In case of an access fault the faulty address can be read from the per_cpu
* exception data struct. */
-static unsigned long pa_memcpy_internal(void *dstp, const void *srcp,
+static noinline unsigned long pa_memcpy_internal(void *dstp, const void *srcp,
unsigned long len)
{
register unsigned long src, dst, t1, t2, t3;
{
unsigned long addr = (unsigned long)src;
- if (size < 0 || addr < PAGE_SIZE)
+ if (addr < PAGE_SIZE)
return -EFAULT;
/* check for I/O space F_EXTEND(0xfff00000) access as well? */
#endif
switch (code) {
case 15: /* Data TLB miss fault/Data page fault */
+ /* send SIGSEGV when outside of vma */
+ if (!vma ||
+ address < vma->vm_start || address > vma->vm_end) {
+ si.si_signo = SIGSEGV;
+ si.si_code = SEGV_MAPERR;
+ break;
+ }
+
+ /* send SIGSEGV for wrong permissions */
+ if ((vma->vm_flags & acc_type) != acc_type) {
+ si.si_signo = SIGSEGV;
+ si.si_code = SEGV_ACCERR;
+ break;
+ }
+
+	/* the address is probably outside of a mapped file */
+ /* fall through */
case 17: /* NA data TLB miss / page fault */
case 18: /* Unaligned access - PCXS only */
si.si_signo = SIGBUS;
- si.si_code = BUS_ADRERR;
+ si.si_code = (code == 18) ? BUS_ADRALN : BUS_ADRERR;
break;
case 16: /* Non-access instruction TLB miss fault */
case 26: /* PCXL: Data memory access rights trap */
default:
si.si_signo = SIGSEGV;
- si.si_code = SEGV_MAPERR;
+ si.si_code = (code == 26) ? SEGV_ACCERR : SEGV_MAPERR;
+ break;
}
si.si_errno = 0;
si.si_addr = (void __user *) address;
#include <asm/sections.h>
extern int data_start;
+extern void parisc_kernel_start(void); /* Kernel entry point in head.S */
#if PT_NLEVELS == 3
/* NOTE: This layout exactly conforms to the hybrid L2/L3 page table layout
reserve_bootmem_node(NODE_DATA(0), 0UL,
(unsigned long)(PAGE0->mem_free +
PDC_CONSOLE_IO_IODC_SIZE), BOOTMEM_DEFAULT);
- reserve_bootmem_node(NODE_DATA(0), __pa((unsigned long)_text),
- (unsigned long)(_end - _text), BOOTMEM_DEFAULT);
+ reserve_bootmem_node(NODE_DATA(0), __pa(KERNEL_BINARY_TEXT_START),
+ (unsigned long)(_end - KERNEL_BINARY_TEXT_START),
+ BOOTMEM_DEFAULT);
reserve_bootmem_node(NODE_DATA(0), (bootmap_start_pfn << PAGE_SHIFT),
((bootmap_pfn - bootmap_start_pfn) << PAGE_SHIFT),
BOOTMEM_DEFAULT);
request_resource(&sysram_resources[0], &pdcdata_resource);
}
+static int __init parisc_text_address(unsigned long vaddr)
+{
+ static unsigned long head_ptr __initdata;
+
+ if (!head_ptr)
+ head_ptr = PAGE_MASK & (unsigned long)
+ dereference_function_descriptor(&parisc_kernel_start);
+
+ return core_kernel_text(vaddr) || vaddr == head_ptr;
+}
+
static void __init map_pages(unsigned long start_vaddr,
unsigned long start_paddr, unsigned long size,
pgprot_t pgprot, int force)
*/
if (force)
pte = __mk_pte(address, pgprot);
- else if (core_kernel_text(vaddr) &&
+ else if (parisc_text_address(vaddr) &&
address != fv_addr)
pte = __mk_pte(address, PAGE_KERNEL_EXEC);
else
GNUTARGET := powerpcle
MULTIPLEWORD := -mno-multiple
else
+ifeq ($(call cc-option-yn,-mbig-endian),y)
override CC += -mbig-endian
override AS += -mbig-endian
+endif
override LD += -EB
LDEMULATION := ppc
GNUTARGET := powerpc
endif
CFLAGS-$(CONFIG_PPC64) := -mtraceback=no -mcall-aixdesc
+CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1)
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcmodel=medium,-mminimal-toc)
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mno-pointers-to-nested-functions)
CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 $(MULTIPLEWORD)
CFLAGS-$(CONFIG_POWER6_CPU) += $(call cc-option,-mcpu=power6)
CFLAGS-$(CONFIG_POWER7_CPU) += $(call cc-option,-mcpu=power7)
+# Altivec option not allowed with e500mc64 in GCC.
+ifeq ($(CONFIG_ALTIVEC),y)
+E5500_CPU := -mcpu=powerpc64
+else
E5500_CPU := $(call cc-option,-mcpu=e500mc64,-mcpu=powerpc64)
+endif
CFLAGS-$(CONFIG_E5500_CPU) += $(E5500_CPU)
CFLAGS-$(CONFIG_E6500_CPU) += $(call cc-option,-mcpu=e6500,$(E5500_CPU))
reg = <0xe2000 0x1000>;
};
-/include/ "qoriq-dma-0.dtsi"
+/include/ "elo3-dma-0.dtsi"
dma@100300 {
fsl,iommu-parent = <&pamu0>;
fsl,liodn-reg = <&guts 0x580>; /* DMA1LIODNR */
};
-/include/ "qoriq-dma-1.dtsi"
+/include/ "elo3-dma-1.dtsi"
dma@101300 {
fsl,iommu-parent = <&pamu0>;
fsl,liodn-reg = <&guts 0x584>; /* DMA2LIODNR */
--- /dev/null
+/*
+ * QorIQ Elo3 DMA device tree stub [ controller @ offset 0x100300 ]
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of Freescale Semiconductor nor the
+ * names of its contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+dma0: dma@100300 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "fsl,elo3-dma";
+ reg = <0x100300 0x4>,
+ <0x100600 0x4>;
+ ranges = <0x0 0x100100 0x500>;
+ dma-channel@0 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x0 0x80>;
+ interrupts = <28 2 0 0>;
+ };
+ dma-channel@80 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x80 0x80>;
+ interrupts = <29 2 0 0>;
+ };
+ dma-channel@100 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x100 0x80>;
+ interrupts = <30 2 0 0>;
+ };
+ dma-channel@180 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x180 0x80>;
+ interrupts = <31 2 0 0>;
+ };
+ dma-channel@300 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x300 0x80>;
+ interrupts = <76 2 0 0>;
+ };
+ dma-channel@380 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x380 0x80>;
+ interrupts = <77 2 0 0>;
+ };
+ dma-channel@400 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x400 0x80>;
+ interrupts = <78 2 0 0>;
+ };
+ dma-channel@480 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x480 0x80>;
+ interrupts = <79 2 0 0>;
+ };
+};
--- /dev/null
+/*
+ * QorIQ Elo3 DMA device tree stub [ controller @ offset 0x101300 ]
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of Freescale Semiconductor nor the
+ * names of its contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+dma1: dma@101300 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "fsl,elo3-dma";
+ reg = <0x101300 0x4>,
+ <0x101600 0x4>;
+ ranges = <0x0 0x101100 0x500>;
+ dma-channel@0 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x0 0x80>;
+ interrupts = <32 2 0 0>;
+ };
+ dma-channel@80 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x80 0x80>;
+ interrupts = <33 2 0 0>;
+ };
+ dma-channel@100 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x100 0x80>;
+ interrupts = <34 2 0 0>;
+ };
+ dma-channel@180 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x180 0x80>;
+ interrupts = <35 2 0 0>;
+ };
+ dma-channel@300 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x300 0x80>;
+ interrupts = <80 2 0 0>;
+ };
+ dma-channel@380 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x380 0x80>;
+ interrupts = <81 2 0 0>;
+ };
+ dma-channel@400 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x400 0x80>;
+ interrupts = <82 2 0 0>;
+ };
+ dma-channel@480 {
+ compatible = "fsl,eloplus-dma-channel";
+ reg = <0x480 0x80>;
+ interrupts = <83 2 0 0>;
+ };
+};
reg = <0xea000 0x4000>;
};
-/include/ "qoriq-dma-0.dtsi"
-/include/ "qoriq-dma-1.dtsi"
+/include/ "elo3-dma-0.dtsi"
+/include/ "elo3-dma-1.dtsi"
/include/ "qoriq-espi-0.dtsi"
spi@110000 {
compatible = "fsl,mpc5121-immr";
#address-cells = <1>;
#size-cells = <1>;
- #interrupt-cells = <2>;
ranges = <0x0 0x80000000 0x400000>;
reg = <0x80000000 0x400000>;
bus-frequency = <66000000>; /* 66 MHz ips bus */
compatible = "fsl,mpc5121-immr";
#address-cells = <1>;
#size-cells = <1>;
- #interrupt-cells = <2>;
ranges = <0x0 0x80000000 0x400000>;
reg = <0x80000000 0x400000>;
bus-frequency = <66000000>; // 66 MHz ips bus
reg = <0xA000 0x1000>;
};
+ // disable USB1 port
+ // TODO:
+ // correct pinmux config and fix USB3320 ulpi dependency
+ // before re-enabling it
usb@3000 {
compatible = "fsl,mpc5121-usb2-dr";
reg = <0x3000 0x400>;
interrupts = <43 0x8>;
dr_mode = "host";
phy_type = "ulpi";
+ status = "disabled";
};
// 5125 PSCs are not 52xx or 5121 PSC compatible
tlu@2f000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x2f000 0x1000>;
- interupts = <61 2 >;
+ interrupts = <61 2>;
interrupt-parent = <&mpic>;
};
tlu@15000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x15000 0x1000>;
- interupts = <75 2>;
+ interrupts = <75 2>;
interrupt-parent = <&mpic>;
};
};
tlu@2f000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x2f000 0x1000>;
- interupts = <61 2 >;
+ interrupts = <61 2>;
interrupt-parent = <&mpic>;
};
tlu@15000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x15000 0x1000>;
- interupts = <75 2>;
+ interrupts = <75 2>;
interrupt-parent = <&mpic>;
};
};
tlu@2f000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x2f000 0x1000>;
- interupts = <61 2 >;
+ interrupts = <61 2>;
interrupt-parent = <&mpic>;
};
tlu@15000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x15000 0x1000>;
- interupts = <75 2>;
+ interrupts = <75 2>;
interrupt-parent = <&mpic>;
};
};
tlu@2f000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x2f000 0x1000>;
- interupts = <61 2 >;
+ interrupts = <61 2>;
interrupt-parent = <&mpic>;
};
tlu@15000 {
compatible = "fsl,mpc8572-tlu", "fsl_tlu";
reg = <0x15000 0x1000>;
- interupts = <75 2>;
+ interrupts = <75 2>;
interrupt-parent = <&mpic>;
};
};
add r4,r4,r5
addi r4,r4,-1
divw r4,r4,r5 /* BUS ticks */
+#ifdef CONFIG_8xx
+1: mftbu r5
+ mftb r6
+ mftbu r7
+#else
1: mfspr r5, SPRN_TBRU
mfspr r6, SPRN_TBRL
mfspr r7, SPRN_TBRU
+#endif
cmpw 0,r5,r7
bne 1b /* Get [synced] base time */
addc r9,r6,r4 /* Compute end time */
addze r8,r5
+#ifdef CONFIG_8xx
+2: mftbu r5
+#else
2: mfspr r5, SPRN_TBRU
+#endif
cmpw 0,r5,r8
blt 2b
bgt 3f
+#ifdef CONFIG_8xx
+ mftb r6
+#else
mfspr r6, SPRN_TBRL
+#endif
cmpw 0,r6,r9
blt 2b
3: blr
CONFIG_PPC_MPC52xx=y
CONFIG_PPC_MPC5200_SIMPLE=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_SPARSE_IRQ=y
CONFIG_PM=y
# CONFIG_PCI is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
CONFIG_USB_STORAGE=y
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC5200_SIMPLE=y
CONFIG_PPC_LITE5200=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SPARSE_IRQ=y
CONFIG_I2C_MPC=y
# CONFIG_HWMON is not set
CONFIG_VIDEO_OUTPUT_CONTROL=m
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC52xx=y
CONFIG_PPC_MPC5200_SIMPLE=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_SPARSE_IRQ=y
CONFIG_PM=y
# CONFIG_PCI is not set
CONFIG_LEDS_TRIGGER_TIMER=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=y
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC52xx=y
CONFIG_PPC_MPC5200_SIMPLE=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_HZ_100=y
CONFIG_USB_STORAGE=m
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_PCF8563=m
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=m
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC5200_SIMPLE=y
CONFIG_PPC_MPC5200_BUGFIX=y
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_PM=y
# CONFIG_PCI is not set
CONFIG_NET=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=y
CONFIG_RTC_DRV_DS1374=y
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_PPC_MPC5200_BUGFIX=y
CONFIG_PPC_MPC5200_LPBFIFO=m
# CONFIG_PPC_PMAC is not set
-CONFIG_PPC_BESTCOMM=y
CONFIG_SIMPLE_GPIO=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_RTC_DRV_DS1307=y
CONFIG_RTC_DRV_DS1374=y
CONFIG_RTC_DRV_PCF8563=m
+CONFIG_DMADEVICES=y
+CONFIG_PPC_BESTCOMM=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_ALTIVEC=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
-CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_INET_ESP=y
# CONFIG_IPV6 is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_SLRAM=y
CONFIG_MTD_PHRAM=y
CONFIG_DM_CRYPT=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
-CONFIG_MII=y
CONFIG_TIGON3=y
CONFIG_E1000=y
CONFIG_PASEMI_MAC=y
CONFIG_NLS_ISO8859_1=y
CONFIG_CRC_CCITT=y
CONFIG_PRINTK_TIME=y
-CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_FS=y
+CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DETECT_HUNG_TASK=y
# CONFIG_SCHED_DEBUG is not set
--- /dev/null
+CONFIG_PPC64=y
+CONFIG_ALTIVEC=y
+CONFIG_VSX=y
+CONFIG_SMP=y
+CONFIG_NR_CPUS=2048
+CONFIG_CPU_LITTLE_ENDIAN=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_AUDIT=y
+CONFIG_AUDITSYSCALL=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_DELAY_ACCT=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_CGROUPS=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+CONFIG_KPROBES=y
+CONFIG_JUMP_LABEL=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SRCVERSION_ALL=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_PPC_SPLPAR=y
+CONFIG_SCANLOG=m
+CONFIG_PPC_SMLPAR=y
+CONFIG_DTL=y
+# CONFIG_PPC_PMAC is not set
+CONFIG_RTAS_FLASH=m
+CONFIG_IBMEBUS=y
+CONFIG_HZ_100=y
+CONFIG_BINFMT_MISC=m
+CONFIG_PPC_TRANSACTIONAL_MEM=y
+CONFIG_KEXEC=y
+CONFIG_IRQ_ALL_CPUS=y
+CONFIG_MEMORY_HOTPLUG=y
+CONFIG_MEMORY_HOTREMOVE=y
+CONFIG_CMA=y
+CONFIG_PPC_64K_PAGES=y
+CONFIG_PPC_SUBPAGE_PROT=y
+CONFIG_SCHED_SMT=y
+CONFIG_HOTPLUG_PCI=y
+CONFIG_HOTPLUG_PCI_RPA=m
+CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=m
+CONFIG_NET_KEY=m
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_NET_IPIP=y
+CONFIG_SYN_COOKIES=y
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+CONFIG_INET_IPCOMP=m
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=m
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_UDPLITE=m
+CONFIG_NF_CONNTRACK_FTP=m
+CONFIG_NF_CONNTRACK_IRC=m
+CONFIG_NF_CONNTRACK_TFTP=m
+CONFIG_NF_CT_NETLINK=m
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
+CONFIG_NETFILTER_XT_TARGET_MARK=m
+CONFIG_NETFILTER_XT_TARGET_NFLOG=m
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_COMMENT=m
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
+CONFIG_NETFILTER_XT_MATCH_DCCP=m
+CONFIG_NETFILTER_XT_MATCH_DSCP=m
+CONFIG_NETFILTER_XT_MATCH_ESP=m
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_HELPER=m
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
+CONFIG_NETFILTER_XT_MATCH_LENGTH=m
+CONFIG_NETFILTER_XT_MATCH_LIMIT=m
+CONFIG_NETFILTER_XT_MATCH_MAC=m
+CONFIG_NETFILTER_XT_MATCH_MARK=m
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
+CONFIG_NETFILTER_XT_MATCH_OWNER=m
+CONFIG_NETFILTER_XT_MATCH_POLICY=m
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
+CONFIG_NETFILTER_XT_MATCH_QUOTA=m
+CONFIG_NETFILTER_XT_MATCH_RATEEST=m
+CONFIG_NETFILTER_XT_MATCH_REALM=m
+CONFIG_NETFILTER_XT_MATCH_RECENT=m
+CONFIG_NETFILTER_XT_MATCH_SCTP=m
+CONFIG_NETFILTER_XT_MATCH_STATE=m
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
+CONFIG_NETFILTER_XT_MATCH_STRING=m
+CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_TIME=m
+CONFIG_NETFILTER_XT_MATCH_U32=m
+CONFIG_NF_CONNTRACK_IPV4=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_AH=m
+CONFIG_IP_NF_MATCH_ECN=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_PROC_DEVICETREE=y
+CONFIG_PARPORT=m
+CONFIG_PARPORT_PC=m
+CONFIG_BLK_DEV_FD=m
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_NBD=m
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=65536
+CONFIG_VIRTIO_BLK=m
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDECD=y
+CONFIG_BLK_DEV_GENERIC=y
+CONFIG_BLK_DEV_AMD74XX=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=y
+CONFIG_BLK_DEV_SR=y
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_FC_ATTRS=y
+CONFIG_SCSI_CXGB3_ISCSI=m
+CONFIG_SCSI_CXGB4_ISCSI=m
+CONFIG_SCSI_BNX2_ISCSI=m
+CONFIG_BE2ISCSI=m
+CONFIG_SCSI_MPT2SAS=m
+CONFIG_SCSI_IBMVSCSI=y
+CONFIG_SCSI_IBMVFC=m
+CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
+CONFIG_SCSI_IPR=y
+CONFIG_SCSI_QLA_FC=m
+CONFIG_SCSI_QLA_ISCSI=m
+CONFIG_SCSI_LPFC=m
+CONFIG_SCSI_VIRTIO=m
+CONFIG_SCSI_DH=m
+CONFIG_SCSI_DH_RDAC=m
+CONFIG_SCSI_DH_ALUA=m
+CONFIG_ATA=y
+# CONFIG_ATA_SFF is not set
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+CONFIG_MD_RAID10=m
+CONFIG_MD_RAID456=m
+CONFIG_MD_MULTIPATH=m
+CONFIG_MD_FAULTY=m
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_CRYPT=m
+CONFIG_DM_SNAPSHOT=m
+CONFIG_DM_MIRROR=m
+CONFIG_DM_ZERO=m
+CONFIG_DM_MULTIPATH=m
+CONFIG_DM_MULTIPATH_QL=m
+CONFIG_DM_MULTIPATH_ST=m
+CONFIG_DM_UEVENT=y
+CONFIG_BONDING=m
+CONFIG_DUMMY=m
+CONFIG_NETCONSOLE=y
+CONFIG_NETPOLL_TRAP=y
+CONFIG_TUN=m
+CONFIG_VIRTIO_NET=m
+CONFIG_VORTEX=y
+CONFIG_ACENIC=m
+CONFIG_ACENIC_OMIT_TIGON_I=y
+CONFIG_PCNET32=y
+CONFIG_TIGON3=y
+CONFIG_CHELSIO_T1=m
+CONFIG_BE2NET=m
+CONFIG_S2IO=m
+CONFIG_IBMVETH=y
+CONFIG_EHEA=y
+CONFIG_E100=y
+CONFIG_E1000=y
+CONFIG_E1000E=y
+CONFIG_IXGB=m
+CONFIG_IXGBE=m
+CONFIG_MLX4_EN=m
+CONFIG_MYRI10GE=m
+CONFIG_QLGE=m
+CONFIG_NETXEN_NIC=m
+CONFIG_PPP=m
+CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPPOE=m
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_EVDEV=m
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_PCSPKR=m
+# CONFIG_SERIO_SERPORT is not set
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_ICOM=m
+CONFIG_SERIAL_JSM=m
+CONFIG_HVC_CONSOLE=y
+CONFIG_HVC_RTAS=y
+CONFIG_HVCS=m
+CONFIG_VIRTIO_CONSOLE=m
+CONFIG_IBM_BSR=m
+CONFIG_GEN_RTC=y
+CONFIG_RAW_DRIVER=y
+CONFIG_MAX_RAW_DEVS=1024
+CONFIG_FB=y
+CONFIG_FIRMWARE_EDID=y
+CONFIG_FB_OF=y
+CONFIG_FB_MATROX=y
+CONFIG_FB_MATROX_MILLENIUM=y
+CONFIG_FB_MATROX_MYSTIQUE=y
+CONFIG_FB_MATROX_G=y
+CONFIG_FB_RADEON=y
+CONFIG_FB_IBM_GXT4500=y
+CONFIG_LCD_PLATFORM=m
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_LOGO=y
+CONFIG_HID_GYRATION=y
+CONFIG_HID_PANTHERLORD=y
+CONFIG_HID_PETALYNX=y
+CONFIG_HID_SAMSUNG=y
+CONFIG_HID_SUNPLUS=y
+CONFIG_USB_HIDDEV=y
+CONFIG_USB=y
+CONFIG_USB_MON=m
+CONFIG_USB_EHCI_HCD=y
+# CONFIG_USB_EHCI_HCD_PPC_OF is not set
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_STORAGE=m
+CONFIG_INFINIBAND=m
+CONFIG_INFINIBAND_USER_MAD=m
+CONFIG_INFINIBAND_USER_ACCESS=m
+CONFIG_INFINIBAND_MTHCA=m
+CONFIG_INFINIBAND_EHCA=m
+CONFIG_INFINIBAND_CXGB3=m
+CONFIG_INFINIBAND_CXGB4=m
+CONFIG_MLX4_INFINIBAND=m
+CONFIG_INFINIBAND_IPOIB=m
+CONFIG_INFINIBAND_IPOIB_CM=y
+CONFIG_INFINIBAND_SRP=m
+CONFIG_INFINIBAND_ISER=m
+CONFIG_VIRTIO_PCI=m
+CONFIG_VIRTIO_BALLOON=m
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+CONFIG_EXT2_FS_SECURITY=y
+CONFIG_EXT2_FS_XIP=y
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_REISERFS_FS=y
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
+CONFIG_JFS_FS=m
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_JFS_SECURITY=y
+CONFIG_XFS_FS=m
+CONFIG_XFS_POSIX_ACL=y
+CONFIG_BTRFS_FS=m
+CONFIG_BTRFS_FS_POSIX_ACL=y
+CONFIG_NILFS2_FS=m
+CONFIG_AUTOFS4_FS=m
+CONFIG_FUSE_FS=m
+CONFIG_ISO9660_FS=y
+CONFIG_UDF_FS=m
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_HUGETLBFS=y
+CONFIG_CRAMFS=m
+CONFIG_SQUASHFS=m
+CONFIG_SQUASHFS_XATTR=y
+CONFIG_SQUASHFS_LZO=y
+CONFIG_SQUASHFS_XZ=y
+CONFIG_PSTORE=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
+CONFIG_NFS_V4=y
+CONFIG_NFSD=m
+CONFIG_NFSD_V3_ACL=y
+CONFIG_NFSD_V4=y
+CONFIG_CIFS=m
+CONFIG_CIFS_XATTR=y
+CONFIG_CIFS_POSIX=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_UTF8=y
+CONFIG_CRC_T10DIF=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_STACK_USAGE=y
+CONFIG_DEBUG_STACKOVERFLOW=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_LATENCYTOP=y
+CONFIG_SCHED_TRACER=y
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_CODE_PATCHING_SELFTEST=y
+CONFIG_FTR_FIXUP_SELFTEST=y
+CONFIG_MSI_BITMAP_SELFTEST=y
+CONFIG_XMON=y
+CONFIG_CRYPTO_TEST=m
+CONFIG_CRYPTO_PCBC=m
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_MICHAEL_MIC=m
+CONFIG_CRYPTO_TGR192=m
+CONFIG_CRYPTO_WP512=m
+CONFIG_CRYPTO_ANUBIS=m
+CONFIG_CRYPTO_BLOWFISH=m
+CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_KHAZAD=m
+CONFIG_CRYPTO_SALSA20=m
+CONFIG_CRYPTO_SERPENT=m
+CONFIG_CRYPTO_TEA=m
+CONFIG_CRYPTO_TWOFISH=m
+CONFIG_CRYPTO_LZO=m
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+CONFIG_CRYPTO_DEV_NX=y
+CONFIG_CRYPTO_DEV_NX_ENCRYPT=m
extern unsigned long randomize_et_dyn(unsigned long base);
#define ELF_ET_DYN_BASE (randomize_et_dyn(0x20000000))
+#define ELF_CORE_EFLAGS (is_elf2_task() ? 2 : 0)
+
/*
* Our registers are always unsigned longs, whether we're a 32 bit
* process or 64 bit, on either a 64 bit or 32 bit kernel.
#ifdef __powerpc64__
# define SET_PERSONALITY(ex) \
do { \
+ if (((ex).e_flags & 0x3) == 2) \
+ set_thread_flag(TIF_ELF2ABI); \
if ((ex).e_ident[EI_CLASS] == ELFCLASS32) \
set_thread_flag(TIF_32BIT); \
else \
subi r1,r1,INT_FRAME_SIZE; /* alloc frame on kernel stack */ \
beq- 1f; \
ld r1,PACAKSAVE(r13); /* kernel stack to use */ \
-1: cmpdi cr1,r1,0; /* check if r1 is in userspace */ \
+1: cmpdi cr1,r1,-INT_FRAME_SIZE; /* check if r1 is in userspace */ \
blt+ cr1,3f; /* abort if it is */ \
li r1,(n); /* will be reloaded later */ \
sth r1,PACA_TRAP_SAVE(r13); \
extern long pSeries_enable_reloc_on_exc(void);
extern long pSeries_disable_reloc_on_exc(void);
+extern long pseries_big_endian_exceptions(void);
+
#else
#define pSeries_enable_reloc_on_exc() do {} while (0)
extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst);
extern int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd);
+extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
+ struct kvm_vcpu *vcpu);
+extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
+ struct kvmppc_book3s_shadow_vcpu *svcpu);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{
ulong vmhandler;
ulong scratch0;
ulong scratch1;
+ ulong scratch2;
u8 in_guest;
u8 restore_hid5;
u8 napping;
};
struct kvmppc_book3s_shadow_vcpu {
+ bool in_use;
ulong gpr[14];
u32 cr;
u32 xer;
int64_t opal_pci_poll(uint64_t phb_id);
int64_t opal_return_cpu(void);
-int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, uint64_t *val);
+int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, __be64 *val);
int64_t opal_xscom_write(uint32_t gcid, uint32_t pcb_addr, uint64_t val);
int64_t opal_lpc_write(uint32_t chip_id, enum OpalLPCAddressType addr_type,
uint32_t addr, uint32_t data, uint32_t sz);
int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type,
- uint32_t addr, uint32_t *data, uint32_t sz);
+ uint32_t addr, __be32 *data, uint32_t sz);
int64_t opal_validate_flash(uint64_t buffer, uint32_t *size, uint32_t *result);
int64_t opal_manage_flash(uint8_t op);
int64_t opal_update_flash(uint64_t blk_list);
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
- struct page *page = page_address(table);
-
tlb_flush_pgtable(tlb, address);
- pgtable_page_dtor(page);
- pgtable_free_tlb(tlb, page, 0);
+ pgtable_page_dtor(table);
+ pgtable_free_tlb(tlb, page_address(table), 0);
}
#endif /* _ASM_POWERPC_PGALLOC_32_H */
unsigned long phys;
unsigned long virt_addr;
};
+extern struct vmemmap_backing *vmemmap_list;
/*
* Functions that deal with pagetables that could be at any level of
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
- struct page *page = page_address(table);
-
tlb_flush_pgtable(tlb, address);
- pgtable_page_dtor(page);
- pgtable_free_tlb(tlb, page, 0);
+ pgtable_page_dtor(table);
+ pgtable_free_tlb(tlb, page_address(table), 0);
}
#else /* if CONFIG_PPC_64K_PAGES */
return plpar_set_mode(0, 3, 0, 0);
}
+/*
+ * Take exceptions in big endian mode on this partition
+ *
+ * Note: this call has a partition wide scope and can take a while to complete.
+ * If it returns H_LONG_BUSY_* it should be retried periodically until it
+ * returns H_SUCCESS.
+ */
+static inline long enable_big_endian_exceptions(void)
+{
+ /* mflags = 0: big endian exceptions */
+ return plpar_set_mode(0, 4, 0, 0);
+}
+
+/*
+ * Take exceptions in little endian mode on this partition
+ *
+ * Note: this call has a partition wide scope and can take a while to complete.
+ * If it returns H_LONG_BUSY_* it should be retried periodically until it
+ * returns H_SUCCESS.
+ */
+static inline long enable_little_endian_exceptions(void)
+{
+ /* mflags = 1: little endian exceptions */
+ return plpar_set_mode(1, 4, 0, 0);
+}
+
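+
Both calls can return H_LONG_BUSY_*, so a caller normally retries with a
delay; a minimal sketch (set_big_endian_exceptions_sync() is a hypothetical
wrapper; H_IS_LONG_BUSY() and get_longbusy_msecs() are the existing hvcall.h
helpers):

	static long set_big_endian_exceptions_sync(void)
	{
		long rc;

		for (;;) {
			rc = enable_big_endian_exceptions();
			if (!H_IS_LONG_BUSY(rc))
				return rc;	/* H_SUCCESS or a hard error */
			mdelay(get_longbusy_msecs(rc));
		}
	}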
static inline long plapr_set_ciabr(unsigned long ciabr)
{
return plpar_set_mode(0, 1, ciabr, 0);
cmpwi dest,0; \
beq- 90b; \
END_FTR_SECTION_NESTED(CPU_FTR_CELL_TB_BUG, CPU_FTR_CELL_TB_BUG, 96)
+#elif defined(CONFIG_8xx)
+#define MFTB(dest) mftb dest
#else
#define MFTB(dest) mfspr dest, SPRN_TBRL
#endif
#else /* __powerpc64__ */
+#if defined(CONFIG_8xx)
+#define mftbl() ({unsigned long rval; \
+ asm volatile("mftbl %0" : "=r" (rval)); rval;})
+#define mftbu() ({unsigned long rval; \
+ asm volatile("mftbu %0" : "=r" (rval)); rval;})
+#else
#define mftbl() ({unsigned long rval; \
asm volatile("mfspr %0, %1" : "=r" (rval) : \
"i" (SPRN_TBRL)); rval;})
#define mftbu() ({unsigned long rval; \
asm volatile("mfspr %0, %1" : "=r" (rval) : \
"i" (SPRN_TBRU)); rval;})
+#endif
#endif /* !__powerpc64__ */
#define mttbl(v) asm volatile("mttbl %0":: "r"(v))
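
With these accessors, 32-bit code samples the 64-bit timebase with the usual
carry-safe retry loop, the C twin of the assembly loop shown earlier
(read_timebase() is a hypothetical helper for illustration):

	static inline u64 read_timebase(void)
	{
		unsigned long hi, lo;

		do {
			hi = mftbu();		/* upper half first */
			lo = mftbl();
		} while (hi != mftbu());	/* retry if TBU carried */

		return ((u64)hi << 32) | lo;
	}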
extern int spinning_secondaries;
extern void cpu_die(void);
+extern int cpu_to_chip_id(int cpu);
#ifdef CONFIG_SMP
}
extern int cpu_to_core_id(int cpu);
-extern int cpu_to_chip_id(int cpu);
/* Since OpenPIC has only 4 IPIs, we use slightly different message numbers.
*
extern void enable_kernel_spe(void);
extern void giveup_spe(struct task_struct *);
extern void load_up_spe(struct task_struct *);
-extern void switch_booke_debug_regs(struct thread_struct *new_thread);
+extern void switch_booke_debug_regs(struct debug_reg *new_debug);
#ifndef CONFIG_SMP
extern void discard_lazy_cpu_state(void);
#define TIF_EMULATE_STACK_STORE 16 /* Is an instruction emulation
for stack store? */
#define TIF_MEMDIE 17 /* is terminating due to OOM killer */
+#if defined(CONFIG_PPC64)
+#define TIF_ELF2ABI 18 /* function descriptors must die! */
+#endif
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define is_32bit_task() (1)
#endif
+#if defined(CONFIG_PPC64)
+#define is_elf2_task() (test_thread_flag(TIF_ELF2ABI))
+#else
+#define is_elf2_task() (0)
+#endif
+
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
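
Together with the SET_PERSONALITY() hunk above, ABI detection reduces to the
low two bits of e_flags; a sketch under that assumption (elf_is_v2() is a
hypothetical helper, not part of the patch):

	#include <linux/elf.h>

	static bool elf_is_v2(const struct elf64_hdr *hdr)
	{
		/* On ppc64, e_flags & 0x3 carries the ELF ABI version;
		 * 2 selects ELFv2, as tested in SET_PERSONALITY(). */
		return (hdr->e_flags & 0x3) == 2;
	}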
ret = 0;
__asm__ __volatile__(
+#ifdef CONFIG_8xx
+ "97: mftb %0\n"
+#else
"97: mfspr %0, %2\n"
+#endif
"99:\n"
".section __ftr_fixup,\"a\"\n"
".align 2\n"
" .long 0\n"
" .long 0\n"
".previous"
+#ifdef CONFIG_8xx
+ : "=r" (ret) : "i" (CPU_FTR_601));
+#else
: "=r" (ret) : "i" (CPU_FTR_601), "i" (SPRN_TBRL));
+#endif
return ret;
#endif
}
#ifdef __KERNEL__
/*
- * The PowerPC can do unaligned accesses itself in big endian mode.
+ * The PowerPC can do unaligned accesses itself based on its endian mode.
*/
#include <linux/unaligned/access_ok.h>
#include <linux/unaligned/generic.h>
+#ifdef __LITTLE_ENDIAN__
+#define get_unaligned __get_unaligned_le
+#define put_unaligned __put_unaligned_le
+#else
#define get_unaligned __get_unaligned_be
#define put_unaligned __put_unaligned_be
+#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_UNALIGNED_H */
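
A minimal usage sketch (read_word() is a hypothetical helper): callers need
no endian-specific code of their own, since get_unaligned() now follows the
kernel's byte order:

	#include <asm/unaligned.h>

	static u32 read_word(const u8 *buf, size_t off)
	{
		/* Safe at any alignment; byte order matches the kernel's
		 * endian mode per the #ifdef above. */
		return get_unaligned((const u32 *)(buf + off));
	}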
HSTATE_FIELD(HSTATE_VMHANDLER, vmhandler);
HSTATE_FIELD(HSTATE_SCRATCH0, scratch0);
HSTATE_FIELD(HSTATE_SCRATCH1, scratch1);
+ HSTATE_FIELD(HSTATE_SCRATCH2, scratch2);
HSTATE_FIELD(HSTATE_IN_GUEST, in_guest);
HSTATE_FIELD(HSTATE_RESTORE_HID5, restore_hid5);
HSTATE_FIELD(HSTATE_NAPPING, napping);
void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
{
unsigned long addr;
- const u32 *basep, *sizep;
+ const __be32 *basep, *sizep;
unsigned int rtas_start = 0, rtas_end = 0;
basep = of_get_property(rtas.dev, "linux,rtas-base", NULL);
sizep = of_get_property(rtas.dev, "rtas-size", NULL);
if (basep && sizep) {
- rtas_start = *basep;
- rtas_end = *basep + *sizep;
+ rtas_start = be32_to_cpup(basep);
+ rtas_end = rtas_start + be32_to_cpup(sizep);
}
for (addr = begin; addr < end; addr += PAGE_SIZE) {
for (i = 0; i < 16; i++)
eeh_ops->read_config(dn, i * 4, 4, &edev->config_space[i]);
+
+ /*
+	 * For PCI bridges, including the root port, we need to enable
+	 * bus mastering explicitly. Otherwise, the bridge can't fetch
+	 * IODA table entries correctly. So we cache the bit in advance
+	 * so that we can restore it after reset, for either a PHB or a
+	 * PE range.
+ */
+ if (edev->mode & EEH_DEV_BRIDGE)
+ edev->config_space[1] |= PCI_COMMAND_MASTER;
}
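
The matching restore step would then write the cached dword (command/status
at config offset 4, now with the bus-master bit forced on) back after reset;
a fragment sketch, assuming the eeh_ops->write_config(dn, offset, size, val)
accessor used above:

	/* Sketch only, not the upstream restore path. */
	eeh_ops->write_config(dn, 4, 4, edev->config_space[1]);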
/**
pe = event->pe;
if (pe) {
eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
- pr_info("EEH: Detected PCI bus error on PHB#%d-PE#%x\n",
- pe->phb->global_number, pe->addr);
+ if (pe->type & EEH_PE_PHB)
+ pr_info("EEH: Detected error on PHB#%d\n",
+ pe->phb->global_number);
+ else
+ pr_info("EEH: Detected PCI bus error on "
+ "PHB#%d-PE#%x\n",
+ pe->phb->global_number, pe->addr);
eeh_handle_event(pe);
eeh_pe_state_clear(pe, EEH_PE_RECOVERING);
} else {
 * of the function that the cpu should jump to in order to continue
* initialization.
*/
+ .balign 8
.globl __secondary_hold_spinloop
__secondary_hold_spinloop:
.llong 0x0
mtctr r8
bctr
+.balign 8
p_end: .llong _end - _stext
4: /* Now copy the rest of the kernel up to _end */
#include <linux/ftrace.h>
#include <asm/machdep.h>
+#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/sections.h>
#ifndef CONFIG_NEED_MULTIPLE_NODES
VMCOREINFO_SYMBOL(contig_page_data);
#endif
+#if defined(CONFIG_PPC64) && defined(CONFIG_SPARSEMEM_VMEMMAP)
+ VMCOREINFO_SYMBOL(vmemmap_list);
+ VMCOREINFO_SYMBOL(mmu_vmemmap_psize);
+ VMCOREINFO_SYMBOL(mmu_psize_defs);
+ VMCOREINFO_STRUCT_SIZE(vmemmap_backing);
+ VMCOREINFO_OFFSET(vmemmap_backing, list);
+ VMCOREINFO_OFFSET(vmemmap_backing, phys);
+ VMCOREINFO_OFFSET(vmemmap_backing, virt_addr);
+ VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
+ VMCOREINFO_OFFSET(mmu_psize_def, shift);
+#endif
}
/*
* a small SLB (128MB) since the crash kernel needs to place
* itself and some stacks to be in the first segment.
*/
- crashk_res.start = min(0x80000000ULL, (ppc64_rma_size / 2));
+ crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
#else
crashk_res.start = KDUMP_KERNELBASE;
#endif
or r3,r7,r9
blr
-#if defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE)
+#ifdef CONFIG_PPC_EARLY_DEBUG_BOOTX
_GLOBAL(rmci_on)
sync
isync
isync
sync
blr
+#endif /* CONFIG_PPC_EARLY_DEBUG_BOOTX */
+
+#if defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE)
/*
* Do an IO access in real mode
printk(KERN_WARNING "--------%s---------\n", label);
printk(KERN_WARNING "indx\t\tsig\tchks\tlen\tname\n");
list_for_each_entry(tmp_part, &nvram_partitions, partition) {
- printk(KERN_WARNING "%4d \t%02x\t%02x\t%d\t%12s\n",
+ printk(KERN_WARNING "%4d \t%02x\t%02x\t%d\t%12.12s\n",
tmp_part->index, tmp_part->header.signature,
tmp_part->header.checksum, tmp_part->header.length,
tmp_part->header.name);
#endif
}
-static void prime_debug_regs(struct thread_struct *thread)
+static void prime_debug_regs(struct debug_reg *debug)
{
/*
* We could have inherited MSR_DE from userspace, since
*/
mtmsr(mfmsr() & ~MSR_DE);
- mtspr(SPRN_IAC1, thread->debug.iac1);
- mtspr(SPRN_IAC2, thread->debug.iac2);
+ mtspr(SPRN_IAC1, debug->iac1);
+ mtspr(SPRN_IAC2, debug->iac2);
#if CONFIG_PPC_ADV_DEBUG_IACS > 2
- mtspr(SPRN_IAC3, thread->debug.iac3);
- mtspr(SPRN_IAC4, thread->debug.iac4);
+ mtspr(SPRN_IAC3, debug->iac3);
+ mtspr(SPRN_IAC4, debug->iac4);
#endif
- mtspr(SPRN_DAC1, thread->debug.dac1);
- mtspr(SPRN_DAC2, thread->debug.dac2);
+ mtspr(SPRN_DAC1, debug->dac1);
+ mtspr(SPRN_DAC2, debug->dac2);
#if CONFIG_PPC_ADV_DEBUG_DVCS > 0
- mtspr(SPRN_DVC1, thread->debug.dvc1);
- mtspr(SPRN_DVC2, thread->debug.dvc2);
+ mtspr(SPRN_DVC1, debug->dvc1);
+ mtspr(SPRN_DVC2, debug->dvc2);
#endif
- mtspr(SPRN_DBCR0, thread->debug.dbcr0);
- mtspr(SPRN_DBCR1, thread->debug.dbcr1);
+ mtspr(SPRN_DBCR0, debug->dbcr0);
+ mtspr(SPRN_DBCR1, debug->dbcr1);
#ifdef CONFIG_BOOKE
- mtspr(SPRN_DBCR2, thread->debug.dbcr2);
+ mtspr(SPRN_DBCR2, debug->dbcr2);
#endif
}
/*
* debug registers, set the debug registers from the values
* stored in the new thread.
*/
-void switch_booke_debug_regs(struct thread_struct *new_thread)
+void switch_booke_debug_regs(struct debug_reg *new_debug)
{
if ((current->thread.debug.dbcr0 & DBCR0_IDM)
- || (new_thread->debug.dbcr0 & DBCR0_IDM))
- prime_debug_regs(new_thread);
+ || (new_debug->dbcr0 & DBCR0_IDM))
+ prime_debug_regs(new_debug);
}
EXPORT_SYMBOL_GPL(switch_booke_debug_regs);
#else /* !CONFIG_PPC_ADV_DEBUG_REGS */
#endif /* CONFIG_SMP */
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
- switch_booke_debug_regs(&new->thread);
+ switch_booke_debug_regs(&new->thread.debug);
#else
/*
* For PPC_BOOK3S_64, we use the hw-breakpoint interfaces that would
printk("MSR: "REG" ", regs->msr);
printbits(regs->msr, msr_bits);
printk(" CR: %08lx XER: %08lx\n", regs->ccr, regs->xer);
-#ifdef CONFIG_PPC64
- printk("SOFTE: %ld\n", regs->softe);
-#endif
trap = TRAP(regs);
if ((regs->trap != 0xc00) && cpu_has_feature(CPU_FTR_CFAR))
- printk("CFAR: "REG"\n", regs->orig_gpr3);
- if (trap == 0x300 || trap == 0x600)
+ printk("CFAR: "REG" ", regs->orig_gpr3);
+ if (trap == 0x200 || trap == 0x300 || trap == 0x600)
#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
- printk("DEAR: "REG", ESR: "REG"\n", regs->dar, regs->dsisr);
+ printk("DEAR: "REG" ESR: "REG" ", regs->dar, regs->dsisr);
#else
- printk("DAR: "REG", DSISR: %08lx\n", regs->dar, regs->dsisr);
+ printk("DAR: "REG" DSISR: %08lx ", regs->dar, regs->dsisr);
+#endif
+#ifdef CONFIG_PPC64
+ printk("SOFTE: %ld ", regs->softe);
+#endif
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ if (MSR_TM_ACTIVE(regs->msr))
+ printk("\nPACATMSCRATCH: %016llx ", get_paca()->tm_scratch);
#endif
for (i = 0; i < 32; i++) {
*/
printk("NIP ["REG"] %pS\n", regs->nip, (void *)regs->nip);
printk("LR ["REG"] %pS\n", regs->link, (void *)regs->link);
-#endif
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- printk("PACATMSCRATCH [%llx]\n", get_paca()->tm_scratch);
#endif
show_stack(current, (unsigned long *) regs->gpr[1]);
if (!user_mode(regs))
regs->msr = MSR_USER;
#else
if (!is_32bit_task()) {
- unsigned long entry, toc;
+ unsigned long entry;
- /* start is a relocated pointer to the function descriptor for
- * the elf _start routine. The first entry in the function
- * descriptor is the entry address of _start and the second
- * entry is the TOC value we need to use.
- */
- __get_user(entry, (unsigned long __user *)start);
- __get_user(toc, (unsigned long __user *)start+1);
+ if (is_elf2_task()) {
+ /* Look ma, no function descriptors! */
+ entry = start;
- /* Check whether the e_entry function descriptor entries
- * need to be relocated before we can use them.
- */
- if (load_addr != 0) {
- entry += load_addr;
- toc += load_addr;
+ /*
+ * Ulrich says:
+ * The latest iteration of the ABI requires that when
+ * calling a function (at its global entry point),
+ * the caller must ensure r12 holds the entry point
+ * address (so that the function can quickly
+ * establish addressability).
+ */
+ regs->gpr[12] = start;
+ /* Make sure that's restored on entry to userspace. */
+ set_thread_flag(TIF_RESTOREALL);
+ } else {
+ unsigned long toc;
+
+ /* start is a relocated pointer to the function
+ * descriptor for the elf _start routine. The first
+ * entry in the function descriptor is the entry
+ * address of _start and the second entry is the TOC
+ * value we need to use.
+ */
+ __get_user(entry, (unsigned long __user *)start);
+ __get_user(toc, (unsigned long __user *)start+1);
+
+ /* Check whether the e_entry function descriptor entries
+ * need to be relocated before we can use them.
+ */
+ if (load_addr != 0) {
+ entry += load_addr;
+ toc += load_addr;
+ }
+ regs->gpr[2] = toc;
}
regs->nip = entry;
- regs->gpr[2] = toc;
regs->msr = MSR_USER64;
} else {
regs->nip = start;
return -1;
}
+/**
+ * cpu_to_chip_id - Return the cpu's chip-id
+ * @cpu: The logical cpu number.
+ *
+ * Return the value of the ibm,chip-id property corresponding to the given
+ * logical cpu number. If the chip-id cannot be found, returns -1.
+ */
+int cpu_to_chip_id(int cpu)
+{
+ struct device_node *np;
+
+ np = of_get_cpu_node(cpu, NULL);
+ if (!np)
+ return -1;
+
+ of_node_put(np);
+ return of_get_ibm_chip_id(np);
+}
+EXPORT_SYMBOL(cpu_to_chip_id);
+
#ifdef CONFIG_PPC_PSERIES
/*
* Fix up the uninitialized fields in a new device node:
flush_fp_to_thread(child);
if (fpidx < (PT_FPSCR - PT_FPR0))
- memcpy(&tmp, &child->thread.fp_state.fpr,
+ memcpy(&tmp, &child->thread.TS_FPR(fpidx),
sizeof(long));
else
tmp = child->thread.fp_state.fpscr;
flush_fp_to_thread(child);
if (fpidx < (PT_FPSCR - PT_FPR0))
- memcpy(&child->thread.fp_state.fpr, &data,
+ memcpy(&child->thread.TS_FPR(fpidx), &data,
sizeof(long));
else
child->thread.fp_state.fpscr = data;
if (machine_is(pseries) && firmware_has_feature(FW_FEATURE_LPAR) &&
(dn = of_find_node_by_path("/rtas"))) {
int num_addr_cell, num_size_cell, maxcpus;
- const unsigned int *ireg;
+ const __be32 *ireg;
num_addr_cell = of_n_addr_cells(dn);
num_size_cell = of_n_size_cells(dn);
if (!ireg)
goto out;
- maxcpus = ireg[num_addr_cell + num_size_cell];
+ maxcpus = be32_to_cpup(ireg + num_addr_cell + num_size_cell);
/* Double maxcpus for processors which have SMT capability */
if (cpu_has_feature(CPU_FTR_SMT))
#endif /* CONFIG_ALTIVEC */
if (copy_fpr_to_user(&frame->mc_fregs, current))
return 1;
+
+ /*
+ * Clear the MSR VSX bit to indicate there is no valid state attached
+ * to this context, except in the specific case below where we set it.
+ */
+ msr &= ~MSR_VSX;
#ifdef CONFIG_VSX
/*
* Copy VSR 0-31 upper half from thread_struct to local
flush_fp_to_thread(current);
/* copy fpr regs and fpscr */
err |= copy_fpr_to_user(&sc->fp_regs, current);
+
+ /*
+ * Clear the MSR VSX bit to indicate there is no valid state attached
+ * to this context, except in the specific case below where we set it.
+ */
+ msr &= ~MSR_VSX;
#ifdef CONFIG_VSX
/*
* Copy VSX low doubleword to local buffer for formatting,
int handle_rt_signal64(int signr, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs *regs)
{
- /* Handler is *really* a pointer to the function descriptor for
- * the signal routine. The first entry in the function
- * descriptor is the entry address of signal and the second
- * entry is the TOC value we need to use.
- */
- func_descr_t __user *funct_desc_ptr;
struct rt_sigframe __user *frame;
unsigned long newsp = 0;
long err = 0;
goto badframe;
regs->link = (unsigned long) &frame->tramp[0];
}
- funct_desc_ptr = (func_descr_t __user *) ka->sa.sa_handler;
/* Allocate a dummy caller frame for the signal handler. */
newsp = ((unsigned long)frame) - __SIGNAL_FRAMESIZE;
err |= put_user(regs->gpr[1], (unsigned long __user *)newsp);
/* Set up "regs" so we "return" to the signal handler. */
- err |= get_user(regs->nip, &funct_desc_ptr->entry);
+ if (is_elf2_task()) {
+ regs->nip = (unsigned long) ka->sa.sa_handler;
+ regs->gpr[12] = regs->nip;
+ } else {
+ /* Handler is *really* a pointer to the function descriptor for
+ * the signal routine. The first entry in the function
+ * descriptor is the entry address of signal and the second
+ * entry is the TOC value we need to use.
+ */
+ func_descr_t __user *funct_desc_ptr =
+ (func_descr_t __user *) ka->sa.sa_handler;
+
+ err |= get_user(regs->nip, &funct_desc_ptr->entry);
+ err |= get_user(regs->gpr[2], &funct_desc_ptr->toc);
+ }
+
/* enter the signal handler in native-endian mode */
regs->msr &= ~MSR_LE;
regs->msr |= (MSR_KERNEL & MSR_LE);
regs->gpr[1] = newsp;
- err |= get_user(regs->gpr[2], &funct_desc_ptr->toc);
regs->gpr[3] = signr;
regs->result = 0;
if (ka->sa.sa_flags & SA_SIGINFO) {
int cpu_to_core_id(int cpu)
{
struct device_node *np;
- const int *reg;
+ const __be32 *reg;
int id = -1;
np = of_get_cpu_node(cpu, NULL);
if (!reg)
goto out;
- id = *reg;
+ id = be32_to_cpup(reg);
out:
of_node_put(np);
return id;
}
-/* Return the value of the chip-id property corresponding
- * to the given logical cpu.
- */
-int cpu_to_chip_id(int cpu)
-{
- struct device_node *np;
-
- np = of_get_cpu_node(cpu, NULL);
- if (!np)
- return -1;
-
- of_node_put(np);
- return of_get_ibm_chip_id(np);
-}
-EXPORT_SYMBOL(cpu_to_chip_id);
-
/* Helper routines for cpu to core mapping */
int cpu_core_index_of_thread(int cpu)
{
if (i == be64_to_cpu(vpa->dtl_idx))
return 0;
while (i < be64_to_cpu(vpa->dtl_idx)) {
- if (dtl_consumer)
- dtl_consumer(dtl, i);
dtb = be64_to_cpu(dtl->timebase);
tb_delta = be32_to_cpu(dtl->enqueue_to_dispatch_time) +
be32_to_cpu(dtl->ready_to_enqueue_time);
}
if (dtb > stop_tb)
break;
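+		/* Consume the entry only after we know it was dispatched before stop_tb. */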
+ if (dtl_consumer)
+ dtl_consumer(dtl, i);
stolen += tb_delta;
++i;
++dtl;
lwz r6,(CFG_TB_ORIG_STAMP+4)(r9)
/* Get a stable TB value */
+#ifdef CONFIG_8xx
+2: mftbu r3
+ mftbl r4
+ mftbu r0
+#else
2: mfspr r3, SPRN_TBRU
mfspr r4, SPRN_TBRL
mfspr r0, SPRN_TBRU
+#endif
cmplw cr0,r3,r0
bne- 2b
/* Size of CR reg in DWARF unwind info. */
#define CRSIZE 4
+/* Offset of CR reg within a full word. */
+#ifdef __LITTLE_ENDIAN__
+#define CROFF 0
+#else
+#define CROFF (RSIZE - CRSIZE)
+#endif
+
/* This is the offset of the VMX reg pointer. */
#define VREGS 48*RSIZE+33*8
rsave (31, 31*RSIZE); \
rsave (67, 32*RSIZE); /* ap, used as temp for nip */ \
rsave (65, 36*RSIZE); /* lr */ \
- rsave (70, 38*RSIZE + (RSIZE - CRSIZE)) /* cr */
+ rsave (68, 38*RSIZE + CROFF); /* cr fields */ \
+ rsave (69, 38*RSIZE + CROFF); \
+ rsave (70, 38*RSIZE + CROFF); \
+ rsave (71, 38*RSIZE + CROFF); \
+ rsave (72, 38*RSIZE + CROFF); \
+ rsave (73, 38*RSIZE + CROFF); \
+ rsave (74, 38*RSIZE + CROFF); \
+ rsave (75, 38*RSIZE + CROFF)
/* Describe where the FP regs are saved. */
#define EH_FRAME_FP \
/* needed to ensure proper operation of coherent allocations
* later, in case driver doesn't set it explicitly */
- dma_set_mask_and_coherent(&viodev->dev, DMA_BIT_MASK(64));
+ dma_coerce_mask_and_coherent(&viodev->dev, DMA_BIT_MASK(64));
}
/* register with generic device framework */
slb_v = vcpu->kvm->arch.vrma_slb_v;
}
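+	/*
+	 * kvmppc_hv_find_lock_hpte() returns with HPTE_V_HVLOCK held;
+	 * stay non-preemptible until the HPTE is unlocked below.
+	 */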
+ preempt_disable();
/* Find the HPTE in the hash table */
index = kvmppc_hv_find_lock_hpte(kvm, eaddr, slb_v,
HPTE_V_VALID | HPTE_V_ABSENT);
- if (index < 0)
+ if (index < 0) {
+ preempt_enable();
return -ENOENT;
+ }
hptep = (unsigned long *)(kvm->arch.hpt_virt + (index << 4));
v = hptep[0] & ~HPTE_V_HVLOCK;
gr = kvm->arch.revmap[index].guest_rpte;
/* Unlock the HPTE */
asm volatile("lwsync" : : : "memory");
hptep[0] = v;
+ preempt_enable();
gpte->eaddr = eaddr;
gpte->vpage = ((v & HPTE_V_AVPN) << 4) | ((eaddr >> 12) & 0xfff);
return -EFAULT;
} else {
page = pages[0];
+ pfn = page_to_pfn(page);
if (PageHuge(page)) {
page = compound_head(page);
pte_size <<= compound_order(page);
}
rcu_read_unlock_sched();
}
- pfn = page_to_pfn(page);
}
ret = -EFAULT;
r = (r & ~(HPTE_R_W|HPTE_R_I|HPTE_R_G)) | HPTE_R_M;
}
- /* Set the HPTE to point to pfn */
- r = (r & ~(HPTE_R_PP0 - pte_size)) | (pfn << PAGE_SHIFT);
+ /*
+ * Set the HPTE to point to pfn.
+ * Since the pfn is at PAGE_SIZE granularity, make sure we
+ * don't mask out lower-order bits if psize < PAGE_SIZE.
+ */
+ if (psize < PAGE_SIZE)
+ psize = PAGE_SIZE;
+ r = (r & ~(HPTE_R_PP0 - psize)) | ((pfn << PAGE_SHIFT) & ~(psize - 1));
if (hpte_is_writable(r) && !write_ok)
r = hpte_make_readonly(r);
ret = RESUME_GUEST;
static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ unsigned long flags;
- spin_lock(&vcpu->arch.tbacct_lock);
+ spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE &&
vc->preempt_tb != TB_NIL) {
vc->stolen_tb += mftb() - vc->preempt_tb;
vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt;
vcpu->arch.busy_preempt = TB_NIL;
}
- spin_unlock(&vcpu->arch.tbacct_lock);
+ spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
}
static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ unsigned long flags;
- spin_lock(&vcpu->arch.tbacct_lock);
+ spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE)
vc->preempt_tb = mftb();
if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST)
vcpu->arch.busy_preempt = mftb();
- spin_unlock(&vcpu->arch.tbacct_lock);
+ spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
}
static void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr)
*/
if (vc->vcore_state != VCORE_INACTIVE &&
vc->runner->arch.run_task != current) {
- spin_lock(&vc->runner->arch.tbacct_lock);
+ spin_lock_irq(&vc->runner->arch.tbacct_lock);
p = vc->stolen_tb;
if (vc->preempt_tb != TB_NIL)
p += now - vc->preempt_tb;
- spin_unlock(&vc->runner->arch.tbacct_lock);
+ spin_unlock_irq(&vc->runner->arch.tbacct_lock);
} else {
p = vc->stolen_tb;
}
core_stolen = vcore_stolen_time(vc, now);
stolen = core_stolen - vcpu->arch.stolen_logged;
vcpu->arch.stolen_logged = core_stolen;
- spin_lock(&vcpu->arch.tbacct_lock);
+ spin_lock_irq(&vcpu->arch.tbacct_lock);
stolen += vcpu->arch.busy_stolen;
vcpu->arch.busy_stolen = 0;
- spin_unlock(&vcpu->arch.tbacct_lock);
+ spin_unlock_irq(&vcpu->arch.tbacct_lock);
if (!dt || !vpa)
return;
memset(dt, 0, sizeof(struct dtl_entry));
if (list_empty(&vcpu->kvm->arch.rtas_tokens))
return RESUME_HOST;
+ idx = srcu_read_lock(&vcpu->kvm->srcu);
rc = kvmppc_rtas_hcall(vcpu);
+ srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (rc == -ENOENT)
return RESUME_HOST;
if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE)
return;
- spin_lock(&vcpu->arch.tbacct_lock);
+ spin_lock_irq(&vcpu->arch.tbacct_lock);
now = mftb();
vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) -
vcpu->arch.stolen_logged;
vcpu->arch.busy_preempt = now;
vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST;
- spin_unlock(&vcpu->arch.tbacct_lock);
+ spin_unlock_irq(&vcpu->arch.tbacct_lock);
--vc->n_runnable;
list_del(&vcpu->arch.run_list);
}
is_io = pa & (HPTE_R_I | HPTE_R_W);
pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK);
pa &= PAGE_MASK;
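+	/* Keep the sub-page offset bits from the guest physical address. */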
+ pa |= gpa & ~PAGE_MASK;
} else {
/* Translate to host virtual address */
hva = __gfn_to_hva_memslot(memslot, gfn);
ptel = hpte_make_readonly(ptel);
is_io = hpte_cache_bits(pte_val(pte));
pa = pte_pfn(pte) << PAGE_SHIFT;
+ pa |= hva & (pte_size - 1);
+ pa |= gpa & ~PAGE_MASK;
}
}
if (pte_size < psize)
return H_PARAMETER;
- if (pa && pte_size > psize)
- pa |= gpa & (pte_size - 1);
ptel &= ~(HPTE_R_PP0 - psize);
ptel |= pa;
20, /* 1M, unsupported */
};
+/* When called from virtmode, this function must be protected by
+ * preempt_disable(); otherwise, holding HPTE_V_HVLOCK across a
+ * preemption can deadlock.
+ */
long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v,
unsigned long valid)
{
13: b machine_check_fwnmi
-
/*
* We come in here when wakened from nap mode on a secondary hw thread.
* Relocation is off and most register values are lost.
/* Clear our vcpu pointer so we don't come back in early */
li r0, 0
std r0, HSTATE_KVM_VCPU(r13)
+ /*
+ * Make sure we clear HSTATE_KVM_VCPU(r13) before incrementing
+ * the nap_count, because once the increment to nap_count is
+ * visible we could be given another vcpu.
+ */
lwsync
/* Clear any pending IPI - we're an offline thread */
ld r5, HSTATE_XICS_PHYS(r13)
/* increment the nap count and then go to nap mode */
ld r4, HSTATE_KVM_VCORE(r13)
addi r4, r4, VCORE_NAP_COUNT
- lwsync /* make previous updates visible */
51: lwarx r3, 0, r4
addi r3, r3, 1
stwcx. r3, 0, r4
* guest CR, R12 saved in shadow VCPU SCRATCH1/0
* guest R13 saved in SPRN_SCRATCH0
*/
- /* abuse host_r2 as third scratch area; we get r2 from PACATOC(r13) */
- std r9, HSTATE_HOST_R2(r13)
+ std r9, HSTATE_SCRATCH2(r13)
lbz r9, HSTATE_IN_GUEST(r13)
cmpwi r9, KVM_GUEST_MODE_HOST_HV
beq kvmppc_bad_host_intr
#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
cmpwi r9, KVM_GUEST_MODE_GUEST
- ld r9, HSTATE_HOST_R2(r13)
+ ld r9, HSTATE_SCRATCH2(r13)
beq kvmppc_interrupt_pr
#endif
/* We're now back in the host but in guest MMU context */
std r6, VCPU_GPR(R6)(r9)
std r7, VCPU_GPR(R7)(r9)
std r8, VCPU_GPR(R8)(r9)
- ld r0, HSTATE_HOST_R2(r13)
+ ld r0, HSTATE_SCRATCH2(r13)
std r0, VCPU_GPR(R9)(r9)
std r10, VCPU_GPR(R10)(r9)
std r11, VCPU_GPR(R11)(r9)
*/
/* Increment the threads-exiting-guest count in the 0xff00
bits of vcore->entry_exit_count */
- lwsync
ld r5,HSTATE_KVM_VCORE(r13)
addi r6,r5,VCORE_ENTRY_EXIT
41: lwarx r3,0,r6
addi r0,r3,0x100
stwcx. r0,0,r6
bne 41b
- lwsync
+ isync /* order stwcx. vs. reading napping_threads */
/*
* At this point we have an interrupt that we have to pass
sld r0,r0,r4
andc. r3,r3,r0 /* no sense IPI'ing ourselves */
beq 43f
+ /* Order entry/exit update vs. IPIs */
+ sync
mulli r4,r4,PACA_SIZE /* get paca for thread 0 */
subf r6,r4,r13
42: andi. r0,r3,1
bge kvm_cede_exit
stwcx. r4,0,r6
bne 31b
+ /* order napping_threads update vs testing entry_exit_count */
+ isync
li r0,1
stb r0,HSTATE_NAPPING(r13)
- /* order napping_threads update vs testing entry_exit_count */
- lwsync
mr r4,r3
lwz r7,VCORE_ENTRY_EXIT(r5)
cmpwi r7,0x100
* R12 = exit handler id
* R13 = PACA
* SVCPU.* = guest *
+ * MSR.EE = 1
*
*/
+ PPC_LL r3, GPR4(r1) /* vcpu pointer */
+
+ /*
+	 * kvmppc_copy_from_svcpu can clobber volatile registers, so save
+	 * the exit handler id in the vcpu and restore it from there later.
+ */
+ stw r12, VCPU_TRAP(r3)
+
/* Transfer reg values from shadow vcpu back to vcpu struct */
/* On 64-bit, interrupts are still off at this point */
- PPC_LL r3, GPR4(r1) /* vcpu pointer */
+
GET_SHADOW_VCPU(r4)
bl FUNC(kvmppc_copy_from_svcpu)
nop
#ifdef CONFIG_PPC_BOOK3S_64
- /* Re-enable interrupts */
- ld r3, HSTATE_HOST_MSR(r13)
- ori r3, r3, MSR_EE
- MTMSR_EERI(r3)
-
/*
* Reload kernel SPRG3 value.
* No need to save guest value as usermode can't modify SPRG3.
*/
ld r3, PACA_SPRG3(r13)
mtspr SPRN_SPRG3, r3
-
#endif /* CONFIG_PPC_BOOK3S_64 */
/* R7 = vcpu */
PPC_STL r31, VCPU_GPR(R31)(r7)
/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
- mr r5, r12
+ lwz r5, VCPU_TRAP(r7)
/* Restore r3 (kvm_run) and r4 (vcpu) */
REST_2GPRS(3, r1)
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb));
svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
+ svcpu->in_use = 0;
svcpu_put(svcpu);
#endif
vcpu->cpu = smp_processor_id();
{
#ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
+ if (svcpu->in_use) {
+ kvmppc_copy_from_svcpu(vcpu, svcpu);
+ }
memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
svcpu_put(svcpu);
svcpu->ctr = vcpu->arch.ctr;
svcpu->lr = vcpu->arch.lr;
svcpu->pc = vcpu->arch.pc;
+ svcpu->in_use = true;
}
/* Copy data touched by real-mode code from shadow vcpu back to vcpu */
void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
struct kvmppc_book3s_shadow_vcpu *svcpu)
{
+ /*
+ * vcpu_put would just call us again because in_use hasn't
+ * been updated yet.
+ */
+ preempt_disable();
+
+ /*
+ * Maybe we were already preempted and synced the svcpu from
+ * our preempt notifiers. Don't bother touching this svcpu then.
+ */
+ if (!svcpu->in_use)
+ goto out;
+
vcpu->arch.gpr[0] = svcpu->gpr[0];
vcpu->arch.gpr[1] = svcpu->gpr[1];
vcpu->arch.gpr[2] = svcpu->gpr[2];
vcpu->arch.fault_dar = svcpu->fault_dar;
vcpu->arch.fault_dsisr = svcpu->fault_dsisr;
vcpu->arch.last_inst = svcpu->last_inst;
+ svcpu->in_use = false;
+
+out:
+ preempt_enable();
}
static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
li r6, MSR_IR | MSR_DR
andc r6, r5, r6 /* Clear DR and IR in MSR value */
-#ifdef CONFIG_PPC_BOOK3S_32
/*
* Set EE in HOST_MSR so that it's enabled when we get into our
- * C exit handler function. On 64-bit we delay enabling
- * interrupts until we have finished transferring stuff
- * to or from the PACA.
+ * C exit handler function.
*/
ori r5, r5, MSR_EE
-#endif
mtsrr0 r7
mtsrr1 r6
RFI
int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
{
int ret, s;
- struct thread_struct thread;
+ struct debug_reg debug;
#ifdef CONFIG_PPC_FPU
struct thread_fp_state fp;
int fpexc_mode;
#endif
/* Switch to guest debug context */
- thread.debug = vcpu->arch.shadow_dbg_reg;
- switch_booke_debug_regs(&thread);
- thread.debug = current->thread.debug;
+ debug = vcpu->arch.shadow_dbg_reg;
+ switch_booke_debug_regs(&debug);
+ debug = current->thread.debug;
current->thread.debug = vcpu->arch.shadow_dbg_reg;
kvmppc_fix_ee_before_entry();
We also get here with interrupts enabled. */
/* Switch back to user space debug context */
- switch_booke_debug_regs(&thread);
- current->thread.debug = thread.debug;
+ switch_booke_debug_regs(&debug);
+ current->thread.debug = debug;
#ifdef CONFIG_PPC_FPU
kvmppc_save_guest_fp(vcpu);
#include <asm/processor.h>
#include <asm/ppc_asm.h>
+#ifdef __BIG_ENDIAN__
+#define sLd sld /* Shift towards low-numbered address. */
+#define sHd srd /* Shift towards high-numbered address. */
+#else
+#define sLd srd /* Shift towards low-numbered address. */
+#define sHd sld /* Shift towards high-numbered address. */
+#endif
+
.align 7
_GLOBAL(__copy_tofrom_user)
BEGIN_FTR_SECTION
24: ld r9,0(r4) /* 3+2n loads, 2+2n stores */
25: ld r0,8(r4)
- sld r6,r9,r10
+ sLd r6,r9,r10
26: ldu r9,16(r4)
- srd r7,r0,r11
- sld r8,r0,r10
+ sHd r7,r0,r11
+ sLd r8,r0,r10
or r7,r7,r6
blt cr6,79f
27: ld r0,8(r4)
28: ld r0,0(r4) /* 4+2n loads, 3+2n stores */
29: ldu r9,8(r4)
- sld r8,r0,r10
+ sLd r8,r0,r10
addi r3,r3,-8
blt cr6,5f
30: ld r0,8(r4)
- srd r12,r9,r11
- sld r6,r9,r10
+ sHd r12,r9,r11
+ sLd r6,r9,r10
31: ldu r9,16(r4)
or r12,r8,r12
- srd r7,r0,r11
- sld r8,r0,r10
+ sHd r7,r0,r11
+ sLd r8,r0,r10
addi r3,r3,16
beq cr6,78f
1: or r7,r7,r6
32: ld r0,8(r4)
76: std r12,8(r3)
-2: srd r12,r9,r11
- sld r6,r9,r10
+2: sHd r12,r9,r11
+ sLd r6,r9,r10
33: ldu r9,16(r4)
or r12,r8,r12
77: stdu r7,16(r3)
- srd r7,r0,r11
- sld r8,r0,r10
+ sHd r7,r0,r11
+ sLd r8,r0,r10
bdnz 1b
78: std r12,8(r3)
or r7,r7,r6
79: std r7,16(r3)
-5: srd r12,r9,r11
+5: sHd r12,r9,r11
or r12,r8,r12
80: std r12,24(r3)
bne 6f
blr
6: cmpwi cr1,r5,8
addi r3,r3,32
- sld r9,r9,r10
+ sLd r9,r9,r10
ble cr1,7f
34: ld r0,8(r4)
- srd r7,r0,r11
+ sHd r7,r0,r11
or r9,r7,r9
7:
bf cr7*4+1,1f
+#ifdef __BIG_ENDIAN__
rotldi r9,r9,32
+#endif
94: stw r9,0(r3)
+#ifdef __LITTLE_ENDIAN__
+ rotrdi r9,r9,32
+#endif
addi r3,r3,4
1: bf cr7*4+2,2f
+#ifdef __BIG_ENDIAN__
rotldi r9,r9,16
+#endif
95: sth r9,0(r3)
+#ifdef __LITTLE_ENDIAN__
+ rotrdi r9,r9,16
+#endif
addi r3,r3,2
2: bf cr7*4+3,3f
+#ifdef __BIG_ENDIAN__
rotldi r9,r9,8
+#endif
96: stb r9,0(r3)
+#ifdef __LITTLE_ENDIAN__
+ rotrdi r9,r9,8
+#endif
3: li r3,0
blr
struct mm_struct *mm = current->mm;
unsigned long addr, len, end;
unsigned long next;
+ unsigned long flags;
pgd_t *pgdp;
int nr = 0;
* So long as we atomically load page table pointers versus teardown,
	 * we can follow the address down to the page and take a ref on it.
*/
- local_irq_disable();
+ local_irq_save(flags);
pgdp = pgd_offset(mm, addr);
do {
break;
} while (pgdp++, addr = next, addr != end);
- local_irq_enable();
+ local_irq_restore(flags);
return nr;
}
struct hstate *hstate = hstate_file(vma->vm_file);
unsigned long tsize = huge_page_shift(hstate) - 10;
- __flush_tlb_page(vma ? vma->vm_mm : NULL, vmaddr, tsize, 0);
-
+ __flush_tlb_page(vma->vm_mm, vmaddr, tsize, 0);
}
slice = GET_HIGH_SLICE_INDEX(addr);
*boundary_addr = (slice + end) ?
((slice + end) << SLICE_HIGH_SHIFT) : SLICE_LOW_TOP;
- return !!(available.high_slices & (1u << slice));
+ return !!(available.high_slices & (1ul << slice));
}
}
void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
{
#ifdef CONFIG_HUGETLB_PAGE
- if (is_vm_hugetlb_page(vma))
+ if (vma && is_vm_hugetlb_page(vma))
flush_hugetlb_page(vma, vmaddr);
#endif
default n
endmenu
+
+choice
+ prompt "Endianness selection"
+ default CPU_BIG_ENDIAN
+ help
+ This option selects whether a big endian or little endian kernel will
+ be built.
+
+config CPU_BIG_ENDIAN
+ bool "Build big endian kernel"
+ help
+ Build a big endian kernel.
+
+ If unsure, select this option.
+
+config CPU_LITTLE_ENDIAN
+ bool "Build little endian kernel"
+ help
+ Build a little endian kernel.
+
+	  Note that if cross-compiling a little endian kernel,
+ CROSS_COMPILE must point to a toolchain capable of targeting
+ little endian powerpc.
+
+endchoice
#include "powernv.h"
#include "pci.h"
-static char *hub_diag = NULL;
static int ioda_eeh_nb_init = 0;
static int ioda_eeh_event(struct notifier_block *nb,
ioda_eeh_nb_init = 1;
}
- /* We needn't HUB diag-data on PHB3 */
- if (phb->type == PNV_PHB_IODA1 && !hub_diag) {
- hub_diag = (char *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
- if (!hub_diag) {
- pr_err("%s: Out of memory !\n", __func__);
- return -ENOMEM;
- }
- }
-
#ifdef CONFIG_DEBUG_FS
if (phb->dbgfs) {
debugfs_create_file("err_injct_outbound", 0600,
static void ioda_eeh_hub_diag(struct pci_controller *hose)
{
struct pnv_phb *phb = hose->private_data;
- struct OpalIoP7IOCErrorData *data;
+ struct OpalIoP7IOCErrorData *data = &phb->diag.hub_diag;
long rc;
- data = (struct OpalIoP7IOCErrorData *)ioda_eeh_hub_diag;
- rc = opal_pci_get_hub_diag_data(phb->hub_id, data, PAGE_SIZE);
+ rc = opal_pci_get_hub_diag_data(phb->hub_id, data, sizeof(*data));
if (rc != OPAL_SUCCESS) {
pr_warning("%s: Failed to get HUB#%llx diag-data (%ld)\n",
__func__, phb->hub_id, rc);
struct OpalIoPhbErrorCommon *common;
long rc;
- common = (struct OpalIoPhbErrorCommon *)phb->diag.blob;
- rc = opal_pci_get_phb_diag_data2(phb->opal_id, common, PAGE_SIZE);
+ rc = opal_pci_get_phb_diag_data2(phb->opal_id, phb->diag.blob,
+ PNV_PCI_DIAG_BUF_SIZE);
if (rc != OPAL_SUCCESS) {
pr_warning("%s: Failed to get diag-data for PHB#%x (%ld)\n",
__func__, hose->global_number, rc);
return;
}
+ common = (struct OpalIoPhbErrorCommon *)phb->diag.blob;
switch (common->ioType) {
case OPAL_PHB_ERROR_DATA_TYPE_P7IOC:
ioda_eeh_p7ioc_phb_diag(hose, common);
static u8 opal_lpc_inb(unsigned long port)
{
int64_t rc;
- uint32_t data;
+ __be32 data;
if (opal_lpc_chip_id < 0 || port > 0xffff)
return 0xff;
rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 1);
- return rc ? 0xff : data;
+ return rc ? 0xff : be32_to_cpu(data);
}
static __le16 __opal_lpc_inw(unsigned long port)
{
int64_t rc;
- uint32_t data;
+ __be32 data;
if (opal_lpc_chip_id < 0 || port > 0xfffe)
return 0xffff;
if (port & 1)
return (__le16)opal_lpc_inb(port) << 8 | opal_lpc_inb(port + 1);
rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 2);
- return rc ? 0xffff : data;
+ return rc ? 0xffff : be32_to_cpu(data);
}
static u16 opal_lpc_inw(unsigned long port)
{
static __le32 __opal_lpc_inl(unsigned long port)
{
int64_t rc;
- uint32_t data;
+ __be32 data;
if (opal_lpc_chip_id < 0 || port > 0xfffc)
return 0xffffffff;
(__le32)opal_lpc_inb(port + 2) << 8 |
opal_lpc_inb(port + 3);
rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 4);
- return rc ? 0xffffffff : data;
+ return rc ? 0xffffffff : be32_to_cpu(data);
}
static u32 opal_lpc_inl(unsigned long port)
{
struct opal_scom_map *m = map;
int64_t rc;
+ __be64 v;
reg = opal_scom_unmangle(reg);
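+	/* OPAL returns the SCOM word big-endian; convert it for the caller. */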
- rc = opal_xscom_read(m->chip, m->addr + reg, (uint64_t *)__pa(value));
+ rc = opal_xscom_read(m->chip, m->addr + reg, (__be64 *)__pa(&v));
+ *value = be64_to_cpu(v);
return opal_xscom_err_xlate(rc);
}
tbl->it_type = TCE_PCI_SWINV_CREATE | TCE_PCI_SWINV_FREE;
}
iommu_init_table(tbl, phb->hose->node);
+ iommu_register_group(tbl, pci_domain_nr(pe->pbus), pe->pe_number);
if (pe->pdev)
set_iommu_table_base(&pe->pdev->dev, tbl);
} ioda;
};
- /* PHB status structure */
+ /* PHB and hub status structure */
union {
unsigned char blob[PNV_PCI_DIAG_BUF_SIZE];
struct OpalIoP7IOCPhbErrorData p7ioc;
+ struct OpalIoP7IOCErrorData hub_diag;
} diag;
+
};
extern struct pci_ops pnv_pci_ops;
#include <asm/io.h>
#include <asm/prom.h>
#include <asm/machdep.h>
+#include <asm/smp.h>
struct powernv_rng {
struct eeh_dev *edev;
struct eeh_pe pe;
struct pci_dn *pdn = PCI_DN(dn);
- const u32 *class_code, *vendor_id, *device_id;
- const u32 *regs;
+ const __be32 *classp, *vendorp, *devicep;
+ u32 class_code;
+ const __be32 *regs;
u32 pcie_flags;
int enable = 0;
int ret;
return NULL;
/* Retrieve class/vendor/device IDs */
- class_code = of_get_property(dn, "class-code", NULL);
- vendor_id = of_get_property(dn, "vendor-id", NULL);
- device_id = of_get_property(dn, "device-id", NULL);
+ classp = of_get_property(dn, "class-code", NULL);
+ vendorp = of_get_property(dn, "vendor-id", NULL);
+ devicep = of_get_property(dn, "device-id", NULL);
/* Skip for bad OF node or PCI-ISA bridge */
- if (!class_code || !vendor_id || !device_id)
+ if (!classp || !vendorp || !devicep)
return NULL;
if (dn->type && !strcmp(dn->type, "isa"))
return NULL;
+ class_code = of_read_number(classp, 1);
+
/*
* Update class code and mode of eeh device. We need
* correctly reflects that current device is root port
* or PCIe switch downstream port.
*/
- edev->class_code = *class_code;
+ edev->class_code = class_code;
edev->pcie_cap = pseries_eeh_find_cap(dn, PCI_CAP_ID_EXP);
edev->mode &= 0xFFFFFF00;
if ((edev->class_code >> 8) == PCI_CLASS_BRIDGE_PCI) {
/* Initialize the fake PE */
memset(&pe, 0, sizeof(struct eeh_pe));
pe.phb = edev->phb;
- pe.config_addr = regs[0];
+ pe.config_addr = of_read_number(regs, 1);
/* Enable EEH on the device */
ret = eeh_ops->set_option(&pe, EEH_OPT_ENABLE);
if (!ret) {
- edev->config_addr = regs[0];
+ edev->config_addr = of_read_number(regs, 1);
/* Retrieve PE address */
edev->pe_config_addr = eeh_ops->get_pe_addr(&pe);
pe.addr = edev->pe_config_addr;
&(ptes[j].pteh), &(ptes[j].ptel));
}
}
+
+#ifdef __LITTLE_ENDIAN__
+ /* Reset exceptions to big endian */
+ if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
+ long rc;
+
+ rc = pseries_big_endian_exceptions();
+ /*
+ * At this point it is unlikely panic() will get anything
+ * out to the user, but at least this will stop us from
+ * continuing on further and creating an even more
+ * difficult to debug situation.
+ */
+ if (rc)
+ panic("Could not enable big endian exceptions");
+ }
+#endif
}
/*
{
struct hvcall_ppp_data ppp_data;
struct device_node *root;
- const int *perf_level;
+ const __be32 *perf_level;
int rc;
rc = h_get_ppp(&ppp_data);
perf_level = of_get_property(root,
"ibm,partition-performance-parameters-level",
NULL);
- if (perf_level && (*perf_level >= 1)) {
+ if (perf_level && (be32_to_cpup(perf_level) >= 1)) {
seq_printf(m,
"physical_procs_allocated_to_virtualization=%d\n",
ppp_data.phys_platform_procs);
int partition_potential_processors;
int partition_active_processors;
struct device_node *rtas_node;
- const int *lrdrp = NULL;
+ const __be32 *lrdrp = NULL;
rtas_node = of_find_node_by_path("/rtas");
if (rtas_node)
if (lrdrp == NULL) {
partition_potential_processors = vdso_data->processorCount;
} else {
- partition_potential_processors = *(lrdrp + 4);
+ partition_potential_processors = be32_to_cpup(lrdrp + 4);
}
of_node_put(rtas_node);
const char *model = "";
const char *system_id = "";
const char *tmp;
- const unsigned int *lp_index_ptr;
+ const __be32 *lp_index_ptr;
unsigned int lp_index = 0;
seq_printf(m, "%s %s\n", MODULE_NAME, MODULE_VERS);
lp_index_ptr = of_get_property(rootdn, "ibm,partition-no",
NULL);
if (lp_index_ptr)
- lp_index = *lp_index_ptr;
+ lp_index = be32_to_cpup(lp_index_ptr);
of_node_put(rootdn);
}
seq_printf(m, "serial_number=%s\n", system_id);
{
struct device_node *dn;
struct pci_dn *pdn;
- const u32 *req_msi;
+ const __be32 *p;
+ u32 req_msi;
pdn = pci_get_pdn(pdev);
if (!pdn)
dn = pdn->node;
- req_msi = of_get_property(dn, prop_name, NULL);
- if (!req_msi) {
+ p = of_get_property(dn, prop_name, NULL);
+ if (!p) {
pr_debug("rtas_msi: No %s on %s\n", prop_name, dn->full_name);
return -ENOENT;
}
- if (*req_msi < nvec) {
+ req_msi = be32_to_cpup(p);
+ if (req_msi < nvec) {
pr_debug("rtas_msi: %s requests < %d MSIs\n", prop_name, nvec);
- if (*req_msi == 0) /* Be paranoid */
+ if (req_msi == 0) /* Be paranoid */
return -ENOSPC;
- return *req_msi;
+ return req_msi;
}
return 0;
static struct device_node *find_pe_total_msi(struct pci_dev *dev, int *total)
{
struct device_node *dn;
- const u32 *p;
+ const __be32 *p;
dn = of_node_get(pci_device_to_OF_node(dev));
while (dn) {
if (p) {
pr_debug("rtas_msi: found prop on dn %s\n",
dn->full_name);
- *total = *p;
+ *total = be32_to_cpup(p);
return dn;
}
static void *count_non_bridge_devices(struct device_node *dn, void *data)
{
struct msi_counts *counts = data;
- const u32 *p;
+ const __be32 *p;
u32 class;
pr_debug("rtas_msi: counting %s\n", dn->full_name);
p = of_get_property(dn, "class-code", NULL);
- class = p ? *p : 0;
+ class = p ? be32_to_cpup(p) : 0;
if ((class >> 8) != PCI_CLASS_BRIDGE_PCI)
counts->num_devices++;
static void *count_spare_msis(struct device_node *dn, void *data)
{
struct msi_counts *counts = data;
- const u32 *p;
+ const __be32 *p;
int req;
if (dn == counts->requestor)
req = 0;
p = of_get_property(dn, "ibm,req#msi", NULL);
if (p)
- req = *p;
+ req = be32_to_cpup(p);
p = of_get_property(dn, "ibm,req#msi-x", NULL);
if (p)
- req = max(req, (int)*p);
+ req = max(req, (int)be32_to_cpup(p));
}
if (req < counts->quota)
static DEFINE_SPINLOCK(nvram_lock);
struct err_log_info {
- int error_type;
- unsigned int seq_num;
+ __be32 error_type;
+ __be32 seq_num;
};
struct nvram_os_partition {
};
struct oops_log_info {
- u16 version;
- u16 report_length;
- u64 timestamp;
+ __be16 version;
+ __be16 report_length;
+ __be64 timestamp;
} __attribute__((packed));
static void oops_to_nvram(struct kmsg_dumper *dumper,
length = part->size;
}
- info.error_type = err_type;
- info.seq_num = error_log_cnt;
+ info.error_type = cpu_to_be32(err_type);
+ info.seq_num = cpu_to_be32(error_log_cnt);
tmp_index = part->index;
}
if (part->os_partition) {
- *error_log_cnt = info.seq_num;
- *err_type = info.error_type;
+ *error_log_cnt = be32_to_cpu(info.seq_num);
+ *err_type = be32_to_cpu(info.error_type);
}
return 0;
pr_err("nvram: logging uncompressed oops/panic report\n");
return -1;
}
- oops_hdr->version = OOPS_HDR_VERSION;
- oops_hdr->report_length = (u16) zipped_len;
- oops_hdr->timestamp = get_seconds();
+ oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION);
+ oops_hdr->report_length = cpu_to_be16(zipped_len);
+ oops_hdr->timestamp = cpu_to_be64(get_seconds());
return 0;
}
clobbering_unread_rtas_event())
return -1;
- oops_hdr->version = OOPS_HDR_VERSION;
- oops_hdr->report_length = (u16) size;
- oops_hdr->timestamp = get_seconds();
+ oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION);
+ oops_hdr->report_length = cpu_to_be16(size);
+ oops_hdr->timestamp = cpu_to_be64(get_seconds());
if (compressed)
err_type = ERR_TYPE_KERNEL_PANIC_GZ;
size_t length, hdr_size;
oops_hdr = (struct oops_log_info *)buff;
- if (oops_hdr->version < OOPS_HDR_VERSION) {
+ if (be16_to_cpu(oops_hdr->version) < OOPS_HDR_VERSION) {
/* Old format oops header had 2-byte record size */
hdr_size = sizeof(u16);
- length = oops_hdr->version;
+ length = be16_to_cpu(oops_hdr->version);
time->tv_sec = 0;
time->tv_nsec = 0;
} else {
hdr_size = sizeof(*oops_hdr);
- length = oops_hdr->report_length;
- time->tv_sec = oops_hdr->timestamp;
+ length = be16_to_cpu(oops_hdr->report_length);
+ time->tv_sec = be64_to_cpu(oops_hdr->timestamp);
time->tv_nsec = 0;
}
*buf = kmalloc(length, GFP_KERNEL);
kmsg_dump_get_buffer(dumper, false,
oops_data, oops_data_sz, &text_len);
err_type = ERR_TYPE_KERNEL_PANIC;
- oops_hdr->version = OOPS_HDR_VERSION;
- oops_hdr->report_length = (u16) text_len;
- oops_hdr->timestamp = get_seconds();
+ oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION);
+ oops_hdr->report_length = cpu_to_be16(text_len);
+ oops_hdr->timestamp = cpu_to_be64(get_seconds());
}
(void) nvram_write_os_partition(&oops_log_partition, oops_buf,
- (int) (sizeof(*oops_hdr) + oops_hdr->report_length), err_type,
+ (int) (sizeof(*oops_hdr) + text_len), err_type,
++oops_count);
spin_unlock_irqrestore(&lock, flags);
{
struct device_node *dn, *pdn;
struct pci_bus *bus;
- const uint32_t *pcie_link_speed_stats;
+ const __be32 *pcie_link_speed_stats;
bus = bridge->bus;
return 0;
for (pdn = dn; pdn != NULL; pdn = of_get_next_parent(pdn)) {
- pcie_link_speed_stats = (const uint32_t *) of_get_property(pdn,
+ pcie_link_speed_stats = of_get_property(pdn,
"ibm,pcie-link-speed-stats", NULL);
if (pcie_link_speed_stats)
break;
return 0;
}
- switch (pcie_link_speed_stats[0]) {
+ switch (be32_to_cpup(pcie_link_speed_stats)) {
case 0x01:
bus->max_bus_speed = PCIE_SPEED_2_5GT;
break;
break;
}
- switch (pcie_link_speed_stats[1]) {
+	switch (be32_to_cpup(pcie_link_speed_stats + 1)) {
case 0x01:
bus->cur_bus_speed = PCIE_SPEED_2_5GT;
break;
#include <linux/of.h>
#include <asm/archrandom.h>
#include <asm/machdep.h>
+#include <asm/plpar_wrappers.h>
static int pseries_get_random_long(unsigned long *v)
}
#endif
+#ifdef __LITTLE_ENDIAN__
+long pseries_big_endian_exceptions(void)
+{
+ long rc;
+
+ while (1) {
+ rc = enable_big_endian_exceptions();
+ if (!H_IS_LONG_BUSY(rc))
+ return rc;
+ mdelay(get_longbusy_msecs(rc));
+ }
+}
+
+static long pseries_little_endian_exceptions(void)
+{
+ long rc;
+
+ while (1) {
+ rc = enable_little_endian_exceptions();
+ if (!H_IS_LONG_BUSY(rc))
+ return rc;
+ mdelay(get_longbusy_msecs(rc));
+ }
+}
+#endif
+
static void __init pSeries_setup_arch(void)
{
panic_timeout = 10;
/* Now try to figure out if we are running on LPAR */
of_scan_flat_dt(pseries_probe_fw_features, NULL);
+#ifdef __LITTLE_ENDIAN__
+ if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
+ long rc;
+ /*
+ * Tell the hypervisor that we want our exceptions to
+ * be taken in little endian mode. If this fails we don't
+ * want to use BUG() because it will trigger an exception.
+ */
+ rc = pseries_little_endian_exceptions();
+ if (rc) {
+ ppc_md.progress("H_SET_MODE LE exception fail", 0);
+ panic("Could not enable little endian exceptions");
+ }
+ }
+#endif
+
if (firmware_has_feature(FW_FEATURE_LPAR))
hpte_init_lpar();
else
#include <linux/of.h>
#include <linux/smp.h>
#include <linux/time.h>
+#include <linux/of_fdt.h>
#include <asm/machdep.h>
#include <asm/udbg.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/io.h>
+#include <linux/of_address.h>
#include "wsp.h"
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/types.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <linux/of.h>
#include <linux/slab.h>
#include <linux/time.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
#include <asm/reg_a2.h>
#include <asm/irq.h>
#include <linux/of.h>
#include <linux/smp.h>
#include <linux/time.h>
+#include <linux/of_fdt.h>
#include <asm/machdep.h>
#include <asm/udbg.h>
#include <linux/of.h>
#include <linux/spinlock.h>
#include <linux/types.h>
+#include <linux/of_address.h>
#include <asm/cputhreads.h>
#include <asm/reg_a2.h>
#include <linux/smp.h>
#include <linux/delay.h>
#include <linux/time.h>
+#include <linux/of_address.h>
#include <asm/scom.h>
if (IS_ERR_VALUE(offset))
continue;
- ocm_blk = kzalloc(sizeof(struct ocm_block *), GFP_KERNEL);
+ ocm_blk = kzalloc(sizeof(struct ocm_block), GFP_KERNEL);
if (!ocm_blk) {
printk(KERN_ERR "PPC4XX OCM: could not allocate ocm block");
rh_free(ocm_reg->rh, offset);
select GENERIC_CPU_DEVICES if !SMP
select GENERIC_FIND_FIRST_BIT
select GENERIC_SMP_IDLE_THREAD
- select GENERIC_TIME_VSYSCALL_OLD
+ select GENERIC_TIME_VSYSCALL
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_JUMP_LABEL if !MARCH_G5
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16 if 32BIT
select HAVE_VIRT_CPU_ACCOUNTING
- select INIT_ALL_POSSIBLE
select KTIME_SCALAR if 32BIT
select MODULES_USE_ELF_RELA
select OLD_SIGACTION
Even if you don't know what to do here, say Y.
config NR_CPUS
- int "Maximum number of CPUs (2-64)"
- range 2 64
+ int "Maximum number of CPUs (2-256)"
+ range 2 256
depends on SMP
default "32" if !64BIT
default "64" if 64BIT
help
This allows you to specify the maximum number of CPUs which this
- kernel will support. The maximum supported value is 64 and the
+ kernel will support. The maximum supported value is 256 and the
minimum value which makes sense is 2.
This is purely to save memory - each supported CPU adds
static char keylen_flag;
struct s390_aes_ctx {
- u8 iv[AES_BLOCK_SIZE];
u8 key[AES_MAX_KEY_SIZE];
long enc;
long dec;
struct s390_xts_ctx {
u8 key[32];
- u8 xts_param[16];
- struct pcc_param pcc;
+ u8 pcc_key[32];
long enc;
long dec;
int key_len;
return aes_set_key(tfm, in_key, key_len);
}
-static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, void *param,
+static int cbc_aes_crypt(struct blkcipher_desc *desc, long func,
struct blkcipher_walk *walk)
{
+ struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
int ret = blkcipher_walk_virt(desc, walk);
unsigned int nbytes = walk->nbytes;
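+	/* KMC takes a parameter block of IV followed by key; build it on the stack. */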
+ struct {
+ u8 iv[AES_BLOCK_SIZE];
+ u8 key[AES_MAX_KEY_SIZE];
+ } param;
if (!nbytes)
goto out;
- memcpy(param, walk->iv, AES_BLOCK_SIZE);
+ memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
+ memcpy(param.key, sctx->key, sctx->key_len);
do {
/* only use complete blocks */
unsigned int n = nbytes & ~(AES_BLOCK_SIZE - 1);
u8 *out = walk->dst.virt.addr;
u8 *in = walk->src.virt.addr;
- ret = crypt_s390_kmc(func, param, out, in, n);
+ ret = crypt_s390_kmc(func, ¶m, out, in, n);
if (ret < 0 || ret != n)
return -EIO;
nbytes &= AES_BLOCK_SIZE - 1;
ret = blkcipher_walk_done(desc, walk, nbytes);
} while ((nbytes = walk->nbytes));
- memcpy(walk->iv, param, AES_BLOCK_SIZE);
+ memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
out:
return ret;
return fallback_blk_enc(desc, dst, src, nbytes);
blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_aes_crypt(desc, sctx->enc, sctx->iv, &walk);
+ return cbc_aes_crypt(desc, sctx->enc, &walk);
}
static int cbc_aes_decrypt(struct blkcipher_desc *desc,
return fallback_blk_dec(desc, dst, src, nbytes);
blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_aes_crypt(desc, sctx->dec, sctx->iv, &walk);
+ return cbc_aes_crypt(desc, sctx->dec, &walk);
}
static struct crypto_alg cbc_aes_alg = {
xts_ctx->enc = KM_XTS_128_ENCRYPT;
xts_ctx->dec = KM_XTS_128_DECRYPT;
memcpy(xts_ctx->key + 16, in_key, 16);
- memcpy(xts_ctx->pcc.key + 16, in_key + 16, 16);
+ memcpy(xts_ctx->pcc_key + 16, in_key + 16, 16);
break;
case 48:
xts_ctx->enc = 0;
xts_ctx->enc = KM_XTS_256_ENCRYPT;
xts_ctx->dec = KM_XTS_256_DECRYPT;
memcpy(xts_ctx->key, in_key, 32);
- memcpy(xts_ctx->pcc.key, in_key + 32, 32);
+ memcpy(xts_ctx->pcc_key, in_key + 32, 32);
break;
default:
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
unsigned int nbytes = walk->nbytes;
unsigned int n;
u8 *in, *out;
- void *param;
+ struct pcc_param pcc_param;
+ struct {
+ u8 key[32];
+ u8 init[16];
+ } xts_param;
if (!nbytes)
goto out;
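+	/* PCC derives the initial XTS tweak from the tweak key and the IV;
+	 * KM then consumes the data key followed by that tweak.
+	 */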
- memset(xts_ctx->pcc.block, 0, sizeof(xts_ctx->pcc.block));
- memset(xts_ctx->pcc.bit, 0, sizeof(xts_ctx->pcc.bit));
- memset(xts_ctx->pcc.xts, 0, sizeof(xts_ctx->pcc.xts));
- memcpy(xts_ctx->pcc.tweak, walk->iv, sizeof(xts_ctx->pcc.tweak));
- param = xts_ctx->pcc.key + offset;
- ret = crypt_s390_pcc(func, param);
+ memset(pcc_param.block, 0, sizeof(pcc_param.block));
+ memset(pcc_param.bit, 0, sizeof(pcc_param.bit));
+ memset(pcc_param.xts, 0, sizeof(pcc_param.xts));
+ memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
+ memcpy(pcc_param.key, xts_ctx->pcc_key, 32);
+ ret = crypt_s390_pcc(func, &pcc_param.key[offset]);
if (ret < 0)
return -EIO;
- memcpy(xts_ctx->xts_param, xts_ctx->pcc.xts, 16);
- param = xts_ctx->key + offset;
+ memcpy(xts_param.key, xts_ctx->key, 32);
+ memcpy(xts_param.init, pcc_param.xts, 16);
do {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
- ret = crypt_s390_km(func, param, out, in, n);
+ ret = crypt_s390_km(func, &xts_param.key[offset], out, in, n);
if (ret < 0 || ret != n)
return -EIO;
: "memory", "cc");
}
+/*
+ * copy_page uses the mvcl instruction with a 0xb0 padding byte in order to
+ * bypass the caches when copying a page. Especially when copying huge pages
+ * this keeps the L1 and L2 data caches alive.
+ */
static inline void copy_page(void *to, void *from)
{
- if (MACHINE_HAS_MVPG) {
- register unsigned long reg0 asm ("0") = 0;
- asm volatile(
- " mvpg %0,%1"
- : : "a" (to), "a" (from), "d" (reg0)
- : "memory", "cc");
- } else
- asm volatile(
- " mvc 0(256,%0),0(%1)\n"
- " mvc 256(256,%0),256(%1)\n"
- " mvc 512(256,%0),512(%1)\n"
- " mvc 768(256,%0),768(%1)\n"
- " mvc 1024(256,%0),1024(%1)\n"
- " mvc 1280(256,%0),1280(%1)\n"
- " mvc 1536(256,%0),1536(%1)\n"
- " mvc 1792(256,%0),1792(%1)\n"
- " mvc 2048(256,%0),2048(%1)\n"
- " mvc 2304(256,%0),2304(%1)\n"
- " mvc 2560(256,%0),2560(%1)\n"
- " mvc 2816(256,%0),2816(%1)\n"
- " mvc 3072(256,%0),3072(%1)\n"
- " mvc 3328(256,%0),3328(%1)\n"
- " mvc 3584(256,%0),3584(%1)\n"
- " mvc 3840(256,%0),3840(%1)\n"
- : : "a" (to), "a" (from) : "memory");
+ register void *reg2 asm ("2") = to;
+ register unsigned long reg3 asm ("3") = 0x1000;
+ register void *reg4 asm ("4") = from;
+ register unsigned long reg5 asm ("5") = 0xb0001000;
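+	/* Low word of reg5: padding byte 0xb0 in the top byte, 0x1000 source length below. */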
+ asm volatile(
+ " mvcl 2,4"
+ : "+d" (reg2), "+d" (reg3), "+d" (reg4), "+d" (reg5)
+ : : "memory", "cc");
}
#define clear_user_page(page, vaddr, pg) clear_page(page)
#include <linux/types.h>
#include <asm/chpid.h>
+#include <asm/cpu.h>
#define SCLP_CHP_INFO_MASK_SIZE 32
unsigned int standby;
unsigned int combined;
int has_cpu_type;
- struct sclp_cpu_entry cpu[255];
+ struct sclp_cpu_entry cpu[MAX_CPU_ADDRESS + 1];
};
int sclp_get_cpu_info(struct sclp_cpu_info *info);
extern void smp_stop_cpu(void);
extern void smp_cpu_set_polarization(int cpu, int val);
extern int smp_cpu_get_polarization(int cpu);
+extern void smp_fill_possible_mask(void);
#else /* CONFIG_SMP */
static inline void smp_yield_cpu(int cpu) { }
static inline void smp_yield(void) { }
static inline void smp_stop_cpu(void) { }
+static inline void smp_fill_possible_mask(void) { }
#endif /* CONFIG_SMP */
__u64 wtom_clock_nsec; /* 0x28 */
__u32 tz_minuteswest; /* Minutes west of Greenwich 0x30 */
__u32 tz_dsttime; /* Type of dst correction 0x34 */
- __u32 ectg_available;
- __u32 ntp_mult; /* NTP adjusted multiplier 0x3C */
+ __u32 ectg_available; /* ECTG instruction present 0x38 */
+ __u32 tk_mult; /* Mult. used for xtime_nsec 0x3c */
+ __u32 tk_shift; /* Shift used for xtime_nsec 0x40 */
};
struct vdso_per_cpu_data {
DEFINE(__VDSO_WTOM_NSEC, offsetof(struct vdso_data, wtom_clock_nsec));
DEFINE(__VDSO_TIMEZONE, offsetof(struct vdso_data, tz_minuteswest));
DEFINE(__VDSO_ECTG_OK, offsetof(struct vdso_data, ectg_available));
- DEFINE(__VDSO_NTP_MULT, offsetof(struct vdso_data, ntp_mult));
+ DEFINE(__VDSO_TK_MULT, offsetof(struct vdso_data, tk_mult));
+ DEFINE(__VDSO_TK_SHIFT, offsetof(struct vdso_data, tk_shift));
DEFINE(__VDSO_ECTG_BASE, offsetof(struct vdso_per_cpu_data, ectg_timer_base));
DEFINE(__VDSO_ECTG_USER, offsetof(struct vdso_per_cpu_data, ectg_user_time));
/* constants used by the vdso */
DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC);
+ DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
DEFINE(__CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
BLANK();
/* idle data offsets */
return -EINVAL;
/* Use regs->psw.mask instead of PSW_USER_BITS to preserve PER bit. */
- regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) |
+ regs->psw.mask = (regs->psw.mask & ~(PSW_MASK_USER | PSW_MASK_RI)) |
(__u64)(user_sregs.regs.psw.mask & PSW32_MASK_USER) << 32 |
(__u64)(user_sregs.regs.psw.mask & PSW32_MASK_RI) << 32 |
(__u64)(user_sregs.regs.psw.addr & PSW32_ADDR_AMODE);
PGM_CHECK_DEFAULT /* 35 */
PGM_CHECK_DEFAULT /* 36 */
PGM_CHECK_DEFAULT /* 37 */
-PGM_CHECK_DEFAULT /* 38 */
+PGM_CHECK_64BIT(do_dat_exception) /* 38 */
PGM_CHECK_64BIT(do_dat_exception) /* 39 */
PGM_CHECK_64BIT(do_dat_exception) /* 3a */
PGM_CHECK_64BIT(do_dat_exception) /* 3b */
setup_vmcoreinfo();
setup_lowcore();
+ smp_fill_possible_mask();
cpu_init();
s390_init_cpu_topology();
return -EINVAL;
/* Use regs->psw.mask instead of PSW_USER_BITS to preserve PER bit. */
- regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) |
+ regs->psw.mask = (regs->psw.mask & ~(PSW_MASK_USER | PSW_MASK_RI)) |
(user_sregs.regs.psw.mask & (PSW_MASK_USER | PSW_MASK_RI));
/* Check for invalid user address space control. */
if ((regs->psw.mask & PSW_MASK_ASC) == PSW_ASC_HOME)
return 0;
}
-static int __init setup_possible_cpus(char *s)
-{
- int max, cpu;
+static unsigned int setup_possible_cpus __initdata;
- if (kstrtoint(s, 0, &max) < 0)
- return 0;
- init_cpu_possible(cpumask_of(0));
- for (cpu = 1; cpu < max && cpu < nr_cpu_ids; cpu++)
- set_cpu_possible(cpu, true);
+static int __init _setup_possible_cpus(char *s)
+{
+ get_option(&s, &setup_possible_cpus);
return 0;
}
-early_param("possible_cpus", setup_possible_cpus);
+early_param("possible_cpus", _setup_possible_cpus);
#ifdef CONFIG_HOTPLUG_CPU
#endif /* CONFIG_HOTPLUG_CPU */
+void __init smp_fill_possible_mask(void)
+{
+ unsigned int possible, cpu;
+
+ possible = setup_possible_cpus;
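+	/* Without a "possible_cpus=" override, assume up to 64
+	 * hot-pluggable CPUs under z/VM, otherwise all configured ids.
+	 */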
+ if (!possible)
+ possible = MACHINE_IS_VM ? 64 : nr_cpu_ids;
+ for (cpu = 0; cpu < possible && cpu < nr_cpu_ids; cpu++)
+ set_cpu_possible(cpu, true);
+}
+
void __init smp_prepare_cpus(unsigned int max_cpus)
{
/* request the 0x1201 emergency signal external interrupt */
set_clock_comparator(S390_lowcore.clock_comparator);
}
-static int s390_next_ktime(ktime_t expires,
+static int s390_next_event(unsigned long delta,
struct clock_event_device *evt)
{
- struct timespec ts;
- u64 nsecs;
-
- ts.tv_sec = ts.tv_nsec = 0;
- monotonic_to_bootbased(&ts);
- nsecs = ktime_to_ns(ktime_add(timespec_to_ktime(ts), expires));
- do_div(nsecs, 125);
- S390_lowcore.clock_comparator = sched_clock_base_cc + (nsecs << 9);
- /* Program the maximum value if we have an overflow (== year 2042) */
- if (unlikely(S390_lowcore.clock_comparator < sched_clock_base_cc))
- S390_lowcore.clock_comparator = -1ULL;
+ S390_lowcore.clock_comparator = get_tod_clock() + delta;
set_clock_comparator(S390_lowcore.clock_comparator);
return 0;
}
cpu = smp_processor_id();
cd = &per_cpu(comparators, cpu);
cd->name = "comparator";
- cd->features = CLOCK_EVT_FEAT_ONESHOT |
- CLOCK_EVT_FEAT_KTIME;
+ cd->features = CLOCK_EVT_FEAT_ONESHOT;
cd->mult = 16777;
cd->shift = 12;
cd->min_delta_ns = 1;
cd->max_delta_ns = LONG_MAX;
cd->rating = 400;
cd->cpumask = cpumask_of(cpu);
- cd->set_next_ktime = s390_next_ktime;
+ cd->set_next_event = s390_next_event;
cd->set_mode = s390_set_mode;
clockevents_register_device(cd);
return &clocksource_tod;
}
-void update_vsyscall_old(struct timespec *wall_time, struct timespec *wtm,
- struct clocksource *clock, u32 mult)
+void update_vsyscall(struct timekeeper *tk)
{
- if (clock != &clocksource_tod)
+ u64 nsecps;
+
+ if (tk->clock != &clocksource_tod)
return;
/* Make userspace gettimeofday spin until we're done. */
++vdso_data->tb_update_count;
smp_wmb();
- vdso_data->xtime_tod_stamp = clock->cycle_last;
- vdso_data->xtime_clock_sec = wall_time->tv_sec;
- vdso_data->xtime_clock_nsec = wall_time->tv_nsec;
- vdso_data->wtom_clock_sec = wtm->tv_sec;
- vdso_data->wtom_clock_nsec = wtm->tv_nsec;
- vdso_data->ntp_mult = mult;
+ vdso_data->xtime_tod_stamp = tk->clock->cycle_last;
+ vdso_data->xtime_clock_sec = tk->xtime_sec;
+ vdso_data->xtime_clock_nsec = tk->xtime_nsec;
+ vdso_data->wtom_clock_sec =
+ tk->xtime_sec + tk->wall_to_monotonic.tv_sec;
+	vdso_data->wtom_clock_nsec = tk->xtime_nsec +
+		(tk->wall_to_monotonic.tv_nsec << tk->shift);
+ nsecps = (u64) NSEC_PER_SEC << tk->shift;
+ while (vdso_data->wtom_clock_nsec >= nsecps) {
+ vdso_data->wtom_clock_nsec -= nsecps;
+ vdso_data->wtom_clock_sec++;
+ }
+ vdso_data->tk_mult = tk->mult;
+ vdso_data->tk_shift = tk->shift;
smp_wmb();
++vdso_data->tb_update_count;
}
psal[i] = 0x80000000;
lowcore->paste[4] = (u32)(addr_t) psal;
- psal[0] = 0x20000000;
+ psal[0] = 0x02000000;
psal[2] = (u32)(addr_t) aste;
*(unsigned long *) (aste + 2) = segment_table +
_ASCE_TABLE_LENGTH + _ASCE_USER_BITS + _ASCE_TYPE_SEGMENT;
sl %r1,__VDSO_XTIME_STAMP+4(%r5)
brc 3,2f
ahi %r0,-1
-2: ms %r0,__VDSO_NTP_MULT(%r5) /* cyc2ns(clock,cycle_delta) */
+2: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */
lr %r2,%r0
- l %r0,__VDSO_NTP_MULT(%r5)
+ l %r0,__VDSO_TK_MULT(%r5)
ltr %r1,%r1
mr %r0,%r0
jnm 3f
- a %r0,__VDSO_NTP_MULT(%r5)
+ a %r0,__VDSO_TK_MULT(%r5)
3: alr %r0,%r2
- srdl %r0,12
- al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */
- al %r1,__VDSO_XTIME_NSEC+4(%r5)
- brc 12,4f
- ahi %r0,1
-4: l %r2,__VDSO_XTIME_SEC+4(%r5)
- al %r0,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic */
+ al %r0,__VDSO_WTOM_NSEC(%r5)
al %r1,__VDSO_WTOM_NSEC+4(%r5)
brc 12,5f
ahi %r0,1
-5: al %r2,__VDSO_WTOM_SEC+4(%r5)
+5: l %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
+ srdl %r0,0(%r2) /* >> tk->shift */
+ l %r2,__VDSO_WTOM_SEC+4(%r5)
cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */
jne 1b
basr %r5,0
sl %r1,__VDSO_XTIME_STAMP+4(%r5)
brc 3,12f
ahi %r0,-1
-12: ms %r0,__VDSO_NTP_MULT(%r5) /* cyc2ns(clock,cycle_delta) */
+12: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */
lr %r2,%r0
- l %r0,__VDSO_NTP_MULT(%r5)
+ l %r0,__VDSO_TK_MULT(%r5)
ltr %r1,%r1
mr %r0,%r0
jnm 13f
- a %r0,__VDSO_NTP_MULT(%r5)
+ a %r0,__VDSO_TK_MULT(%r5)
13: alr %r0,%r2
- srdl %r0,12
- al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */
+ al %r0,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
al %r1,__VDSO_XTIME_NSEC+4(%r5)
brc 12,14f
ahi %r0,1
-14: l %r2,__VDSO_XTIME_SEC+4(%r5)
+14: l %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
+ srdl %r0,0(%r2) /* >> tk->shift */
+ l %r2,__VDSO_XTIME_SEC+4(%r5)
cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */
jne 11b
basr %r5,0
sl %r1,__VDSO_XTIME_STAMP+4(%r5)
brc 3,3f
ahi %r0,-1
-3: ms %r0,__VDSO_NTP_MULT(%r5) /* cyc2ns(clock,cycle_delta) */
+3: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */
st %r0,24(%r15)
- l %r0,__VDSO_NTP_MULT(%r5)
+ l %r0,__VDSO_TK_MULT(%r5)
ltr %r1,%r1
mr %r0,%r0
jnm 4f
- a %r0,__VDSO_NTP_MULT(%r5)
+ a %r0,__VDSO_TK_MULT(%r5)
4: al %r0,24(%r15)
- srdl %r0,12
al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */
al %r1,__VDSO_XTIME_NSEC+4(%r5)
brc 12,5f
5: mvc 24(4,%r15),__VDSO_XTIME_SEC+4(%r5)
cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */
jne 1b
+ l %r4,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
+ srdl %r0,0(%r4) /* >> tk->shift */
l %r4,24(%r15) /* get tv_sec from stack */
basr %r5,0
6: ltr %r0,%r0
je 0f
cghi %r2,__CLOCK_MONOTONIC
je 0f
- cghi %r2,-2 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+ cghi %r2,__CLOCK_THREAD_CPUTIME_ID
+ je 0f
+ cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
jne 2f
larl %r5,_vdso_data
icm %r0,15,__LC_ECTG_OK(%r5)
larl %r5,_vdso_data
cghi %r2,__CLOCK_REALTIME
je 4f
- cghi %r2,-2 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+ cghi %r2,__CLOCK_THREAD_CPUTIME_ID
+ je 9f
+ cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
je 9f
cghi %r2,__CLOCK_MONOTONIC
jne 12f
tmll %r4,0x0001 /* pending update ? loop */
jnz 0b
stck 48(%r15) /* Store TOD clock */
+ lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
+ lg %r0,__VDSO_WTOM_SEC(%r5)
lg %r1,48(%r15)
sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */
- msgf %r1,__VDSO_NTP_MULT(%r5) /* * NTP adjustment */
- srlg %r1,%r1,12 /* cyc2ns(clock,cycle_delta) */
- alg %r1,__VDSO_XTIME_NSEC(%r5) /* + xtime */
- lg %r0,__VDSO_XTIME_SEC(%r5)
- alg %r1,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic */
- alg %r0,__VDSO_WTOM_SEC(%r5)
+ msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */
+ alg %r1,__VDSO_WTOM_NSEC(%r5)
+ srlg %r1,%r1,0(%r2) /* >> tk->shift */
clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */
jne 0b
larl %r5,13f
tmll %r4,0x0001 /* pending update ? loop */
jnz 5b
stck 48(%r15) /* Store TOD clock */
+ lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
lg %r1,48(%r15)
sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */
- msgf %r1,__VDSO_NTP_MULT(%r5) /* * NTP adjustment */
- srlg %r1,%r1,12 /* cyc2ns(clock,cycle_delta) */
- alg %r1,__VDSO_XTIME_NSEC(%r5) /* + xtime */
- lg %r0,__VDSO_XTIME_SEC(%r5)
+ msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */
+ alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
+ srlg %r1,%r1,0(%r2) /* >> tk->shift */
+ lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */
clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */
jne 5b
larl %r5,13f
stck 48(%r15) /* Store TOD clock */
lg %r1,48(%r15)
sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */
- msgf %r1,__VDSO_NTP_MULT(%r5) /* * NTP adjustment */
- srlg %r1,%r1,12 /* cyc2ns(clock,cycle_delta) */
- alg %r1,__VDSO_XTIME_NSEC(%r5) /* + xtime.tv_nsec */
- lg %r0,__VDSO_XTIME_SEC(%r5) /* xtime.tv_sec */
+ msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */
+ alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
+ lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */
clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */
jne 0b
+ lgf %r5,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
+ srlg %r1,%r1,0(%r5) /* >> tk->shift */
larl %r5,5f
2: clg %r1,0(%r5)
jl 3f
* contains the (negative) exception code.
*/
#ifdef CONFIG_64BIT
+
static unsigned long follow_table(struct mm_struct *mm,
unsigned long address, int write)
{
unsigned long *table = (unsigned long *)__pa(mm->pgd);
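+	/* Addresses beyond the ASCE limit have no table entries;
+	 * report an ASCE-type exception (program check 0x38).
+	 */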
+ if (unlikely(address > mm->context.asce_limit - 1))
+ return -0x38UL;
switch (mm->context.asce_bits & _ASCE_TYPE_MASK) {
case _ASCE_TYPE_REGION1:
table = table + ((address >> 53) & 0x7ff);
if (!zdev || zdev->state == ZPCI_FN_STATE_CONFIGURED)
break;
zdev->state = ZPCI_FN_STATE_CONFIGURED;
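+		/* Pick up the current function handle from the event before enabling. */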
+ zdev->fh = ccdf->fh;
ret = zpci_enable_device(zdev);
if (ret)
break;
if (pdev)
pci_stop_and_remove_bus_device(pdev);
+ zdev->fh = ccdf->fh;
zpci_disable_device(zdev);
zdev->state = ZPCI_FN_STATE_STANDBY;
break;
EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(empty_zero_page);
+#ifdef CONFIG_FLATMEM
+/* needed by the pfn_valid macro */
+EXPORT_SYMBOL(min_low_pfn);
+EXPORT_SYMBOL(max_low_pfn);
+#endif
#define DECLARE_EXPORT(name) \
extern void name(void);EXPORT_SYMBOL(name)
checksum.o strlen.o div64.o div64-generic.o
# Extracted from libgcc
-lib-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \
+obj-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \
ashlsi3.o ashrsi3.o ashiftrt.o lshrsi3.o \
udiv_qrnnd.o
}
#define pte_accessible pte_accessible
-static inline unsigned long pte_accessible(pte_t a)
+static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a)
{
return pte_val(a) & _PAGE_VALID;
}
* SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U
* and SUN4V pte layout, so this inline test is fine.
*/
- if (likely(mm != &init_mm) && pte_accessible(orig))
+ if (likely(mm != &init_mm) && pte_accessible(mm, orig))
tlb_batch_add(mm, addr, ptep, orig, fullmm);
}
extern __must_check long strlen_user(const char __user *str);
extern __must_check long strnlen_user(const char __user *str, long n);
-#define __copy_to_user_inatomic ___copy_to_user
-#define __copy_from_user_inatomic ___copy_from_user
+#define __copy_to_user_inatomic __copy_to_user
+#define __copy_from_user_inatomic __copy_from_user
struct pt_regs;
extern unsigned long compute_effective_address(struct pt_regs *,
return 1;
#ifdef CONFIG_PCI
- if (dev->bus == &pci_bus_type)
+ if (dev_is_pci(dev))
return pci64_dma_supported(to_pci_dev(dev), device_mask);
#endif
*/
int dma_supported(struct device *dev, u64 mask)
{
-#ifdef CONFIG_PCI
- if (dev->bus == &pci_bus_type)
+ if (dev_is_pci(dev))
return 1;
-#endif
+
return 0;
}
EXPORT_SYMBOL(dma_supported);
#include <linux/kgdb.h>
#include <linux/kdebug.h>
#include <linux/ftrace.h>
+#include <linux/context_tracking.h>
#include <asm/cacheflush.h>
#include <asm/kdebug.h>
rmb();
set_cpu_online(cpuid, true);
- local_irq_enable();
/* idle thread is expected to have preempt disabled */
preempt_disable();
+ local_irq_enable();
+
cpu_startup_entry(CPUHP_ONLINE);
}
HEADER_ARCH := $(SUBARCH)
-# Additional ARCH settings for x86
-ifeq ($(SUBARCH),i386)
- HEADER_ARCH := x86
+ifneq ($(filter $(SUBARCH),x86 x86_64 i386),)
+ HEADER_ARCH := x86
endif
-ifeq ($(SUBARCH),x86_64)
- HEADER_ARCH := x86
+
+ifdef CONFIG_64BIT
KBUILD_CFLAGS += -mcmodel=large
endif
unsigned long return_address;
};
-static void print_stack_trace(unsigned long *sp, unsigned long bp)
+static void do_stack_trace(unsigned long *sp, unsigned long bp)
{
int reliable;
unsigned long addr;
}
printk(KERN_CONT "\n");
- print_stack_trace(sp, bp);
+ do_stack_trace(sp, bp);
}
select HAVE_AOUT if X86_32
select HAVE_UNSTABLE_SCHED_CLOCK
select ARCH_SUPPORTS_NUMA_BALANCING
+ select ARCH_SUPPORTS_INT128 if X86_64
select ARCH_WANTS_PROT_NUMA_PROT_NONE
select HAVE_IDE
select HAVE_OPROFILE
KBUILD_CFLAGS += -msoft-float -mregparm=3 -freg-struct-return
+ # Don't autogenerate MMX or SSE instructions
+ KBUILD_CFLAGS += -mno-mmx -mno-sse
+
# Never want PIC in a 32-bit kernel, prevent breakage with GCC built
# with nonstandard options
KBUILD_CFLAGS += -fno-pic
KBUILD_AFLAGS += -m64
KBUILD_CFLAGS += -m64
+ # Don't autogenerate MMX or SSE instructions
+ KBUILD_CFLAGS += -mno-mmx -mno-sse
+
# Use -mpreferred-stack-boundary=3 if supported.
- KBUILD_CFLAGS += $(call cc-option,-mno-sse -mpreferred-stack-boundary=3)
+ KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
# How to compile the 16-bit code. Note we always compile for -march=i386,
# that way we can complain to the user if the CPU is insufficient.
-KBUILD_CFLAGS := $(USERINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
+KBUILD_CFLAGS := $(USERINCLUDE) -m32 -g -Os -D_SETUP -D__KERNEL__ \
-DDISABLE_BRANCH_PROFILING \
-Wall -Wstrict-prototypes \
-march=i386 -mregparm=3 \
-include $(srctree)/$(src)/code16gcc.h \
-fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse \
$(call cc-option, -ffreestanding) \
$(call cc-option, -fno-toplevel-reorder,\
- $(call cc-option, -fno-unit-at-a-time)) \
+ $(call cc-option, -fno-unit-at-a-time)) \
$(call cc-option, -fno-stack-protector) \
$(call cc-option, -mpreferred-stack-boundary=2)
-KBUILD_CFLAGS += $(call cc-option, -m32)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
cflags-$(CONFIG_X86_32) := -march=i386
cflags-$(CONFIG_X86_64) := -mcmodel=small
KBUILD_CFLAGS += $(cflags-y)
+KBUILD_CFLAGS += -mno-mmx -mno-sse
KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
#
avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no)
+avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\
+ $(comma)4)$(comma)%ymm2,yes,no)
-obj-$(CONFIG_CRYPTO_ABLK_HELPER_X86) += ablk_helper.o
obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o
obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
+++ /dev/null
-/*
- * Shared async block cipher helpers
- *
- * Copyright (c) 2012 Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
- *
- * Based on aesni-intel_glue.c by:
- * Copyright (C) 2008, Intel Corp.
- * Author: Huang Ying <ying.huang@intel.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
- * USA
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/crypto.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <crypto/algapi.h>
-#include <crypto/cryptd.h>
-#include <asm/i387.h>
-#include <asm/crypto/ablk_helper.h>
-
-int ablk_set_key(struct crypto_ablkcipher *tfm, const u8 *key,
- unsigned int key_len)
-{
- struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
- struct crypto_ablkcipher *child = &ctx->cryptd_tfm->base;
- int err;
-
- crypto_ablkcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
- crypto_ablkcipher_set_flags(child, crypto_ablkcipher_get_flags(tfm)
- & CRYPTO_TFM_REQ_MASK);
- err = crypto_ablkcipher_setkey(child, key, key_len);
- crypto_ablkcipher_set_flags(tfm, crypto_ablkcipher_get_flags(child)
- & CRYPTO_TFM_RES_MASK);
- return err;
-}
-EXPORT_SYMBOL_GPL(ablk_set_key);
-
-int __ablk_encrypt(struct ablkcipher_request *req)
-{
- struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
- struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
- struct blkcipher_desc desc;
-
- desc.tfm = cryptd_ablkcipher_child(ctx->cryptd_tfm);
- desc.info = req->info;
- desc.flags = 0;
-
- return crypto_blkcipher_crt(desc.tfm)->encrypt(
- &desc, req->dst, req->src, req->nbytes);
-}
-EXPORT_SYMBOL_GPL(__ablk_encrypt);
-
-int ablk_encrypt(struct ablkcipher_request *req)
-{
- struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
- struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
-
- if (!irq_fpu_usable()) {
- struct ablkcipher_request *cryptd_req =
- ablkcipher_request_ctx(req);
-
- memcpy(cryptd_req, req, sizeof(*req));
- ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base);
-
- return crypto_ablkcipher_encrypt(cryptd_req);
- } else {
- return __ablk_encrypt(req);
- }
-}
-EXPORT_SYMBOL_GPL(ablk_encrypt);
-
-int ablk_decrypt(struct ablkcipher_request *req)
-{
- struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
- struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
-
- if (!irq_fpu_usable()) {
- struct ablkcipher_request *cryptd_req =
- ablkcipher_request_ctx(req);
-
- memcpy(cryptd_req, req, sizeof(*req));
- ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base);
-
- return crypto_ablkcipher_decrypt(cryptd_req);
- } else {
- struct blkcipher_desc desc;
-
- desc.tfm = cryptd_ablkcipher_child(ctx->cryptd_tfm);
- desc.info = req->info;
- desc.flags = 0;
-
- return crypto_blkcipher_crt(desc.tfm)->decrypt(
- &desc, req->dst, req->src, req->nbytes);
- }
-}
-EXPORT_SYMBOL_GPL(ablk_decrypt);
-
-void ablk_exit(struct crypto_tfm *tfm)
-{
- struct async_helper_ctx *ctx = crypto_tfm_ctx(tfm);
-
- cryptd_free_ablkcipher(ctx->cryptd_tfm);
-}
-EXPORT_SYMBOL_GPL(ablk_exit);
-
-int ablk_init_common(struct crypto_tfm *tfm, const char *drv_name)
-{
- struct async_helper_ctx *ctx = crypto_tfm_ctx(tfm);
- struct cryptd_ablkcipher *cryptd_tfm;
-
- cryptd_tfm = cryptd_alloc_ablkcipher(drv_name, 0, 0);
- if (IS_ERR(cryptd_tfm))
- return PTR_ERR(cryptd_tfm);
-
- ctx->cryptd_tfm = cryptd_tfm;
- tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request) +
- crypto_ablkcipher_reqsize(&cryptd_tfm->base);
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(ablk_init_common);
-
-int ablk_init(struct crypto_tfm *tfm)
-{
- char drv_name[CRYPTO_MAX_ALG_NAME];
-
- snprintf(drv_name, sizeof(drv_name), "__driver-%s",
- crypto_tfm_alg_driver_name(tfm));
-
- return ablk_init_common(tfm, drv_name);
-}
-EXPORT_SYMBOL_GPL(ablk_init);
-
-MODULE_LICENSE("GPL");
#include <asm/cpu_device_id.h>
#include <asm/i387.h>
#include <asm/crypto/aes.h>
-#include <asm/crypto/ablk_helper.h>
+#include <crypto/ablk_helper.h>
#include <crypto/scatterwalk.h>
#include <crypto/internal/aead.h>
#include <linux/workqueue.h>
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/ctr.h>
#include <crypto/lrw.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
#include <asm/crypto/camellia.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
#define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/ctr.h>
#include <crypto/lrw.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
#include <asm/crypto/camellia.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
#define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/cast5.h>
#include <crypto/cryptd.h>
#include <crypto/ctr.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
#define CAST5_PARALLEL_BLOCKS 16
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/cast6.h>
#include <crypto/cryptd.h>
#include <crypto/xts.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
#define CAST6_PARALLEL_BLOCKS 8
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/ctr.h>
#include <crypto/lrw.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
#include <asm/crypto/serpent-avx.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
#define SERPENT_AVX2_PARALLEL_BLOCKS 16
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/serpent.h>
#include <crypto/cryptd.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
#include <asm/crypto/serpent-avx.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
/* 8-way parallel cipher functions */
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/serpent.h>
#include <crypto/cryptd.h>
#include <crypto/lrw.h>
#include <crypto/xts.h>
#include <asm/crypto/serpent-sse2.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
static void serpent_decrypt_cbc_xway(void *ctx, u128 *dst, const u128 *src)
/* allow AVX to override SSSE3, it's a little faster */
if (avx_usable()) {
#ifdef CONFIG_AS_AVX2
- if (boot_cpu_has(X86_FEATURE_AVX2))
+ if (boot_cpu_has(X86_FEATURE_AVX2) && boot_cpu_has(X86_FEATURE_BMI2))
sha256_transform_asm = sha256_transform_rorx;
else
#endif
MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm, Supplemental SSE3 accelerated");
MODULE_ALIAS("sha256");
-MODULE_ALIAS("sha384");
+MODULE_ALIAS("sha224");
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
+#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <crypto/twofish.h>
#include <crypto/cryptd.h>
#include <asm/xcr.h>
#include <asm/xsave.h>
#include <asm/crypto/twofish.h>
-#include <asm/crypto/ablk_helper.h>
#include <asm/crypto/glue_helper.h>
#include <crypto/scatterwalk.h>
#include <linux/workqueue.h>
*/
static inline int atomic_sub_and_test(int i, atomic_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, i, "%0", "e");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
}
/**
*/
static inline int atomic_add_negative(int i, atomic_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, i, "%0", "s");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
}
/**
*/
static inline int atomic64_sub_and_test(long i, atomic64_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, i, "%0", "e");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", "e");
}
/**
*/
static inline int atomic64_add_negative(long i, atomic64_t *v)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, i, "%0", "s");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", "s");
}
/**
*/
static inline int test_and_set_bit(long nr, volatile unsigned long *addr)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, nr, "%0", "c");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");
}
/**
*/
static inline int test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, nr, "%0", "c");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", "c");
}
/**
*/
static inline int test_and_change_bit(long nr, volatile unsigned long *addr)
{
- GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, nr, "%0", "c");
+ GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, "Ir", nr, "%0", "c");
}
static __always_inline int constant_test_bit(long nr, const volatile unsigned long *addr)
+++ /dev/null
-/*
- * Shared async block cipher helpers
- */
-
-#ifndef _CRYPTO_ABLK_HELPER_H
-#define _CRYPTO_ABLK_HELPER_H
-
-#include <linux/crypto.h>
-#include <linux/kernel.h>
-#include <crypto/cryptd.h>
-
-struct async_helper_ctx {
- struct cryptd_ablkcipher *cryptd_tfm;
-};
-
-extern int ablk_set_key(struct crypto_ablkcipher *tfm, const u8 *key,
- unsigned int key_len);
-
-extern int __ablk_encrypt(struct ablkcipher_request *req);
-
-extern int ablk_encrypt(struct ablkcipher_request *req);
-
-extern int ablk_decrypt(struct ablkcipher_request *req);
-
-extern void ablk_exit(struct crypto_tfm *tfm);
-
-extern int ablk_init_common(struct crypto_tfm *tfm, const char *drv_name);
-
-extern int ablk_init(struct crypto_tfm *tfm);
-
-#endif /* _CRYPTO_ABLK_HELPER_H */
*/
static inline int local_sub_and_test(long i, local_t *l)
{
- GEN_BINARY_RMWcc(_ASM_SUB, l->a.counter, i, "%0", "e");
+ GEN_BINARY_RMWcc(_ASM_SUB, l->a.counter, "er", i, "%0", "e");
}
/**
*/
static inline int local_add_negative(long i, local_t *l)
{
- GEN_BINARY_RMWcc(_ASM_ADD, l->a.counter, i, "%0", "s");
+ GEN_BINARY_RMWcc(_ASM_ADD, l->a.counter, "er", i, "%0", "s");
}
/**
int domain; /* PCI domain */
int node; /* NUMA node */
#ifdef CONFIG_ACPI
- void *acpi; /* ACPI-specific data */
+ struct acpi_device *companion; /* ACPI companion device */
#endif
#ifdef CONFIG_X86_64
void *iommu; /* IOMMU private data */
}
#define pte_accessible pte_accessible
-static inline int pte_accessible(pte_t a)
+static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
- return pte_flags(a) & _PAGE_PRESENT;
+ if (pte_flags(a) & _PAGE_PRESENT)
+ return true;
+
+ if ((pte_flags(a) & (_PAGE_PROTNONE | _PAGE_NUMA)) &&
+ mm_tlb_flush_pending(mm))
+ return true;
+
+ return false;
}
static inline int pte_hidden(pte_t pte)
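
The new mm argument lets pte_accessible() treat PROT_NONE/NUMA ptes as still
live while a TLB flush is pending, since remote CPUs may hold stale
translations until the flush completes. A sketch of the assumed write-side
pairing that mm_tlb_flush_pending() checks against (the generic pattern, not
code from this hunk):

    set_tlb_flush_pending(mm);              /* before ptes are made PROT_NONE */
    /* ... rewrite the ptes in [start, end) for NUMA hinting ... */
    flush_tlb_range(vma, start, end);       /* remote stale entries now gone */
    clear_tlb_flush_pending(mm);            /* pte_accessible() can relax again */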
DECLARE_PER_CPU(int, __preempt_count);
+/*
+ * We use the PREEMPT_NEED_RESCHED bit as an inverted NEED_RESCHED such
+ * that a decrement hitting 0 means we can and should reschedule.
+ */
+#define PREEMPT_ENABLED (0 + PREEMPT_NEED_RESCHED)
+
/*
* We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users
* that think a non-zero value indicates we cannot preempt.
__this_cpu_add_4(__preempt_count, -val);
}
+/*
+ * Because we keep PREEMPT_NEED_RESCHED set when we do _not_ need to
+ * reschedule, a decrement which hits zero means we have no preempt_count
+ * left and should reschedule.
+ */
static __always_inline bool __preempt_count_dec_and_test(void)
{
GEN_UNARY_RMWcc("decl", __preempt_count, __percpu_arg(0), "e");
#define GEN_UNARY_RMWcc(op, var, arg0, cc) \
__GEN_RMWcc(op " " arg0, var, cc)
-#define GEN_BINARY_RMWcc(op, var, val, arg0, cc) \
- __GEN_RMWcc(op " %1, " arg0, var, cc, "er" (val))
+#define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
+ __GEN_RMWcc(op " %1, " arg0, var, cc, vcon (val))
#else /* !CC_HAVE_ASM_GOTO */
#define GEN_UNARY_RMWcc(op, var, arg0, cc) \
__GEN_RMWcc(op " " arg0, var, cc)
-#define GEN_BINARY_RMWcc(op, var, val, arg0, cc) \
- __GEN_RMWcc(op " %2, " arg0, var, cc, "er" (val))
+#define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
+ __GEN_RMWcc(op " %2, " arg0, var, cc, vcon (val))
#endif /* CC_HAVE_ASM_GOTO */
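
The extra vcon parameter threads the correct inline-asm constraint through to
the value operand: "er" for the add/sub immediates, "Ir" for bit numbers,
which must not be forced into a register-or-extended-immediate constraint.
Under CC_HAVE_ASM_GOTO a call now expands roughly along these lines (a
simplified sketch of the macro output, not its verbatim expansion):

    static inline int example_sub_and_test(atomic_t *v, int i)
    {
            /* GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter,
             *                  "er", i, "%0", "e") becomes approximately: */
            asm_volatile_goto(LOCK_PREFIX "subl %1, %0; je %l[cc_label]"
                              : : "m" (v->counter), "er" (i)
                              : "memory" : cc_label);
            return 0;
    cc_label:
            return 1;
    }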
--- /dev/null
+
+#include <asm/i387.h>
+
+/*
+ * may_use_simd - whether it is allowable at this time to issue SIMD
+ * instructions or access the SIMD register file
+ */
+static __must_check inline bool may_use_simd(void)
+{
+ return irq_fpu_usable();
+}
*/
DEFINE_IRQ_VECTOR_EVENT(irq_work);
+/*
+ * We must disallow sampling irq_work_exit() because perf event sampling
+ * itself can cause irq_work, which would lead to an infinite loop;
+ *
+ * 1) irq_work_exit happens
+ * 2) generates perf sample
+ * 3) generates irq_work
+ * 4) goto 1
+ */
+TRACE_EVENT_PERF_PERM(irq_work_exit, is_sampling_event(p_event) ? -EPERM : 0);
+
/*
* call_function - called when entering/exiting a call function interrupt
* vector handler
#define MSR_PP1_ENERGY_STATUS 0x00000641
#define MSR_PP1_POLICY 0x00000642
+#define MSR_CORE_C1_RES 0x00000660
+
#define MSR_AMD64_MC0_MASK 0xc0010044
#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x))
set_cpu_cap(c, X86_FEATURE_PEBS);
}
- if (c->x86 == 6 && c->x86_model == 29 && cpu_has_clflush)
+ if (c->x86 == 6 && cpu_has_clflush &&
+ (c->x86_model == 29 || c->x86_model == 46 || c->x86_model == 47))
set_cpu_cap(c, X86_FEATURE_CLFLUSH_MONITOR);
#ifdef CONFIG_X86_64
__EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \
HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW)
-#define EVENT_CONSTRAINT_END \
- EVENT_CONSTRAINT(0, 0, 0)
+/*
+ * We define the end marker as having a weight of -1
+ * to enable blacklisting of events using a counter bitmask
+ * of zero and thus a weight of zero.
+ * The end marker has a weight that cannot possibly be
+ * obtained from counting the bits in the bitmask.
+ */
+#define EVENT_CONSTRAINT_END { .weight = -1 }
+/*
+ * Check for end marker with weight == -1
+ */
#define for_each_event_constraint(e, c) \
- for ((e) = (c); (e)->weight; (e)++)
+ for ((e) = (c); (e)->weight != -1; (e)++)
/*
* Extra registers for specific events.
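
With the end marker now defined by weight == -1, a weight of zero is freed up
to mean "no usable counters", so a table entry can blacklist an event without
terminating iteration. A hypothetical table (constraint macro names as used
elsewhere in the perf code):

    static struct event_constraint example_constraints[] = {
            INTEL_EVENT_CONSTRAINT(0x3c, 0x3),  /* cycles: counters 0-1 */
            INTEL_EVENT_CONSTRAINT(0xc0, 0x0),  /* blacklisted: weight 0 */
            EVENT_CONSTRAINT_END                /* .weight = -1 ends iteration */
    };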
INTEL_I915GM_IDS(gen3_stolen_size),
INTEL_I945G_IDS(gen3_stolen_size),
INTEL_I945GM_IDS(gen3_stolen_size),
- INTEL_VLV_M_IDS(gen3_stolen_size),
- INTEL_VLV_D_IDS(gen3_stolen_size),
+ INTEL_VLV_M_IDS(gen6_stolen_size),
+ INTEL_VLV_D_IDS(gen6_stolen_size),
INTEL_PINEVIEW_IDS(gen3_stolen_size),
INTEL_I965G_IDS(gen3_stolen_size),
INTEL_G33_IDS(gen3_stolen_size),
{
/* Stop the cpus and apics */
#ifdef CONFIG_X86_IO_APIC
+ /*
+ * Disabling IO APIC before local APIC is a workaround for
+ * erratum AVR31 in "Intel Atom Processor C2000 Product Family
+ * Specification Update". In this situation, interrupts that target
+ * a Logical Processor whose Local APIC is either in the process of
+ * being hardware disabled or software disabled are neither delivered
+ * nor discarded. When this erratum occurs, the processor may hang.
+ *
+ * Even without the erratum, it still makes sense to quiet the IO APIC
+ * before disabling the local APIC.
+ */
disable_IO_APIC();
#endif
return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
}
+#define KVM_X2APIC_CID_BITS 0
+
static void recalculate_apic_map(struct kvm *kvm)
{
struct kvm_apic_map *new, *old = NULL;
if (apic_x2apic_mode(apic)) {
new->ldr_bits = 32;
new->cid_shift = 16;
- new->cid_mask = new->lid_mask = 0xffff;
+ new->cid_mask = (1 << KVM_X2APIC_CID_BITS) - 1;
+ new->lid_mask = 0xffff;
} else if (kvm_apic_sw_enabled(apic) &&
!new->cid_mask /* flat mode */ &&
kvm_apic_get_reg(apic, APIC_DFR) == APIC_DFR_CLUSTER) {
ASSERT(apic != NULL);
/* if initial count is 0, current count should also be 0 */
- if (kvm_apic_get_reg(apic, APIC_TMICT) == 0)
+ if (kvm_apic_get_reg(apic, APIC_TMICT) == 0 ||
+ apic->lapic_timer.period == 0)
return 0;
remaining = hrtimer_get_remaining(&apic->lapic_timer.timer);
return;
}
+ if (!kvm_vcpu_is_bsp(apic->vcpu))
+ value &= ~MSR_IA32_APICBASE_BSP;
+ vcpu->arch.apic_base = value;
+
/* update jump label if enable bit changes */
if ((vcpu->arch.apic_base ^ value) & MSR_IA32_APICBASE_ENABLE) {
if (value & MSR_IA32_APICBASE_ENABLE)
recalculate_apic_map(vcpu->kvm);
}
- if (!kvm_vcpu_is_bsp(apic->vcpu))
- value &= ~MSR_IA32_APICBASE_BSP;
-
- vcpu->arch.apic_base = value;
if ((old_value ^ value) & X2APIC_ENABLE) {
if (value & X2APIC_ENABLE) {
u32 id = kvm_apic_id(apic);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu)
{
u32 data;
- void *vapic;
if (test_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention))
apic_sync_pv_eoi_from_guest(vcpu, vcpu->arch.apic);
if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
return;
- vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
- data = *(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr));
- kunmap_atomic(vapic);
+ kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
+ sizeof(u32));
apic_set_tpr(vcpu->arch.apic, data & 0xff);
}
u32 data, tpr;
int max_irr, max_isr;
struct kvm_lapic *apic = vcpu->arch.apic;
- void *vapic;
apic_sync_pv_eoi_to_guest(vcpu, apic);
max_isr = 0;
data = (tpr & 0xff) | ((max_isr & 0xf0) << 8) | (max_irr << 24);
- vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
- *(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr)) = data;
- kunmap_atomic(vapic);
+ kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
+ sizeof(u32));
}
-void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
+int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
{
- vcpu->arch.apic->vapic_addr = vapic_addr;
- if (vapic_addr)
+ if (vapic_addr) {
+ if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ &vcpu->arch.apic->vapic_cache,
+ vapic_addr, sizeof(u32)))
+ return -EINVAL;
__set_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
- else
+ } else {
__clear_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
+ }
+
+ vcpu->arch.apic->vapic_addr = vapic_addr;
+ return 0;
}
int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
*/
void *regs;
gpa_t vapic_addr;
- struct page *vapic_page;
+ struct gfn_to_hva_cache vapic_cache;
unsigned long pending_events;
unsigned int sipi_vector;
};
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset);
void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector);
-void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
+int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu);
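
Switching from a pinned, kmapped vapic_page to a gfn_to_hva_cache avoids
holding a page reference across vcpu execution and tolerates the guest
remapping the area. The assumed usage pattern of the generic KVM helpers, in
sketch form:

    struct gfn_to_hva_cache cache;
    u32 data;

    /* translate once, then read/write through the cached mapping */
    if (kvm_gfn_to_hva_cache_init(kvm, &cache, gpa, sizeof(data)))
            return -EINVAL;
    kvm_read_guest_cached(kvm, &cache, &data, sizeof(data));
    kvm_write_guest_cached(kvm, &cache, &data, sizeof(data));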
.get = param_get_bool,
};
-module_param_cb(mmu_audit, &audit_param_ops, &mmu_audit, 0644);
+arch_param_cb(mmu_audit, &audit_param_ops, &mmu_audit, 0644);
vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
kvm_set_cr4(vcpu, vmcs12->host_cr4);
- if (nested_cpu_has_ept(vmcs12))
- nested_ept_uninit_mmu_context(vcpu);
+ nested_ept_uninit_mmu_context(vcpu);
kvm_set_cr3(vcpu, vmcs12->host_cr3);
kvm_mmu_reset_context(vcpu);
r = -EFAULT;
if (copy_from_user(&va, argp, sizeof va))
goto out;
- r = 0;
- kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
+ r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
break;
}
case KVM_X86_SETUP_MCE: {
!kvm_event_needs_reinjection(vcpu);
}
-static int vapic_enter(struct kvm_vcpu *vcpu)
-{
- struct kvm_lapic *apic = vcpu->arch.apic;
- struct page *page;
-
- if (!apic || !apic->vapic_addr)
- return 0;
-
- page = gfn_to_page(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
- if (is_error_page(page))
- return -EFAULT;
-
- vcpu->arch.apic->vapic_page = page;
- return 0;
-}
-
-static void vapic_exit(struct kvm_vcpu *vcpu)
-{
- struct kvm_lapic *apic = vcpu->arch.apic;
- int idx;
-
- if (!apic || !apic->vapic_addr)
- return;
-
- idx = srcu_read_lock(&vcpu->kvm->srcu);
- kvm_release_page_dirty(apic->vapic_page);
- mark_page_dirty(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
- srcu_read_unlock(&vcpu->kvm->srcu, idx);
-}
-
static void update_cr8_intercept(struct kvm_vcpu *vcpu)
{
int max_irr, tpr;
struct kvm *kvm = vcpu->kvm;
vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
- r = vapic_enter(vcpu);
- if (r) {
- srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
- return r;
- }
r = 1;
while (r > 0) {
srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
- vapic_exit(vcpu);
-
return r;
}
pte_t pte = gup_get_pte(ptep);
struct page *page;
+	/* Similar to the PMD case, NUMA hinting must take the slow path */
+ if (pte_numa(pte)) {
+ pte_unmap(ptep);
+ return 0;
+ }
+
if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
pte_unmap(ptep);
return 0;
if (pmd_none(pmd) || pmd_trans_splitting(pmd))
return 0;
if (unlikely(pmd_large(pmd))) {
+ /*
+ * NUMA hinting faults need to be handled in the GUP
+ * slowpath for accounting purposes and so that they
+ * can be serialised against THP migration.
+ */
+ if (pmd_numa(pmd))
+ return 0;
if (!gup_huge_pmd(pmd, addr, next, write, pages, nr))
return 0;
} else {
#if PAGETABLE_LEVELS > 2
void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
{
+ struct page *page = virt_to_page(pmd);
paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT);
/*
* NOTE! For PAE, any changes to the top page-directory-pointer-table
#ifdef CONFIG_X86_PAE
tlb->need_flush_all = 1;
#endif
- tlb_remove_page(tlb, virt_to_page(pmd));
+ pgtable_pmd_page_dtor(page);
+ tlb_remove_page(tlb, page);
}
#if PAGETABLE_LEVELS > 3
if (!pmd)
failed = true;
if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) {
- free_page((unsigned long)pmds[i]);
+ free_page((unsigned long)pmd);
pmd = NULL;
failed = true;
}
sd = &info->sd;
sd->domain = domain;
sd->node = node;
- sd->acpi = device->handle;
+ sd->companion = device;
/*
* Maybe the desired pci bus has been already scanned. In such case
* it is unnecessary to scan the pci bus with the given domain,busnum.
{
struct pci_sysdata *sd = bridge->bus->sysdata;
- ACPI_HANDLE_SET(&bridge->dev, sd->acpi);
+ ACPI_COMPANION_SET(&bridge->dev, sd->companion);
return 0;
}
efi_y += font->height;
}
- if (efi_y + font->height >= si->lfb_height) {
+ if (efi_y + font->height > si->lfb_height) {
u32 i;
efi_y -= font->height;
set_bit(EFI_MEMMAP, &x86_efi_facility);
-#ifdef CONFIG_X86_32
- if (efi_is_native()) {
- x86_platform.get_wallclock = efi_get_time;
- x86_platform.set_wallclock = efi_set_rtc_mmss;
- }
-#endif
-
#if EFI_DEBUG
print_efi_memmap();
#endif
unsigned long status;
bcp = &per_cpu(bau_control, cpu);
- stat = bcp->statp;
- stat->s_enters++;
if (bcp->nobau)
return cpumask;
+ stat = bcp->statp;
+ stat->s_enters++;
+
if (bcp->busy) {
descriptor_status =
read_lmmr(UVH_LB_BAU_SB_ACTIVATION_STATUS_0);
-march=i386 -mregparm=3 \
-include $(srctree)/$(src)/../../boot/code16gcc.h \
-fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse \
$(call cc-option, -ffreestanding) \
$(call cc-option, -fno-toplevel-reorder,\
- $(call cc-option, -fno-unit-at-a-time)) \
+ $(call cc-option, -fno-unit-at-a-time)) \
$(call cc-option, -fno-stack-protector) \
$(call cc-option, -mpreferred-stack-boundary=2)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
uint64_t v;
do {
- start = u64_stats_fetch_begin(&stat->syncp);
+ start = u64_stats_fetch_begin_bh(&stat->syncp);
v = stat->cnt;
- } while (u64_stats_fetch_retry(&stat->syncp, start));
+ } while (u64_stats_fetch_retry_bh(&stat->syncp, start));
return v;
}
struct blkg_rwstat tmp;
do {
- start = u64_stats_fetch_begin(&rwstat->syncp);
+ start = u64_stats_fetch_begin_bh(&rwstat->syncp);
tmp = *rwstat;
- } while (u64_stats_fetch_retry(&rwstat->syncp, start));
+ } while (u64_stats_fetch_retry_bh(&rwstat->syncp, start));
return tmp;
}
}
}
-static void bio_end_flush(struct bio *bio, int err)
-{
- if (err)
- clear_bit(BIO_UPTODATE, &bio->bi_flags);
- if (bio->bi_private)
- complete(bio->bi_private);
- bio_put(bio);
-}
-
/**
* blkdev_issue_flush - queue a flush
* @bdev: blockdev to issue flush for
int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,
sector_t *error_sector)
{
- DECLARE_COMPLETION_ONSTACK(wait);
struct request_queue *q;
struct bio *bio;
int ret = 0;
return -ENXIO;
bio = bio_alloc(gfp_mask, 0);
- bio->bi_end_io = bio_end_flush;
bio->bi_bdev = bdev;
- bio->bi_private = &wait;
- bio_get(bio);
- submit_bio(WRITE_FLUSH, bio);
- wait_for_completion_io(&wait);
+ ret = submit_bio_wait(WRITE_FLUSH, bio);
/*
* The driver must store the error location in ->bi_sector, if
if (error_sector)
*error_sector = bio->bi_sector;
- if (!bio_flagged(bio, BIO_UPTODATE))
- ret = -EIO;
-
bio_put(bio);
return ret;
}
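
submit_bio_wait() folds the deleted completion boilerplate into a single
helper. A minimal sketch of what it is assumed to do internally, mirroring
the pattern removed above (not necessarily the exact upstream implementation):

    struct submit_bio_ret {
            struct completion event;
            int error;
    };

    static void submit_bio_wait_endio(struct bio *bio, int error)
    {
            struct submit_bio_ret *ret = bio->bi_private;

            ret->error = error;
            complete(&ret->event);
    }

    int example_submit_bio_wait(int rw, struct bio *bio)
    {
            struct submit_bio_ret ret;

            init_completion(&ret.event);
            bio->bi_private = &ret;
            bio->bi_end_io = submit_bio_wait_endio;
            submit_bio(rw, bio);
            wait_for_completion(&ret.event);
            return ret.error;
    }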
void blk_mq_unregister_disk(struct gendisk *disk)
{
struct request_queue *q = disk->queue;
+ struct blk_mq_hw_ctx *hctx;
+ struct blk_mq_ctx *ctx;
+ int i, j;
+
+ queue_for_each_hw_ctx(q, hctx, i) {
+ hctx_for_each_ctx(hctx, ctx, j) {
+ kobject_del(&ctx->kobj);
+ kobject_put(&ctx->kobj);
+ }
+ kobject_del(&hctx->kobj);
+ kobject_put(&hctx->kobj);
+ }
kobject_uevent(&q->mq_kobj, KOBJ_REMOVE);
kobject_del(&q->mq_kobj);
+ kobject_put(&q->mq_kobj);
kobject_put(&disk_to_dev(disk)->kobj);
}
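
The unregister path now mirrors registration one-to-one: every kobject that
was added gets a kobject_del() (sysfs removal) followed by a kobject_put()
(reference drop), fixing the leak of the per-ctx and per-hctx objects. The
general pairing, as a sketch with hypothetical names:

    kobject_init_and_add(&ctx->kobj, &example_ktype, parent, "ctx%u", i);
    /* ... */
    kobject_del(&ctx->kobj);        /* remove from sysfs */
    kobject_put(&ctx->kobj);        /* drop the final reference */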
}
EXPORT_SYMBOL(blk_mq_can_queue);
-static void blk_mq_rq_ctx_init(struct blk_mq_ctx *ctx, struct request *rq,
- unsigned int rw_flags)
+static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
+ struct request *rq, unsigned int rw_flags)
{
+ if (blk_queue_io_stat(q))
+ rw_flags |= REQ_IO_STAT;
+
rq->mq_ctx = ctx;
rq->cmd_flags = rw_flags;
ctx->rq_dispatched[rw_is_sync(rw_flags)]++;
rq = __blk_mq_alloc_request(hctx, gfp & ~__GFP_WAIT, reserved);
if (rq) {
- blk_mq_rq_ctx_init(ctx, rq, rw);
- break;
- } else if (!(gfp & __GFP_WAIT))
+ blk_mq_rq_ctx_init(q, ctx, rq, rw);
break;
+ }
blk_mq_put_ctx(ctx);
+ if (!(gfp & __GFP_WAIT))
+ break;
+
__blk_mq_run_hw_queue(hctx);
blk_mq_wait_for_tags(hctx->tags);
} while (1);
return NULL;
rq = blk_mq_alloc_request_pinned(q, rw, gfp, reserved);
- blk_mq_put_ctx(rq->mq_ctx);
+ if (rq)
+ blk_mq_put_ctx(rq->mq_ctx);
return rq;
}
return NULL;
rq = blk_mq_alloc_request_pinned(q, rw, gfp, true);
- blk_mq_put_ctx(rq->mq_ctx);
+ if (rq)
+ blk_mq_put_ctx(rq->mq_ctx);
return rq;
}
EXPORT_SYMBOL(blk_mq_alloc_reserved_request);
blk_account_io_completion(rq, bytes);
+ blk_account_io_done(rq);
+
if (rq->end_io)
rq->end_io(rq, error);
else
blk_mq_free_request(rq);
-
- blk_account_io_done(rq);
}
void __blk_mq_end_io(struct request *rq, int error)
{
struct blk_mq_ctx *ctx = rq->mq_ctx;
+ trace_block_rq_insert(hctx->queue, rq);
+
list_add_tail(&rq->queuelist, &ctx->rq_list);
blk_mq_hctx_mark_pending(hctx, ctx);
trace_block_getrq(q, bio, rw);
rq = __blk_mq_alloc_request(hctx, GFP_ATOMIC, false);
if (likely(rq))
- blk_mq_rq_ctx_init(ctx, rq, rw);
+ blk_mq_rq_ctx_init(q, ctx, rq, rw);
else {
blk_mq_put_ctx(ctx);
trace_block_sleeprq(q, bio, rw);
q->queue_hw_ctx = hctxs;
q->mq_ops = reg->ops;
+ q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
blk_queue_make_request(q, blk_mq_make_request);
blk_queue_rq_timed_out(q, reg->ops->timeout);
* - Code works, detects all the partitions.
*
************************************************************/
+#include <linux/kernel.h>
#include <linux/crc32.h>
#include <linux/ctype.h>
#include <linux/math64.h>
efi_guid_unparse(&ptes[i].unique_partition_guid, info->uuid);
/* Naively convert UTF16-LE to 7 bits. */
- label_max = min(sizeof(info->volname) - 1,
- sizeof(ptes[i].partition_name));
+ label_max = min(ARRAY_SIZE(info->volname) - 1,
+ ARRAY_SIZE(ptes[i].partition_name));
info->volname[label_max] = 0;
while (label_count < label_max) {
u8 c = ptes[i].partition_name[label_count] & 0xff;
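
partition_name is an array of 16-bit UTF-16LE code units, so sizeof() yields
a byte count twice the number of iterable elements, and the old min() could
index past the end of the name. An illustration (a 36-element name field is
assumed here):

    __le16 name[36];
    size_t bytes = sizeof(name);            /* 72: too large as a loop bound */
    size_t elems = ARRAY_SIZE(name);        /* 36: the correct bound */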
help
Quick & dirty crypto test module.
-config CRYPTO_ABLK_HELPER_X86
+config CRYPTO_ABLK_HELPER
tristate
- depends on X86
select CRYPTO_CRYPTD
config CRYPTO_GLUE_HELPER_X86
select CRYPTO_AES_X86_64 if 64BIT
select CRYPTO_AES_586 if !64BIT
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_ALGAPI
select CRYPTO_GLUE_HELPER_X86 if 64BIT
select CRYPTO_LRW
depends on CRYPTO
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_CAMELLIA_X86_64
select CRYPTO_LRW
depends on CRYPTO
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_CAMELLIA_X86_64
select CRYPTO_CAMELLIA_AESNI_AVX_X86_64
depends on X86 && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_CAST_COMMON
select CRYPTO_CAST5
help
depends on X86 && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_CAST_COMMON
select CRYPTO_CAST6
depends on X86 && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_SERPENT
select CRYPTO_LRW
depends on X86 && !64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_SERPENT
select CRYPTO_LRW
depends on X86 && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_SERPENT
select CRYPTO_LRW
depends on X86 && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_SERPENT
select CRYPTO_SERPENT_AVX_X86_64
depends on X86 && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_CRYPTD
- select CRYPTO_ABLK_HELPER_X86
+ select CRYPTO_ABLK_HELPER
select CRYPTO_GLUE_HELPER_X86
select CRYPTO_TWOFISH_COMMON
select CRYPTO_TWOFISH_X86_64
This option enables the user-spaces interface for symmetric
key cipher algorithms.
+config CRYPTO_HASH_INFO
+ bool
+
source "drivers/crypto/Kconfig"
source crypto/asymmetric_keys/Kconfig
# Cryptographic API
#
+# memneq MUST be built with -Os or -O0 to prevent early-return optimizations
+# that would defeat memneq's purpose of preventing timing attacks.
+CFLAGS_REMOVE_memneq.o := -O1 -O2 -O3
+CFLAGS_memneq.o := -Os
+
obj-$(CONFIG_CRYPTO) += crypto.o
-crypto-y := api.o cipher.o compress.o
+crypto-y := api.o cipher.o compress.o memneq.o
obj-$(CONFIG_CRYPTO_WORKQUEUE) += crypto_wq.o
obj-$(CONFIG_XOR_BLOCKS) += xor.o
obj-$(CONFIG_ASYNC_CORE) += async_tx/
obj-$(CONFIG_ASYMMETRIC_KEY_TYPE) += asymmetric_keys/
+obj-$(CONFIG_CRYPTO_HASH_INFO) += hash_info.o
+obj-$(CONFIG_CRYPTO_ABLK_HELPER) += ablk_helper.o
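
The -Os/-O0 pinning matters because memneq's value is constant-time
behaviour, which higher optimisation levels can destroy by reintroducing
data-dependent early returns. A minimal sketch of the technique (an assumed
illustration, not the kernel's exact memneq):

    static unsigned long example_memneq(const void *a, const void *b,
                                        size_t size)
    {
            const unsigned char *pa = a, *pb = b;
            unsigned long neq = 0;

            /* Accumulate differences instead of returning at the first
             * mismatch, so the runtime does not depend on where the
             * buffers differ. */
            while (size--)
                    neq |= *pa++ ^ *pb++;
            return neq;     /* zero iff the regions are equal */
    }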
--- /dev/null
+/*
+ * Shared async block cipher helpers
+ *
+ * Copyright (c) 2012 Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
+ *
+ * Based on aesni-intel_glue.c by:
+ * Copyright (C) 2008, Intel Corp.
+ * Author: Huang Ying <ying.huang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+ * USA
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/hardirq.h>
+#include <crypto/algapi.h>
+#include <crypto/cryptd.h>
+#include <crypto/ablk_helper.h>
+#include <asm/simd.h>
+
+int ablk_set_key(struct crypto_ablkcipher *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+ struct crypto_ablkcipher *child = &ctx->cryptd_tfm->base;
+ int err;
+
+ crypto_ablkcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_ablkcipher_set_flags(child, crypto_ablkcipher_get_flags(tfm)
+ & CRYPTO_TFM_REQ_MASK);
+ err = crypto_ablkcipher_setkey(child, key, key_len);
+ crypto_ablkcipher_set_flags(tfm, crypto_ablkcipher_get_flags(child)
+ & CRYPTO_TFM_RES_MASK);
+ return err;
+}
+EXPORT_SYMBOL_GPL(ablk_set_key);
+
+int __ablk_encrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+ struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+ struct blkcipher_desc desc;
+
+ desc.tfm = cryptd_ablkcipher_child(ctx->cryptd_tfm);
+ desc.info = req->info;
+ desc.flags = 0;
+
+ return crypto_blkcipher_crt(desc.tfm)->encrypt(
+ &desc, req->dst, req->src, req->nbytes);
+}
+EXPORT_SYMBOL_GPL(__ablk_encrypt);
+
+int ablk_encrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+ struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+
+ if (!may_use_simd()) {
+ struct ablkcipher_request *cryptd_req =
+ ablkcipher_request_ctx(req);
+
+ *cryptd_req = *req;
+ ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base);
+
+ return crypto_ablkcipher_encrypt(cryptd_req);
+ } else {
+ return __ablk_encrypt(req);
+ }
+}
+EXPORT_SYMBOL_GPL(ablk_encrypt);
+
+int ablk_decrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+ struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+
+ if (!may_use_simd()) {
+ struct ablkcipher_request *cryptd_req =
+ ablkcipher_request_ctx(req);
+
+ *cryptd_req = *req;
+ ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base);
+
+ return crypto_ablkcipher_decrypt(cryptd_req);
+ } else {
+ struct blkcipher_desc desc;
+
+ desc.tfm = cryptd_ablkcipher_child(ctx->cryptd_tfm);
+ desc.info = req->info;
+ desc.flags = 0;
+
+ return crypto_blkcipher_crt(desc.tfm)->decrypt(
+ &desc, req->dst, req->src, req->nbytes);
+ }
+}
+EXPORT_SYMBOL_GPL(ablk_decrypt);
+
+void ablk_exit(struct crypto_tfm *tfm)
+{
+ struct async_helper_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ cryptd_free_ablkcipher(ctx->cryptd_tfm);
+}
+EXPORT_SYMBOL_GPL(ablk_exit);
+
+int ablk_init_common(struct crypto_tfm *tfm, const char *drv_name)
+{
+ struct async_helper_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct cryptd_ablkcipher *cryptd_tfm;
+
+ cryptd_tfm = cryptd_alloc_ablkcipher(drv_name, 0, 0);
+ if (IS_ERR(cryptd_tfm))
+ return PTR_ERR(cryptd_tfm);
+
+ ctx->cryptd_tfm = cryptd_tfm;
+ tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request) +
+ crypto_ablkcipher_reqsize(&cryptd_tfm->base);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ablk_init_common);
+
+int ablk_init(struct crypto_tfm *tfm)
+{
+ char drv_name[CRYPTO_MAX_ALG_NAME];
+
+ snprintf(drv_name, sizeof(drv_name), "__driver-%s",
+ crypto_tfm_alg_driver_name(tfm));
+
+ return ablk_init_common(tfm, drv_name);
+}
+EXPORT_SYMBOL_GPL(ablk_init);
+
+MODULE_LICENSE("GPL");
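
A driver built on these helpers only has to wire them into its crypto_alg;
encryption runs directly when may_use_simd() allows it and is deferred to
cryptd otherwise. A hypothetical registration (the algorithm name and CTR
mode are chosen purely for illustration):

    static struct crypto_alg example_alg = {
            .cra_name               = "ctr(example)",
            .cra_driver_name        = "ctr-example-simd",
            .cra_priority           = 400,
            .cra_flags              = CRYPTO_ALG_TYPE_ABLKCIPHER |
                                      CRYPTO_ALG_ASYNC,
            .cra_blocksize          = 1,
            .cra_ctxsize            = sizeof(struct async_helper_ctx),
            .cra_type               = &crypto_ablkcipher_type,
            .cra_module             = THIS_MODULE,
            .cra_init               = ablk_init,
            .cra_exit               = ablk_exit,
            .cra_u = {
                    .ablkcipher = {
                            .setkey         = ablk_set_key,
                            .encrypt        = ablk_encrypt,
                            .decrypt        = ablk_decrypt,
                    },
            },
    };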
#include <crypto/internal/skcipher.h>
#include <linux/cpumask.h>
#include <linux/err.h>
-#include <linux/init.h>
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/rtnetlink.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include "internal.h"
-static const char *skcipher_default_geniv __read_mostly;
-
struct ablkcipher_buffer {
struct list_head entry;
struct scatter_walk dst;
alg->cra_blocksize)
return "chainiv";
- return alg->cra_flags & CRYPTO_ALG_ASYNC ?
- "eseqiv" : skcipher_default_geniv;
+ return "eseqiv";
}
static int crypto_givcipher_default(struct crypto_alg *alg, u32 type, u32 mask)
return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(crypto_alloc_ablkcipher);
-
-static int __init skcipher_module_init(void)
-{
- skcipher_default_geniv = num_possible_cpus() > 1 ?
- "eseqiv" : "chainiv";
- return 0;
-}
-
-static void skcipher_module_exit(void)
-{
-}
-
-module_init(skcipher_module_init);
-module_exit(skcipher_module_exit);
struct hash_ctx *ctx = ask->private;
int err;
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
lock_sock(sk);
sg_init_table(ctx->sgl.sg, 1);
sg_set_page(ctx->sgl.sg, page, size, offset);
struct skcipher_sg_list *sgl;
int err = -EINVAL;
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
lock_sock(sk);
if (!ctx->more && ctx->used)
goto unlock;
*/
if (byte_count < DEFAULT_BLK_SZ) {
empty_rbuf:
- for (; ctx->rand_data_valid < DEFAULT_BLK_SZ;
- ctx->rand_data_valid++) {
+ while (ctx->rand_data_valid < DEFAULT_BLK_SZ) {
*ptr = ctx->rand_data[ctx->rand_data_valid];
ptr++;
byte_count--;
+ ctx->rand_data_valid++;
if (byte_count == 0)
goto done;
}
config ASYMMETRIC_PUBLIC_KEY_SUBTYPE
tristate "Asymmetric public-key crypto algorithm subtype"
select MPILIB
+ select PUBLIC_KEY_ALGO_RSA
+ select CRYPTO_HASH_INFO
help
This option provides support for asymmetric public key type handling.
If signature generation and/or verification are to be used,
config PUBLIC_KEY_ALGO_RSA
tristate "RSA public-key algorithm"
- depends on ASYMMETRIC_PUBLIC_KEY_SUBTYPE
select MPILIB_EXTRA
+ select MPILIB
help
This option enables support for the RSA algorithm (PKCS#1, RFC3447).
.match = asymmetric_key_match,
.destroy = asymmetric_key_destroy,
.describe = asymmetric_key_describe,
+ .def_lookup_type = KEYRING_SEARCH_LOOKUP_ITERATE,
};
EXPORT_SYMBOL_GPL(key_type_asymmetric);
MODULE_LICENSE("GPL");
-const char *const pkey_algo[PKEY_ALGO__LAST] = {
+const char *const pkey_algo_name[PKEY_ALGO__LAST] = {
[PKEY_ALGO_DSA] = "DSA",
[PKEY_ALGO_RSA] = "RSA",
};
-EXPORT_SYMBOL_GPL(pkey_algo);
+EXPORT_SYMBOL_GPL(pkey_algo_name);
-const char *const pkey_hash_algo[PKEY_HASH__LAST] = {
- [PKEY_HASH_MD4] = "md4",
- [PKEY_HASH_MD5] = "md5",
- [PKEY_HASH_SHA1] = "sha1",
- [PKEY_HASH_RIPE_MD_160] = "rmd160",
- [PKEY_HASH_SHA256] = "sha256",
- [PKEY_HASH_SHA384] = "sha384",
- [PKEY_HASH_SHA512] = "sha512",
- [PKEY_HASH_SHA224] = "sha224",
+const struct public_key_algorithm *pkey_algo[PKEY_ALGO__LAST] = {
+#if defined(CONFIG_PUBLIC_KEY_ALGO_RSA) || \
+ defined(CONFIG_PUBLIC_KEY_ALGO_RSA_MODULE)
+ [PKEY_ALGO_RSA] = &RSA_public_key_algorithm,
+#endif
};
-EXPORT_SYMBOL_GPL(pkey_hash_algo);
+EXPORT_SYMBOL_GPL(pkey_algo);
-const char *const pkey_id_type[PKEY_ID_TYPE__LAST] = {
+const char *const pkey_id_type_name[PKEY_ID_TYPE__LAST] = {
[PKEY_ID_PGP] = "PGP",
[PKEY_ID_X509] = "X509",
};
-EXPORT_SYMBOL_GPL(pkey_id_type);
+EXPORT_SYMBOL_GPL(pkey_id_type_name);
/*
* Provide a part of a description of the key for /proc/keys.
if (key)
seq_printf(m, "%s.%s",
- pkey_id_type[key->id_type], key->algo->name);
+ pkey_id_type_name[key->id_type], key->algo->name);
}
/*
/*
* Verify a signature using a public key.
*/
-static int public_key_verify_signature(const struct key *key,
- const struct public_key_signature *sig)
+int public_key_verify_signature(const struct public_key *pk,
+ const struct public_key_signature *sig)
{
- const struct public_key *pk = key->payload.data;
+ const struct public_key_algorithm *algo;
+
+ BUG_ON(!pk);
+ BUG_ON(!pk->mpi[0]);
+ BUG_ON(!pk->mpi[1]);
+ BUG_ON(!sig);
+ BUG_ON(!sig->digest);
+ BUG_ON(!sig->mpi[0]);
+
+ algo = pk->algo;
+ if (!algo) {
+ if (pk->pkey_algo >= PKEY_ALGO__LAST)
+ return -ENOPKG;
+ algo = pkey_algo[pk->pkey_algo];
+ if (!algo)
+ return -ENOPKG;
+ }
- if (!pk->algo->verify_signature)
+ if (!algo->verify_signature)
return -ENOTSUPP;
- if (sig->nr_mpi != pk->algo->n_sig_mpi) {
+ if (sig->nr_mpi != algo->n_sig_mpi) {
pr_debug("Signature has %u MPI not %u\n",
- sig->nr_mpi, pk->algo->n_sig_mpi);
+ sig->nr_mpi, algo->n_sig_mpi);
return -EINVAL;
}
- return pk->algo->verify_signature(pk, sig);
+ return algo->verify_signature(pk, sig);
+}
+EXPORT_SYMBOL_GPL(public_key_verify_signature);
+
+static int public_key_verify_signature_2(const struct key *key,
+ const struct public_key_signature *sig)
+{
+ const struct public_key *pk = key->payload.data;
+ return public_key_verify_signature(pk, sig);
}
/*
.name = "public_key",
.describe = public_key_describe,
.destroy = public_key_destroy,
- .verify_signature = public_key_verify_signature,
+ .verify_signature = public_key_verify_signature_2,
};
EXPORT_SYMBOL_GPL(public_key_subtype);
};
extern const struct public_key_algorithm RSA_public_key_algorithm;
+
+/*
+ * public_key.c
+ */
+extern int public_key_verify_signature(const struct public_key *pk,
+ const struct public_key_signature *sig);
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
+#include <crypto/algapi.h>
#include "public_key.h"
MODULE_LICENSE("GPL");
size_t size;
} RSA_ASN1_templates[PKEY_HASH__LAST] = {
#define _(X) { RSA_digest_info_##X, sizeof(RSA_digest_info_##X) }
- [PKEY_HASH_MD5] = _(MD5),
- [PKEY_HASH_SHA1] = _(SHA1),
- [PKEY_HASH_RIPE_MD_160] = _(RIPE_MD_160),
- [PKEY_HASH_SHA256] = _(SHA256),
- [PKEY_HASH_SHA384] = _(SHA384),
- [PKEY_HASH_SHA512] = _(SHA512),
- [PKEY_HASH_SHA224] = _(SHA224),
+ [HASH_ALGO_MD5] = _(MD5),
+ [HASH_ALGO_SHA1] = _(SHA1),
+ [HASH_ALGO_RIPE_MD_160] = _(RIPE_MD_160),
+ [HASH_ALGO_SHA256] = _(SHA256),
+ [HASH_ALGO_SHA384] = _(SHA384),
+ [HASH_ALGO_SHA512] = _(SHA512),
+ [HASH_ALGO_SHA224] = _(SHA224),
#undef _
};
}
}
- if (memcmp(asn1_template, EM + T_offset, asn1_size) != 0) {
+ if (crypto_memneq(asn1_template, EM + T_offset, asn1_size) != 0) {
kleave(" = -EBADMSG [EM[T] ASN.1 mismatch]");
return -EBADMSG;
}
- if (memcmp(H, EM + T_offset + asn1_size, hash_size) != 0) {
+ if (crypto_memneq(H, EM + T_offset + asn1_size, hash_size) != 0) {
kleave(" = -EKEYREJECTED [EM[T] hash mismatch]");
return -EKEYREJECTED;
}
kfree(cert->subject);
kfree(cert->fingerprint);
kfree(cert->authority);
+ kfree(cert->sig.digest);
+ mpi_free(cert->sig.rsa.s);
kfree(cert);
}
}
return -ENOPKG; /* Unsupported combination */
case OID_md4WithRSAEncryption:
- ctx->cert->sig_hash_algo = PKEY_HASH_MD5;
- ctx->cert->sig_pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->sig.pkey_hash_algo = HASH_ALGO_MD5;
+ ctx->cert->sig.pkey_algo = PKEY_ALGO_RSA;
break;
case OID_sha1WithRSAEncryption:
- ctx->cert->sig_hash_algo = PKEY_HASH_SHA1;
- ctx->cert->sig_pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->sig.pkey_hash_algo = HASH_ALGO_SHA1;
+ ctx->cert->sig.pkey_algo = PKEY_ALGO_RSA;
break;
case OID_sha256WithRSAEncryption:
- ctx->cert->sig_hash_algo = PKEY_HASH_SHA256;
- ctx->cert->sig_pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->sig.pkey_hash_algo = HASH_ALGO_SHA256;
+ ctx->cert->sig.pkey_algo = PKEY_ALGO_RSA;
break;
case OID_sha384WithRSAEncryption:
- ctx->cert->sig_hash_algo = PKEY_HASH_SHA384;
- ctx->cert->sig_pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->sig.pkey_hash_algo = HASH_ALGO_SHA384;
+ ctx->cert->sig.pkey_algo = PKEY_ALGO_RSA;
break;
case OID_sha512WithRSAEncryption:
- ctx->cert->sig_hash_algo = PKEY_HASH_SHA512;
- ctx->cert->sig_pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->sig.pkey_hash_algo = HASH_ALGO_SHA512;
+ ctx->cert->sig.pkey_algo = PKEY_ALGO_RSA;
break;
case OID_sha224WithRSAEncryption:
- ctx->cert->sig_hash_algo = PKEY_HASH_SHA224;
- ctx->cert->sig_pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->sig.pkey_hash_algo = HASH_ALGO_SHA224;
+ ctx->cert->sig.pkey_algo = PKEY_ALGO_RSA;
break;
}
return -EINVAL;
}
- ctx->cert->sig = value;
- ctx->cert->sig_size = vlen;
+ ctx->cert->raw_sig = value;
+ ctx->cert->raw_sig_size = vlen;
return 0;
}
if (ctx->last_oid != OID_rsaEncryption)
return -ENOPKG;
- /* There seems to be an extraneous 0 byte on the front of the data */
- ctx->cert->pkey_algo = PKEY_ALGO_RSA;
+ ctx->cert->pub->pkey_algo = PKEY_ALGO_RSA;
+
+ /* Discard the BIT STRING metadata */
ctx->key = value + 1;
ctx->key_size = vlen - 1;
return 0;
* 2 of the Licence, or (at your option) any later version.
*/
+#include <linux/time.h>
#include <crypto/public_key.h>
struct x509_certificate {
char *authority; /* Authority key fingerprint as hex */
struct tm valid_from;
struct tm valid_to;
- enum pkey_algo pkey_algo : 8; /* Public key algorithm */
- enum pkey_algo sig_pkey_algo : 8; /* Signature public key algorithm */
- enum pkey_hash_algo sig_hash_algo : 8; /* Signature hash algorithm */
const void *tbs; /* Signed data */
- size_t tbs_size; /* Size of signed data */
- const void *sig; /* Signature data */
- size_t sig_size; /* Size of sigature */
+ unsigned tbs_size; /* Size of signed data */
+	unsigned	raw_sig_size;		/* Size of signature */
+ const void *raw_sig; /* Signature data */
+ struct public_key_signature sig; /* Signature parameters */
};
/*
*/
extern void x509_free_certificate(struct x509_certificate *cert);
extern struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
+
+/*
+ * x509_public_key.c
+ */
+extern int x509_get_sig_params(struct x509_certificate *cert);
+extern int x509_check_signature(const struct public_key *pub,
+ struct x509_certificate *cert);
#include "public_key.h"
#include "x509_parser.h"
-static const
-struct public_key_algorithm *x509_public_key_algorithms[PKEY_ALGO__LAST] = {
- [PKEY_ALGO_DSA] = NULL,
-#if defined(CONFIG_PUBLIC_KEY_ALGO_RSA) || \
- defined(CONFIG_PUBLIC_KEY_ALGO_RSA_MODULE)
- [PKEY_ALGO_RSA] = &RSA_public_key_algorithm,
-#endif
-};
-
/*
- * Check the signature on a certificate using the provided public key
+ * Set up the signature parameters in an X.509 certificate. This involves
+ * digesting the signed data and extracting the signature.
*/
-static int x509_check_signature(const struct public_key *pub,
- const struct x509_certificate *cert)
+int x509_get_sig_params(struct x509_certificate *cert)
{
- struct public_key_signature *sig;
struct crypto_shash *tfm;
struct shash_desc *desc;
size_t digest_size, desc_size;
+ void *digest;
int ret;
pr_devel("==>%s()\n", __func__);
-
+
+ if (cert->sig.rsa.s)
+ return 0;
+
+ cert->sig.rsa.s = mpi_read_raw_data(cert->raw_sig, cert->raw_sig_size);
+ if (!cert->sig.rsa.s)
+ return -ENOMEM;
+ cert->sig.nr_mpi = 1;
+
/* Allocate the hashing algorithm we're going to need and find out how
* big the hash operational data will be.
*/
- tfm = crypto_alloc_shash(pkey_hash_algo[cert->sig_hash_algo], 0, 0);
+ tfm = crypto_alloc_shash(hash_algo_name[cert->sig.pkey_hash_algo], 0, 0);
if (IS_ERR(tfm))
return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
digest_size = crypto_shash_digestsize(tfm);
- /* We allocate the hash operational data storage on the end of our
- * context data.
+ /* We allocate the hash operational data storage on the end of the
+ * digest storage space.
*/
ret = -ENOMEM;
- sig = kzalloc(sizeof(*sig) + desc_size + digest_size, GFP_KERNEL);
- if (!sig)
- goto error_no_sig;
+ digest = kzalloc(digest_size + desc_size, GFP_KERNEL);
+ if (!digest)
+ goto error;
- sig->pkey_hash_algo = cert->sig_hash_algo;
- sig->digest = (u8 *)sig + sizeof(*sig) + desc_size;
- sig->digest_size = digest_size;
+ cert->sig.digest = digest;
+ cert->sig.digest_size = digest_size;
- desc = (void *)sig + sizeof(*sig);
- desc->tfm = tfm;
- desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+ desc = digest + digest_size;
+ desc->tfm = tfm;
+ desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
ret = crypto_shash_init(desc);
if (ret < 0)
goto error;
+ might_sleep();
+ ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, digest);
+error:
+ crypto_free_shash(tfm);
+ pr_devel("<==%s() = %d\n", __func__, ret);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(x509_get_sig_params);
- ret = -ENOMEM;
- sig->rsa.s = mpi_read_raw_data(cert->sig, cert->sig_size);
- if (!sig->rsa.s)
- goto error;
+/*
+ * Check the signature on a certificate using the provided public key
+ */
+int x509_check_signature(const struct public_key *pub,
+ struct x509_certificate *cert)
+{
+ int ret;
- ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest);
- if (ret < 0)
- goto error_mpi;
+ pr_devel("==>%s()\n", __func__);
- ret = pub->algo->verify_signature(pub, sig);
+ ret = x509_get_sig_params(cert);
+ if (ret < 0)
+ return ret;
+ ret = public_key_verify_signature(pub, &cert->sig);
pr_debug("Cert Verification: %d\n", ret);
-
-error_mpi:
- mpi_free(sig->rsa.s);
-error:
- kfree(sig);
-error_no_sig:
- crypto_free_shash(tfm);
-
- pr_devel("<==%s() = %d\n", __func__, ret);
return ret;
}
+EXPORT_SYMBOL_GPL(x509_check_signature);
/*
* Attempt to parse a data blob for a key as an X509 certificate.
static int x509_key_preparse(struct key_preparsed_payload *prep)
{
struct x509_certificate *cert;
- struct tm now;
size_t srlen, sulen;
char *desc = NULL;
int ret;
pr_devel("Cert Issuer: %s\n", cert->issuer);
pr_devel("Cert Subject: %s\n", cert->subject);
- pr_devel("Cert Key Algo: %s\n", pkey_algo[cert->pkey_algo]);
+
+ if (cert->pub->pkey_algo >= PKEY_ALGO__LAST ||
+ cert->sig.pkey_algo >= PKEY_ALGO__LAST ||
+ cert->sig.pkey_hash_algo >= PKEY_HASH__LAST ||
+ !pkey_algo[cert->pub->pkey_algo] ||
+ !pkey_algo[cert->sig.pkey_algo] ||
+ !hash_algo_name[cert->sig.pkey_hash_algo]) {
+ ret = -ENOPKG;
+ goto error_free_cert;
+ }
+
+ pr_devel("Cert Key Algo: %s\n", pkey_algo_name[cert->pub->pkey_algo]);
pr_devel("Cert Valid From: %04ld-%02d-%02d %02d:%02d:%02d\n",
cert->valid_from.tm_year + 1900, cert->valid_from.tm_mon + 1,
cert->valid_from.tm_mday, cert->valid_from.tm_hour,
cert->valid_to.tm_mday, cert->valid_to.tm_hour,
cert->valid_to.tm_min, cert->valid_to.tm_sec);
pr_devel("Cert Signature: %s + %s\n",
- pkey_algo[cert->sig_pkey_algo],
- pkey_hash_algo[cert->sig_hash_algo]);
+ pkey_algo_name[cert->sig.pkey_algo],
+ hash_algo_name[cert->sig.pkey_hash_algo]);
- if (!cert->fingerprint || !cert->authority) {
- pr_warn("Cert for '%s' must have SubjKeyId and AuthKeyId extensions\n",
+ if (!cert->fingerprint) {
+ pr_warn("Cert for '%s' must have a SubjKeyId extension\n",
cert->subject);
ret = -EKEYREJECTED;
goto error_free_cert;
}
- time_to_tm(CURRENT_TIME.tv_sec, 0, &now);
- pr_devel("Now: %04ld-%02d-%02d %02d:%02d:%02d\n",
- now.tm_year + 1900, now.tm_mon + 1, now.tm_mday,
- now.tm_hour, now.tm_min, now.tm_sec);
- if (now.tm_year < cert->valid_from.tm_year ||
- (now.tm_year == cert->valid_from.tm_year &&
- (now.tm_mon < cert->valid_from.tm_mon ||
- (now.tm_mon == cert->valid_from.tm_mon &&
- (now.tm_mday < cert->valid_from.tm_mday ||
- (now.tm_mday == cert->valid_from.tm_mday &&
- (now.tm_hour < cert->valid_from.tm_hour ||
- (now.tm_hour == cert->valid_from.tm_hour &&
- (now.tm_min < cert->valid_from.tm_min ||
- (now.tm_min == cert->valid_from.tm_min &&
- (now.tm_sec < cert->valid_from.tm_sec
- ))))))))))) {
- pr_warn("Cert %s is not yet valid\n", cert->fingerprint);
- ret = -EKEYREJECTED;
- goto error_free_cert;
- }
- if (now.tm_year > cert->valid_to.tm_year ||
- (now.tm_year == cert->valid_to.tm_year &&
- (now.tm_mon > cert->valid_to.tm_mon ||
- (now.tm_mon == cert->valid_to.tm_mon &&
- (now.tm_mday > cert->valid_to.tm_mday ||
- (now.tm_mday == cert->valid_to.tm_mday &&
- (now.tm_hour > cert->valid_to.tm_hour ||
- (now.tm_hour == cert->valid_to.tm_hour &&
- (now.tm_min > cert->valid_to.tm_min ||
- (now.tm_min == cert->valid_to.tm_min &&
- (now.tm_sec > cert->valid_to.tm_sec
- ))))))))))) {
- pr_warn("Cert %s has expired\n", cert->fingerprint);
- ret = -EKEYEXPIRED;
- goto error_free_cert;
- }
-
- cert->pub->algo = x509_public_key_algorithms[cert->pkey_algo];
+ cert->pub->algo = pkey_algo[cert->pub->pkey_algo];
cert->pub->id_type = PKEY_ID_X509;
- /* Check the signature on the key */
- if (strcmp(cert->fingerprint, cert->authority) == 0) {
+ /* Check the signature on the key if it appears to be self-signed */
+ if (!cert->authority ||
+ strcmp(cert->fingerprint, cert->authority) == 0) {
ret = x509_check_signature(cert->pub, cert);
if (ret < 0)
goto error_free_cert;
module_init(x509_key_init);
module_exit(x509_key_exit);
+
+MODULE_DESCRIPTION("X.509 certificate parser");
+MODULE_LICENSE("GPL");
&dest, 1, &src, 1, len);
struct dma_device *device = chan ? chan->device : NULL;
struct dma_async_tx_descriptor *tx = NULL;
+ struct dmaengine_unmap_data *unmap = NULL;
- if (device && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
- dma_addr_t dma_dest, dma_src;
+ if (device)
+ unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOIO);
+
+ if (unmap && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
unsigned long dma_prep_flags = 0;
if (submit->cb_fn)
dma_prep_flags |= DMA_PREP_INTERRUPT;
if (submit->flags & ASYNC_TX_FENCE)
dma_prep_flags |= DMA_PREP_FENCE;
- dma_dest = dma_map_page(device->dev, dest, dest_offset, len,
- DMA_FROM_DEVICE);
-
- dma_src = dma_map_page(device->dev, src, src_offset, len,
- DMA_TO_DEVICE);
-
- tx = device->device_prep_dma_memcpy(chan, dma_dest, dma_src,
- len, dma_prep_flags);
- if (!tx) {
- dma_unmap_page(device->dev, dma_dest, len,
- DMA_FROM_DEVICE);
- dma_unmap_page(device->dev, dma_src, len,
- DMA_TO_DEVICE);
- }
+
+ unmap->to_cnt = 1;
+ unmap->addr[0] = dma_map_page(device->dev, src, src_offset, len,
+ DMA_TO_DEVICE);
+ unmap->from_cnt = 1;
+ unmap->addr[1] = dma_map_page(device->dev, dest, dest_offset, len,
+ DMA_FROM_DEVICE);
+ unmap->len = len;
+
+ tx = device->device_prep_dma_memcpy(chan, unmap->addr[1],
+ unmap->addr[0], len,
+ dma_prep_flags);
}
if (tx) {
pr_debug("%s: (async) len: %zu\n", __func__, len);
+
+ dma_set_unmap(tx, unmap);
async_tx_submit(chan, tx, submit);
} else {
void *dest_buf, *src_buf;
async_tx_sync_epilog(submit);
}
+ dmaengine_unmap_put(unmap);
+
return tx;
}
EXPORT_SYMBOL_GPL(async_memcpy);
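This hunk is the template the rest of the series follows: mapped addresses move out of ad-hoc locals into a reference-counted struct dmaengine_unmap_data, so the descriptor owns the unmap step and the submitter can drop its reference unconditionally on every exit path. The lifecycle, condensed from the code above (a sketch with error handling elided, not a complete function):

	/* 1. allocate unmap data sized for the number of mapped pages */
	unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOIO);

	/* 2. record each mapping and the direction it must be unmapped with */
	unmap->to_cnt = 1;	/* addr[0]: DMA_TO_DEVICE */
	unmap->addr[0] = dma_map_page(device->dev, src, src_offset, len,
				      DMA_TO_DEVICE);
	unmap->from_cnt = 1;	/* addr[1]: DMA_FROM_DEVICE */
	unmap->addr[1] = dma_map_page(device->dev, dest, dest_offset, len,
				      DMA_FROM_DEVICE);
	unmap->len = len;

	/* 3. attach to the descriptor, which takes its own reference */
	tx = device->device_prep_dma_memcpy(chan, unmap->addr[1],
					    unmap->addr[0], len,
					    dma_prep_flags);
	if (tx)
		dma_set_unmap(tx, unmap);

	/* 4. drop our reference; pages are unmapped when the last ref goes */
	dmaengine_unmap_put(unmap);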
* do_async_gen_syndrome - asynchronously calculate P and/or Q
*/
static __async_inline struct dma_async_tx_descriptor *
-do_async_gen_syndrome(struct dma_chan *chan, struct page **blocks,
- const unsigned char *scfs, unsigned int offset, int disks,
- size_t len, dma_addr_t *dma_src,
+do_async_gen_syndrome(struct dma_chan *chan,
+ const unsigned char *scfs, int disks,
+ struct dmaengine_unmap_data *unmap,
+ enum dma_ctrl_flags dma_flags,
struct async_submit_ctl *submit)
{
struct dma_async_tx_descriptor *tx = NULL;
struct dma_device *dma = chan->device;
- enum dma_ctrl_flags dma_flags = 0;
enum async_tx_flags flags_orig = submit->flags;
dma_async_tx_callback cb_fn_orig = submit->cb_fn;
dma_async_tx_callback cb_param_orig = submit->cb_param;
int src_cnt = disks - 2;
- unsigned char coefs[src_cnt];
unsigned short pq_src_cnt;
dma_addr_t dma_dest[2];
int src_off = 0;
- int idx;
- int i;
- /* DMAs use destinations as sources, so use BIDIRECTIONAL mapping */
- if (P(blocks, disks))
- dma_dest[0] = dma_map_page(dma->dev, P(blocks, disks), offset,
- len, DMA_BIDIRECTIONAL);
- else
- dma_flags |= DMA_PREP_PQ_DISABLE_P;
- if (Q(blocks, disks))
- dma_dest[1] = dma_map_page(dma->dev, Q(blocks, disks), offset,
- len, DMA_BIDIRECTIONAL);
- else
- dma_flags |= DMA_PREP_PQ_DISABLE_Q;
-
- /* convert source addresses being careful to collapse 'empty'
- * sources and update the coefficients accordingly
- */
- for (i = 0, idx = 0; i < src_cnt; i++) {
- if (blocks[i] == NULL)
- continue;
- dma_src[idx] = dma_map_page(dma->dev, blocks[i], offset, len,
- DMA_TO_DEVICE);
- coefs[idx] = scfs[i];
- idx++;
- }
- src_cnt = idx;
+ if (submit->flags & ASYNC_TX_FENCE)
+ dma_flags |= DMA_PREP_FENCE;
while (src_cnt > 0) {
submit->flags = flags_orig;
if (src_cnt > pq_src_cnt) {
submit->flags &= ~ASYNC_TX_ACK;
submit->flags |= ASYNC_TX_FENCE;
- dma_flags |= DMA_COMPL_SKIP_DEST_UNMAP;
submit->cb_fn = NULL;
submit->cb_param = NULL;
} else {
- dma_flags &= ~DMA_COMPL_SKIP_DEST_UNMAP;
submit->cb_fn = cb_fn_orig;
submit->cb_param = cb_param_orig;
if (cb_fn_orig)
dma_flags |= DMA_PREP_INTERRUPT;
}
- if (submit->flags & ASYNC_TX_FENCE)
- dma_flags |= DMA_PREP_FENCE;
- /* Since we have clobbered the src_list we are committed
- * to doing this asynchronously. Drivers force forward
- * progress in case they can not provide a descriptor
+ /* Drivers force forward progress in case they can not provide
+ * a descriptor
*/
for (;;) {
+ dma_dest[0] = unmap->addr[disks - 2];
+ dma_dest[1] = unmap->addr[disks - 1];
tx = dma->device_prep_dma_pq(chan, dma_dest,
- &dma_src[src_off],
+ &unmap->addr[src_off],
pq_src_cnt,
- &coefs[src_off], len,
+ &scfs[src_off], unmap->len,
dma_flags);
if (likely(tx))
break;
dma_async_issue_pending(chan);
}
+ dma_set_unmap(tx, unmap);
async_tx_submit(chan, tx, submit);
submit->depend_tx = tx;
* set to NULL those buffers will be replaced with the raid6_zero_page
* in the synchronous path and omitted in the hardware-asynchronous
* path.
- *
- * 'blocks' note: if submit->scribble is NULL then the contents of
- * 'blocks' may be overwritten to perform address conversions
- * (dma_map_page() or page_address()).
*/
struct dma_async_tx_descriptor *
async_gen_syndrome(struct page **blocks, unsigned int offset, int disks,
&P(blocks, disks), 2,
blocks, src_cnt, len);
struct dma_device *device = chan ? chan->device : NULL;
- dma_addr_t *dma_src = NULL;
+ struct dmaengine_unmap_data *unmap = NULL;
BUG_ON(disks > 255 || !(P(blocks, disks) || Q(blocks, disks)));
- if (submit->scribble)
- dma_src = submit->scribble;
- else if (sizeof(dma_addr_t) <= sizeof(struct page *))
- dma_src = (dma_addr_t *) blocks;
+ if (device)
+ unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
- if (dma_src && device &&
+ if (unmap &&
(src_cnt <= dma_maxpq(device, 0) ||
dma_maxpq(device, DMA_PREP_CONTINUE) > 0) &&
is_dma_pq_aligned(device, offset, 0, len)) {
+ struct dma_async_tx_descriptor *tx;
+ enum dma_ctrl_flags dma_flags = 0;
+ unsigned char coefs[src_cnt];
+ int i, j;
+
/* run the p+q asynchronously */
pr_debug("%s: (async) disks: %d len: %zu\n",
__func__, disks, len);
- return do_async_gen_syndrome(chan, blocks, raid6_gfexp, offset,
- disks, len, dma_src, submit);
+
+ /* convert source addresses being careful to collapse 'empty'
+ * sources and update the coefficients accordingly
+ */
+ unmap->len = len;
+ for (i = 0, j = 0; i < src_cnt; i++) {
+ if (blocks[i] == NULL)
+ continue;
+ unmap->addr[j] = dma_map_page(device->dev, blocks[i], offset,
+ len, DMA_TO_DEVICE);
+ coefs[j] = raid6_gfexp[i];
+ unmap->to_cnt++;
+ j++;
+ }
+
+ /*
+ * DMAs use destinations as sources,
+ * so use BIDIRECTIONAL mapping
+ */
+ unmap->bidi_cnt++;
+ if (P(blocks, disks))
+ unmap->addr[j++] = dma_map_page(device->dev, P(blocks, disks),
+ offset, len, DMA_BIDIRECTIONAL);
+ else {
+ unmap->addr[j++] = 0;
+ dma_flags |= DMA_PREP_PQ_DISABLE_P;
+ }
+
+ unmap->bidi_cnt++;
+ if (Q(blocks, disks))
+ unmap->addr[j++] = dma_map_page(device->dev, Q(blocks, disks),
+ offset, len, DMA_BIDIRECTIONAL);
+ else {
+ unmap->addr[j++] = 0;
+ dma_flags |= DMA_PREP_PQ_DISABLE_Q;
+ }
+
+ tx = do_async_gen_syndrome(chan, coefs, j, unmap, dma_flags, submit);
+ dmaengine_unmap_put(unmap);
+ return tx;
}
+ dmaengine_unmap_put(unmap);
+
/* run the pq synchronously */
pr_debug("%s: (sync) disks: %d len: %zu\n", __func__, disks, len);
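A subtlety in the conversion above: when P or Q is absent, a zeroed placeholder is still stored and j still advances, so the two destinations always occupy the last two slots of unmap->addr. That invariant is what lets do_async_gen_syndrome() reload them on every continuation pass:

	/* destinations always sit at the tail of the unmap array */
	dma_dest[0] = unmap->addr[disks - 2]; /* P, or 0 + DMA_PREP_PQ_DISABLE_P */
	dma_dest[1] = unmap->addr[disks - 1]; /* Q, or 0 + DMA_PREP_PQ_DISABLE_Q */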
struct dma_async_tx_descriptor *tx;
unsigned char coefs[disks-2];
enum dma_ctrl_flags dma_flags = submit->cb_fn ? DMA_PREP_INTERRUPT : 0;
- dma_addr_t *dma_src = NULL;
- int src_cnt = 0;
+ struct dmaengine_unmap_data *unmap = NULL;
BUG_ON(disks < 4);
- if (submit->scribble)
- dma_src = submit->scribble;
- else if (sizeof(dma_addr_t) <= sizeof(struct page *))
- dma_src = (dma_addr_t *) blocks;
+ if (device)
+ unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
- if (dma_src && device && disks <= dma_maxpq(device, 0) &&
+ if (unmap && disks <= dma_maxpq(device, 0) &&
is_dma_pq_aligned(device, offset, 0, len)) {
struct device *dev = device->dev;
- dma_addr_t *pq = &dma_src[disks-2];
- int i;
+ dma_addr_t pq[2];
+ int i, j = 0, src_cnt = 0;
pr_debug("%s: (async) disks: %d len: %zu\n",
__func__, disks, len);
- if (!P(blocks, disks))
+
+ unmap->len = len;
+ for (i = 0; i < disks-2; i++)
+ if (likely(blocks[i])) {
+ unmap->addr[j] = dma_map_page(dev, blocks[i],
+ offset, len,
+ DMA_TO_DEVICE);
+ coefs[j] = raid6_gfexp[i];
+ unmap->to_cnt++;
+ src_cnt++;
+ j++;
+ }
+
+ if (!P(blocks, disks)) {
+ pq[0] = 0;
dma_flags |= DMA_PREP_PQ_DISABLE_P;
- else
+ } else {
pq[0] = dma_map_page(dev, P(blocks, disks),
offset, len,
DMA_TO_DEVICE);
- if (!Q(blocks, disks))
+ unmap->addr[j++] = pq[0];
+ unmap->to_cnt++;
+ }
+ if (!Q(blocks, disks)) {
+ pq[1] = 0;
dma_flags |= DMA_PREP_PQ_DISABLE_Q;
- else
+ } else {
pq[1] = dma_map_page(dev, Q(blocks, disks),
offset, len,
DMA_TO_DEVICE);
+ unmap->addr[j++] = pq[1];
+ unmap->to_cnt++;
+ }
if (submit->flags & ASYNC_TX_FENCE)
dma_flags |= DMA_PREP_FENCE;
- for (i = 0; i < disks-2; i++)
- if (likely(blocks[i])) {
- dma_src[src_cnt] = dma_map_page(dev, blocks[i],
- offset, len,
- DMA_TO_DEVICE);
- coefs[src_cnt] = raid6_gfexp[i];
- src_cnt++;
- }
-
for (;;) {
- tx = device->device_prep_dma_pq_val(chan, pq, dma_src,
+ tx = device->device_prep_dma_pq_val(chan, pq,
+ unmap->addr,
src_cnt,
coefs,
len, pqres,
async_tx_quiesce(&submit->depend_tx);
dma_async_issue_pending(chan);
}
+
+ dma_set_unmap(tx, unmap);
async_tx_submit(chan, tx, submit);
return tx;
#include <linux/dma-mapping.h>
#include <linux/raid/pq.h>
#include <linux/async_tx.h>
+#include <linux/dmaengine.h>
static struct dma_async_tx_descriptor *
async_sum_product(struct page *dest, struct page **srcs, unsigned char *coef,
struct dma_chan *chan = async_tx_find_channel(submit, DMA_PQ,
&dest, 1, srcs, 2, len);
struct dma_device *dma = chan ? chan->device : NULL;
+ struct dmaengine_unmap_data *unmap = NULL;
const u8 *amul, *bmul;
u8 ax, bx;
u8 *a, *b, *c;
- if (dma) {
- dma_addr_t dma_dest[2];
- dma_addr_t dma_src[2];
+ if (dma)
+ unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+
+ if (unmap) {
struct device *dev = dma->dev;
+ dma_addr_t pq[2];
struct dma_async_tx_descriptor *tx;
enum dma_ctrl_flags dma_flags = DMA_PREP_PQ_DISABLE_P;
if (submit->flags & ASYNC_TX_FENCE)
dma_flags |= DMA_PREP_FENCE;
- dma_dest[1] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
- dma_src[0] = dma_map_page(dev, srcs[0], 0, len, DMA_TO_DEVICE);
- dma_src[1] = dma_map_page(dev, srcs[1], 0, len, DMA_TO_DEVICE);
- tx = dma->device_prep_dma_pq(chan, dma_dest, dma_src, 2, coef,
+ unmap->addr[0] = dma_map_page(dev, srcs[0], 0, len, DMA_TO_DEVICE);
+ unmap->addr[1] = dma_map_page(dev, srcs[1], 0, len, DMA_TO_DEVICE);
+ unmap->to_cnt = 2;
+
+ unmap->addr[2] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
+ unmap->bidi_cnt = 1;
+ /* engine only looks at Q, but expects it to follow P */
+ pq[1] = unmap->addr[2];
+
+ unmap->len = len;
+ tx = dma->device_prep_dma_pq(chan, pq, unmap->addr, 2, coef,
len, dma_flags);
if (tx) {
+ dma_set_unmap(tx, unmap);
async_tx_submit(chan, tx, submit);
+ dmaengine_unmap_put(unmap);
return tx;
}
/* could not get a descriptor, unmap and fall through to
* the synchronous path
*/
- dma_unmap_page(dev, dma_dest[1], len, DMA_BIDIRECTIONAL);
- dma_unmap_page(dev, dma_src[0], len, DMA_TO_DEVICE);
- dma_unmap_page(dev, dma_src[1], len, DMA_TO_DEVICE);
+ dmaengine_unmap_put(unmap);
}
/* run the operation synchronously */
struct dma_chan *chan = async_tx_find_channel(submit, DMA_PQ,
&dest, 1, &src, 1, len);
struct dma_device *dma = chan ? chan->device : NULL;
+ struct dmaengine_unmap_data *unmap = NULL;
const u8 *qmul; /* Q multiplier table */
u8 *d, *s;
- if (dma) {
+ if (dma)
+ unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+
+ if (unmap) {
dma_addr_t dma_dest[2];
- dma_addr_t dma_src[1];
struct device *dev = dma->dev;
struct dma_async_tx_descriptor *tx;
enum dma_ctrl_flags dma_flags = DMA_PREP_PQ_DISABLE_P;
if (submit->flags & ASYNC_TX_FENCE)
dma_flags |= DMA_PREP_FENCE;
- dma_dest[1] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
- dma_src[0] = dma_map_page(dev, src, 0, len, DMA_TO_DEVICE);
- tx = dma->device_prep_dma_pq(chan, dma_dest, dma_src, 1, &coef,
- len, dma_flags);
+ unmap->addr[0] = dma_map_page(dev, src, 0, len, DMA_TO_DEVICE);
+ unmap->to_cnt++;
+ unmap->addr[1] = dma_map_page(dev, dest, 0, len, DMA_BIDIRECTIONAL);
+ dma_dest[1] = unmap->addr[1];
+ unmap->bidi_cnt++;
+ unmap->len = len;
+
+ /* this looks funny, but the engine looks for Q at
+ * dma_dest[1] and ignores dma_dest[0] as a dest
+ * due to DMA_PREP_PQ_DISABLE_P
+ */
+ tx = dma->device_prep_dma_pq(chan, dma_dest, unmap->addr,
+ 1, &coef, len, dma_flags);
+
if (tx) {
+ dma_set_unmap(tx, unmap);
+ dmaengine_unmap_put(unmap);
async_tx_submit(chan, tx, submit);
return tx;
}
/* could not get a descriptor, unmap and fall through to
* the synchronous path
*/
- dma_unmap_page(dev, dma_dest[1], len, DMA_BIDIRECTIONAL);
- dma_unmap_page(dev, dma_src[0], len, DMA_TO_DEVICE);
+ dmaengine_unmap_put(unmap);
}
/* no channel available, or failed to allocate a descriptor, so
}
device->device_issue_pending(chan);
} else {
- if (dma_wait_for_async_tx(depend_tx) != DMA_SUCCESS)
+ if (dma_wait_for_async_tx(depend_tx) != DMA_COMPLETE)
panic("%s: DMA error waiting for depend_tx\n",
__func__);
tx->tx_submit(tx);
* we are referring to the correct operation
*/
BUG_ON(async_tx_test_ack(*tx));
- if (dma_wait_for_async_tx(*tx) != DMA_SUCCESS)
+ if (dma_wait_for_async_tx(*tx) != DMA_COMPLETE)
panic("%s: DMA error waiting for transaction\n",
__func__);
async_tx_ack(*tx);
/* do_async_xor - dma map the pages and perform the xor with an engine */
static __async_inline struct dma_async_tx_descriptor *
-do_async_xor(struct dma_chan *chan, struct page *dest, struct page **src_list,
- unsigned int offset, int src_cnt, size_t len, dma_addr_t *dma_src,
+do_async_xor(struct dma_chan *chan, struct dmaengine_unmap_data *unmap,
struct async_submit_ctl *submit)
{
struct dma_device *dma = chan->device;
struct dma_async_tx_descriptor *tx = NULL;
- int src_off = 0;
- int i;
dma_async_tx_callback cb_fn_orig = submit->cb_fn;
void *cb_param_orig = submit->cb_param;
enum async_tx_flags flags_orig = submit->flags;
- enum dma_ctrl_flags dma_flags;
- int xor_src_cnt = 0;
- dma_addr_t dma_dest;
-
- /* map the dest bidrectional in case it is re-used as a source */
- dma_dest = dma_map_page(dma->dev, dest, offset, len, DMA_BIDIRECTIONAL);
- for (i = 0; i < src_cnt; i++) {
- /* only map the dest once */
- if (!src_list[i])
- continue;
- if (unlikely(src_list[i] == dest)) {
- dma_src[xor_src_cnt++] = dma_dest;
- continue;
- }
- dma_src[xor_src_cnt++] = dma_map_page(dma->dev, src_list[i], offset,
- len, DMA_TO_DEVICE);
- }
- src_cnt = xor_src_cnt;
+ enum dma_ctrl_flags dma_flags = 0;
+ int src_cnt = unmap->to_cnt;
+ int xor_src_cnt;
+ dma_addr_t dma_dest = unmap->addr[unmap->to_cnt];
+ dma_addr_t *src_list = unmap->addr;
while (src_cnt) {
+ dma_addr_t tmp;
+
submit->flags = flags_orig;
- dma_flags = 0;
xor_src_cnt = min(src_cnt, (int)dma->max_xor);
- /* if we are submitting additional xors, leave the chain open,
- * clear the callback parameters, and leave the destination
- * buffer mapped
+ /* if we are submitting additional xors, leave the chain open
+ * and clear the callback parameters
*/
if (src_cnt > xor_src_cnt) {
submit->flags &= ~ASYNC_TX_ACK;
submit->flags |= ASYNC_TX_FENCE;
- dma_flags = DMA_COMPL_SKIP_DEST_UNMAP;
submit->cb_fn = NULL;
submit->cb_param = NULL;
} else {
dma_flags |= DMA_PREP_INTERRUPT;
if (submit->flags & ASYNC_TX_FENCE)
dma_flags |= DMA_PREP_FENCE;
- /* Since we have clobbered the src_list we are committed
- * to doing this asynchronously. Drivers force forward progress
- * in case they can not provide a descriptor
+
+ /* Drivers force forward progress in case they can not provide a
+ * descriptor
*/
- tx = dma->device_prep_dma_xor(chan, dma_dest, &dma_src[src_off],
- xor_src_cnt, len, dma_flags);
+ tmp = src_list[0];
+ if (src_list > unmap->addr)
+ src_list[0] = dma_dest;
+ tx = dma->device_prep_dma_xor(chan, dma_dest, src_list,
+ xor_src_cnt, unmap->len,
+ dma_flags);
+ src_list[0] = tmp;
+
if (unlikely(!tx))
async_tx_quiesce(&submit->depend_tx);
while (unlikely(!tx)) {
dma_async_issue_pending(chan);
tx = dma->device_prep_dma_xor(chan, dma_dest,
- &dma_src[src_off],
- xor_src_cnt, len,
+ src_list,
+ xor_src_cnt, unmap->len,
dma_flags);
}
+ dma_set_unmap(tx, unmap);
async_tx_submit(chan, tx, submit);
submit->depend_tx = tx;
if (src_cnt > xor_src_cnt) {
/* drop completed sources */
src_cnt -= xor_src_cnt;
- src_off += xor_src_cnt;
-
			/* use the intermediate result as a source */
- dma_src[--src_off] = dma_dest;
src_cnt++;
+ src_list += xor_src_cnt - 1;
} else
break;
}
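The save/restore of src_list[0] is the delicate part of this conversion: on continuation passes the previous XOR result must be fed back as source 0, while unmap->addr[] must keep describing the original page mappings for the eventual unmap. Hence the temporary swap around the prep call instead of overwriting the array, roughly:

	tmp = src_list[0];
	if (src_list > unmap->addr)	/* only true on continuation passes */
		src_list[0] = dma_dest;	/* intermediate result as source 0 */
	tx = dma->device_prep_dma_xor(chan, dma_dest, src_list, xor_src_cnt,
				      unmap->len, dma_flags);
	src_list[0] = tmp;		/* unmap->addr[] again matches reality */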
struct dma_chan *chan = async_tx_find_channel(submit, DMA_XOR,
&dest, 1, src_list,
src_cnt, len);
- dma_addr_t *dma_src = NULL;
+ struct dma_device *device = chan ? chan->device : NULL;
+ struct dmaengine_unmap_data *unmap = NULL;
BUG_ON(src_cnt <= 1);
- if (submit->scribble)
- dma_src = submit->scribble;
- else if (sizeof(dma_addr_t) <= sizeof(struct page *))
- dma_src = (dma_addr_t *) src_list;
+ if (device)
+ unmap = dmaengine_get_unmap_data(device->dev, src_cnt+1, GFP_NOIO);
+
-	if (dma_src && chan && is_dma_xor_aligned(chan->device, offset, 0, len)) {
+	if (unmap && is_dma_xor_aligned(device, offset, 0, len)) {
+		struct dma_async_tx_descriptor *tx;
+		int i, j;
/* run the xor asynchronously */
pr_debug("%s (async): len: %zu\n", __func__, len);
- return do_async_xor(chan, dest, src_list, offset, src_cnt, len,
- dma_src, submit);
+ unmap->len = len;
+ for (i = 0, j = 0; i < src_cnt; i++) {
+ if (!src_list[i])
+ continue;
+ unmap->to_cnt++;
+ unmap->addr[j++] = dma_map_page(device->dev, src_list[i],
+ offset, len, DMA_TO_DEVICE);
+ }
+
+ /* map it bidirectional as it may be re-used as a source */
+ unmap->addr[j] = dma_map_page(device->dev, dest, offset, len,
+ DMA_BIDIRECTIONAL);
+ unmap->bidi_cnt = 1;
+
+ tx = do_async_xor(chan, unmap, submit);
+ dmaengine_unmap_put(unmap);
+ return tx;
} else {
+ dmaengine_unmap_put(unmap);
/* run the xor synchronously */
pr_debug("%s (sync): len: %zu\n", __func__, len);
WARN_ONCE(chan, "%s: no space for dma address conversion\n",
struct dma_chan *chan = xor_val_chan(submit, dest, src_list, src_cnt, len);
struct dma_device *device = chan ? chan->device : NULL;
struct dma_async_tx_descriptor *tx = NULL;
- dma_addr_t *dma_src = NULL;
+ struct dmaengine_unmap_data *unmap = NULL;
BUG_ON(src_cnt <= 1);
- if (submit->scribble)
- dma_src = submit->scribble;
- else if (sizeof(dma_addr_t) <= sizeof(struct page *))
- dma_src = (dma_addr_t *) src_list;
+ if (device)
+ unmap = dmaengine_get_unmap_data(device->dev, src_cnt, GFP_NOIO);
- if (dma_src && device && src_cnt <= device->max_xor &&
+ if (unmap && src_cnt <= device->max_xor &&
is_dma_xor_aligned(device, offset, 0, len)) {
unsigned long dma_prep_flags = 0;
int i;
dma_prep_flags |= DMA_PREP_INTERRUPT;
if (submit->flags & ASYNC_TX_FENCE)
dma_prep_flags |= DMA_PREP_FENCE;
- for (i = 0; i < src_cnt; i++)
- dma_src[i] = dma_map_page(device->dev, src_list[i],
- offset, len, DMA_TO_DEVICE);
- tx = device->device_prep_dma_xor_val(chan, dma_src, src_cnt,
+ for (i = 0; i < src_cnt; i++) {
+ unmap->addr[i] = dma_map_page(device->dev, src_list[i],
+ offset, len, DMA_TO_DEVICE);
+ unmap->to_cnt++;
+ }
+ unmap->len = len;
+
+ tx = device->device_prep_dma_xor_val(chan, unmap->addr, src_cnt,
len, result,
dma_prep_flags);
if (unlikely(!tx)) {
while (!tx) {
dma_async_issue_pending(chan);
tx = device->device_prep_dma_xor_val(chan,
- dma_src, src_cnt, len, result,
+ unmap->addr, src_cnt, len, result,
dma_prep_flags);
}
}
-
+ dma_set_unmap(tx, unmap);
async_tx_submit(chan, tx, submit);
} else {
enum async_tx_flags flags_orig = submit->flags;
async_tx_sync_epilog(submit);
submit->flags = flags_orig;
}
+ dmaengine_unmap_put(unmap);
return tx;
}
#undef pr
#define pr(fmt, args...) pr_info("raid6test: " fmt, ##args)
-#define NDISKS 16 /* Including P and Q */
+#define NDISKS 64 /* Including P and Q */
static struct page *dataptrs[NDISKS];
static addr_conv_t addr_conv[NDISKS];
err += test(11, &tests);
err += test(12, &tests);
}
+
+	/* the 24 disk case is special for ioatdma as it is the boundary point
+ * at which it needs to switch from 8-source ops to 16-source
+ * ops for continuation (assumes DMA_HAS_PQ_CONTINUE is not set)
+ */
+ if (NDISKS > 24)
+ err += test(24, &tests);
+
err += test(NDISKS, &tests);
pr("\n");
aead_request_complete(req, err);
}
-static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
- unsigned int keylen)
+int crypto_authenc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key,
+ unsigned int keylen)
{
- unsigned int authkeylen;
- unsigned int enckeylen;
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct crypto_ahash *auth = ctx->auth;
- struct crypto_ablkcipher *enc = ctx->enc;
- struct rtattr *rta = (void *)key;
+ struct rtattr *rta = (struct rtattr *)key;
struct crypto_authenc_key_param *param;
- int err = -EINVAL;
if (!RTA_OK(rta, keylen))
- goto badkey;
+ return -EINVAL;
if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
- goto badkey;
+ return -EINVAL;
if (RTA_PAYLOAD(rta) < sizeof(*param))
- goto badkey;
+ return -EINVAL;
param = RTA_DATA(rta);
- enckeylen = be32_to_cpu(param->enckeylen);
+ keys->enckeylen = be32_to_cpu(param->enckeylen);
key += RTA_ALIGN(rta->rta_len);
keylen -= RTA_ALIGN(rta->rta_len);
- if (keylen < enckeylen)
- goto badkey;
+ if (keylen < keys->enckeylen)
+ return -EINVAL;
- authkeylen = keylen - enckeylen;
+ keys->authkeylen = keylen - keys->enckeylen;
+ keys->authkey = key;
+ keys->enckey = key + keys->authkeylen;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(crypto_authenc_extractkeys);
+
+static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct crypto_ahash *auth = ctx->auth;
+ struct crypto_ablkcipher *enc = ctx->enc;
+ struct crypto_authenc_keys keys;
+ int err = -EINVAL;
+
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
+ goto badkey;
crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
crypto_ahash_set_flags(auth, crypto_aead_get_flags(authenc) &
CRYPTO_TFM_REQ_MASK);
- err = crypto_ahash_setkey(auth, key, authkeylen);
+ err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen);
crypto_aead_set_flags(authenc, crypto_ahash_get_flags(auth) &
CRYPTO_TFM_RES_MASK);
crypto_ablkcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK);
crypto_ablkcipher_set_flags(enc, crypto_aead_get_flags(authenc) &
CRYPTO_TFM_REQ_MASK);
- err = crypto_ablkcipher_setkey(enc, key + authkeylen, enckeylen);
+ err = crypto_ablkcipher_setkey(enc, keys.enckey, keys.enckeylen);
crypto_aead_set_flags(authenc, crypto_ablkcipher_get_flags(enc) &
CRYPTO_TFM_RES_MASK);
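For reference, the blob that crypto_authenc_extractkeys() parses is the rtattr-framed layout authenc() keys have always used: an RTA header whose payload carries the big-endian cipher-key length, then the authentication key, then the encryption key. A hedged sketch of producing that layout (authenc_pack_key() is an illustrative helper, not a kernel API):

	#include <linux/types.h>
	#include <linux/string.h>
	#include <linux/rtnetlink.h>	/* RTA_* macros */
	#include <crypto/authenc.h>	/* crypto_authenc_key_param */

	static int authenc_pack_key(u8 *buf, size_t buflen,
				    const u8 *authkey, unsigned int authkeylen,
				    const u8 *enckey, unsigned int enckeylen)
	{
		struct rtattr *rta = (struct rtattr *)buf;
		struct crypto_authenc_key_param *param;

		if (buflen < RTA_SPACE(sizeof(*param)) + authkeylen + enckeylen)
			return -EINVAL;

		rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
		rta->rta_len = RTA_LENGTH(sizeof(*param));
		param = RTA_DATA(rta);
		param->enckeylen = cpu_to_be32(enckeylen);

		/* keys start at RTA_ALIGN(rta->rta_len), matching the parser */
		memcpy(buf + RTA_SPACE(sizeof(*param)), authkey, authkeylen);
		memcpy(buf + RTA_SPACE(sizeof(*param)) + authkeylen,
		       enckey, enckeylen);
		return 0;
	}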
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
if (!err) {
struct crypto_aead *authenc = crypto_aead_reqtfm(areq);
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct ablkcipher_request *abreq = aead_request_ctx(areq);
- u8 *iv = (u8 *)(abreq + 1) +
- crypto_ablkcipher_reqsize(ctx->enc);
+ struct authenc_request_ctx *areq_ctx = aead_request_ctx(areq);
+ struct ablkcipher_request *abreq = (void *)(areq_ctx->tail
+ + ctx->reqoff);
+ u8 *iv = (u8 *)abreq - crypto_ablkcipher_ivsize(ctx->enc);
err = crypto_authenc_genicv(areq, iv, 0);
}
ihash = ohash + authsize;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- return memcmp(ihash, ohash, authsize) ? -EBADMSG : 0;
+ return crypto_memneq(ihash, ohash, authsize) ? -EBADMSG : 0;
}
static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
static int crypto_authenc_esn_setkey(struct crypto_aead *authenc_esn, const u8 *key,
unsigned int keylen)
{
- unsigned int authkeylen;
- unsigned int enckeylen;
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct crypto_ahash *auth = ctx->auth;
struct crypto_ablkcipher *enc = ctx->enc;
- struct rtattr *rta = (void *)key;
- struct crypto_authenc_key_param *param;
+ struct crypto_authenc_keys keys;
int err = -EINVAL;
- if (!RTA_OK(rta, keylen))
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
goto badkey;
- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
- goto badkey;
- if (RTA_PAYLOAD(rta) < sizeof(*param))
- goto badkey;
-
- param = RTA_DATA(rta);
- enckeylen = be32_to_cpu(param->enckeylen);
-
- key += RTA_ALIGN(rta->rta_len);
- keylen -= RTA_ALIGN(rta->rta_len);
-
- if (keylen < enckeylen)
- goto badkey;
-
- authkeylen = keylen - enckeylen;
crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
crypto_ahash_set_flags(auth, crypto_aead_get_flags(authenc_esn) &
CRYPTO_TFM_REQ_MASK);
- err = crypto_ahash_setkey(auth, key, authkeylen);
+ err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen);
crypto_aead_set_flags(authenc_esn, crypto_ahash_get_flags(auth) &
CRYPTO_TFM_RES_MASK);
crypto_ablkcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK);
crypto_ablkcipher_set_flags(enc, crypto_aead_get_flags(authenc_esn) &
CRYPTO_TFM_REQ_MASK);
- err = crypto_ablkcipher_setkey(enc, key + authkeylen, enckeylen);
+ err = crypto_ablkcipher_setkey(enc, keys.enckey, keys.enckeylen);
crypto_aead_set_flags(authenc_esn, crypto_ablkcipher_get_flags(enc) &
CRYPTO_TFM_RES_MASK);
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
+ err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0;
if (err)
goto out;
ihash = ohash + authsize;
scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen,
authsize, 0);
- return memcmp(ihash, ohash, authsize) ? -EBADMSG : 0;
+ return crypto_memneq(ihash, ohash, authsize) ? -EBADMSG : 0;
}
static int crypto_authenc_esn_iverify(struct aead_request *req, u8 *iv,
}
/* compute plaintext into mac */
- get_data_to_compute(cipher, pctx, plain, cryptlen);
+ if (cryptlen)
+ get_data_to_compute(cipher, pctx, plain, cryptlen);
out:
return err;
if (!err) {
err = crypto_ccm_auth(req, req->dst, cryptlen);
- if (!err && memcmp(pctx->auth_tag, pctx->odata, authsize))
+ if (!err && crypto_memneq(pctx->auth_tag, pctx->odata, authsize))
err = -EBADMSG;
}
aead_request_complete(req, err);
return err;
/* verify */
- if (memcmp(authtag, odata, authsize))
+ if (crypto_memneq(authtag, odata, authsize))
return -EBADMSG;
return err;
crypto_xor(auth_tag, iauth_tag, 16);
scatterwalk_map_and_copy(iauth_tag, req->src, cryptlen, authsize, 0);
- return memcmp(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
+ return crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
}
static void gcm_decrypt_done(struct crypto_async_request *areq, int err)
--- /dev/null
+/*
+ * Hash Info: Hash algorithms information
+ *
+ * Copyright (c) 2013 Dmitry Kasatkin <d.kasatkin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <linux/export.h>
+#include <crypto/hash_info.h>
+
+const char *const hash_algo_name[HASH_ALGO__LAST] = {
+ [HASH_ALGO_MD4] = "md4",
+ [HASH_ALGO_MD5] = "md5",
+ [HASH_ALGO_SHA1] = "sha1",
+ [HASH_ALGO_RIPE_MD_160] = "rmd160",
+ [HASH_ALGO_SHA256] = "sha256",
+ [HASH_ALGO_SHA384] = "sha384",
+ [HASH_ALGO_SHA512] = "sha512",
+ [HASH_ALGO_SHA224] = "sha224",
+ [HASH_ALGO_RIPE_MD_128] = "rmd128",
+ [HASH_ALGO_RIPE_MD_256] = "rmd256",
+ [HASH_ALGO_RIPE_MD_320] = "rmd320",
+ [HASH_ALGO_WP_256] = "wp256",
+ [HASH_ALGO_WP_384] = "wp384",
+ [HASH_ALGO_WP_512] = "wp512",
+ [HASH_ALGO_TGR_128] = "tgr128",
+ [HASH_ALGO_TGR_160] = "tgr160",
+ [HASH_ALGO_TGR_192] = "tgr192",
+};
+EXPORT_SYMBOL_GPL(hash_algo_name);
+
+const int hash_digest_size[HASH_ALGO__LAST] = {
+ [HASH_ALGO_MD4] = MD5_DIGEST_SIZE,
+ [HASH_ALGO_MD5] = MD5_DIGEST_SIZE,
+ [HASH_ALGO_SHA1] = SHA1_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_160] = RMD160_DIGEST_SIZE,
+ [HASH_ALGO_SHA256] = SHA256_DIGEST_SIZE,
+ [HASH_ALGO_SHA384] = SHA384_DIGEST_SIZE,
+ [HASH_ALGO_SHA512] = SHA512_DIGEST_SIZE,
+ [HASH_ALGO_SHA224] = SHA224_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_128] = RMD128_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_256] = RMD256_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_320] = RMD320_DIGEST_SIZE,
+ [HASH_ALGO_WP_256] = WP256_DIGEST_SIZE,
+ [HASH_ALGO_WP_384] = WP384_DIGEST_SIZE,
+ [HASH_ALGO_WP_512] = WP512_DIGEST_SIZE,
+ [HASH_ALGO_TGR_128] = TGR128_DIGEST_SIZE,
+ [HASH_ALGO_TGR_160] = TGR160_DIGEST_SIZE,
+ [HASH_ALGO_TGR_192] = TGR192_DIGEST_SIZE,
+};
+EXPORT_SYMBOL_GPL(hash_digest_size);
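These two exported tables give signature and IMA code an enum-indexed mapping in both directions without touching the crypto API. (The MD4 row reuses MD5_DIGEST_SIZE, presumably because both digests are 16 bytes and no shared MD4 size constant is available.) Typical lookups:

	/* illustrative lookups against the tables above */
	const char *name = hash_algo_name[HASH_ALGO_SHA256];	/* "sha256" */
	int dlen = hash_digest_size[HASH_ALGO_SHA256];		/* 32 bytes */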
--- /dev/null
+/*
+ * Constant-time equality testing of memory regions.
+ *
+ * Authors:
+ *
+ * James Yonan <james@openvpn.net>
+ * Daniel Borkmann <dborkman@redhat.com>
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of OpenVPN Technologies nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <crypto/algapi.h>
+
+#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
+
+/* Generic path for arbitrary size */
+static inline unsigned long
+__crypto_memneq_generic(const void *a, const void *b, size_t size)
+{
+ unsigned long neq = 0;
+
+#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+ while (size >= sizeof(unsigned long)) {
+ neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+ a += sizeof(unsigned long);
+ b += sizeof(unsigned long);
+ size -= sizeof(unsigned long);
+ }
+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+ while (size > 0) {
+ neq |= *(unsigned char *)a ^ *(unsigned char *)b;
+ a += 1;
+ b += 1;
+ size -= 1;
+ }
+ return neq;
+}
+
+/* Loop-free fast-path for frequently used 16-byte size */
+static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
+{
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ if (sizeof(unsigned long) == 8)
+ return ((*(unsigned long *)(a) ^ *(unsigned long *)(b))
+ | (*(unsigned long *)(a+8) ^ *(unsigned long *)(b+8)));
+ else if (sizeof(unsigned int) == 4)
+ return ((*(unsigned int *)(a) ^ *(unsigned int *)(b))
+ | (*(unsigned int *)(a+4) ^ *(unsigned int *)(b+4))
+ | (*(unsigned int *)(a+8) ^ *(unsigned int *)(b+8))
+ | (*(unsigned int *)(a+12) ^ *(unsigned int *)(b+12)));
+ else
+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+ return ((*(unsigned char *)(a) ^ *(unsigned char *)(b))
+ | (*(unsigned char *)(a+1) ^ *(unsigned char *)(b+1))
+ | (*(unsigned char *)(a+2) ^ *(unsigned char *)(b+2))
+ | (*(unsigned char *)(a+3) ^ *(unsigned char *)(b+3))
+ | (*(unsigned char *)(a+4) ^ *(unsigned char *)(b+4))
+ | (*(unsigned char *)(a+5) ^ *(unsigned char *)(b+5))
+ | (*(unsigned char *)(a+6) ^ *(unsigned char *)(b+6))
+ | (*(unsigned char *)(a+7) ^ *(unsigned char *)(b+7))
+ | (*(unsigned char *)(a+8) ^ *(unsigned char *)(b+8))
+ | (*(unsigned char *)(a+9) ^ *(unsigned char *)(b+9))
+ | (*(unsigned char *)(a+10) ^ *(unsigned char *)(b+10))
+ | (*(unsigned char *)(a+11) ^ *(unsigned char *)(b+11))
+ | (*(unsigned char *)(a+12) ^ *(unsigned char *)(b+12))
+ | (*(unsigned char *)(a+13) ^ *(unsigned char *)(b+13))
+ | (*(unsigned char *)(a+14) ^ *(unsigned char *)(b+14))
+ | (*(unsigned char *)(a+15) ^ *(unsigned char *)(b+15)));
+}
+
+/* Compare two areas of memory without leaking timing information,
+ * and with special optimizations for common sizes. Users should
+ * not call this function directly, but should instead use
+ * crypto_memneq defined in crypto/algapi.h.
+ */
+noinline unsigned long __crypto_memneq(const void *a, const void *b,
+ size_t size)
+{
+ switch (size) {
+ case 16:
+ return __crypto_memneq_16(a, b);
+ default:
+ return __crypto_memneq_generic(a, b, size);
+ }
+}
+EXPORT_SYMBOL(__crypto_memneq);
+
+#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
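This is why every memcmp() on an authentication tag in the surrounding hunks becomes crypto_memneq(): memcmp() returns at the first mismatching byte, so its running time tells an attacker how long a prefix of a forged tag was correct, while __crypto_memneq() visits every byte and only ORs the XOR differences into neq. The call sites all reduce to the same shape:

	/* nonzero means "not equal"; timing is independent of where they differ */
	if (crypto_memneq(ihash, ohash, authsize))
		return -EBADMSG;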
ret += tcrypt_test("cmac(des3_ede)");
break;
+ case 155:
+ ret += tcrypt_test("authenc(hmac(sha1),cbc(aes))");
+ break;
+
case 200:
test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
speed_template_16_24_32);
goto out;
}
- sg_init_one(&sg[0], input,
- template[i].ilen + (enc ? authsize : 0));
-
if (diff_dst) {
output = xoutbuf[0];
output += align_offset;
+ sg_init_one(&sg[0], input, template[i].ilen);
sg_init_one(&sgout[0], output,
+ template[i].rlen);
+ } else {
+ sg_init_one(&sg[0], input,
template[i].ilen +
(enc ? authsize : 0));
- } else {
output = input;
}
memcpy(q, template[i].input + temp,
template[i].tap[k]);
- n = template[i].tap[k];
- if (k == template[i].np - 1 && enc)
- n += authsize;
- if (offset_in_page(q) + n < PAGE_SIZE)
- q[n] = 0;
-
sg_set_buf(&sg[k], q, template[i].tap[k]);
if (diff_dst) {
offset_in_page(IDX[k]);
memset(q, 0, template[i].tap[k]);
- if (offset_in_page(q) + n < PAGE_SIZE)
- q[n] = 0;
sg_set_buf(&sgout[k], q,
template[i].tap[k]);
}
+ n = template[i].tap[k];
+ if (k == template[i].np - 1 && enc)
+ n += authsize;
+ if (offset_in_page(q) + n < PAGE_SIZE)
+ q[n] = 0;
+
temp += template[i].tap[k];
}
goto out;
}
- sg[k - 1].length += authsize;
-
if (diff_dst)
sgout[k - 1].length += authsize;
+ else
+ sg[k - 1].length += authsize;
}
sg_init_table(asg, template[i].anp);
initrd, therefore it's safe to say Y.
See Documentation/acpi/initrd_table_override.txt for details
-config ACPI_BLACKLIST_YEAR
- int "Disable ACPI for systems before Jan 1st this year" if X86_32
- default 0
- help
- Enter a 4-digit year, e.g., 2001, to disable ACPI by default
- on platforms with DMI BIOS date before January 1st that year.
- "acpi=force" can be used to override this mechanism.
-
- Enter 0 to disable this mechanism and allow ACPI to
- run by default no matter what the year. (default)
-
config ACPI_DEBUG
bool "Debug Statements"
default n
config ACPI_EXTLOG
tristate "Extended Error Log support"
depends on X86_MCE && X86_LOCAL_APIC
- select EFI
select UEFI_CPER
default n
help
struct acpi_ac {
struct power_supply charger;
- struct acpi_device *adev;
struct platform_device *pdev;
unsigned long long state;
};
static int acpi_ac_get_state(struct acpi_ac *ac)
{
acpi_status status;
+ acpi_handle handle = ACPI_HANDLE(&ac->pdev->dev);
- status = acpi_evaluate_integer(ac->adev->handle, "_PSR", NULL,
+ status = acpi_evaluate_integer(handle, "_PSR", NULL,
&ac->state);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
static void acpi_ac_notify_handler(acpi_handle handle, u32 event, void *data)
{
struct acpi_ac *ac = data;
+ struct acpi_device *adev;
if (!ac)
return;
msleep(ac_sleep_before_get_state_ms);
acpi_ac_get_state(ac);
- acpi_bus_generate_netlink_event(ac->adev->pnp.device_class,
+ adev = ACPI_COMPANION(&ac->pdev->dev);
+ acpi_bus_generate_netlink_event(adev->pnp.device_class,
dev_name(&ac->pdev->dev),
event, (u32) ac->state);
- acpi_notifier_call_chain(ac->adev, event, (u32) ac->state);
+ acpi_notifier_call_chain(adev, event, (u32) ac->state);
kobject_uevent(&ac->charger.dev->kobj, KOBJ_CHANGE);
}
if (!pdev)
return -EINVAL;
- result = acpi_bus_get_device(ACPI_HANDLE(&pdev->dev), &adev);
- if (result)
+ adev = ACPI_COMPANION(&pdev->dev);
+ if (!adev)
return -ENODEV;
ac = kzalloc(sizeof(struct acpi_ac), GFP_KERNEL);
strcpy(acpi_device_name(adev), ACPI_AC_DEVICE_NAME);
strcpy(acpi_device_class(adev), ACPI_AC_CLASS);
- ac->adev = adev;
ac->pdev = pdev;
platform_set_drvdata(pdev, ac);
{ "80860F14", (unsigned long)&byt_sdio_dev_desc },
{ "80860F41", (unsigned long)&byt_i2c_dev_desc },
{ "INT33B2", },
+ { "INT33FC", },
+
+ { "INT3430", (unsigned long)&lpt_dev_desc },
+ { "INT3431", (unsigned long)&lpt_dev_desc },
+ { "INT3432", (unsigned long)&lpt_dev_desc },
+ { "INT3433", (unsigned long)&lpt_dev_desc },
+ { "INT3434", (unsigned long)&lpt_uart_dev_desc },
+ { "INT3435", (unsigned long)&lpt_uart_dev_desc },
+ { "INT3436", (unsigned long)&lpt_sdio_dev_desc },
+ { "INT3437", },
{ }
};
pdevinfo.id = -1;
pdevinfo.res = resources;
pdevinfo.num_res = count;
- pdevinfo.acpi_node.handle = adev->handle;
+ pdevinfo.acpi_node.companion = adev;
pdev = platform_device_register_full(&pdevinfo);
if (IS_ERR(pdev)) {
dev_err(&adev->dev, "platform device creation failed: %ld\n",
struct acpi_buffer *output_buffer);
acpi_status
-acpi_rs_create_aml_resources(struct acpi_resource *linked_list_buffer,
+acpi_rs_create_aml_resources(struct acpi_buffer *resource_list,
struct acpi_buffer *output_buffer);
acpi_status
u32 aml_buffer_length, acpi_size * size_needed);
acpi_status
-acpi_rs_get_aml_length(struct acpi_resource *linked_list_buffer,
- acpi_size * size_needed);
+acpi_rs_get_aml_length(struct acpi_resource *resource_list,
+ acpi_size resource_list_size, acpi_size * size_needed);
acpi_status
acpi_rs_get_pci_routing_table_length(union acpi_operand_object *package_object,
void acpi_ns_delete_node(struct acpi_namespace_node *node)
{
union acpi_operand_object *obj_desc;
+ union acpi_operand_object *next_desc;
ACPI_FUNCTION_NAME(ns_delete_node);
acpi_ns_detach_object(node);
/*
- * Delete an attached data object if present (an object that was created
- * and attached via acpi_attach_data). Note: After any normal object is
- * detached above, the only possible remaining object is a data object.
+ * Delete an attached data object list if present (objects that were
+ * attached via acpi_attach_data). Note: After any normal object is
+ * detached above, the only possible remaining object(s) are data
+ * objects, in a linked list.
*/
obj_desc = node->object;
- if (obj_desc && (obj_desc->common.type == ACPI_TYPE_LOCAL_DATA)) {
+ while (obj_desc && (obj_desc->common.type == ACPI_TYPE_LOCAL_DATA)) {
/* Invoke the attached data deletion handler if present */
obj_desc->data.handler(node, obj_desc->data.pointer);
-	}
+ next_desc = obj_desc->common.next_object;
acpi_ut_remove_reference(obj_desc);
+ obj_desc = next_desc;
+ }
+
+ /* Special case for the statically allocated root node */
+
+ if (node == acpi_gbl_root_node) {
+ return;
}
/* Now we can delete the node */
void acpi_ns_terminate(void)
{
- union acpi_operand_object *obj_desc;
+ acpi_status status;
ACPI_FUNCTION_TRACE(ns_terminate);
/*
- * 1) Free the entire namespace -- all nodes and objects
- *
- * Delete all object descriptors attached to namepsace nodes
+ * Free the entire namespace -- all nodes and all objects
+ * attached to the nodes
*/
acpi_ns_delete_namespace_subtree(acpi_gbl_root_node);
- /* Detach any objects attached to the root */
+ /* Delete any objects attached to the root node */
- obj_desc = acpi_ns_get_attached_object(acpi_gbl_root_node);
- if (obj_desc) {
- acpi_ns_detach_object(acpi_gbl_root_node);
+ status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+ if (ACPI_FAILURE(status)) {
+ return_VOID;
}
+ acpi_ns_delete_node(acpi_gbl_root_node);
+ (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Namespace freed\n"));
return_VOID;
}
* FUNCTION: acpi_rs_get_aml_length
*
* PARAMETERS: resource - Pointer to the resource linked list
+ * resource_list_size - Size of the resource linked list
* size_needed - Where the required size is returned
*
* RETURN: Status
******************************************************************************/
acpi_status
-acpi_rs_get_aml_length(struct acpi_resource * resource, acpi_size * size_needed)
+acpi_rs_get_aml_length(struct acpi_resource *resource,
+ acpi_size resource_list_size, acpi_size * size_needed)
{
acpi_size aml_size_needed = 0;
+ struct acpi_resource *resource_end;
acpi_rs_length total_size;
ACPI_FUNCTION_TRACE(rs_get_aml_length);
/* Traverse entire list of internal resource descriptors */
- while (resource) {
+ resource_end =
+ ACPI_ADD_PTR(struct acpi_resource, resource, resource_list_size);
+ while (resource < resource_end) {
/* Validate the descriptor type */
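The new resource_list_size argument exists so this loop can bound its walk explicitly rather than trusting the list to be well terminated; in effect the traversal becomes plain bounds-checked pointer arithmetic:

	/* walk [resource, resource + resource_list_size) */
	resource_end = ACPI_ADD_PTR(struct acpi_resource, resource,
				    resource_list_size);
	while (resource < resource_end) {
		/* validate and size each descriptor as before */
	}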
*
* FUNCTION: acpi_rs_create_aml_resources
*
- * PARAMETERS: linked_list_buffer - Pointer to the resource linked list
- * output_buffer - Pointer to the user's buffer
+ * PARAMETERS: resource_list - Pointer to the resource list buffer
+ * output_buffer - Where the AML buffer is returned
*
* RETURN: Status AE_OK if okay, else a valid acpi_status code.
* If the output_buffer is too small, the error will be
* AE_BUFFER_OVERFLOW and output_buffer->Length will point
* to the size buffer needed.
*
- * DESCRIPTION: Takes the linked list of device resources and
- * creates a bytestream to be used as input for the
- * _SRS control method.
+ * DESCRIPTION: Converts a list of device resources to an AML bytestream
+ * to be used as input for the _SRS control method.
*
******************************************************************************/
acpi_status
-acpi_rs_create_aml_resources(struct acpi_resource *linked_list_buffer,
+acpi_rs_create_aml_resources(struct acpi_buffer *resource_list,
struct acpi_buffer *output_buffer)
{
acpi_status status;
ACPI_FUNCTION_TRACE(rs_create_aml_resources);
- ACPI_DEBUG_PRINT((ACPI_DB_INFO, "LinkedListBuffer = %p\n",
- linked_list_buffer));
+ /* Params already validated, no need to re-validate here */
- /*
- * Params already validated, so we don't re-validate here
- *
- * Pass the linked_list_buffer into a module that calculates
- * the buffer size needed for the byte stream.
- */
- status = acpi_rs_get_aml_length(linked_list_buffer, &aml_size_needed);
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "ResourceList Buffer = %p\n",
+ resource_list->pointer));
+
+ /* Get the buffer size needed for the AML byte stream */
+
+ status = acpi_rs_get_aml_length(resource_list->pointer,
+ resource_list->length,
+ &aml_size_needed);
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "AmlSizeNeeded=%X, %s\n",
(u32)aml_size_needed, acpi_format_exception(status)));
/* Do the conversion */
- status =
- acpi_rs_convert_resources_to_aml(linked_list_buffer,
- aml_size_needed,
- output_buffer->pointer);
+ status = acpi_rs_convert_resources_to_aml(resource_list->pointer,
+ aml_size_needed,
+ output_buffer->pointer);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
* Convert the linked list into a byte stream
*/
buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
- status = acpi_rs_create_aml_resources(in_buffer->pointer, &buffer);
+ status = acpi_rs_create_aml_resources(in_buffer, &buffer);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
}
acpi_gbl_prev_thread_id = thread_id;
+ acpi_gbl_nesting_level = 0;
}
/*
*/
acpi_os_printf("%9s-%04ld ", module_name, line_number);
+#ifdef ACPI_EXEC_APP
+ /*
+ * For acpi_exec only, emit the thread ID and nesting level.
+	 * Note: the nesting level is really only useful during single-threaded
+	 * execution. Otherwise, multiple threads will keep resetting the
+ * level.
+ */
if (ACPI_LV_THREADS & acpi_dbg_level) {
acpi_os_printf("[%u] ", (u32)thread_id);
}
- acpi_os_printf("[%02ld] %-22.22s: ",
- acpi_gbl_nesting_level,
- acpi_ut_trim_function_name(function_name));
+ acpi_os_printf("[%02ld] ", acpi_gbl_nesting_level);
+#endif
+
+ acpi_os_printf("%-22.22s: ", acpi_ut_trim_function_name(function_name));
va_start(args, format);
acpi_os_vprintf(format, args);
component_id, "%s\n", acpi_gbl_fn_exit_str);
}
- acpi_gbl_nesting_level--;
+ if (acpi_gbl_nesting_level) {
+ acpi_gbl_nesting_level--;
+ }
}
ACPI_EXPORT_SYMBOL(acpi_ut_exit)
}
}
- acpi_gbl_nesting_level--;
+ if (acpi_gbl_nesting_level) {
+ acpi_gbl_nesting_level--;
+ }
}
ACPI_EXPORT_SYMBOL(acpi_ut_status_exit)
ACPI_FORMAT_UINT64(value));
}
- acpi_gbl_nesting_level--;
+ if (acpi_gbl_nesting_level) {
+ acpi_gbl_nesting_level--;
+ }
}
ACPI_EXPORT_SYMBOL(acpi_ut_value_exit)
ptr);
}
- acpi_gbl_nesting_level--;
+ if (acpi_gbl_nesting_level) {
+ acpi_gbl_nesting_level--;
+ }
}
#endif
bool "ACPI Platform Error Interface (APEI)"
select MISC_FILESYSTEMS
select PSTORE
- select EFI
select UEFI_CPER
depends on X86
help
static struct pstore_info erst_info = {
.owner = THIS_MODULE,
.name = "erst",
+ .flags = PSTORE_FLAGS_FRAGILE,
.open = erst_open_pstore,
.close = erst_close_pstore,
.read = erst_reader,
{""}
};
-#if CONFIG_ACPI_BLACKLIST_YEAR
-
-static int __init blacklist_by_year(void)
-{
- int year;
-
- /* Doesn't exist? Likely an old system */
- if (!dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL)) {
- printk(KERN_ERR PREFIX "no DMI BIOS year, "
- "acpi=force is required to enable ACPI\n" );
- return 1;
- }
- /* 0? Likely a buggy new BIOS */
- if (year == 0) {
- printk(KERN_ERR PREFIX "DMI BIOS year==0, "
- "assuming ACPI-capable machine\n" );
- return 0;
- }
- if (year < CONFIG_ACPI_BLACKLIST_YEAR) {
- printk(KERN_ERR PREFIX "BIOS age (%d) fails cutoff (%d), "
- "acpi=force is required to enable ACPI\n",
- year, CONFIG_ACPI_BLACKLIST_YEAR);
- return 1;
- }
- return 0;
-}
-#else
-static inline int blacklist_by_year(void)
-{
- return 0;
-}
-#endif
-
int __init acpi_blacklisted(void)
{
int i = 0;
}
}
- blacklisted += blacklist_by_year();
-
dmi_check_system(acpi_osi_dmi_table);
return blacklisted;
}
EXPORT_SYMBOL(acpi_bus_get_private_data);
+void acpi_bus_no_hotplug(acpi_handle handle)
+{
+ struct acpi_device *adev = NULL;
+
+ acpi_bus_get_device(handle, &adev);
+ if (adev)
+ adev->flags.no_hotplug = true;
+}
+EXPORT_SYMBOL_GPL(acpi_bus_no_hotplug);
+
static void acpi_print_osc_error(acpi_handle handle,
struct acpi_osc_context *context, char *error)
{
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
-#include <linux/device.h>
+#include <linux/acpi.h>
#include <linux/export.h>
#include <linux/mutex.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
-#include <acpi/acpi.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
-
#include "internal.h"
#define _COMPONENT ACPI_POWER_COMPONENT
*/
int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p, int d_max_in)
{
- acpi_handle handle = DEVICE_ACPI_HANDLE(dev);
+ acpi_handle handle = ACPI_HANDLE(dev);
struct acpi_device *adev;
int ret, d_min, d_max;
if (!device_run_wake(phys_dev))
return -EINVAL;
- handle = DEVICE_ACPI_HANDLE(phys_dev);
+ handle = ACPI_HANDLE(phys_dev);
if (!handle || acpi_bus_get_device(handle, &adev)) {
dev_dbg(phys_dev, "ACPI handle without context in %s!\n",
__func__);
if (!device_can_wakeup(dev))
return -EINVAL;
- handle = DEVICE_ACPI_HANDLE(dev);
+ handle = ACPI_HANDLE(dev);
if (!handle || acpi_bus_get_device(handle, &adev)) {
dev_dbg(dev, "ACPI handle without context in %s!\n", __func__);
return -ENODEV;
*/
struct acpi_device *acpi_dev_pm_get_node(struct device *dev)
{
- acpi_handle handle = DEVICE_ACPI_HANDLE(dev);
+ acpi_handle handle = ACPI_HANDLE(dev);
struct acpi_device *adev;
return handle && !acpi_bus_get_device(handle, &adev) ? adev : NULL;
static void advance_transaction(struct acpi_ec *ec, u8 status)
{
unsigned long flags;
- struct transaction *t = ec->curr;
+ struct transaction *t;
spin_lock_irqsave(&ec->lock, flags);
+ t = ec->curr;
if (!t)
goto unlock;
if (t->wlen > t->wi) {
int acpi_bind_one(struct device *dev, acpi_handle handle)
{
- struct acpi_device *acpi_dev;
- acpi_status status;
+ struct acpi_device *acpi_dev = NULL;
struct acpi_device_physical_node *physical_node, *pn;
char physical_node_name[PHYSICAL_NODE_NAME_SIZE];
struct list_head *physnode_list;
unsigned int node_id;
int retval = -EINVAL;
- if (ACPI_HANDLE(dev)) {
+ if (ACPI_COMPANION(dev)) {
if (handle) {
- dev_warn(dev, "ACPI handle is already set\n");
+ dev_warn(dev, "ACPI companion already set\n");
return -EINVAL;
} else {
- handle = ACPI_HANDLE(dev);
+ acpi_dev = ACPI_COMPANION(dev);
}
+ } else {
+ acpi_bus_get_device(handle, &acpi_dev);
}
- if (!handle)
+ if (!acpi_dev)
return -EINVAL;
+ get_device(&acpi_dev->dev);
get_device(dev);
- status = acpi_bus_get_device(handle, &acpi_dev);
- if (ACPI_FAILURE(status))
- goto err;
-
physical_node = kzalloc(sizeof(*physical_node), GFP_KERNEL);
if (!physical_node) {
retval = -ENOMEM;
dev_warn(dev, "Already associated with ACPI node\n");
kfree(physical_node);
- if (ACPI_HANDLE(dev) != handle)
+ if (ACPI_COMPANION(dev) != acpi_dev)
goto err;
put_device(dev);
+ put_device(&acpi_dev->dev);
return 0;
}
if (pn->node_id == node_id) {
list_add(&physical_node->node, physnode_list);
acpi_dev->physical_node_count++;
- if (!ACPI_HANDLE(dev))
- ACPI_HANDLE_SET(dev, acpi_dev->handle);
+ if (!ACPI_COMPANION(dev))
+ ACPI_COMPANION_SET(dev, acpi_dev);
acpi_physnode_link_name(physical_node_name, node_id);
retval = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj,
return 0;
err:
- ACPI_HANDLE_SET(dev, NULL);
+ ACPI_COMPANION_SET(dev, NULL);
put_device(dev);
+ put_device(&acpi_dev->dev);
return retval;
}
EXPORT_SYMBOL_GPL(acpi_bind_one);
int acpi_unbind_one(struct device *dev)
{
+ struct acpi_device *acpi_dev = ACPI_COMPANION(dev);
struct acpi_device_physical_node *entry;
- struct acpi_device *acpi_dev;
- acpi_status status;
- if (!ACPI_HANDLE(dev))
+ if (!acpi_dev)
return 0;
- status = acpi_bus_get_device(ACPI_HANDLE(dev), &acpi_dev);
- if (ACPI_FAILURE(status)) {
- dev_err(dev, "Oops, ACPI handle corrupt in %s()\n", __func__);
- return -EINVAL;
- }
-
mutex_lock(&acpi_dev->physical_node_lock);
list_for_each_entry(entry, &acpi_dev->physical_node_list, node)
acpi_physnode_link_name(physnode_name, entry->node_id);
sysfs_remove_link(&acpi_dev->dev.kobj, physnode_name);
sysfs_remove_link(&dev->kobj, "firmware_node");
- ACPI_HANDLE_SET(dev, NULL);
- /* acpi_bind_one() increase refcnt by one. */
+ ACPI_COMPANION_SET(dev, NULL);
+ /* Drop references taken by acpi_bind_one(). */
put_device(dev);
+ put_device(&acpi_dev->dev);
kfree(entry);
break;
}
}
EXPORT_SYMBOL_GPL(acpi_unbind_one);
+void acpi_preset_companion(struct device *dev, acpi_handle parent, u64 addr)
+{
+ struct acpi_device *adev;
+
+ if (!acpi_bus_get_device(acpi_get_child(parent, addr), &adev))
+ ACPI_COMPANION_SET(dev, adev);
+}
+EXPORT_SYMBOL_GPL(acpi_preset_companion);
+
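acpi_preset_companion() wraps the lookup-and-set idiom callers previously open-coded with raw handles; the libata hunk at the end of this series shows the before/after shape:

	/* before: plumb a raw acpi_handle into the device */
	ACPI_HANDLE_SET(&ap->tdev, acpi_get_child(host_handle, ap->port_no));

	/* after: resolve the child to a struct acpi_device and attach it */
	acpi_preset_companion(&ap->tdev, host_handle, ap->port_no);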
static int acpi_platform_notify(struct device *dev)
{
struct acpi_bus_type *type = acpi_get_bus_type(dev);
#include <linux/slab.h>
#include <linux/acpi.h>
#include <linux/acpi_io.h>
-#include <acpi/acpiosxf.h>
/* ACPI NVS regions, APEI may use it */
.ids = root_device_ids,
.attach = acpi_pci_root_add,
.detach = acpi_pci_root_remove,
+ .hotplug = {
+ .ignore = true,
+ },
};
static DEFINE_MUTEX(osc_lock);
dev_err(&device->dev,
"Bus %04x:%02x not present in PCI namespace\n",
root->segment, (unsigned int)root->secondary.start);
+ device->driver_data = NULL;
result = -ENODEV;
goto end;
}
{
struct acpi_device *device = data;
acpi_handle handle = device->handle;
- struct acpi_scan_handler *handler;
u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE;
int error;
lock_device_hotplug();
mutex_lock(&acpi_scan_lock);
- handler = device->handler;
- if (!handler || !handler->hotplug.enabled) {
- put_device(&device->dev);
- goto err_support;
- }
-
if (ost_src == ACPI_NOTIFY_EJECT_REQUEST)
acpi_evaluate_hotplug_ost(handle, ACPI_NOTIFY_EJECT_REQUEST,
ACPI_OST_SC_EJECT_IN_PROGRESS, NULL);
- if (handler->hotplug.mode == AHM_CONTAINER)
+ if (device->handler && device->handler->hotplug.mode == AHM_CONTAINER)
kobject_uevent(&device->dev.kobj, KOBJ_OFFLINE);
error = acpi_scan_hot_remove(device);
break;
case ACPI_NOTIFY_EJECT_REQUEST:
acpi_handle_debug(handle, "ACPI_NOTIFY_EJECT_REQUEST event\n");
- status = acpi_bus_get_device(handle, &adev);
- if (ACPI_FAILURE(status))
+ if (acpi_bus_get_device(handle, &adev))
goto err_out;
get_device(&adev->dev);
*/
list_for_each_entry(hwid, &pnp.ids, list) {
handler = acpi_scan_match_handler(hwid->id, NULL);
- if (handler) {
+ if (handler && !handler->hotplug.ignore) {
acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY,
acpi_hotplug_notify_cb, handler);
break;
if (result)
return result;
+ device->flags.match_driver = true;
result = device_attach(&device->dev);
if (result < 0)
return result;
if (result)
return result;
+ device->flags.match_driver = true;
result = device_attach(&device->dev);
}
* generate wakeup events.
*/
if (ACPI_SUCCESS(status) && (acpi_state == ACPI_STATE_S3)) {
- acpi_event_status pwr_btn_status;
+ acpi_event_status pwr_btn_status = ACPI_EVENT_FLAG_DISABLED;
acpi_get_event_status(ACPI_EVENT_POWER_BUTTON, &pwr_btn_status);
sprintf(table_attr->name + ACPI_NAME_SIZE, "%d",
table_attr->instance);
- table_attr->attr.size = 0;
+ table_attr->attr.size = table_header->length;
table_attr->attr.read = acpi_table_show;
table_attr->attr.attr.name = table_attr->name;
table_attr->attr.attr.mode = 0400;
{
struct acpi_table_attr *table_attr;
struct acpi_table_header *table_header = NULL;
- int table_index = 0;
- int result;
+ int table_index;
+ acpi_status status;
+ int ret;
tables_kobj = kobject_create_and_add("tables", acpi_kobj);
if (!tables_kobj)
if (!dynamic_tables_kobj)
goto err_dynamic_tables;
- do {
- result = acpi_get_table_by_index(table_index, &table_header);
- if (!result) {
- table_index++;
- table_attr = NULL;
- table_attr =
- kzalloc(sizeof(struct acpi_table_attr), GFP_KERNEL);
- if (!table_attr)
- return -ENOMEM;
-
- acpi_table_attr_init(table_attr, table_header);
- result =
- sysfs_create_bin_file(tables_kobj,
- &table_attr->attr);
- if (result) {
- kfree(table_attr);
- return result;
- } else
- list_add_tail(&table_attr->node,
- &acpi_table_attr_list);
+ for (table_index = 0;; table_index++) {
+ status = acpi_get_table_by_index(table_index, &table_header);
+
+ if (status == AE_BAD_PARAMETER)
+ break;
+
+ if (ACPI_FAILURE(status))
+ continue;
+
+ table_attr = kzalloc(sizeof(*table_attr), GFP_KERNEL);
+ if (!table_attr)
+ return -ENOMEM;
+
+ acpi_table_attr_init(table_attr, table_header);
+ ret = sysfs_create_bin_file(tables_kobj, &table_attr->attr);
+ if (ret) {
+ kfree(table_attr);
+ return ret;
}
- } while (!result);
+ list_add_tail(&table_attr->node, &acpi_table_attr_list);
+ }
+
kobject_uevent(tables_kobj, KOBJ_ADD);
kobject_uevent(dynamic_tables_kobj, KOBJ_ADD);
- result = acpi_install_table_handler(acpi_sysfs_table_handler, NULL);
+ status = acpi_install_table_handler(acpi_sysfs_table_handler, NULL);
- return result == AE_OK ? 0 : -EINVAL;
+ return ACPI_FAILURE(status) ? -EINVAL : 0;
err_dynamic_tables:
kobject_put(tables_kobj);
err:
static bool allow_duplicates;
module_param(allow_duplicates, bool, 0644);
-/*
- * Some BIOSes claim they use minimum backlight at boot,
- * and this may bring dimming screen after boot
- */
-static bool use_bios_initial_backlight = 1;
-module_param(use_bios_initial_backlight, bool, 0644);
-
/*
 * For Windows 8 systems: if set to true and the GPU driver has
* registered a backlight interface, skip registering ACPI video's.
return 0;
}
-static int video_ignore_initial_backlight(const struct dmi_system_id *d)
-{
- use_bios_initial_backlight = 0;
- return 0;
-}
-
static struct dmi_system_id video_dmi_table[] __initdata = {
/*
* Broken _BQC workaround http://bugzilla.kernel.org/show_bug.cgi?id=13121
DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7720"),
},
},
- {
- .callback = video_ignore_initial_backlight,
- .ident = "HP Folio 13-2000",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
- DMI_MATCH(DMI_PRODUCT_NAME, "HP Folio 13 - 2000 Notebook PC"),
- },
- },
- {
- .callback = video_ignore_initial_backlight,
- .ident = "Fujitsu E753",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "FUJITSU"),
- DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E753"),
- },
- },
- {
- .callback = video_ignore_initial_backlight,
- .ident = "HP Pavilion dm4",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
- DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dm4 Notebook PC"),
- },
- },
- {
- .callback = video_ignore_initial_backlight,
- .ident = "HP Pavilion g6 Notebook PC",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
- DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion g6 Notebook PC"),
- },
- },
- {
- .callback = video_ignore_initial_backlight,
- .ident = "HP 1000 Notebook PC",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
- DMI_MATCH(DMI_PRODUCT_NAME, "HP 1000 Notebook PC"),
- },
- },
- {
- .callback = video_ignore_initial_backlight,
- .ident = "HP Pavilion m4",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
- DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion m4 Notebook PC"),
- },
- },
{}
};
if (!device->cap._BQC)
goto set_level;
- if (use_bios_initial_backlight) {
- level = acpi_video_bqc_value_to_level(device, level_old);
- /*
- * On some buggy laptops, _BQC returns an uninitialized
- * value when invoked for the first time, i.e.
- * level_old is invalid (no matter whether it's a level
- * or an index). Set the backlight to max_level in this case.
- */
- for (i = 2; i < br->count; i++)
- if (level == br->levels[i])
- break;
- if (i == br->count || !level)
- level = max_level;
- }
+ level = acpi_video_bqc_value_to_level(device, level_old);
+ /*
+ * On some buggy laptops, _BQC returns an uninitialized
+ * value when invoked for the first time, i.e.
+ * level_old is invalid (no matter whether it's a level
+ * or an index). Set the backlight to max_level in this case.
+ */
+ for (i = 2; i < br->count; i++)
+ if (level == br->levels[i])
+ break;
+ if (i == br->count || !level)
+ level = max_level;
set_level:
result = acpi_video_device_lcd_set_level(device, level);
.driver_data = board_ahci_yes_fbs }, /* 88se9172 on some Gigabyte */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x91a3),
.driver_data = board_ahci_yes_fbs },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9230),
+ .driver_data = board_ahci_yes_fbs },
/* Promise */
{ PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */
if (rc)
return rc;
- /* AHCI controllers often implement SFF compatible interface.
- * Grab all PCI BARs just in case.
- */
- rc = pcim_iomap_regions_request_all(pdev, 1 << ahci_pci_bar, DRV_NAME);
- if (rc == -EBUSY)
- pcim_pin_device(pdev);
- if (rc)
- return rc;
-
if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
(pdev->device == 0x2652 || pdev->device == 0x2653)) {
u8 map;
}
}
+ /* AHCI controllers often implement SFF compatible interface.
+ * Grab all PCI BARs just in case.
+ */
+ rc = pcim_iomap_regions_request_all(pdev, 1 << ahci_pci_bar, DRV_NAME);
+ if (rc == -EBUSY)
+ pcim_pin_device(pdev);
+ if (rc)
+ return rc;
+
hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
if (!hpriv)
return -ENOMEM;
/*
 * set PHY Parameters, two steps to configure the GPR13,
* one write for rest of parameters, mask of first write
- * is 0x07fffffd, and the other one write for setting
+ * is 0x07ffffff, and the other one write for setting
* the mpll_clk_en.
*/
regmap_update_bits(imxpriv->gpr, 0x34, IMX6Q_GPR13_SATA_RX_EQ_VAL_MASK
| IMX6Q_GPR13_SATA_TX_ATTEN_MASK
| IMX6Q_GPR13_SATA_TX_BOOST_MASK
| IMX6Q_GPR13_SATA_TX_LVL_MASK
+ | IMX6Q_GPR13_SATA_MPLL_CLK_EN
| IMX6Q_GPR13_SATA_TX_EDGE_RATE
, IMX6Q_GPR13_SATA_RX_EQ_VAL_3_0_DB
| IMX6Q_GPR13_SATA_RX_LOS_LVL_SATA2M
static const struct of_device_id ahci_of_match[] = {
{ .compatible = "snps,spear-ahci", },
{ .compatible = "snps,exynos5440-ahci", },
+ { .compatible = "ibm,476gtr-ahci", },
{},
};
MODULE_DEVICE_TABLE(of, ahci_of_match);
if (libata_noacpi || ap->flags & ATA_FLAG_ACPI_SATA || !host_handle)
return;
- ACPI_HANDLE_SET(&ap->tdev, acpi_get_child(host_handle, ap->port_no));
+ acpi_preset_companion(&ap->tdev, host_handle, ap->port_no);
if (ata_acpi_gtm(ap, &ap->__acpi_init_gtm) == 0)
ap->pflags |= ATA_PFLAG_INIT_GTM_VALID;
parent_handle = port_handle;
}
- ACPI_HANDLE_SET(&dev->tdev, acpi_get_child(parent_handle, adr));
+ acpi_preset_companion(&dev->tdev, parent_handle, adr);
register_hotplug_dock_device(ata_dev_acpi_handle(dev),
&ata_acpi_dev_dock_ops, dev, NULL, NULL);
"failed to get NCQ Send/Recv Log Emask 0x%x\n",
err_mask);
} else {
+ u8 *cmds = dev->ncq_send_recv_cmds;
+
dev->flags |= ATA_DFLAG_NCQ_SEND_RECV;
- memcpy(dev->ncq_send_recv_cmds, ap->sector_buf,
- ATA_LOG_NCQ_SEND_RECV_SIZE);
+ memcpy(cmds, ap->sector_buf, ATA_LOG_NCQ_SEND_RECV_SIZE);
+
+ if (dev->horkage & ATA_HORKAGE_NO_NCQ_TRIM) {
+ ata_dev_dbg(dev, "disabling queued TRIM support\n");
+ cmds[ATA_LOG_NCQ_SEND_RECV_DSM_OFFSET] &=
+ ~ATA_LOG_NCQ_SEND_RECV_DSM_TRIM;
+ }
}
}
{ "ST3320[68]13AS", "SD1[5-9]", ATA_HORKAGE_NONCQ |
ATA_HORKAGE_FIRMWARE_WARN },
+	/* Seagate Momentus SpinPoint M8 seems to have FPDMA_AA issues */
+ { "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA },
+
/* Blacklist entries taken from Silicon Image 3124/3132
Windows driver .inf file - also several Linux problem reports */
{ "HTS541060G9SA00", "MB3OC60D", ATA_HORKAGE_NONCQ, },
{ "PIONEER DVD-RW DVR-212D", NULL, ATA_HORKAGE_NOSETXFER },
{ "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER },
+ /* devices that don't properly handle queued TRIM commands */
+ { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
+ { "Crucial_CT???M500SSD1", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
+
/* End Marker */
{ }
};
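
The two queued-TRIM entries just added are shell-style globs, with '*' spanning any run of characters and '?' exactly one. A standalone illustration of the intended matching, using userspace fnmatch() as a stand-in for the blacklist matcher (an assumption for demonstration only, not libata code):

#include <fnmatch.h>
#include <stdio.h>

int main(void)
{
	/* prints 1: "Micron_M500*" covers every model starting Micron_M500 */
	printf("%d\n", fnmatch("Micron_M500*", "Micron_M500DC", 0) == 0);
	/* prints 1: the three '?' cover the capacity digits */
	printf("%d\n", fnmatch("Crucial_CT???M500SSD1",
			       "Crucial_CT960M500SSD1", 0) == 0);
	return 0;
}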
for (i = 0; i < SATA_PMP_MAX_PORTS; i++)
ata_tlink_delete(&ap->pmp_link[i]);
}
- ata_tport_delete(ap);
-
/* remove the associated SCSI host */
scsi_remove_host(ap->scsi_host);
+ ata_tport_delete(ap);
}
/**
{ "norst", .lflags = ATA_LFLAG_NO_HRST | ATA_LFLAG_NO_SRST },
{ "rstonce", .lflags = ATA_LFLAG_RST_ONCE },
{ "atapi_dmadir", .horkage_on = ATA_HORKAGE_ATAPI_DMADIR },
+ { "disable", .horkage_on = ATA_HORKAGE_DISABLE },
};
char *start = *cur, *p = *cur;
char *id, *val, *endp;
shost->max_lun = 1;
shost->max_channel = 1;
shost->max_cmd_len = 16;
+ shost->no_write_same = 1;
/* Schedule policy is determined by ->qc_defer()
* callback and it needs to see every deferred qc.
return;
}
+ /*
+ * XXX - UGLY HACK
+ *
+ * The block layer suspend/resume path is fundamentally broken due
+ * to freezable kthreads and workqueue and may deadlock if a block
+ * device gets removed while resume is in progress. I don't know
+ * what the solution is short of removing freezable kthreads and
+ * workqueues altogether.
+ *
+ * The following is an ugly hack to avoid kicking off device
+ * removal while freezer is active. This is a joke but does avoid
+ * this particular deadlock scenario.
+ *
+ * https://bugzilla.kernel.org/show_bug.cgi?id=62801
+ * http://marc.info/?l=linux-kernel&m=138695698516487
+ */
+#ifdef CONFIG_FREEZER
+ while (pm_freezing)
+ msleep(10);
+#endif
+
DPRINTK("ENTER\n");
mutex_lock(&ap->scsi_scan_mutex);
static bool odd_can_poweroff(struct ata_device *ata_dev)
{
acpi_handle handle;
- acpi_status status;
struct acpi_device *acpi_dev;
handle = ata_dev_acpi_handle(ata_dev);
if (!handle)
return false;
- status = acpi_bus_get_device(handle, &acpi_dev);
- if (ACPI_FAILURE(status))
+ if (acpi_bus_get_device(handle, &acpi_dev))
return false;
return acpi_device_can_poweroff(acpi_dev);
ret = clk_set_rate(acdev->clk, 166000000);
if (ret) {
		dev_warn(acdev->host->dev, "clock set rate failed\n");
+ clk_disable_unprepare(acdev->clk);
return ret;
}
struct dma_async_tx_descriptor *tx;
struct dma_chan *chan = acdev->dma_chan;
dma_cookie_t cookie;
- unsigned long flags = DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP |
- DMA_COMPL_SKIP_DEST_UNMAP;
+ unsigned long flags = DMA_PREP_INTERRUPT;
int ret = 0;
tx = chan->device->device_prep_dma_memcpy(chan, dest, src, len, flags);
goto err_alloc;
pdev->dev.parent = pdevinfo->parent;
- ACPI_HANDLE_SET(&pdev->dev, pdevinfo->acpi_node.handle);
+ ACPI_COMPANION_SET(&pdev->dev, pdevinfo->acpi_node.companion);
if (pdevinfo->dma_mask) {
/*
ret = platform_device_add(pdev);
if (ret) {
err:
- ACPI_HANDLE_SET(&pdev->dev, NULL);
+ ACPI_COMPANION_SET(&pdev->dev, NULL);
kfree(pdev->dev.dma_mask);
err_alloc:
device_unlock(dev);
+ if (error)
+ pm_runtime_put(dev);
+
return error;
}
BUG_ON(reg_size != 4);
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
offset += ctx->val_bytes;
}
- if (ctx->clk)
+ if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
BUG_ON(reg_size != 4);
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
offset += ctx->val_bytes;
}
- if (ctx->clk)
+ if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
{
struct regmap_mmio_context *ctx = context;
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
clk_unprepare(ctx->clk);
clk_put(ctx->clk);
}
ctx->regs = regs;
ctx->val_bytes = config->val_bits / 8;
+ ctx->clk = ERR_PTR(-ENODEV);
if (clk_id == NULL)
return ctx;
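
The IS_ERR() checks above work because the context now starts out with clk = ERR_PTR(-ENODEV): a clock that was never requested is an error pointer rather than NULL, so it can no longer be confused with a valid NULL "dummy clock". A minimal sketch of the sentinel pattern (hypothetical names, not regmap code):

#include <linux/clk.h>
#include <linux/err.h>

struct mmio_ctx {
	struct clk *clk;
};

static void mmio_ctx_init(struct mmio_ctx *ctx)
{
	/* sentinel: "no clock attached" is an ERR_PTR, never NULL */
	ctx->clk = ERR_PTR(-ENODEV);
}

static void mmio_ctx_clk_off(struct mmio_ctx *ctx)
{
	if (!IS_ERR(ctx->clk))	/* only touch a clock we actually hold */
		clk_disable(ctx->clk);
}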
val + (i * val_bytes),
val_bytes);
if (ret != 0)
- return ret;
+ goto out;
}
} else {
ret = _regmap_raw_write(map, reg, wval, val_bytes * val_count);
/**
* regmap_read(): Read a value from a single register
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: Register to be read from
* @val: Pointer to store read value
*
/**
* regmap_raw_read(): Read raw data from the device
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: First register to be read from
* @val: Pointer to store read value
* @val_len: Size of data to read
/**
* regmap_bulk_read(): Read multiple registers from the device
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: First register to be read from
* @val: Pointer to store read value, in native register size for device
* @val_count: Number of registers to read
#include <linux/module.h>
+
#include <linux/moduleparam.h>
#include <linux/sched.h>
#include <linux/fs.h>
NULL_Q_MQ = 2,
};
-static int submit_queues = 1;
+static int submit_queues;
module_param(submit_queues, int, S_IRUGO);
MODULE_PARM_DESC(submit_queues, "Number of submission queues");
module_param(hw_queue_depth, int, S_IRUGO);
MODULE_PARM_DESC(hw_queue_depth, "Queue depth for each hardware queue. Default: 64");
-static bool use_per_node_hctx = true;
+static bool use_per_node_hctx = false;
module_param(use_per_node_hctx, bool, S_IRUGO);
-MODULE_PARM_DESC(use_per_node_hctx, "Use per-node allocation for hardware context queues. Default: true");
+MODULE_PARM_DESC(use_per_node_hctx, "Use per-node allocation for hardware context queues. Default: false");
static void put_tag(struct nullb_queue *nq, unsigned int tag)
{
blk_end_request_all(rq, 0);
}
-#if defined(CONFIG_SMP) && defined(CONFIG_USE_GENERIC_SMP_HELPERS)
+#ifdef CONFIG_SMP
static void null_ipi_cmd_end_io(void *data)
{
put_cpu();
}
-#endif /* CONFIG_SMP && CONFIG_USE_GENERIC_SMP_HELPERS */
+#endif /* CONFIG_SMP */
static inline void null_handle_cmd(struct nullb_cmd *cmd)
{
end_cmd(cmd);
break;
case NULL_IRQ_SOFTIRQ:
-#if defined(CONFIG_SMP) && defined(CONFIG_USE_GENERIC_SMP_HELPERS)
+#ifdef CONFIG_SMP
null_cmd_end_ipi(cmd);
#else
end_cmd(cmd);
static struct blk_mq_hw_ctx *null_alloc_hctx(struct blk_mq_reg *reg, unsigned int hctx_index)
{
- return kzalloc_node(sizeof(struct blk_mq_hw_ctx), GFP_KERNEL,
- hctx_index);
+ int b_size = DIV_ROUND_UP(reg->nr_hw_queues, nr_online_nodes);
+ int tip = (reg->nr_hw_queues % nr_online_nodes);
+ int node = 0, i, n;
+
+ /*
+	 * Split submit queues evenly across the online nodes. If the split
+	 * is uneven, give the first buckets one extra queue each until the
+	 * remainder is used up; the rest get the even share.
+ */
+ for (i = 0, n = 1; i < hctx_index; i++, n++) {
+ if (n % b_size == 0) {
+ n = 0;
+ node++;
+
+ tip--;
+ if (!tip)
+ b_size = reg->nr_hw_queues / nr_online_nodes;
+ }
+ }
+
+ /*
+	 * A node might not be online, so map the relative node id to the
+	 * real node id.
+ */
+ for_each_online_node(n) {
+ if (!node)
+ break;
+ node--;
+ }
+
+ return kzalloc_node(sizeof(struct blk_mq_hw_ctx), GFP_KERNEL, n);
}
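
The bucket walk above is easiest to check with concrete numbers. A standalone sketch (hypothetical values, not driver code) reproducing the mapping for 7 hardware queues over 3 nodes; the first node absorbs the one extra queue:

#include <stdio.h>

int main(void)
{
	const int nr_hw_queues = 7, nodes = 3;
	int hctx;

	/* expected: hctx 0-2 -> node 0, hctx 3-4 -> node 1, hctx 5-6 -> node 2 */
	for (hctx = 0; hctx < nr_hw_queues; hctx++) {
		int b_size = (nr_hw_queues + nodes - 1) / nodes; /* DIV_ROUND_UP */
		int tip = nr_hw_queues % nodes;
		int node = 0, i, n;

		for (i = 0, n = 1; i < hctx; i++, n++) {
			if (n % b_size == 0) {
				n = 0;
				node++;
				tip--;
				if (!tip)
					b_size = nr_hw_queues / nodes;
			}
		}
		printf("hctx %d -> node %d\n", hctx, node);
	}
	return 0;
}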
static void null_free_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_index)
kfree(hctx);
}
+static void null_init_queue(struct nullb *nullb, struct nullb_queue *nq)
+{
+ BUG_ON(!nullb);
+ BUG_ON(!nq);
+
+ init_waitqueue_head(&nq->wait);
+ nq->queue_depth = nullb->queue_depth;
+}
+
static int null_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
unsigned int index)
{
struct nullb *nullb = data;
struct nullb_queue *nq = &nullb->queues[index];
- init_waitqueue_head(&nq->wait);
- nq->queue_depth = nullb->queue_depth;
- nullb->nr_queues++;
hctx->driver_data = nq;
+ null_init_queue(nullb, nq);
+ nullb->nr_queues++;
return 0;
}
nq->cmds = kzalloc(nq->queue_depth * sizeof(*cmd), GFP_KERNEL);
if (!nq->cmds)
- return 1;
+ return -ENOMEM;
tag_size = ALIGN(nq->queue_depth, BITS_PER_LONG) / BITS_PER_LONG;
nq->tag_map = kzalloc(tag_size * sizeof(unsigned long), GFP_KERNEL);
if (!nq->tag_map) {
kfree(nq->cmds);
- return 1;
+ return -ENOMEM;
}
for (i = 0; i < nq->queue_depth; i++) {
static int setup_queues(struct nullb *nullb)
{
- struct nullb_queue *nq;
- int i;
-
- nullb->queues = kzalloc(submit_queues * sizeof(*nq), GFP_KERNEL);
+ nullb->queues = kzalloc(submit_queues * sizeof(struct nullb_queue),
+ GFP_KERNEL);
if (!nullb->queues)
- return 1;
+ return -ENOMEM;
nullb->nr_queues = 0;
nullb->queue_depth = hw_queue_depth;
- if (queue_mode == NULL_Q_MQ)
- return 0;
+ return 0;
+}
+
+static int init_driver_queues(struct nullb *nullb)
+{
+ struct nullb_queue *nq;
+ int i, ret = 0;
for (i = 0; i < submit_queues; i++) {
nq = &nullb->queues[i];
- init_waitqueue_head(&nq->wait);
- nq->queue_depth = hw_queue_depth;
- if (setup_commands(nq))
- break;
+
+ null_init_queue(nullb, nq);
+
+ ret = setup_commands(nq);
+ if (ret)
+ goto err_queue;
nullb->nr_queues++;
}
- if (i == submit_queues)
- return 0;
-
+ return 0;
+err_queue:
cleanup_queues(nullb);
- return 1;
+ return ret;
}
static int null_add_dev(void)
spin_lock_init(&nullb->lock);
+ if (queue_mode == NULL_Q_MQ && use_per_node_hctx)
+ submit_queues = nr_online_nodes;
+
if (setup_queues(nullb))
goto err;
if (queue_mode == NULL_Q_MQ) {
null_mq_reg.numa_node = home_node;
null_mq_reg.queue_depth = hw_queue_depth;
+ null_mq_reg.nr_hw_queues = submit_queues;
if (use_per_node_hctx) {
null_mq_reg.ops->alloc_hctx = null_alloc_hctx;
null_mq_reg.ops->free_hctx = null_free_hctx;
-
- null_mq_reg.nr_hw_queues = nr_online_nodes;
} else {
null_mq_reg.ops->alloc_hctx = blk_mq_alloc_single_hw_queue;
null_mq_reg.ops->free_hctx = blk_mq_free_single_hw_queue;
-
- null_mq_reg.nr_hw_queues = submit_queues;
}
nullb->q = blk_mq_init_queue(&null_mq_reg, nullb);
} else if (queue_mode == NULL_Q_BIO) {
nullb->q = blk_alloc_queue_node(GFP_KERNEL, home_node);
blk_queue_make_request(nullb->q, null_queue_bio);
+ init_driver_queues(nullb);
} else {
nullb->q = blk_init_queue_node(null_request_fn, &nullb->lock, home_node);
blk_queue_prep_rq(nullb->q, null_rq_prep_fn);
if (nullb->q)
blk_queue_softirq_done(nullb->q, null_softirq_done_fn);
+ init_driver_queues(nullb);
}
if (!nullb->q)
{
unsigned int i;
-#if !defined(CONFIG_SMP) || !defined(CONFIG_USE_GENERIC_SMP_HELPERS)
+#if !defined(CONFIG_SMP)
if (irqmode == NULL_IRQ_SOFTIRQ) {
pr_warn("null_blk: softirq completions not available.\n");
pr_warn("null_blk: using direct completions.\n");
}
#endif
- if (submit_queues > nr_cpu_ids)
+ if (queue_mode == NULL_Q_MQ && use_per_node_hctx) {
+ if (submit_queues < nr_online_nodes) {
+ pr_warn("null_blk: submit_queues param is set to %u.",
+ nr_online_nodes);
+ submit_queues = nr_online_nodes;
+ }
+ } else if (submit_queues > nr_cpu_ids)
submit_queues = nr_cpu_ids;
else if (!submit_queues)
submit_queues = 1;
}
}
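
Taken together, the validation above yields the following policy (a hypothetical 2-node, 8-CPU box; queue_mode=2 selects NULL_Q_MQ):

/*
 * modprobe null_blk queue_mode=2 use_per_node_hctx=1
 *	-> submit_queues forced up to 2 (one per online node)
 * modprobe null_blk queue_mode=2 submit_queues=64
 *	-> clamped down to 8 (nr_cpu_ids)
 * modprobe null_blk queue_mode=2
 *	-> unset submit_queues defaults to 1
 */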
-const char *skd_skmsg_state_to_str(enum skd_fit_msg_state state)
+static const char *skd_skmsg_state_to_str(enum skd_fit_msg_state state)
{
switch (state) {
case SKD_MSG_STATE_IDLE:
}
}
-const char *skd_skreq_state_to_str(enum skd_req_state state)
+static const char *skd_skreq_state_to_str(enum skd_req_state state)
{
switch (state) {
case SKD_REQ_STATE_IDLE:
spin_lock_irqsave(&vblk->vq_lock, flags);
if (__virtblk_add_req(vblk->vq, vbr, vbr->sg, num) < 0) {
+ virtqueue_kick(vblk->vq);
spin_unlock_irqrestore(&vblk->vq_lock, flags);
blk_mq_stop_hw_queue(hctx);
- virtqueue_kick(vblk->vq);
return BLK_MQ_RQ_QUEUE_BUSY;
}
- spin_unlock_irqrestore(&vblk->vq_lock, flags);
if (last)
virtqueue_kick(vblk->vq);
+
+ spin_unlock_irqrestore(&vblk->vq_lock, flags);
return BLK_MQ_RQ_QUEUE_OK;
}
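
The reordering above is deliberate: virtqueue_kick() touches the ring, and the ring is serialized only by vq_lock, so both kick sites must run before the unlock. A sketch of the safe shape (field names taken from the code above, body otherwise hypothetical):

static void vblk_submit_locked(struct virtio_blk *vblk, bool kick)
{
	unsigned long flags;

	spin_lock_irqsave(&vblk->vq_lock, flags);
	/* buffers are added to vblk->vq here, under the lock */
	if (kick)
		virtqueue_kick(vblk->vq);	/* ring access: still locked */
	spin_unlock_irqrestore(&vblk->vq_lock, flags);
}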
if ((ring_req->operation == BLKIF_OP_INDIRECT) &&
(i % SEGS_PER_INDIRECT_FRAME == 0)) {
- unsigned long pfn;
+ unsigned long uninitialized_var(pfn);
if (segments)
kunmap_atomic(segments);
bdev = bdget_disk(disk, 0);
+ if (!bdev) {
+ WARN(1, "Block device %s yanked out from us!\n", disk->disk_name);
+ goto out_mutex;
+ }
if (bdev->bd_openers)
goto out;
out:
bdput(bdev);
+out_mutex:
mutex_unlock(&blkfront_mutex);
}
If unsure, say Y.
+config HW_RANDOM_OMAP3_ROM
+ tristate "OMAP3 ROM Random Number Generator support"
+ depends on HW_RANDOM && ARCH_OMAP3
+ default HW_RANDOM
+ ---help---
+ This driver provides kernel-side support for the Random Number
+ Generator hardware found on OMAP34xx processors.
+
+ To compile this driver as a module, choose M here: the
+ module will be called omap3-rom-rng.
+
+ If unsure, say Y.
+
config HW_RANDOM_OCTEON
tristate "Octeon Random Number Generator support"
depends on HW_RANDOM && CAVIUM_OCTEON_SOC
module will be called tpm-rng.
If unsure, say Y.
+
+config HW_RANDOM_MSM
+ tristate "Qualcomm MSM Random Number Generator support"
+ depends on HW_RANDOM && ARCH_MSM
+ ---help---
+ This driver provides kernel-side support for the Random Number
+ Generator hardware found on Qualcomm MSM SoCs.
+
+	  To compile this driver as a module, choose M here: the
+ module will be called msm-rng.
+
+ If unsure, say Y.
obj-$(CONFIG_HW_RANDOM_VIA) += via-rng.o
obj-$(CONFIG_HW_RANDOM_IXP4XX) += ixp4xx-rng.o
obj-$(CONFIG_HW_RANDOM_OMAP) += omap-rng.o
+obj-$(CONFIG_HW_RANDOM_OMAP3_ROM) += omap3-rom-rng.o
obj-$(CONFIG_HW_RANDOM_PASEMI) += pasemi-rng.o
obj-$(CONFIG_HW_RANDOM_VIRTIO) += virtio-rng.o
obj-$(CONFIG_HW_RANDOM_TX4939) += tx4939-rng.o
obj-$(CONFIG_HW_RANDOM_EXYNOS) += exynos-rng.o
obj-$(CONFIG_HW_RANDOM_TPM) += tpm-rng.o
obj-$(CONFIG_HW_RANDOM_BCM2835) += bcm2835-rng.o
+obj-$(CONFIG_HW_RANDOM_MSM) += msm-rng.o
--- /dev/null
+/*
+ * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/hw_random.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+/* Device specific register offsets */
+#define PRNG_DATA_OUT 0x0000
+#define PRNG_STATUS 0x0004
+#define PRNG_LFSR_CFG 0x0100
+#define PRNG_CONFIG 0x0104
+
+/* Device specific register masks and config values */
+#define PRNG_LFSR_CFG_MASK 0x0000ffff
+#define PRNG_LFSR_CFG_CLOCKS 0x0000dddd
+#define PRNG_CONFIG_HW_ENABLE BIT(1)
+#define PRNG_STATUS_DATA_AVAIL BIT(0)
+
+#define MAX_HW_FIFO_DEPTH 16
+#define MAX_HW_FIFO_SIZE (MAX_HW_FIFO_DEPTH * 4)
+#define WORD_SZ 4
+
+struct msm_rng {
+ void __iomem *base;
+ struct clk *clk;
+ struct hwrng hwrng;
+};
+
+#define to_msm_rng(p) container_of(p, struct msm_rng, hwrng)
+
+static int msm_rng_enable(struct hwrng *hwrng, int enable)
+{
+ struct msm_rng *rng = to_msm_rng(hwrng);
+ u32 val;
+ int ret;
+
+ ret = clk_prepare_enable(rng->clk);
+ if (ret)
+ return ret;
+
+ if (enable) {
+ /* Enable PRNG only if it is not already enabled */
+ val = readl_relaxed(rng->base + PRNG_CONFIG);
+ if (val & PRNG_CONFIG_HW_ENABLE)
+ goto already_enabled;
+
+ val = readl_relaxed(rng->base + PRNG_LFSR_CFG);
+ val &= ~PRNG_LFSR_CFG_MASK;
+ val |= PRNG_LFSR_CFG_CLOCKS;
+ writel(val, rng->base + PRNG_LFSR_CFG);
+
+ val = readl_relaxed(rng->base + PRNG_CONFIG);
+ val |= PRNG_CONFIG_HW_ENABLE;
+ writel(val, rng->base + PRNG_CONFIG);
+ } else {
+ val = readl_relaxed(rng->base + PRNG_CONFIG);
+ val &= ~PRNG_CONFIG_HW_ENABLE;
+ writel(val, rng->base + PRNG_CONFIG);
+ }
+
+already_enabled:
+ clk_disable_unprepare(rng->clk);
+ return 0;
+}
+
+static int msm_rng_read(struct hwrng *hwrng, void *data, size_t max, bool wait)
+{
+ struct msm_rng *rng = to_msm_rng(hwrng);
+ size_t currsize = 0;
+ u32 *retdata = data;
+ size_t maxsize;
+ int ret;
+ u32 val;
+
+ /* calculate max size bytes to transfer back to caller */
+ maxsize = min_t(size_t, MAX_HW_FIFO_SIZE, max);
+
+ /* no room for word data */
+ if (maxsize < WORD_SZ)
+ return 0;
+
+ ret = clk_prepare_enable(rng->clk);
+ if (ret)
+ return ret;
+
+ /* read random data from hardware */
+ do {
+ val = readl_relaxed(rng->base + PRNG_STATUS);
+ if (!(val & PRNG_STATUS_DATA_AVAIL))
+ break;
+
+ val = readl_relaxed(rng->base + PRNG_DATA_OUT);
+ if (!val)
+ break;
+
+ *retdata++ = val;
+ currsize += WORD_SZ;
+
+		/* make sure we stay on a 32-bit boundary */
+ if ((maxsize - currsize) < WORD_SZ)
+ break;
+ } while (currsize < maxsize);
+
+ clk_disable_unprepare(rng->clk);
+
+ return currsize;
+}
+
+static int msm_rng_init(struct hwrng *hwrng)
+{
+ return msm_rng_enable(hwrng, 1);
+}
+
+static void msm_rng_cleanup(struct hwrng *hwrng)
+{
+ msm_rng_enable(hwrng, 0);
+}
+
+static int msm_rng_probe(struct platform_device *pdev)
+{
+ struct resource *res;
+ struct msm_rng *rng;
+ int ret;
+
+ rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL);
+ if (!rng)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, rng);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ rng->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(rng->base))
+ return PTR_ERR(rng->base);
+
+ rng->clk = devm_clk_get(&pdev->dev, "core");
+ if (IS_ERR(rng->clk))
+ return PTR_ERR(rng->clk);
+
+ rng->hwrng.name = KBUILD_MODNAME,
+ rng->hwrng.init = msm_rng_init,
+ rng->hwrng.cleanup = msm_rng_cleanup,
+ rng->hwrng.read = msm_rng_read,
+
+ ret = hwrng_register(&rng->hwrng);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register hwrng\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int msm_rng_remove(struct platform_device *pdev)
+{
+ struct msm_rng *rng = platform_get_drvdata(pdev);
+
+ hwrng_unregister(&rng->hwrng);
+ return 0;
+}
+
+static const struct of_device_id msm_rng_of_match[] = {
+ { .compatible = "qcom,prng", },
+ {}
+};
+MODULE_DEVICE_TABLE(of, msm_rng_of_match);
+
+static struct platform_driver msm_rng_driver = {
+ .probe = msm_rng_probe,
+ .remove = msm_rng_remove,
+ .driver = {
+ .name = KBUILD_MODNAME,
+ .owner = THIS_MODULE,
+ .of_match_table = of_match_ptr(msm_rng_of_match),
+ }
+};
+module_platform_driver(msm_rng_driver);
+
+MODULE_ALIAS("platform:" KBUILD_MODNAME);
+MODULE_AUTHOR("The Linux Foundation");
+MODULE_DESCRIPTION("Qualcomm MSM random number generator driver");
+MODULE_LICENSE("GPL v2");
--- /dev/null
+/*
+ * omap3-rom-rng.c - RNG driver for TI OMAP3 CPU family
+ *
+ * Copyright (C) 2009 Nokia Corporation
+ * Author: Juha Yrjola <juha.yrjola@solidboot.com>
+ *
+ * Copyright (C) 2013 Pali Rohár <pali.rohar@gmail.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/random.h>
+#include <linux/hw_random.h>
+#include <linux/timer.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+
+#define RNG_RESET 0x01
+#define RNG_GEN_PRNG_HW_INIT 0x02
+#define RNG_GEN_HW 0x08
+
+/* param1: ptr, param2: count, param3: flag */
+static u32 (*omap3_rom_rng_call)(u32, u32, u32);
+
+static struct timer_list idle_timer;
+static int rng_idle;
+static struct clk *rng_clk;
+
+static void omap3_rom_rng_idle(unsigned long data)
+{
+ int r;
+
+ r = omap3_rom_rng_call(0, 0, RNG_RESET);
+ if (r != 0) {
+ pr_err("reset failed: %d\n", r);
+ return;
+ }
+ clk_disable_unprepare(rng_clk);
+ rng_idle = 1;
+}
+
+static int omap3_rom_rng_get_random(void *buf, unsigned int count)
+{
+ u32 r;
+ u32 ptr;
+
+ del_timer_sync(&idle_timer);
+ if (rng_idle) {
+ clk_prepare_enable(rng_clk);
+ r = omap3_rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT);
+ if (r != 0) {
+ clk_disable_unprepare(rng_clk);
+ pr_err("HW init failed: %d\n", r);
+ return -EIO;
+ }
+ rng_idle = 0;
+ }
+
+ ptr = virt_to_phys(buf);
+ r = omap3_rom_rng_call(ptr, count, RNG_GEN_HW);
+ mod_timer(&idle_timer, jiffies + msecs_to_jiffies(500));
+ if (r != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static int omap3_rom_rng_data_present(struct hwrng *rng, int wait)
+{
+ return 1;
+}
+
+static int omap3_rom_rng_data_read(struct hwrng *rng, u32 *data)
+{
+ int r;
+
+ r = omap3_rom_rng_get_random(data, 4);
+ if (r < 0)
+ return r;
+ return 4;
+}
+
+static struct hwrng omap3_rom_rng_ops = {
+ .name = "omap3-rom",
+ .data_present = omap3_rom_rng_data_present,
+ .data_read = omap3_rom_rng_data_read,
+};
+
+static int omap3_rom_rng_probe(struct platform_device *pdev)
+{
+ pr_info("initializing\n");
+
+ omap3_rom_rng_call = pdev->dev.platform_data;
+ if (!omap3_rom_rng_call) {
+ pr_err("omap3_rom_rng_call is NULL\n");
+ return -EINVAL;
+ }
+
+ setup_timer(&idle_timer, omap3_rom_rng_idle, 0);
+ rng_clk = clk_get(&pdev->dev, "ick");
+ if (IS_ERR(rng_clk)) {
+ pr_err("unable to get RNG clock\n");
+ return PTR_ERR(rng_clk);
+ }
+
+ /* Leave the RNG in reset state. */
+ clk_prepare_enable(rng_clk);
+ omap3_rom_rng_idle(0);
+
+ return hwrng_register(&omap3_rom_rng_ops);
+}
+
+static int omap3_rom_rng_remove(struct platform_device *pdev)
+{
+ hwrng_unregister(&omap3_rom_rng_ops);
+ clk_disable_unprepare(rng_clk);
+ clk_put(rng_clk);
+ return 0;
+}
+
+static struct platform_driver omap3_rom_rng_driver = {
+ .driver = {
+ .name = "omap3-rom-rng",
+ .owner = THIS_MODULE,
+ },
+ .probe = omap3_rom_rng_probe,
+ .remove = omap3_rom_rng_remove,
+};
+
+module_platform_driver(omap3_rom_rng_driver);
+
+MODULE_ALIAS("platform:omap3-rom-rng");
+MODULE_AUTHOR("Juha Yrjola");
+MODULE_AUTHOR("Pali Rohár <pali.rohar@gmail.com>");
+MODULE_LICENSE("GPL");
#include <linux/hw_random.h>
#include <asm/vio.h>
-#define MODULE_NAME "pseries-rng"
static int pseries_rng_data_read(struct hwrng *rng, u32 *data)
{
};
static struct hwrng pseries_rng = {
- .name = MODULE_NAME,
+ .name = KBUILD_MODNAME,
.data_read = pseries_rng_data_read,
};
MODULE_DEVICE_TABLE(vio, pseries_rng_driver_ids);
static struct vio_driver pseries_rng_driver = {
- .name = MODULE_NAME,
+ .name = KBUILD_MODNAME,
.probe = pseries_rng_probe,
.remove = pseries_rng_remove,
.get_desired_dma = pseries_rng_get_desired_dma,
module_init(mod_init);
module_exit(mod_exit);
-static struct x86_cpu_id via_rng_cpu_id[] = {
+static struct x86_cpu_id __maybe_unused via_rng_cpu_id[] = {
X86_FEATURE_MATCH(X86_FEATURE_XSTORE),
{}
};
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro"),
},
},
+ {
+ .ident = "Dell XPS421",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS L421X"),
+ },
+ },
{ }
};
from within Linux. To compile this driver as a module, choose
M here; the module will be called tpm_tis.
+config TCG_TIS_I2C_ATMEL
+ tristate "TPM Interface Specification 1.2 Interface (I2C - Atmel)"
+ depends on I2C
+ ---help---
+ If you have an Atmel I2C TPM security chip say Yes and it will be
+ accessible from within Linux.
+ To compile this driver as a module, choose M here; the module will
+ be called tpm_tis_i2c_atmel.
+
config TCG_TIS_I2C_INFINEON
tristate "TPM Interface Specification 1.2 Interface (I2C - Infineon)"
depends on I2C
Specification 0.20 say Yes and it will be accessible from within
Linux.
To compile this driver as a module, choose M here; the module
- will be called tpm_tis_i2c_infineon.
+ will be called tpm_i2c_infineon.
+
+config TCG_TIS_I2C_NUVOTON
+ tristate "TPM Interface Specification 1.2 Interface (I2C - Nuvoton)"
+ depends on I2C
+ ---help---
+ If you have a TPM security chip with an I2C interface from
+ Nuvoton Technology Corp. say Yes and it will be accessible
+ from within Linux.
+ To compile this driver as a module, choose M here; the module
+ will be called tpm_i2c_nuvoton.
config TCG_NSC
tristate "National Semiconductor TPM Interface"
as a module, choose M here; the module will be called tpm_ibmvtpm.
config TCG_ST33_I2C
- tristate "STMicroelectronics ST33 I2C TPM"
- depends on I2C
- depends on GPIOLIB
- ---help---
- If you have a TPM security chip from STMicroelectronics working with
- an I2C bus say Yes and it will be accessible from within Linux.
- To compile this driver as a module, choose M here; the module will be
- called tpm_stm_st33_i2c.
+ tristate "STMicroelectronics ST33 I2C TPM"
+ depends on I2C
+ depends on GPIOLIB
+ ---help---
+ If you have a TPM security chip from STMicroelectronics working with
+ an I2C bus say Yes and it will be accessible from within Linux.
+ To compile this driver as a module, choose M here; the module will be
+ called tpm_stm_st33_i2c.
config TCG_XEN
tristate "XEN TPM Interface"
# Makefile for the kernel tpm device drivers.
#
obj-$(CONFIG_TCG_TPM) += tpm.o
+tpm-y := tpm-interface.o
+tpm-$(CONFIG_ACPI) += tpm_ppi.o
+
ifdef CONFIG_ACPI
- obj-$(CONFIG_TCG_TPM) += tpm_bios.o
- tpm_bios-objs += tpm_eventlog.o tpm_acpi.o tpm_ppi.o
+ tpm-y += tpm_eventlog.o tpm_acpi.o
else
ifdef CONFIG_TCG_IBMVTPM
- obj-$(CONFIG_TCG_TPM) += tpm_bios.o
- tpm_bios-objs += tpm_eventlog.o tpm_of.o
+ tpm-y += tpm_eventlog.o tpm_of.o
endif
endif
obj-$(CONFIG_TCG_TIS) += tpm_tis.o
+obj-$(CONFIG_TCG_TIS_I2C_ATMEL) += tpm_i2c_atmel.o
obj-$(CONFIG_TCG_TIS_I2C_INFINEON) += tpm_i2c_infineon.o
+obj-$(CONFIG_TCG_TIS_I2C_NUVOTON) += tpm_i2c_nuvoton.o
obj-$(CONFIG_TCG_NSC) += tpm_nsc.o
obj-$(CONFIG_TCG_ATMEL) += tpm_atmel.o
obj-$(CONFIG_TCG_INFINEON) += tpm_infineon.o
--- /dev/null
+/*
+ * Copyright (C) 2004 IBM Corporation
+ *
+ * Authors:
+ * Leendert van Doorn <leendert@watson.ibm.com>
+ * Dave Safford <safford@watson.ibm.com>
+ * Reiner Sailer <sailer@watson.ibm.com>
+ * Kylene Hall <kjhall@us.ibm.com>
+ *
+ * Maintained by: <tpmdd-devel@lists.sourceforge.net>
+ *
+ * Device driver for TCG/TCPA TPM (trusted platform module).
+ * Specifications at www.trustedcomputinggroup.org
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ *
+ * Note, the TPM chip is not interrupt driven (only polling)
+ * and can have very long timeouts (minutes!). Hence the unusual
+ * calls to msleep.
+ *
+ */
+
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+#include <linux/freezer.h>
+
+#include "tpm.h"
+#include "tpm_eventlog.h"
+
+enum tpm_duration {
+ TPM_SHORT = 0,
+ TPM_MEDIUM = 1,
+ TPM_LONG = 2,
+ TPM_UNDEFINED,
+};
+
+#define TPM_MAX_ORDINAL 243
+#define TSC_MAX_ORDINAL 12
+#define TPM_PROTECTED_COMMAND 0x00
+#define TPM_CONNECTION_COMMAND 0x40
+
+/*
+ * Bug workaround - some TPM's don't flush the most
+ * recently changed pcr on suspend, so force the flush
+ * with an extend to the selected _unused_ non-volatile pcr.
+ */
+static int tpm_suspend_pcr;
+module_param_named(suspend_pcr, tpm_suspend_pcr, uint, 0644);
+MODULE_PARM_DESC(suspend_pcr,
+ "PCR to use for dummy writes to faciltate flush on suspend.");
+
+static LIST_HEAD(tpm_chip_list);
+static DEFINE_SPINLOCK(driver_lock);
+static DECLARE_BITMAP(dev_mask, TPM_NUM_DEVICES);
+
+/*
+ * Array with one entry per ordinal defining the maximum amount
+ * of time the chip could take to return the result. The ordinal
+ * designation of short, medium or long is defined in a table in
+ * TCG Specification TPM Main Part 2 TPM Structures Section 17. The
+ * values of the SHORT, MEDIUM, and LONG durations are retrieved
+ * from the chip during initialization with a call to tpm_get_timeouts.
+ */
+static const u8 tpm_ordinal_duration[TPM_MAX_ORDINAL] = {
+ TPM_UNDEFINED, /* 0 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 5 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 10 */
+ TPM_SHORT,
+ TPM_MEDIUM,
+ TPM_LONG,
+ TPM_LONG,
+ TPM_MEDIUM, /* 15 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_MEDIUM,
+ TPM_LONG,
+ TPM_SHORT, /* 20 */
+ TPM_SHORT,
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_SHORT, /* 25 */
+ TPM_SHORT,
+ TPM_MEDIUM,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_MEDIUM, /* 30 */
+ TPM_LONG,
+ TPM_MEDIUM,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT, /* 35 */
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_MEDIUM, /* 40 */
+ TPM_LONG,
+ TPM_MEDIUM,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT, /* 45 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_LONG,
+ TPM_MEDIUM, /* 50 */
+ TPM_MEDIUM,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 55 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_MEDIUM, /* 60 */
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_MEDIUM, /* 65 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 70 */
+ TPM_SHORT,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 75 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_LONG, /* 80 */
+ TPM_UNDEFINED,
+ TPM_MEDIUM,
+ TPM_LONG,
+ TPM_SHORT,
+ TPM_UNDEFINED, /* 85 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 90 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_UNDEFINED, /* 95 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_MEDIUM, /* 100 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 105 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 110 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT, /* 115 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_LONG, /* 120 */
+ TPM_LONG,
+ TPM_MEDIUM,
+ TPM_UNDEFINED,
+ TPM_SHORT,
+ TPM_SHORT, /* 125 */
+ TPM_SHORT,
+ TPM_LONG,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT, /* 130 */
+ TPM_MEDIUM,
+ TPM_UNDEFINED,
+ TPM_SHORT,
+ TPM_MEDIUM,
+ TPM_UNDEFINED, /* 135 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 140 */
+ TPM_SHORT,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 145 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 150 */
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_UNDEFINED, /* 155 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 160 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 165 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_LONG, /* 170 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 175 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_MEDIUM, /* 180 */
+ TPM_SHORT,
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_MEDIUM, /* 185 */
+ TPM_SHORT,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 190 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 195 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 200 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT,
+ TPM_SHORT, /* 205 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_MEDIUM, /* 210 */
+ TPM_UNDEFINED,
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_MEDIUM,
+ TPM_UNDEFINED, /* 215 */
+ TPM_MEDIUM,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT,
+ TPM_SHORT, /* 220 */
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_SHORT,
+ TPM_UNDEFINED, /* 225 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 230 */
+ TPM_LONG,
+ TPM_MEDIUM,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED, /* 235 */
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_UNDEFINED,
+ TPM_SHORT, /* 240 */
+ TPM_UNDEFINED,
+ TPM_MEDIUM,
+};
+
+static void user_reader_timeout(unsigned long ptr)
+{
+ struct tpm_chip *chip = (struct tpm_chip *) ptr;
+
+ schedule_work(&chip->work);
+}
+
+static void timeout_work(struct work_struct *work)
+{
+ struct tpm_chip *chip = container_of(work, struct tpm_chip, work);
+
+ mutex_lock(&chip->buffer_mutex);
+ atomic_set(&chip->data_pending, 0);
+ memset(chip->data_buffer, 0, TPM_BUFSIZE);
+ mutex_unlock(&chip->buffer_mutex);
+}
+
+/*
+ * Returns max number of jiffies to wait
+ */
+unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip,
+ u32 ordinal)
+{
+ int duration_idx = TPM_UNDEFINED;
+ int duration = 0;
+ u8 category = (ordinal >> 24) & 0xFF;
+
+ if ((category == TPM_PROTECTED_COMMAND && ordinal < TPM_MAX_ORDINAL) ||
+ (category == TPM_CONNECTION_COMMAND && ordinal < TSC_MAX_ORDINAL))
+ duration_idx = tpm_ordinal_duration[ordinal];
+
+ if (duration_idx != TPM_UNDEFINED)
+ duration = chip->vendor.duration[duration_idx];
+ if (duration <= 0)
+ return 2 * 60 * HZ;
+ else
+ return duration;
+}
+EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
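
A worked reading of the lookup (durations hypothetical): ordinal 10 is a protected command inside the table range and maps to TPM_SHORT, while ordinal 0 is TPM_UNDEFINED and gets the two-minute ceiling:

/*
 * tpm_calc_ordinal_duration(chip, 10)
 *	-> chip->vendor.duration[TPM_SHORT], e.g. msecs_to_jiffies(750)
 * tpm_calc_ordinal_duration(chip, 0)
 *	-> 2 * 60 * HZ (TPM_UNDEFINED fallback)
 */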
+
+/*
+ * Internal kernel interface to transmit TPM commands
+ */
+static ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
+ size_t bufsiz)
+{
+ ssize_t rc;
+ u32 count, ordinal;
+ unsigned long stop;
+
+ if (bufsiz > TPM_BUFSIZE)
+ bufsiz = TPM_BUFSIZE;
+
+ count = be32_to_cpu(*((__be32 *) (buf + 2)));
+ ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
+ if (count == 0)
+ return -ENODATA;
+ if (count > bufsiz) {
+ dev_err(chip->dev,
+ "invalid count value %x %zx\n", count, bufsiz);
+ return -E2BIG;
+ }
+
+ mutex_lock(&chip->tpm_mutex);
+
+ rc = chip->vendor.send(chip, (u8 *) buf, count);
+ if (rc < 0) {
+ dev_err(chip->dev,
+ "tpm_transmit: tpm_send: error %zd\n", rc);
+ goto out;
+ }
+
+ if (chip->vendor.irq)
+ goto out_recv;
+
+ stop = jiffies + tpm_calc_ordinal_duration(chip, ordinal);
+ do {
+ u8 status = chip->vendor.status(chip);
+ if ((status & chip->vendor.req_complete_mask) ==
+ chip->vendor.req_complete_val)
+ goto out_recv;
+
+ if (chip->vendor.req_canceled(chip, status)) {
+ dev_err(chip->dev, "Operation Canceled\n");
+ rc = -ECANCELED;
+ goto out;
+ }
+
+ msleep(TPM_TIMEOUT); /* CHECK */
+ rmb();
+ } while (time_before(jiffies, stop));
+
+ chip->vendor.cancel(chip);
+ dev_err(chip->dev, "Operation Timed out\n");
+ rc = -ETIME;
+ goto out;
+
+out_recv:
+ rc = chip->vendor.recv(chip, (u8 *) buf, bufsiz);
+ if (rc < 0)
+ dev_err(chip->dev,
+ "tpm_transmit: tpm_recv: error %zd\n", rc);
+out:
+ mutex_unlock(&chip->tpm_mutex);
+ return rc;
+}
+
+#define TPM_DIGEST_SIZE 20
+#define TPM_RET_CODE_IDX 6
+
+enum tpm_capabilities {
+ TPM_CAP_FLAG = cpu_to_be32(4),
+ TPM_CAP_PROP = cpu_to_be32(5),
+ CAP_VERSION_1_1 = cpu_to_be32(0x06),
+ CAP_VERSION_1_2 = cpu_to_be32(0x1A)
+};
+
+enum tpm_sub_capabilities {
+ TPM_CAP_PROP_PCR = cpu_to_be32(0x101),
+ TPM_CAP_PROP_MANUFACTURER = cpu_to_be32(0x103),
+ TPM_CAP_FLAG_PERM = cpu_to_be32(0x108),
+ TPM_CAP_FLAG_VOL = cpu_to_be32(0x109),
+ TPM_CAP_PROP_OWNER = cpu_to_be32(0x111),
+ TPM_CAP_PROP_TIS_TIMEOUT = cpu_to_be32(0x115),
+ TPM_CAP_PROP_TIS_DURATION = cpu_to_be32(0x120),
+
+};
+
+static ssize_t transmit_cmd(struct tpm_chip *chip, struct tpm_cmd_t *cmd,
+ int len, const char *desc)
+{
+ int err;
+
+ len = tpm_transmit(chip, (u8 *) cmd, len);
+ if (len < 0)
+ return len;
+ else if (len < TPM_HEADER_SIZE)
+ return -EFAULT;
+
+ err = be32_to_cpu(cmd->header.out.return_code);
+ if (err != 0 && desc)
+ dev_err(chip->dev, "A TPM error (%d) occurred %s\n", err, desc);
+
+ return err;
+}
+
+#define TPM_INTERNAL_RESULT_SIZE 200
+#define TPM_TAG_RQU_COMMAND cpu_to_be16(193)
+#define TPM_ORD_GET_CAP cpu_to_be32(101)
+#define TPM_ORD_GET_RANDOM cpu_to_be32(70)
+
+static const struct tpm_input_header tpm_getcap_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(22),
+ .ordinal = TPM_ORD_GET_CAP
+};
+
+ssize_t tpm_getcap(struct device *dev, __be32 subcap_id, cap_t *cap,
+ const char *desc)
+{
+ struct tpm_cmd_t tpm_cmd;
+ int rc;
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ tpm_cmd.header.in = tpm_getcap_header;
+ if (subcap_id == CAP_VERSION_1_1 || subcap_id == CAP_VERSION_1_2) {
+ tpm_cmd.params.getcap_in.cap = subcap_id;
+		/* subcap field not necessary */
+ tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(0);
+ tpm_cmd.header.in.length -= cpu_to_be32(sizeof(__be32));
+ } else {
+ if (subcap_id == TPM_CAP_FLAG_PERM ||
+ subcap_id == TPM_CAP_FLAG_VOL)
+ tpm_cmd.params.getcap_in.cap = TPM_CAP_FLAG;
+ else
+ tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
+ tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ tpm_cmd.params.getcap_in.subcap = subcap_id;
+ }
+ rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, desc);
+ if (!rc)
+ *cap = tpm_cmd.params.getcap_out.cap;
+ return rc;
+}
+
+void tpm_gen_interrupt(struct tpm_chip *chip)
+{
+ struct tpm_cmd_t tpm_cmd;
+ ssize_t rc;
+
+ tpm_cmd.header.in = tpm_getcap_header;
+ tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
+ tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
+
+ rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
+ "attempting to determine the timeouts");
+}
+EXPORT_SYMBOL_GPL(tpm_gen_interrupt);
+
+#define TPM_ORD_STARTUP cpu_to_be32(153)
+#define TPM_ST_CLEAR cpu_to_be16(1)
+#define TPM_ST_STATE cpu_to_be16(2)
+#define TPM_ST_DEACTIVATED cpu_to_be16(3)
+static const struct tpm_input_header tpm_startup_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(12),
+ .ordinal = TPM_ORD_STARTUP
+};
+
+static int tpm_startup(struct tpm_chip *chip, __be16 startup_type)
+{
+ struct tpm_cmd_t start_cmd;
+ start_cmd.header.in = tpm_startup_header;
+ start_cmd.params.startup_in.startup_type = startup_type;
+ return transmit_cmd(chip, &start_cmd, TPM_INTERNAL_RESULT_SIZE,
+ "attempting to start the TPM");
+}
+
+int tpm_get_timeouts(struct tpm_chip *chip)
+{
+ struct tpm_cmd_t tpm_cmd;
+ struct timeout_t *timeout_cap;
+ struct duration_t *duration_cap;
+ ssize_t rc;
+ u32 timeout;
+ unsigned int scale = 1;
+
+ tpm_cmd.header.in = tpm_getcap_header;
+ tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
+ tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
+ rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, NULL);
+
+ if (rc == TPM_ERR_INVALID_POSTINIT) {
+		/* The TPM is not started; we are the first to talk to it.
+		   Execute a startup command. */
+		dev_info(chip->dev, "Issuing TPM_STARTUP\n");
+ if (tpm_startup(chip, TPM_ST_CLEAR))
+ return rc;
+
+ tpm_cmd.header.in = tpm_getcap_header;
+ tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
+ tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
+ rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
+ NULL);
+ }
+ if (rc) {
+ dev_err(chip->dev,
+ "A TPM error (%zd) occurred attempting to determine the timeouts\n",
+ rc);
+ goto duration;
+ }
+
+ if (be32_to_cpu(tpm_cmd.header.out.return_code) != 0 ||
+ be32_to_cpu(tpm_cmd.header.out.length)
+ != sizeof(tpm_cmd.header.out) + sizeof(u32) + 4 * sizeof(u32))
+ return -EINVAL;
+
+ timeout_cap = &tpm_cmd.params.getcap_out.cap.timeout;
+ /* Don't overwrite default if value is 0 */
+ timeout = be32_to_cpu(timeout_cap->a);
+ if (timeout && timeout < 1000) {
+		/* timeouts in msec rather than usec */
+ scale = 1000;
+ chip->vendor.timeout_adjusted = true;
+ }
+ if (timeout)
+ chip->vendor.timeout_a = usecs_to_jiffies(timeout * scale);
+ timeout = be32_to_cpu(timeout_cap->b);
+ if (timeout)
+ chip->vendor.timeout_b = usecs_to_jiffies(timeout * scale);
+ timeout = be32_to_cpu(timeout_cap->c);
+ if (timeout)
+ chip->vendor.timeout_c = usecs_to_jiffies(timeout * scale);
+ timeout = be32_to_cpu(timeout_cap->d);
+ if (timeout)
+ chip->vendor.timeout_d = usecs_to_jiffies(timeout * scale);
+
+duration:
+ tpm_cmd.header.in = tpm_getcap_header;
+ tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
+ tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
+ tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_DURATION;
+
+ rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
+ "attempting to determine the durations");
+ if (rc)
+ return rc;
+
+ if (be32_to_cpu(tpm_cmd.header.out.return_code) != 0 ||
+ be32_to_cpu(tpm_cmd.header.out.length)
+ != sizeof(tpm_cmd.header.out) + sizeof(u32) + 3 * sizeof(u32))
+ return -EINVAL;
+
+ duration_cap = &tpm_cmd.params.getcap_out.cap.duration;
+ chip->vendor.duration[TPM_SHORT] =
+ usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_short));
+ chip->vendor.duration[TPM_MEDIUM] =
+ usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_medium));
+ chip->vendor.duration[TPM_LONG] =
+ usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_long));
+
+ /* The Broadcom BCM0102 chipset in a Dell Latitude D820 gets the above
+ * value wrong and apparently reports msecs rather than usecs. So we
+ * fix up the resulting too-small TPM_SHORT value to make things work.
+ * We also scale the TPM_MEDIUM and -_LONG values by 1000.
+ */
+ if (chip->vendor.duration[TPM_SHORT] < (HZ / 100)) {
+ chip->vendor.duration[TPM_SHORT] = HZ;
+ chip->vendor.duration[TPM_MEDIUM] *= 1000;
+ chip->vendor.duration[TPM_LONG] *= 1000;
+ chip->vendor.duration_adjusted = true;
+ dev_info(chip->dev, "Adjusting TPM timeout parameters.");
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tpm_get_timeouts);
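
The msec-vs-usec heuristic deserves a concrete reading (reported values hypothetical): any nonzero timeout below 1000 is assumed to be a millisecond figure from a noncompliant chip and is scaled by 1000:

/*
 * timeout_cap->a == 750    -> treated as 750 ms:
 *	timeout_a = usecs_to_jiffies(750 * 1000)
 * timeout_cap->a == 750000 -> taken as usecs already:
 *	timeout_a = usecs_to_jiffies(750000)
 * timeout_cap->a == 0      -> the default timeout_a is left untouched
 */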
+
+#define TPM_ORD_CONTINUE_SELFTEST 83
+#define CONTINUE_SELFTEST_RESULT_SIZE 10
+
+static struct tpm_input_header continue_selftest_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(10),
+ .ordinal = cpu_to_be32(TPM_ORD_CONTINUE_SELFTEST),
+};
+
+/**
+ * tpm_continue_selftest -- run TPM's selftest
+ * @chip: TPM chip to use
+ *
+ * Returns 0 on success, < 0 in case of fatal error or a value > 0 representing
+ * a TPM error code.
+ */
+static int tpm_continue_selftest(struct tpm_chip *chip)
+{
+ int rc;
+ struct tpm_cmd_t cmd;
+
+ cmd.header.in = continue_selftest_header;
+ rc = transmit_cmd(chip, &cmd, CONTINUE_SELFTEST_RESULT_SIZE,
+ "continue selftest");
+ return rc;
+}
+
+ssize_t tpm_show_enabled(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ cap_t cap;
+ ssize_t rc;
+
+ rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
+ "attempting to determine the permanent enabled state");
+ if (rc)
+ return 0;
+
+ rc = sprintf(buf, "%d\n", !cap.perm_flags.disable);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_show_enabled);
+
+ssize_t tpm_show_active(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ cap_t cap;
+ ssize_t rc;
+
+ rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
+ "attempting to determine the permanent active state");
+ if (rc)
+ return 0;
+
+ rc = sprintf(buf, "%d\n", !cap.perm_flags.deactivated);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_show_active);
+
+ssize_t tpm_show_owned(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ cap_t cap;
+ ssize_t rc;
+
+ rc = tpm_getcap(dev, TPM_CAP_PROP_OWNER, &cap,
+ "attempting to determine the owner state");
+ if (rc)
+ return 0;
+
+ rc = sprintf(buf, "%d\n", cap.owned);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_show_owned);
+
+ssize_t tpm_show_temp_deactivated(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ cap_t cap;
+ ssize_t rc;
+
+ rc = tpm_getcap(dev, TPM_CAP_FLAG_VOL, &cap,
+ "attempting to determine the temporary state");
+ if (rc)
+ return 0;
+
+ rc = sprintf(buf, "%d\n", cap.stclear_flags.deactivated);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_show_temp_deactivated);
+
+/*
+ * tpm_chip_find_get - return tpm_chip for given chip number
+ */
+static struct tpm_chip *tpm_chip_find_get(int chip_num)
+{
+ struct tpm_chip *pos, *chip = NULL;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(pos, &tpm_chip_list, list) {
+ if (chip_num != TPM_ANY_NUM && chip_num != pos->dev_num)
+ continue;
+
+ if (try_module_get(pos->dev->driver->owner)) {
+ chip = pos;
+ break;
+ }
+ }
+ rcu_read_unlock();
+ return chip;
+}
+
+#define TPM_ORDINAL_PCRREAD cpu_to_be32(21)
+#define READ_PCR_RESULT_SIZE 30
+static struct tpm_input_header pcrread_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(14),
+ .ordinal = TPM_ORDINAL_PCRREAD
+};
+
+static int __tpm_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
+{
+ int rc;
+ struct tpm_cmd_t cmd;
+
+ cmd.header.in = pcrread_header;
+ cmd.params.pcrread_in.pcr_idx = cpu_to_be32(pcr_idx);
+ rc = transmit_cmd(chip, &cmd, READ_PCR_RESULT_SIZE,
+ "attempting to read a pcr value");
+
+ if (rc == 0)
+ memcpy(res_buf, cmd.params.pcrread_out.pcr_result,
+ TPM_DIGEST_SIZE);
+ return rc;
+}
+
+/**
+ * tpm_pcr_read - read a pcr value
+ * @chip_num: tpm idx # or ANY
+ * @pcr_idx: pcr idx to retrieve
+ * @res_buf: TPM_PCR value
+ * size of res_buf is 20 bytes (or NULL if you don't care)
+ *
+ * The TPM driver should be built-in, but for whatever reason it
+ * isn't, protect against the chip disappearing, by incrementing
+ * the module usage count.
+ */
+int tpm_pcr_read(u32 chip_num, int pcr_idx, u8 *res_buf)
+{
+ struct tpm_chip *chip;
+ int rc;
+
+ chip = tpm_chip_find_get(chip_num);
+ if (chip == NULL)
+ return -ENODEV;
+ rc = __tpm_pcr_read(chip, pcr_idx, res_buf);
+ tpm_chip_put(chip);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_pcr_read);
+
+/**
+ * tpm_pcr_extend - extend pcr value with hash
+ * @chip_num: tpm idx # or AN&
+ * @pcr_idx: pcr idx to extend
+ * @hash: hash value used to extend pcr value
+ *
+ * The TPM driver should be built-in, but for whatever reason it
+ * isn't, protect against the chip disappearing, by incrementing
+ * the module usage count.
+ */
+#define TPM_ORD_PCR_EXTEND cpu_to_be32(20)
+#define EXTEND_PCR_RESULT_SIZE 34
+static struct tpm_input_header pcrextend_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(34),
+ .ordinal = TPM_ORD_PCR_EXTEND
+};
+
+int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
+{
+ struct tpm_cmd_t cmd;
+ int rc;
+ struct tpm_chip *chip;
+
+ chip = tpm_chip_find_get(chip_num);
+ if (chip == NULL)
+ return -ENODEV;
+
+ cmd.header.in = pcrextend_header;
+ cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx);
+ memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE);
+ rc = transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
+ "attempting extend a PCR value");
+
+ tpm_chip_put(chip);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_pcr_extend);
+
+/**
+ * tpm_do_selftest - have the TPM continue its selftest and wait until it
+ * can receive further commands
+ * @chip: TPM chip to use
+ *
+ * Returns 0 on success, < 0 in case of fatal error or a value > 0 representing
+ * a TPM error code.
+ */
+int tpm_do_selftest(struct tpm_chip *chip)
+{
+ int rc;
+ unsigned int loops;
+ unsigned int delay_msec = 100;
+ unsigned long duration;
+ struct tpm_cmd_t cmd;
+
+ duration = tpm_calc_ordinal_duration(chip, TPM_ORD_CONTINUE_SELFTEST);
+
+ loops = jiffies_to_msecs(duration) / delay_msec;
+
+ rc = tpm_continue_selftest(chip);
+ /* This may fail if there was no TPM driver during a suspend/resume
+ * cycle; some may return 10 (BAD_ORDINAL), others 28 (FAILEDSELFTEST)
+ */
+ if (rc)
+ return rc;
+
+ do {
+ /* Attempt to read a PCR value */
+ cmd.header.in = pcrread_header;
+ cmd.params.pcrread_in.pcr_idx = cpu_to_be32(0);
+ rc = tpm_transmit(chip, (u8 *) &cmd, READ_PCR_RESULT_SIZE);
+ /* Some buggy TPMs will not respond to tpm_tis_ready() for
+		 * around 300ms while the self test is ongoing; keep trying
+ * until the self test duration expires. */
+ if (rc == -ETIME) {
+			dev_info(chip->dev, HW_ERR "TPM command timed out during continue self test\n");
+ msleep(delay_msec);
+ continue;
+ }
+
+ if (rc < TPM_HEADER_SIZE)
+ return -EFAULT;
+
+ rc = be32_to_cpu(cmd.header.out.return_code);
+ if (rc == TPM_ERR_DISABLED || rc == TPM_ERR_DEACTIVATED) {
+ dev_info(chip->dev,
+ "TPM is disabled/deactivated (0x%X)\n", rc);
+ /* TPM is disabled and/or deactivated; driver can
+ * proceed and TPM does handle commands for
+ * suspend/resume correctly
+ */
+ return 0;
+ }
+ if (rc != TPM_WARN_DOING_SELFTEST)
+ return rc;
+ msleep(delay_msec);
+ } while (--loops > 0);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_do_selftest);
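
For scale (duration hypothetical): ordinal 83 (ContinueSelfTest) maps to TPM_LONG in the duration table, so a 2 s long duration gives loops = jiffies_to_msecs(duration) / delay_msec = 2000 / 100 = 20, i.e. up to twenty PCR-read probes 100 ms apart before the self-test is declared failed.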
+
+int tpm_send(u32 chip_num, void *cmd, size_t buflen)
+{
+ struct tpm_chip *chip;
+ int rc;
+
+ chip = tpm_chip_find_get(chip_num);
+ if (chip == NULL)
+ return -ENODEV;
+
+ rc = transmit_cmd(chip, cmd, buflen, "attempting tpm_cmd");
+
+ tpm_chip_put(chip);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_send);
+
+ssize_t tpm_show_pcrs(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ cap_t cap;
+ u8 digest[TPM_DIGEST_SIZE];
+ ssize_t rc;
+ int i, j, num_pcrs;
+ char *str = buf;
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ rc = tpm_getcap(dev, TPM_CAP_PROP_PCR, &cap,
+ "attempting to determine the number of PCRS");
+ if (rc)
+ return 0;
+
+ num_pcrs = be32_to_cpu(cap.num_pcrs);
+ for (i = 0; i < num_pcrs; i++) {
+ rc = __tpm_pcr_read(chip, i, digest);
+ if (rc)
+ break;
+ str += sprintf(str, "PCR-%02d: ", i);
+ for (j = 0; j < TPM_DIGEST_SIZE; j++)
+ str += sprintf(str, "%02X ", digest[j]);
+ str += sprintf(str, "\n");
+ }
+ return str - buf;
+}
+EXPORT_SYMBOL_GPL(tpm_show_pcrs);
+
+#define READ_PUBEK_RESULT_SIZE 314
+#define TPM_ORD_READPUBEK cpu_to_be32(124)
+static struct tpm_input_header tpm_readpubek_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(30),
+ .ordinal = TPM_ORD_READPUBEK
+};
+
+ssize_t tpm_show_pubek(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 *data;
+ struct tpm_cmd_t tpm_cmd;
+ ssize_t err;
+ int i, rc;
+ char *str = buf;
+
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ tpm_cmd.header.in = tpm_readpubek_header;
+ err = transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE,
+ "attempting to read the PUBEK");
+ if (err)
+ goto out;
+
+ /*
+ ignore header 10 bytes
+ algorithm 32 bits (1 == RSA )
+ encscheme 16 bits
+ sigscheme 16 bits
+ parameters (RSA 12->bytes: keybit, #primes, expbit)
+ keylenbytes 32 bits
+ 256 byte modulus
+ ignore checksum 20 bytes
+ */
+ data = tpm_cmd.params.readpubek_out_buffer;
+ str +=
+ sprintf(str,
+ "Algorithm: %02X %02X %02X %02X\n"
+ "Encscheme: %02X %02X\n"
+ "Sigscheme: %02X %02X\n"
+ "Parameters: %02X %02X %02X %02X "
+ "%02X %02X %02X %02X "
+ "%02X %02X %02X %02X\n"
+ "Modulus length: %d\n"
+ "Modulus:\n",
+ data[0], data[1], data[2], data[3],
+ data[4], data[5],
+ data[6], data[7],
+ data[12], data[13], data[14], data[15],
+ data[16], data[17], data[18], data[19],
+ data[20], data[21], data[22], data[23],
+ be32_to_cpu(*((__be32 *) (data + 24))));
+
+ for (i = 0; i < 256; i++) {
+ str += sprintf(str, "%02X ", data[i + 28]);
+ if ((i + 1) % 16 == 0)
+ str += sprintf(str, "\n");
+ }
+out:
+ rc = str - buf;
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_show_pubek);
+
+
+ssize_t tpm_show_caps(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ cap_t cap;
+ ssize_t rc;
+ char *str = buf;
+
+ rc = tpm_getcap(dev, TPM_CAP_PROP_MANUFACTURER, &cap,
+ "attempting to determine the manufacturer");
+ if (rc)
+ return 0;
+ str += sprintf(str, "Manufacturer: 0x%x\n",
+ be32_to_cpu(cap.manufacturer_id));
+
+ /* Try to get a TPM version 1.2 TPM_CAP_VERSION_INFO */
+ rc = tpm_getcap(dev, CAP_VERSION_1_2, &cap,
+ "attempting to determine the 1.2 version");
+ if (!rc) {
+ str += sprintf(str,
+ "TCG version: %d.%d\nFirmware version: %d.%d\n",
+ cap.tpm_version_1_2.Major,
+ cap.tpm_version_1_2.Minor,
+ cap.tpm_version_1_2.revMajor,
+ cap.tpm_version_1_2.revMinor);
+ } else {
+ /* Otherwise just use TPM_STRUCT_VER */
+ rc = tpm_getcap(dev, CAP_VERSION_1_1, &cap,
+ "attempting to determine the 1.1 version");
+ if (rc)
+ return 0;
+ str += sprintf(str,
+ "TCG version: %d.%d\nFirmware version: %d.%d\n",
+ cap.tpm_version.Major,
+ cap.tpm_version.Minor,
+ cap.tpm_version.revMajor,
+ cap.tpm_version.revMinor);
+ }
+
+ return str - buf;
+}
+EXPORT_SYMBOL_GPL(tpm_show_caps);
+
+ssize_t tpm_show_durations(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip->vendor.duration[TPM_LONG] == 0)
+ return 0;
+
+ return sprintf(buf, "%d %d %d [%s]\n",
+ jiffies_to_usecs(chip->vendor.duration[TPM_SHORT]),
+ jiffies_to_usecs(chip->vendor.duration[TPM_MEDIUM]),
+ jiffies_to_usecs(chip->vendor.duration[TPM_LONG]),
+ chip->vendor.duration_adjusted
+ ? "adjusted" : "original");
+}
+EXPORT_SYMBOL_GPL(tpm_show_durations);
+
+ssize_t tpm_show_timeouts(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ return sprintf(buf, "%d %d %d %d [%s]\n",
+ jiffies_to_usecs(chip->vendor.timeout_a),
+ jiffies_to_usecs(chip->vendor.timeout_b),
+ jiffies_to_usecs(chip->vendor.timeout_c),
+ jiffies_to_usecs(chip->vendor.timeout_d),
+ chip->vendor.timeout_adjusted
+ ? "adjusted" : "original");
+}
+EXPORT_SYMBOL_GPL(tpm_show_timeouts);
+
+ssize_t tpm_store_cancel(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+ if (chip == NULL)
+ return 0;
+
+ chip->vendor.cancel(chip);
+ return count;
+}
+EXPORT_SYMBOL_GPL(tpm_store_cancel);
+
+static bool wait_for_tpm_stat_cond(struct tpm_chip *chip, u8 mask,
+ bool check_cancel, bool *canceled)
+{
+ u8 status = chip->vendor.status(chip);
+
+ *canceled = false;
+ if ((status & mask) == mask)
+ return true;
+ if (check_cancel && chip->vendor.req_canceled(chip, status)) {
+ *canceled = true;
+ return true;
+ }
+ return false;
+}
+
+int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
+ wait_queue_head_t *queue, bool check_cancel)
+{
+ unsigned long stop;
+ long rc;
+ u8 status;
+ bool canceled = false;
+
+ /* check current status */
+ status = chip->vendor.status(chip);
+ if ((status & mask) == mask)
+ return 0;
+
+ stop = jiffies + timeout;
+
+ if (chip->vendor.irq) {
+again:
+ timeout = stop - jiffies;
+ if ((long)timeout <= 0)
+ return -ETIME;
+ rc = wait_event_interruptible_timeout(*queue,
+ wait_for_tpm_stat_cond(chip, mask, check_cancel,
+ &canceled),
+ timeout);
+ if (rc > 0) {
+ if (canceled)
+ return -ECANCELED;
+ return 0;
+ }
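+		/* The wait was interrupted.  If it was only the freezer
+		 * faking a signal, clear it and retry for the time that
+		 * remains; a real signal falls through to -ETIME below.
+		 */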
+ if (rc == -ERESTARTSYS && freezing(current)) {
+ clear_thread_flag(TIF_SIGPENDING);
+ goto again;
+ }
+ } else {
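+		/* No IRQ available: poll the status register at
+		 * TPM_TIMEOUT intervals until the deadline passes.
+		 */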
+ do {
+ msleep(TPM_TIMEOUT);
+ status = chip->vendor.status(chip);
+ if ((status & mask) == mask)
+ return 0;
+ } while (time_before(jiffies, stop));
+ }
+ return -ETIME;
+}
+EXPORT_SYMBOL_GPL(wait_for_tpm_stat);
+/*
+ * Device file system interface to the TPM
+ *
+ * Only one process may open the chip at a time; this is enforced
+ * atomically via test_and_set_bit() on the is_open flag.
+ */
+int tpm_open(struct inode *inode, struct file *file)
+{
+ struct miscdevice *misc = file->private_data;
+ struct tpm_chip *chip = container_of(misc, struct tpm_chip,
+ vendor.miscdev);
+
+ if (test_and_set_bit(0, &chip->is_open)) {
+ dev_dbg(chip->dev, "Another process owns this TPM\n");
+ return -EBUSY;
+ }
+
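+	/* Allocate the per-open command/response buffer; it is zeroed
+	 * and freed again in tpm_release().
+	 */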
+ chip->data_buffer = kzalloc(TPM_BUFSIZE, GFP_KERNEL);
+ if (chip->data_buffer == NULL) {
+ clear_bit(0, &chip->is_open);
+ return -ENOMEM;
+ }
+
+ atomic_set(&chip->data_pending, 0);
+
+ file->private_data = chip;
+ get_device(chip->dev);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tpm_open);
+
+/*
+ * Called on file close
+ */
+int tpm_release(struct inode *inode, struct file *file)
+{
+ struct tpm_chip *chip = file->private_data;
+
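+	/* Stop the reader timeout and its work item, then scrub the
+	 * buffer before freeing so no response data lingers in memory.
+	 */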
+ del_singleshot_timer_sync(&chip->user_read_timer);
+ flush_work(&chip->work);
+ file->private_data = NULL;
+ atomic_set(&chip->data_pending, 0);
+ kzfree(chip->data_buffer);
+ clear_bit(0, &chip->is_open);
+ put_device(chip->dev);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tpm_release);
+
+ssize_t tpm_write(struct file *file, const char __user *buf,
+ size_t size, loff_t *off)
+{
+ struct tpm_chip *chip = file->private_data;
+ size_t in_size = size;
+ ssize_t out_size;
+
+	/* Cannot perform a write until the read has cleared, either via
+	 * tpm_read or a user_read_timer timeout.  This also prevents
+	 * split buffered writes from blocking here.
+	 */
+ if (atomic_read(&chip->data_pending) != 0)
+ return -EBUSY;
+
+ if (in_size > TPM_BUFSIZE)
+ return -E2BIG;
+
+ mutex_lock(&chip->buffer_mutex);
+
+ if (copy_from_user
+ (chip->data_buffer, (void __user *) buf, in_size)) {
+ mutex_unlock(&chip->buffer_mutex);
+ return -EFAULT;
+ }
+
+ /* atomic tpm command send and result receive */
+ out_size = tpm_transmit(chip, chip->data_buffer, TPM_BUFSIZE);
+ if (out_size < 0) {
+ mutex_unlock(&chip->buffer_mutex);
+ return out_size;
+ }
+
+ atomic_set(&chip->data_pending, out_size);
+ mutex_unlock(&chip->buffer_mutex);
+
+ /* Set a timeout by which the reader must come claim the result */
+ mod_timer(&chip->user_read_timer, jiffies + (60 * HZ));
+
+ return in_size;
+}
+EXPORT_SYMBOL_GPL(tpm_write);
+
+ssize_t tpm_read(struct file *file, char __user *buf,
+ size_t size, loff_t *off)
+{
+ struct tpm_chip *chip = file->private_data;
+ ssize_t ret_size;
+ int rc;
+
+ del_singleshot_timer_sync(&chip->user_read_timer);
+ flush_work(&chip->work);
+ ret_size = atomic_read(&chip->data_pending);
+ if (ret_size > 0) { /* relay data */
+ ssize_t orig_ret_size = ret_size;
+ if (size < ret_size)
+ ret_size = size;
+
+ mutex_lock(&chip->buffer_mutex);
+ rc = copy_to_user(buf, chip->data_buffer, ret_size);
+ memset(chip->data_buffer, 0, orig_ret_size);
+ if (rc)
+ ret_size = -EFAULT;
+
+ mutex_unlock(&chip->buffer_mutex);
+ }
+
+ atomic_set(&chip->data_pending, 0);
+
+ return ret_size;
+}
+EXPORT_SYMBOL_GPL(tpm_read);
+
+void tpm_remove_hardware(struct device *dev)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL) {
+ dev_err(dev, "No device data found\n");
+ return;
+ }
+
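+	/* Unlink the chip from the global list and wait out existing RCU
+	 * readers before tearing the user interfaces down.
+	 */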
+ spin_lock(&driver_lock);
+ list_del_rcu(&chip->list);
+ spin_unlock(&driver_lock);
+ synchronize_rcu();
+
+ misc_deregister(&chip->vendor.miscdev);
+ sysfs_remove_group(&dev->kobj, chip->vendor.attr_group);
+ tpm_remove_ppi(&dev->kobj);
+ tpm_bios_log_teardown(chip->bios_dir);
+
+ /* write it this way to be explicit (chip->dev == dev) */
+ put_device(chip->dev);
+}
+EXPORT_SYMBOL_GPL(tpm_remove_hardware);
+
+#define TPM_ORD_SAVESTATE cpu_to_be32(152)
+#define SAVESTATE_RESULT_SIZE 10
+
+static struct tpm_input_header savestate_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(10),
+ .ordinal = TPM_ORD_SAVESTATE
+};
+
+/*
+ * We are about to suspend. Save the TPM state
+ * so that it can be restored.
+ */
+int tpm_pm_suspend(struct device *dev)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+ struct tpm_cmd_t cmd;
+ int rc, try;
+
+ u8 dummy_hash[TPM_DIGEST_SIZE] = { 0 };
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ /* for buggy tpm, flush pcrs with extend to selected dummy */
+ if (tpm_suspend_pcr) {
+ cmd.header.in = pcrextend_header;
+ cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr);
+ memcpy(cmd.params.pcrextend_in.hash, dummy_hash,
+ TPM_DIGEST_SIZE);
+ rc = transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
+ "extending dummy pcr before suspend");
+ }
+
+ /* now do the actual savestate */
+ for (try = 0; try < TPM_RETRY; try++) {
+ cmd.header.in = savestate_header;
+ rc = transmit_cmd(chip, &cmd, SAVESTATE_RESULT_SIZE, NULL);
+
+ /*
+ * If the TPM indicates that it is too busy to respond to
+ * this command then retry before giving up. It can take
+ * several seconds for this TPM to be ready.
+ *
+ * This can happen if the TPM has already been sent the
+ * SaveState command before the driver has loaded. TCG 1.2
+ * specification states that any communication after SaveState
+ * may cause the TPM to invalidate previously saved state.
+ */
+ if (rc != TPM_WARN_RETRY)
+ break;
+ msleep(TPM_TIMEOUT_RETRY);
+ }
+
+ if (rc)
+ dev_err(chip->dev,
+ "Error (%d) sending savestate before suspend\n", rc);
+ else if (try > 0)
+ dev_warn(chip->dev, "TPM savestate took %dms\n",
+ try * TPM_TIMEOUT_RETRY);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_pm_suspend);
+
+/*
+ * Resume from a power save. The BIOS already restored
+ * the TPM state.
+ */
+int tpm_pm_resume(struct device *dev)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tpm_pm_resume);
+
+#define TPM_GETRANDOM_RESULT_SIZE 18
+static struct tpm_input_header tpm_getrandom_header = {
+ .tag = TPM_TAG_RQU_COMMAND,
+ .length = cpu_to_be32(14),
+ .ordinal = TPM_ORD_GET_RANDOM
+};
+
+/**
+ * tpm_get_random() - Get random bytes from the tpm's RNG
+ * @chip_num: A specific chip number for the request or TPM_ANY_NUM
+ * @out: destination buffer for the random bytes
+ * @max: the max number of bytes to write to @out
+ *
+ * Returns < 0 on error and the number of bytes read on success
+ */
+int tpm_get_random(u32 chip_num, u8 *out, size_t max)
+{
+ struct tpm_chip *chip;
+ struct tpm_cmd_t tpm_cmd;
+ u32 recd, num_bytes = min_t(u32, max, TPM_MAX_RNG_DATA);
+ int err, total = 0, retries = 5;
+ u8 *dest = out;
+
+ chip = tpm_chip_find_get(chip_num);
+ if (chip == NULL)
+ return -ENODEV;
+
+	if (!out || !num_bytes || max > TPM_MAX_RNG_DATA) {
+		tpm_chip_put(chip);
+		return -EINVAL;
+	}
+
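+	/* The TPM may hand back fewer bytes per command than requested,
+	 * so loop, advancing the destination, until the caller's buffer
+	 * is satisfied or the retries run out.
+	 */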
+ do {
+ tpm_cmd.header.in = tpm_getrandom_header;
+ tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes);
+
+ err = transmit_cmd(chip, &tpm_cmd,
+ TPM_GETRANDOM_RESULT_SIZE + num_bytes,
+ "attempting get random");
+ if (err)
+ break;
+
+ recd = be32_to_cpu(tpm_cmd.params.getrandom_out.rng_data_len);
+ memcpy(dest, tpm_cmd.params.getrandom_out.rng_data, recd);
+
+ dest += recd;
+ total += recd;
+ num_bytes -= recd;
+ } while (retries-- && total < max);
+
+	/* Release the reference taken by tpm_chip_find_get(). */
+	tpm_chip_put(chip);
+	return total ? total : -EIO;
+}
+EXPORT_SYMBOL_GPL(tpm_get_random);
+
+/* If the vendor provided a release function, call it too. */
+
+void tpm_dev_vendor_release(struct tpm_chip *chip)
+{
+ if (!chip)
+ return;
+
+ if (chip->vendor.release)
+ chip->vendor.release(chip->dev);
+
+ clear_bit(chip->dev_num, dev_mask);
+}
+EXPORT_SYMBOL_GPL(tpm_dev_vendor_release);
+
+
+/*
+ * Once all references to the platform device have been dropped,
+ * release all allocated structures.
+ */
+void tpm_dev_release(struct device *dev)
+{
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ if (!chip)
+ return;
+
+ tpm_dev_vendor_release(chip);
+
+ chip->release(dev);
+ kfree(chip);
+}
+EXPORT_SYMBOL_GPL(tpm_dev_release);
+
+/*
+ * Called from the tpm_<specific>.c probe function only for devices
+ * the driver has determined it should claim. Prior to calling this
+ * function the specific probe function has called pci_enable_device().
+ * Upon an errant exit from this function, the specific probe function
+ * should call pci_disable_device().
+ */
+struct tpm_chip *tpm_register_hardware(struct device *dev,
+ const struct tpm_vendor_specific *entry)
+{
+ struct tpm_chip *chip;
+
+ /* Driver specific per-device data */
+ chip = kzalloc(sizeof(*chip), GFP_KERNEL);
+
+ if (chip == NULL)
+ return NULL;
+
+ mutex_init(&chip->buffer_mutex);
+ mutex_init(&chip->tpm_mutex);
+ INIT_LIST_HEAD(&chip->list);
+
+ INIT_WORK(&chip->work, timeout_work);
+
+ setup_timer(&chip->user_read_timer, user_reader_timeout,
+ (unsigned long)chip);
+
+ memcpy(&chip->vendor, entry, sizeof(struct tpm_vendor_specific));
+
+ chip->dev_num = find_first_zero_bit(dev_mask, TPM_NUM_DEVICES);
+
+ if (chip->dev_num >= TPM_NUM_DEVICES) {
+ dev_err(dev, "No available tpm device numbers\n");
+ goto out_free;
+ } else if (chip->dev_num == 0)
+ chip->vendor.miscdev.minor = TPM_MINOR;
+ else
+ chip->vendor.miscdev.minor = MISC_DYNAMIC_MINOR;
+
+ set_bit(chip->dev_num, dev_mask);
+
+ scnprintf(chip->devname, sizeof(chip->devname), "%s%d", "tpm",
+ chip->dev_num);
+ chip->vendor.miscdev.name = chip->devname;
+
+ chip->vendor.miscdev.parent = dev;
+ chip->dev = get_device(dev);
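+	/* Interpose tpm_dev_release() so the chip is freed once the last
+	 * device reference is dropped; the original release handler is
+	 * invoked from there.
+	 */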
+ chip->release = dev->release;
+ dev->release = tpm_dev_release;
+ dev_set_drvdata(dev, chip);
+
+ if (misc_register(&chip->vendor.miscdev)) {
+ dev_err(chip->dev,
+ "unable to misc_register %s, minor %d\n",
+ chip->vendor.miscdev.name,
+ chip->vendor.miscdev.minor);
+ goto put_device;
+ }
+
+ if (sysfs_create_group(&dev->kobj, chip->vendor.attr_group)) {
+ misc_deregister(&chip->vendor.miscdev);
+ goto put_device;
+ }
+
+ if (tpm_add_ppi(&dev->kobj)) {
+ misc_deregister(&chip->vendor.miscdev);
+ goto put_device;
+ }
+
+ chip->bios_dir = tpm_bios_log_setup(chip->devname);
+
+ /* Make chip available */
+ spin_lock(&driver_lock);
+ list_add_rcu(&chip->list, &tpm_chip_list);
+ spin_unlock(&driver_lock);
+
+ return chip;
+
+put_device:
+ put_device(chip->dev);
+out_free:
+ kfree(chip);
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(tpm_register_hardware);
+
+MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
+MODULE_DESCRIPTION("TPM Driver");
+MODULE_VERSION("2.0");
+MODULE_LICENSE("GPL");
+++ /dev/null
-/*
- * Copyright (C) 2004 IBM Corporation
- *
- * Authors:
- * Leendert van Doorn <leendert@watson.ibm.com>
- * Dave Safford <safford@watson.ibm.com>
- * Reiner Sailer <sailer@watson.ibm.com>
- * Kylene Hall <kjhall@us.ibm.com>
- *
- * Maintained by: <tpmdd-devel@lists.sourceforge.net>
- *
- * Device driver for TCG/TCPA TPM (trusted platform module).
- * Specifications at www.trustedcomputinggroup.org
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation, version 2 of the
- * License.
- *
- * Note, the TPM chip is not interrupt driven (only polling)
- * and can have very long timeouts (minutes!). Hence the unusual
- * calls to msleep.
- *
- */
-
-#include <linux/poll.h>
-#include <linux/slab.h>
-#include <linux/mutex.h>
-#include <linux/spinlock.h>
-#include <linux/freezer.h>
-
-#include "tpm.h"
-#include "tpm_eventlog.h"
-
-enum tpm_duration {
- TPM_SHORT = 0,
- TPM_MEDIUM = 1,
- TPM_LONG = 2,
- TPM_UNDEFINED,
-};
-
-#define TPM_MAX_ORDINAL 243
-#define TSC_MAX_ORDINAL 12
-#define TPM_PROTECTED_COMMAND 0x00
-#define TPM_CONNECTION_COMMAND 0x40
-
-/*
- * Bug workaround - some TPM's don't flush the most
- * recently changed pcr on suspend, so force the flush
- * with an extend to the selected _unused_ non-volatile pcr.
- */
-static int tpm_suspend_pcr;
-module_param_named(suspend_pcr, tpm_suspend_pcr, uint, 0644);
-MODULE_PARM_DESC(suspend_pcr,
- "PCR to use for dummy writes to faciltate flush on suspend.");
-
-static LIST_HEAD(tpm_chip_list);
-static DEFINE_SPINLOCK(driver_lock);
-static DECLARE_BITMAP(dev_mask, TPM_NUM_DEVICES);
-
-/*
- * Array with one entry per ordinal defining the maximum amount
- * of time the chip could take to return the result. The ordinal
- * designation of short, medium or long is defined in a table in
- * TCG Specification TPM Main Part 2 TPM Structures Section 17. The
- * values of the SHORT, MEDIUM, and LONG durations are retrieved
- * from the chip during initialization with a call to tpm_get_timeouts.
- */
-static const u8 tpm_ordinal_duration[TPM_MAX_ORDINAL] = {
- TPM_UNDEFINED, /* 0 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 5 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 10 */
- TPM_SHORT,
- TPM_MEDIUM,
- TPM_LONG,
- TPM_LONG,
- TPM_MEDIUM, /* 15 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_MEDIUM,
- TPM_LONG,
- TPM_SHORT, /* 20 */
- TPM_SHORT,
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_SHORT, /* 25 */
- TPM_SHORT,
- TPM_MEDIUM,
- TPM_SHORT,
- TPM_SHORT,
- TPM_MEDIUM, /* 30 */
- TPM_LONG,
- TPM_MEDIUM,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT, /* 35 */
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_MEDIUM, /* 40 */
- TPM_LONG,
- TPM_MEDIUM,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT, /* 45 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_LONG,
- TPM_MEDIUM, /* 50 */
- TPM_MEDIUM,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 55 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_MEDIUM, /* 60 */
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_SHORT,
- TPM_SHORT,
- TPM_MEDIUM, /* 65 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 70 */
- TPM_SHORT,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 75 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_LONG, /* 80 */
- TPM_UNDEFINED,
- TPM_MEDIUM,
- TPM_LONG,
- TPM_SHORT,
- TPM_UNDEFINED, /* 85 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 90 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_UNDEFINED, /* 95 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_MEDIUM, /* 100 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 105 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 110 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT, /* 115 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_LONG, /* 120 */
- TPM_LONG,
- TPM_MEDIUM,
- TPM_UNDEFINED,
- TPM_SHORT,
- TPM_SHORT, /* 125 */
- TPM_SHORT,
- TPM_LONG,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT, /* 130 */
- TPM_MEDIUM,
- TPM_UNDEFINED,
- TPM_SHORT,
- TPM_MEDIUM,
- TPM_UNDEFINED, /* 135 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 140 */
- TPM_SHORT,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 145 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 150 */
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_SHORT,
- TPM_SHORT,
- TPM_UNDEFINED, /* 155 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 160 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 165 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_LONG, /* 170 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 175 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_MEDIUM, /* 180 */
- TPM_SHORT,
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_MEDIUM, /* 185 */
- TPM_SHORT,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 190 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 195 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 200 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT,
- TPM_SHORT, /* 205 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_MEDIUM, /* 210 */
- TPM_UNDEFINED,
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_MEDIUM,
- TPM_UNDEFINED, /* 215 */
- TPM_MEDIUM,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT,
- TPM_SHORT, /* 220 */
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_SHORT,
- TPM_UNDEFINED, /* 225 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 230 */
- TPM_LONG,
- TPM_MEDIUM,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED, /* 235 */
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_UNDEFINED,
- TPM_SHORT, /* 240 */
- TPM_UNDEFINED,
- TPM_MEDIUM,
-};
-
-static void user_reader_timeout(unsigned long ptr)
-{
- struct tpm_chip *chip = (struct tpm_chip *) ptr;
-
- schedule_work(&chip->work);
-}
-
-static void timeout_work(struct work_struct *work)
-{
- struct tpm_chip *chip = container_of(work, struct tpm_chip, work);
-
- mutex_lock(&chip->buffer_mutex);
- atomic_set(&chip->data_pending, 0);
- memset(chip->data_buffer, 0, TPM_BUFSIZE);
- mutex_unlock(&chip->buffer_mutex);
-}
-
-/*
- * Returns max number of jiffies to wait
- */
-unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip,
- u32 ordinal)
-{
- int duration_idx = TPM_UNDEFINED;
- int duration = 0;
- u8 category = (ordinal >> 24) & 0xFF;
-
- if ((category == TPM_PROTECTED_COMMAND && ordinal < TPM_MAX_ORDINAL) ||
- (category == TPM_CONNECTION_COMMAND && ordinal < TSC_MAX_ORDINAL))
- duration_idx = tpm_ordinal_duration[ordinal];
-
- if (duration_idx != TPM_UNDEFINED)
- duration = chip->vendor.duration[duration_idx];
- if (duration <= 0)
- return 2 * 60 * HZ;
- else
- return duration;
-}
-EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
-
-/*
- * Internal kernel interface to transmit TPM commands
- */
-static ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
- size_t bufsiz)
-{
- ssize_t rc;
- u32 count, ordinal;
- unsigned long stop;
-
- if (bufsiz > TPM_BUFSIZE)
- bufsiz = TPM_BUFSIZE;
-
- count = be32_to_cpu(*((__be32 *) (buf + 2)));
- ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
- if (count == 0)
- return -ENODATA;
- if (count > bufsiz) {
- dev_err(chip->dev,
- "invalid count value %x %zx \n", count, bufsiz);
- return -E2BIG;
- }
-
- mutex_lock(&chip->tpm_mutex);
-
- if ((rc = chip->vendor.send(chip, (u8 *) buf, count)) < 0) {
- dev_err(chip->dev,
- "tpm_transmit: tpm_send: error %zd\n", rc);
- goto out;
- }
-
- if (chip->vendor.irq)
- goto out_recv;
-
- stop = jiffies + tpm_calc_ordinal_duration(chip, ordinal);
- do {
- u8 status = chip->vendor.status(chip);
- if ((status & chip->vendor.req_complete_mask) ==
- chip->vendor.req_complete_val)
- goto out_recv;
-
- if (chip->vendor.req_canceled(chip, status)) {
- dev_err(chip->dev, "Operation Canceled\n");
- rc = -ECANCELED;
- goto out;
- }
-
- msleep(TPM_TIMEOUT); /* CHECK */
- rmb();
- } while (time_before(jiffies, stop));
-
- chip->vendor.cancel(chip);
- dev_err(chip->dev, "Operation Timed out\n");
- rc = -ETIME;
- goto out;
-
-out_recv:
- rc = chip->vendor.recv(chip, (u8 *) buf, bufsiz);
- if (rc < 0)
- dev_err(chip->dev,
- "tpm_transmit: tpm_recv: error %zd\n", rc);
-out:
- mutex_unlock(&chip->tpm_mutex);
- return rc;
-}
-
-#define TPM_DIGEST_SIZE 20
-#define TPM_RET_CODE_IDX 6
-
-enum tpm_capabilities {
- TPM_CAP_FLAG = cpu_to_be32(4),
- TPM_CAP_PROP = cpu_to_be32(5),
- CAP_VERSION_1_1 = cpu_to_be32(0x06),
- CAP_VERSION_1_2 = cpu_to_be32(0x1A)
-};
-
-enum tpm_sub_capabilities {
- TPM_CAP_PROP_PCR = cpu_to_be32(0x101),
- TPM_CAP_PROP_MANUFACTURER = cpu_to_be32(0x103),
- TPM_CAP_FLAG_PERM = cpu_to_be32(0x108),
- TPM_CAP_FLAG_VOL = cpu_to_be32(0x109),
- TPM_CAP_PROP_OWNER = cpu_to_be32(0x111),
- TPM_CAP_PROP_TIS_TIMEOUT = cpu_to_be32(0x115),
- TPM_CAP_PROP_TIS_DURATION = cpu_to_be32(0x120),
-
-};
-
-static ssize_t transmit_cmd(struct tpm_chip *chip, struct tpm_cmd_t *cmd,
- int len, const char *desc)
-{
- int err;
-
- len = tpm_transmit(chip,(u8 *) cmd, len);
- if (len < 0)
- return len;
- else if (len < TPM_HEADER_SIZE)
- return -EFAULT;
-
- err = be32_to_cpu(cmd->header.out.return_code);
- if (err != 0 && desc)
- dev_err(chip->dev, "A TPM error (%d) occurred %s\n", err, desc);
-
- return err;
-}
-
-#define TPM_INTERNAL_RESULT_SIZE 200
-#define TPM_TAG_RQU_COMMAND cpu_to_be16(193)
-#define TPM_ORD_GET_CAP cpu_to_be32(101)
-#define TPM_ORD_GET_RANDOM cpu_to_be32(70)
-
-static const struct tpm_input_header tpm_getcap_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(22),
- .ordinal = TPM_ORD_GET_CAP
-};
-
-ssize_t tpm_getcap(struct device *dev, __be32 subcap_id, cap_t *cap,
- const char *desc)
-{
- struct tpm_cmd_t tpm_cmd;
- int rc;
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- tpm_cmd.header.in = tpm_getcap_header;
- if (subcap_id == CAP_VERSION_1_1 || subcap_id == CAP_VERSION_1_2) {
- tpm_cmd.params.getcap_in.cap = subcap_id;
- /*subcap field not necessary */
- tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(0);
- tpm_cmd.header.in.length -= cpu_to_be32(sizeof(__be32));
- } else {
- if (subcap_id == TPM_CAP_FLAG_PERM ||
- subcap_id == TPM_CAP_FLAG_VOL)
- tpm_cmd.params.getcap_in.cap = TPM_CAP_FLAG;
- else
- tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
- tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
- tpm_cmd.params.getcap_in.subcap = subcap_id;
- }
- rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, desc);
- if (!rc)
- *cap = tpm_cmd.params.getcap_out.cap;
- return rc;
-}
-
-void tpm_gen_interrupt(struct tpm_chip *chip)
-{
- struct tpm_cmd_t tpm_cmd;
- ssize_t rc;
-
- tpm_cmd.header.in = tpm_getcap_header;
- tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
- tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
- tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
-
- rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
- "attempting to determine the timeouts");
-}
-EXPORT_SYMBOL_GPL(tpm_gen_interrupt);
-
-#define TPM_ORD_STARTUP cpu_to_be32(153)
-#define TPM_ST_CLEAR cpu_to_be16(1)
-#define TPM_ST_STATE cpu_to_be16(2)
-#define TPM_ST_DEACTIVATED cpu_to_be16(3)
-static const struct tpm_input_header tpm_startup_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(12),
- .ordinal = TPM_ORD_STARTUP
-};
-
-static int tpm_startup(struct tpm_chip *chip, __be16 startup_type)
-{
- struct tpm_cmd_t start_cmd;
- start_cmd.header.in = tpm_startup_header;
- start_cmd.params.startup_in.startup_type = startup_type;
- return transmit_cmd(chip, &start_cmd, TPM_INTERNAL_RESULT_SIZE,
- "attempting to start the TPM");
-}
-
-int tpm_get_timeouts(struct tpm_chip *chip)
-{
- struct tpm_cmd_t tpm_cmd;
- struct timeout_t *timeout_cap;
- struct duration_t *duration_cap;
- ssize_t rc;
- u32 timeout;
- unsigned int scale = 1;
-
- tpm_cmd.header.in = tpm_getcap_header;
- tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
- tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
- tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
- rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, NULL);
-
- if (rc == TPM_ERR_INVALID_POSTINIT) {
- /* The TPM is not started, we are the first to talk to it.
- Execute a startup command. */
- dev_info(chip->dev, "Issuing TPM_STARTUP");
- if (tpm_startup(chip, TPM_ST_CLEAR))
- return rc;
-
- tpm_cmd.header.in = tpm_getcap_header;
- tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
- tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
- tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT;
- rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
- NULL);
- }
- if (rc) {
- dev_err(chip->dev,
- "A TPM error (%zd) occurred attempting to determine the timeouts\n",
- rc);
- goto duration;
- }
-
- if (be32_to_cpu(tpm_cmd.header.out.return_code) != 0 ||
- be32_to_cpu(tpm_cmd.header.out.length)
- != sizeof(tpm_cmd.header.out) + sizeof(u32) + 4 * sizeof(u32))
- return -EINVAL;
-
- timeout_cap = &tpm_cmd.params.getcap_out.cap.timeout;
- /* Don't overwrite default if value is 0 */
- timeout = be32_to_cpu(timeout_cap->a);
- if (timeout && timeout < 1000) {
- /* timeouts in msec rather usec */
- scale = 1000;
- chip->vendor.timeout_adjusted = true;
- }
- if (timeout)
- chip->vendor.timeout_a = usecs_to_jiffies(timeout * scale);
- timeout = be32_to_cpu(timeout_cap->b);
- if (timeout)
- chip->vendor.timeout_b = usecs_to_jiffies(timeout * scale);
- timeout = be32_to_cpu(timeout_cap->c);
- if (timeout)
- chip->vendor.timeout_c = usecs_to_jiffies(timeout * scale);
- timeout = be32_to_cpu(timeout_cap->d);
- if (timeout)
- chip->vendor.timeout_d = usecs_to_jiffies(timeout * scale);
-
-duration:
- tpm_cmd.header.in = tpm_getcap_header;
- tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP;
- tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
- tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_DURATION;
-
- rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
- "attempting to determine the durations");
- if (rc)
- return rc;
-
- if (be32_to_cpu(tpm_cmd.header.out.return_code) != 0 ||
- be32_to_cpu(tpm_cmd.header.out.length)
- != sizeof(tpm_cmd.header.out) + sizeof(u32) + 3 * sizeof(u32))
- return -EINVAL;
-
- duration_cap = &tpm_cmd.params.getcap_out.cap.duration;
- chip->vendor.duration[TPM_SHORT] =
- usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_short));
- chip->vendor.duration[TPM_MEDIUM] =
- usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_medium));
- chip->vendor.duration[TPM_LONG] =
- usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_long));
-
- /* The Broadcom BCM0102 chipset in a Dell Latitude D820 gets the above
- * value wrong and apparently reports msecs rather than usecs. So we
- * fix up the resulting too-small TPM_SHORT value to make things work.
- * We also scale the TPM_MEDIUM and -_LONG values by 1000.
- */
- if (chip->vendor.duration[TPM_SHORT] < (HZ / 100)) {
- chip->vendor.duration[TPM_SHORT] = HZ;
- chip->vendor.duration[TPM_MEDIUM] *= 1000;
- chip->vendor.duration[TPM_LONG] *= 1000;
- chip->vendor.duration_adjusted = true;
- dev_info(chip->dev, "Adjusting TPM timeout parameters.");
- }
- return 0;
-}
-EXPORT_SYMBOL_GPL(tpm_get_timeouts);
-
-#define TPM_ORD_CONTINUE_SELFTEST 83
-#define CONTINUE_SELFTEST_RESULT_SIZE 10
-
-static struct tpm_input_header continue_selftest_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(10),
- .ordinal = cpu_to_be32(TPM_ORD_CONTINUE_SELFTEST),
-};
-
-/**
- * tpm_continue_selftest -- run TPM's selftest
- * @chip: TPM chip to use
- *
- * Returns 0 on success, < 0 in case of fatal error or a value > 0 representing
- * a TPM error code.
- */
-static int tpm_continue_selftest(struct tpm_chip *chip)
-{
- int rc;
- struct tpm_cmd_t cmd;
-
- cmd.header.in = continue_selftest_header;
- rc = transmit_cmd(chip, &cmd, CONTINUE_SELFTEST_RESULT_SIZE,
- "continue selftest");
- return rc;
-}
-
-ssize_t tpm_show_enabled(struct device * dev, struct device_attribute * attr,
- char *buf)
-{
- cap_t cap;
- ssize_t rc;
-
- rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
- "attempting to determine the permanent enabled state");
- if (rc)
- return 0;
-
- rc = sprintf(buf, "%d\n", !cap.perm_flags.disable);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_show_enabled);
-
-ssize_t tpm_show_active(struct device * dev, struct device_attribute * attr,
- char *buf)
-{
- cap_t cap;
- ssize_t rc;
-
- rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
- "attempting to determine the permanent active state");
- if (rc)
- return 0;
-
- rc = sprintf(buf, "%d\n", !cap.perm_flags.deactivated);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_show_active);
-
-ssize_t tpm_show_owned(struct device * dev, struct device_attribute * attr,
- char *buf)
-{
- cap_t cap;
- ssize_t rc;
-
- rc = tpm_getcap(dev, TPM_CAP_PROP_OWNER, &cap,
- "attempting to determine the owner state");
- if (rc)
- return 0;
-
- rc = sprintf(buf, "%d\n", cap.owned);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_show_owned);
-
-ssize_t tpm_show_temp_deactivated(struct device * dev,
- struct device_attribute * attr, char *buf)
-{
- cap_t cap;
- ssize_t rc;
-
- rc = tpm_getcap(dev, TPM_CAP_FLAG_VOL, &cap,
- "attempting to determine the temporary state");
- if (rc)
- return 0;
-
- rc = sprintf(buf, "%d\n", cap.stclear_flags.deactivated);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_show_temp_deactivated);
-
-/*
- * tpm_chip_find_get - return tpm_chip for given chip number
- */
-static struct tpm_chip *tpm_chip_find_get(int chip_num)
-{
- struct tpm_chip *pos, *chip = NULL;
-
- rcu_read_lock();
- list_for_each_entry_rcu(pos, &tpm_chip_list, list) {
- if (chip_num != TPM_ANY_NUM && chip_num != pos->dev_num)
- continue;
-
- if (try_module_get(pos->dev->driver->owner)) {
- chip = pos;
- break;
- }
- }
- rcu_read_unlock();
- return chip;
-}
-
-#define TPM_ORDINAL_PCRREAD cpu_to_be32(21)
-#define READ_PCR_RESULT_SIZE 30
-static struct tpm_input_header pcrread_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(14),
- .ordinal = TPM_ORDINAL_PCRREAD
-};
-
-static int __tpm_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
-{
- int rc;
- struct tpm_cmd_t cmd;
-
- cmd.header.in = pcrread_header;
- cmd.params.pcrread_in.pcr_idx = cpu_to_be32(pcr_idx);
- rc = transmit_cmd(chip, &cmd, READ_PCR_RESULT_SIZE,
- "attempting to read a pcr value");
-
- if (rc == 0)
- memcpy(res_buf, cmd.params.pcrread_out.pcr_result,
- TPM_DIGEST_SIZE);
- return rc;
-}
-
-/**
- * tpm_pcr_read - read a pcr value
- * @chip_num: tpm idx # or ANY
- * @pcr_idx: pcr idx to retrieve
- * @res_buf: TPM_PCR value
- * size of res_buf is 20 bytes (or NULL if you don't care)
- *
- * The TPM driver should be built-in, but for whatever reason it
- * isn't, protect against the chip disappearing, by incrementing
- * the module usage count.
- */
-int tpm_pcr_read(u32 chip_num, int pcr_idx, u8 *res_buf)
-{
- struct tpm_chip *chip;
- int rc;
-
- chip = tpm_chip_find_get(chip_num);
- if (chip == NULL)
- return -ENODEV;
- rc = __tpm_pcr_read(chip, pcr_idx, res_buf);
- tpm_chip_put(chip);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_pcr_read);
-
-/**
- * tpm_pcr_extend - extend pcr value with hash
- * @chip_num: tpm idx # or AN&
- * @pcr_idx: pcr idx to extend
- * @hash: hash value used to extend pcr value
- *
- * The TPM driver should be built-in, but for whatever reason it
- * isn't, protect against the chip disappearing, by incrementing
- * the module usage count.
- */
-#define TPM_ORD_PCR_EXTEND cpu_to_be32(20)
-#define EXTEND_PCR_RESULT_SIZE 34
-static struct tpm_input_header pcrextend_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(34),
- .ordinal = TPM_ORD_PCR_EXTEND
-};
-
-int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
-{
- struct tpm_cmd_t cmd;
- int rc;
- struct tpm_chip *chip;
-
- chip = tpm_chip_find_get(chip_num);
- if (chip == NULL)
- return -ENODEV;
-
- cmd.header.in = pcrextend_header;
- cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx);
- memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE);
- rc = transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
- "attempting extend a PCR value");
-
- tpm_chip_put(chip);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_pcr_extend);
-
-/**
- * tpm_do_selftest - have the TPM continue its selftest and wait until it
- * can receive further commands
- * @chip: TPM chip to use
- *
- * Returns 0 on success, < 0 in case of fatal error or a value > 0 representing
- * a TPM error code.
- */
-int tpm_do_selftest(struct tpm_chip *chip)
-{
- int rc;
- unsigned int loops;
- unsigned int delay_msec = 100;
- unsigned long duration;
- struct tpm_cmd_t cmd;
-
- duration = tpm_calc_ordinal_duration(chip,
- TPM_ORD_CONTINUE_SELFTEST);
-
- loops = jiffies_to_msecs(duration) / delay_msec;
-
- rc = tpm_continue_selftest(chip);
- /* This may fail if there was no TPM driver during a suspend/resume
- * cycle; some may return 10 (BAD_ORDINAL), others 28 (FAILEDSELFTEST)
- */
- if (rc)
- return rc;
-
- do {
- /* Attempt to read a PCR value */
- cmd.header.in = pcrread_header;
- cmd.params.pcrread_in.pcr_idx = cpu_to_be32(0);
- rc = tpm_transmit(chip, (u8 *) &cmd, READ_PCR_RESULT_SIZE);
- /* Some buggy TPMs will not respond to tpm_tis_ready() for
- * around 300ms while the self test is ongoing, keep trying
- * until the self test duration expires. */
- if (rc == -ETIME) {
- dev_info(chip->dev, HW_ERR "TPM command timed out during continue self test");
- msleep(delay_msec);
- continue;
- }
-
- if (rc < TPM_HEADER_SIZE)
- return -EFAULT;
-
- rc = be32_to_cpu(cmd.header.out.return_code);
- if (rc == TPM_ERR_DISABLED || rc == TPM_ERR_DEACTIVATED) {
- dev_info(chip->dev,
- "TPM is disabled/deactivated (0x%X)\n", rc);
- /* TPM is disabled and/or deactivated; driver can
- * proceed and TPM does handle commands for
- * suspend/resume correctly
- */
- return 0;
- }
- if (rc != TPM_WARN_DOING_SELFTEST)
- return rc;
- msleep(delay_msec);
- } while (--loops > 0);
-
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_do_selftest);
-
-int tpm_send(u32 chip_num, void *cmd, size_t buflen)
-{
- struct tpm_chip *chip;
- int rc;
-
- chip = tpm_chip_find_get(chip_num);
- if (chip == NULL)
- return -ENODEV;
-
- rc = transmit_cmd(chip, cmd, buflen, "attempting tpm_cmd");
-
- tpm_chip_put(chip);
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_send);
-
-ssize_t tpm_show_pcrs(struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- cap_t cap;
- u8 digest[TPM_DIGEST_SIZE];
- ssize_t rc;
- int i, j, num_pcrs;
- char *str = buf;
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- rc = tpm_getcap(dev, TPM_CAP_PROP_PCR, &cap,
- "attempting to determine the number of PCRS");
- if (rc)
- return 0;
-
- num_pcrs = be32_to_cpu(cap.num_pcrs);
- for (i = 0; i < num_pcrs; i++) {
- rc = __tpm_pcr_read(chip, i, digest);
- if (rc)
- break;
- str += sprintf(str, "PCR-%02d: ", i);
- for (j = 0; j < TPM_DIGEST_SIZE; j++)
- str += sprintf(str, "%02X ", digest[j]);
- str += sprintf(str, "\n");
- }
- return str - buf;
-}
-EXPORT_SYMBOL_GPL(tpm_show_pcrs);
-
-#define READ_PUBEK_RESULT_SIZE 314
-#define TPM_ORD_READPUBEK cpu_to_be32(124)
-static struct tpm_input_header tpm_readpubek_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(30),
- .ordinal = TPM_ORD_READPUBEK
-};
-
-ssize_t tpm_show_pubek(struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- u8 *data;
- struct tpm_cmd_t tpm_cmd;
- ssize_t err;
- int i, rc;
- char *str = buf;
-
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- tpm_cmd.header.in = tpm_readpubek_header;
- err = transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE,
- "attempting to read the PUBEK");
- if (err)
- goto out;
-
- /*
- ignore header 10 bytes
- algorithm 32 bits (1 == RSA )
- encscheme 16 bits
- sigscheme 16 bits
- parameters (RSA 12->bytes: keybit, #primes, expbit)
- keylenbytes 32 bits
- 256 byte modulus
- ignore checksum 20 bytes
- */
- data = tpm_cmd.params.readpubek_out_buffer;
- str +=
- sprintf(str,
- "Algorithm: %02X %02X %02X %02X\n"
- "Encscheme: %02X %02X\n"
- "Sigscheme: %02X %02X\n"
- "Parameters: %02X %02X %02X %02X "
- "%02X %02X %02X %02X "
- "%02X %02X %02X %02X\n"
- "Modulus length: %d\n"
- "Modulus:\n",
- data[0], data[1], data[2], data[3],
- data[4], data[5],
- data[6], data[7],
- data[12], data[13], data[14], data[15],
- data[16], data[17], data[18], data[19],
- data[20], data[21], data[22], data[23],
- be32_to_cpu(*((__be32 *) (data + 24))));
-
- for (i = 0; i < 256; i++) {
- str += sprintf(str, "%02X ", data[i + 28]);
- if ((i + 1) % 16 == 0)
- str += sprintf(str, "\n");
- }
-out:
- rc = str - buf;
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_show_pubek);
-
-
-ssize_t tpm_show_caps(struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- cap_t cap;
- ssize_t rc;
- char *str = buf;
-
- rc = tpm_getcap(dev, TPM_CAP_PROP_MANUFACTURER, &cap,
- "attempting to determine the manufacturer");
- if (rc)
- return 0;
- str += sprintf(str, "Manufacturer: 0x%x\n",
- be32_to_cpu(cap.manufacturer_id));
-
- rc = tpm_getcap(dev, CAP_VERSION_1_1, &cap,
- "attempting to determine the 1.1 version");
- if (rc)
- return 0;
- str += sprintf(str,
- "TCG version: %d.%d\nFirmware version: %d.%d\n",
- cap.tpm_version.Major, cap.tpm_version.Minor,
- cap.tpm_version.revMajor, cap.tpm_version.revMinor);
- return str - buf;
-}
-EXPORT_SYMBOL_GPL(tpm_show_caps);
-
-ssize_t tpm_show_caps_1_2(struct device * dev,
- struct device_attribute * attr, char *buf)
-{
- cap_t cap;
- ssize_t rc;
- char *str = buf;
-
- rc = tpm_getcap(dev, TPM_CAP_PROP_MANUFACTURER, &cap,
- "attempting to determine the manufacturer");
- if (rc)
- return 0;
- str += sprintf(str, "Manufacturer: 0x%x\n",
- be32_to_cpu(cap.manufacturer_id));
- rc = tpm_getcap(dev, CAP_VERSION_1_2, &cap,
- "attempting to determine the 1.2 version");
- if (rc)
- return 0;
- str += sprintf(str,
- "TCG version: %d.%d\nFirmware version: %d.%d\n",
- cap.tpm_version_1_2.Major, cap.tpm_version_1_2.Minor,
- cap.tpm_version_1_2.revMajor,
- cap.tpm_version_1_2.revMinor);
- return str - buf;
-}
-EXPORT_SYMBOL_GPL(tpm_show_caps_1_2);
-
-ssize_t tpm_show_durations(struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- if (chip->vendor.duration[TPM_LONG] == 0)
- return 0;
-
- return sprintf(buf, "%d %d %d [%s]\n",
- jiffies_to_usecs(chip->vendor.duration[TPM_SHORT]),
- jiffies_to_usecs(chip->vendor.duration[TPM_MEDIUM]),
- jiffies_to_usecs(chip->vendor.duration[TPM_LONG]),
- chip->vendor.duration_adjusted
- ? "adjusted" : "original");
-}
-EXPORT_SYMBOL_GPL(tpm_show_durations);
-
-ssize_t tpm_show_timeouts(struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- return sprintf(buf, "%d %d %d %d [%s]\n",
- jiffies_to_usecs(chip->vendor.timeout_a),
- jiffies_to_usecs(chip->vendor.timeout_b),
- jiffies_to_usecs(chip->vendor.timeout_c),
- jiffies_to_usecs(chip->vendor.timeout_d),
- chip->vendor.timeout_adjusted
- ? "adjusted" : "original");
-}
-EXPORT_SYMBOL_GPL(tpm_show_timeouts);
-
-ssize_t tpm_store_cancel(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
- if (chip == NULL)
- return 0;
-
- chip->vendor.cancel(chip);
- return count;
-}
-EXPORT_SYMBOL_GPL(tpm_store_cancel);
-
-static bool wait_for_tpm_stat_cond(struct tpm_chip *chip, u8 mask, bool check_cancel,
- bool *canceled)
-{
- u8 status = chip->vendor.status(chip);
-
- *canceled = false;
- if ((status & mask) == mask)
- return true;
- if (check_cancel && chip->vendor.req_canceled(chip, status)) {
- *canceled = true;
- return true;
- }
- return false;
-}
-
-int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
- wait_queue_head_t *queue, bool check_cancel)
-{
- unsigned long stop;
- long rc;
- u8 status;
- bool canceled = false;
-
- /* check current status */
- status = chip->vendor.status(chip);
- if ((status & mask) == mask)
- return 0;
-
- stop = jiffies + timeout;
-
- if (chip->vendor.irq) {
-again:
- timeout = stop - jiffies;
- if ((long)timeout <= 0)
- return -ETIME;
- rc = wait_event_interruptible_timeout(*queue,
- wait_for_tpm_stat_cond(chip, mask, check_cancel,
- &canceled),
- timeout);
- if (rc > 0) {
- if (canceled)
- return -ECANCELED;
- return 0;
- }
- if (rc == -ERESTARTSYS && freezing(current)) {
- clear_thread_flag(TIF_SIGPENDING);
- goto again;
- }
- } else {
- do {
- msleep(TPM_TIMEOUT);
- status = chip->vendor.status(chip);
- if ((status & mask) == mask)
- return 0;
- } while (time_before(jiffies, stop));
- }
- return -ETIME;
-}
-EXPORT_SYMBOL_GPL(wait_for_tpm_stat);
-/*
- * Device file system interface to the TPM
- *
- * It's assured that the chip will be opened just once,
- * by the check of is_open variable, which is protected
- * by driver_lock.
- */
-int tpm_open(struct inode *inode, struct file *file)
-{
- int minor = iminor(inode);
- struct tpm_chip *chip = NULL, *pos;
-
- rcu_read_lock();
- list_for_each_entry_rcu(pos, &tpm_chip_list, list) {
- if (pos->vendor.miscdev.minor == minor) {
- chip = pos;
- get_device(chip->dev);
- break;
- }
- }
- rcu_read_unlock();
-
- if (!chip)
- return -ENODEV;
-
- if (test_and_set_bit(0, &chip->is_open)) {
- dev_dbg(chip->dev, "Another process owns this TPM\n");
- put_device(chip->dev);
- return -EBUSY;
- }
-
- chip->data_buffer = kzalloc(TPM_BUFSIZE, GFP_KERNEL);
- if (chip->data_buffer == NULL) {
- clear_bit(0, &chip->is_open);
- put_device(chip->dev);
- return -ENOMEM;
- }
-
- atomic_set(&chip->data_pending, 0);
-
- file->private_data = chip;
- return 0;
-}
-EXPORT_SYMBOL_GPL(tpm_open);
-
-/*
- * Called on file close
- */
-int tpm_release(struct inode *inode, struct file *file)
-{
- struct tpm_chip *chip = file->private_data;
-
- del_singleshot_timer_sync(&chip->user_read_timer);
- flush_work(&chip->work);
- file->private_data = NULL;
- atomic_set(&chip->data_pending, 0);
- kzfree(chip->data_buffer);
- clear_bit(0, &chip->is_open);
- put_device(chip->dev);
- return 0;
-}
-EXPORT_SYMBOL_GPL(tpm_release);
-
-ssize_t tpm_write(struct file *file, const char __user *buf,
- size_t size, loff_t *off)
-{
- struct tpm_chip *chip = file->private_data;
- size_t in_size = size;
- ssize_t out_size;
-
- /* cannot perform a write until the read has cleared
- either via tpm_read or a user_read_timer timeout.
- This also prevents splitted buffered writes from blocking here.
- */
- if (atomic_read(&chip->data_pending) != 0)
- return -EBUSY;
-
- if (in_size > TPM_BUFSIZE)
- return -E2BIG;
-
- mutex_lock(&chip->buffer_mutex);
-
- if (copy_from_user
- (chip->data_buffer, (void __user *) buf, in_size)) {
- mutex_unlock(&chip->buffer_mutex);
- return -EFAULT;
- }
-
- /* atomic tpm command send and result receive */
- out_size = tpm_transmit(chip, chip->data_buffer, TPM_BUFSIZE);
- if (out_size < 0) {
- mutex_unlock(&chip->buffer_mutex);
- return out_size;
- }
-
- atomic_set(&chip->data_pending, out_size);
- mutex_unlock(&chip->buffer_mutex);
-
- /* Set a timeout by which the reader must come claim the result */
- mod_timer(&chip->user_read_timer, jiffies + (60 * HZ));
-
- return in_size;
-}
-EXPORT_SYMBOL_GPL(tpm_write);
-
-ssize_t tpm_read(struct file *file, char __user *buf,
- size_t size, loff_t *off)
-{
- struct tpm_chip *chip = file->private_data;
- ssize_t ret_size;
- int rc;
-
- del_singleshot_timer_sync(&chip->user_read_timer);
- flush_work(&chip->work);
- ret_size = atomic_read(&chip->data_pending);
- if (ret_size > 0) { /* relay data */
- ssize_t orig_ret_size = ret_size;
- if (size < ret_size)
- ret_size = size;
-
- mutex_lock(&chip->buffer_mutex);
- rc = copy_to_user(buf, chip->data_buffer, ret_size);
- memset(chip->data_buffer, 0, orig_ret_size);
- if (rc)
- ret_size = -EFAULT;
-
- mutex_unlock(&chip->buffer_mutex);
- }
-
- atomic_set(&chip->data_pending, 0);
-
- return ret_size;
-}
-EXPORT_SYMBOL_GPL(tpm_read);
-
-void tpm_remove_hardware(struct device *dev)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- if (chip == NULL) {
- dev_err(dev, "No device data found\n");
- return;
- }
-
- spin_lock(&driver_lock);
- list_del_rcu(&chip->list);
- spin_unlock(&driver_lock);
- synchronize_rcu();
-
- misc_deregister(&chip->vendor.miscdev);
- sysfs_remove_group(&dev->kobj, chip->vendor.attr_group);
- tpm_remove_ppi(&dev->kobj);
- tpm_bios_log_teardown(chip->bios_dir);
-
- /* write it this way to be explicit (chip->dev == dev) */
- put_device(chip->dev);
-}
-EXPORT_SYMBOL_GPL(tpm_remove_hardware);
-
-#define TPM_ORD_SAVESTATE cpu_to_be32(152)
-#define SAVESTATE_RESULT_SIZE 10
-
-static struct tpm_input_header savestate_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(10),
- .ordinal = TPM_ORD_SAVESTATE
-};
-
-/*
- * We are about to suspend. Save the TPM state
- * so that it can be restored.
- */
-int tpm_pm_suspend(struct device *dev)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
- struct tpm_cmd_t cmd;
- int rc, try;
-
- u8 dummy_hash[TPM_DIGEST_SIZE] = { 0 };
-
- if (chip == NULL)
- return -ENODEV;
-
- /* for buggy tpm, flush pcrs with extend to selected dummy */
- if (tpm_suspend_pcr) {
- cmd.header.in = pcrextend_header;
- cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr);
- memcpy(cmd.params.pcrextend_in.hash, dummy_hash,
- TPM_DIGEST_SIZE);
- rc = transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
- "extending dummy pcr before suspend");
- }
-
- /* now do the actual savestate */
- for (try = 0; try < TPM_RETRY; try++) {
- cmd.header.in = savestate_header;
- rc = transmit_cmd(chip, &cmd, SAVESTATE_RESULT_SIZE, NULL);
-
- /*
- * If the TPM indicates that it is too busy to respond to
- * this command then retry before giving up. It can take
- * several seconds for this TPM to be ready.
- *
- * This can happen if the TPM has already been sent the
- * SaveState command before the driver has loaded. TCG 1.2
- * specification states that any communication after SaveState
- * may cause the TPM to invalidate previously saved state.
- */
- if (rc != TPM_WARN_RETRY)
- break;
- msleep(TPM_TIMEOUT_RETRY);
- }
-
- if (rc)
- dev_err(chip->dev,
- "Error (%d) sending savestate before suspend\n", rc);
- else if (try > 0)
- dev_warn(chip->dev, "TPM savestate took %dms\n",
- try * TPM_TIMEOUT_RETRY);
-
- return rc;
-}
-EXPORT_SYMBOL_GPL(tpm_pm_suspend);
-
-/*
- * Resume from a power safe. The BIOS already restored
- * the TPM state.
- */
-int tpm_pm_resume(struct device *dev)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- if (chip == NULL)
- return -ENODEV;
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(tpm_pm_resume);
-
-#define TPM_GETRANDOM_RESULT_SIZE 18
-static struct tpm_input_header tpm_getrandom_header = {
- .tag = TPM_TAG_RQU_COMMAND,
- .length = cpu_to_be32(14),
- .ordinal = TPM_ORD_GET_RANDOM
-};
-
-/**
- * tpm_get_random() - Get random bytes from the tpm's RNG
- * @chip_num: A specific chip number for the request or TPM_ANY_NUM
- * @out: destination buffer for the random bytes
- * @max: the max number of bytes to write to @out
- *
- * Returns < 0 on error and the number of bytes read on success
- */
-int tpm_get_random(u32 chip_num, u8 *out, size_t max)
-{
- struct tpm_chip *chip;
- struct tpm_cmd_t tpm_cmd;
- u32 recd, num_bytes = min_t(u32, max, TPM_MAX_RNG_DATA);
- int err, total = 0, retries = 5;
- u8 *dest = out;
-
- chip = tpm_chip_find_get(chip_num);
- if (chip == NULL)
- return -ENODEV;
-
- if (!out || !num_bytes || max > TPM_MAX_RNG_DATA)
- return -EINVAL;
-
- do {
- tpm_cmd.header.in = tpm_getrandom_header;
- tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes);
-
- err = transmit_cmd(chip, &tpm_cmd,
- TPM_GETRANDOM_RESULT_SIZE + num_bytes,
- "attempting get random");
- if (err)
- break;
-
- recd = be32_to_cpu(tpm_cmd.params.getrandom_out.rng_data_len);
- memcpy(dest, tpm_cmd.params.getrandom_out.rng_data, recd);
-
- dest += recd;
- total += recd;
- num_bytes -= recd;
- } while (retries-- && total < max);
-
- return total ? total : -EIO;
-}
-EXPORT_SYMBOL_GPL(tpm_get_random);
-
-/* In case vendor provided release function, call it too.*/
-
-void tpm_dev_vendor_release(struct tpm_chip *chip)
-{
- if (!chip)
- return;
-
- if (chip->vendor.release)
- chip->vendor.release(chip->dev);
-
- clear_bit(chip->dev_num, dev_mask);
- kfree(chip->vendor.miscdev.name);
-}
-EXPORT_SYMBOL_GPL(tpm_dev_vendor_release);
-
-
-/*
- * Once all references to platform device are down to 0,
- * release all allocated structures.
- */
-void tpm_dev_release(struct device *dev)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
-
- if (!chip)
- return;
-
- tpm_dev_vendor_release(chip);
-
- chip->release(dev);
- kfree(chip);
-}
-EXPORT_SYMBOL_GPL(tpm_dev_release);
-
-/*
- * Called from tpm_<specific>.c probe function only for devices
- * the driver has determined it should claim. Prior to calling
- * this function the specific probe function has called pci_enable_device
- * upon errant exit from this function specific probe function should call
- * pci_disable_device
- */
-struct tpm_chip *tpm_register_hardware(struct device *dev,
- const struct tpm_vendor_specific *entry)
-{
-#define DEVNAME_SIZE 7
-
- char *devname;
- struct tpm_chip *chip;
-
- /* Driver specific per-device data */
- chip = kzalloc(sizeof(*chip), GFP_KERNEL);
- devname = kmalloc(DEVNAME_SIZE, GFP_KERNEL);
-
- if (chip == NULL || devname == NULL)
- goto out_free;
-
- mutex_init(&chip->buffer_mutex);
- mutex_init(&chip->tpm_mutex);
- INIT_LIST_HEAD(&chip->list);
-
- INIT_WORK(&chip->work, timeout_work);
-
- setup_timer(&chip->user_read_timer, user_reader_timeout,
- (unsigned long)chip);
-
- memcpy(&chip->vendor, entry, sizeof(struct tpm_vendor_specific));
-
- chip->dev_num = find_first_zero_bit(dev_mask, TPM_NUM_DEVICES);
-
- if (chip->dev_num >= TPM_NUM_DEVICES) {
- dev_err(dev, "No available tpm device numbers\n");
- goto out_free;
- } else if (chip->dev_num == 0)
- chip->vendor.miscdev.minor = TPM_MINOR;
- else
- chip->vendor.miscdev.minor = MISC_DYNAMIC_MINOR;
-
- set_bit(chip->dev_num, dev_mask);
-
- scnprintf(devname, DEVNAME_SIZE, "%s%d", "tpm", chip->dev_num);
- chip->vendor.miscdev.name = devname;
-
- chip->vendor.miscdev.parent = dev;
- chip->dev = get_device(dev);
- chip->release = dev->release;
- dev->release = tpm_dev_release;
- dev_set_drvdata(dev, chip);
-
- if (misc_register(&chip->vendor.miscdev)) {
- dev_err(chip->dev,
- "unable to misc_register %s, minor %d\n",
- chip->vendor.miscdev.name,
- chip->vendor.miscdev.minor);
- goto put_device;
- }
-
- if (sysfs_create_group(&dev->kobj, chip->vendor.attr_group)) {
- misc_deregister(&chip->vendor.miscdev);
- goto put_device;
- }
-
- if (tpm_add_ppi(&dev->kobj)) {
- misc_deregister(&chip->vendor.miscdev);
- goto put_device;
- }
-
- chip->bios_dir = tpm_bios_log_setup(devname);
-
- /* Make chip available */
- spin_lock(&driver_lock);
- list_add_rcu(&chip->list, &tpm_chip_list);
- spin_unlock(&driver_lock);
-
- return chip;
-
-put_device:
- put_device(chip->dev);
-out_free:
- kfree(chip);
- kfree(devname);
- return NULL;
-}
-EXPORT_SYMBOL_GPL(tpm_register_hardware);
-
-MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
-MODULE_DESCRIPTION("TPM Driver");
-MODULE_VERSION("2.0");
-MODULE_LICENSE("GPL");
char *);
extern ssize_t tpm_show_caps(struct device *, struct device_attribute *attr,
char *);
-extern ssize_t tpm_show_caps_1_2(struct device *, struct device_attribute *attr,
- char *);
extern ssize_t tpm_store_cancel(struct device *, struct device_attribute *attr,
const char *, size_t);
extern ssize_t tpm_show_enabled(struct device *, struct device_attribute *attr,
struct device *dev; /* Device stuff */
int dev_num; /* /dev/tpm# */
+ char devname[7];
unsigned long is_open; /* only one allowed */
int time_expired;
have_region =
(atmel_request_region
- (tpm_atmel.base, region_size, "tpm_atmel0") == NULL) ? 0 : 1;
+ (base, region_size, "tpm_atmel0") == NULL) ? 0 : 1;
pdev = platform_device_register_simple("tpm_atmel", -1, NULL, 0);
if (IS_ERR(pdev)) {
out:
return NULL;
}
-EXPORT_SYMBOL_GPL(tpm_bios_log_setup);
void tpm_bios_log_teardown(struct dentry **lst)
{
for (i = 0; i < 3; i++)
securityfs_remove(lst[i]);
}
-EXPORT_SYMBOL_GPL(tpm_bios_log_teardown);
-MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * ATMEL I2C TPM AT97SC3204T
+ *
+ * Copyright (C) 2012 V Lab Technologies
+ * Teddy Reed <teddy@prosauce.org>
+ * Copyright (C) 2013, Obsidian Research Corp.
+ * Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
+ * Device driver for ATMEL I2C TPMs.
+ *
+ * Teddy Reed determined the basic I2C command flow; unlike other I2C TPM
+ * devices, the raw TCG formatted TPM command data is written via I2C and
+ * then the raw TCG formatted TPM response data is returned via I2C.
+ *
+ * TCG status/locality/etc functions seen in the LPC implementation do not
+ * seem to be present.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include "tpm.h"
+
+#define I2C_DRIVER_NAME "tpm_i2c_atmel"
+
+#define TPM_I2C_SHORT_TIMEOUT 750 /* ms */
+#define TPM_I2C_LONG_TIMEOUT 2000 /* 2 sec */
+
+#define ATMEL_STS_OK 1
+
+struct priv_data {
+ size_t len;
+ /* This is the amount we read on the first try. 25 was chosen to fit a
+ * fair number of read responses in the buffer so a 2nd retry can be
+ * avoided in small message cases. */
+ u8 buffer[sizeof(struct tpm_output_header) + 25];
+};
+
+static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
+{
+ struct priv_data *priv = chip->vendor.priv;
+ struct i2c_client *client = to_i2c_client(chip->dev);
+ s32 status;
+
+ priv->len = 0;
+
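+	/* A TCG command always carries more than a bare 2-byte tag;
+	 * reject anything impossibly short.
+	 */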
+ if (len <= 2)
+ return -EIO;
+
+ status = i2c_master_send(client, buf, len);
+
+ dev_dbg(chip->dev,
+ "%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__,
+ (int)min_t(size_t, 64, len), buf, len, status);
+ return status;
+}
+
+static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+{
+ struct priv_data *priv = chip->vendor.priv;
+ struct i2c_client *client = to_i2c_client(chip->dev);
+ struct tpm_output_header *hdr =
+ (struct tpm_output_header *)priv->buffer;
+ u32 expected_len;
+ int rc;
+
+ if (priv->len == 0)
+ return -EIO;
+
+	/* Get the message size from the message header; if we didn't get
+	 * the whole message in read_status then we need to re-read it
+	 * here. */
+ expected_len = be32_to_cpu(hdr->length);
+ if (expected_len > count)
+ return -ENOMEM;
+
+ if (priv->len >= expected_len) {
+ dev_dbg(chip->dev,
+ "%s early(buf=%*ph count=%0zx) -> ret=%d\n", __func__,
+ (int)min_t(size_t, 64, expected_len), buf, count,
+ expected_len);
+ memcpy(buf, priv->buffer, expected_len);
+ return expected_len;
+ }
+
+ rc = i2c_master_recv(client, buf, expected_len);
+ dev_dbg(chip->dev,
+ "%s reread(buf=%*ph count=%0zx) -> ret=%d\n", __func__,
+ (int)min_t(size_t, 64, expected_len), buf, count,
+ expected_len);
+ return rc;
+}
+
+static void i2c_atmel_cancel(struct tpm_chip *chip)
+{
+ dev_err(chip->dev, "TPM operation cancellation was requested, but is not supported");
+}
+
+static u8 i2c_atmel_read_status(struct tpm_chip *chip)
+{
+ struct priv_data *priv = chip->vendor.priv;
+ struct i2c_client *client = to_i2c_client(chip->dev);
+ int rc;
+
+ /* The TPM fails the I2C read until it is ready, so we do the entire
+ * transfer here and buffer it locally. This way the common code can
+ * properly handle the timeouts. */
+ priv->len = 0;
+ memset(priv->buffer, 0, sizeof(priv->buffer));
+
+	/* Once the TPM has completed the command, the response remains
+	 * readable until another command is issued. */
+ rc = i2c_master_recv(client, priv->buffer, sizeof(priv->buffer));
+ dev_dbg(chip->dev,
+ "%s: sts=%d", __func__, rc);
+ if (rc <= 0)
+ return 0;
+
+ priv->len = rc;
+
+ return ATMEL_STS_OK;
+}
+
+static const struct file_operations i2c_atmel_ops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .open = tpm_open,
+ .read = tpm_read,
+ .write = tpm_write,
+ .release = tpm_release,
+};
+
+static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL);
+static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL);
+static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL);
+static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
+static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
+static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated, NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
+static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
+static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
+static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
+
+static struct attribute *i2c_atmel_attrs[] = {
+ &dev_attr_pubek.attr,
+ &dev_attr_pcrs.attr,
+ &dev_attr_enabled.attr,
+ &dev_attr_active.attr,
+ &dev_attr_owned.attr,
+ &dev_attr_temp_deactivated.attr,
+ &dev_attr_caps.attr,
+ &dev_attr_cancel.attr,
+ &dev_attr_durations.attr,
+ &dev_attr_timeouts.attr,
+ NULL,
+};
+
+static struct attribute_group i2c_atmel_attr_grp = {
+ .attrs = i2c_atmel_attrs
+};
+
+static bool i2c_atmel_req_canceled(struct tpm_chip *chip, u8 status)
+{
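+	/* cancel is unsupported, so a request can never be seen as canceled */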
+ return 0;
+}
+
+static const struct tpm_vendor_specific i2c_atmel = {
+ .status = i2c_atmel_read_status,
+ .recv = i2c_atmel_recv,
+ .send = i2c_atmel_send,
+ .cancel = i2c_atmel_cancel,
+ .req_complete_mask = ATMEL_STS_OK,
+ .req_complete_val = ATMEL_STS_OK,
+ .req_canceled = i2c_atmel_req_canceled,
+ .attr_group = &i2c_atmel_attr_grp,
+ .miscdev.fops = &i2c_atmel_ops,
+};
+
+static int i2c_atmel_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ int rc;
+ struct tpm_chip *chip;
+ struct device *dev = &client->dev;
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C))
+ return -ENODEV;
+
+ chip = tpm_register_hardware(dev, &i2c_atmel);
+ if (!chip) {
+ dev_err(dev, "%s() error in tpm_register_hardware\n", __func__);
+ return -ENODEV;
+ }
+
+ chip->vendor.priv = devm_kzalloc(dev, sizeof(struct priv_data),
+ GFP_KERNEL);
+
+ /* Default timeouts */
+ chip->vendor.timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
+ chip->vendor.timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT);
+ chip->vendor.timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
+ chip->vendor.timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
+ chip->vendor.irq = 0;
+
+ /* There is no known way to probe for this device, and all version
+ * information seems to be read via TPM commands. Thus we rely on the
+ * TPM startup process in the common code to detect the device. */
+ if (tpm_get_timeouts(chip)) {
+ rc = -ENODEV;
+ goto out_err;
+ }
+
+ if (tpm_do_selftest(chip)) {
+ rc = -ENODEV;
+ goto out_err;
+ }
+
+ return 0;
+
+out_err:
+ tpm_dev_vendor_release(chip);
+ tpm_remove_hardware(chip->dev);
+ return rc;
+}
+
+static int i2c_atmel_remove(struct i2c_client *client)
+{
+ struct device *dev = &(client->dev);
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip)
+ tpm_dev_vendor_release(chip);
+ tpm_remove_hardware(dev);
+ kfree(chip);
+ return 0;
+}
+
+static const struct i2c_device_id i2c_atmel_id[] = {
+ {I2C_DRIVER_NAME, 0},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, i2c_atmel_id);
+
+#ifdef CONFIG_OF
+static const struct of_device_id i2c_atmel_of_match[] = {
+ {.compatible = "atmel,at97sc3204t"},
+ {},
+};
+MODULE_DEVICE_TABLE(of, i2c_atmel_of_match);
+#endif
+
+static SIMPLE_DEV_PM_OPS(i2c_atmel_pm_ops, tpm_pm_suspend, tpm_pm_resume);
+
+static struct i2c_driver i2c_atmel_driver = {
+ .id_table = i2c_atmel_id,
+ .probe = i2c_atmel_probe,
+ .remove = i2c_atmel_remove,
+ .driver = {
+ .name = I2C_DRIVER_NAME,
+ .owner = THIS_MODULE,
+ .pm = &i2c_atmel_pm_ops,
+ .of_match_table = of_match_ptr(i2c_atmel_of_match),
+ },
+};
+
+module_i2c_driver(i2c_atmel_driver);
+
+MODULE_AUTHOR("Jason Gunthorpe <jgunthorpe@obsidianresearch.com>");
+MODULE_DESCRIPTION("Atmel TPM I2C Driver");
+MODULE_LICENSE("GPL");
static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated, NULL);
-static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps_1_2, NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
chip->dev->release = NULL;
chip->release = NULL;
tpm_dev.client = NULL;
- dev_set_drvdata(chip->dev, chip);
out_err:
return rc;
}
chip->dev->release = NULL;
chip->release = NULL;
tpm_dev.client = NULL;
- dev_set_drvdata(chip->dev, chip);
return 0;
}
--- /dev/null
+/******************************************************************************
+ * Nuvoton TPM I2C Device Driver Interface for WPCT301/NPCT501,
+ * based on the TCG TPM Interface Spec version 1.2.
+ * Specifications at www.trustedcomputinggroup.org
+ *
+ * Copyright (C) 2011, Nuvoton Technology Corporation.
+ * Dan Morav <dan.morav@nuvoton.com>
+ * Copyright (C) 2013, Obsidian Research Corp.
+ * Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Nuvoton contact information: APC.Support@nuvoton.com
+ *****************************************************************************/
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/wait.h>
+#include <linux/i2c.h>
+#include "tpm.h"
+
+/* I2C interface offsets */
+#define TPM_STS 0x00
+#define TPM_BURST_COUNT 0x01
+#define TPM_DATA_FIFO_W 0x20
+#define TPM_DATA_FIFO_R 0x40
+#define TPM_VID_DID_RID 0x60
+/* TPM command header size */
+#define TPM_HEADER_SIZE 10
+#define TPM_RETRY 5
+/*
+ * I2C bus device maximum buffer size, not counting the I2C address or
+ * command byte; i.e. the max size needed for an I2C write is
+ * 34 = addr + command + 32 bytes of data
+ */
+#define TPM_I2C_MAX_BUF_SIZE 32
+#define TPM_I2C_RETRY_COUNT 32
+#define TPM_I2C_BUS_DELAY 1 /* msec */
+#define TPM_I2C_RETRY_DELAY_SHORT 2 /* msec */
+#define TPM_I2C_RETRY_DELAY_LONG 10 /* msec */
+
+#define I2C_DRIVER_NAME "tpm_i2c_nuvoton"
+
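+/* count of SINT# interrupts; a change signals a new assertion to waiters */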
+struct priv_data {
+ unsigned int intrs;
+};
+
+static s32 i2c_nuvoton_read_buf(struct i2c_client *client, u8 offset, u8 size,
+ u8 *data)
+{
+ s32 status;
+
+ status = i2c_smbus_read_i2c_block_data(client, offset, size, data);
+ dev_dbg(&client->dev,
+ "%s(offset=%u size=%u data=%*ph) -> sts=%d\n", __func__,
+ offset, size, (int)size, data, status);
+ return status;
+}
+
+static s32 i2c_nuvoton_write_buf(struct i2c_client *client, u8 offset, u8 size,
+ u8 *data)
+{
+ s32 status;
+
+ status = i2c_smbus_write_i2c_block_data(client, offset, size, data);
+ dev_dbg(&client->dev,
+ "%s(offset=%u size=%u data=%*ph) -> sts=%d\n", __func__,
+ offset, size, (int)size, data, status);
+ return status;
+}
+
+#define TPM_STS_VALID 0x80
+#define TPM_STS_COMMAND_READY 0x40
+#define TPM_STS_GO 0x20
+#define TPM_STS_DATA_AVAIL 0x10
+#define TPM_STS_EXPECT 0x08
+#define TPM_STS_RESPONSE_RETRY 0x02
+#define TPM_STS_ERR_VAL 0x07	/* bits 2..0 always read as 0 */
+
+#define TPM_I2C_SHORT_TIMEOUT 750 /* ms */
+#define TPM_I2C_LONG_TIMEOUT 2000 /* 2 sec */
+
+/* read TPM_STS register */
+static u8 i2c_nuvoton_read_status(struct tpm_chip *chip)
+{
+ struct i2c_client *client = to_i2c_client(chip->dev);
+ s32 status;
+ u8 data;
+
+ status = i2c_nuvoton_read_buf(client, TPM_STS, 1, &data);
+ if (status <= 0) {
+ dev_err(chip->dev, "%s() error return %d\n", __func__,
+ status);
+ data = TPM_STS_ERR_VAL;
+ }
+
+ return data;
+}
+
+/* write byte to TPM_STS register */
+static s32 i2c_nuvoton_write_status(struct i2c_client *client, u8 data)
+{
+ s32 status;
+ int i;
+
+	/* the TPM may NAK the write while busy, so retry a bounded
+	 * number of times */
+ for (i = 0, status = -1; i < TPM_I2C_RETRY_COUNT && status < 0; i++) {
+ status = i2c_nuvoton_write_buf(client, TPM_STS, 1, &data);
+ msleep(TPM_I2C_BUS_DELAY);
+ }
+ return status;
+}
+
+/* write commandReady to TPM_STS register */
+static void i2c_nuvoton_ready(struct tpm_chip *chip)
+{
+ struct i2c_client *client = to_i2c_client(chip->dev);
+ s32 status;
+
+ /* this causes the current command to be aborted */
+ status = i2c_nuvoton_write_status(client, TPM_STS_COMMAND_READY);
+ if (status < 0)
+ dev_err(chip->dev,
+ "%s() fail to write TPM_STS.commandReady\n", __func__);
+}
+
+/* read burstCount field from TPM_STS register
+ * return -1 on failure to read */
+static int i2c_nuvoton_get_burstcount(struct i2c_client *client,
+ struct tpm_chip *chip)
+{
+ unsigned long stop = jiffies + chip->vendor.timeout_d;
+ s32 status;
+ int burst_count = -1;
+ u8 data;
+
+ /* wait for burstcount to be non-zero */
+ do {
+ /* in I2C burstCount is 1 byte */
+ status = i2c_nuvoton_read_buf(client, TPM_BURST_COUNT, 1,
+ &data);
+ if (status > 0 && data > 0) {
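+			/* clamp to the largest transfer the I2C buffer allows */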
+ burst_count = min_t(u8, TPM_I2C_MAX_BUF_SIZE, data);
+ break;
+ }
+ msleep(TPM_I2C_BUS_DELAY);
+ } while (time_before(jiffies, stop));
+
+ return burst_count;
+}
+
+/*
+ * WPCT301/NPCT501 SINT# supports only dataAvail;
+ * any caller of i2c_nuvoton_wait_for_stat that is not waiting for dataAvail
+ * must pass queue as NULL to avoid waiting for the interrupt
+ */
+static bool i2c_nuvoton_check_status(struct tpm_chip *chip, u8 mask, u8 value)
+{
+ u8 status = i2c_nuvoton_read_status(chip);
+ return (status != TPM_STS_ERR_VAL) && ((status & mask) == value);
+}
+
+static int i2c_nuvoton_wait_for_stat(struct tpm_chip *chip, u8 mask, u8 value,
+ u32 timeout, wait_queue_head_t *queue)
+{
+ if (chip->vendor.irq && queue) {
+ s32 rc;
+ DEFINE_WAIT(wait);
+ struct priv_data *priv = chip->vendor.priv;
+ unsigned int cur_intrs = priv->intrs;
+
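+		/* the IRQ handler disables the line on each wake-up; re-arm it */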
+ enable_irq(chip->vendor.irq);
+ rc = wait_event_interruptible_timeout(*queue,
+ cur_intrs != priv->intrs,
+ timeout);
+ if (rc > 0)
+ return 0;
+ /* At this point we know that the SINT pin is asserted, so we
+ * do not need to do i2c_nuvoton_check_status */
+ } else {
+ unsigned long ten_msec, stop;
+ bool status_valid;
+
+ /* check current status */
+ status_valid = i2c_nuvoton_check_status(chip, mask, value);
+ if (status_valid)
+ return 0;
+
+ /* use polling to wait for the event */
+ ten_msec = jiffies + msecs_to_jiffies(TPM_I2C_RETRY_DELAY_LONG);
+ stop = jiffies + timeout;
+ do {
+ if (time_before(jiffies, ten_msec))
+ msleep(TPM_I2C_RETRY_DELAY_SHORT);
+ else
+ msleep(TPM_I2C_RETRY_DELAY_LONG);
+ status_valid = i2c_nuvoton_check_status(chip, mask,
+ value);
+ if (status_valid)
+ return 0;
+ } while (time_before(jiffies, stop));
+ }
+ dev_err(chip->dev, "%s(%02x, %02x) -> timeout\n", __func__, mask,
+ value);
+ return -ETIMEDOUT;
+}
+
+/* wait for dataAvail field to be set in the TPM_STS register */
+static int i2c_nuvoton_wait_for_data_avail(struct tpm_chip *chip, u32 timeout,
+ wait_queue_head_t *queue)
+{
+ return i2c_nuvoton_wait_for_stat(chip,
+ TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ timeout, queue);
+}
+
+/* Read @count bytes into @buf from the TPM_DATA_FIFO_R register */
+static int i2c_nuvoton_recv_data(struct i2c_client *client,
+ struct tpm_chip *chip, u8 *buf, size_t count)
+{
+ s32 rc;
+ int burst_count, bytes2read, size = 0;
+
+ while (size < count &&
+ i2c_nuvoton_wait_for_data_avail(chip,
+ chip->vendor.timeout_c,
+ &chip->vendor.read_queue) == 0) {
+ burst_count = i2c_nuvoton_get_burstcount(client, chip);
+ if (burst_count < 0) {
+ dev_err(chip->dev,
+ "%s() fail to read burstCount=%d\n", __func__,
+ burst_count);
+ return -EIO;
+ }
+ bytes2read = min_t(size_t, burst_count, count - size);
+ rc = i2c_nuvoton_read_buf(client, TPM_DATA_FIFO_R,
+ bytes2read, &buf[size]);
+ if (rc < 0) {
+ dev_err(chip->dev,
+ "%s() fail on i2c_nuvoton_read_buf()=%d\n",
+ __func__, rc);
+ return -EIO;
+ }
+ dev_dbg(chip->dev, "%s(%d):", __func__, bytes2read);
+ size += bytes2read;
+ }
+
+ return size;
+}
+
+/* Read TPM command results */
+static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+{
+ struct device *dev = chip->dev;
+ struct i2c_client *client = to_i2c_client(dev);
+ s32 rc;
+ int expected, status, burst_count, retries, size = 0;
+
+ if (count < TPM_HEADER_SIZE) {
+ i2c_nuvoton_ready(chip); /* return to idle */
+ dev_err(dev, "%s() count < header size\n", __func__);
+ return -EIO;
+ }
+ for (retries = 0; retries < TPM_RETRY; retries++) {
+ if (retries > 0) {
+ /* if this is not the first trial, set responseRetry */
+ i2c_nuvoton_write_status(client,
+ TPM_STS_RESPONSE_RETRY);
+ }
+ /*
+		 * read the first available bytes (at least the 10-byte
+		 * header): tag, paramSize, and returnCode
+ */
+ status = i2c_nuvoton_wait_for_data_avail(
+ chip, chip->vendor.timeout_c, &chip->vendor.read_queue);
+ if (status != 0) {
+ dev_err(dev, "%s() timeout on dataAvail\n", __func__);
+ size = -ETIMEDOUT;
+ continue;
+ }
+ burst_count = i2c_nuvoton_get_burstcount(client, chip);
+ if (burst_count < 0) {
+ dev_err(dev, "%s() fail to get burstCount\n", __func__);
+ size = -EIO;
+ continue;
+ }
+ size = i2c_nuvoton_recv_data(client, chip, buf,
+ burst_count);
+ if (size < TPM_HEADER_SIZE) {
+ dev_err(dev, "%s() fail to read header\n", __func__);
+ size = -EIO;
+ continue;
+ }
+ /*
+ * convert number of expected bytes field from big endian 32 bit
+ * to machine native
+ */
+ expected = be32_to_cpu(*(__be32 *) (buf + 2));
+ if (expected > count) {
+ dev_err(dev, "%s() expected > count\n", __func__);
+ size = -EIO;
+ continue;
+ }
+ rc = i2c_nuvoton_recv_data(client, chip, &buf[size],
+ expected - size);
+ size += rc;
+ if (rc < 0 || size < expected) {
+ dev_err(dev, "%s() fail to read remainder of result\n",
+ __func__);
+ size = -EIO;
+ continue;
+ }
+ if (i2c_nuvoton_wait_for_stat(
+ chip, TPM_STS_VALID | TPM_STS_DATA_AVAIL,
+ TPM_STS_VALID, chip->vendor.timeout_c,
+ NULL)) {
+ dev_err(dev, "%s() error left over data\n", __func__);
+ size = -ETIMEDOUT;
+ continue;
+ }
+ break;
+ }
+ i2c_nuvoton_ready(chip);
+ dev_dbg(chip->dev, "%s() -> %d\n", __func__, size);
+ return size;
+}
+
+/*
+ * Send TPM command.
+ *
+ * If interrupts are used (signaled by an irq set in the vendor structure),
+ * tpm.c can skip polling for the data to be available, as the interrupt is
+ * waited for here.
+ */
+static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
+{
+ struct device *dev = chip->dev;
+ struct i2c_client *client = to_i2c_client(dev);
+ u32 ordinal;
+ size_t count = 0;
+ int burst_count, bytes2write, retries, rc = -EIO;
+
+ for (retries = 0; retries < TPM_RETRY; retries++) {
+ i2c_nuvoton_ready(chip);
+ if (i2c_nuvoton_wait_for_stat(chip, TPM_STS_COMMAND_READY,
+ TPM_STS_COMMAND_READY,
+ chip->vendor.timeout_b, NULL)) {
+ dev_err(dev, "%s() timeout on commandReady\n",
+ __func__);
+ rc = -EIO;
+ continue;
+ }
+ rc = 0;
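+		/* stream all but the last byte; the final byte is written
+		 * separately so the TPM can drop Expect */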
+ while (count < len - 1) {
+ burst_count = i2c_nuvoton_get_burstcount(client,
+ chip);
+ if (burst_count < 0) {
+ dev_err(dev, "%s() fail get burstCount\n",
+ __func__);
+ rc = -EIO;
+ break;
+ }
+ bytes2write = min_t(size_t, burst_count,
+ len - 1 - count);
+ rc = i2c_nuvoton_write_buf(client, TPM_DATA_FIFO_W,
+ bytes2write, &buf[count]);
+ if (rc < 0) {
+ dev_err(dev, "%s() fail i2cWriteBuf\n",
+ __func__);
+ break;
+ }
+ dev_dbg(dev, "%s(%d):", __func__, bytes2write);
+ count += bytes2write;
+ rc = i2c_nuvoton_wait_for_stat(chip,
+ TPM_STS_VALID |
+ TPM_STS_EXPECT,
+ TPM_STS_VALID |
+ TPM_STS_EXPECT,
+ chip->vendor.timeout_c,
+ NULL);
+ if (rc < 0) {
+ dev_err(dev, "%s() timeout on Expect\n",
+ __func__);
+ rc = -ETIMEDOUT;
+ break;
+ }
+ }
+ if (rc < 0)
+ continue;
+
+ /* write last byte */
+ rc = i2c_nuvoton_write_buf(client, TPM_DATA_FIFO_W, 1,
+ &buf[count]);
+ if (rc < 0) {
+ dev_err(dev, "%s() fail to write last byte\n",
+ __func__);
+ rc = -EIO;
+ continue;
+ }
+ dev_dbg(dev, "%s(last): %02x", __func__, buf[count]);
+ rc = i2c_nuvoton_wait_for_stat(chip,
+ TPM_STS_VALID | TPM_STS_EXPECT,
+ TPM_STS_VALID,
+ chip->vendor.timeout_c, NULL);
+ if (rc) {
+ dev_err(dev, "%s() timeout on Expect to clear\n",
+ __func__);
+ rc = -ETIMEDOUT;
+ continue;
+ }
+ break;
+ }
+ if (rc < 0) {
+ /* retries == TPM_RETRY */
+ i2c_nuvoton_ready(chip);
+ return rc;
+ }
+ /* execute the TPM command */
+ rc = i2c_nuvoton_write_status(client, TPM_STS_GO);
+ if (rc < 0) {
+ dev_err(dev, "%s() fail to write Go\n", __func__);
+ i2c_nuvoton_ready(chip);
+ return rc;
+ }
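+	/* the 32-bit command ordinal sits at offset 6 of the TPM command header */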
+ ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
+ rc = i2c_nuvoton_wait_for_data_avail(chip,
+ tpm_calc_ordinal_duration(chip,
+ ordinal),
+ &chip->vendor.read_queue);
+ if (rc) {
+ dev_err(dev, "%s() timeout command duration\n", __func__);
+ i2c_nuvoton_ready(chip);
+ return rc;
+ }
+
+ dev_dbg(dev, "%s() -> %zd\n", __func__, len);
+ return len;
+}
+
+static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status)
+{
+ return (status == TPM_STS_COMMAND_READY);
+}
+
+static const struct file_operations i2c_nuvoton_ops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .open = tpm_open,
+ .read = tpm_read,
+ .write = tpm_write,
+ .release = tpm_release,
+};
+
+static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL);
+static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL);
+static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL);
+static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
+static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
+static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated, NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
+static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
+static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
+static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
+
+static struct attribute *i2c_nuvoton_attrs[] = {
+ &dev_attr_pubek.attr,
+ &dev_attr_pcrs.attr,
+ &dev_attr_enabled.attr,
+ &dev_attr_active.attr,
+ &dev_attr_owned.attr,
+ &dev_attr_temp_deactivated.attr,
+ &dev_attr_caps.attr,
+ &dev_attr_cancel.attr,
+ &dev_attr_durations.attr,
+ &dev_attr_timeouts.attr,
+ NULL,
+};
+
+static struct attribute_group i2c_nuvoton_attr_grp = {
+ .attrs = i2c_nuvoton_attrs
+};
+
+static const struct tpm_vendor_specific tpm_i2c = {
+ .status = i2c_nuvoton_read_status,
+ .recv = i2c_nuvoton_recv,
+ .send = i2c_nuvoton_send,
+ .cancel = i2c_nuvoton_ready,
+ .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
+ .req_canceled = i2c_nuvoton_req_canceled,
+ .attr_group = &i2c_nuvoton_attr_grp,
+ .miscdev.fops = &i2c_nuvoton_ops,
+};
+
+/* The only purpose for the handler is to signal to any waiting threads that
+ * the interrupt is currently being asserted. The driver does not do any
+ * processing triggered by interrupts, and the chip provides no way to mask at
+ * the source (plus that would be slow over I2C). Run the IRQ as a one-shot;
+ * this means it cannot be shared. */
+static irqreturn_t i2c_nuvoton_int_handler(int dummy, void *dev_id)
+{
+ struct tpm_chip *chip = dev_id;
+ struct priv_data *priv = chip->vendor.priv;
+
+ priv->intrs++;
+ wake_up(&chip->vendor.read_queue);
+ disable_irq_nosync(chip->vendor.irq);
+ return IRQ_HANDLED;
+}
+
+static int get_vid(struct i2c_client *client, u32 *res)
+{
+ static const u8 vid_did_rid_value[] = { 0x50, 0x10, 0xfe };
+ u32 temp;
+ s32 rc;
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ return -ENODEV;
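+	/* the register packs VID (2 bytes), DID (1 byte) and RID (1 byte) */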
+ rc = i2c_nuvoton_read_buf(client, TPM_VID_DID_RID, 4, (u8 *)&temp);
+ if (rc < 0)
+ return rc;
+
+ /* check WPCT301 values - ignore RID */
+ if (memcmp(&temp, vid_did_rid_value, sizeof(vid_did_rid_value))) {
+ /*
+		 * f/w rev 2.81 has an issue where the VID_DID_RID is not
+		 * reporting the right value, so give it another chance at
+		 * offset 0x20 (FIFO_W).
+ */
+ rc = i2c_nuvoton_read_buf(client, TPM_DATA_FIFO_W, 4,
+ (u8 *) (&temp));
+ if (rc < 0)
+ return rc;
+
+ /* check WPCT301 values - ignore RID */
+ if (memcmp(&temp, vid_did_rid_value,
+ sizeof(vid_did_rid_value)))
+ return -ENODEV;
+ }
+
+ *res = temp;
+ return 0;
+}
+
+static int i2c_nuvoton_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ int rc;
+ struct tpm_chip *chip;
+ struct device *dev = &client->dev;
+ u32 vid = 0;
+
+ rc = get_vid(client, &vid);
+ if (rc)
+ return rc;
+
+ dev_info(dev, "VID: %04X DID: %02X RID: %02X\n", (u16) vid,
+ (u8) (vid >> 16), (u8) (vid >> 24));
+
+ chip = tpm_register_hardware(dev, &tpm_i2c);
+ if (!chip) {
+ dev_err(dev, "%s() error in tpm_register_hardware\n", __func__);
+ return -ENODEV;
+ }
+
+ chip->vendor.priv = devm_kzalloc(dev, sizeof(struct priv_data),
+ GFP_KERNEL);
+ init_waitqueue_head(&chip->vendor.read_queue);
+ init_waitqueue_head(&chip->vendor.int_queue);
+
+ /* Default timeouts */
+ chip->vendor.timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
+ chip->vendor.timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT);
+ chip->vendor.timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
+ chip->vendor.timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
+
+ /*
+	 * I2C intfcaps (interrupt capabilities) in the chip are hard coded to:
+	 * TPM_INTF_INT_LEVEL_LOW | TPM_INTF_DATA_AVAIL_INT
+	 * The IRQ should be set in the i2c_board_info (which is done
+	 * automatically in of_i2c_register_devices for device tree users).
+	 */
+ chip->vendor.irq = client->irq;
+
+ if (chip->vendor.irq) {
+ dev_dbg(dev, "%s() chip-vendor.irq\n", __func__);
+ rc = devm_request_irq(dev, chip->vendor.irq,
+ i2c_nuvoton_int_handler,
+ IRQF_TRIGGER_LOW,
+ chip->vendor.miscdev.name,
+ chip);
+ if (rc) {
+			dev_err(dev, "%s() Unable to request irq %d\n",
+ __func__, chip->vendor.irq);
+ chip->vendor.irq = 0;
+ } else {
+ /* Clear any pending interrupt */
+ i2c_nuvoton_ready(chip);
+ /* - wait for TPM_STS==0xA0 (stsValid, commandReady) */
+ rc = i2c_nuvoton_wait_for_stat(chip,
+ TPM_STS_COMMAND_READY,
+ TPM_STS_COMMAND_READY,
+ chip->vendor.timeout_b,
+ NULL);
+ if (rc == 0) {
+ /*
+ * TIS is in ready state
+ * write dummy byte to enter reception state
+ * TPM_DATA_FIFO_W <- rc (0)
+ */
+ rc = i2c_nuvoton_write_buf(client,
+ TPM_DATA_FIFO_W,
+ 1, (u8 *) (&rc));
+ if (rc < 0)
+ goto out_err;
+ /* TPM_STS <- 0x40 (commandReady) */
+ i2c_nuvoton_ready(chip);
+ } else {
+ /*
+ * timeout_b reached - command was
+ * aborted. TIS should now be in idle state -
+ * only TPM_STS_VALID should be set
+ */
+ if (i2c_nuvoton_read_status(chip) !=
+ TPM_STS_VALID) {
+ rc = -EIO;
+ goto out_err;
+ }
+ }
+ }
+ }
+
+ if (tpm_get_timeouts(chip)) {
+ rc = -ENODEV;
+ goto out_err;
+ }
+
+ if (tpm_do_selftest(chip)) {
+ rc = -ENODEV;
+ goto out_err;
+ }
+
+ return 0;
+
+out_err:
+ tpm_dev_vendor_release(chip);
+ tpm_remove_hardware(chip->dev);
+ return rc;
+}
+
+static int i2c_nuvoton_remove(struct i2c_client *client)
+{
+ struct device *dev = &(client->dev);
+ struct tpm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip)
+ tpm_dev_vendor_release(chip);
+ tpm_remove_hardware(dev);
+ kfree(chip);
+ return 0;
+}
+
+static const struct i2c_device_id i2c_nuvoton_id[] = {
+ {I2C_DRIVER_NAME, 0},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, i2c_nuvoton_id);
+
+#ifdef CONFIG_OF
+static const struct of_device_id i2c_nuvoton_of_match[] = {
+ {.compatible = "nuvoton,npct501"},
+ {.compatible = "winbond,wpct301"},
+ {},
+};
+MODULE_DEVICE_TABLE(of, i2c_nuvoton_of_match);
+#endif
+
+static SIMPLE_DEV_PM_OPS(i2c_nuvoton_pm_ops, tpm_pm_suspend, tpm_pm_resume);
+
+static struct i2c_driver i2c_nuvoton_driver = {
+ .id_table = i2c_nuvoton_id,
+ .probe = i2c_nuvoton_probe,
+ .remove = i2c_nuvoton_remove,
+ .driver = {
+ .name = I2C_DRIVER_NAME,
+ .owner = THIS_MODULE,
+ .pm = &i2c_nuvoton_pm_ops,
+ .of_match_table = of_match_ptr(i2c_nuvoton_of_match),
+ },
+};
+
+module_i2c_driver(i2c_nuvoton_driver);
+
+MODULE_AUTHOR("Dan Morav (dan.morav@nuvoton.com)");
+MODULE_DESCRIPTION("Nuvoton TPM I2C Driver");
+MODULE_LICENSE("GPL");
static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated, NULL);
-static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps_1_2, NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
static struct attribute *stm_tpm_attrs[] = {
tpm_get_timeouts(chip);
- i2c_set_clientdata(client, chip);
-
dev_info(chip->dev, "TPM I2C Initialized\n");
return 0;
_irq_set:
#ifdef CONFIG_PM_SLEEP
/*
* tpm_st33_i2c_pm_suspend suspend the TPM device
- * Added: Work around when suspend and no tpm application is running, suspend
- * may fail because chip->data_buffer is not set (only set in tpm_open in Linux
- * TPM core)
* @param: client, the i2c_client drescription (TPM I2C description).
* @param: mesg, the power management message.
* @return: 0 in case of success.
*/
static int tpm_st33_i2c_pm_suspend(struct device *dev)
{
- struct tpm_chip *chip = dev_get_drvdata(dev);
struct st33zp24_platform_data *pin_infos = dev->platform_data;
int ret = 0;
if (power_mgt) {
gpio_set_value(pin_infos->io_lpcpd, 0);
} else {
- if (chip->data_buffer == NULL)
- chip->data_buffer = pin_infos->tpm_i2c_buffer[0];
ret = tpm_pm_suspend(dev);
}
return ret;
TPM_STS_VALID) == TPM_STS_VALID,
chip->vendor.timeout_b);
} else {
- if (chip->data_buffer == NULL)
- chip->data_buffer = pin_infos->tpm_i2c_buffer[0];
ret = tpm_pm_resume(dev);
if (!ret)
tpm_do_selftest(chip);
if (count < len) {
dev_err(ibmvtpm->dev,
- "Invalid size in recv: count=%ld, crq_size=%d\n",
+ "Invalid size in recv: count=%zd, crq_size=%d\n",
count, len);
return -EIO;
}
if (count > ibmvtpm->rtce_size) {
dev_err(ibmvtpm->dev,
- "Invalid size in send: count=%ld, rtce_size=%d\n",
+ "Invalid size in send: count=%zd, rtce_size=%d\n",
count, ibmvtpm->rtce_size);
return -EIO;
}
static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated,
NULL);
-static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps_1_2, NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
{
return sysfs_create_group(parent, &ppi_attr_grp);
}
-EXPORT_SYMBOL_GPL(tpm_add_ppi);
void tpm_remove_ppi(struct kobject *parent)
{
sysfs_remove_group(parent, &ppi_attr_grp);
}
-EXPORT_SYMBOL_GPL(tpm_remove_ppi);
-
-MODULE_LICENSE("GPL");
static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated,
NULL);
-static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps_1_2, NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
tpm_get_timeouts(priv->chip);
- dev_set_drvdata(&dev->dev, priv->chip);
-
return rv;
}
struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw);
int ret;
- ret = regmap_update_bits(s2mps11->iodev->regmap,
+ ret = regmap_update_bits(s2mps11->iodev->regmap_pmic,
S2MPS11_REG_RTC_CTRL,
s2mps11->mask, s2mps11->mask);
if (!ret)
struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw);
int ret;
- ret = regmap_update_bits(s2mps11->iodev->regmap, S2MPS11_REG_RTC_CTRL,
+ ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, S2MPS11_REG_RTC_CTRL,
s2mps11->mask, ~s2mps11->mask);
if (!ret)
s2mps11_clk->hw.init = &s2mps11_clks_init[i];
s2mps11_clk->mask = 1 << i;
- ret = regmap_read(s2mps11_clk->iodev->regmap,
+ ret = regmap_read(s2mps11_clk->iodev->regmap_pmic,
S2MPS11_REG_RTC_CTRL, &val);
if (ret < 0)
goto err_reg;
config CLKSRC_EFM32
bool "Clocksource for Energy Micro's EFM32 SoCs" if !ARCH_EFM32
depends on OF && ARM && (ARCH_EFM32 || COMPILE_TEST)
+ select CLKSRC_MMIO
default ARCH_EFM32
help
Support to use the timers of EFM32 SoCs as clock source and clock
config ARM_ARCH_TIMER_EVTSTREAM
bool "Support for ARM architected timer event stream generation"
default y if ARM_ARCH_TIMER
+ depends on ARM_ARCH_TIMER
help
This option enables support for event stream generation based on
the ARM architected timer. It is used for waking up CPUs executing
init_func = match->data;
init_func(np);
- of_node_put(np);
}
}
static u64 read_sched_clock(void)
{
- return __raw_readl(sched_io_base);
+ return ~__raw_readl(sched_io_base);
}
static const struct of_device_id sptimer_ids[] __initconst = {
{ .compatible = "picochip,pc3x2-rtc" },
- { .compatible = "snps,dw-apb-timer-sp" },
{ /* Sentinel */ },
};
num_called++;
}
CLOCKSOURCE_OF_DECLARE(pc3x2_timer, "picochip,pc3x2-timer", dw_apb_timer_init);
-CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer-osc", dw_apb_timer_init);
+CLOCKSOURCE_OF_DECLARE(apb_timer_osc, "snps,dw-apb-timer-osc", dw_apb_timer_init);
+CLOCKSOURCE_OF_DECLARE(apb_timer_sp, "snps,dw-apb-timer-sp", dw_apb_timer_init);
+CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer", dw_apb_timer_init);
goto err1;
}
- return sh_mtu2_register(p, (char *)dev_name(&p->pdev->dev),
- cfg->clockevent_rating);
+ ret = clk_prepare(p->clk);
+ if (ret < 0)
+ goto err2;
+
+ ret = sh_mtu2_register(p, (char *)dev_name(&p->pdev->dev),
+ cfg->clockevent_rating);
+ if (ret < 0)
+ goto err3;
+
+ return 0;
+ err3:
+ clk_unprepare(p->clk);
+ err2:
+ clk_put(p->clk);
err1:
iounmap(p->mapbase);
err0:
ret = PTR_ERR(p->clk);
goto err1;
}
+
+ ret = clk_prepare(p->clk);
+ if (ret < 0)
+ goto err2;
+
p->cs_enabled = false;
p->enable_count = 0;
- return sh_tmu_register(p, (char *)dev_name(&p->pdev->dev),
- cfg->clockevent_rating,
- cfg->clocksource_rating);
+ ret = sh_tmu_register(p, (char *)dev_name(&p->pdev->dev),
+ cfg->clockevent_rating,
+ cfg->clocksource_rating);
+ if (ret < 0)
+ goto err3;
+
+ return 0;
+
+ err3:
+ clk_unprepare(p->clk);
+ err2:
+ clk_put(p->clk);
err1:
iounmap(p->mapbase);
err0:
writel(TIMER_CTL_CLK_SRC(TIMER_CTL_CLK_SRC_OSC24M),
timer_base + TIMER_CTL_REG(0));
+ /* Make sure timer is stopped before playing with interrupts */
+ sun4i_clkevt_time_stop(0);
+
ret = setup_irq(irq, &sun4i_timer_irq);
if (ret)
pr_warn("failed to setup irq %d\n", irq);
ticks_per_jiffy = (timer_clk + HZ / 2) / HZ;
- /*
- * Set scale and timer for sched_clock.
- */
- sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk);
-
/*
* Setup free-running clocksource timer (interrupts
* disabled).
timer_ctrl_clrset(0, TIMER0_EN | TIMER0_RELOAD_EN |
TIMER0_DIV(TIMER_DIVIDER_SHIFT));
+ /*
+ * Set scale and timer for sched_clock.
+ */
+ sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk);
+
clocksource_mmio_init(timer_base + TIMER0_VAL_OFF,
"armada_370_xp_clocksource",
timer_clk, 300, 32, clocksource_mmio_readl_down);
return 0;
}
-static int __init at32_cpufreq_driver_init(struct cpufreq_policy *policy)
+static int at32_cpufreq_driver_init(struct cpufreq_policy *policy)
{
unsigned int frequency, rate, min_freq;
int retval, steps, i;
int ret = 0;
memcpy(&new_policy, policy, sizeof(*policy));
+
+	/* Use the default policy if it's valid. */
+ if (cpufreq_driver->setpolicy)
+ cpufreq_parse_governor(policy->governor->name,
+ &new_policy.policy, NULL);
+
/* assure that the starting sequence is run in cpufreq_set_policy */
policy->governor = NULL;
/* set default policy */
ret = cpufreq_set_policy(policy, &new_policy);
- policy->user_policy.policy = policy->policy;
- policy->user_policy.governor = policy->governor;
-
if (ret) {
pr_debug("setting policy failed\n");
if (cpufreq_driver->exit)
#ifdef CONFIG_HOTPLUG_CPU
static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
- unsigned int cpu, struct device *dev,
- bool frozen)
+ unsigned int cpu, struct device *dev)
{
int ret = 0;
unsigned long flags;
}
}
- /* Don't touch sysfs links during light-weight init */
- if (!frozen)
- ret = sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq");
-
- return ret;
+ return sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq");
}
#endif
return NULL;
}
+static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy)
+{
+ struct kobject *kobj;
+ struct completion *cmp;
+
+ down_read(&policy->rwsem);
+ kobj = &policy->kobj;
+ cmp = &policy->kobj_unregister;
+ up_read(&policy->rwsem);
+ kobject_put(kobj);
+
+ /*
+ * We need to make sure that the underlying kobj is
+ * actually not referenced anymore by anybody before we
+ * proceed with unloading.
+ */
+ pr_debug("waiting for dropping of refcount\n");
+ wait_for_completion(cmp);
+ pr_debug("wait complete\n");
+}
+
static void cpufreq_policy_free(struct cpufreq_policy *policy)
{
free_cpumask_var(policy->related_cpus);
list_for_each_entry(tpolicy, &cpufreq_policy_list, policy_list) {
if (cpumask_test_cpu(cpu, tpolicy->related_cpus)) {
read_unlock_irqrestore(&cpufreq_driver_lock, flags);
- ret = cpufreq_add_policy_cpu(tpolicy, cpu, dev, frozen);
+ ret = cpufreq_add_policy_cpu(tpolicy, cpu, dev);
up_read(&cpufreq_rwsem);
return ret;
}
read_unlock_irqrestore(&cpufreq_driver_lock, flags);
#endif
- if (frozen)
- /* Restore the saved policy when doing light-weight init */
- policy = cpufreq_policy_restore(cpu);
- else
+ /*
+ * Restore the saved policy when doing light-weight init and fall back
+ * to the full init if that fails.
+ */
+ policy = frozen ? cpufreq_policy_restore(cpu) : NULL;
+ if (!policy) {
+ frozen = false;
policy = cpufreq_policy_alloc();
-
- if (!policy)
- goto nomem_out;
-
+ if (!policy)
+ goto nomem_out;
+ }
/*
* In the resume path, since we restore a saved policy, the assignment
*/
cpumask_and(policy->cpus, policy->cpus, cpu_online_mask);
- policy->user_policy.min = policy->min;
- policy->user_policy.max = policy->max;
+ if (!frozen) {
+ policy->user_policy.min = policy->min;
+ policy->user_policy.max = policy->max;
+ }
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_START, policy);
cpufreq_init_policy(policy);
+ if (!frozen) {
+ policy->user_policy.policy = policy->policy;
+ policy->user_policy.governor = policy->governor;
+ }
+
kobject_uevent(&policy->kobj, KOBJ_ADD);
up_read(&cpufreq_rwsem);
if (cpufreq_driver->exit)
cpufreq_driver->exit(policy);
err_set_policy_cpu:
+ if (frozen) {
+ /* Do not leave stale fallback data behind. */
+ per_cpu(cpufreq_cpu_data_fallback, cpu) = NULL;
+ cpufreq_policy_put_kobj(policy);
+ }
cpufreq_policy_free(policy);
+
nomem_out:
up_read(&cpufreq_rwsem);
}
static int cpufreq_nominate_new_policy_cpu(struct cpufreq_policy *policy,
- unsigned int old_cpu, bool frozen)
+ unsigned int old_cpu)
{
struct device *cpu_dev;
int ret;
/* first sibling now owns the new sysfs dir */
cpu_dev = get_cpu_device(cpumask_any_but(policy->cpus, old_cpu));
- /* Don't touch sysfs files during light-weight tear-down */
- if (frozen)
- return cpu_dev->id;
-
sysfs_remove_link(&cpu_dev->kobj, "cpufreq");
ret = kobject_move(&policy->kobj, &cpu_dev->kobj);
if (ret) {
if (!frozen)
sysfs_remove_link(&dev->kobj, "cpufreq");
} else if (cpus > 1) {
- new_cpu = cpufreq_nominate_new_policy_cpu(policy, cpu, frozen);
+ new_cpu = cpufreq_nominate_new_policy_cpu(policy, cpu);
if (new_cpu >= 0) {
update_policy_cpu(policy, new_cpu);
int ret;
unsigned long flags;
struct cpufreq_policy *policy;
- struct kobject *kobj;
- struct completion *cmp;
read_lock_irqsave(&cpufreq_driver_lock, flags);
policy = per_cpu(cpufreq_cpu_data, cpu);
}
}
- if (!frozen) {
- down_read(&policy->rwsem);
- kobj = &policy->kobj;
- cmp = &policy->kobj_unregister;
- up_read(&policy->rwsem);
- kobject_put(kobj);
-
- /*
- * We need to make sure that the underlying kobj is
- * actually not referenced anymore by anybody before we
- * proceed with unloading.
- */
- pr_debug("waiting for dropping of refcount\n");
- wait_for_completion(cmp);
- pr_debug("wait complete\n");
- }
+ if (!frozen)
+ cpufreq_policy_put_kobj(policy);
/*
* Perform the ->exit() even during light-weight tear-down,
dbs_info->requested_freq += get_freq_target(cs_tuners, policy);
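+	/* never push the requested frequency above the policy maximum */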
+ if (dbs_info->requested_freq > policy->max)
+ dbs_info->requested_freq = policy->max;
+
__cpufreq_driver_target(policy, dbs_info->requested_freq,
CPUFREQ_RELATION_H);
return;
dbs_data->cdata->gov_dbs_timer);
}
- /*
- * conservative does not implement micro like ondemand
- * governor, thus we are bound to jiffes/HZ
- */
if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
cs_dbs_info->down_skip = 0;
cs_dbs_info->enable = 1;
pr_debug("%s: failed initialization\n", __func__);
return -EINVAL;
}
-EXPORT_SYMBOL(exynos4210_cpufreq_init);
pr_debug("%s: failed initialization\n", __func__);
return -EINVAL;
}
-EXPORT_SYMBOL(exynos4x12_cpufreq_init);
pr_err("%s: failed initialization\n", __func__);
return -EINVAL;
}
-EXPORT_SYMBOL(exynos5250_cpufreq_init);
cpu = all_cpu_data[cpunum];
intel_pstate_get_cpu_pstates(cpu);
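+	/* treat a zero current P-state as a failed probe and drop this CPU */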
+ if (!cpu->pstate.current_pstate) {
+ all_cpu_data[cpunum] = NULL;
+ kfree(cpu);
+ return -ENODATA;
+ }
cpu->cpu = cpunum;
static int omap_target(struct cpufreq_policy *policy, unsigned int index)
{
+ int r, ret;
struct dev_pm_opp *opp;
unsigned long freq, volt = 0, volt_old = 0, tol = 0;
unsigned int old_freq, new_freq;
mutex_lock(&tegra_cpu_lock);
- if (is_suspended) {
- ret = -EBUSY;
+ if (is_suspended)
goto out;
- }
freq = freq_table[index].frequency;
.state_count = 2,
};
-static int __init calxeda_cpuidle_probe(struct platform_device *pdev)
+static int calxeda_cpuidle_probe(struct platform_device *pdev)
{
return cpuidle_register(&calxeda_idle_driver, NULL);
}
*/
void cpuidle_unregister_device(struct cpuidle_device *dev)
{
- if (dev->registered == 0)
+ if (!dev || dev->registered == 0)
return;
cpuidle_pause_and_lock();
help
Enables the driver module for Freescale's Cryptographic Accelerator
and Assurance Module (CAAM), also known as the SEC version 4 (SEC4).
- This module adds a job ring operation interface, and configures h/w
+ This module creates job ring devices, and configures h/w
to operate as a DPAA component automatically, depending
on h/w feature availability.
To compile this driver as a module, choose M here: the module
will be called caam.
+config CRYPTO_DEV_FSL_CAAM_JR
+ tristate "Freescale CAAM Job Ring driver backend"
+ depends on CRYPTO_DEV_FSL_CAAM
+ default y
+ help
+	  Enables the driver module for Job Rings which are part of
+	  Freescale's Cryptographic Accelerator and Assurance Module
+	  (CAAM). This module adds a job ring operation interface.
+
+ To compile this driver as a module, choose M here: the module
+ will be called caam_jr.
+
config CRYPTO_DEV_FSL_CAAM_RINGSIZE
int "Job Ring size"
- depends on CRYPTO_DEV_FSL_CAAM
+ depends on CRYPTO_DEV_FSL_CAAM_JR
range 2 9
default "9"
help
config CRYPTO_DEV_FSL_CAAM_INTC
bool "Job Ring interrupt coalescing"
- depends on CRYPTO_DEV_FSL_CAAM
+ depends on CRYPTO_DEV_FSL_CAAM_JR
default n
help
Enable the Job Ring's interrupt coalescing feature.
config CRYPTO_DEV_FSL_CAAM_CRYPTO_API
tristate "Register algorithm implementations with the Crypto API"
- depends on CRYPTO_DEV_FSL_CAAM
+ depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR
default y
select CRYPTO_ALGAPI
select CRYPTO_AUTHENC
config CRYPTO_DEV_FSL_CAAM_AHASH_API
tristate "Register hash algorithm implementations with Crypto API"
- depends on CRYPTO_DEV_FSL_CAAM
+ depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR
default y
select CRYPTO_HASH
help
config CRYPTO_DEV_FSL_CAAM_RNG_API
tristate "Register caam device for hwrng API"
- depends on CRYPTO_DEV_FSL_CAAM
+ depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR
default y
select CRYPTO_RNG
select HW_RANDOM
endif
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o
+obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o
-caam-objs := ctrl.o jr.o error.o key_gen.o
+caam-objs := ctrl.o
+caam_jr-objs := jr.o key_gen.o error.o
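+# jr.o, key_gen.o and error.o now build into the separate caam_jr module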
#else
#define debug(format, arg...)
#endif
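+/* list of registered algorithms, now local to this module */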
+static struct list_head alg_list;
/* Set DK bit in class 1 operation if shared */
static inline void append_dec_op1(u32 *desc, u32 type)
ivsize, 1);
print_hex_dump(KERN_ERR, "dst @"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, sg_virt(req->dst),
- req->cryptlen, 1);
+ req->cryptlen - ctx->authsize, 1);
#endif
if (err) {
(edesc->src_nents ? : 1);
in_options = LDST_SGF;
}
- if (encrypt)
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize +
- req->cryptlen - authsize, in_options);
- else
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize +
- req->cryptlen, in_options);
+
+ append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
+ in_options);
if (likely(req->src == req->dst)) {
if (all_contig) {
}
}
if (encrypt)
- append_seq_out_ptr(desc, dst_dma, req->cryptlen, out_options);
+ append_seq_out_ptr(desc, dst_dma, req->cryptlen + authsize,
+ out_options);
else
append_seq_out_ptr(desc, dst_dma, req->cryptlen - authsize,
out_options);
sec4_sg_index += edesc->assoc_nents + 1 + edesc->src_nents;
in_options = LDST_SGF;
}
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize +
- req->cryptlen - authsize, in_options);
+ append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
+ in_options);
if (contig & GIV_DST_CONTIG) {
dst_dma = edesc->iv_dma;
}
}
- append_seq_out_ptr(desc, dst_dma, ivsize + req->cryptlen, out_options);
+ append_seq_out_ptr(desc, dst_dma, ivsize + req->cryptlen + authsize,
+ out_options);
}
/*
* allocate and map the aead extended descriptor
*/
static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
- int desc_bytes, bool *all_contig_ptr)
+ int desc_bytes, bool *all_contig_ptr,
+ bool encrypt)
{
struct crypto_aead *aead = crypto_aead_reqtfm(req);
struct caam_ctx *ctx = crypto_aead_ctx(aead);
bool assoc_chained = false, src_chained = false, dst_chained = false;
int ivsize = crypto_aead_ivsize(aead);
int sec4_sg_index, sec4_sg_len = 0, sec4_sg_bytes;
+ unsigned int authsize = ctx->authsize;
assoc_nents = sg_count(req->assoc, req->assoclen, &assoc_chained);
- src_nents = sg_count(req->src, req->cryptlen, &src_chained);
- if (unlikely(req->dst != req->src))
- dst_nents = sg_count(req->dst, req->cryptlen, &dst_chained);
+ if (unlikely(req->dst != req->src)) {
+ src_nents = sg_count(req->src, req->cryptlen, &src_chained);
+ dst_nents = sg_count(req->dst,
+ req->cryptlen +
+ (encrypt ? authsize : (-authsize)),
+ &dst_chained);
+ } else {
+ src_nents = sg_count(req->src,
+ req->cryptlen +
+ (encrypt ? authsize : 0),
+ &src_chained);
+ }
sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
DMA_TO_DEVICE, assoc_chained);
u32 *desc;
int ret = 0;
- req->cryptlen += ctx->authsize;
-
/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, DESC_JOB_IO_LEN *
- CAAM_CMD_SZ, &all_contig);
+ CAAM_CMD_SZ, &all_contig, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, DESC_JOB_IO_LEN *
- CAAM_CMD_SZ, &all_contig);
+ CAAM_CMD_SZ, &all_contig, false);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
src_nents = sg_count(req->src, req->cryptlen, &src_chained);
if (unlikely(req->dst != req->src))
- dst_nents = sg_count(req->dst, req->cryptlen, &dst_chained);
+ dst_nents = sg_count(req->dst, req->cryptlen + ctx->authsize,
+ &dst_chained);
sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
DMA_TO_DEVICE, assoc_chained);
u32 *desc;
int ret = 0;
- req->cryptlen += ctx->authsize;
-
/* allocate extended descriptor */
edesc = aead_giv_edesc_alloc(areq, DESC_JOB_IO_LEN *
CAAM_CMD_SZ, &contig);
struct caam_crypto_alg {
struct list_head entry;
- struct device *ctrldev;
int class1_alg_type;
int class2_alg_type;
int alg_op;
struct caam_crypto_alg *caam_alg =
container_of(alg, struct caam_crypto_alg, crypto_alg);
struct caam_ctx *ctx = crypto_tfm_ctx(tfm);
- struct caam_drv_private *priv = dev_get_drvdata(caam_alg->ctrldev);
- int tgt_jr = atomic_inc_return(&priv->tfm_count);
- /*
- * distribute tfms across job rings to ensure in-order
- * crypto request processing per tfm
- */
- ctx->jrdev = priv->jrdev[(tgt_jr / 2) % priv->total_jobrs];
+ ctx->jrdev = caam_jr_alloc();
+ if (IS_ERR(ctx->jrdev)) {
+ pr_err("Job Ring Device allocation for transform failed\n");
+ return PTR_ERR(ctx->jrdev);
+ }
/* copy descriptor header template value */
ctx->class1_alg_type = OP_TYPE_CLASS1_ALG | caam_alg->class1_alg_type;
dma_unmap_single(ctx->jrdev, ctx->sh_desc_givenc_dma,
desc_bytes(ctx->sh_desc_givenc),
DMA_TO_DEVICE);
+
+ caam_jr_free(ctx->jrdev);
}
static void __exit caam_algapi_exit(void)
{
- struct device_node *dev_node;
- struct platform_device *pdev;
- struct device *ctrldev;
- struct caam_drv_private *priv;
struct caam_crypto_alg *t_alg, *n;
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
- if (!dev_node) {
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");
- if (!dev_node)
- return;
- }
-
- pdev = of_find_device_by_node(dev_node);
- if (!pdev)
- return;
-
- ctrldev = &pdev->dev;
- of_node_put(dev_node);
- priv = dev_get_drvdata(ctrldev);
-
- if (!priv->alg_list.next)
+ if (!alg_list.next)
return;
- list_for_each_entry_safe(t_alg, n, &priv->alg_list, entry) {
+ list_for_each_entry_safe(t_alg, n, &alg_list, entry) {
crypto_unregister_alg(&t_alg->crypto_alg);
list_del(&t_alg->entry);
kfree(t_alg);
}
}
-static struct caam_crypto_alg *caam_alg_alloc(struct device *ctrldev,
- struct caam_alg_template
+static struct caam_crypto_alg *caam_alg_alloc(struct caam_alg_template
*template)
{
struct caam_crypto_alg *t_alg;
t_alg = kzalloc(sizeof(struct caam_crypto_alg), GFP_KERNEL);
if (!t_alg) {
- dev_err(ctrldev, "failed to allocate t_alg\n");
+ pr_err("failed to allocate t_alg\n");
return ERR_PTR(-ENOMEM);
}
t_alg->class1_alg_type = template->class1_alg_type;
t_alg->class2_alg_type = template->class2_alg_type;
t_alg->alg_op = template->alg_op;
- t_alg->ctrldev = ctrldev;
return t_alg;
}
static int __init caam_algapi_init(void)
{
- struct device_node *dev_node;
- struct platform_device *pdev;
- struct device *ctrldev;
- struct caam_drv_private *priv;
int i = 0, err = 0;
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
- if (!dev_node) {
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");
- if (!dev_node)
- return -ENODEV;
- }
-
- pdev = of_find_device_by_node(dev_node);
- if (!pdev)
- return -ENODEV;
-
- ctrldev = &pdev->dev;
- priv = dev_get_drvdata(ctrldev);
- of_node_put(dev_node);
-
- INIT_LIST_HEAD(&priv->alg_list);
-
- atomic_set(&priv->tfm_count, -1);
+ INIT_LIST_HEAD(&alg_list);
/* register crypto algorithms the device supports */
for (i = 0; i < ARRAY_SIZE(driver_algs); i++) {
/* TODO: check if h/w supports alg */
struct caam_crypto_alg *t_alg;
- t_alg = caam_alg_alloc(ctrldev, &driver_algs[i]);
+ t_alg = caam_alg_alloc(&driver_algs[i]);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
- dev_warn(ctrldev, "%s alg allocation failed\n",
- driver_algs[i].driver_name);
+ pr_warn("%s alg allocation failed\n",
+ driver_algs[i].driver_name);
continue;
}
err = crypto_register_alg(&t_alg->crypto_alg);
if (err) {
- dev_warn(ctrldev, "%s alg registration failed\n",
+ pr_warn("%s alg registration failed\n",
t_alg->crypto_alg.cra_driver_name);
kfree(t_alg);
} else
- list_add_tail(&t_alg->entry, &priv->alg_list);
+ list_add_tail(&t_alg->entry, &alg_list);
}
- if (!list_empty(&priv->alg_list))
- dev_info(ctrldev, "%s algorithms registered in /proc/crypto\n",
- (char *)of_get_property(dev_node, "compatible", NULL));
+ if (!list_empty(&alg_list))
+ pr_info("caam algorithms registered in /proc/crypto\n");
return err;
}
#define debug(format, arg...)
#endif
+
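+/* list of registered hash algorithms, now local to this module */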
+static struct list_head hash_list;
+
/* ahash per-session context */
struct caam_hash_ctx {
struct device *jrdev;
struct caam_hash_alg {
struct list_head entry;
- struct device *ctrldev;
int alg_type;
int alg_op;
struct ahash_alg ahash_alg;
struct caam_hash_alg *caam_hash =
container_of(alg, struct caam_hash_alg, ahash_alg);
struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm);
- struct caam_drv_private *priv = dev_get_drvdata(caam_hash->ctrldev);
/* Sizes for MDHA running digests: MD5, SHA1, 224, 256, 384, 512 */
static const u8 runninglen[] = { HASH_MSG_LEN + MD5_DIGEST_SIZE,
HASH_MSG_LEN + SHA1_DIGEST_SIZE,
HASH_MSG_LEN + SHA256_DIGEST_SIZE,
HASH_MSG_LEN + 64,
HASH_MSG_LEN + SHA512_DIGEST_SIZE };
- int tgt_jr = atomic_inc_return(&priv->tfm_count);
int ret = 0;
/*
- * distribute tfms across job rings to ensure in-order
+	 * Get a Job Ring from the Job Ring driver to ensure in-order
* crypto request processing per tfm
*/
- ctx->jrdev = priv->jrdev[tgt_jr % priv->total_jobrs];
-
+ ctx->jrdev = caam_jr_alloc();
+ if (IS_ERR(ctx->jrdev)) {
+ pr_err("Job Ring Device allocation for transform failed\n");
+ return PTR_ERR(ctx->jrdev);
+ }
/* copy descriptor header template value */
ctx->alg_type = OP_TYPE_CLASS2_ALG | caam_hash->alg_type;
ctx->alg_op = OP_TYPE_CLASS2_ALG | caam_hash->alg_op;
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_finup_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_finup_dma,
desc_bytes(ctx->sh_desc_finup), DMA_TO_DEVICE);
+
+ caam_jr_free(ctx->jrdev);
}
static void __exit caam_algapi_hash_exit(void)
{
- struct device_node *dev_node;
- struct platform_device *pdev;
- struct device *ctrldev;
- struct caam_drv_private *priv;
struct caam_hash_alg *t_alg, *n;
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
- if (!dev_node) {
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");
- if (!dev_node)
- return;
- }
-
- pdev = of_find_device_by_node(dev_node);
- if (!pdev)
+ if (!hash_list.next)
return;
- ctrldev = &pdev->dev;
- of_node_put(dev_node);
- priv = dev_get_drvdata(ctrldev);
-
- if (!priv->hash_list.next)
- return;
-
- list_for_each_entry_safe(t_alg, n, &priv->hash_list, entry) {
+ list_for_each_entry_safe(t_alg, n, &hash_list, entry) {
crypto_unregister_ahash(&t_alg->ahash_alg);
list_del(&t_alg->entry);
kfree(t_alg);
}
static struct caam_hash_alg *
-caam_hash_alloc(struct device *ctrldev, struct caam_hash_template *template,
+caam_hash_alloc(struct caam_hash_template *template,
bool keyed)
{
struct caam_hash_alg *t_alg;
t_alg = kzalloc(sizeof(struct caam_hash_alg), GFP_KERNEL);
if (!t_alg) {
- dev_err(ctrldev, "failed to allocate t_alg\n");
+ pr_err("failed to allocate t_alg\n");
return ERR_PTR(-ENOMEM);
}
t_alg->alg_type = template->alg_type;
t_alg->alg_op = template->alg_op;
- t_alg->ctrldev = ctrldev;
return t_alg;
}
static int __init caam_algapi_hash_init(void)
{
- struct device_node *dev_node;
- struct platform_device *pdev;
- struct device *ctrldev;
- struct caam_drv_private *priv;
int i = 0, err = 0;
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
- if (!dev_node) {
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");
- if (!dev_node)
- return -ENODEV;
- }
-
- pdev = of_find_device_by_node(dev_node);
- if (!pdev)
- return -ENODEV;
-
- ctrldev = &pdev->dev;
- priv = dev_get_drvdata(ctrldev);
- of_node_put(dev_node);
-
- INIT_LIST_HEAD(&priv->hash_list);
-
- atomic_set(&priv->tfm_count, -1);
+ INIT_LIST_HEAD(&hash_list);
/* register crypto algorithms the device supports */
for (i = 0; i < ARRAY_SIZE(driver_hash); i++) {
struct caam_hash_alg *t_alg;
/* register hmac version */
- t_alg = caam_hash_alloc(ctrldev, &driver_hash[i], true);
+ t_alg = caam_hash_alloc(&driver_hash[i], true);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
- dev_warn(ctrldev, "%s alg allocation failed\n",
- driver_hash[i].driver_name);
+ pr_warn("%s alg allocation failed\n",
+ driver_hash[i].driver_name);
continue;
}
err = crypto_register_ahash(&t_alg->ahash_alg);
if (err) {
- dev_warn(ctrldev, "%s alg registration failed\n",
+ pr_warn("%s alg registration failed\n",
t_alg->ahash_alg.halg.base.cra_driver_name);
kfree(t_alg);
} else
- list_add_tail(&t_alg->entry, &priv->hash_list);
+ list_add_tail(&t_alg->entry, &hash_list);
/* register unkeyed version */
- t_alg = caam_hash_alloc(ctrldev, &driver_hash[i], false);
+ t_alg = caam_hash_alloc(&driver_hash[i], false);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
- dev_warn(ctrldev, "%s alg allocation failed\n",
- driver_hash[i].driver_name);
+ pr_warn("%s alg allocation failed\n",
+ driver_hash[i].driver_name);
continue;
}
err = crypto_register_ahash(&t_alg->ahash_alg);
if (err) {
- dev_warn(ctrldev, "%s alg registration failed\n",
+ pr_warn("%s alg registration failed\n",
t_alg->ahash_alg.halg.base.cra_driver_name);
kfree(t_alg);
} else
- list_add_tail(&t_alg->entry, &priv->hash_list);
+ list_add_tail(&t_alg->entry, &hash_list);
}
return err;
static void __exit caam_rng_exit(void)
{
+ caam_jr_free(rng_ctx.jrdev);
hwrng_unregister(&caam_rng);
}
static int __init caam_rng_init(void)
{
- struct device_node *dev_node;
- struct platform_device *pdev;
- struct device *ctrldev;
- struct caam_drv_private *priv;
-
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
- if (!dev_node) {
- dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");
- if (!dev_node)
- return -ENODEV;
- }
-
- pdev = of_find_device_by_node(dev_node);
- if (!pdev)
- return -ENODEV;
+ struct device *dev;
- ctrldev = &pdev->dev;
- priv = dev_get_drvdata(ctrldev);
- of_node_put(dev_node);
+ dev = caam_jr_alloc();
+ if (IS_ERR(dev)) {
+		pr_err("Job Ring Device allocation for caam-rng failed\n");
+ return PTR_ERR(dev);
+ }
- caam_init_rng(&rng_ctx, priv->jrdev[0]);
+ caam_init_rng(&rng_ctx, dev);
- dev_info(priv->jrdev[0], "registering rng-caam\n");
+ dev_info(dev, "registering rng-caam\n");
return hwrng_register(&caam_rng);
}
#include "error.h"
#include "ctrl.h"
-static int caam_remove(struct platform_device *pdev)
-{
- struct device *ctrldev;
- struct caam_drv_private *ctrlpriv;
- struct caam_drv_private_jr *jrpriv;
- struct caam_full __iomem *topregs;
- int ring, ret = 0;
-
- ctrldev = &pdev->dev;
- ctrlpriv = dev_get_drvdata(ctrldev);
- topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
-
- /* shut down JobRs */
- for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) {
- ret |= caam_jr_shutdown(ctrlpriv->jrdev[ring]);
- jrpriv = dev_get_drvdata(ctrlpriv->jrdev[ring]);
- irq_dispose_mapping(jrpriv->irq);
- }
-
- /* Shut down debug views */
-#ifdef CONFIG_DEBUG_FS
- debugfs_remove_recursive(ctrlpriv->dfs_root);
-#endif
-
- /* Unmap controller region */
- iounmap(&topregs->ctrl);
-
- kfree(ctrlpriv->jrdev);
- kfree(ctrlpriv);
-
- return ret;
-}
-
/*
* Descriptor to instantiate RNG State Handle 0 in normal mode and
* load the JDKEK, TDKEK and TDSK registers
*/
-static void build_instantiation_desc(u32 *desc)
+static void build_instantiation_desc(u32 *desc, int handle, int do_sk)
{
- u32 *jump_cmd;
+ u32 *jump_cmd, op_flags;
init_job_desc(desc, 0);
+ op_flags = OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
+ (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT;
+
/* INIT RNG in non-test mode */
- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- OP_ALG_AS_INIT);
+ append_operation(desc, op_flags);
+
+ if (!handle && do_sk) {
+ /*
+ * For SH0, Secure Keys must be generated as well
+ */
+
+ /* wait for done */
+ jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1);
+ set_jump_tgt_here(desc, jump_cmd);
+
+ /*
+ * load 1 to clear written reg:
+	 * resets the done interrupt and returns the RNG to idle.
+ */
+ append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW);
+
+ /* Initialize State Handle */
+ append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
+ OP_ALG_AAI_RNG4_SK);
+ }
- /* wait for done */
- jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1);
- set_jump_tgt_here(desc, jump_cmd);
+ append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);
+}
- /*
- * load 1 to clear written reg:
- * resets the done interrupt and returns the RNG to idle.
- */
- append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW);
+/* Descriptor for deinstantiation of a State Handle of the RNG block. */
+static void build_deinstantiation_desc(u32 *desc, int handle)
+{
+ init_job_desc(desc, 0);
- /* generate secure keys (non-test) */
+	/* Uninstantiate the requested State Handle */
append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- OP_ALG_RNG4_SK);
+ (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INITFINAL);
+
+ append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);
}
-static int instantiate_rng(struct device *ctrldev)
+/*
+ * run_descriptor_deco0 - runs a descriptor on DECO0, under direct control of
+ * the software (no JR/QI used).
+ * @ctrldev - pointer to device
+ * @desc - pointer to the descriptor to be executed
+ * @status - descriptor status, after being run
+ *
+ * Return: - 0 if no error occurred
+ * - -ENODEV if the DECO couldn't be acquired
+ * - -EAGAIN if an error occurred while executing the descriptor
+ */
+static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,
+ u32 *status)
{
struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);
struct caam_full __iomem *topregs;
unsigned int timeout = 100000;
- u32 *desc;
- int i, ret = 0;
-
- desc = kmalloc(CAAM_CMD_SZ * 6, GFP_KERNEL | GFP_DMA);
- if (!desc) {
- dev_err(ctrldev, "can't allocate RNG init descriptor memory\n");
- return -ENOMEM;
- }
- build_instantiation_desc(desc);
+ u32 deco_dbg_reg, flags;
+ int i;
/* Set the bit to request direct access to DECO0 */
topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
if (!timeout) {
dev_err(ctrldev, "failed to acquire DECO 0\n");
- ret = -EIO;
- goto out;
+ clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);
+ return -ENODEV;
}
for (i = 0; i < desc_len(desc); i++)
- topregs->deco.descbuf[i] = *(desc + i);
+ wr_reg32(&topregs->deco.descbuf[i], *(desc + i));
+
+ flags = DECO_JQCR_WHL;
+ /*
+ * If the descriptor length is 4 words or longer, then the
+ * FOUR bit in JRCTRL register must be set.
+ */
+ if (desc_len(desc) >= 4)
+ flags |= DECO_JQCR_FOUR;
- wr_reg32(&topregs->deco.jr_ctl_hi, DECO_JQCR_WHL | DECO_JQCR_FOUR);
+ /* Instruct the DECO to execute it */
+ wr_reg32(&topregs->deco.jr_ctl_hi, flags);
timeout = 10000000;
- while ((rd_reg32(&topregs->deco.desc_dbg) & DECO_DBG_VALID) &&
- --timeout)
+ do {
+ deco_dbg_reg = rd_reg32(&topregs->deco.desc_dbg);
+ /*
+ * If an error occurred in the descriptor, then
+ * the DECO status field will be set to 0x0D
+ */
+ if ((deco_dbg_reg & DESC_DBG_DECO_STAT_MASK) ==
+ DESC_DBG_DECO_STAT_HOST_ERR)
+ break;
cpu_relax();
+ } while ((deco_dbg_reg & DESC_DBG_DECO_STAT_VALID) && --timeout);
- if (!timeout) {
- dev_err(ctrldev, "failed to instantiate RNG\n");
- ret = -EIO;
- }
+ *status = rd_reg32(&topregs->deco.op_status_hi) &
+ DECO_OP_STATUS_HI_ERR_MASK;
+ /* Mark the DECO as free */
clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);
-out:
+
+ if (!timeout)
+ return -EAGAIN;
+
+ return 0;
+}
+
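The helper above is the building block for the RNG init/teardown paths that follow. A minimal calling sketch (illustrative only; it assumes a DMA-able descriptor buffer `desc` and the driver-internal helpers are in scope):

	u32 status;
	int err;

	build_instantiation_desc(desc, 0 /* SH0 */, 1 /* gen_sk */);
	err = run_descriptor_deco0(ctrldev, desc, &status);
	if (err)	/* -ENODEV: DECO0 busy, -EAGAIN: execution error */
		return err;
	if (status)	/* descriptor ran, but CAAM flagged an error */
		return -EAGAIN;

This is the same ret/status double check that instantiate_rng() below performs for each state handle.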
+/*
+ * instantiate_rng - builds and executes a descriptor on DECO0,
+ * which initializes the RNG block.
+ * @ctrldev - pointer to device
+ * @state_handle_mask - bitmask containing the instantiation status
+ * for the RNG4 state handles which exist in
+ * the RNG4 block: 1 if it's been instantiated
+ * by an external entity, 0 otherwise.
+ * @gen_sk - generate data to be loaded into the JDKEK, TDKEK and TDSK;
+ * Caution: this can be done only once; if the keys need to be
+ * regenerated, a POR is required
+ *
+ * Return: - 0 if no error occurred
+ * - -ENOMEM if there isn't enough memory to allocate the descriptor
+ * - -ENODEV if DECO0 couldn't be acquired
+ * - -EAGAIN if an error occurred when executing the descriptor
+ * e.g. there was an RNG hardware error due to not "good enough"
+ * entropy being acquired.
+ */
+static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
+ int gen_sk)
+{
+ struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);
+ struct caam_full __iomem *topregs;
+ struct rng4tst __iomem *r4tst;
+ u32 *desc, status, rdsta_val;
+ int ret = 0, sh_idx;
+
+ topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
+ r4tst = &topregs->ctrl.r4tst[0];
+
+ desc = kmalloc(CAAM_CMD_SZ * 7, GFP_KERNEL);
+ if (!desc)
+ return -ENOMEM;
+
+ for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) {
+ /*
+ * If the corresponding bit is set, this state handle
+ * was initialized by somebody else, so it's left alone.
+ */
+ if ((1 << sh_idx) & state_handle_mask)
+ continue;
+
+ /* Create the descriptor for instantiating RNG State Handle */
+ build_instantiation_desc(desc, sh_idx, gen_sk);
+
+ /* Try to run it through DECO0 */
+ ret = run_descriptor_deco0(ctrldev, desc, &status);
+
+ /*
+ * If ret is not 0, or descriptor status is not 0, then
+ * something went wrong. No need to try the next state
+ * handle (if available), bail out here.
+ * Also, if for some reason, the State Handle didn't get
+ * instantiated although the descriptor has finished
+ * without any error (HW optimizations for later
+ * CAAM eras), then try again.
+ */
+ rdsta_val =
+ rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IFMASK;
+ if (status || !(rdsta_val & (1 << sh_idx)))
+ ret = -EAGAIN;
+ if (ret)
+ break;
+
+ dev_info(ctrldev, "Instantiated RNG4 SH%d\n", sh_idx);
+ /* Clear the contents before recreating the descriptor */
+ memset(desc, 0x00, CAAM_CMD_SZ * 7);
+ }
+
kfree(desc);
+
return ret;
}
/*
- * By default, the TRNG runs for 200 clocks per sample;
- * 1600 clocks per sample generates better entropy.
+ * deinstantiate_rng - builds and executes a descriptor on DECO0,
+ * which deinitializes the RNG block.
+ * @ctrldev - pointer to device
+ * @state_handle_mask - bitmask containing the instantiation status
+ * for the RNG4 state handles which exist in
+ * the RNG4 block: 1 if it's been instantiated
+ *
+ * Return: - 0 if no error occurred
+ * - -ENOMEM if there isn't enough memory to allocate the descriptor
+ * - -ENODEV if DECO0 couldn't be acquired
+ * - -EAGAIN if an error occurred when executing the descriptor
*/
-static void kick_trng(struct platform_device *pdev)
+static int deinstantiate_rng(struct device *ctrldev, int state_handle_mask)
+{
+ u32 *desc, status;
+ int sh_idx, ret = 0;
+
+ desc = kmalloc(CAAM_CMD_SZ * 3, GFP_KERNEL);
+ if (!desc)
+ return -ENOMEM;
+
+ for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) {
+ /*
+ * If the corresponding bit is set, then it means the state
+ * handle was initialized by us, and thus it needs to be
+ * deinitialized as well
+ */
+ if ((1 << sh_idx) & state_handle_mask) {
+ /*
+ * Create the descriptor for deinstantiating this state
+ * handle
+ */
+ build_deinstantiation_desc(desc, sh_idx);
+
+ /* Try to run it through DECO0 */
+ ret = run_descriptor_deco0(ctrldev, desc, &status);
+
+ if (ret || status) {
+ dev_err(ctrldev,
+ "Failed to deinstantiate RNG4 SH%d\n",
+ sh_idx);
+ break;
+ }
+ dev_info(ctrldev, "Deinstantiated RNG4 SH%d\n", sh_idx);
+ }
+ }
+
+ kfree(desc);
+
+ return ret;
+}
+
+static int caam_remove(struct platform_device *pdev)
+{
+ struct device *ctrldev;
+ struct caam_drv_private *ctrlpriv;
+ struct caam_full __iomem *topregs;
+ int ring, ret = 0;
+
+ ctrldev = &pdev->dev;
+ ctrlpriv = dev_get_drvdata(ctrldev);
+ topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
+
+ /* Remove platform devices for JobRs */
+ for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) {
+ if (ctrlpriv->jrpdev[ring])
+ of_device_unregister(ctrlpriv->jrpdev[ring]);
+ }
+
+ /* De-initialize RNG state handles initialized by this driver. */
+ if (ctrlpriv->rng4_sh_init)
+ deinstantiate_rng(ctrldev, ctrlpriv->rng4_sh_init);
+
+ /* Shut down debug views */
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove_recursive(ctrlpriv->dfs_root);
+#endif
+
+ /* Unmap controller region */
+ iounmap(&topregs->ctrl);
+
+ kfree(ctrlpriv->jrpdev);
+ kfree(ctrlpriv);
+
+ return ret;
+}
+
+/*
+ * kick_trng - sets the various parameters for enabling the initialization
+ * of the RNG4 block in CAAM
+ * @pdev - pointer to the platform device
+ * @ent_delay - Defines the length (in system clocks) of each entropy sample.
+ */
+static void kick_trng(struct platform_device *pdev, int ent_delay)
{
struct device *ctrldev = &pdev->dev;
struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);
/* put RNG4 into program mode */
setbits32(&r4tst->rtmctl, RTMCTL_PRGM);
- /* 1600 clocks per sample */
+
+ /*
+ * Performance-wise, it does not make sense to
+ * set the delay to a value that is lower
+ * than the last one that worked (i.e. the state handles
+ * were instantiated properly). Thus, instead of wasting
+ * time trying to set the values controlling the sample
+ * frequency, the function simply returns.
+ */
+ val = (rd_reg32(&r4tst->rtsdctl) & RTSDCTL_ENT_DLY_MASK)
+ >> RTSDCTL_ENT_DLY_SHIFT;
+ if (ent_delay <= val) {
+ /* put RNG4 into run mode */
+ clrbits32(&r4tst->rtmctl, RTMCTL_PRGM);
+ return;
+ }
+
val = rd_reg32(&r4tst->rtsdctl);
- val = (val & ~RTSDCTL_ENT_DLY_MASK) | (1600 << RTSDCTL_ENT_DLY_SHIFT);
+ val = (val & ~RTSDCTL_ENT_DLY_MASK) |
+ (ent_delay << RTSDCTL_ENT_DLY_SHIFT);
wr_reg32(&r4tst->rtsdctl, val);
- /* min. freq. count */
- wr_reg32(&r4tst->rtfrqmin, 400);
- /* max. freq. count */
- wr_reg32(&r4tst->rtfrqmax, 6400);
+ /* min. freq. count, equal to 1/4 of the entropy sample length */
+ wr_reg32(&r4tst->rtfrqmin, ent_delay >> 2);
+ /* max. freq. count, equal to 8 times the entropy sample length */
+ wr_reg32(&r4tst->rtfrqmax, ent_delay << 3);
/* put RNG4 into run mode */
clrbits32(&r4tst->rtmctl, RTMCTL_PRGM);
}
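A worked example of the programming above: starting from RTSDCTL_ENT_DLY_MIN (1200 system clocks per sample), kick_trng() writes rtfrqmin = 1200 >> 2 = 300 and rtfrqmax = 1200 << 3 = 9600. Each -EAGAIN retry in caam_probe() below adds 400 to ent_delay, so the acceptable frequency window keeps widening until instantiation succeeds or RTSDCTL_ENT_DLY_MAX (12800) is reached.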
/* Probe routine for CAAM top (controller) level */
static int caam_probe(struct platform_device *pdev)
{
- int ret, ring, rspec;
+ int ret, ring, rspec, gen_sk, ent_delay = RTSDCTL_ENT_DLY_MIN;
u64 caam_id;
struct device *dev;
struct device_node *nprop, *np;
rspec++;
}
- ctrlpriv->jrdev = kzalloc(sizeof(struct device *) * rspec, GFP_KERNEL);
- if (ctrlpriv->jrdev == NULL) {
+ ctrlpriv->jrpdev = kzalloc(sizeof(struct platform_device *) * rspec,
+ GFP_KERNEL);
+ if (ctrlpriv->jrpdev == NULL) {
iounmap(&topregs->ctrl);
return -ENOMEM;
}
ring = 0;
ctrlpriv->total_jobrs = 0;
for_each_compatible_node(np, NULL, "fsl,sec-v4.0-job-ring") {
- caam_jr_probe(pdev, np, ring);
+ ctrlpriv->jrpdev[ring] =
+ of_platform_device_create(np, NULL, dev);
+ if (!ctrlpriv->jrpdev[ring]) {
+ pr_warn("JR%d Platform device creation error\n", ring);
+ continue;
+ }
ctrlpriv->total_jobrs++;
ring++;
}
if (!ring) {
for_each_compatible_node(np, NULL, "fsl,sec4.0-job-ring") {
- caam_jr_probe(pdev, np, ring);
+ ctrlpriv->jrpdev[ring] =
+ of_platform_device_create(np, NULL, dev);
+ if (!ctrlpriv->jrpdev[ring]) {
+ pr_warn("JR%d Platform device creation error\n",
+ ring);
+ continue;
+ }
ctrlpriv->total_jobrs++;
ring++;
}
/*
* If SEC has RNG version >= 4 and RNG state handle has not been
- * already instantiated ,do RNG instantiation
+ * already instantiated, do RNG instantiation
*/
- if ((cha_vid & CHA_ID_RNG_MASK) >> CHA_ID_RNG_SHIFT >= 4 &&
- !(rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IF0)) {
- kick_trng(pdev);
- ret = instantiate_rng(dev);
+ if ((cha_vid & CHA_ID_RNG_MASK) >> CHA_ID_RNG_SHIFT >= 4) {
+ ctrlpriv->rng4_sh_init =
+ rd_reg32(&topregs->ctrl.r4tst[0].rdsta);
+ /*
+ * If the secure keys (TDKEK, JDKEK, TDSK), were already
+ * generated, signal this to the function that is instantiating
+ * the state handles. An error would occur if RNG4 attempts
+ * to regenerate these keys before the next POR.
+ */
+ gen_sk = ctrlpriv->rng4_sh_init & RDSTA_SKVN ? 0 : 1;
+ ctrlpriv->rng4_sh_init &= RDSTA_IFMASK;
+ do {
+ int inst_handles =
+ rd_reg32(&topregs->ctrl.r4tst[0].rdsta) &
+ RDSTA_IFMASK;
+ /*
+			 * If either SH was instantiated by somebody else
+			 * (e.g. u-boot), then it is assumed that the entropy
+ * parameters are properly set and thus the function
+ * setting these (kick_trng(...)) is skipped.
+ * Also, if a handle was instantiated, do not change
+ * the TRNG parameters.
+ */
+ if (!(ctrlpriv->rng4_sh_init || inst_handles)) {
+ kick_trng(pdev, ent_delay);
+ ent_delay += 400;
+ }
+ /*
+ * if instantiate_rng(...) fails, the loop will rerun
+			 * and the kick_trng(...) function will modify the
+			 * upper and lower limits of the entropy sampling
+			 * interval, leading to a successful initialization of
+ * the RNG.
+ */
+ ret = instantiate_rng(dev, inst_handles,
+ gen_sk);
+ } while ((ret == -EAGAIN) && (ent_delay < RTSDCTL_ENT_DLY_MAX));
if (ret) {
+ dev_err(dev, "failed to instantiate RNG\n");
caam_remove(pdev);
return ret;
}
+ /*
+ * Set handles init'ed by this module as the complement of the
+ * already initialized ones
+ */
+ ctrlpriv->rng4_sh_init = ~ctrlpriv->rng4_sh_init & RDSTA_IFMASK;
/* Enable RDB bit so that RNG works faster */
setbits32(&topregs->ctrl.scfgr, SCFGR_RDBENABLE);
/* randomizer AAI set */
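To make the complement step above concrete with a hypothetical boot flow: if u-boot had already instantiated SH0, rdsta reports IF bits 0b01, so rng4_sh_init ends up as 0b10 and caam_remove() will later deinstantiate only SH1, leaving the externally owned handle untouched.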
#define OP_ALG_AAI_RNG (0x00 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG_NOZERO (0x10 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG_ODD (0x20 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_NZB (0x10 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG_OBP (0x20 << OP_ALG_AAI_SHIFT)
+
+/* RNG4 AAI set */
+#define OP_ALG_AAI_RNG4_SH_0 (0x00 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1 (0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_PS (0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_AI (0x80 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SK (0x100 << OP_ALG_AAI_SHIFT)
/* hmac/smac AAI set */
#define OP_ALG_AAI_HASH (0x00 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_GSM (0x10 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_EDGE (0x20 << OP_ALG_AAI_SHIFT)
-/* RNG4 set */
-#define OP_ALG_RNG4_SHIFT 4
-#define OP_ALG_RNG4_MASK (0x1f3 << OP_ALG_RNG4_SHIFT)
-
-#define OP_ALG_RNG4_SK (0x100 << OP_ALG_RNG4_SHIFT)
-
#define OP_ALG_AS_SHIFT 2
#define OP_ALG_AS_MASK (0x3 << OP_ALG_AS_SHIFT)
#define OP_ALG_AS_UPDATE (0 << OP_ALG_AS_SHIFT)
/* Private sub-storage for a single JobR */
struct caam_drv_private_jr {
- struct device *parentdev; /* points back to controller dev */
- struct platform_device *jr_pdev;/* points to platform device for JR */
+ struct list_head list_node; /* Job Ring device list */
+ struct device *dev;
int ridx;
struct caam_job_ring __iomem *rregs; /* JobR's register space */
struct tasklet_struct irqtask;
int irq; /* One per queue */
+ /* Number of scatterlist crypt transforms active on the JobR */
+ atomic_t tfm_count ____cacheline_aligned;
+
/* Job ring info */
int ringsize; /* Size of rings (assume input = output) */
struct caam_jrentry_info *entinfo; /* Alloc'ed 1 per ring entry */
struct caam_drv_private {
struct device *dev;
- struct device **jrdev; /* Alloc'ed array per sub-device */
+ struct platform_device **jrpdev; /* Alloc'ed array per sub-device */
struct platform_device *pdev;
/* Physical-presence section */
u8 qi_present; /* Nonzero if QI present in device */
int secvio_irq; /* Security violation interrupt number */
- /* which jr allocated to scatterlist crypto */
- atomic_t tfm_count ____cacheline_aligned;
- /* list of registered crypto algorithms (mk generic context handle?) */
- struct list_head alg_list;
- /* list of registered hash algorithms (mk generic context handle?) */
- struct list_head hash_list;
+#define RNG4_MAX_HANDLES 2
+ /* RNG4 block */
+ u32 rng4_sh_init; /* This bitmap shows which of the State
+ Handles of the RNG4 block are initialized
+ by this driver */
/*
* debugfs entries for developer view into driver/device
*/
#include <linux/of_irq.h>
+#include <linux/of_address.h>
#include "compat.h"
#include "regs.h"
#include "desc.h"
#include "intern.h"
+struct jr_driver_data {
+ /* List of Physical JobR's with the Driver */
+ struct list_head jr_list;
+ spinlock_t jr_alloc_lock; /* jr_list lock */
+} ____cacheline_aligned;
+
+static struct jr_driver_data driver_data;
+
+static int caam_reset_hw_jr(struct device *dev)
+{
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+ unsigned int timeout = 100000;
+
+ /*
+ * mask interrupts since we are going to poll
+ * for reset completion status
+ */
+ setbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);
+
+ /* initiate flush (required prior to reset) */
+ wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);
+ while (((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) ==
+ JRINT_ERR_HALT_INPROGRESS) && --timeout)
+ cpu_relax();
+
+ if ((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) !=
+ JRINT_ERR_HALT_COMPLETE || timeout == 0) {
+ dev_err(dev, "failed to flush job ring %d\n", jrp->ridx);
+ return -EIO;
+ }
+
+ /* initiate reset */
+ timeout = 100000;
+ wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);
+ while ((rd_reg32(&jrp->rregs->jrcommand) & JRCR_RESET) && --timeout)
+ cpu_relax();
+
+ if (timeout == 0) {
+ dev_err(dev, "failed to reset job ring %d\n", jrp->ridx);
+ return -EIO;
+ }
+
+ /* unmask interrupts */
+ clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);
+
+ return 0;
+}
+
+/*
+ * Shutdown JobR independent of platform property code
+ */
+int caam_jr_shutdown(struct device *dev)
+{
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+ dma_addr_t inpbusaddr, outbusaddr;
+ int ret;
+
+ ret = caam_reset_hw_jr(dev);
+
+ tasklet_kill(&jrp->irqtask);
+
+ /* Release interrupt */
+ free_irq(jrp->irq, dev);
+
+ /* Free rings */
+ inpbusaddr = rd_reg64(&jrp->rregs->inpring_base);
+ outbusaddr = rd_reg64(&jrp->rregs->outring_base);
+ dma_free_coherent(dev, sizeof(dma_addr_t) * JOBR_DEPTH,
+ jrp->inpring, inpbusaddr);
+ dma_free_coherent(dev, sizeof(struct jr_outentry) * JOBR_DEPTH,
+ jrp->outring, outbusaddr);
+ kfree(jrp->entinfo);
+
+ return ret;
+}
+
+static int caam_jr_remove(struct platform_device *pdev)
+{
+ int ret;
+ struct device *jrdev;
+ struct caam_drv_private_jr *jrpriv;
+
+ jrdev = &pdev->dev;
+ jrpriv = dev_get_drvdata(jrdev);
+
+ /*
+ * Return EBUSY if job ring already allocated.
+ */
+ if (atomic_read(&jrpriv->tfm_count)) {
+ dev_err(jrdev, "Device is busy\n");
+ return -EBUSY;
+ }
+
+ /* Remove the node from Physical JobR list maintained by driver */
+ spin_lock(&driver_data.jr_alloc_lock);
+ list_del(&jrpriv->list_node);
+ spin_unlock(&driver_data.jr_alloc_lock);
+
+ /* Release ring */
+ ret = caam_jr_shutdown(jrdev);
+ if (ret)
+ dev_err(jrdev, "Failed to shut down job ring\n");
+ irq_dispose_mapping(jrpriv->irq);
+
+ return ret;
+}
+
/* Main per-ring interrupt handler */
static irqreturn_t caam_jr_interrupt(int irq, void *st_dev)
{
clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);
}
+/**
+ * caam_jr_alloc() - Allocate a job ring for a caller to use as needed.
+ *
+ * Returns: pointer to the newly allocated physical JobR device if
+ * successful, or an ERR_PTR() otherwise.
+ **/
+struct device *caam_jr_alloc(void)
+{
+ struct caam_drv_private_jr *jrpriv, *min_jrpriv = NULL;
+ struct device *dev = NULL;
+ int min_tfm_cnt = INT_MAX;
+ int tfm_cnt;
+
+ spin_lock(&driver_data.jr_alloc_lock);
+
+ if (list_empty(&driver_data.jr_list)) {
+ spin_unlock(&driver_data.jr_alloc_lock);
+ return ERR_PTR(-ENODEV);
+ }
+
+ list_for_each_entry(jrpriv, &driver_data.jr_list, list_node) {
+ tfm_cnt = atomic_read(&jrpriv->tfm_count);
+ if (tfm_cnt < min_tfm_cnt) {
+ min_tfm_cnt = tfm_cnt;
+ min_jrpriv = jrpriv;
+ }
+ if (!min_tfm_cnt)
+ break;
+ }
+
+ if (min_jrpriv) {
+ atomic_inc(&min_jrpriv->tfm_count);
+ dev = min_jrpriv->dev;
+ }
+ spin_unlock(&driver_data.jr_alloc_lock);
+
+ return dev;
+}
+EXPORT_SYMBOL(caam_jr_alloc);
+
+/**
+ * caam_jr_free() - Free the Job Ring
+ * @rdev - points to the dev that identifies the Job ring to
+ * be released.
+ **/
+void caam_jr_free(struct device *rdev)
+{
+ struct caam_drv_private_jr *jrpriv = dev_get_drvdata(rdev);
+
+ atomic_dec(&jrpriv->tfm_count);
+}
+EXPORT_SYMBOL(caam_jr_free);
+
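Together these form the new consumer interface for the job rings; the caamrng conversion at the top of this series uses the same pattern. A minimal sketch:

	struct device *jrdev;

	jrdev = caam_jr_alloc();
	if (IS_ERR(jrdev))
		return PTR_ERR(jrdev);	/* -ENODEV: no rings registered */
	/* submit work via caam_jr_enqueue(jrdev, desc, cbk, areq) ... */
	caam_jr_free(jrdev);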
/**
* caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
* -EBUSY if the queue is full, -EIO if it cannot map the caller's
}
EXPORT_SYMBOL(caam_jr_enqueue);
-static int caam_reset_hw_jr(struct device *dev)
-{
- struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
- unsigned int timeout = 100000;
-
- /*
- * mask interrupts since we are going to poll
- * for reset completion status
- */
- setbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);
-
- /* initiate flush (required prior to reset) */
- wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);
- while (((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) ==
- JRINT_ERR_HALT_INPROGRESS) && --timeout)
- cpu_relax();
-
- if ((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) !=
- JRINT_ERR_HALT_COMPLETE || timeout == 0) {
- dev_err(dev, "failed to flush job ring %d\n", jrp->ridx);
- return -EIO;
- }
-
- /* initiate reset */
- timeout = 100000;
- wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);
- while ((rd_reg32(&jrp->rregs->jrcommand) & JRCR_RESET) && --timeout)
- cpu_relax();
-
- if (timeout == 0) {
- dev_err(dev, "failed to reset job ring %d\n", jrp->ridx);
- return -EIO;
- }
-
- /* unmask interrupts */
- clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);
-
- return 0;
-}
-
/*
* Init JobR independent of platform property detection
*/
/* Connect job ring interrupt handler. */
error = request_irq(jrp->irq, caam_jr_interrupt, IRQF_SHARED,
- "caam-jobr", dev);
+ dev_name(dev), dev);
if (error) {
dev_err(dev, "can't connect JobR %d interrupt (%d)\n",
jrp->ridx, jrp->irq);
return 0;
}
-/*
- * Shutdown JobR independent of platform property code
- */
-int caam_jr_shutdown(struct device *dev)
-{
- struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
- dma_addr_t inpbusaddr, outbusaddr;
- int ret;
-
- ret = caam_reset_hw_jr(dev);
-
- tasklet_kill(&jrp->irqtask);
-
- /* Release interrupt */
- free_irq(jrp->irq, dev);
-
- /* Free rings */
- inpbusaddr = rd_reg64(&jrp->rregs->inpring_base);
- outbusaddr = rd_reg64(&jrp->rregs->outring_base);
- dma_free_coherent(dev, sizeof(dma_addr_t) * JOBR_DEPTH,
- jrp->inpring, inpbusaddr);
- dma_free_coherent(dev, sizeof(struct jr_outentry) * JOBR_DEPTH,
- jrp->outring, outbusaddr);
- kfree(jrp->entinfo);
- of_device_unregister(jrp->jr_pdev);
-
- return ret;
-}
/*
- * Probe routine for each detected JobR subsystem. It assumes that
- * property detection was picked up externally.
+ * Probe routine for each detected JobR subsystem.
*/
-int caam_jr_probe(struct platform_device *pdev, struct device_node *np,
- int ring)
+static int caam_jr_probe(struct platform_device *pdev)
{
- struct device *ctrldev, *jrdev;
- struct platform_device *jr_pdev;
- struct caam_drv_private *ctrlpriv;
+ struct device *jrdev;
+ struct device_node *nprop;
+ struct caam_job_ring __iomem *ctrl;
struct caam_drv_private_jr *jrpriv;
- u32 *jroffset;
+ static int total_jobrs;
int error;
- ctrldev = &pdev->dev;
- ctrlpriv = dev_get_drvdata(ctrldev);
-
+ jrdev = &pdev->dev;
jrpriv = kmalloc(sizeof(struct caam_drv_private_jr),
GFP_KERNEL);
- if (jrpriv == NULL) {
- dev_err(ctrldev, "can't alloc private mem for job ring %d\n",
- ring);
+ if (!jrpriv)
return -ENOMEM;
- }
- jrpriv->parentdev = ctrldev; /* point back to parent */
- jrpriv->ridx = ring; /* save ring identity relative to detection */
- /*
- * Derive a pointer to the detected JobRs regs
- * Driver has already iomapped the entire space, we just
- * need to add in the offset to this JobR. Don't know if I
- * like this long-term, but it'll run
- */
- jroffset = (u32 *)of_get_property(np, "reg", NULL);
- jrpriv->rregs = (struct caam_job_ring __iomem *)((void *)ctrlpriv->ctrl
- + *jroffset);
+ dev_set_drvdata(jrdev, jrpriv);
- /* Build a local dev for each detected queue */
- jr_pdev = of_platform_device_create(np, NULL, ctrldev);
- if (jr_pdev == NULL) {
- kfree(jrpriv);
- return -EINVAL;
+ /* save ring identity relative to detection */
+ jrpriv->ridx = total_jobrs++;
+
+ nprop = pdev->dev.of_node;
+ /* Get configuration properties from device tree */
+ /* First, get register page */
+ ctrl = of_iomap(nprop, 0);
+ if (!ctrl) {
+ dev_err(jrdev, "of_iomap() failed\n");
+ return -ENOMEM;
}
- jrpriv->jr_pdev = jr_pdev;
- jrdev = &jr_pdev->dev;
- dev_set_drvdata(jrdev, jrpriv);
- ctrlpriv->jrdev[ring] = jrdev;
+ jrpriv->rregs = (struct caam_job_ring __force *)ctrl;
if (sizeof(dma_addr_t) == sizeof(u64))
- if (of_device_is_compatible(np, "fsl,sec-v5.0-job-ring"))
+ if (of_device_is_compatible(nprop, "fsl,sec-v5.0-job-ring"))
dma_set_mask(jrdev, DMA_BIT_MASK(40));
else
dma_set_mask(jrdev, DMA_BIT_MASK(36));
dma_set_mask(jrdev, DMA_BIT_MASK(32));
/* Identify the interrupt */
- jrpriv->irq = irq_of_parse_and_map(np, 0);
+ jrpriv->irq = irq_of_parse_and_map(nprop, 0);
/* Now do the platform independent part */
error = caam_jr_init(jrdev); /* now turn on hardware */
if (error) {
- of_device_unregister(jr_pdev);
kfree(jrpriv);
return error;
}
- return error;
+ jrpriv->dev = jrdev;
+ spin_lock(&driver_data.jr_alloc_lock);
+ list_add_tail(&jrpriv->list_node, &driver_data.jr_list);
+ spin_unlock(&driver_data.jr_alloc_lock);
+
+ atomic_set(&jrpriv->tfm_count, 0);
+
+ return 0;
+}
+
+static struct of_device_id caam_jr_match[] = {
+ {
+ .compatible = "fsl,sec-v4.0-job-ring",
+ },
+ {
+ .compatible = "fsl,sec4.0-job-ring",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, caam_jr_match);
+
+static struct platform_driver caam_jr_driver = {
+ .driver = {
+ .name = "caam_jr",
+ .owner = THIS_MODULE,
+ .of_match_table = caam_jr_match,
+ },
+ .probe = caam_jr_probe,
+ .remove = caam_jr_remove,
+};
+
+static int __init jr_driver_init(void)
+{
+ spin_lock_init(&driver_data.jr_alloc_lock);
+ INIT_LIST_HEAD(&driver_data.jr_list);
+ return platform_driver_register(&caam_jr_driver);
+}
+
+static void __exit jr_driver_exit(void)
+{
+ platform_driver_unregister(&caam_jr_driver);
}
+
+module_init(jr_driver_init);
+module_exit(jr_driver_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("FSL CAAM JR request backend");
+MODULE_AUTHOR("Freescale Semiconductor - NMG/STC");
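With this conversion the job ring becomes a platform driver in its own right: caam_probe() in ctrl.c now merely calls of_platform_device_create() for each "fsl,sec-v4.0-job-ring" node, the resulting devices bind to caam_jr_driver above, and consumers obtain a ring through caam_jr_alloc() instead of indexing the old ctrlpriv->jrdev[] array.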
#define JR_H
/* Prototypes for backend-level services exposed to APIs */
+struct device *caam_jr_alloc(void);
+void caam_jr_free(struct device *rdev);
int caam_jr_enqueue(struct device *dev, u32 *desc,
void (*cbk)(struct device *dev, u32 *desc, u32 status,
void *areq),
void *areq);
-extern int caam_jr_probe(struct platform_device *pdev, struct device_node *np,
- int ring);
-extern int caam_jr_shutdown(struct device *dev);
#endif /* JR_H */
/* RNG4 TRNG test registers */
struct rng4tst {
-#define RTMCTL_PRGM 0x00010000 /* 1 -> program mode, 0 -> run mode */
+#define RTMCTL_PRGM 0x00010000 /* 1 -> program mode, 0 -> run mode */
u32 rtmctl; /* misc. control register */
u32 rtscmisc; /* statistical check misc. register */
u32 rtpkrrng; /* poker range register */
};
#define RTSDCTL_ENT_DLY_SHIFT 16
#define RTSDCTL_ENT_DLY_MASK (0xffff << RTSDCTL_ENT_DLY_SHIFT)
+#define RTSDCTL_ENT_DLY_MIN 1200
+#define RTSDCTL_ENT_DLY_MAX 12800
u32 rtsdctl; /* seed control register */
union {
u32 rtsblim; /* PRGM=1: sparse bit limit register */
u32 rtfrqcnt; /* PRGM=0: freq. count register */
};
u32 rsvd1[40];
+#define RDSTA_SKVT 0x80000000
+#define RDSTA_SKVN 0x40000000
#define RDSTA_IF0 0x00000001
+#define RDSTA_IF1 0x00000002
+#define RDSTA_IFMASK (RDSTA_IF1 | RDSTA_IF0)
u32 rdsta;
u32 rsvd2[15];
};
u32 jr_ctl_hi; /* CxJRR - JobR Control Register @800 */
u32 jr_ctl_lo;
u64 jr_descaddr; /* CxDADR - JobR Descriptor Address */
+#define DECO_OP_STATUS_HI_ERR_MASK 0xF00000FF
u32 op_status_hi; /* DxOPSTA - DECO Operation Status */
u32 op_status_lo;
u32 rsvd24[2];
u32 rsvd29[48];
u32 descbuf[64]; /* DxDESB - Descriptor buffer */
u32 rscvd30[193];
+#define DESC_DBG_DECO_STAT_HOST_ERR 0x00D00000
+#define DESC_DBG_DECO_STAT_VALID 0x80000000
+#define DESC_DBG_DECO_STAT_MASK 0x00F00000
u32 desc_dbg; /* DxDDR - DECO Debug Register */
u32 rsvd31[126];
};
-/* DECO DBG Register Valid Bit*/
-#define DECO_DBG_VALID 0x80000000
#define DECO_JQCR_WHL 0x20000000
#define DECO_JQCR_FOUR 0x10000000
return nents;
}
+/* Map SG page in kernel virtual address space and copy */
+static inline void sg_map_copy(u8 *dest, struct scatterlist *sg,
+ int len, int offset)
+{
+ u8 *mapped_addr;
+
+ /*
+ * Page here can be user-space pinned using get_user_pages
+ * Same must be kmapped before use and kunmapped subsequently
+ */
+ mapped_addr = kmap_atomic(sg_page(sg));
+ memcpy(dest, mapped_addr + offset, len);
+ kunmap_atomic(mapped_addr);
+}
+
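The motivation for the helper: sg_virt(), used by the copies being replaced below, is only valid for pages with a permanent kernel mapping. A page pinned via get_user_pages() (or any highmem page) must first be mapped with kmap_atomic() before it can be dereferenced, which is exactly what sg_map_copy() guarantees.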
/* Copy from len bytes of sg to dest, starting from beginning */
static inline void sg_copy(u8 *dest, struct scatterlist *sg, unsigned int len)
{
int cpy_index = 0, next_cpy_index = current_sg->length;
while (next_cpy_index < len) {
- memcpy(dest + cpy_index, (u8 *) sg_virt(current_sg),
- current_sg->length);
+ sg_map_copy(dest + cpy_index, current_sg, current_sg->length,
+ current_sg->offset);
current_sg = scatterwalk_sg_next(current_sg);
cpy_index = next_cpy_index;
next_cpy_index += current_sg->length;
}
if (cpy_index < len)
- memcpy(dest + cpy_index, (u8 *) sg_virt(current_sg),
- len - cpy_index);
+ sg_map_copy(dest + cpy_index, current_sg, len-cpy_index,
+ current_sg->offset);
}
/* Copy sg data, from to_skip to end, to dest */
int to_skip, unsigned int end)
{
struct scatterlist *current_sg = sg;
- int sg_index, cpy_index;
+ int sg_index, cpy_index, offset;
sg_index = current_sg->length;
while (sg_index <= to_skip) {
sg_index += current_sg->length;
}
cpy_index = sg_index - to_skip;
- memcpy(dest, (u8 *) sg_virt(current_sg) +
- current_sg->length - cpy_index, cpy_index);
- current_sg = scatterwalk_sg_next(current_sg);
- if (end - sg_index)
+ offset = current_sg->offset + current_sg->length - cpy_index;
+ sg_map_copy(dest, current_sg, cpy_index, offset);
+ if (end - sg_index) {
+ current_sg = scatterwalk_sg_next(current_sg);
sg_copy(dest + cpy_index, current_sg, end - sg_index);
+ }
}
platform_set_drvdata(pdev, dev);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!r) {
- dev_err(&pdev->dev, "failed to get IORESOURCE_MEM\n");
- return -ENXIO;
- }
- dev->dcp_regs_base = devm_ioremap(&pdev->dev, r->start,
- resource_size(r));
+ dev->dcp_regs_base = devm_ioremap_resource(&pdev->dev, r);
+ if (IS_ERR(dev->dcp_regs_base))
+ return PTR_ERR(dev->dcp_regs_base);
dcp_set(dev, DCP_CTRL_SFRST, DCP_REG_CTRL);
udelay(10);
return -EIO;
}
dev->dcp_vmi_irq = r->start;
- ret = request_irq(dev->dcp_vmi_irq, dcp_vmi_irq, 0, "dcp", dev);
+ ret = devm_request_irq(&pdev->dev, dev->dcp_vmi_irq, dcp_vmi_irq, 0,
+ "dcp", dev);
if (ret != 0) {
dev_err(&pdev->dev, "can't request_irq (0)\n");
return -EIO;
r = platform_get_resource(pdev, IORESOURCE_IRQ, 1);
if (!r) {
dev_err(&pdev->dev, "can't get IRQ resource (1)\n");
- ret = -EIO;
- goto err_free_irq0;
+ return -EIO;
}
dev->dcp_irq = r->start;
- ret = request_irq(dev->dcp_irq, dcp_irq, 0, "dcp", dev);
+ ret = devm_request_irq(&pdev->dev, dev->dcp_irq, dcp_irq, 0, "dcp",
+ dev);
if (ret != 0) {
dev_err(&pdev->dev, "can't request_irq (1)\n");
- ret = -EIO;
- goto err_free_irq0;
+ return -EIO;
}
dev->hw_pkg[0] = dma_alloc_coherent(&pdev->dev,
GFP_KERNEL);
if (!dev->hw_pkg[0]) {
dev_err(&pdev->dev, "Could not allocate hw descriptors\n");
- ret = -ENOMEM;
- goto err_free_irq1;
+ return -ENOMEM;
}
for (i = 1; i < DCP_MAX_PKG; i++) {
for (j = 0; j < i; j++)
crypto_unregister_alg(&algs[j]);
err_free_key_iv:
+ tasklet_kill(&dev->done_task);
+ tasklet_kill(&dev->queue_task);
dma_free_coherent(&pdev->dev, 2 * AES_KEYSIZE_128, dev->payload_base,
dev->payload_base_dma);
err_free_hw_packet:
dma_free_coherent(&pdev->dev, DCP_MAX_PKG *
sizeof(struct dcp_hw_packet), dev->hw_pkg[0],
dev->hw_phys_pkg);
-err_free_irq1:
- free_irq(dev->dcp_irq, dev);
-err_free_irq0:
- free_irq(dev->dcp_vmi_irq, dev);
return ret;
}
int j;
dev = platform_get_drvdata(pdev);
- dma_free_coherent(&pdev->dev,
- DCP_MAX_PKG * sizeof(struct dcp_hw_packet),
- dev->hw_pkg[0], dev->hw_phys_pkg);
-
- dma_free_coherent(&pdev->dev, 2 * AES_KEYSIZE_128, dev->payload_base,
- dev->payload_base_dma);
+ misc_deregister(&dev->dcp_bootstream_misc);
- free_irq(dev->dcp_irq, dev);
- free_irq(dev->dcp_vmi_irq, dev);
+ for (j = 0; j < ARRAY_SIZE(algs); j++)
+ crypto_unregister_alg(&algs[j]);
tasklet_kill(&dev->done_task);
tasklet_kill(&dev->queue_task);
- for (j = 0; j < ARRAY_SIZE(algs); j++)
- crypto_unregister_alg(&algs[j]);
+ dma_free_coherent(&pdev->dev, 2 * AES_KEYSIZE_128, dev->payload_base,
+ dev->payload_base_dma);
- misc_deregister(&dev->dcp_bootstream_misc);
+ dma_free_coherent(&pdev->dev,
+ DCP_MAX_PKG * sizeof(struct dcp_hw_packet),
+ dev->hw_pkg[0], dev->hw_phys_pkg);
return 0;
}
unsigned int keylen)
{
struct ixp_ctx *ctx = crypto_aead_ctx(tfm);
- struct rtattr *rta = (struct rtattr *)key;
- struct crypto_authenc_key_param *param;
+ struct crypto_authenc_keys keys;
- if (!RTA_OK(rta, keylen))
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
goto badkey;
- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
- goto badkey;
- if (RTA_PAYLOAD(rta) < sizeof(*param))
- goto badkey;
-
- param = RTA_DATA(rta);
- ctx->enckey_len = be32_to_cpu(param->enckeylen);
- key += RTA_ALIGN(rta->rta_len);
- keylen -= RTA_ALIGN(rta->rta_len);
+ if (keys.authkeylen > sizeof(ctx->authkey))
+ goto badkey;
- if (keylen < ctx->enckey_len)
+ if (keys.enckeylen > sizeof(ctx->enckey))
goto badkey;
- ctx->authkey_len = keylen - ctx->enckey_len;
- memcpy(ctx->enckey, key + ctx->authkey_len, ctx->enckey_len);
- memcpy(ctx->authkey, key, ctx->authkey_len);
+ memcpy(ctx->authkey, keys.authkey, keys.authkeylen);
+ memcpy(ctx->enckey, keys.enckey, keys.enckeylen);
+ ctx->authkey_len = keys.authkeylen;
+ ctx->enckey_len = keys.enckeylen;
return aead_setup(tfm, crypto_aead_authsize(tfm));
badkey:
- ctx->enckey_len = 0;
crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
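crypto_authenc_extractkeys() (declared in <crypto/authenc.h>) performs the rtattr parsing each driver used to open-code, splitting the authenc() key blob into keys.authkey/keys.authkeylen and keys.enckey/keys.enckeylen. A sketch of the shared pattern, as used here and in the picoxcell and talitos conversions below (the bound names are hypothetical, driver-specific limits):

	struct crypto_authenc_keys keys;

	if (crypto_authenc_extractkeys(&keys, key, keylen))
		goto badkey;
	if (keys.authkeylen > max_auth || keys.enckeylen > max_enc)
		goto badkey;	/* max_auth/max_enc: hypothetical bounds */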
static int __init ixp_module_init(void)
{
int num = ARRAY_SIZE(ixp4xx_algos);
- int i, err ;
+ int i, err;
pdev = platform_device_register_full(&ixp_dev_info);
if (IS_ERR(pdev))
return PTR_ERR(pdev);
- dev = &pdev->dev;
-
spin_lock_init(&desc_lock);
spin_lock_init(&emerg_lock);
return mv_cra_hash_init(tfm, "sha1", COP_HMAC_SHA1, SHA1_BLOCK_SIZE);
}
-irqreturn_t crypto_int(int irq, void *priv)
+static irqreturn_t crypto_int(int irq, void *priv)
{
u32 val;
return IRQ_HANDLED;
}
-struct crypto_alg mv_aes_alg_ecb = {
+static struct crypto_alg mv_aes_alg_ecb = {
.cra_name = "ecb(aes)",
.cra_driver_name = "mv-ecb-aes",
.cra_priority = 300,
},
};
-struct crypto_alg mv_aes_alg_cbc = {
+static struct crypto_alg mv_aes_alg_cbc = {
.cra_name = "cbc(aes)",
.cra_driver_name = "mv-cbc-aes",
.cra_priority = 300,
},
};
-struct ahash_alg mv_sha1_alg = {
+static struct ahash_alg mv_sha1_alg = {
.init = mv_hash_init,
.update = mv_hash_update,
.final = mv_hash_final,
}
};
-struct ahash_alg mv_hmac_sha1_alg = {
+static struct ahash_alg mv_hmac_sha1_alg = {
.init = mv_hash_init,
.update = mv_hash_update,
.final = mv_hash_final,
goto err_unmap_sram;
}
- ret = request_irq(irq, crypto_int, IRQF_DISABLED, dev_name(&pdev->dev),
+ ret = request_irq(irq, crypto_int, 0, dev_name(&pdev->dev),
cp);
if (ret)
goto err_thread;
.driver = {
.owner = THIS_MODULE,
.name = "mv_crypto",
- .of_match_table = of_match_ptr(mv_cesa_of_match_table),
+ .of_match_table = mv_cesa_of_match_table,
},
};
MODULE_ALIAS("platform:mv_crypto");
if (dd->flags & FLAGS_CBC)
val |= AES_REG_CTRL_CBC;
if (dd->flags & FLAGS_CTR) {
- val |= AES_REG_CTRL_CTR | AES_REG_CTRL_CTR_WIDTH_32;
+ val |= AES_REG_CTRL_CTR | AES_REG_CTRL_CTR_WIDTH_128;
mask = AES_REG_CTRL_CTR | AES_REG_CTRL_CTR_WIDTH_MASK;
}
if (dd->flags & FLAGS_ENCRYPT)
return err;
}
-int omap_aes_check_aligned(struct scatterlist *sg)
+static int omap_aes_check_aligned(struct scatterlist *sg)
{
while (sg) {
if (!IS_ALIGNED(sg->offset, 4))
return 0;
}
-int omap_aes_copy_sgs(struct omap_aes_dev *dd)
+static int omap_aes_copy_sgs(struct omap_aes_dev *dd)
{
void *buf_in, *buf_out;
int pages;
MODULE_DESCRIPTION("OMAP SHA1/MD5 hw acceleration support.");
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Dmitry Kasatkin");
+MODULE_ALIAS("platform:omap-sham");
{
struct spacc_aead_ctx *ctx = crypto_aead_ctx(tfm);
struct spacc_alg *alg = to_spacc_alg(tfm->base.__crt_alg);
- struct rtattr *rta = (void *)key;
- struct crypto_authenc_key_param *param;
- unsigned int authkeylen, enckeylen;
+ struct crypto_authenc_keys keys;
int err = -EINVAL;
- if (!RTA_OK(rta, keylen))
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
goto badkey;
- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+ if (keys.enckeylen > AES_MAX_KEY_SIZE)
goto badkey;
- if (RTA_PAYLOAD(rta) < sizeof(*param))
- goto badkey;
-
- param = RTA_DATA(rta);
- enckeylen = be32_to_cpu(param->enckeylen);
-
- key += RTA_ALIGN(rta->rta_len);
- keylen -= RTA_ALIGN(rta->rta_len);
-
- if (keylen < enckeylen)
- goto badkey;
-
- authkeylen = keylen - enckeylen;
-
- if (enckeylen > AES_MAX_KEY_SIZE)
+ if (keys.authkeylen > sizeof(ctx->hash_ctx))
goto badkey;
if ((alg->ctrl_default & SPACC_CRYPTO_ALG_MASK) ==
SPA_CTRL_CIPH_ALG_AES)
- err = spacc_aead_aes_setkey(tfm, key + authkeylen, enckeylen);
+ err = spacc_aead_aes_setkey(tfm, keys.enckey, keys.enckeylen);
else
- err = spacc_aead_des_setkey(tfm, key + authkeylen, enckeylen);
+ err = spacc_aead_des_setkey(tfm, keys.enckey, keys.enckeylen);
if (err)
goto badkey;
- memcpy(ctx->hash_ctx, key, authkeylen);
- ctx->hash_key_len = authkeylen;
+ memcpy(ctx->hash_ctx, keys.authkey, keys.authkeylen);
+ ctx->hash_key_len = keys.authkeylen;
return 0;
.driver = {
.name = SAHARA_NAME,
.owner = THIS_MODULE,
- .of_match_table = of_match_ptr(sahara_dt_ids),
+ .of_match_table = sahara_dt_ids,
},
.id_table = sahara_platform_ids,
};
const u8 *key, unsigned int keylen)
{
struct talitos_ctx *ctx = crypto_aead_ctx(authenc);
- struct rtattr *rta = (void *)key;
- struct crypto_authenc_key_param *param;
- unsigned int authkeylen;
- unsigned int enckeylen;
-
- if (!RTA_OK(rta, keylen))
- goto badkey;
+ struct crypto_authenc_keys keys;
- if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
goto badkey;
- if (RTA_PAYLOAD(rta) < sizeof(*param))
+ if (keys.authkeylen + keys.enckeylen > TALITOS_MAX_KEY_SIZE)
goto badkey;
- param = RTA_DATA(rta);
- enckeylen = be32_to_cpu(param->enckeylen);
-
- key += RTA_ALIGN(rta->rta_len);
- keylen -= RTA_ALIGN(rta->rta_len);
+ memcpy(ctx->key, keys.authkey, keys.authkeylen);
+ memcpy(&ctx->key[keys.authkeylen], keys.enckey, keys.enckeylen);
- if (keylen < enckeylen)
- goto badkey;
-
- authkeylen = keylen - enckeylen;
-
- if (keylen > TALITOS_MAX_KEY_SIZE)
- goto badkey;
-
- memcpy(&ctx->key, key, keylen);
-
- ctx->keylen = keylen;
- ctx->enckeylen = enckeylen;
- ctx->authkeylen = authkeylen;
+ ctx->keylen = keys.authkeylen + keys.enckeylen;
+ ctx->enckeylen = keys.enckeylen;
+ ctx->authkeylen = keys.authkeylen;
return 0;
if (edesc->assoc_chained)
talitos_unmap_sg_chain(dev, areq->assoc, DMA_TO_DEVICE);
- else
+ else if (areq->assoclen)
/* assoc_nents counts also for IV in non-contiguous cases */
dma_unmap_sg(dev, areq->assoc,
edesc->assoc_nents ? edesc->assoc_nents - 1 : 1,
dma_sync_single_for_device(dev, edesc->dma_link_tbl,
edesc->dma_len, DMA_BIDIRECTIONAL);
} else {
- to_talitos_ptr(&desc->ptr[1], sg_dma_address(areq->assoc));
+ if (areq->assoclen)
+ to_talitos_ptr(&desc->ptr[1],
+ sg_dma_address(areq->assoc));
+ else
+ to_talitos_ptr(&desc->ptr[1], edesc->iv_dma);
desc->ptr[1].j_extent = 0;
}
unsigned int authsize,
unsigned int ivsize,
int icv_stashing,
- u32 cryptoflags)
+ u32 cryptoflags,
+ bool encrypt)
{
struct talitos_edesc *edesc;
int assoc_nents = 0, src_nents, dst_nents, alloc_len, dma_len;
return ERR_PTR(-EINVAL);
}
- if (iv)
+ if (ivsize)
iv_dma = dma_map_single(dev, iv, ivsize, DMA_TO_DEVICE);
- if (assoc) {
+ if (assoclen) {
/*
* Currently it is assumed that iv is provided whenever assoc
* is.
assoc_nents = assoc_nents ? assoc_nents + 1 : 2;
}
- src_nents = sg_count(src, cryptlen + authsize, &src_chained);
- src_nents = (src_nents == 1) ? 0 : src_nents;
-
- if (!dst) {
- dst_nents = 0;
- } else {
- if (dst == src) {
- dst_nents = src_nents;
- } else {
- dst_nents = sg_count(dst, cryptlen + authsize,
- &dst_chained);
- dst_nents = (dst_nents == 1) ? 0 : dst_nents;
- }
+ if (!dst || dst == src) {
+ src_nents = sg_count(src, cryptlen + authsize, &src_chained);
+ src_nents = (src_nents == 1) ? 0 : src_nents;
+ dst_nents = dst ? src_nents : 0;
+ } else { /* dst && dst != src */
+ src_nents = sg_count(src, cryptlen + (encrypt ? 0 : authsize),
+ &src_chained);
+ src_nents = (src_nents == 1) ? 0 : src_nents;
+ dst_nents = sg_count(dst, cryptlen + (encrypt ? authsize : 0),
+ &dst_chained);
+ dst_nents = (dst_nents == 1) ? 0 : dst_nents;
}
/*
edesc = kmalloc(alloc_len, GFP_DMA | flags);
if (!edesc) {
- talitos_unmap_sg_chain(dev, assoc, DMA_TO_DEVICE);
+ if (assoc_chained)
+ talitos_unmap_sg_chain(dev, assoc, DMA_TO_DEVICE);
+ else if (assoclen)
+ dma_unmap_sg(dev, assoc,
+ assoc_nents ? assoc_nents - 1 : 1,
+ DMA_TO_DEVICE);
+
if (iv_dma)
dma_unmap_single(dev, iv_dma, ivsize, DMA_TO_DEVICE);
+
dev_err(dev, "could not allocate edescriptor\n");
return ERR_PTR(-ENOMEM);
}
}
static struct talitos_edesc *aead_edesc_alloc(struct aead_request *areq, u8 *iv,
- int icv_stashing)
+ int icv_stashing, bool encrypt)
{
struct crypto_aead *authenc = crypto_aead_reqtfm(areq);
struct talitos_ctx *ctx = crypto_aead_ctx(authenc);
return talitos_edesc_alloc(ctx->dev, areq->assoc, areq->src, areq->dst,
iv, areq->assoclen, areq->cryptlen,
ctx->authsize, ivsize, icv_stashing,
- areq->base.flags);
+ areq->base.flags, encrypt);
}
static int aead_encrypt(struct aead_request *req)
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, req->iv, 0);
+ edesc = aead_edesc_alloc(req, req->iv, 0, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
req->cryptlen -= authsize;
/* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, req->iv, 1);
+ edesc = aead_edesc_alloc(req, req->iv, 1, false);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = aead_edesc_alloc(areq, req->giv, 0);
+ edesc = aead_edesc_alloc(areq, req->giv, 0, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
}
static struct talitos_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request *
- areq)
+ areq, bool encrypt)
{
struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq);
struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher);
return talitos_edesc_alloc(ctx->dev, NULL, areq->src, areq->dst,
areq->info, 0, areq->nbytes, 0, ivsize, 0,
- areq->base.flags);
+ areq->base.flags, encrypt);
}
static int ablkcipher_encrypt(struct ablkcipher_request *areq)
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = ablkcipher_edesc_alloc(areq);
+ edesc = ablkcipher_edesc_alloc(areq, true);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
struct talitos_edesc *edesc;
/* allocate extended descriptor */
- edesc = ablkcipher_edesc_alloc(areq);
+ edesc = ablkcipher_edesc_alloc(areq, false);
if (IS_ERR(edesc))
return PTR_ERR(edesc);
struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
return talitos_edesc_alloc(ctx->dev, NULL, req_ctx->psrc, NULL, NULL, 0,
- nbytes, 0, 0, 0, areq->base.flags);
+ nbytes, 0, 0, 0, areq->base.flags, false);
}
static int ahash_init(struct ahash_request *areq)
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
static DECLARE_WORK(aes_work, aes_workqueue_handler);
static struct workqueue_struct *aes_wq;
-extern unsigned long long tegra_chip_uid(void);
-
static inline u32 aes_readl(struct tegra_aes_dev *dd, u32 offset)
{
return readl(dd->io_base + offset);
struct tegra_aes_dev *dd = aes_dev;
struct tegra_aes_ctx *ctx = &rng_ctx;
struct tegra_aes_slot *key_slot;
- struct timespec ts;
int ret = 0;
- u64 nsec, tmp[2];
+ u8 tmp[16]; /* 16 bytes = 128 bits of entropy */
u8 *dt;
if (!ctx || !dd) {
- dev_err(dd->dev, "ctx=0x%x, dd=0x%x\n",
+ pr_err("ctx=0x%x, dd=0x%x\n",
(unsigned int)ctx, (unsigned int)dd);
return -EINVAL;
}
if (dd->ivlen >= (2 * DEFAULT_RNG_BLK_SZ + AES_KEYSIZE_128)) {
dt = dd->iv + DEFAULT_RNG_BLK_SZ + AES_KEYSIZE_128;
} else {
- getnstimeofday(&ts);
- nsec = timespec_to_ns(&ts);
- do_div(nsec, 1000);
- nsec ^= dd->ctr << 56;
- dd->ctr++;
- tmp[0] = nsec;
- tmp[1] = tegra_chip_uid();
- dt = (u8 *)tmp;
+ get_random_bytes(tmp, sizeof(tmp));
+ dt = tmp;
}
memcpy(dd->dt, dt, DEFAULT_RNG_BLK_SZ);
return 0;
}
-void tegra_aes_cra_exit(struct crypto_tfm *tfm)
+static void tegra_aes_cra_exit(struct crypto_tfm *tfm)
{
struct tegra_aes_ctx *ctx =
crypto_ablkcipher_ctx((struct crypto_ablkcipher *)tfm);
}
/* Initialize the vde clock */
- dd->aes_clk = clk_get(dev, "vde");
+ dd->aes_clk = devm_clk_get(dev, "vde");
if (IS_ERR(dd->aes_clk)) {
dev_err(dev, "iclock intialization failed.\n");
err = -ENODEV;
if (dd->buf_out)
dma_free_coherent(dev, AES_HW_DMA_BUFFER_SIZE_BYTES,
dd->buf_out, dd->dma_buf_out);
- if (!IS_ERR(dd->aes_clk))
- clk_put(dd->aes_clk);
if (aes_wq)
destroy_workqueue(aes_wq);
spin_lock(&list_lock);
dd->buf_in, dd->dma_buf_in);
dma_free_coherent(dev, AES_HW_DMA_BUFFER_SIZE_BYTES,
dd->buf_out, dd->dma_buf_out);
- clk_put(dd->aes_clk);
aes_dev = NULL;
return 0;
tristate "Intel I/OAT DMA support"
depends on PCI && X86
select DMA_ENGINE
+ select DMA_ENGINE_RAID
select DCA
help
Enable support for the Intel(R) I/OAT DMA engine present
Support the Atmel AHB DMA controller.
config FSL_DMA
- tristate "Freescale Elo and Elo Plus DMA support"
+ tristate "Freescale Elo series DMA support"
depends on FSL_SOC
select DMA_ENGINE
select ASYNC_TX_ENABLE_CHANNEL_SWITCH
---help---
- Enable support for the Freescale Elo and Elo Plus DMA controllers.
- The Elo is the DMA controller on some 82xx and 83xx parts, and the
- Elo Plus is the DMA controller on 85xx and 86xx parts.
+ Enable support for the Freescale Elo series DMA controllers.
+ The Elo is the DMA controller on some mpc82xx and mpc83xx parts, the
+ EloPlus is on mpc85xx and mpc86xx and Pxxx parts, and the Elo3 is on
+ some Txxx and Bxxx parts.
config MPC512X_DMA
tristate "Freescale MPC512x built-in DMA engine support"
bool "Marvell XOR engine support"
depends on PLAT_ORION
select DMA_ENGINE
+ select DMA_ENGINE_RAID
select ASYNC_TX_ENABLE_CHANNEL_SWITCH
---help---
Enable support for the Marvell XOR engine.
tristate "AMCC PPC440SPe ADMA support"
depends on 440SPe || 440SP
select DMA_ENGINE
+ select DMA_ENGINE_RAID
select ARCH_HAS_ASYNC_TX_FIND_CHANNEL
select ASYNC_TX_ENABLE_CHANNEL_SWITCH
help
bool "Network: TCP receive copy offload"
depends on DMA_ENGINE && NET
default (INTEL_IOATDMA || FSL_DMA)
+ depends on BROKEN
help
This enables the use of DMA engines in the network stack to
offload receive copy-to-user operations, freeing CPU cycles.
Simple DMA test client. Say N unless you're debugging a
DMA Device driver.
+config DMA_ENGINE_RAID
+ bool
+
endif
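DMA_ENGINE_RAID is intentionally a hidden bool with no prompt: users never see it, but drivers whose hardware can run raid-style operations (INTEL_IOATDMA, MV_XOR and the PPC440SPe ADMA above) select it, giving generic code a single symbol to test.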
kfree(txd);
}
-static void pl08x_unmap_buffers(struct pl08x_txd *txd)
-{
- struct device *dev = txd->vd.tx.chan->device->dev;
- struct pl08x_sg *dsg;
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- else {
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- }
- }
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- else
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- }
-}
-
static void pl08x_desc_free(struct virt_dma_desc *vd)
{
struct pl08x_txd *txd = to_pl08x_txd(&vd->tx);
struct pl08x_dma_chan *plchan = to_pl08x_chan(vd->tx.chan);
- if (!plchan->slave)
- pl08x_unmap_buffers(txd);
-
+ dma_descriptor_unmap(&vd->tx);
if (!txd->done)
pl08x_release_mux(plchan);
size_t bytes = 0;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
/*
spin_lock_irqsave(&plchan->vc.lock, flags);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret != DMA_SUCCESS) {
+ if (ret != DMA_COMPLETE) {
vd = vchan_find_desc(&plchan->vc, cookie);
if (vd) {
/* On the issued list, so hasn't been processed yet */
writel(0x000000FF, pl08x->base + PL080_ERR_CLEAR);
writel(0x000000FF, pl08x->base + PL080_TC_CLEAR);
- ret = request_irq(adev->irq[0], pl08x_irq, IRQF_DISABLED,
- DRIVER_NAME, pl08x);
+ ret = request_irq(adev->irq[0], pl08x_irq, 0, DRIVER_NAME, pl08x);
if (ret) {
dev_err(&adev->dev, "%s failed to request interrupt %d\n",
__func__, adev->irq[0]);
/* move myself to free_list */
list_move(&desc->desc_node, &atchan->free_list);
- /* unmap dma addresses (not on slave channels) */
- if (!atchan->chan_common.private) {
- struct device *parent = chan2parent(&atchan->chan_common);
- if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- dma_unmap_single(parent,
- desc->lli.daddr,
- desc->len, DMA_FROM_DEVICE);
- else
- dma_unmap_page(parent,
- desc->lli.daddr,
- desc->len, DMA_FROM_DEVICE);
- }
- if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- dma_unmap_single(parent,
- desc->lli.saddr,
- desc->len, DMA_TO_DEVICE);
- else
- dma_unmap_page(parent,
- desc->lli.saddr,
- desc->len, DMA_TO_DEVICE);
- }
- }
-
+ dma_descriptor_unmap(txd);
/* for cyclic transfers,
* no need to replay callback function while stopping */
if (!atc_chan_is_cyclic(atchan)) {
int bytes = 0;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
/*
* There's no point calculating the residue if there's
{
return &chan->dev->device;
}
-static struct device *chan2parent(struct dma_chan *chan)
-{
- return chan->dev->device.parent;
-}
#if defined(VERBOSE_DEBUG)
static void vdbg_dump_regs(struct at_dma_chan *atchan)
enum dma_status ret;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
dma_set_residue(txstate, coh901318_get_bytes_left(chan));
if (irq < 0)
return irq;
- err = devm_request_irq(&pdev->dev, irq, dma_irq_handler, IRQF_DISABLED,
+ err = devm_request_irq(&pdev->dev, irq, dma_irq_handler, 0,
"coh901318", base);
if (err)
return err;
const struct chan_queues *queues_rx;
const struct chan_queues *queues_tx;
struct chan_queues td_queue;
+
+ /* context for suspend/resume */
+ unsigned int dma_tdfdq;
};
#define FIST_COMPLETION_QUEUE 93
return val & ((1 << (DESC_LENGTH_BITS_NUM + 1)) - 1);
}
+static u32 cppi41_pop_desc(struct cppi41_dd *cdd, unsigned queue_num)
+{
+ u32 desc;
+
+ desc = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(queue_num));
+ desc &= ~0x1f;
+ return desc;
+}
+
static irqreturn_t cppi41_irq(int irq, void *data)
{
struct cppi41_dd *cdd = data;
q_num = __fls(val);
val &= ~(1 << q_num);
q_num += 32 * i;
- desc = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(q_num));
- desc &= ~0x1f;
+ desc = cppi41_pop_desc(cdd, q_num);
c = desc_to_chan(cdd, desc);
if (WARN_ON(!c)) {
pr_err("%s() q %d desc %08x\n", __func__,
/* lock */
ret = dma_cookie_status(chan, cookie, txstate);
- if (txstate && ret == DMA_SUCCESS)
+ if (txstate && ret == DMA_COMPLETE)
txstate->residue = c->residue;
/* unlock */
d->pd0 = DESC_TYPE_TEARD << DESC_TYPE;
}
-static u32 cppi41_pop_desc(struct cppi41_dd *cdd, unsigned queue_num)
-{
- u32 desc;
-
- desc = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(queue_num));
- desc &= ~0x1f;
- return desc;
-}
-
static int cppi41_tear_down_chan(struct cppi41_channel *c)
{
struct cppi41_dd *cdd = c->cdd;
c->td_retry = 100;
}
- if (!c->td_seen) {
- unsigned td_comp_queue;
+ if (!c->td_seen || !c->td_desc_seen) {
- if (c->is_tx)
- td_comp_queue = cdd->td_queue.complete;
- else
- td_comp_queue = c->q_comp_num;
+ desc_phys = cppi41_pop_desc(cdd, cdd->td_queue.complete);
+ if (!desc_phys)
+ desc_phys = cppi41_pop_desc(cdd, c->q_comp_num);
- desc_phys = cppi41_pop_desc(cdd, td_comp_queue);
- if (desc_phys) {
- __iormb();
+ if (desc_phys == c->desc_phys) {
+ c->td_desc_seen = 1;
+
+ } else if (desc_phys == td_desc_phys) {
+ u32 pd0;
- if (desc_phys == td_desc_phys) {
- u32 pd0;
- pd0 = td->pd0;
- WARN_ON((pd0 >> DESC_TYPE) != DESC_TYPE_TEARD);
- WARN_ON(!c->is_tx && !(pd0 & TD_DESC_IS_RX));
- WARN_ON((pd0 & 0x1f) != c->port_num);
- } else {
- WARN_ON_ONCE(1);
- }
- c->td_seen = 1;
- }
- }
- if (!c->td_desc_seen) {
- desc_phys = cppi41_pop_desc(cdd, c->q_comp_num);
- if (desc_phys) {
__iormb();
- WARN_ON(c->desc_phys != desc_phys);
- c->td_desc_seen = 1;
+ pd0 = td->pd0;
+ WARN_ON((pd0 >> DESC_TYPE) != DESC_TYPE_TEARD);
+ WARN_ON(!c->is_tx && !(pd0 & TD_DESC_IS_RX));
+ WARN_ON((pd0 & 0x1f) != c->port_num);
+ c->td_seen = 1;
+ } else if (desc_phys) {
+ WARN_ON_ONCE(1);
}
}
c->td_retry--;
WARN_ON(!c->td_retry);
if (!c->td_desc_seen) {
- desc_phys = cppi_readl(cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
+ desc_phys = cppi41_pop_desc(cdd, c->q_num);
WARN_ON(!desc_phys);
}
}
}
-static int cppi41_add_chans(struct platform_device *pdev, struct cppi41_dd *cdd)
+static int cppi41_add_chans(struct device *dev, struct cppi41_dd *cdd)
{
struct cppi41_channel *cchan;
int i;
int ret;
u32 n_chans;
- ret = of_property_read_u32(pdev->dev.of_node, "#dma-channels",
+ ret = of_property_read_u32(dev->of_node, "#dma-channels",
&n_chans);
if (ret)
return ret;
return -ENOMEM;
}
-static void purge_descs(struct platform_device *pdev, struct cppi41_dd *cdd)
+static void purge_descs(struct device *dev, struct cppi41_dd *cdd)
{
unsigned int mem_decs;
int i;
cppi_writel(0, cdd->qmgr_mem + QMGR_MEMBASE(i));
cppi_writel(0, cdd->qmgr_mem + QMGR_MEMCTRL(i));
- dma_free_coherent(&pdev->dev, mem_decs, cdd->cd,
+ dma_free_coherent(dev, mem_decs, cdd->cd,
cdd->descs_phys);
}
}
cppi_writel(0, cdd->sched_mem + DMA_SCHED_CTRL);
}
-static void deinit_cpii41(struct platform_device *pdev, struct cppi41_dd *cdd)
+static void deinit_cppi41(struct device *dev, struct cppi41_dd *cdd)
{
disable_sched(cdd);
- purge_descs(pdev, cdd);
+ purge_descs(dev, cdd);
cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM0_BASE);
cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM0_BASE);
- dma_free_coherent(&pdev->dev, QMGR_SCRATCH_SIZE, cdd->qmgr_scratch,
+ dma_free_coherent(dev, QMGR_SCRATCH_SIZE, cdd->qmgr_scratch,
cdd->scratch_phys);
}
-static int init_descs(struct platform_device *pdev, struct cppi41_dd *cdd)
+static int init_descs(struct device *dev, struct cppi41_dd *cdd)
{
unsigned int desc_size;
unsigned int mem_decs;
reg |= ilog2(ALLOC_DECS_NUM) - 5;
BUILD_BUG_ON(DESCS_AREAS != 1);
- cdd->cd = dma_alloc_coherent(&pdev->dev, mem_decs,
+ cdd->cd = dma_alloc_coherent(dev, mem_decs,
&cdd->descs_phys, GFP_KERNEL);
if (!cdd->cd)
return -ENOMEM;
cppi_writel(reg, cdd->sched_mem + DMA_SCHED_CTRL);
}
-static int init_cppi41(struct platform_device *pdev, struct cppi41_dd *cdd)
+static int init_cppi41(struct device *dev, struct cppi41_dd *cdd)
{
int ret;
BUILD_BUG_ON(QMGR_SCRATCH_SIZE > ((1 << 14) - 1));
- cdd->qmgr_scratch = dma_alloc_coherent(&pdev->dev, QMGR_SCRATCH_SIZE,
+ cdd->qmgr_scratch = dma_alloc_coherent(dev, QMGR_SCRATCH_SIZE,
&cdd->scratch_phys, GFP_KERNEL);
if (!cdd->qmgr_scratch)
return -ENOMEM;
cppi_writel(QMGR_SCRATCH_SIZE, cdd->qmgr_mem + QMGR_LRAM_SIZE);
cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM1_BASE);
- ret = init_descs(pdev, cdd);
+ ret = init_descs(dev, cdd);
if (ret)
goto err_td;
init_sched(cdd);
return 0;
err_td:
- deinit_cpii41(pdev, cdd);
+ deinit_cppi41(dev, cdd);
return ret;
}
};
MODULE_DEVICE_TABLE(of, cppi41_dma_ids);
-static const struct cppi_glue_infos *get_glue_info(struct platform_device *pdev)
+static const struct cppi_glue_infos *get_glue_info(struct device *dev)
{
const struct of_device_id *of_id;
- of_id = of_match_node(cppi41_dma_ids, pdev->dev.of_node);
+ of_id = of_match_node(cppi41_dma_ids, dev->of_node);
if (!of_id)
return NULL;
return of_id->data;
static int cppi41_dma_probe(struct platform_device *pdev)
{
struct cppi41_dd *cdd;
+ struct device *dev = &pdev->dev;
const struct cppi_glue_infos *glue_info;
int irq;
int ret;
- glue_info = get_glue_info(pdev);
+ glue_info = get_glue_info(dev);
if (!glue_info)
return -EINVAL;
cdd->ddev.device_issue_pending = cppi41_dma_issue_pending;
cdd->ddev.device_prep_slave_sg = cppi41_dma_prep_slave_sg;
cdd->ddev.device_control = cppi41_dma_control;
- cdd->ddev.dev = &pdev->dev;
+ cdd->ddev.dev = dev;
INIT_LIST_HEAD(&cdd->ddev.channels);
cpp41_dma_info.dma_cap = cdd->ddev.cap_mask;
- cdd->usbss_mem = of_iomap(pdev->dev.of_node, 0);
- cdd->ctrl_mem = of_iomap(pdev->dev.of_node, 1);
- cdd->sched_mem = of_iomap(pdev->dev.of_node, 2);
- cdd->qmgr_mem = of_iomap(pdev->dev.of_node, 3);
+ cdd->usbss_mem = of_iomap(dev->of_node, 0);
+ cdd->ctrl_mem = of_iomap(dev->of_node, 1);
+ cdd->sched_mem = of_iomap(dev->of_node, 2);
+ cdd->qmgr_mem = of_iomap(dev->of_node, 3);
if (!cdd->usbss_mem || !cdd->ctrl_mem || !cdd->sched_mem ||
!cdd->qmgr_mem) {
goto err_remap;
}
- pm_runtime_enable(&pdev->dev);
- ret = pm_runtime_get_sync(&pdev->dev);
- if (ret)
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0)
goto err_get_sync;
cdd->queues_rx = glue_info->queues_rx;
cdd->queues_tx = glue_info->queues_tx;
cdd->td_queue = glue_info->td_queue;
- ret = init_cppi41(pdev, cdd);
+ ret = init_cppi41(dev, cdd);
if (ret)
goto err_init_cppi;
- ret = cppi41_add_chans(pdev, cdd);
+ ret = cppi41_add_chans(dev, cdd);
if (ret)
goto err_chans;
- irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+ irq = irq_of_parse_and_map(dev->of_node, 0);
if (!irq)
goto err_irq;
cppi_writel(USBSS_IRQ_PD_COMP, cdd->usbss_mem + USBSS_IRQ_ENABLER);
ret = request_irq(irq, glue_info->isr, IRQF_SHARED,
- dev_name(&pdev->dev), cdd);
+ dev_name(dev), cdd);
if (ret)
goto err_irq;
cdd->irq = irq;
if (ret)
goto err_dma_reg;
- ret = of_dma_controller_register(pdev->dev.of_node,
+ ret = of_dma_controller_register(dev->of_node,
cppi41_dma_xlate, &cpp41_dma_info);
if (ret)
goto err_of;
cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
cleanup_chans(cdd);
err_chans:
- deinit_cpii41(pdev, cdd);
+ deinit_cppi41(dev, cdd);
err_init_cppi:
- pm_runtime_put(&pdev->dev);
+ pm_runtime_put(dev);
err_get_sync:
- pm_runtime_disable(&pdev->dev);
+ pm_runtime_disable(dev);
iounmap(cdd->usbss_mem);
iounmap(cdd->ctrl_mem);
iounmap(cdd->sched_mem);
cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
free_irq(cdd->irq, cdd);
cleanup_chans(cdd);
- deinit_cpii41(pdev, cdd);
+ deinit_cppi41(&pdev->dev, cdd);
iounmap(cdd->usbss_mem);
iounmap(cdd->ctrl_mem);
iounmap(cdd->sched_mem);
return 0;
}
+#ifdef CONFIG_PM_SLEEP
+static int cppi41_suspend(struct device *dev)
+{
+ struct cppi41_dd *cdd = dev_get_drvdata(dev);
+
+ cdd->dma_tdfdq = cppi_readl(cdd->ctrl_mem + DMA_TDFDQ);
+ cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
+ disable_sched(cdd);
+
+ return 0;
+}
+
+static int cppi41_resume(struct device *dev)
+{
+ struct cppi41_dd *cdd = dev_get_drvdata(dev);
+ struct cppi41_channel *c;
+ int i;
+
+ for (i = 0; i < DESCS_AREAS; i++)
+ cppi_writel(cdd->descs_phys, cdd->qmgr_mem + QMGR_MEMBASE(i));
+
+ list_for_each_entry(c, &cdd->ddev.channels, chan.device_node)
+ if (!c->is_tx)
+ cppi_writel(c->q_num, c->gcr_reg + RXHPCRA0);
+
+ init_sched(cdd);
+
+ cppi_writel(cdd->dma_tdfdq, cdd->ctrl_mem + DMA_TDFDQ);
+ cppi_writel(cdd->scratch_phys, cdd->qmgr_mem + QMGR_LRAM0_BASE);
+ cppi_writel(QMGR_SCRATCH_SIZE, cdd->qmgr_mem + QMGR_LRAM_SIZE);
+ cppi_writel(0, cdd->qmgr_mem + QMGR_LRAM1_BASE);
+
+ cppi_writel(USBSS_IRQ_PD_COMP, cdd->usbss_mem + USBSS_IRQ_ENABLER);
+
+ return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(cppi41_pm_ops, cppi41_suspend, cppi41_resume);
+
static struct platform_driver cpp41_dma_driver = {
.probe = cppi41_dma_probe,
.remove = cppi41_dma_remove,
.driver = {
.name = "cppi41-dma-engine",
.owner = THIS_MODULE,
+ .pm = &cppi41_pm_ops,
.of_match_table = of_match_ptr(cppi41_dma_ids),
},
};
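[Editor's note] The cppi41 hunks above make two separable changes: probe and its helpers now take a cached struct device * instead of the whole platform_device, and the driver gains system sleep callbacks wired up through SIMPLE_DEV_PM_OPS. A minimal sketch of both patterns, with hypothetical foo_* names standing in for the driver specifics:

	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>
	#include <linux/pm.h>

	static int foo_init(struct device *dev)
	{
		return 0;	/* helpers take a plain struct device */
	}

	static int foo_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;	/* cache once */

		pm_runtime_enable(dev);
		return foo_init(dev);
	}

	#ifdef CONFIG_PM_SLEEP
	static int foo_suspend(struct device *dev) { return 0; }
	static int foo_resume(struct device *dev)  { return 0; }
	#endif

	/* SIMPLE_DEV_PM_OPS expands to an empty dev_pm_ops when
	 * CONFIG_PM_SLEEP is off, which is why the callbacks above are
	 * guarded by the same #ifdef. */
	static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);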
unsigned long flags;
status = dma_cookie_status(c, cookie, state);
- if (status == DMA_SUCCESS || !state)
+ if (status == DMA_COMPLETE || !state)
return status;
spin_lock_irqsave(&chan->vchan.lock, flags);
#include <linux/acpi.h>
#include <linux/acpi_dma.h>
#include <linux/of_dma.h>
+#include <linux/mempool.h>
static DEFINE_MUTEX(dma_list_mutex);
static DEFINE_IDR(dma_idr);
}
EXPORT_SYMBOL(dma_async_device_unregister);
-/**
- * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
- * @chan: DMA channel to offload copy to
- * @dest: destination address (virtual)
- * @src: source address (virtual)
- * @len: length
- *
- * Both @dest and @src must be mappable to a bus address according to the
- * DMA mapping API rules for streaming mappings.
- * Both @dest and @src must stay memory resident (kernel memory or locked
- * user space pages).
- */
-dma_cookie_t
-dma_async_memcpy_buf_to_buf(struct dma_chan *chan, void *dest,
- void *src, size_t len)
-{
- struct dma_device *dev = chan->device;
- struct dma_async_tx_descriptor *tx;
- dma_addr_t dma_dest, dma_src;
- dma_cookie_t cookie;
- unsigned long flags;
+struct dmaengine_unmap_pool {
+ struct kmem_cache *cache;
+ const char *name;
+ mempool_t *pool;
+ size_t size;
+};
- dma_src = dma_map_single(dev->dev, src, len, DMA_TO_DEVICE);
- dma_dest = dma_map_single(dev->dev, dest, len, DMA_FROM_DEVICE);
- flags = DMA_CTRL_ACK |
- DMA_COMPL_SRC_UNMAP_SINGLE |
- DMA_COMPL_DEST_UNMAP_SINGLE;
- tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, flags);
+#define __UNMAP_POOL(x) { .size = x, .name = "dmaengine-unmap-" __stringify(x) }
+static struct dmaengine_unmap_pool unmap_pool[] = {
+ __UNMAP_POOL(2),
+ #if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
+ __UNMAP_POOL(16),
+ __UNMAP_POOL(128),
+ __UNMAP_POOL(256),
+ #endif
+};
- if (!tx) {
- dma_unmap_single(dev->dev, dma_src, len, DMA_TO_DEVICE);
- dma_unmap_single(dev->dev, dma_dest, len, DMA_FROM_DEVICE);
- return -ENOMEM;
+static struct dmaengine_unmap_pool *__get_unmap_pool(int nr)
+{
+ int order = get_count_order(nr);
+
+ switch (order) {
+ case 0 ... 1:
+ return &unmap_pool[0];
+ case 2 ... 4:
+ return &unmap_pool[1];
+ case 5 ... 7:
+ return &unmap_pool[2];
+ case 8:
+ return &unmap_pool[3];
+ default:
+ BUG();
+ return NULL;
}
+}
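[Editor's note] get_count_order() returns the order of the next power of two at or above nr, so the switch above buckets a request by its rounded-up size: 1..2 addresses land in unmap_pool[0], 3..16 in [1], 17..128 in [2], 129..256 in [3], and anything bigger trips the BUG(). Note that the three larger pools only exist when CONFIG_DMA_ENGINE_RAID is enabled, so non-RAID configurations must never ask for more than two addresses. A quick userspace check of the bucketing math (count_order() is a stand-in for the kernel's get_count_order()):

	#include <assert.h>

	static int count_order(unsigned int n)	/* ceil(log2(n)) for n >= 1 */
	{
		int order = 0;

		while ((1u << order) < n)
			order++;
		return order;
	}

	int main(void)
	{
		assert(count_order(2)   == 1);	/* -> unmap_pool[0] */
		assert(count_order(3)   == 2);	/* -> unmap_pool[1] */
		assert(count_order(16)  == 4);	/* -> unmap_pool[1] */
		assert(count_order(17)  == 5);	/* -> unmap_pool[2] */
		assert(count_order(256) == 8);	/* -> unmap_pool[3] */
		return 0;
	}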
- tx->callback = NULL;
- cookie = tx->tx_submit(tx);
+static void dmaengine_unmap(struct kref *kref)
+{
+ struct dmaengine_unmap_data *unmap = container_of(kref, typeof(*unmap), kref);
+ struct device *dev = unmap->dev;
+ int cnt, i;
+
+ cnt = unmap->to_cnt;
+ for (i = 0; i < cnt; i++)
+ dma_unmap_page(dev, unmap->addr[i], unmap->len,
+ DMA_TO_DEVICE);
+ cnt += unmap->from_cnt;
+ for (; i < cnt; i++)
+ dma_unmap_page(dev, unmap->addr[i], unmap->len,
+ DMA_FROM_DEVICE);
+ cnt += unmap->bidi_cnt;
+ for (; i < cnt; i++) {
+ if (unmap->addr[i] == 0)
+ continue;
+ dma_unmap_page(dev, unmap->addr[i], unmap->len,
+ DMA_BIDIRECTIONAL);
+ }
+ mempool_free(unmap, __get_unmap_pool(cnt)->pool);
+}
- preempt_disable();
- __this_cpu_add(chan->local->bytes_transferred, len);
- __this_cpu_inc(chan->local->memcpy_count);
- preempt_enable();
+void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap)
+{
+ if (unmap)
+ kref_put(&unmap->kref, dmaengine_unmap);
+}
+EXPORT_SYMBOL_GPL(dmaengine_unmap_put);
- return cookie;
+static void dmaengine_destroy_unmap_pool(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(unmap_pool); i++) {
+ struct dmaengine_unmap_pool *p = &unmap_pool[i];
+
+ if (p->pool)
+ mempool_destroy(p->pool);
+ p->pool = NULL;
+ if (p->cache)
+ kmem_cache_destroy(p->cache);
+ p->cache = NULL;
+ }
}
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
-/**
- * dma_async_memcpy_buf_to_pg - offloaded copy from address to page
- * @chan: DMA channel to offload copy to
- * @page: destination page
- * @offset: offset in page to copy to
- * @kdata: source address (virtual)
- * @len: length
- *
- * Both @page/@offset and @kdata must be mappable to a bus address according
- * to the DMA mapping API rules for streaming mappings.
- * Both @page/@offset and @kdata must stay memory resident (kernel memory or
- * locked user space pages)
- */
-dma_cookie_t
-dma_async_memcpy_buf_to_pg(struct dma_chan *chan, struct page *page,
- unsigned int offset, void *kdata, size_t len)
+static int __init dmaengine_init_unmap_pool(void)
{
- struct dma_device *dev = chan->device;
- struct dma_async_tx_descriptor *tx;
- dma_addr_t dma_dest, dma_src;
- dma_cookie_t cookie;
- unsigned long flags;
+ int i;
- dma_src = dma_map_single(dev->dev, kdata, len, DMA_TO_DEVICE);
- dma_dest = dma_map_page(dev->dev, page, offset, len, DMA_FROM_DEVICE);
- flags = DMA_CTRL_ACK | DMA_COMPL_SRC_UNMAP_SINGLE;
- tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, flags);
+ for (i = 0; i < ARRAY_SIZE(unmap_pool); i++) {
+ struct dmaengine_unmap_pool *p = &unmap_pool[i];
+ size_t size;
- if (!tx) {
- dma_unmap_single(dev->dev, dma_src, len, DMA_TO_DEVICE);
- dma_unmap_page(dev->dev, dma_dest, len, DMA_FROM_DEVICE);
- return -ENOMEM;
+ size = sizeof(struct dmaengine_unmap_data) +
+ sizeof(dma_addr_t) * p->size;
+
+ p->cache = kmem_cache_create(p->name, size, 0,
+ SLAB_HWCACHE_ALIGN, NULL);
+ if (!p->cache)
+ break;
+ p->pool = mempool_create_slab_pool(1, p->cache);
+ if (!p->pool)
+ break;
}
- tx->callback = NULL;
- cookie = tx->tx_submit(tx);
+ if (i == ARRAY_SIZE(unmap_pool))
+ return 0;
- preempt_disable();
- __this_cpu_add(chan->local->bytes_transferred, len);
- __this_cpu_inc(chan->local->memcpy_count);
- preempt_enable();
+ dmaengine_destroy_unmap_pool();
+ return -ENOMEM;
+}
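[Editor's note] Each pool layers a mempool of minimum depth one over a kmem_cache, so even a GFP_NOWAIT allocation under memory pressure can fall back to the reserved element. The object size accounts for the trailing address array: for the 128-entry pool, size = sizeof(struct dmaengine_unmap_data) + 128 * sizeof(dma_addr_t), which works out to roughly 1 KiB on a 64-bit build with 8-byte dma_addr_t.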
- return cookie;
+struct dmaengine_unmap_data *
+dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags)
+{
+ struct dmaengine_unmap_data *unmap;
+
+ unmap = mempool_alloc(__get_unmap_pool(nr)->pool, flags);
+ if (!unmap)
+ return NULL;
+
+ memset(unmap, 0, sizeof(*unmap));
+ kref_init(&unmap->kref);
+ unmap->dev = dev;
+
+ return unmap;
}
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
+EXPORT_SYMBOL(dmaengine_get_unmap_data);
/**
* dma_async_memcpy_pg_to_pg - offloaded copy from page to page
{
struct dma_device *dev = chan->device;
struct dma_async_tx_descriptor *tx;
- dma_addr_t dma_dest, dma_src;
+ struct dmaengine_unmap_data *unmap;
dma_cookie_t cookie;
unsigned long flags;
- dma_src = dma_map_page(dev->dev, src_pg, src_off, len, DMA_TO_DEVICE);
- dma_dest = dma_map_page(dev->dev, dest_pg, dest_off, len,
- DMA_FROM_DEVICE);
+ unmap = dmaengine_get_unmap_data(dev->dev, 2, GFP_NOWAIT);
+ if (!unmap)
+ return -ENOMEM;
+
+ unmap->to_cnt = 1;
+ unmap->from_cnt = 1;
+ unmap->addr[0] = dma_map_page(dev->dev, src_pg, src_off, len,
+ DMA_TO_DEVICE);
+ unmap->addr[1] = dma_map_page(dev->dev, dest_pg, dest_off, len,
+ DMA_FROM_DEVICE);
+ unmap->len = len;
flags = DMA_CTRL_ACK;
- tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, flags);
+ tx = dev->device_prep_dma_memcpy(chan, unmap->addr[1], unmap->addr[0],
+ len, flags);
if (!tx) {
- dma_unmap_page(dev->dev, dma_src, len, DMA_TO_DEVICE);
- dma_unmap_page(dev->dev, dma_dest, len, DMA_FROM_DEVICE);
+ dmaengine_unmap_put(unmap);
return -ENOMEM;
}
- tx->callback = NULL;
+ dma_set_unmap(tx, unmap);
cookie = tx->tx_submit(tx);
+ dmaengine_unmap_put(unmap);
preempt_disable();
__this_cpu_add(chan->local->bytes_transferred, len);
}
EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
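[Editor's note] The rewritten pg_to_pg path above is the model user of the new unmap data, and its reference counting is worth spelling out. A sketch of the lifetime, assuming dma_set_unmap() takes an extra reference on behalf of the descriptor (which the completion path then drops):

	/*
	 *  unmap = dmaengine_get_unmap_data(dev, 2, GFP_NOWAIT);   kref = 1
	 *  dma_set_unmap(tx, unmap);                                kref = 2
	 *  cookie = tx->tx_submit(tx);
	 *  dmaengine_unmap_put(unmap);     submitter's ref dropped, kref = 1
	 *
	 * On completion the descriptor's reference is dropped too, the kref
	 * hits zero and dmaengine_unmap() runs: addr[0..to_cnt) is unmapped
	 * DMA_TO_DEVICE, the next from_cnt entries DMA_FROM_DEVICE, and the
	 * next bidi_cnt entries DMA_BIDIRECTIONAL (zero addresses skipped),
	 * before the struct returns to its mempool.
	 */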
+/**
+ * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
+ * @chan: DMA channel to offload copy to
+ * @dest: destination address (virtual)
+ * @src: source address (virtual)
+ * @len: length
+ *
+ * Both @dest and @src must be mappable to a bus address according to the
+ * DMA mapping API rules for streaming mappings.
+ * Both @dest and @src must stay memory resident (kernel memory or locked
+ * user space pages).
+ */
+dma_cookie_t
+dma_async_memcpy_buf_to_buf(struct dma_chan *chan, void *dest,
+ void *src, size_t len)
+{
+ return dma_async_memcpy_pg_to_pg(chan, virt_to_page(dest),
+ (unsigned long) dest & ~PAGE_MASK,
+ virt_to_page(src),
+ (unsigned long) src & ~PAGE_MASK, len);
+}
+EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
+
+/**
+ * dma_async_memcpy_buf_to_pg - offloaded copy from address to page
+ * @chan: DMA channel to offload copy to
+ * @page: destination page
+ * @offset: offset in page to copy to
+ * @kdata: source address (virtual)
+ * @len: length
+ *
+ * Both @page/@offset and @kdata must be mappable to a bus address according
+ * to the DMA mapping API rules for streaming mappings.
+ * Both @page/@offset and @kdata must stay memory resident (kernel memory or
+ * locked user space pages)
+ */
+dma_cookie_t
+dma_async_memcpy_buf_to_pg(struct dma_chan *chan, struct page *page,
+ unsigned int offset, void *kdata, size_t len)
+{
+ return dma_async_memcpy_pg_to_pg(chan, page, offset,
+ virt_to_page(kdata),
+ (unsigned long) kdata & ~PAGE_MASK, len);
+}
+EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
+
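[Editor's note] With the unmap bookkeeping centralized, the old buf_to_buf and buf_to_pg entry points collapse into the wrappers above, which split a kernel virtual address into a page plus in-page offset. A sketch of that split (the helper name is hypothetical; the math is only valid for linear-mapped memory such as kmalloc buffers, while vmalloc memory would need vmalloc_to_page() instead):

	#include <linux/mm.h>

	static void buf_to_page_off(void *buf, struct page **pg, unsigned int *off)
	{
		*pg  = virt_to_page(buf);
		*off = (unsigned long)buf & ~PAGE_MASK;	/* low PAGE_SHIFT bits */
	}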
void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx,
struct dma_chan *chan)
{
unsigned long dma_sync_wait_timeout = jiffies + msecs_to_jiffies(5000);
if (!tx)
- return DMA_SUCCESS;
+ return DMA_COMPLETE;
while (tx->cookie == -EBUSY) {
if (time_after_eq(jiffies, dma_sync_wait_timeout)) {
static int __init dma_bus_init(void)
{
+ int err = dmaengine_init_unmap_pool();
+
+ if (err)
+ return err;
return class_register(&dma_devclass);
}
arch_initcall(dma_bus_init);
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/wait.h>
-#include <linux/ctype.h>
-#include <linux/debugfs.h>
-#include <linux/uaccess.h>
-#include <linux/seq_file.h>
static unsigned int test_buf_size = 16384;
module_param(test_buf_size, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(timeout, "Transfer Timeout in msec (default: 3000), "
"Pass -1 for infinite timeout");
-/* Maximum amount of mismatched bytes in buffer to print */
-#define MAX_ERROR_COUNT 32
-
-/*
- * Initialization patterns. All bytes in the source buffer has bit 7
- * set, all bytes in the destination buffer has bit 7 cleared.
- *
- * Bit 6 is set for all bytes which are to be copied by the DMA
- * engine. Bit 5 is set for all bytes which are to be overwritten by
- * the DMA engine.
- *
- * The remaining bits are the inverse of a counter which increments by
- * one for each byte address.
- */
-#define PATTERN_SRC 0x80
-#define PATTERN_DST 0x00
-#define PATTERN_COPY 0x40
-#define PATTERN_OVERWRITE 0x20
-#define PATTERN_COUNT_MASK 0x1f
-
-enum dmatest_error_type {
- DMATEST_ET_OK,
- DMATEST_ET_MAP_SRC,
- DMATEST_ET_MAP_DST,
- DMATEST_ET_PREP,
- DMATEST_ET_SUBMIT,
- DMATEST_ET_TIMEOUT,
- DMATEST_ET_DMA_ERROR,
- DMATEST_ET_DMA_IN_PROGRESS,
- DMATEST_ET_VERIFY,
- DMATEST_ET_VERIFY_BUF,
-};
-
-struct dmatest_verify_buffer {
- unsigned int index;
- u8 expected;
- u8 actual;
-};
-
-struct dmatest_verify_result {
- unsigned int error_count;
- struct dmatest_verify_buffer data[MAX_ERROR_COUNT];
- u8 pattern;
- bool is_srcbuf;
-};
-
-struct dmatest_thread_result {
- struct list_head node;
- unsigned int n;
- unsigned int src_off;
- unsigned int dst_off;
- unsigned int len;
- enum dmatest_error_type type;
- union {
- unsigned long data;
- dma_cookie_t cookie;
- enum dma_status status;
- int error;
- struct dmatest_verify_result *vr;
- };
-};
-
-struct dmatest_result {
- struct list_head node;
- char *name;
- struct list_head results;
-};
-
-struct dmatest_info;
-
-struct dmatest_thread {
- struct list_head node;
- struct dmatest_info *info;
- struct task_struct *task;
- struct dma_chan *chan;
- u8 **srcs;
- u8 **dsts;
- enum dma_transaction_type type;
- bool done;
-};
+static bool noverify;
+module_param(noverify, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(noverify, "Disable random data setup and verification");
-struct dmatest_chan {
- struct list_head node;
- struct dma_chan *chan;
- struct list_head threads;
-};
+static bool verbose;
+module_param(verbose, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(verbose, "Enable \"success\" result messages (default: off)");
/**
* struct dmatest_params - test parameters.
unsigned int xor_sources;
unsigned int pq_sources;
int timeout;
+ bool noverify;
};
/**
* @params: test parameters
* @lock: access protection to the fields of this structure
*/
-struct dmatest_info {
+static struct dmatest_info {
/* Test parameters */
struct dmatest_params params;
struct list_head channels;
unsigned int nr_channels;
struct mutex lock;
+ bool did_init;
+} test_info = {
+ .channels = LIST_HEAD_INIT(test_info.channels),
+ .lock = __MUTEX_INITIALIZER(test_info.lock),
+};
+
+static int dmatest_run_set(const char *val, const struct kernel_param *kp);
+static int dmatest_run_get(char *val, const struct kernel_param *kp);
+static struct kernel_param_ops run_ops = {
+ .set = dmatest_run_set,
+ .get = dmatest_run_get,
+};
+static bool dmatest_run;
+module_param_cb(run, &run_ops, &dmatest_run, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(run, "Run the test (default: false)");
+
+/* Maximum amount of mismatched bytes in buffer to print */
+#define MAX_ERROR_COUNT 32
+
+/*
+ * Initialization patterns. All bytes in the source buffer have bit 7
+ * set, all bytes in the destination buffer have bit 7 cleared.
+ *
+ * Bit 6 is set for all bytes which are to be copied by the DMA
+ * engine. Bit 5 is set for all bytes which are to be overwritten by
+ * the DMA engine.
+ *
+ * The remaining bits are the inverse of a counter which increments by
+ * one for each byte address.
+ */
+#define PATTERN_SRC 0x80
+#define PATTERN_DST 0x00
+#define PATTERN_COPY 0x40
+#define PATTERN_OVERWRITE 0x20
+#define PATTERN_COUNT_MASK 0x1f
- /* debugfs related stuff */
- struct dentry *root;
+struct dmatest_thread {
+ struct list_head node;
+ struct dmatest_info *info;
+ struct task_struct *task;
+ struct dma_chan *chan;
+ u8 **srcs;
+ u8 **dsts;
+ enum dma_transaction_type type;
+ bool done;
+};
- /* Test results */
- struct list_head results;
- struct mutex results_lock;
+struct dmatest_chan {
+ struct list_head node;
+ struct dma_chan *chan;
+ struct list_head threads;
};
-static struct dmatest_info test_info;
+static DECLARE_WAIT_QUEUE_HEAD(thread_wait);
+static bool wait;
+
+static bool is_threaded_test_run(struct dmatest_info *info)
+{
+ struct dmatest_chan *dtc;
+
+ list_for_each_entry(dtc, &info->channels, node) {
+ struct dmatest_thread *thread;
+
+ list_for_each_entry(thread, &dtc->threads, node) {
+ if (!thread->done)
+ return true;
+ }
+ }
+
+ return false;
+}
+
+static int dmatest_wait_get(char *val, const struct kernel_param *kp)
+{
+ struct dmatest_info *info = &test_info;
+ struct dmatest_params *params = &info->params;
+
+ if (params->iterations)
+ wait_event(thread_wait, !is_threaded_test_run(info));
+ wait = true;
+ return param_get_bool(val, kp);
+}
+
+static struct kernel_param_ops wait_ops = {
+ .get = dmatest_wait_get,
+ .set = param_set_bool,
+};
+module_param_cb(wait, &wait_ops, &wait, S_IRUGO);
+MODULE_PARM_DESC(wait, "Wait for tests to complete (default: false)");
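[Editor's note] The run and wait parameters replace the old debugfs files: module_param_cb() lets a parameter intercept reads and writes through /sys/module/dmatest/parameters/, which is how writing run=1 restarts the test at runtime and how reading wait blocks until every thread is done. A minimal sketch of the hook mechanism, with hypothetical my_* names:

	#include <linux/module.h>
	#include <linux/moduleparam.h>

	static bool my_flag;

	static int my_flag_set(const char *val, const struct kernel_param *kp)
	{
		int ret = param_set_bool(val, kp);	/* parse "val" into my_flag */

		if (!ret && my_flag)
			pr_info("my_flag enabled, kick off work here\n");
		return ret;
	}

	static const struct kernel_param_ops my_flag_ops = {
		.set = my_flag_set,
		.get = param_get_bool,
	};
	module_param_cb(my_flag, &my_flag_ops, &my_flag, S_IRUGO | S_IWUSR);
	MODULE_PARM_DESC(my_flag, "demo parameter with a set-time hook");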
static bool dmatest_match_channel(struct dmatest_params *params,
struct dma_chan *chan)
{
unsigned long buf;
- get_random_bytes(&buf, sizeof(buf));
+ prandom_bytes(&buf, sizeof(buf));
return buf;
}
}
}
-static unsigned int dmatest_verify(struct dmatest_verify_result *vr, u8 **bufs,
- unsigned int start, unsigned int end, unsigned int counter,
- u8 pattern, bool is_srcbuf)
+static void dmatest_mismatch(u8 actual, u8 pattern, unsigned int index,
+ unsigned int counter, bool is_srcbuf)
+{
+ u8 diff = actual ^ pattern;
+ u8 expected = pattern | (~counter & PATTERN_COUNT_MASK);
+ const char *thread_name = current->comm;
+
+ if (is_srcbuf)
+ pr_warn("%s: srcbuf[0x%x] overwritten! Expected %02x, got %02x\n",
+ thread_name, index, expected, actual);
+ else if ((pattern & PATTERN_COPY)
+ && (diff & (PATTERN_COPY | PATTERN_OVERWRITE)))
+ pr_warn("%s: dstbuf[0x%x] not copied! Expected %02x, got %02x\n",
+ thread_name, index, expected, actual);
+ else if (diff & PATTERN_SRC)
+ pr_warn("%s: dstbuf[0x%x] was copied! Expected %02x, got %02x\n",
+ thread_name, index, expected, actual);
+ else
+ pr_warn("%s: dstbuf[0x%x] mismatch! Expected %02x, got %02x\n",
+ thread_name, index, expected, actual);
+}
+
+static unsigned int dmatest_verify(u8 **bufs, unsigned int start,
+ unsigned int end, unsigned int counter, u8 pattern,
+ bool is_srcbuf)
{
unsigned int i;
unsigned int error_count = 0;
u8 expected;
u8 *buf;
unsigned int counter_orig = counter;
- struct dmatest_verify_buffer *vb;
for (; (buf = *bufs); bufs++) {
counter = counter_orig;
actual = buf[i];
expected = pattern | (~counter & PATTERN_COUNT_MASK);
if (actual != expected) {
- if (error_count < MAX_ERROR_COUNT && vr) {
- vb = &vr->data[error_count];
- vb->index = i;
- vb->expected = expected;
- vb->actual = actual;
- }
+ if (error_count < MAX_ERROR_COUNT)
+ dmatest_mismatch(actual, pattern, i,
+ counter, is_srcbuf);
error_count++;
}
counter++;
}
if (error_count > MAX_ERROR_COUNT)
- pr_warning("%s: %u errors suppressed\n",
+ pr_warn("%s: %u errors suppressed\n",
current->comm, error_count - MAX_ERROR_COUNT);
return error_count;
wake_up_all(done->wait);
}
-static inline void unmap_src(struct device *dev, dma_addr_t *addr, size_t len,
- unsigned int count)
-{
- while (count--)
- dma_unmap_single(dev, addr[count], len, DMA_TO_DEVICE);
-}
-
-static inline void unmap_dst(struct device *dev, dma_addr_t *addr, size_t len,
- unsigned int count)
-{
- while (count--)
- dma_unmap_single(dev, addr[count], len, DMA_BIDIRECTIONAL);
-}
-
static unsigned int min_odd(unsigned int x, unsigned int y)
{
unsigned int val = min(x, y);
return val % 2 ? val : val - 1;
}
-static char *verify_result_get_one(struct dmatest_verify_result *vr,
- unsigned int i)
+static void result(const char *err, unsigned int n, unsigned int src_off,
+ unsigned int dst_off, unsigned int len, unsigned long data)
{
- struct dmatest_verify_buffer *vb = &vr->data[i];
- u8 diff = vb->actual ^ vr->pattern;
- static char buf[512];
- char *msg;
-
- if (vr->is_srcbuf)
- msg = "srcbuf overwritten!";
- else if ((vr->pattern & PATTERN_COPY)
- && (diff & (PATTERN_COPY | PATTERN_OVERWRITE)))
- msg = "dstbuf not copied!";
- else if (diff & PATTERN_SRC)
- msg = "dstbuf was copied!";
- else
- msg = "dstbuf mismatch!";
-
- snprintf(buf, sizeof(buf) - 1, "%s [0x%x] Expected %02x, got %02x", msg,
- vb->index, vb->expected, vb->actual);
-
- return buf;
+ pr_info("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
+ current->comm, n, err, src_off, dst_off, len, data);
}
-static char *thread_result_get(const char *name,
- struct dmatest_thread_result *tr)
+static void dbg_result(const char *err, unsigned int n, unsigned int src_off,
+ unsigned int dst_off, unsigned int len,
+ unsigned long data)
{
- static const char * const messages[] = {
- [DMATEST_ET_OK] = "No errors",
- [DMATEST_ET_MAP_SRC] = "src mapping error",
- [DMATEST_ET_MAP_DST] = "dst mapping error",
- [DMATEST_ET_PREP] = "prep error",
- [DMATEST_ET_SUBMIT] = "submit error",
- [DMATEST_ET_TIMEOUT] = "test timed out",
- [DMATEST_ET_DMA_ERROR] =
- "got completion callback (DMA_ERROR)",
- [DMATEST_ET_DMA_IN_PROGRESS] =
- "got completion callback (DMA_IN_PROGRESS)",
- [DMATEST_ET_VERIFY] = "errors",
- [DMATEST_ET_VERIFY_BUF] = "verify errors",
- };
- static char buf[512];
-
- snprintf(buf, sizeof(buf) - 1,
- "%s: #%u: %s with src_off=0x%x ""dst_off=0x%x len=0x%x (%lu)",
- name, tr->n, messages[tr->type], tr->src_off, tr->dst_off,
- tr->len, tr->data);
-
- return buf;
+ pr_debug("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
+ current->comm, n, err, src_off, dst_off, len, data);
}
-static int thread_result_add(struct dmatest_info *info,
- struct dmatest_result *r, enum dmatest_error_type type,
- unsigned int n, unsigned int src_off, unsigned int dst_off,
- unsigned int len, unsigned long data)
-{
- struct dmatest_thread_result *tr;
-
- tr = kzalloc(sizeof(*tr), GFP_KERNEL);
- if (!tr)
- return -ENOMEM;
-
- tr->type = type;
- tr->n = n;
- tr->src_off = src_off;
- tr->dst_off = dst_off;
- tr->len = len;
- tr->data = data;
+#define verbose_result(err, n, src_off, dst_off, len, data) ({ \
+ if (verbose) \
+ result(err, n, src_off, dst_off, len, data); \
+ else \
+ dbg_result(err, n, src_off, dst_off, len, data); \
+})
- mutex_lock(&info->results_lock);
- list_add_tail(&tr->node, &r->results);
- mutex_unlock(&info->results_lock);
-
- if (tr->type == DMATEST_ET_OK)
- pr_debug("%s\n", thread_result_get(r->name, tr));
- else
- pr_warn("%s\n", thread_result_get(r->name, tr));
-
- return 0;
-}
-
-static unsigned int verify_result_add(struct dmatest_info *info,
- struct dmatest_result *r, unsigned int n,
- unsigned int src_off, unsigned int dst_off, unsigned int len,
- u8 **bufs, int whence, unsigned int counter, u8 pattern,
- bool is_srcbuf)
+static unsigned long long dmatest_persec(s64 runtime, unsigned int val)
{
- struct dmatest_verify_result *vr;
- unsigned int error_count;
- unsigned int buf_off = is_srcbuf ? src_off : dst_off;
- unsigned int start, end;
-
- if (whence < 0) {
- start = 0;
- end = buf_off;
- } else if (whence > 0) {
- start = buf_off + len;
- end = info->params.buf_size;
- } else {
- start = buf_off;
- end = buf_off + len;
- }
+ unsigned long long per_sec = 1000000;
- vr = kmalloc(sizeof(*vr), GFP_KERNEL);
- if (!vr) {
- pr_warn("dmatest: No memory to store verify result\n");
- return dmatest_verify(NULL, bufs, start, end, counter, pattern,
- is_srcbuf);
- }
-
- vr->pattern = pattern;
- vr->is_srcbuf = is_srcbuf;
-
- error_count = dmatest_verify(vr, bufs, start, end, counter, pattern,
- is_srcbuf);
- if (error_count) {
- vr->error_count = error_count;
- thread_result_add(info, r, DMATEST_ET_VERIFY_BUF, n, src_off,
- dst_off, len, (unsigned long)vr);
- return error_count;
- }
-
- kfree(vr);
- return 0;
-}
-
-static void result_free(struct dmatest_info *info, const char *name)
-{
- struct dmatest_result *r, *_r;
-
- mutex_lock(&info->results_lock);
- list_for_each_entry_safe(r, _r, &info->results, node) {
- struct dmatest_thread_result *tr, *_tr;
-
- if (name && strcmp(r->name, name))
- continue;
-
- list_for_each_entry_safe(tr, _tr, &r->results, node) {
- if (tr->type == DMATEST_ET_VERIFY_BUF)
- kfree(tr->vr);
- list_del(&tr->node);
- kfree(tr);
- }
+ if (runtime <= 0)
+ return 0;
- kfree(r->name);
- list_del(&r->node);
- kfree(r);
+	/* drop precision until runtime fits in 32 bits */
+ while (runtime > UINT_MAX) {
+ runtime >>= 1;
+ per_sec <<= 1;
}
- mutex_unlock(&info->results_lock);
+ per_sec *= val;
+ do_div(per_sec, runtime);
+ return per_sec;
}
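[Editor's note] The helper computes val * 10^6 / runtime, with runtime measured in microseconds: for example, 5000 completed tests over a runtime of 2,500,000 us give 1,000,000 * 5000 / 2,500,000 = 2000 iops. The shift loop only engages once runtime exceeds UINT_MAX microseconds (roughly 71 minutes), halving runtime and doubling per_sec in lockstep so the quotient is preserved while do_div(), whose divisor is 32-bit, remains usable.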
-static struct dmatest_result *result_init(struct dmatest_info *info,
- const char *name)
+static unsigned long long dmatest_KBs(s64 runtime, unsigned long long len)
{
- struct dmatest_result *r;
-
- r = kzalloc(sizeof(*r), GFP_KERNEL);
- if (r) {
- r->name = kstrdup(name, GFP_KERNEL);
- INIT_LIST_HEAD(&r->results);
- mutex_lock(&info->results_lock);
- list_add_tail(&r->node, &info->results);
- mutex_unlock(&info->results_lock);
- }
- return r;
+ return dmatest_persec(runtime, len >> 10);
}
/*
struct dmatest_params *params;
struct dma_chan *chan;
struct dma_device *dev;
- const char *thread_name;
unsigned int src_off, dst_off, len;
unsigned int error_count;
unsigned int failed_tests = 0;
int src_cnt;
int dst_cnt;
int i;
- struct dmatest_result *result;
+ ktime_t ktime;
+ s64 runtime = 0;
+ unsigned long long total_len = 0;
- thread_name = current->comm;
set_freezable();
ret = -ENOMEM;
} else
goto err_thread_type;
- result = result_init(info, thread_name);
- if (!result)
- goto err_srcs;
-
thread->srcs = kcalloc(src_cnt+1, sizeof(u8 *), GFP_KERNEL);
if (!thread->srcs)
goto err_srcs;
set_user_nice(current, 10);
/*
- * src buffers are freed by the DMAEngine code with dma_unmap_single()
- * dst buffers are freed by ourselves below
+ * src and dst buffers are freed by ourselves below
*/
- flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT
- | DMA_COMPL_SKIP_DEST_UNMAP | DMA_COMPL_SRC_UNMAP_SINGLE;
+ flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+ ktime = ktime_get();
while (!kthread_should_stop()
&& !(params->iterations && total_tests >= params->iterations)) {
struct dma_async_tx_descriptor *tx = NULL;
- dma_addr_t dma_srcs[src_cnt];
- dma_addr_t dma_dsts[dst_cnt];
+ struct dmaengine_unmap_data *um;
+ dma_addr_t srcs[src_cnt];
+ dma_addr_t *dsts;
u8 align = 0;
total_tests++;
break;
}
- len = dmatest_random() % params->buf_size + 1;
+ if (params->noverify) {
+ len = params->buf_size;
+ src_off = 0;
+ dst_off = 0;
+ } else {
+ len = dmatest_random() % params->buf_size + 1;
+ len = (len >> align) << align;
+ if (!len)
+ len = 1 << align;
+ src_off = dmatest_random() % (params->buf_size - len + 1);
+ dst_off = dmatest_random() % (params->buf_size - len + 1);
+
+ src_off = (src_off >> align) << align;
+ dst_off = (dst_off >> align) << align;
+
+ dmatest_init_srcs(thread->srcs, src_off, len,
+ params->buf_size);
+ dmatest_init_dsts(thread->dsts, dst_off, len,
+ params->buf_size);
+ }
+
len = (len >> align) << align;
if (!len)
len = 1 << align;
- src_off = dmatest_random() % (params->buf_size - len + 1);
- dst_off = dmatest_random() % (params->buf_size - len + 1);
+ total_len += len;
- src_off = (src_off >> align) << align;
- dst_off = (dst_off >> align) << align;
-
- dmatest_init_srcs(thread->srcs, src_off, len, params->buf_size);
- dmatest_init_dsts(thread->dsts, dst_off, len, params->buf_size);
+ um = dmaengine_get_unmap_data(dev->dev, src_cnt+dst_cnt,
+ GFP_KERNEL);
+ if (!um) {
+ failed_tests++;
+ result("unmap data NULL", total_tests,
+ src_off, dst_off, len, ret);
+ continue;
+ }
+ um->len = params->buf_size;
for (i = 0; i < src_cnt; i++) {
- u8 *buf = thread->srcs[i] + src_off;
-
- dma_srcs[i] = dma_map_single(dev->dev, buf, len,
- DMA_TO_DEVICE);
- ret = dma_mapping_error(dev->dev, dma_srcs[i]);
+ void *buf = thread->srcs[i];
+ struct page *pg = virt_to_page(buf);
+ unsigned pg_off = (unsigned long) buf & ~PAGE_MASK;
+
+ um->addr[i] = dma_map_page(dev->dev, pg, pg_off,
+ um->len, DMA_TO_DEVICE);
+ srcs[i] = um->addr[i] + src_off;
+ ret = dma_mapping_error(dev->dev, um->addr[i]);
if (ret) {
- unmap_src(dev->dev, dma_srcs, len, i);
- thread_result_add(info, result,
- DMATEST_ET_MAP_SRC,
- total_tests, src_off, dst_off,
- len, ret);
+ dmaengine_unmap_put(um);
+ result("src mapping error", total_tests,
+ src_off, dst_off, len, ret);
failed_tests++;
continue;
}
+ um->to_cnt++;
}
/* map with DMA_BIDIRECTIONAL to force writeback/invalidate */
+ dsts = &um->addr[src_cnt];
for (i = 0; i < dst_cnt; i++) {
- dma_dsts[i] = dma_map_single(dev->dev, thread->dsts[i],
- params->buf_size,
- DMA_BIDIRECTIONAL);
- ret = dma_mapping_error(dev->dev, dma_dsts[i]);
+ void *buf = thread->dsts[i];
+ struct page *pg = virt_to_page(buf);
+ unsigned pg_off = (unsigned long) buf & ~PAGE_MASK;
+
+ dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len,
+ DMA_BIDIRECTIONAL);
+ ret = dma_mapping_error(dev->dev, dsts[i]);
if (ret) {
- unmap_src(dev->dev, dma_srcs, len, src_cnt);
- unmap_dst(dev->dev, dma_dsts, params->buf_size,
- i);
- thread_result_add(info, result,
- DMATEST_ET_MAP_DST,
- total_tests, src_off, dst_off,
- len, ret);
+ dmaengine_unmap_put(um);
+ result("dst mapping error", total_tests,
+ src_off, dst_off, len, ret);
failed_tests++;
continue;
}
+ um->bidi_cnt++;
}
if (thread->type == DMA_MEMCPY)
tx = dev->device_prep_dma_memcpy(chan,
- dma_dsts[0] + dst_off,
- dma_srcs[0], len,
- flags);
+ dsts[0] + dst_off,
+ srcs[0], len, flags);
else if (thread->type == DMA_XOR)
tx = dev->device_prep_dma_xor(chan,
- dma_dsts[0] + dst_off,
- dma_srcs, src_cnt,
+ dsts[0] + dst_off,
+ srcs, src_cnt,
len, flags);
else if (thread->type == DMA_PQ) {
dma_addr_t dma_pq[dst_cnt];
for (i = 0; i < dst_cnt; i++)
- dma_pq[i] = dma_dsts[i] + dst_off;
- tx = dev->device_prep_dma_pq(chan, dma_pq, dma_srcs,
+ dma_pq[i] = dsts[i] + dst_off;
+ tx = dev->device_prep_dma_pq(chan, dma_pq, srcs,
src_cnt, pq_coefs,
len, flags);
}
if (!tx) {
- unmap_src(dev->dev, dma_srcs, len, src_cnt);
- unmap_dst(dev->dev, dma_dsts, params->buf_size,
- dst_cnt);
- thread_result_add(info, result, DMATEST_ET_PREP,
- total_tests, src_off, dst_off,
- len, 0);
+ dmaengine_unmap_put(um);
+ result("prep error", total_tests, src_off,
+ dst_off, len, ret);
msleep(100);
failed_tests++;
continue;
cookie = tx->tx_submit(tx);
if (dma_submit_error(cookie)) {
- thread_result_add(info, result, DMATEST_ET_SUBMIT,
- total_tests, src_off, dst_off,
- len, cookie);
+ dmaengine_unmap_put(um);
+ result("submit error", total_tests, src_off,
+ dst_off, len, ret);
msleep(100);
failed_tests++;
continue;
* free it this time?" dancing. For now, just
* leave it dangling.
*/
- thread_result_add(info, result, DMATEST_ET_TIMEOUT,
- total_tests, src_off, dst_off,
- len, 0);
+ dmaengine_unmap_put(um);
+ result("test timed out", total_tests, src_off, dst_off,
+ len, 0);
failed_tests++;
continue;
- } else if (status != DMA_SUCCESS) {
- enum dmatest_error_type type = (status == DMA_ERROR) ?
- DMATEST_ET_DMA_ERROR : DMATEST_ET_DMA_IN_PROGRESS;
- thread_result_add(info, result, type,
- total_tests, src_off, dst_off,
- len, status);
+ } else if (status != DMA_COMPLETE) {
+ dmaengine_unmap_put(um);
+ result(status == DMA_ERROR ?
+ "completion error status" :
+ "completion busy status", total_tests, src_off,
+ dst_off, len, ret);
failed_tests++;
continue;
}
- /* Unmap by myself (see DMA_COMPL_SKIP_DEST_UNMAP above) */
- unmap_dst(dev->dev, dma_dsts, params->buf_size, dst_cnt);
+ dmaengine_unmap_put(um);
- error_count = 0;
+ if (params->noverify) {
+ verbose_result("test passed", total_tests, src_off,
+ dst_off, len, 0);
+ continue;
+ }
- pr_debug("%s: verifying source buffer...\n", thread_name);
- error_count += verify_result_add(info, result, total_tests,
- src_off, dst_off, len, thread->srcs, -1,
+ pr_debug("%s: verifying source buffer...\n", current->comm);
+ error_count = dmatest_verify(thread->srcs, 0, src_off,
0, PATTERN_SRC, true);
- error_count += verify_result_add(info, result, total_tests,
- src_off, dst_off, len, thread->srcs, 0,
- src_off, PATTERN_SRC | PATTERN_COPY, true);
- error_count += verify_result_add(info, result, total_tests,
- src_off, dst_off, len, thread->srcs, 1,
- src_off + len, PATTERN_SRC, true);
-
- pr_debug("%s: verifying dest buffer...\n", thread_name);
- error_count += verify_result_add(info, result, total_tests,
- src_off, dst_off, len, thread->dsts, -1,
+ error_count += dmatest_verify(thread->srcs, src_off,
+ src_off + len, src_off,
+ PATTERN_SRC | PATTERN_COPY, true);
+ error_count += dmatest_verify(thread->srcs, src_off + len,
+ params->buf_size, src_off + len,
+ PATTERN_SRC, true);
+
+ pr_debug("%s: verifying dest buffer...\n", current->comm);
+ error_count += dmatest_verify(thread->dsts, 0, dst_off,
0, PATTERN_DST, false);
- error_count += verify_result_add(info, result, total_tests,
- src_off, dst_off, len, thread->dsts, 0,
- src_off, PATTERN_SRC | PATTERN_COPY, false);
- error_count += verify_result_add(info, result, total_tests,
- src_off, dst_off, len, thread->dsts, 1,
- dst_off + len, PATTERN_DST, false);
+ error_count += dmatest_verify(thread->dsts, dst_off,
+ dst_off + len, src_off,
+ PATTERN_SRC | PATTERN_COPY, false);
+ error_count += dmatest_verify(thread->dsts, dst_off + len,
+ params->buf_size, dst_off + len,
+ PATTERN_DST, false);
if (error_count) {
- thread_result_add(info, result, DMATEST_ET_VERIFY,
- total_tests, src_off, dst_off,
- len, error_count);
+ result("data error", total_tests, src_off, dst_off,
+ len, error_count);
failed_tests++;
} else {
- thread_result_add(info, result, DMATEST_ET_OK,
- total_tests, src_off, dst_off,
- len, 0);
+ verbose_result("test passed", total_tests, src_off,
+ dst_off, len, 0);
}
}
+ runtime = ktime_us_delta(ktime_get(), ktime);
ret = 0;
for (i = 0; thread->dsts[i]; i++)
err_srcs:
kfree(pq_coefs);
err_thread_type:
- pr_notice("%s: terminating after %u tests, %u failures (status %d)\n",
- thread_name, total_tests, failed_tests, ret);
+ pr_info("%s: summary %u tests, %u failures %llu iops %llu KB/s (%d)\n",
+ current->comm, total_tests, failed_tests,
+ dmatest_persec(runtime, total_tests),
+ dmatest_KBs(runtime, total_len), ret);
/* terminate all transfers on specified channels */
if (ret)
dmaengine_terminate_all(chan);
thread->done = true;
-
- if (params->iterations > 0)
- while (!kthread_should_stop()) {
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wait_dmatest_exit);
- interruptible_sleep_on(&wait_dmatest_exit);
- }
+ wake_up(&thread_wait);
return ret;
}
list_for_each_entry_safe(thread, _thread, &dtc->threads, node) {
ret = kthread_stop(thread->task);
- pr_debug("dmatest: thread %s exited with status %d\n",
- thread->task->comm, ret);
+ pr_debug("thread %s exited with status %d\n",
+ thread->task->comm, ret);
list_del(&thread->node);
+ put_task_struct(thread->task);
kfree(thread);
}
for (i = 0; i < params->threads_per_chan; i++) {
thread = kzalloc(sizeof(struct dmatest_thread), GFP_KERNEL);
if (!thread) {
- pr_warning("dmatest: No memory for %s-%s%u\n",
- dma_chan_name(chan), op, i);
-
+ pr_warn("No memory for %s-%s%u\n",
+ dma_chan_name(chan), op, i);
break;
}
thread->info = info;
thread->chan = dtc->chan;
thread->type = type;
smp_wmb();
- thread->task = kthread_run(dmatest_func, thread, "%s-%s%u",
+ thread->task = kthread_create(dmatest_func, thread, "%s-%s%u",
dma_chan_name(chan), op, i);
if (IS_ERR(thread->task)) {
- pr_warning("dmatest: Failed to run thread %s-%s%u\n",
- dma_chan_name(chan), op, i);
+ pr_warn("Failed to create thread %s-%s%u\n",
+ dma_chan_name(chan), op, i);
kfree(thread);
break;
}
/* srcbuf and dstbuf are allocated by the thread itself */
-
+ get_task_struct(thread->task);
list_add_tail(&thread->node, &dtc->threads);
+ wake_up_process(thread->task);
}
return i;
dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL);
if (!dtc) {
- pr_warning("dmatest: No memory for %s\n", dma_chan_name(chan));
+ pr_warn("No memory for %s\n", dma_chan_name(chan));
return -ENOMEM;
}
thread_count += cnt > 0 ? cnt : 0;
}
- pr_info("dmatest: Started %u threads using %s\n",
+ pr_info("Started %u threads using %s\n",
thread_count, dma_chan_name(chan));
list_add_tail(&dtc->node, &info->channels);
return true;
}
-static int __run_threaded_test(struct dmatest_info *info)
+static void request_channels(struct dmatest_info *info,
+ enum dma_transaction_type type)
{
dma_cap_mask_t mask;
- struct dma_chan *chan;
- struct dmatest_params *params = &info->params;
- int err = 0;
dma_cap_zero(mask);
- dma_cap_set(DMA_MEMCPY, mask);
+ dma_cap_set(type, mask);
for (;;) {
+ struct dmatest_params *params = &info->params;
+ struct dma_chan *chan;
+
chan = dma_request_channel(mask, filter, params);
if (chan) {
- err = dmatest_add_channel(info, chan);
- if (err) {
+ if (dmatest_add_channel(info, chan)) {
dma_release_channel(chan);
break; /* add_channel failed, punt */
}
info->nr_channels >= params->max_channels)
break; /* we have all we need */
}
- return err;
}
-#ifndef MODULE
-static int run_threaded_test(struct dmatest_info *info)
+static void run_threaded_test(struct dmatest_info *info)
{
- int ret;
+ struct dmatest_params *params = &info->params;
- mutex_lock(&info->lock);
- ret = __run_threaded_test(info);
- mutex_unlock(&info->lock);
- return ret;
+ /* Copy test parameters */
+ params->buf_size = test_buf_size;
+ strlcpy(params->channel, strim(test_channel), sizeof(params->channel));
+ strlcpy(params->device, strim(test_device), sizeof(params->device));
+ params->threads_per_chan = threads_per_chan;
+ params->max_channels = max_channels;
+ params->iterations = iterations;
+ params->xor_sources = xor_sources;
+ params->pq_sources = pq_sources;
+ params->timeout = timeout;
+ params->noverify = noverify;
+
+ request_channels(info, DMA_MEMCPY);
+ request_channels(info, DMA_XOR);
+ request_channels(info, DMA_PQ);
}
-#endif
-static void __stop_threaded_test(struct dmatest_info *info)
+static void stop_threaded_test(struct dmatest_info *info)
{
struct dmatest_chan *dtc, *_dtc;
struct dma_chan *chan;
list_del(&dtc->node);
chan = dtc->chan;
dmatest_cleanup_channel(dtc);
- pr_debug("dmatest: dropped channel %s\n", dma_chan_name(chan));
+ pr_debug("dropped channel %s\n", dma_chan_name(chan));
dma_release_channel(chan);
}
info->nr_channels = 0;
}
-static void stop_threaded_test(struct dmatest_info *info)
+static void restart_threaded_test(struct dmatest_info *info, bool run)
{
- mutex_lock(&info->lock);
- __stop_threaded_test(info);
- mutex_unlock(&info->lock);
-}
-
-static int __restart_threaded_test(struct dmatest_info *info, bool run)
-{
- struct dmatest_params *params = &info->params;
+	/* We might be called early to set the "run" parameter; defer
+	 * starting threads until all parameters have been evaluated.
+	 */
+ if (!info->did_init)
+ return;
/* Stop any running test first */
- __stop_threaded_test(info);
-
- if (run == false)
- return 0;
-
- /* Clear results from previous run */
- result_free(info, NULL);
-
- /* Copy test parameters */
- params->buf_size = test_buf_size;
- strlcpy(params->channel, strim(test_channel), sizeof(params->channel));
- strlcpy(params->device, strim(test_device), sizeof(params->device));
- params->threads_per_chan = threads_per_chan;
- params->max_channels = max_channels;
- params->iterations = iterations;
- params->xor_sources = xor_sources;
- params->pq_sources = pq_sources;
- params->timeout = timeout;
+ stop_threaded_test(info);
/* Run test with new parameters */
- return __run_threaded_test(info);
-}
-
-static bool __is_threaded_test_run(struct dmatest_info *info)
-{
- struct dmatest_chan *dtc;
-
- list_for_each_entry(dtc, &info->channels, node) {
- struct dmatest_thread *thread;
-
- list_for_each_entry(thread, &dtc->threads, node) {
- if (!thread->done)
- return true;
- }
- }
-
- return false;
+ run_threaded_test(info);
}
-static ssize_t dtf_read_run(struct file *file, char __user *user_buf,
- size_t count, loff_t *ppos)
+static int dmatest_run_get(char *val, const struct kernel_param *kp)
{
- struct dmatest_info *info = file->private_data;
- char buf[3];
+ struct dmatest_info *info = &test_info;
mutex_lock(&info->lock);
-
- if (__is_threaded_test_run(info)) {
- buf[0] = 'Y';
+ if (is_threaded_test_run(info)) {
+ dmatest_run = true;
} else {
- __stop_threaded_test(info);
- buf[0] = 'N';
+ stop_threaded_test(info);
+ dmatest_run = false;
}
-
mutex_unlock(&info->lock);
- buf[1] = '\n';
- buf[2] = 0x00;
- return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static ssize_t dtf_write_run(struct file *file, const char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct dmatest_info *info = file->private_data;
- char buf[16];
- bool bv;
- int ret = 0;
- if (copy_from_user(buf, user_buf, min(count, (sizeof(buf) - 1))))
- return -EFAULT;
-
- if (strtobool(buf, &bv) == 0) {
- mutex_lock(&info->lock);
-
- if (__is_threaded_test_run(info))
- ret = -EBUSY;
- else
- ret = __restart_threaded_test(info, bv);
-
- mutex_unlock(&info->lock);
- }
-
- return ret ? ret : count;
+ return param_get_bool(val, kp);
}
-static const struct file_operations dtf_run_fops = {
- .read = dtf_read_run,
- .write = dtf_write_run,
- .open = simple_open,
- .llseek = default_llseek,
-};
-
-static int dtf_results_show(struct seq_file *sf, void *data)
+static int dmatest_run_set(const char *val, const struct kernel_param *kp)
{
- struct dmatest_info *info = sf->private;
- struct dmatest_result *result;
- struct dmatest_thread_result *tr;
- unsigned int i;
+ struct dmatest_info *info = &test_info;
+ int ret;
- mutex_lock(&info->results_lock);
- list_for_each_entry(result, &info->results, node) {
- list_for_each_entry(tr, &result->results, node) {
- seq_printf(sf, "%s\n",
- thread_result_get(result->name, tr));
- if (tr->type == DMATEST_ET_VERIFY_BUF) {
- for (i = 0; i < tr->vr->error_count; i++) {
- seq_printf(sf, "\t%s\n",
- verify_result_get_one(tr->vr, i));
- }
- }
- }
+ mutex_lock(&info->lock);
+ ret = param_set_bool(val, kp);
+ if (ret) {
+ mutex_unlock(&info->lock);
+ return ret;
}
- mutex_unlock(&info->results_lock);
- return 0;
-}
-
-static int dtf_results_open(struct inode *inode, struct file *file)
-{
- return single_open(file, dtf_results_show, inode->i_private);
-}
-
-static const struct file_operations dtf_results_fops = {
- .open = dtf_results_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = single_release,
-};
-
-static int dmatest_register_dbgfs(struct dmatest_info *info)
-{
- struct dentry *d;
-
- d = debugfs_create_dir("dmatest", NULL);
- if (IS_ERR(d))
- return PTR_ERR(d);
- if (!d)
- goto err_root;
+ if (is_threaded_test_run(info))
+ ret = -EBUSY;
+ else if (dmatest_run)
+ restart_threaded_test(info, dmatest_run);
- info->root = d;
-
- /* Run or stop threaded test */
- debugfs_create_file("run", S_IWUSR | S_IRUGO, info->root, info,
- &dtf_run_fops);
-
- /* Results of test in progress */
- debugfs_create_file("results", S_IRUGO, info->root, info,
- &dtf_results_fops);
-
- return 0;
+ mutex_unlock(&info->lock);
-err_root:
- pr_err("dmatest: Failed to initialize debugfs\n");
- return -ENOMEM;
+ return ret;
}
static int __init dmatest_init(void)
{
struct dmatest_info *info = &test_info;
- int ret;
-
- memset(info, 0, sizeof(*info));
+ struct dmatest_params *params = &info->params;
- mutex_init(&info->lock);
- INIT_LIST_HEAD(&info->channels);
+ if (dmatest_run) {
+ mutex_lock(&info->lock);
+ run_threaded_test(info);
+ mutex_unlock(&info->lock);
+ }
- mutex_init(&info->results_lock);
- INIT_LIST_HEAD(&info->results);
+ if (params->iterations && wait)
+ wait_event(thread_wait, !is_threaded_test_run(info));
- ret = dmatest_register_dbgfs(info);
- if (ret)
- return ret;
+	/* Module parameters are stable and init-time tests have been
+	 * started; let userspace take over 'run' control.
+	 */
+ info->did_init = true;
-#ifdef MODULE
return 0;
-#else
- return run_threaded_test(info);
-#endif
}
/* when compiled-in wait for drivers to load first */
late_initcall(dmatest_init);
{
struct dmatest_info *info = &test_info;
- debugfs_remove_recursive(info->root);
+ mutex_lock(&info->lock);
stop_threaded_test(info);
- result_free(info, NULL);
+ mutex_unlock(&info->lock);
}
module_exit(dmatest_exit);
{
return &chan->dev->device;
}
-static struct device *chan2parent(struct dma_chan *chan)
-{
- return chan->dev->device.parent;
-}
static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc)
{
list_splice_init(&desc->tx_list, &dwc->free_list);
list_move(&desc->desc_node, &dwc->free_list);
- if (!is_slave_direction(dwc->direction)) {
- struct device *parent = chan2parent(&dwc->chan);
- if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- dma_unmap_single(parent, desc->lli.dar,
- desc->total_len, DMA_FROM_DEVICE);
- else
- dma_unmap_page(parent, desc->lli.dar,
- desc->total_len, DMA_FROM_DEVICE);
- }
- if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- dma_unmap_single(parent, desc->lli.sar,
- desc->total_len, DMA_TO_DEVICE);
- else
- dma_unmap_page(parent, desc->lli.sar,
- desc->total_len, DMA_TO_DEVICE);
- }
- }
-
+ dma_descriptor_unmap(txd);
spin_unlock_irqrestore(&dwc->lock, flags);
if (callback)
enum dma_status ret;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret != DMA_SUCCESS)
+ if (ret != DMA_COMPLETE)
dma_set_residue(txstate, dwc_get_residue(dwc));
if (dwc->paused && ret == DMA_IN_PROGRESS)
#define EDMA_CHANS 64
#endif /* CONFIG_ARCH_DAVINCI_DA8XX */
-/* Max of 16 segments per channel to conserve PaRAM slots */
-#define MAX_NR_SG 16
+/*
+ * Max of 20 segments per channel to conserve PaRAM slots.
+ * Also note that MAX_NR_SG should be at least the number of periods
+ * that are required for ASoC, otherwise DMA prep calls will
+ * fail. Today davinci-pcm is the only user of this driver and
+ * requires at least 17 slots, so we set the default to 20.
+ */
+#define MAX_NR_SG 20
#define EDMA_MAX_SLOTS MAX_NR_SG
#define EDMA_DESCRIPTORS 16
struct edma_desc {
struct virt_dma_desc vdesc;
struct list_head node;
+ int cyclic;
int absync;
int pset_nr;
int processed;
* then setup a link to the dummy slot, this results in all future
* events being absorbed and that's OK because we're done
*/
- if (edesc->processed == edesc->pset_nr)
- edma_link(echan->slot[nslots-1], echan->ecc->dummy_slot);
+ if (edesc->processed == edesc->pset_nr) {
+ if (edesc->cyclic)
+ edma_link(echan->slot[nslots-1], echan->slot[1]);
+ else
+ edma_link(echan->slot[nslots-1],
+ echan->ecc->dummy_slot);
+ }
edma_resume(echan->ch_num);
return ret;
}
+/*
+ * A PaRAM set configuration abstraction used by other modes
+ * @chan: Channel whose PaRAM set we're configuring
+ * @pset: PaRAM set to initialize and set up
+ * @src_addr: Source address of the DMA
+ * @dst_addr: Destination address of the DMA
+ * @burst: Burst size, in units of dev_width
+ * @dev_width: Width of the device's data bus
+ * @dma_length: Total length of the DMA transfer
+ * @direction: Direction of the transfer
+ */
+static int edma_config_pset(struct dma_chan *chan, struct edmacc_param *pset,
+ dma_addr_t src_addr, dma_addr_t dst_addr, u32 burst,
+ enum dma_slave_buswidth dev_width, unsigned int dma_length,
+ enum dma_transfer_direction direction)
+{
+ struct edma_chan *echan = to_edma_chan(chan);
+ struct device *dev = chan->device->dev;
+ int acnt, bcnt, ccnt, cidx;
+ int src_bidx, dst_bidx, src_cidx, dst_cidx;
+ int absync;
+
+ acnt = dev_width;
+ /*
+ * If the maxburst is equal to the fifo width, use
+ * A-synced transfers. This allows for large contiguous
+ * buffer transfers using only one PaRAM set.
+ */
+ if (burst == 1) {
+ /*
+		 * For the A-sync case, bcnt and ccnt are the remainder
+		 * and quotient respectively of the division of
+		 * (dma_length / acnt) by (SZ_64K - 1). This is so
+		 * that in case bcnt overflows, we have ccnt to use.
+		 * Note: bcntrld is used only in the A-sync transfer, and
+		 * it only applies for sg_dma_len(sg) >= SZ_64K.
+		 * In this case, the approach adopted is: bcnt for the
+		 * first frame will be the remainder below, then for
+		 * every successive frame bcnt will be SZ_64K - 1. This
+		 * is assured because bcntrld = 0xffff at the end of this function.
+ */
+ absync = false;
+ ccnt = dma_length / acnt / (SZ_64K - 1);
+ bcnt = dma_length / acnt - ccnt * (SZ_64K - 1);
+ /*
+ * If bcnt is non-zero, we have a remainder and hence an
+ * extra frame to transfer, so increment ccnt.
+ */
+ if (bcnt)
+ ccnt++;
+ else
+ bcnt = SZ_64K - 1;
+ cidx = acnt;
+ } else {
+ /*
+ * If maxburst is greater than the fifo address_width,
+ * use AB-synced transfers where A count is the fifo
+ * address_width and B count is the maxburst. In this
+ * case, we are limited to transfers of C count frames
+ * of (address_width * maxburst) where C count is limited
+ * to SZ_64K-1. This places an upper bound on the length
+ * of an SG segment that can be handled.
+ */
+ absync = true;
+ bcnt = burst;
+ ccnt = dma_length / (acnt * bcnt);
+ if (ccnt > (SZ_64K - 1)) {
+ dev_err(dev, "Exceeded max SG segment size\n");
+ return -EINVAL;
+ }
+ cidx = acnt * bcnt;
+ }
+
+ if (direction == DMA_MEM_TO_DEV) {
+ src_bidx = acnt;
+ src_cidx = cidx;
+ dst_bidx = 0;
+ dst_cidx = 0;
+ } else if (direction == DMA_DEV_TO_MEM) {
+ src_bidx = 0;
+ src_cidx = 0;
+ dst_bidx = acnt;
+ dst_cidx = cidx;
+ } else {
+ dev_err(dev, "%s: direction not implemented yet\n", __func__);
+ return -EINVAL;
+ }
+
+ pset->opt = EDMA_TCC(EDMA_CHAN_SLOT(echan->ch_num));
+ /* Configure A or AB synchronized transfers */
+ if (absync)
+ pset->opt |= SYNCDIM;
+
+ pset->src = src_addr;
+ pset->dst = dst_addr;
+
+ pset->src_dst_bidx = (dst_bidx << 16) | src_bidx;
+ pset->src_dst_cidx = (dst_cidx << 16) | src_cidx;
+
+ pset->a_b_cnt = bcnt << 16 | acnt;
+ pset->ccnt = ccnt;
+ /*
+	 * The only time (bcntrld) auto-reload is required is the A-sync
+	 * case, and then only a reload value of SZ_64K - 1 is needed.
+	 * 'link' is initially set to NULL and will be populated later
+	 * by edma_execute.
+ */
+ pset->link_bcntrld = 0xffffffff;
+ return absync;
+}
+
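[Editor's note] A worked pass through the A-sync branch of edma_config_pset() above, assuming dev_width = 4 bytes, burst = 1 and dma_length = 400000 bytes:

	/*
	 *  acnt = 4                      -> 400000 / 4 = 100000 elements
	 *  ccnt = 100000 / 65535 = 1
	 *  bcnt = 100000 - 1 * 65535 = 34465  (non-zero, so ccnt becomes 2)
	 *  cidx = acnt = 4
	 *
	 * Frame 1 moves 34465 elements; bcntrld = 0xffff then reloads bcnt
	 * to 65535 for frame 2, and 34465 + 65535 = 100000 covers the whole
	 * buffer from a single PaRAM set.
	 */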
static struct dma_async_tx_descriptor *edma_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
struct edma_chan *echan = to_edma_chan(chan);
struct device *dev = chan->device->dev;
struct edma_desc *edesc;
- dma_addr_t dev_addr;
+ dma_addr_t src_addr = 0, dst_addr = 0;
enum dma_slave_buswidth dev_width;
u32 burst;
struct scatterlist *sg;
- int acnt, bcnt, ccnt, src, dst, cidx;
- int src_bidx, dst_bidx, src_cidx, dst_cidx;
- int i, nslots;
+ int i, nslots, ret;
if (unlikely(!echan || !sgl || !sg_len))
return NULL;
if (direction == DMA_DEV_TO_MEM) {
- dev_addr = echan->cfg.src_addr;
+ src_addr = echan->cfg.src_addr;
dev_width = echan->cfg.src_addr_width;
burst = echan->cfg.src_maxburst;
} else if (direction == DMA_MEM_TO_DEV) {
- dev_addr = echan->cfg.dst_addr;
+ dst_addr = echan->cfg.dst_addr;
dev_width = echan->cfg.dst_addr_width;
burst = echan->cfg.dst_maxburst;
} else {
if (echan->slot[i] < 0) {
kfree(edesc);
dev_err(dev, "Failed to allocate slot\n");
- kfree(edesc);
return NULL;
}
}
/* Configure PaRAM sets for each SG */
for_each_sg(sgl, sg, sg_len, i) {
-
- acnt = dev_width;
-
- /*
- * If the maxburst is equal to the fifo width, use
- * A-synced transfers. This allows for large contiguous
- * buffer transfers using only one PaRAM set.
- */
- if (burst == 1) {
- edesc->absync = false;
- ccnt = sg_dma_len(sg) / acnt / (SZ_64K - 1);
- bcnt = sg_dma_len(sg) / acnt - ccnt * (SZ_64K - 1);
- if (bcnt)
- ccnt++;
- else
- bcnt = SZ_64K - 1;
- cidx = acnt;
- /*
- * If maxburst is greater than the fifo address_width,
- * use AB-synced transfers where A count is the fifo
- * address_width and B count is the maxburst. In this
- * case, we are limited to transfers of C count frames
- * of (address_width * maxburst) where C count is limited
- * to SZ_64K-1. This places an upper bound on the length
- * of an SG segment that can be handled.
- */
- } else {
- edesc->absync = true;
- bcnt = burst;
- ccnt = sg_dma_len(sg) / (acnt * bcnt);
- if (ccnt > (SZ_64K - 1)) {
- dev_err(dev, "Exceeded max SG segment size\n");
- kfree(edesc);
- return NULL;
- }
- cidx = acnt * bcnt;
+ /* Get address for each SG */
+ if (direction == DMA_DEV_TO_MEM)
+ dst_addr = sg_dma_address(sg);
+ else
+ src_addr = sg_dma_address(sg);
+
+ ret = edma_config_pset(chan, &edesc->pset[i], src_addr,
+ dst_addr, burst, dev_width,
+ sg_dma_len(sg), direction);
+ if (ret < 0) {
+ kfree(edesc);
+ return NULL;
}
- if (direction == DMA_MEM_TO_DEV) {
- src = sg_dma_address(sg);
- dst = dev_addr;
- src_bidx = acnt;
- src_cidx = cidx;
- dst_bidx = 0;
- dst_cidx = 0;
- } else {
- src = dev_addr;
- dst = sg_dma_address(sg);
- src_bidx = 0;
- src_cidx = 0;
- dst_bidx = acnt;
- dst_cidx = cidx;
- }
-
- edesc->pset[i].opt = EDMA_TCC(EDMA_CHAN_SLOT(echan->ch_num));
- /* Configure A or AB synchronized transfers */
- if (edesc->absync)
- edesc->pset[i].opt |= SYNCDIM;
+ edesc->absync = ret;
/* If this is the last in a current SG set of transactions,
enable interrupts so that next set is processed */
/* If this is the last set, enable completion interrupt flag */
if (i == sg_len - 1)
edesc->pset[i].opt |= TCINTEN;
+ }
- edesc->pset[i].src = src;
- edesc->pset[i].dst = dst;
+ return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
+}
- edesc->pset[i].src_dst_bidx = (dst_bidx << 16) | src_bidx;
- edesc->pset[i].src_dst_cidx = (dst_cidx << 16) | src_cidx;
+static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
+ struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+ size_t period_len, enum dma_transfer_direction direction,
+ unsigned long tx_flags, void *context)
+{
+ struct edma_chan *echan = to_edma_chan(chan);
+ struct device *dev = chan->device->dev;
+ struct edma_desc *edesc;
+ dma_addr_t src_addr, dst_addr;
+ enum dma_slave_buswidth dev_width;
+ u32 burst;
+ int i, ret, nslots;
+
+ if (unlikely(!echan || !buf_len || !period_len))
+ return NULL;
+
+ if (direction == DMA_DEV_TO_MEM) {
+ src_addr = echan->cfg.src_addr;
+ dst_addr = buf_addr;
+ dev_width = echan->cfg.src_addr_width;
+ burst = echan->cfg.src_maxburst;
+ } else if (direction == DMA_MEM_TO_DEV) {
+ src_addr = buf_addr;
+ dst_addr = echan->cfg.dst_addr;
+ dev_width = echan->cfg.dst_addr_width;
+ burst = echan->cfg.dst_maxburst;
+ } else {
+ dev_err(dev, "%s: bad direction?\n", __func__);
+ return NULL;
+ }
+
+ if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
+ dev_err(dev, "Undefined slave buswidth\n");
+ return NULL;
+ }
+
+ if (unlikely(buf_len % period_len)) {
+ dev_err(dev, "Period should be multiple of Buffer length\n");
+ return NULL;
+ }
+
+ nslots = (buf_len / period_len) + 1;
+
+ /*
+ * Cyclic DMA users such as audio cannot tolerate delays introduced
+ * by cases where the number of periods is more than the maximum
+ * number of SGs the EDMA driver can handle at a time. For DMA types
+ * such as Slave SGs, such delays are tolerable and synchronized,
+ * but the synchronization is difficult to achieve with Cyclic and
+ * cannot be guaranteed, so we error out early.
+ */
+ if (nslots > MAX_NR_SG)
+ return NULL;
+
+ edesc = kzalloc(sizeof(*edesc) + nslots *
+ sizeof(edesc->pset[0]), GFP_ATOMIC);
+ if (!edesc) {
+ dev_dbg(dev, "Failed to allocate a descriptor\n");
+ return NULL;
+ }
+
+ edesc->cyclic = 1;
+ edesc->pset_nr = nslots;
+
+ dev_dbg(dev, "%s: nslots=%d\n", __func__, nslots);
+ dev_dbg(dev, "%s: period_len=%d\n", __func__, period_len);
+ dev_dbg(dev, "%s: buf_len=%d\n", __func__, buf_len);
+
+ for (i = 0; i < nslots; i++) {
+ /* Allocate a PaRAM slot, if needed */
+ if (echan->slot[i] < 0) {
+ echan->slot[i] =
+ edma_alloc_slot(EDMA_CTLR(echan->ch_num),
+ EDMA_SLOT_ANY);
+			if (echan->slot[i] < 0) {
+				kfree(edesc);
+				dev_err(dev, "Failed to allocate slot\n");
+				return NULL;
+ }
+ }
+
+ if (i == nslots - 1) {
+ memcpy(&edesc->pset[i], &edesc->pset[0],
+ sizeof(edesc->pset[0]));
+ break;
+ }
+
+ ret = edma_config_pset(chan, &edesc->pset[i], src_addr,
+ dst_addr, burst, dev_width, period_len,
+ direction);
+		if (ret < 0) {
+			kfree(edesc);
+			return NULL;
+		}
- edesc->pset[i].a_b_cnt = bcnt << 16 | acnt;
- edesc->pset[i].ccnt = ccnt;
- edesc->pset[i].link_bcntrld = 0xffffffff;
+ if (direction == DMA_DEV_TO_MEM)
+ dst_addr += period_len;
+ else
+ src_addr += period_len;
+ dev_dbg(dev, "%s: Configure period %d of buf:\n", __func__, i);
+ dev_dbg(dev,
+ "\n pset[%d]:\n"
+ " chnum\t%d\n"
+ " slot\t%d\n"
+ " opt\t%08x\n"
+ " src\t%08x\n"
+ " dst\t%08x\n"
+ " abcnt\t%08x\n"
+ " ccnt\t%08x\n"
+ " bidx\t%08x\n"
+ " cidx\t%08x\n"
+ " lkrld\t%08x\n",
+ i, echan->ch_num, echan->slot[i],
+ edesc->pset[i].opt,
+ edesc->pset[i].src,
+ edesc->pset[i].dst,
+ edesc->pset[i].a_b_cnt,
+ edesc->pset[i].ccnt,
+ edesc->pset[i].src_dst_bidx,
+ edesc->pset[i].src_dst_cidx,
+ edesc->pset[i].link_bcntrld);
+
+ edesc->absync = ret;
+
+ /*
+ * Enable interrupts for every period because callback
+ * has to be called for every period.
+ */
+ edesc->pset[i].opt |= TCINTEN;
}
return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
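For orientation, this is how a client would consume the new cyclic capability
through the generic dmaengine API. A minimal sketch, assuming a slave channel
has already been requested and configured; the helper name and parameters are
hypothetical:

#include <linux/dmaengine.h>

/* Hypothetical helper: start a cyclic device-to-memory transfer and get
 * one callback per elapsed period. buf_len must be a whole number of
 * periods, as the driver above enforces. */
static int start_cyclic_rx(struct dma_chan *chan, dma_addr_t buf,
			   size_t buf_len, size_t period_len,
			   dma_async_tx_callback cb, void *cb_arg)
{
	struct dma_async_tx_descriptor *desc;

	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!desc)
		return -EINVAL;

	desc->callback = cb;
	desc->callback_param = cb_arg;
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}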
unsigned long flags;
struct edmacc_param p;
- /* Pause the channel */
- edma_pause(echan->ch_num);
+ edesc = echan->edesc;
+
+ /* Pause the channel for non-cyclic */
+ if (!edesc || !edesc->cyclic)
+ edma_pause(echan->ch_num);
switch (ch_status) {
- case DMA_COMPLETE:
+ case EDMA_DMA_COMPLETE:
spin_lock_irqsave(&echan->vchan.lock, flags);
- edesc = echan->edesc;
if (edesc) {
- if (edesc->processed == edesc->pset_nr) {
+ if (edesc->cyclic) {
+ vchan_cyclic_callback(&edesc->vdesc);
+ } else if (edesc->processed == edesc->pset_nr) {
dev_dbg(dev, "Transfer complete, stopping channel %d\n", ch_num);
edma_stop(echan->ch_num);
vchan_cookie_complete(&edesc->vdesc);
+ edma_execute(echan);
} else {
dev_dbg(dev, "Intermediate transfer complete on channel %d\n", ch_num);
+ edma_execute(echan);
}
-
- edma_execute(echan);
}
spin_unlock_irqrestore(&echan->vchan.lock, flags);
break;
- case DMA_CC_ERROR:
+ case EDMA_DMA_CC_ERROR:
spin_lock_irqsave(&echan->vchan.lock, flags);
edma_read_slot(EDMA_CHAN_SLOT(echan->slot[0]), &p);
unsigned long flags;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS || !txstate)
+ if (ret == DMA_COMPLETE || !txstate)
return ret;
spin_lock_irqsave(&echan->vchan.lock, flags);
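All of the DMA_SUCCESS to DMA_COMPLETE renames in this series touch the same
canonical tx_status shape. A condensed sketch of that pattern for reference,
with a placeholder residue (the locking and the residue computation are
driver-specific):

static enum dma_status foo_tx_status(struct dma_chan *chan,
				     dma_cookie_t cookie,
				     struct dma_tx_state *txstate)
{
	enum dma_status ret;

	/* fast path: cookie already completed, or caller wants no details */
	ret = dma_cookie_status(chan, cookie, txstate);
	if (ret == DMA_COMPLETE || !txstate)
		return ret;

	/* otherwise report how many bytes are still outstanding */
	dma_set_residue(txstate, 0 /* driver-specific count */);
	return ret;
}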
struct device *dev)
{
dma->device_prep_slave_sg = edma_prep_slave_sg;
+ dma->device_prep_dma_cyclic = edma_prep_dma_cyclic;
dma->device_alloc_chan_resources = edma_alloc_chan_resources;
dma->device_free_chan_resources = edma_free_chan_resources;
dma->device_issue_pending = edma_issue_pending;
spin_unlock_irqrestore(&edmac->lock, flags);
}
-static void ep93xx_dma_unmap_buffers(struct ep93xx_dma_desc *desc)
-{
- struct device *dev = desc->txd.chan->device->dev;
-
- if (!(desc->txd.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (desc->txd.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- dma_unmap_single(dev, desc->src_addr, desc->size,
- DMA_TO_DEVICE);
- else
- dma_unmap_page(dev, desc->src_addr, desc->size,
- DMA_TO_DEVICE);
- }
- if (!(desc->txd.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (desc->txd.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- dma_unmap_single(dev, desc->dst_addr, desc->size,
- DMA_FROM_DEVICE);
- else
- dma_unmap_page(dev, desc->dst_addr, desc->size,
- DMA_FROM_DEVICE);
- }
-}
-
static void ep93xx_dma_tasklet(unsigned long data)
{
struct ep93xx_dma_chan *edmac = (struct ep93xx_dma_chan *)data;
/* Now we can release all the chained descriptors */
list_for_each_entry_safe(desc, d, &list, node) {
- /*
- * For the memcpy channels the API requires us to unmap the
- * buffers unless requested otherwise.
- */
- if (!edmac->chan.private)
- ep93xx_dma_unmap_buffers(desc);
-
+ dma_descriptor_unmap(&desc->txd);
ep93xx_dma_desc_put(edmac, desc);
}
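The ep93xx, fsldma, ioat, iop-adma, and mv_xor hunks in this series all replace
per-driver unmap helpers with dma_descriptor_unmap(), which defers to the
core's refcounted dmaengine_unmap_data. A minimal sketch of the submit side
under that scheme (the function and variable names here are illustrative):

#include <linux/dmaengine.h>

static int issue_copy(struct dma_chan *chan, struct page *src_page,
		      struct page *dst_page, size_t len)
{
	struct device *dev = chan->device->dev;
	struct dmaengine_unmap_data *unmap;
	struct dma_async_tx_descriptor *tx;

	unmap = dmaengine_get_unmap_data(dev, 2, GFP_NOWAIT);
	if (!unmap)
		return -ENOMEM;

	unmap->addr[0] = dma_map_page(dev, src_page, 0, len, DMA_TO_DEVICE);
	unmap->to_cnt = 1;
	unmap->addr[1] = dma_map_page(dev, dst_page, 0, len, DMA_FROM_DEVICE);
	unmap->from_cnt = 1;
	unmap->len = len;

	tx = chan->device->device_prep_dma_memcpy(chan, unmap->addr[1],
						  unmap->addr[0], len, 0);
	if (!tx) {
		dmaengine_unmap_put(unmap);
		return -EIO;
	}

	dma_set_unmap(tx, unmap);	/* the descriptor takes a reference */
	dmaengine_unmap_put(unmap);	/* drop ours; unmapping now happens
					 * automatically on completion */
	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}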
hw->count = CPU_TO_DMA(chan, count, 32);
}
-static u32 get_desc_cnt(struct fsldma_chan *chan, struct fsl_desc_sw *desc)
-{
- return DMA_TO_CPU(chan, desc->hw.count, 32);
-}
-
static void set_desc_src(struct fsldma_chan *chan,
struct fsl_dma_ld_hw *hw, dma_addr_t src)
{
hw->src_addr = CPU_TO_DMA(chan, snoop_bits | src, 64);
}
-static dma_addr_t get_desc_src(struct fsldma_chan *chan,
- struct fsl_desc_sw *desc)
-{
- u64 snoop_bits;
-
- snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX)
- ? ((u64)FSL_DMA_SATR_SREADTYPE_SNOOP_READ << 32) : 0;
- return DMA_TO_CPU(chan, desc->hw.src_addr, 64) & ~snoop_bits;
-}
-
static void set_desc_dst(struct fsldma_chan *chan,
struct fsl_dma_ld_hw *hw, dma_addr_t dst)
{
hw->dst_addr = CPU_TO_DMA(chan, snoop_bits | dst, 64);
}
-static dma_addr_t get_desc_dst(struct fsldma_chan *chan,
- struct fsl_desc_sw *desc)
-{
- u64 snoop_bits;
-
- snoop_bits = ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX)
- ? ((u64)FSL_DMA_DATR_DWRITETYPE_SNOOP_WRITE << 32) : 0;
- return DMA_TO_CPU(chan, desc->hw.dst_addr, 64) & ~snoop_bits;
-}
-
static void set_desc_next(struct fsldma_chan *chan,
struct fsl_dma_ld_hw *hw, dma_addr_t next)
{
struct fsl_desc_sw *desc = tx_to_fsl_desc(tx);
struct fsl_desc_sw *child;
unsigned long flags;
- dma_cookie_t cookie;
+ dma_cookie_t cookie = -EINVAL;
spin_lock_irqsave(&chan->desc_lock, flags);
struct fsl_desc_sw *desc)
{
struct dma_async_tx_descriptor *txd = &desc->async_tx;
- struct device *dev = chan->common.device->dev;
- dma_addr_t src = get_desc_src(chan, desc);
- dma_addr_t dst = get_desc_dst(chan, desc);
- u32 len = get_desc_cnt(chan, desc);
/* Run the link descriptor callback function */
if (txd->callback) {
/* Run any dependencies */
dma_run_dependencies(txd);
- /* Unmap the dst buffer, if requested */
- if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- dma_unmap_single(dev, dst, len, DMA_FROM_DEVICE);
- else
- dma_unmap_page(dev, dst, len, DMA_FROM_DEVICE);
- }
-
- /* Unmap the src buffer, if requested */
- if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- dma_unmap_single(dev, src, len, DMA_TO_DEVICE);
- else
- dma_unmap_page(dev, src, len, DMA_TO_DEVICE);
- }
-
+ dma_descriptor_unmap(txd);
#ifdef FSL_DMA_LD_DEBUG
chan_dbg(chan, "LD %p free\n", desc);
#endif
WARN_ON(fdev->feature != chan->feature);
chan->dev = fdev->dev;
- chan->id = ((res.start - 0x100) & 0xfff) >> 7;
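+ /*
+ * Elo3 has eight channels in two banks of four; the second bank's
+ * registers sit 0x100 further out than a contiguous layout would put
+ * them, so subtract an extra 0x100 there to keep the ids contiguous.
+ */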
+ chan->id = (res.start & 0xfff) < 0x300 ?
+ ((res.start - 0x100) & 0xfff) >> 7 :
+ ((res.start - 0x200) & 0xfff) >> 7;
if (chan->id >= FSL_DMA_MAX_CHANS_PER_DEVICE) {
dev_err(fdev->dev, "too many channels for device\n");
err = -EINVAL;
}
static const struct of_device_id fsldma_of_ids[] = {
+ { .compatible = "fsl,elo3-dma", },
{ .compatible = "fsl,eloplus-dma", },
{ .compatible = "fsl,elo-dma", },
{}
static __init int fsldma_init(void)
{
- pr_info("Freescale Elo / Elo Plus DMA driver\n");
+ pr_info("Freescale Elo series DMA driver\n");
return platform_driver_register(&fsldma_of_driver);
}
subsys_initcall(fsldma_init);
module_exit(fsldma_exit);
-MODULE_DESCRIPTION("Freescale Elo / Elo Plus DMA driver");
+MODULE_DESCRIPTION("Freescale Elo series DMA driver");
MODULE_LICENSE("GPL");
};
struct fsldma_chan;
-#define FSL_DMA_MAX_CHANS_PER_DEVICE 4
+#define FSL_DMA_MAX_CHANS_PER_DEVICE 8
struct fsldma_device {
void __iomem *regs; /* DGSR register base */
imx_dmav1_writel(imxdma, d->len, DMA_CNTR(imxdmac->channel));
- dev_dbg(imxdma->dev, "%s channel: %d dest=0x%08x src=0x%08x "
- "dma_length=%d\n", __func__, imxdmac->channel,
- d->dest, d->src, d->len);
+ dev_dbg(imxdma->dev,
+ "%s channel: %d dest=0x%08llx src=0x%08llx dma_length=%zu\n",
+ __func__, imxdmac->channel,
+ (unsigned long long)d->dest,
+ (unsigned long long)d->src, d->len);
break;
/* Cyclic transfer is the same as slave_sg with special sg configuration. */
imx_dmav1_writel(imxdma, imxdmac->ccr_from_device,
DMA_CCR(imxdmac->channel));
- dev_dbg(imxdma->dev, "%s channel: %d sg=%p sgcount=%d "
- "total length=%d dev_addr=0x%08x (dev2mem)\n",
- __func__, imxdmac->channel, d->sg, d->sgcount,
- d->len, imxdmac->per_address);
+ dev_dbg(imxdma->dev,
+ "%s channel: %d sg=%p sgcount=%d total length=%zu dev_addr=0x%08llx (dev2mem)\n",
+ __func__, imxdmac->channel,
+ d->sg, d->sgcount, d->len,
+ (unsigned long long)imxdmac->per_address);
} else if (d->direction == DMA_MEM_TO_DEV) {
imx_dmav1_writel(imxdma, imxdmac->per_address,
DMA_DAR(imxdmac->channel));
imx_dmav1_writel(imxdma, imxdmac->ccr_to_device,
DMA_CCR(imxdmac->channel));
- dev_dbg(imxdma->dev, "%s channel: %d sg=%p sgcount=%d "
- "total length=%d dev_addr=0x%08x (mem2dev)\n",
- __func__, imxdmac->channel, d->sg, d->sgcount,
- d->len, imxdmac->per_address);
+ dev_dbg(imxdma->dev,
+ "%s channel: %d sg=%p sgcount=%d total length=%zu dev_addr=0x%08llx (mem2dev)\n",
+ __func__, imxdmac->channel,
+ d->sg, d->sgcount, d->len,
+ (unsigned long long)imxdmac->per_address);
} else {
dev_err(imxdma->dev, "%s channel: %d bad dma mode\n",
__func__, imxdmac->channel);
desc->desc.tx_submit = imxdma_tx_submit;
/* txd.flags will be overwritten in prep funcs */
desc->desc.flags = DMA_CTRL_ACK;
- desc->status = DMA_SUCCESS;
+ desc->status = DMA_COMPLETE;
list_add_tail(&desc->node, &imxdmac->ld_free);
imxdmac->descs_allocated++;
int i;
unsigned int periods = buf_len / period_len;
- dev_dbg(imxdma->dev, "%s channel: %d buf_len=%d period_len=%d\n",
+ dev_dbg(imxdma->dev, "%s channel: %d buf_len=%zu period_len=%zu\n",
__func__, imxdmac->channel, buf_len, period_len);
if (list_empty(&imxdmac->ld_free) ||
struct imxdma_engine *imxdma = imxdmac->imxdma;
struct imxdma_desc *desc;
- dev_dbg(imxdma->dev, "%s channel: %d src=0x%x dst=0x%x len=%d\n",
- __func__, imxdmac->channel, src, dest, len);
+ dev_dbg(imxdma->dev, "%s channel: %d src=0x%llx dst=0x%llx len=%zu\n",
+ __func__, imxdmac->channel, (unsigned long long)src,
+ (unsigned long long)dest, len);
if (list_empty(&imxdmac->ld_free) ||
imxdma_chan_is_doing_cyclic(imxdmac))
struct imxdma_engine *imxdma = imxdmac->imxdma;
struct imxdma_desc *desc;
- dev_dbg(imxdma->dev, "%s channel: %d src_start=0x%x dst_start=0x%x\n"
- " src_sgl=%s dst_sgl=%s numf=%d frame_size=%d\n", __func__,
- imxdmac->channel, xt->src_start, xt->dst_start,
+ dev_dbg(imxdma->dev, "%s channel: %d src_start=0x%llx dst_start=0x%llx\n"
+ " src_sgl=%s dst_sgl=%s numf=%zu frame_size=%zu\n", __func__,
+ imxdmac->channel, (unsigned long long)xt->src_start,
+ (unsigned long long) xt->dst_start,
xt->src_sgl ? "true" : "false", xt->dst_sgl ? "true" : "false",
xt->numf, xt->frame_size);
if (error)
sdmac->status = DMA_ERROR;
else
- sdmac->status = DMA_SUCCESS;
+ sdmac->status = DMA_COMPLETE;
dma_cookie_complete(&sdmac->desc);
if (sdmac->desc.callback)
param &= ~BD_CONT;
}
- dev_dbg(sdma->dev, "entry %d: count: %d dma: 0x%08x %s%s\n",
- i, count, sg->dma_address,
+ dev_dbg(sdma->dev, "entry %d: count: %d dma: %#llx %s%s\n",
+ i, count, (u64)sg->dma_address,
param & BD_WRAP ? "wrap" : "",
param & BD_INTR ? " intr" : "");
if (i + 1 == num_periods)
param |= BD_WRAP;
- dev_dbg(sdma->dev, "entry %d: count: %d dma: 0x%08x %s%s\n",
- i, period_len, dma_addr,
+ dev_dbg(sdma->dev, "entry %d: count: %d dma: %#llx %s%s\n",
+ i, period_len, (u64)dma_addr,
param & BD_WRAP ? "wrap" : "",
param & BD_INTR ? " intr" : "");
callback_txd(param_txd);
}
if (midc->raw_tfr) {
- desc->status = DMA_SUCCESS;
+ desc->status = DMA_COMPLETE;
if (desc->lli != NULL) {
pci_pool_free(desc->lli_pool, desc->lli,
desc->lli_phys);
enum dma_status ret;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret != DMA_SUCCESS) {
+ if (ret != DMA_COMPLETE) {
spin_lock_bh(&midc->lock);
midc_scan_descriptors(to_middma_device(chan->device), midc);
spin_unlock_bh(&midc->lock);
writew(IOAT_CHANCTRL_RUN, ioat->base.reg_base + IOAT_CHANCTRL_OFFSET);
}
-void ioat_dma_unmap(struct ioat_chan_common *chan, enum dma_ctrl_flags flags,
- size_t len, struct ioat_dma_descriptor *hw)
-{
- struct pci_dev *pdev = chan->device->pdev;
- size_t offset = len - hw->size;
-
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP))
- ioat_unmap(pdev, hw->dst_addr - offset, len,
- PCI_DMA_FROMDEVICE, flags, 1);
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP))
- ioat_unmap(pdev, hw->src_addr - offset, len,
- PCI_DMA_TODEVICE, flags, 0);
-}
-
dma_addr_t ioat_get_current_completion(struct ioat_chan_common *chan)
{
dma_addr_t phys_complete;
dump_desc_dbg(ioat, desc);
if (tx->cookie) {
dma_cookie_complete(tx);
- ioat_dma_unmap(chan, tx->flags, desc->len, desc->hw);
+ dma_descriptor_unmap(tx);
ioat->active -= desc->hw->tx_cnt;
if (tx->callback) {
tx->callback(tx->callback_param);
enum dma_status ret;
ret = dma_cookie_status(c, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
device->cleanup_fn((unsigned long) c);
}
dma_src = dma_map_single(dev, src, IOAT_TEST_SIZE, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, dma_src)) {
+ dev_err(dev, "mapping src buffer failed\n");
+ goto free_resources;
+ }
dma_dest = dma_map_single(dev, dest, IOAT_TEST_SIZE, DMA_FROM_DEVICE);
- flags = DMA_COMPL_SKIP_SRC_UNMAP | DMA_COMPL_SKIP_DEST_UNMAP |
- DMA_PREP_INTERRUPT;
+ if (dma_mapping_error(dev, dma_dest)) {
+ dev_err(dev, "mapping dest buffer failed\n");
+ goto unmap_src;
+ }
+ flags = DMA_PREP_INTERRUPT;
tx = device->common.device_prep_dma_memcpy(dma_chan, dma_dest, dma_src,
IOAT_TEST_SIZE, flags);
if (!tx) {
if (tmo == 0 ||
dma->device_tx_status(dma_chan, cookie, NULL)
- != DMA_SUCCESS) {
+ != DMA_COMPLETE) {
dev_err(dev, "Self-test copy timed out, disabling\n");
err = -ENODEV;
goto unmap_dma;
}
unmap_dma:
- dma_unmap_single(dev, dma_src, IOAT_TEST_SIZE, DMA_TO_DEVICE);
dma_unmap_single(dev, dma_dest, IOAT_TEST_SIZE, DMA_FROM_DEVICE);
+unmap_src:
+ dma_unmap_single(dev, dma_src, IOAT_TEST_SIZE, DMA_TO_DEVICE);
free_resources:
dma->device_free_chan_resources(dma_chan);
out:
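The self-test fix above follows the standard streaming-DMA rule: every
dma_map_single() result must be checked with dma_mapping_error(), and error
paths must unwind only what was actually mapped. A condensed sketch of that
pattern (the buffer names are hypothetical):

static int map_copy_buffers(struct device *dev, void *src, void *dst,
			    size_t len)
{
	dma_addr_t s, d;
	int err = 0;

	s = dma_map_single(dev, src, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, s))
		return -ENOMEM;

	d = dma_map_single(dev, dst, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, d)) {
		err = -ENOMEM;
		goto unmap_src;	/* unwind only the source mapping */
	}

	/* ... submit the transfer and wait for completion here ... */

	dma_unmap_single(dev, d, len, DMA_FROM_DEVICE);
unmap_src:
	dma_unmap_single(dev, s, len, DMA_TO_DEVICE);
	return err;
}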
module_param_string(ioat_interrupt_style, ioat_interrupt_style,
sizeof(ioat_interrupt_style), 0644);
MODULE_PARM_DESC(ioat_interrupt_style,
- "set ioat interrupt style: msix (default), "
- "msix-single-vector, msi, intx)");
+ "set ioat interrupt style: msix (default), msi, intx");
/**
* ioat_dma_setup_interrupts - setup interrupt handler
if (!strcmp(ioat_interrupt_style, "msix"))
goto msix;
- if (!strcmp(ioat_interrupt_style, "msix-single-vector"))
- goto msix_single_vector;
if (!strcmp(ioat_interrupt_style, "msi"))
goto msi;
if (!strcmp(ioat_interrupt_style, "intx"))
device->msix_entries[i].entry = i;
err = pci_enable_msix(pdev, device->msix_entries, msixcnt);
- if (err < 0)
+ if (err)
goto msi;
- if (err > 0)
- goto msix_single_vector;
for (i = 0; i < msixcnt; i++) {
msix = &device->msix_entries[i];
chan = ioat_chan_by_index(device, j);
devm_free_irq(dev, msix->vector, chan);
}
- goto msix_single_vector;
+ goto msi;
}
}
intrctrl |= IOAT_INTRCTRL_MSIX_VECTOR_CONTROL;
device->irq_mode = IOAT_MSIX;
goto done;
-msix_single_vector:
- msix = &device->msix_entries[0];
- msix->entry = 0;
- err = pci_enable_msix(pdev, device->msix_entries, 1);
- if (err)
- goto msi;
-
- err = devm_request_irq(dev, msix->vector, ioat_dma_do_interrupt, 0,
- "ioat-msix", device);
- if (err) {
- pci_disable_msix(pdev);
- goto msi;
- }
- device->irq_mode = IOAT_MSIX_SINGLE;
- goto done;
-
msi:
err = pci_enable_msi(pdev);
if (err)
pci_disable_msi(pdev);
goto intx;
}
- device->irq_mode = IOAT_MSIX;
+ device->irq_mode = IOAT_MSI;
goto done;
intx:
enum ioat_irq_mode {
IOAT_NOIRQ = 0,
IOAT_MSIX,
- IOAT_MSIX_SINGLE,
IOAT_MSI,
IOAT_INTX
};
struct pci_pool *completion_pool;
#define MAX_SED_POOLS 5
struct dma_pool *sed_hw_pool[MAX_SED_POOLS];
- struct kmem_cache *sed_pool;
struct dma_device common;
u8 version;
struct msix_entry msix_entries[4];
return !!err;
}
-static inline void ioat_unmap(struct pci_dev *pdev, dma_addr_t addr, size_t len,
- int direction, enum dma_ctrl_flags flags, bool dst)
-{
- if ((dst && (flags & DMA_COMPL_DEST_UNMAP_SINGLE)) ||
- (!dst && (flags & DMA_COMPL_SRC_UNMAP_SINGLE)))
- pci_unmap_single(pdev, addr, len, direction);
- else
- pci_unmap_page(pdev, addr, len, direction);
-}
-
int ioat_probe(struct ioatdma_device *device);
int ioat_register(struct ioatdma_device *device);
int ioat1_dma_probe(struct ioatdma_device *dev, int dca);
struct ioat_chan_common *chan, int idx);
enum dma_status ioat_dma_tx_status(struct dma_chan *c, dma_cookie_t cookie,
struct dma_tx_state *txstate);
-void ioat_dma_unmap(struct ioat_chan_common *chan, enum dma_ctrl_flags flags,
- size_t len, struct ioat_dma_descriptor *hw);
bool ioat_cleanup_preamble(struct ioat_chan_common *chan,
dma_addr_t *phys_complete);
void ioat_kobject_add(struct ioatdma_device *device, struct kobj_type *type);
tx = &desc->txd;
dump_desc_dbg(ioat, desc);
if (tx->cookie) {
- ioat_dma_unmap(chan, tx->flags, desc->len, desc->hw);
+ dma_descriptor_unmap(tx);
dma_cookie_complete(tx);
if (tx->callback) {
tx->callback(tx->callback_param);
int ioat2_dma_probe(struct ioatdma_device *dev, int dca);
int ioat3_dma_probe(struct ioatdma_device *dev, int dca);
-void ioat3_dma_remove(struct ioatdma_device *dev);
struct dca_provider *ioat2_dca_init(struct pci_dev *pdev, void __iomem *iobase);
struct dca_provider *ioat3_dca_init(struct pci_dev *pdev, void __iomem *iobase);
int ioat2_check_space_lock(struct ioat2_dma_chan *ioat, int num_descs);
#include "dma.h"
#include "dma_v2.h"
+extern struct kmem_cache *ioat3_sed_cache;
+
/* ioat hardware assumes at least two sources for raid operations */
#define src_cnt_to_sw(x) ((x) + 2)
#define src_cnt_to_hw(x) ((x) - 2)
static const u8 pq16_idx_to_field[] = { 1, 4, 1, 2, 3, 4, 5, 6, 7,
0, 1, 2, 3, 4, 5, 6 };
-/*
- * technically sources 1 and 2 do not require SED, but the op will have
- * at least 9 descriptors so that's irrelevant.
- */
-static const u8 pq16_idx_to_sed[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 1, 1, 1, 1, 1, 1, 1 };
-
static void ioat3_eh(struct ioat2_dma_chan *ioat);
-static dma_addr_t xor_get_src(struct ioat_raw_descriptor *descs[2], int idx)
-{
- struct ioat_raw_descriptor *raw = descs[xor_idx_to_desc >> idx & 1];
-
- return raw->field[xor_idx_to_field[idx]];
-}
-
static void xor_set_src(struct ioat_raw_descriptor *descs[2],
dma_addr_t addr, u32 offset, int idx)
{
pq->coef[idx] = coef;
}
-static int sed_get_pq16_pool_idx(int src_cnt)
-{
-
- return pq16_idx_to_sed[src_cnt];
-}
-
static bool is_jf_ioat(struct pci_dev *pdev)
{
switch (pdev->device) {
struct ioat_sed_ent *sed;
gfp_t flags = __GFP_ZERO | GFP_ATOMIC;
- sed = kmem_cache_alloc(device->sed_pool, flags);
+ sed = kmem_cache_alloc(ioat3_sed_cache, flags);
if (!sed)
return NULL;
sed->hw = dma_pool_alloc(device->sed_hw_pool[hw_pool],
flags, &sed->dma);
if (!sed->hw) {
- kmem_cache_free(device->sed_pool, sed);
+ kmem_cache_free(ioat3_sed_cache, sed);
return NULL;
}
return;
dma_pool_free(device->sed_hw_pool[sed->hw_pool], sed->hw, sed->dma);
- kmem_cache_free(device->sed_pool, sed);
-}
-
-static void ioat3_dma_unmap(struct ioat2_dma_chan *ioat,
- struct ioat_ring_ent *desc, int idx)
-{
- struct ioat_chan_common *chan = &ioat->base;
- struct pci_dev *pdev = chan->device->pdev;
- size_t len = desc->len;
- size_t offset = len - desc->hw->size;
- struct dma_async_tx_descriptor *tx = &desc->txd;
- enum dma_ctrl_flags flags = tx->flags;
-
- switch (desc->hw->ctl_f.op) {
- case IOAT_OP_COPY:
- if (!desc->hw->ctl_f.null) /* skip 'interrupt' ops */
- ioat_dma_unmap(chan, flags, len, desc->hw);
- break;
- case IOAT_OP_XOR_VAL:
- case IOAT_OP_XOR: {
- struct ioat_xor_descriptor *xor = desc->xor;
- struct ioat_ring_ent *ext;
- struct ioat_xor_ext_descriptor *xor_ex = NULL;
- int src_cnt = src_cnt_to_sw(xor->ctl_f.src_cnt);
- struct ioat_raw_descriptor *descs[2];
- int i;
-
- if (src_cnt > 5) {
- ext = ioat2_get_ring_ent(ioat, idx + 1);
- xor_ex = ext->xor_ex;
- }
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- descs[0] = (struct ioat_raw_descriptor *) xor;
- descs[1] = (struct ioat_raw_descriptor *) xor_ex;
- for (i = 0; i < src_cnt; i++) {
- dma_addr_t src = xor_get_src(descs, i);
-
- ioat_unmap(pdev, src - offset, len,
- PCI_DMA_TODEVICE, flags, 0);
- }
-
- /* dest is a source in xor validate operations */
- if (xor->ctl_f.op == IOAT_OP_XOR_VAL) {
- ioat_unmap(pdev, xor->dst_addr - offset, len,
- PCI_DMA_TODEVICE, flags, 1);
- break;
- }
- }
-
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP))
- ioat_unmap(pdev, xor->dst_addr - offset, len,
- PCI_DMA_FROMDEVICE, flags, 1);
- break;
- }
- case IOAT_OP_PQ_VAL:
- case IOAT_OP_PQ: {
- struct ioat_pq_descriptor *pq = desc->pq;
- struct ioat_ring_ent *ext;
- struct ioat_pq_ext_descriptor *pq_ex = NULL;
- int src_cnt = src_cnt_to_sw(pq->ctl_f.src_cnt);
- struct ioat_raw_descriptor *descs[2];
- int i;
-
- if (src_cnt > 3) {
- ext = ioat2_get_ring_ent(ioat, idx + 1);
- pq_ex = ext->pq_ex;
- }
-
- /* in the 'continue' case don't unmap the dests as sources */
- if (dmaf_p_disabled_continue(flags))
- src_cnt--;
- else if (dmaf_continue(flags))
- src_cnt -= 3;
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- descs[0] = (struct ioat_raw_descriptor *) pq;
- descs[1] = (struct ioat_raw_descriptor *) pq_ex;
- for (i = 0; i < src_cnt; i++) {
- dma_addr_t src = pq_get_src(descs, i);
-
- ioat_unmap(pdev, src - offset, len,
- PCI_DMA_TODEVICE, flags, 0);
- }
-
- /* the dests are sources in pq validate operations */
- if (pq->ctl_f.op == IOAT_OP_XOR_VAL) {
- if (!(flags & DMA_PREP_PQ_DISABLE_P))
- ioat_unmap(pdev, pq->p_addr - offset,
- len, PCI_DMA_TODEVICE, flags, 0);
- if (!(flags & DMA_PREP_PQ_DISABLE_Q))
- ioat_unmap(pdev, pq->q_addr - offset,
- len, PCI_DMA_TODEVICE, flags, 0);
- break;
- }
- }
-
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (!(flags & DMA_PREP_PQ_DISABLE_P))
- ioat_unmap(pdev, pq->p_addr - offset, len,
- PCI_DMA_BIDIRECTIONAL, flags, 1);
- if (!(flags & DMA_PREP_PQ_DISABLE_Q))
- ioat_unmap(pdev, pq->q_addr - offset, len,
- PCI_DMA_BIDIRECTIONAL, flags, 1);
- }
- break;
- }
- case IOAT_OP_PQ_16S:
- case IOAT_OP_PQ_VAL_16S: {
- struct ioat_pq_descriptor *pq = desc->pq;
- int src_cnt = src16_cnt_to_sw(pq->ctl_f.src_cnt);
- struct ioat_raw_descriptor *descs[4];
- int i;
-
- /* in the 'continue' case don't unmap the dests as sources */
- if (dmaf_p_disabled_continue(flags))
- src_cnt--;
- else if (dmaf_continue(flags))
- src_cnt -= 3;
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- descs[0] = (struct ioat_raw_descriptor *)pq;
- descs[1] = (struct ioat_raw_descriptor *)(desc->sed->hw);
- descs[2] = (struct ioat_raw_descriptor *)(&desc->sed->hw->b[0]);
- for (i = 0; i < src_cnt; i++) {
- dma_addr_t src = pq16_get_src(descs, i);
-
- ioat_unmap(pdev, src - offset, len,
- PCI_DMA_TODEVICE, flags, 0);
- }
-
- /* the dests are sources in pq validate operations */
- if (pq->ctl_f.op == IOAT_OP_XOR_VAL) {
- if (!(flags & DMA_PREP_PQ_DISABLE_P))
- ioat_unmap(pdev, pq->p_addr - offset,
- len, PCI_DMA_TODEVICE,
- flags, 0);
- if (!(flags & DMA_PREP_PQ_DISABLE_Q))
- ioat_unmap(pdev, pq->q_addr - offset,
- len, PCI_DMA_TODEVICE,
- flags, 0);
- break;
- }
- }
-
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (!(flags & DMA_PREP_PQ_DISABLE_P))
- ioat_unmap(pdev, pq->p_addr - offset, len,
- PCI_DMA_BIDIRECTIONAL, flags, 1);
- if (!(flags & DMA_PREP_PQ_DISABLE_Q))
- ioat_unmap(pdev, pq->q_addr - offset, len,
- PCI_DMA_BIDIRECTIONAL, flags, 1);
- }
- break;
- }
- default:
- dev_err(&pdev->dev, "%s: unknown op type: %#x\n",
- __func__, desc->hw->ctl_f.op);
- }
+ kmem_cache_free(ioat3_sed_cache, sed);
}
static bool desc_has_ext(struct ioat_ring_ent *desc)
tx = &desc->txd;
if (tx->cookie) {
dma_cookie_complete(tx);
- ioat3_dma_unmap(ioat, desc, idx + i);
+ dma_descriptor_unmap(tx);
if (tx->callback) {
tx->callback(tx->callback_param);
tx->callback = NULL;
enum dma_status ret;
ret = dma_cookie_status(c, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
ioat3_cleanup(ioat);
u8 op;
int i, s, idx, num_descs;
- /* this function only handles src_cnt 9 - 16 */
- BUG_ON(src_cnt < 9);
-
/* this function is only called with 9-16 sources */
op = result ? IOAT_OP_PQ_VAL_16S : IOAT_OP_PQ_16S;
descs[0] = (struct ioat_raw_descriptor *) pq;
- desc->sed = ioat3_alloc_sed(device,
- sed_get_pq16_pool_idx(src_cnt));
+ desc->sed = ioat3_alloc_sed(device, (src_cnt-2) >> 3);
if (!desc->sed) {
dev_err(to_dev(chan),
"%s: no free sed entries\n", __func__);
return &desc->txd;
}
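+/*
+ * Continuation operations carry implied extra sources (the previous P/Q
+ * results), so account for them when choosing between the 8- and
+ * 16-source descriptor formats: one implied source when P is disabled
+ * for the continuation, three for a full continuation.
+ */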
+static int src_cnt_flags(unsigned int src_cnt, unsigned long flags)
+{
+ if (dmaf_p_disabled_continue(flags))
+ return src_cnt + 1;
+ else if (dmaf_continue(flags))
+ return src_cnt + 3;
+ else
+ return src_cnt;
+}
+
static struct dma_async_tx_descriptor *
ioat3_prep_pq(struct dma_chan *chan, dma_addr_t *dst, dma_addr_t *src,
unsigned int src_cnt, const unsigned char *scf, size_t len,
unsigned long flags)
{
- struct dma_device *dma = chan->device;
-
/* specify valid address for disabled result */
if (flags & DMA_PREP_PQ_DISABLE_P)
dst[0] = dst[1];
single_source_coef[0] = scf[0];
single_source_coef[1] = 0;
- return (src_cnt > 8) && (dma->max_pq > 8) ?
+ return src_cnt_flags(src_cnt, flags) > 8 ?
__ioat3_prep_pq16_lock(chan, NULL, dst, single_source,
2, single_source_coef, len,
flags) :
single_source_coef, len, flags);
} else {
- return (src_cnt > 8) && (dma->max_pq > 8) ?
+ return src_cnt_flags(src_cnt, flags) > 8 ?
__ioat3_prep_pq16_lock(chan, NULL, dst, src, src_cnt,
scf, len, flags) :
__ioat3_prep_pq_lock(chan, NULL, dst, src, src_cnt,
unsigned int src_cnt, const unsigned char *scf, size_t len,
enum sum_check_flags *pqres, unsigned long flags)
{
- struct dma_device *dma = chan->device;
-
/* specify valid address for disabled result */
if (flags & DMA_PREP_PQ_DISABLE_P)
pq[0] = pq[1];
*/
*pqres = 0;
- return (src_cnt > 8) && (dma->max_pq > 8) ?
+ return src_cnt_flags(src_cnt, flags) > 8 ?
__ioat3_prep_pq16_lock(chan, pqres, pq, src, src_cnt, scf, len,
flags) :
__ioat3_prep_pq_lock(chan, pqres, pq, src, src_cnt, scf, len,
ioat3_prep_pqxor(struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src,
unsigned int src_cnt, size_t len, unsigned long flags)
{
- struct dma_device *dma = chan->device;
unsigned char scf[src_cnt];
dma_addr_t pq[2];
flags |= DMA_PREP_PQ_DISABLE_Q;
pq[1] = dst; /* specify valid address for disabled result */
- return (src_cnt > 8) && (dma->max_pq > 8) ?
+ return src_cnt_flags(src_cnt, flags) > 8 ?
__ioat3_prep_pq16_lock(chan, NULL, pq, src, src_cnt, scf, len,
flags) :
__ioat3_prep_pq_lock(chan, NULL, pq, src, src_cnt, scf, len,
unsigned int src_cnt, size_t len,
enum sum_check_flags *result, unsigned long flags)
{
- struct dma_device *dma = chan->device;
unsigned char scf[src_cnt];
dma_addr_t pq[2];
flags |= DMA_PREP_PQ_DISABLE_Q;
pq[1] = pq[0]; /* specify valid address for disabled result */
-
- return (src_cnt > 8) && (dma->max_pq > 8) ?
+ return src_cnt_flags(src_cnt, flags) > 8 ?
__ioat3_prep_pq16_lock(chan, result, pq, &src[1], src_cnt - 1,
scf, len, flags) :
__ioat3_prep_pq_lock(chan, result, pq, &src[1], src_cnt - 1,
DMA_TO_DEVICE);
tx = dma->device_prep_dma_xor(dma_chan, dest_dma, dma_srcs,
IOAT_NUM_SRC_TEST, PAGE_SIZE,
- DMA_PREP_INTERRUPT |
- DMA_COMPL_SKIP_SRC_UNMAP |
- DMA_COMPL_SKIP_DEST_UNMAP);
+ DMA_PREP_INTERRUPT);
if (!tx) {
dev_err(dev, "Self-test xor prep failed\n");
tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
- if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_SUCCESS) {
+ if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dev, "Self-test xor timed out\n");
err = -ENODEV;
goto dma_unmap;
DMA_TO_DEVICE);
tx = dma->device_prep_dma_xor_val(dma_chan, dma_srcs,
IOAT_NUM_SRC_TEST + 1, PAGE_SIZE,
- &xor_val_result, DMA_PREP_INTERRUPT |
- DMA_COMPL_SKIP_SRC_UNMAP |
- DMA_COMPL_SKIP_DEST_UNMAP);
+ &xor_val_result, DMA_PREP_INTERRUPT);
if (!tx) {
dev_err(dev, "Self-test zero prep failed\n");
err = -ENODEV;
tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
- if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_SUCCESS) {
+ if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dev, "Self-test validate timed out\n");
err = -ENODEV;
goto dma_unmap;
goto free_resources;
}
+ memset(page_address(dest), 0, PAGE_SIZE);
+
/* test for non-zero parity sum */
op = IOAT_OP_XOR_VAL;
DMA_TO_DEVICE);
tx = dma->device_prep_dma_xor_val(dma_chan, dma_srcs,
IOAT_NUM_SRC_TEST + 1, PAGE_SIZE,
- &xor_val_result, DMA_PREP_INTERRUPT |
- DMA_COMPL_SKIP_SRC_UNMAP |
- DMA_COMPL_SKIP_DEST_UNMAP);
+ &xor_val_result, DMA_PREP_INTERRUPT);
if (!tx) {
dev_err(dev, "Self-test 2nd zero prep failed\n");
err = -ENODEV;
tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
- if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_SUCCESS) {
+ if (dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dev, "Self-test 2nd validate timed out\n");
err = -ENODEV;
goto dma_unmap;
static int ioat3_irq_reinit(struct ioatdma_device *device)
{
- int msixcnt = device->common.chancnt;
struct pci_dev *pdev = device->pdev;
- int i;
- struct msix_entry *msix;
- struct ioat_chan_common *chan;
- int err = 0;
+ int irq = pdev->irq, i;
+
+ if (!is_bwd_ioat(pdev))
+ return 0;
switch (device->irq_mode) {
case IOAT_MSIX:
- for (i = 0; i < msixcnt; i++) {
- msix = &device->msix_entries[i];
+ for (i = 0; i < device->common.chancnt; i++) {
+ struct msix_entry *msix = &device->msix_entries[i];
+ struct ioat_chan_common *chan;
+
chan = ioat_chan_by_index(device, i);
devm_free_irq(&pdev->dev, msix->vector, chan);
}
pci_disable_msix(pdev);
break;
-
- case IOAT_MSIX_SINGLE:
- msix = &device->msix_entries[0];
- chan = ioat_chan_by_index(device, 0);
- devm_free_irq(&pdev->dev, msix->vector, chan);
- pci_disable_msix(pdev);
- break;
-
case IOAT_MSI:
- chan = ioat_chan_by_index(device, 0);
- devm_free_irq(&pdev->dev, pdev->irq, chan);
pci_disable_msi(pdev);
- break;
-
+ /* fall through */
case IOAT_INTX:
- chan = ioat_chan_by_index(device, 0);
- devm_free_irq(&pdev->dev, pdev->irq, chan);
+ devm_free_irq(&pdev->dev, irq, device);
break;
-
default:
return 0;
}
-
device->irq_mode = IOAT_NOIRQ;
- err = ioat_dma_setup_interrupts(device);
-
- return err;
+ return ioat_dma_setup_interrupts(device);
}
static int ioat3_reset_hw(struct ioat_chan_common *chan)
}
err = ioat2_reset_sync(chan, msecs_to_jiffies(200));
- if (err) {
- dev_err(&pdev->dev, "Failed to reset!\n");
- return err;
- }
-
- if (device->irq_mode != IOAT_NOIRQ && is_bwd_ioat(pdev))
+ if (!err)
err = ioat3_irq_reinit(device);
+ if (err)
+ dev_err(&pdev->dev, "Failed to reset: %d\n", err);
+
return err;
}
char pool_name[14];
int i;
- /* allocate sw descriptor pool for SED */
- device->sed_pool = kmem_cache_create("ioat_sed",
- sizeof(struct ioat_sed_ent), 0, 0, NULL);
- if (!device->sed_pool)
- return -ENOMEM;
-
for (i = 0; i < MAX_SED_POOLS; i++) {
snprintf(pool_name, 14, "ioat_hw%d_sed", i);
/* allocate SED DMA pool */
- device->sed_hw_pool[i] = dma_pool_create(pool_name,
+ device->sed_hw_pool[i] = dmam_pool_create(pool_name,
&pdev->dev,
SED_SIZE * (i + 1), 64, 0);
if (!device->sed_hw_pool[i])
- goto sed_pool_cleanup;
+ return -ENOMEM;
}
}
device->dca = ioat3_dca_init(pdev, device->reg_base);
return 0;
-
-sed_pool_cleanup:
- if (device->sed_pool) {
- int i;
- kmem_cache_destroy(device->sed_pool);
-
- for (i = 0; i < MAX_SED_POOLS; i++)
- if (device->sed_hw_pool[i])
- dma_pool_destroy(device->sed_hw_pool[i]);
- }
-
- return -ENOMEM;
-}
-
-void ioat3_dma_remove(struct ioatdma_device *device)
-{
- if (device->sed_pool) {
- int i;
- kmem_cache_destroy(device->sed_pool);
-
- for (i = 0; i < MAX_SED_POOLS; i++)
- if (device->sed_hw_pool[i])
- dma_pool_destroy(device->sed_hw_pool[i]);
- }
}
MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)");
struct kmem_cache *ioat2_cache;
+struct kmem_cache *ioat3_sed_cache;
#define DRV_NAME "ioatdma"
if (!device)
return;
- if (device->version >= IOAT_VER_3_0)
- ioat3_dma_remove(device);
-
dev_err(&pdev->dev, "Removing dma and dca services\n");
if (device->dca) {
unregister_dca_provider(device->dca, &pdev->dev);
static int __init ioat_init_module(void)
{
- int err;
+ int err = -ENOMEM;
pr_info("%s: Intel(R) QuickData Technology Driver %s\n",
DRV_NAME, IOAT_DMA_VERSION);
if (!ioat2_cache)
return -ENOMEM;
+ ioat3_sed_cache = KMEM_CACHE(ioat_sed_ent, 0);
+ if (!ioat3_sed_cache)
+ goto err_ioat2_cache;
+
err = pci_register_driver(&ioat_pci_driver);
if (err)
- kmem_cache_destroy(ioat2_cache);
+ goto err_ioat3_cache;
+
+ return 0;
+
+ err_ioat3_cache:
+ kmem_cache_destroy(ioat3_sed_cache);
+
+ err_ioat2_cache:
+ kmem_cache_destroy(ioat2_cache);
return err;
}
}
}
-static void
-iop_desc_unmap(struct iop_adma_chan *iop_chan, struct iop_adma_desc_slot *desc)
-{
- struct dma_async_tx_descriptor *tx = &desc->async_tx;
- struct iop_adma_desc_slot *unmap = desc->group_head;
- struct device *dev = &iop_chan->device->pdev->dev;
- u32 len = unmap->unmap_len;
- enum dma_ctrl_flags flags = tx->flags;
- u32 src_cnt;
- dma_addr_t addr;
- dma_addr_t dest;
-
- src_cnt = unmap->unmap_src_cnt;
- dest = iop_desc_get_dest_addr(unmap, iop_chan);
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- enum dma_data_direction dir;
-
- if (src_cnt > 1) /* is xor? */
- dir = DMA_BIDIRECTIONAL;
- else
- dir = DMA_FROM_DEVICE;
-
- dma_unmap_page(dev, dest, len, dir);
- }
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- while (src_cnt--) {
- addr = iop_desc_get_src_addr(unmap, iop_chan, src_cnt);
- if (addr == dest)
- continue;
- dma_unmap_page(dev, addr, len, DMA_TO_DEVICE);
- }
- }
- desc->group_head = NULL;
-}
-
-static void
-iop_desc_unmap_pq(struct iop_adma_chan *iop_chan, struct iop_adma_desc_slot *desc)
-{
- struct dma_async_tx_descriptor *tx = &desc->async_tx;
- struct iop_adma_desc_slot *unmap = desc->group_head;
- struct device *dev = &iop_chan->device->pdev->dev;
- u32 len = unmap->unmap_len;
- enum dma_ctrl_flags flags = tx->flags;
- u32 src_cnt = unmap->unmap_src_cnt;
- dma_addr_t pdest = iop_desc_get_dest_addr(unmap, iop_chan);
- dma_addr_t qdest = iop_desc_get_qdest_addr(unmap, iop_chan);
- int i;
-
- if (tx->flags & DMA_PREP_CONTINUE)
- src_cnt -= 3;
-
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP) && !desc->pq_check_result) {
- dma_unmap_page(dev, pdest, len, DMA_BIDIRECTIONAL);
- dma_unmap_page(dev, qdest, len, DMA_BIDIRECTIONAL);
- }
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- dma_addr_t addr;
-
- for (i = 0; i < src_cnt; i++) {
- addr = iop_desc_get_src_addr(unmap, iop_chan, i);
- dma_unmap_page(dev, addr, len, DMA_TO_DEVICE);
- }
- if (desc->pq_check_result) {
- dma_unmap_page(dev, pdest, len, DMA_TO_DEVICE);
- dma_unmap_page(dev, qdest, len, DMA_TO_DEVICE);
- }
- }
-
- desc->group_head = NULL;
-}
-
-
static dma_cookie_t
iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc,
struct iop_adma_chan *iop_chan, dma_cookie_t cookie)
if (tx->callback)
tx->callback(tx->callback_param);
- /* unmap dma addresses
- * (unmap_single vs unmap_page?)
- */
- if (desc->group_head && desc->unmap_len) {
- if (iop_desc_is_pq(desc))
- iop_desc_unmap_pq(iop_chan, desc);
- else
- iop_desc_unmap(iop_chan, desc);
- }
+ dma_descriptor_unmap(tx);
+ if (desc->group_head)
+ desc->group_head = NULL;
}
/* run dependent operations */
if (sw_desc) {
grp_start = sw_desc->group_head;
iop_desc_init_interrupt(grp_start, iop_chan);
- grp_start->unmap_len = 0;
sw_desc->async_tx.flags = flags;
}
spin_unlock_bh(&iop_chan->lock);
iop_desc_set_byte_count(grp_start, iop_chan, len);
iop_desc_set_dest_addr(grp_start, iop_chan, dma_dest);
iop_desc_set_memcpy_src_addr(grp_start, dma_src);
- sw_desc->unmap_src_cnt = 1;
- sw_desc->unmap_len = len;
sw_desc->async_tx.flags = flags;
}
spin_unlock_bh(&iop_chan->lock);
iop_desc_init_xor(grp_start, src_cnt, flags);
iop_desc_set_byte_count(grp_start, iop_chan, len);
iop_desc_set_dest_addr(grp_start, iop_chan, dma_dest);
- sw_desc->unmap_src_cnt = src_cnt;
- sw_desc->unmap_len = len;
sw_desc->async_tx.flags = flags;
while (src_cnt--)
iop_desc_set_xor_src_addr(grp_start, src_cnt,
grp_start->xor_check_result = result;
pr_debug("\t%s: grp_start->xor_check_result: %p\n",
__func__, grp_start->xor_check_result);
- sw_desc->unmap_src_cnt = src_cnt;
- sw_desc->unmap_len = len;
sw_desc->async_tx.flags = flags;
while (src_cnt--)
iop_desc_set_zero_sum_src_addr(grp_start, src_cnt,
dst[0] = dst[1] & 0x7;
iop_desc_set_pq_addr(g, dst);
- sw_desc->unmap_src_cnt = src_cnt;
- sw_desc->unmap_len = len;
sw_desc->async_tx.flags = flags;
for (i = 0; i < src_cnt; i++)
iop_desc_set_pq_src_addr(g, i, src[i], scf[i]);
g->pq_check_result = pqres;
pr_debug("\t%s: g->pq_check_result: %p\n",
__func__, g->pq_check_result);
- sw_desc->unmap_src_cnt = src_cnt+2;
- sw_desc->unmap_len = len;
sw_desc->async_tx.flags = flags;
while (src_cnt--)
iop_desc_set_pq_zero_sum_src_addr(g, src_cnt,
int ret;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
iop_adma_slot_cleanup(iop_chan);
msleep(1);
if (iop_adma_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dma_chan->device->dev,
"Self-test copy timed out, disabling\n");
err = -ENODEV;
msleep(8);
if (iop_adma_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dma_chan->device->dev,
"Self-test xor timed out, disabling\n");
err = -ENODEV;
iop_adma_issue_pending(dma_chan);
msleep(8);
- if (iop_adma_status(dma_chan, cookie, NULL) != DMA_SUCCESS) {
+ if (iop_adma_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dma_chan->device->dev,
"Self-test zero sum timed out, disabling\n");
err = -ENODEV;
iop_adma_issue_pending(dma_chan);
msleep(8);
- if (iop_adma_status(dma_chan, cookie, NULL) != DMA_SUCCESS) {
+ if (iop_adma_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dma_chan->device->dev,
"Self-test non-zero sum timed out, disabling\n");
err = -ENODEV;
msleep(8);
if (iop_adma_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dev, "Self-test pq timed out, disabling\n");
err = -ENODEV;
goto free_resources;
msleep(8);
if (iop_adma_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dev, "Self-test pq-zero-sum timed out, disabling\n");
err = -ENODEV;
goto free_resources;
msleep(8);
if (iop_adma_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dev, "Self-test !pq-zero-sum timed out, disabling\n");
err = -ENODEV;
goto free_resources;
desc = list_entry(ichan->queue.next, struct idmac_tx_desc, list);
descnew = desc;
- dev_dbg(dev, "IDMAC irq %d, dma 0x%08x, next dma 0x%08x, current %d, curbuf 0x%08x\n",
- irq, sg_dma_address(*sg), sgnext ? sg_dma_address(sgnext) : 0, ichan->active_buffer, curbuf);
+ dev_dbg(dev, "IDMAC irq %d, dma %#llx, next dma %#llx, current %d, curbuf %#x\n",
+ irq, (u64)sg_dma_address(*sg),
+ sgnext ? (u64)sg_dma_address(sgnext) : 0,
+ ichan->active_buffer, curbuf);
/* Find the descriptor of sgnext */
sgnew = idmac_sg_next(ichan, &descnew, *sg);
size_t bytes = 0;
ret = dma_cookie_status(&c->vc.chan, cookie, state);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
spin_lock_irqsave(&c->vc.lock, flags);
irq = platform_get_irq(op, 0);
ret = devm_request_irq(&op->dev, irq,
- k3_dma_int_handler, IRQF_DISABLED, DRIVER_NAME, d);
+ k3_dma_int_handler, 0, DRIVER_NAME, d);
if (ret)
return ret;
* move the descriptors to a temporary list so we can drop
* the lock during the entire cleanup operation
*/
- list_del(&desc->node);
- list_add(&desc->node, &chain_cleanup);
+ list_move(&desc->node, &chain_cleanup);
/*
* Look for the first list entry which has the ENDIRQEN flag
if (irq) {
ret = devm_request_irq(pdev->dev, irq,
- mmp_pdma_chan_handler, IRQF_DISABLED, "pdma", phy);
+ mmp_pdma_chan_handler, 0, "pdma", phy);
if (ret) {
dev_err(pdev->dev, "channel request irq fail!\n");
return ret;
/* all chan share one irq, demux inside */
irq = platform_get_irq(op, 0);
ret = devm_request_irq(pdev->dev, irq,
- mmp_pdma_int_handler, IRQF_DISABLED, "pdma", pdev);
+ mmp_pdma_int_handler, 0, "pdma", pdev);
if (ret)
return ret;
}
}
}
+ platform_set_drvdata(op, pdev);
dev_info(pdev->device.dev, "initialized %d channels\n", dma_channels);
return 0;
}
#define TDCR_BURSTSZ_16B (0x3 << 6)
#define TDCR_BURSTSZ_32B (0x6 << 6)
#define TDCR_BURSTSZ_64B (0x7 << 6)
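+/* SQU transfers use their own burst-size encoding, distinct from the above */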
+#define TDCR_BURSTSZ_SQU_1B (0x5 << 6)
+#define TDCR_BURSTSZ_SQU_2B (0x6 << 6)
+#define TDCR_BURSTSZ_SQU_4B (0x0 << 6)
+#define TDCR_BURSTSZ_SQU_8B (0x1 << 6)
+#define TDCR_BURSTSZ_SQU_16B (0x3 << 6)
#define TDCR_BURSTSZ_SQU_32B (0x7 << 6)
#define TDCR_BURSTSZ_128B (0x5 << 6)
#define TDCR_DSTDIR_MSK (0x3 << 4) /* Dst Direction */
/* disable irq */
writel(0, tdmac->reg_base + TDIMR);
- tdmac->status = DMA_SUCCESS;
+ tdmac->status = DMA_COMPLETE;
}
static void mmp_tdma_resume_chan(struct mmp_tdma_chan *tdmac)
return -EINVAL;
}
} else if (tdmac->type == PXA910_SQU) {
- tdcr |= TDCR_BURSTSZ_SQU_32B;
tdcr |= TDCR_SSPMOD;
+
+ switch (tdmac->burst_sz) {
+ case 1:
+ tdcr |= TDCR_BURSTSZ_SQU_1B;
+ break;
+ case 2:
+ tdcr |= TDCR_BURSTSZ_SQU_2B;
+ break;
+ case 4:
+ tdcr |= TDCR_BURSTSZ_SQU_4B;
+ break;
+ case 8:
+ tdcr |= TDCR_BURSTSZ_SQU_8B;
+ break;
+ case 16:
+ tdcr |= TDCR_BURSTSZ_SQU_16B;
+ break;
+ case 32:
+ tdcr |= TDCR_BURSTSZ_SQU_32B;
+ break;
+ default:
+ dev_err(tdmac->dev, "mmp_tdma: unknown burst size.\n");
+ return -EINVAL;
+ }
}
writel(tdcr, tdmac->reg_base + TDCR);
if (tdmac->irq) {
ret = devm_request_irq(tdmac->dev, tdmac->irq,
- mmp_tdma_chan_handler, IRQF_DISABLED, "tdma", tdmac);
+ mmp_tdma_chan_handler, 0, "tdma", tdmac);
if (ret)
return ret;
}
int num_periods = buf_len / period_len;
int i = 0, buf = 0;
- if (tdmac->status != DMA_SUCCESS)
+ if (tdmac->status != DMA_COMPLETE)
return NULL;
if (period_len > TDMA_MAX_XFER_BYTES) {
tdmac->idx = idx;
tdmac->type = type;
tdmac->reg_base = (unsigned long)tdev->base + idx * 4;
- tdmac->status = DMA_SUCCESS;
+ tdmac->status = DMA_COMPLETE;
tdev->tdmac[tdmac->idx] = tdmac;
tasklet_init(&tdmac->tasklet, dma_do_tasklet, (unsigned long)tdmac);
if (irq_num != chan_num) {
irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(&pdev->dev, irq,
- mmp_tdma_int_handler, IRQF_DISABLED, "tdma", tdev);
+ mmp_tdma_int_handler, 0, "tdma", tdev);
if (ret)
return ret;
}
hw_desc->desc_command = (1 << 31);
}
-static u32 mv_desc_get_dest_addr(struct mv_xor_desc_slot *desc)
-{
- struct mv_xor_desc *hw_desc = desc->hw_desc;
- return hw_desc->phy_dest_addr;
-}
-
-static u32 mv_desc_get_src_addr(struct mv_xor_desc_slot *desc,
- int src_idx)
-{
- struct mv_xor_desc *hw_desc = desc->hw_desc;
- return hw_desc->phy_src_addr[mv_phy_src_idx(src_idx)];
-}
-
-
static void mv_desc_set_byte_count(struct mv_xor_desc_slot *desc,
u32 byte_count)
{
desc->async_tx.callback(
desc->async_tx.callback_param);
- /* unmap dma addresses
- * (unmap_single vs unmap_page?)
- */
- if (desc->group_head && desc->unmap_len) {
- struct mv_xor_desc_slot *unmap = desc->group_head;
- struct device *dev = mv_chan_to_devp(mv_chan);
- u32 len = unmap->unmap_len;
- enum dma_ctrl_flags flags = desc->async_tx.flags;
- u32 src_cnt;
- dma_addr_t addr;
- dma_addr_t dest;
-
- src_cnt = unmap->unmap_src_cnt;
- dest = mv_desc_get_dest_addr(unmap);
- if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- enum dma_data_direction dir;
-
- if (src_cnt > 1) /* is xor ? */
- dir = DMA_BIDIRECTIONAL;
- else
- dir = DMA_FROM_DEVICE;
- dma_unmap_page(dev, dest, len, dir);
- }
-
- if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- while (src_cnt--) {
- addr = mv_desc_get_src_addr(unmap,
- src_cnt);
- if (addr == dest)
- continue;
- dma_unmap_page(dev, addr, len,
- DMA_TO_DEVICE);
- }
- }
+ dma_descriptor_unmap(&desc->async_tx);
+ if (desc->group_head)
desc->group_head = NULL;
- }
}
/* run dependent operations */
enum dma_status ret;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS) {
+ if (ret == DMA_COMPLETE) {
mv_xor_clean_completed_slots(mv_chan);
return ret;
}
/*
* Perform a transaction to verify the HW works.
*/
-#define MV_XOR_TEST_SIZE 2000
static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
{
struct dma_chan *dma_chan;
dma_cookie_t cookie;
struct dma_async_tx_descriptor *tx;
+ struct dmaengine_unmap_data *unmap;
int err = 0;
- src = kmalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL);
+ src = kmalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL);
if (!src)
return -ENOMEM;
- dest = kzalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL);
+ dest = kzalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL);
if (!dest) {
kfree(src);
return -ENOMEM;
}
/* Fill in src buffer */
- for (i = 0; i < MV_XOR_TEST_SIZE; i++)
+ for (i = 0; i < PAGE_SIZE; i++)
((u8 *) src)[i] = (u8)i;
dma_chan = &mv_chan->dmachan;
goto out;
}
- dest_dma = dma_map_single(dma_chan->device->dev, dest,
- MV_XOR_TEST_SIZE, DMA_FROM_DEVICE);
+ unmap = dmaengine_get_unmap_data(dma_chan->device->dev, 2, GFP_KERNEL);
+ if (!unmap) {
+ err = -ENOMEM;
+ goto free_resources;
+ }
+
- src_dma = dma_map_single(dma_chan->device->dev, src,
- MV_XOR_TEST_SIZE, DMA_TO_DEVICE);
+ src_dma = dma_map_page(dma_chan->device->dev, virt_to_page(src), 0,
+ PAGE_SIZE, DMA_TO_DEVICE);
+ unmap->to_cnt = 1;
+ unmap->addr[0] = src_dma;
+
+ dest_dma = dma_map_page(dma_chan->device->dev, virt_to_page(dest), 0,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ unmap->from_cnt = 1;
+ unmap->addr[1] = dest_dma;
+
+ unmap->len = PAGE_SIZE;
tx = mv_xor_prep_dma_memcpy(dma_chan, dest_dma, src_dma,
- MV_XOR_TEST_SIZE, 0);
+ PAGE_SIZE, 0);
cookie = mv_xor_tx_submit(tx);
mv_xor_issue_pending(dma_chan);
async_tx_ack(tx);
msleep(1);
if (mv_xor_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dma_chan->device->dev,
"Self-test copy timed out, disabling\n");
err = -ENODEV;
}
dma_sync_single_for_cpu(dma_chan->device->dev, dest_dma,
- MV_XOR_TEST_SIZE, DMA_FROM_DEVICE);
- if (memcmp(src, dest, MV_XOR_TEST_SIZE)) {
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ if (memcmp(src, dest, PAGE_SIZE)) {
dev_err(dma_chan->device->dev,
"Self-test copy failed compare, disabling\n");
err = -ENODEV;
}
free_resources:
+ dmaengine_unmap_put(unmap);
mv_xor_free_chan_resources(dma_chan);
out:
kfree(src);
dma_addr_t dma_srcs[MV_XOR_NUM_SRC_TEST];
dma_addr_t dest_dma;
struct dma_async_tx_descriptor *tx;
+ struct dmaengine_unmap_data *unmap;
struct dma_chan *dma_chan;
dma_cookie_t cookie;
u8 cmp_byte = 0;
u32 cmp_word;
int err = 0;
+ int src_count = MV_XOR_NUM_SRC_TEST;
- for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) {
+ for (src_idx = 0; src_idx < src_count; src_idx++) {
xor_srcs[src_idx] = alloc_page(GFP_KERNEL);
if (!xor_srcs[src_idx]) {
while (src_idx--)
}
/* Fill in src buffers */
- for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) {
+ for (src_idx = 0; src_idx < src_count; src_idx++) {
u8 *ptr = page_address(xor_srcs[src_idx]);
for (i = 0; i < PAGE_SIZE; i++)
ptr[i] = (1 << src_idx);
}
- for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++)
+ for (src_idx = 0; src_idx < src_count; src_idx++)
cmp_byte ^= (u8) (1 << src_idx);
cmp_word = (cmp_byte << 24) | (cmp_byte << 16) |
goto out;
}
+ unmap = dmaengine_get_unmap_data(dma_chan->device->dev, src_count + 1,
+ GFP_KERNEL);
+ if (!unmap) {
+ err = -ENOMEM;
+ goto free_resources;
+ }
+
/* test xor */
- dest_dma = dma_map_page(dma_chan->device->dev, dest, 0, PAGE_SIZE,
- DMA_FROM_DEVICE);
- for (i = 0; i < MV_XOR_NUM_SRC_TEST; i++)
- dma_srcs[i] = dma_map_page(dma_chan->device->dev, xor_srcs[i],
- 0, PAGE_SIZE, DMA_TO_DEVICE);
+ for (i = 0; i < src_count; i++) {
+ unmap->addr[i] = dma_map_page(dma_chan->device->dev, xor_srcs[i],
+ 0, PAGE_SIZE, DMA_TO_DEVICE);
+ dma_srcs[i] = unmap->addr[i];
+ unmap->to_cnt++;
+ }
+
+ unmap->addr[src_count] = dma_map_page(dma_chan->device->dev, dest, 0,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ dest_dma = unmap->addr[src_count];
+ unmap->from_cnt = 1;
+ unmap->len = PAGE_SIZE;
tx = mv_xor_prep_dma_xor(dma_chan, dest_dma, dma_srcs,
- MV_XOR_NUM_SRC_TEST, PAGE_SIZE, 0);
+ src_count, PAGE_SIZE, 0);
cookie = mv_xor_tx_submit(tx);
mv_xor_issue_pending(dma_chan);
msleep(8);
if (mv_xor_status(dma_chan, cookie, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
dev_err(dma_chan->device->dev,
"Self-test xor timed out, disabling\n");
err = -ENODEV;
}
free_resources:
+ dmaengine_unmap_put(unmap);
mv_xor_free_chan_resources(dma_chan);
out:
- src_idx = MV_XOR_NUM_SRC_TEST;
+ src_idx = src_count;
while (src_idx--)
__free_page(xor_srcs[src_idx]);
__free_page(dest);
}
mv_chan->mmr_base = xordev->xor_base;
- if (!mv_chan->mmr_base) {
- ret = -ENOMEM;
- goto err_free_dma;
- }
+ mv_chan->mmr_high_base = xordev->xor_high_base;
tasklet_init(&mv_chan->irq_tasklet, mv_xor_tasklet, (unsigned long)
mv_chan);
mv_xor_conf_mbus_windows(struct mv_xor_device *xordev,
const struct mbus_dram_target_info *dram)
{
- void __iomem *base = xordev->xor_base;
+ void __iomem *base = xordev->xor_high_base;
u32 win_enable = 0;
int i;
int i = 0;
for_each_child_of_node(pdev->dev.of_node, np) {
+ struct mv_xor_chan *chan;
dma_cap_mask_t cap_mask;
int irq;
goto err_channel_add;
}
- xordev->channels[i] =
- mv_xor_channel_add(xordev, pdev, i,
- cap_mask, irq);
- if (IS_ERR(xordev->channels[i])) {
- ret = PTR_ERR(xordev->channels[i]);
- xordev->channels[i] = NULL;
+ chan = mv_xor_channel_add(xordev, pdev, i,
+ cap_mask, irq);
+ if (IS_ERR(chan)) {
+ ret = PTR_ERR(chan);
irq_dispose_mapping(irq);
goto err_channel_add;
}
+ xordev->channels[i] = chan;
i++;
}
} else if (pdata && pdata->channels) {
for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) {
struct mv_xor_channel_data *cd;
+ struct mv_xor_chan *chan;
int irq;
cd = &pdata->channels[i];
goto err_channel_add;
}
- xordev->channels[i] =
- mv_xor_channel_add(xordev, pdev, i,
- cd->cap_mask, irq);
- if (IS_ERR(xordev->channels[i])) {
- ret = PTR_ERR(xordev->channels[i]);
+ chan = mv_xor_channel_add(xordev, pdev, i,
+ cd->cap_mask, irq);
+ if (IS_ERR(chan)) {
+ ret = PTR_ERR(chan);
goto err_channel_add;
}
+
+ xordev->channels[i] = chan;
}
}
#define XOR_OPERATION_MODE_MEMCPY 2
#define XOR_DESCRIPTOR_SWAP BIT(14)
-#define XOR_CURR_DESC(chan) (chan->mmr_base + 0x210 + (chan->idx * 4))
-#define XOR_NEXT_DESC(chan) (chan->mmr_base + 0x200 + (chan->idx * 4))
-#define XOR_BYTE_COUNT(chan) (chan->mmr_base + 0x220 + (chan->idx * 4))
-#define XOR_DEST_POINTER(chan) (chan->mmr_base + 0x2B0 + (chan->idx * 4))
-#define XOR_BLOCK_SIZE(chan) (chan->mmr_base + 0x2C0 + (chan->idx * 4))
-#define XOR_INIT_VALUE_LOW(chan) (chan->mmr_base + 0x2E0)
-#define XOR_INIT_VALUE_HIGH(chan) (chan->mmr_base + 0x2E4)
+#define XOR_CURR_DESC(chan) (chan->mmr_high_base + 0x10 + (chan->idx * 4))
+#define XOR_NEXT_DESC(chan) (chan->mmr_high_base + 0x00 + (chan->idx * 4))
+#define XOR_BYTE_COUNT(chan) (chan->mmr_high_base + 0x20 + (chan->idx * 4))
+#define XOR_DEST_POINTER(chan) (chan->mmr_high_base + 0xB0 + (chan->idx * 4))
+#define XOR_BLOCK_SIZE(chan) (chan->mmr_high_base + 0xC0 + (chan->idx * 4))
+#define XOR_INIT_VALUE_LOW(chan) (chan->mmr_high_base + 0xE0)
+#define XOR_INIT_VALUE_HIGH(chan) (chan->mmr_high_base + 0xE4)
#define XOR_CONFIG(chan) (chan->mmr_base + 0x10 + (chan->idx * 4))
#define XOR_ACTIVATION(chan) (chan->mmr_base + 0x20 + (chan->idx * 4))
#define XOR_ERROR_ADDR(chan) (chan->mmr_base + 0x60)
#define XOR_INTR_MASK_VALUE 0x3F5
-#define WINDOW_BASE(w) (0x250 + ((w) << 2))
-#define WINDOW_SIZE(w) (0x270 + ((w) << 2))
-#define WINDOW_REMAP_HIGH(w) (0x290 + ((w) << 2))
-#define WINDOW_BAR_ENABLE(chan) (0x240 + ((chan) << 2))
-#define WINDOW_OVERRIDE_CTRL(chan) (0x2A0 + ((chan) << 2))
+#define WINDOW_BASE(w) (0x50 + ((w) << 2))
+#define WINDOW_SIZE(w) (0x70 + ((w) << 2))
+#define WINDOW_REMAP_HIGH(w) (0x90 + ((w) << 2))
+#define WINDOW_BAR_ENABLE(chan) (0x40 + ((chan) << 2))
+#define WINDOW_OVERRIDE_CTRL(chan) (0xA0 + ((chan) << 2))
struct mv_xor_device {
void __iomem *xor_base;
int pending;
spinlock_t lock; /* protects the descriptor slot pool */
void __iomem *mmr_base;
+ void __iomem *mmr_high_base;
unsigned int idx;
int irq;
enum dma_transaction_type current_type;
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
+#include <linux/list.h>
#include <asm/irq.h>
(((dma_is_apbh(d) && apbh_is_old(d)) ? 0x050 : 0x110) + (n) * 0x70)
#define HW_APBHX_CHn_SEMA(d, n) \
(((dma_is_apbh(d) && apbh_is_old(d)) ? 0x080 : 0x140) + (n) * 0x70)
+#define HW_APBHX_CHn_BAR(d, n) \
+ (((dma_is_apbh(d) && apbh_is_old(d)) ? 0x070 : 0x130) + (n) * 0x70)
+#define HW_APBX_CHn_DEBUG1(d, n) (0x150 + (n) * 0x70)
/*
* ccw bits definitions
int desc_count;
enum dma_status status;
unsigned int flags;
+ bool reset;
#define MXS_DMA_SG_LOOP (1 << 0)
+#define MXS_DMA_USE_SEMAPHORE (1 << 1)
};
#define MXS_DMA_CHANNELS 16
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;
- if (dma_is_apbh(mxs_dma) && apbh_is_old(mxs_dma))
+ /*
+ * mxs dma channel resets can cause a channel stall. To recover from a
+ * channel stall, we have to reset the whole DMA engine. To avoid this,
+ * we use cyclic DMA with semaphores, which are incremented in
+ * mxs_dma_int_handler. To reset the channel, we can then simply stop
+ * writing to the semaphore counter.
+ */
+ if (mxs_chan->flags & MXS_DMA_USE_SEMAPHORE &&
+ mxs_chan->flags & MXS_DMA_SG_LOOP) {
+ mxs_chan->reset = true;
+ } else if (dma_is_apbh(mxs_dma) && apbh_is_old(mxs_dma)) {
writel(1 << (chan_id + BP_APBH_CTRL0_RESET_CHANNEL),
mxs_dma->base + HW_APBHX_CTRL0 + STMP_OFFSET_REG_SET);
- else
+ } else {
+ unsigned long elapsed = 0;
+ const unsigned long max_wait = 50000; /* 50ms */
+ void __iomem *reg_dbg1 = mxs_dma->base +
+ HW_APBX_CHn_DEBUG1(mxs_dma, chan_id);
+
+ /*
+ * On i.MX28 APBX, the DMA channel can stop working if we reset
+ * We wait here until the channel leaves that state, then trigger the
+ * reset. The wait is capped at 50ms, so a stuck channel cannot hang
+ * the kernel here.
+ * because of this.
+ */
+ while ((readl(reg_dbg1) & 0xf) == 0x8 && elapsed < max_wait) {
+ udelay(100);
+ elapsed += 100;
+ }
+
+ if (elapsed >= max_wait)
+ dev_err(&mxs_chan->mxs_dma->pdev->dev,
+ "Failed waiting for the DMA channel %d to leave state READ_FLUSH, trying to reset channel in READ_FLUSH state now\n",
+ chan_id);
+
writel(1 << (chan_id + BP_APBHX_CHANNEL_CTRL_RESET_CHANNEL),
mxs_dma->base + HW_APBHX_CHANNEL_CTRL + STMP_OFFSET_REG_SET);
+ }
+
+ mxs_chan->status = DMA_COMPLETE;
}
static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
mxs_dma->base + HW_APBHX_CHn_NXTCMDAR(mxs_dma, chan_id));
/* write 1 to SEMA to kick off the channel */
- writel(1, mxs_dma->base + HW_APBHX_CHn_SEMA(mxs_dma, chan_id));
+ if (mxs_chan->flags & MXS_DMA_USE_SEMAPHORE &&
+ mxs_chan->flags & MXS_DMA_SG_LOOP) {
+ /*
+ * A cyclic DMA consists of at least 2 segments, so initialize the
+ * semaphore with 2 to leave enough time to add 1 to it when needed.
+ */
+ writel(2, mxs_dma->base + HW_APBHX_CHn_SEMA(mxs_dma, chan_id));
+ } else {
+ writel(1, mxs_dma->base + HW_APBHX_CHn_SEMA(mxs_dma, chan_id));
+ }
+ mxs_chan->reset = false;
}
static void mxs_dma_disable_chan(struct mxs_dma_chan *mxs_chan)
{
- mxs_chan->status = DMA_SUCCESS;
+ mxs_chan->status = DMA_COMPLETE;
}
static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
mxs_chan->desc.callback(mxs_chan->desc.callback_param);
}
+static int mxs_dma_irq_to_chan(struct mxs_dma_engine *mxs_dma, int irq)
+{
+ int i;
+
+ for (i = 0; i != mxs_dma->nr_channels; ++i)
+ if (mxs_dma->mxs_chans[i].chan_irq == irq)
+ return i;
+
+ return -EINVAL;
+}
+
static irqreturn_t mxs_dma_int_handler(int irq, void *dev_id)
{
struct mxs_dma_engine *mxs_dma = dev_id;
- u32 stat1, stat2;
+ struct mxs_dma_chan *mxs_chan;
+ u32 completed;
+ u32 err;
+ int chan = mxs_dma_irq_to_chan(mxs_dma, irq);
+
+ if (chan < 0)
+ return IRQ_NONE;
/* completion status */
- stat1 = readl(mxs_dma->base + HW_APBHX_CTRL1);
- stat1 &= MXS_DMA_CHANNELS_MASK;
- writel(stat1, mxs_dma->base + HW_APBHX_CTRL1 + STMP_OFFSET_REG_CLR);
+ completed = readl(mxs_dma->base + HW_APBHX_CTRL1);
+ completed = (completed >> chan) & 0x1;
+
+ /* Clear interrupt */
+ writel((1 << chan),
+ mxs_dma->base + HW_APBHX_CTRL1 + STMP_OFFSET_REG_CLR);
/* error status */
- stat2 = readl(mxs_dma->base + HW_APBHX_CTRL2);
- writel(stat2, mxs_dma->base + HW_APBHX_CTRL2 + STMP_OFFSET_REG_CLR);
+ err = readl(mxs_dma->base + HW_APBHX_CTRL2);
+ err &= (1 << (MXS_DMA_CHANNELS + chan)) | (1 << chan);
+
+ /*
+ * error status bit is in the upper 16 bits, error irq bit in the lower
+ * 16 bits. We transform it into a simpler error code:
+ * err: 0x00 = no error, 0x01 = TERMINATION, 0x02 = BUS_ERROR
+ */
+ err = (err >> (MXS_DMA_CHANNELS + chan)) + (err >> chan);
+
+ /* Clear error irq */
+ writel((1 << chan),
+ mxs_dma->base + HW_APBHX_CTRL2 + STMP_OFFSET_REG_CLR);
/*
* When both completion and error of termination bits set at the
* same time, we do not take it as an error. IOW, it only becomes
- * an error we need to handle here in case of either it's (1) a bus
- * error or (2) a termination error with no completion.
+ * an error we need to handle here in case of either it's a bus
+ * an error we need to handle here if it is either a bus error or a
+ * termination error with no completion. Termination error is 0x01, so
+ * subtracting (err & completed) leaves only the real error cases.
- stat2 = ((stat2 >> MXS_DMA_CHANNELS) & stat2) | /* (1) */
- (~(stat2 >> MXS_DMA_CHANNELS) & stat2 & ~stat1); /* (2) */
-
- /* combine error and completion status for checking */
- stat1 = (stat2 << MXS_DMA_CHANNELS) | stat1;
- while (stat1) {
- int channel = fls(stat1) - 1;
- struct mxs_dma_chan *mxs_chan =
- &mxs_dma->mxs_chans[channel % MXS_DMA_CHANNELS];
-
- if (channel >= MXS_DMA_CHANNELS) {
- dev_dbg(mxs_dma->dma_device.dev,
- "%s: error in channel %d\n", __func__,
- channel - MXS_DMA_CHANNELS);
- mxs_chan->status = DMA_ERROR;
- mxs_dma_reset_chan(mxs_chan);
- } else {
- if (mxs_chan->flags & MXS_DMA_SG_LOOP)
- mxs_chan->status = DMA_IN_PROGRESS;
- else
- mxs_chan->status = DMA_SUCCESS;
- }
- stat1 &= ~(1 << channel);
- if (mxs_chan->status == DMA_SUCCESS)
- dma_cookie_complete(&mxs_chan->desc);
+ err -= err & completed;
+
+ mxs_chan = &mxs_dma->mxs_chans[chan];
+
+ if (err) {
+ dev_dbg(mxs_dma->dma_device.dev,
+ "%s: error in channel %d\n", __func__,
+ chan);
+ mxs_chan->status = DMA_ERROR;
+ mxs_dma_reset_chan(mxs_chan);
+ } else if (mxs_chan->status != DMA_COMPLETE) {
+ if (mxs_chan->flags & MXS_DMA_SG_LOOP) {
+ mxs_chan->status = DMA_IN_PROGRESS;
+ if (mxs_chan->flags & MXS_DMA_USE_SEMAPHORE)
+ writel(1, mxs_dma->base +
+ HW_APBHX_CHn_SEMA(mxs_dma, chan));
+ } else {
+ mxs_chan->status = DMA_COMPLETE;
+ }
+ }
- /* schedule tasklet on this channel */
- tasklet_schedule(&mxs_chan->tasklet);
+ if (mxs_chan->status == DMA_COMPLETE) {
+ if (mxs_chan->reset)
+ return IRQ_HANDLED;
+ dma_cookie_complete(&mxs_chan->desc);
}
+ /* schedule tasklet on this channel */
+ tasklet_schedule(&mxs_chan->tasklet);
+
return IRQ_HANDLED;
}
mxs_chan->status = DMA_IN_PROGRESS;
mxs_chan->flags |= MXS_DMA_SG_LOOP;
+ mxs_chan->flags |= MXS_DMA_USE_SEMAPHORE;
if (num_periods > NUM_CCW) {
dev_err(mxs_dma->dma_device.dev,
ccw->bits |= CCW_IRQ;
ccw->bits |= CCW_HALT_ON_TERM;
ccw->bits |= CCW_TERM_FLUSH;
+ ccw->bits |= CCW_DEC_SEM;
ccw->bits |= BF_CCW(direction == DMA_DEV_TO_MEM ?
MXS_DMA_CMD_WRITE : MXS_DMA_CMD_READ, COMMAND);
dma_cookie_t cookie, struct dma_tx_state *txstate)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
+ struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
+ u32 residue = 0;
+
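+ /*
+ * For an in-flight cyclic transfer, derive the residue from the
+ * channel's current bus address: bytes left = end of the last
+ * buffer (bufaddr + xfer_bytes) minus the position in HW_APBHX_CHn_BAR.
+ */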
+ if (mxs_chan->status == DMA_IN_PROGRESS &&
+ mxs_chan->flags & MXS_DMA_SG_LOOP) {
+ struct mxs_dma_ccw *last_ccw;
+ u32 bar;
+
+ last_ccw = &mxs_chan->ccw[mxs_chan->desc_count - 1];
+ residue = last_ccw->xfer_bytes + last_ccw->bufaddr;
+
+ bar = readl(mxs_dma->base +
+ HW_APBHX_CHn_BAR(mxs_dma, chan->chan_id));
+ residue -= bar;
+ }
- dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 0);
+ dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie,
+ residue);
return mxs_chan->status;
}
unsigned long flags;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS || !txstate)
+ if (ret == DMA_COMPLETE || !txstate)
return ret;
spin_lock_irqsave(&c->vc.lock, flags);
list_move_tail(&desc->node, &pch->dmac->desc_pool);
}
+ dma_descriptor_unmap(&desc->txd);
+
if (callback) {
spin_unlock_irqrestore(&pch->lock, flags);
callback(callback_param);
return false;
peri_id = chan->private;
- return *peri_id == (unsigned)param;
+ return *peri_id == (unsigned long)param;
}
EXPORT_SYMBOL(pl330_filter);
static inline void _init_desc(struct dma_pl330_desc *desc)
{
- desc->pchan = NULL;
desc->req.x = &desc->px;
desc->req.token = desc;
desc->rqcfg.swap = SWAP_NO;
- desc->rqcfg.privileged = 0;
- desc->rqcfg.insnaccess = 0;
desc->rqcfg.scctl = SCCTRL0;
desc->rqcfg.dcctl = DCCTRL0;
desc->req.cfg = &desc->rqcfg;
if (!pdmac)
return 0;
- desc = kmalloc(count * sizeof(*desc), flg);
+ desc = kcalloc(count, sizeof(*desc), flg);
if (!desc)
return 0;
amba_set_drvdata(adev, pdmac);
- irq = adev->irq[0];
- ret = request_irq(irq, pl330_irq_handler, 0,
- dev_name(&adev->dev), pi);
- if (ret)
- return ret;
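+ /* Request every IRQ the AMBA device provides; stop at the first unset slot. */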
+ for (i = 0; i < AMBA_NR_IRQS; i++) {
+ irq = adev->irq[i];
+ if (irq) {
+ ret = devm_request_irq(&adev->dev, irq,
+ pl330_irq_handler, 0,
+ dev_name(&adev->dev), pi);
+ if (ret)
+ return ret;
+ } else {
+ break;
+ }
+ }
pi->pcfg.periph_id = adev->periphid;
ret = pl330_add(pi);
if (ret)
- goto probe_err1;
+ return ret;
INIT_LIST_HEAD(&pdmac->desc_pool);
spin_lock_init(&pdmac->pool_lock);
return 0;
probe_err3:
- amba_set_drvdata(adev, NULL);
-
/* Idle the DMAC */
list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels,
chan.device_node) {
}
probe_err2:
pl330_del(pi);
-probe_err1:
- free_irq(irq, pi);
return ret;
}
struct dma_pl330_dmac *pdmac = amba_get_drvdata(adev);
struct dma_pl330_chan *pch, *_p;
struct pl330_info *pi;
- int irq;
if (!pdmac)
return 0;
of_dma_controller_free(adev->dev.of_node);
dma_async_device_unregister(&pdmac->ddma);
- amba_set_drvdata(adev, NULL);
/* Idle the DMAC */
list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels,
pl330_del(pi);
- irq = adev->irq[0];
- free_irq(irq, pi);
-
return 0;
}
hw_desc->opc = DMA_CDB_OPC_MV_SG1_SG2;
}
-/**
- * ppc440spe_desc_init_memset - initialize the descriptor for MEMSET operation
- */
-static void ppc440spe_desc_init_memset(struct ppc440spe_adma_desc_slot *desc,
- int value, unsigned long flags)
-{
- struct dma_cdb *hw_desc = desc->hw_desc;
-
- memset(desc->hw_desc, 0, sizeof(struct dma_cdb));
- desc->hw_next = NULL;
- desc->src_cnt = 1;
- desc->dst_cnt = 1;
-
- if (flags & DMA_PREP_INTERRUPT)
- set_bit(PPC440SPE_DESC_INT, &desc->flags);
- else
- clear_bit(PPC440SPE_DESC_INT, &desc->flags);
-
- hw_desc->sg1u = hw_desc->sg1l = cpu_to_le32((u32)value);
- hw_desc->sg3u = hw_desc->sg3l = cpu_to_le32((u32)value);
- hw_desc->opc = DMA_CDB_OPC_DFILL128;
-}
-
/**
* ppc440spe_desc_set_src_addr - set source address into the descriptor
*/
local_irq_restore(flags);
}
-/**
- * ppc440spe_desc_get_src_addr - extract the source address from the descriptor
- */
-static u32 ppc440spe_desc_get_src_addr(struct ppc440spe_adma_desc_slot *desc,
- struct ppc440spe_adma_chan *chan, int src_idx)
-{
- struct dma_cdb *dma_hw_desc;
- struct xor_cb *xor_hw_desc;
-
- switch (chan->device->id) {
- case PPC440SPE_DMA0_ID:
- case PPC440SPE_DMA1_ID:
- dma_hw_desc = desc->hw_desc;
- /* May have 0, 1, 2, or 3 sources */
- switch (dma_hw_desc->opc) {
- case DMA_CDB_OPC_NO_OP:
- case DMA_CDB_OPC_DFILL128:
- return 0;
- case DMA_CDB_OPC_DCHECK128:
- if (unlikely(src_idx)) {
- printk(KERN_ERR "%s: try to get %d source for"
- " DCHECK128\n", __func__, src_idx);
- BUG();
- }
- return le32_to_cpu(dma_hw_desc->sg1l);
- case DMA_CDB_OPC_MULTICAST:
- case DMA_CDB_OPC_MV_SG1_SG2:
- if (unlikely(src_idx > 2)) {
- printk(KERN_ERR "%s: try to get %d source from"
- " DMA descr\n", __func__, src_idx);
- BUG();
- }
- if (src_idx) {
- if (le32_to_cpu(dma_hw_desc->sg1u) &
- DMA_CUED_XOR_WIN_MSK) {
- u8 region;
-
- if (src_idx == 1)
- return le32_to_cpu(
- dma_hw_desc->sg1l) +
- desc->unmap_len;
-
- region = (le32_to_cpu(
- dma_hw_desc->sg1u)) >>
- DMA_CUED_REGION_OFF;
-
- region &= DMA_CUED_REGION_MSK;
- switch (region) {
- case DMA_RXOR123:
- return le32_to_cpu(
- dma_hw_desc->sg1l) +
- (desc->unmap_len << 1);
- case DMA_RXOR124:
- return le32_to_cpu(
- dma_hw_desc->sg1l) +
- (desc->unmap_len * 3);
- case DMA_RXOR125:
- return le32_to_cpu(
- dma_hw_desc->sg1l) +
- (desc->unmap_len << 2);
- default:
- printk(KERN_ERR
- "%s: try to"
- " get src3 for region %02x"
- "PPC440SPE_DESC_RXOR12?\n",
- __func__, region);
- BUG();
- }
- } else {
- printk(KERN_ERR
- "%s: try to get %d"
- " source for non-cued descr\n",
- __func__, src_idx);
- BUG();
- }
- }
- return le32_to_cpu(dma_hw_desc->sg1l);
- default:
- printk(KERN_ERR "%s: unknown OPC 0x%02x\n",
- __func__, dma_hw_desc->opc);
- BUG();
- }
- return le32_to_cpu(dma_hw_desc->sg1l);
- case PPC440SPE_XOR_ID:
- /* May have up to 16 sources */
- xor_hw_desc = desc->hw_desc;
- return xor_hw_desc->ops[src_idx].l;
- }
- return 0;
-}
-
-/**
- * ppc440spe_desc_get_dest_addr - extract the destination address from the
- * descriptor
- */
-static u32 ppc440spe_desc_get_dest_addr(struct ppc440spe_adma_desc_slot *desc,
- struct ppc440spe_adma_chan *chan, int idx)
-{
- struct dma_cdb *dma_hw_desc;
- struct xor_cb *xor_hw_desc;
-
- switch (chan->device->id) {
- case PPC440SPE_DMA0_ID:
- case PPC440SPE_DMA1_ID:
- dma_hw_desc = desc->hw_desc;
-
- if (likely(!idx))
- return le32_to_cpu(dma_hw_desc->sg2l);
- return le32_to_cpu(dma_hw_desc->sg3l);
- case PPC440SPE_XOR_ID:
- xor_hw_desc = desc->hw_desc;
- return xor_hw_desc->cbtal;
- }
- return 0;
-}
-
-/**
- * ppc440spe_desc_get_src_num - extract the number of source addresses from
- * the descriptor
- */
-static u32 ppc440spe_desc_get_src_num(struct ppc440spe_adma_desc_slot *desc,
- struct ppc440spe_adma_chan *chan)
-{
- struct dma_cdb *dma_hw_desc;
- struct xor_cb *xor_hw_desc;
-
- switch (chan->device->id) {
- case PPC440SPE_DMA0_ID:
- case PPC440SPE_DMA1_ID:
- dma_hw_desc = desc->hw_desc;
-
- switch (dma_hw_desc->opc) {
- case DMA_CDB_OPC_NO_OP:
- case DMA_CDB_OPC_DFILL128:
- return 0;
- case DMA_CDB_OPC_DCHECK128:
- return 1;
- case DMA_CDB_OPC_MV_SG1_SG2:
- case DMA_CDB_OPC_MULTICAST:
- /*
- * Only for RXOR operations we have more than
- * one source
- */
- if (le32_to_cpu(dma_hw_desc->sg1u) &
- DMA_CUED_XOR_WIN_MSK) {
- /* RXOR op, there are 2 or 3 sources */
- if (((le32_to_cpu(dma_hw_desc->sg1u) >>
- DMA_CUED_REGION_OFF) &
- DMA_CUED_REGION_MSK) == DMA_RXOR12) {
- /* RXOR 1-2 */
- return 2;
- } else {
- /* RXOR 1-2-3/1-2-4/1-2-5 */
- return 3;
- }
- }
- return 1;
- default:
- printk(KERN_ERR "%s: unknown OPC 0x%02x\n",
- __func__, dma_hw_desc->opc);
- BUG();
- }
- case PPC440SPE_XOR_ID:
- /* up to 16 sources */
- xor_hw_desc = desc->hw_desc;
- return xor_hw_desc->cbc & XOR_CDCR_OAC_MSK;
- default:
- BUG();
- }
- return 0;
-}
-
-/**
- * ppc440spe_desc_get_dst_num - get the number of destination addresses in
- * this descriptor
- */
-static u32 ppc440spe_desc_get_dst_num(struct ppc440spe_adma_desc_slot *desc,
- struct ppc440spe_adma_chan *chan)
-{
- struct dma_cdb *dma_hw_desc;
-
- switch (chan->device->id) {
- case PPC440SPE_DMA0_ID:
- case PPC440SPE_DMA1_ID:
- /* May be 1 or 2 destinations */
- dma_hw_desc = desc->hw_desc;
- switch (dma_hw_desc->opc) {
- case DMA_CDB_OPC_NO_OP:
- case DMA_CDB_OPC_DCHECK128:
- return 0;
- case DMA_CDB_OPC_MV_SG1_SG2:
- case DMA_CDB_OPC_DFILL128:
- return 1;
- case DMA_CDB_OPC_MULTICAST:
- if (desc->dst_cnt == 2)
- return 2;
- else
- return 1;
- default:
- printk(KERN_ERR "%s: unknown OPC 0x%02x\n",
- __func__, dma_hw_desc->opc);
- BUG();
- }
- case PPC440SPE_XOR_ID:
- /* Always only 1 destination */
- return 1;
- default:
- BUG();
- }
- return 0;
-}
-
/**
* ppc440spe_desc_get_link - get the address of the descriptor that
* follows this one
}
}
-static void ppc440spe_adma_unmap(struct ppc440spe_adma_chan *chan,
- struct ppc440spe_adma_desc_slot *desc)
-{
- u32 src_cnt, dst_cnt;
- dma_addr_t addr;
-
- /*
- * get the number of sources & destination
- * included in this descriptor and unmap
- * them all
- */
- src_cnt = ppc440spe_desc_get_src_num(desc, chan);
- dst_cnt = ppc440spe_desc_get_dst_num(desc, chan);
-
- /* unmap destinations */
- if (!(desc->async_tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- while (dst_cnt--) {
- addr = ppc440spe_desc_get_dest_addr(
- desc, chan, dst_cnt);
- dma_unmap_page(chan->device->dev,
- addr, desc->unmap_len,
- DMA_FROM_DEVICE);
- }
- }
-
- /* unmap sources */
- if (!(desc->async_tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- while (src_cnt--) {
- addr = ppc440spe_desc_get_src_addr(
- desc, chan, src_cnt);
- dma_unmap_page(chan->device->dev,
- addr, desc->unmap_len,
- DMA_TO_DEVICE);
- }
- }
-}
-
/**
* ppc440spe_adma_run_tx_complete_actions - call functions to be called
* upon completion
struct ppc440spe_adma_chan *chan,
dma_cookie_t cookie)
{
- int i;
-
BUG_ON(desc->async_tx.cookie < 0);
if (desc->async_tx.cookie > 0) {
cookie = desc->async_tx.cookie;
desc->async_tx.callback(
desc->async_tx.callback_param);
- /* unmap dma addresses
- * (unmap_single vs unmap_page?)
- *
- * actually, ppc's dma_unmap_page() functions are empty, so
- * the following code is just for the sake of completeness
- */
- if (chan && chan->needs_unmap && desc->group_head &&
- desc->unmap_len) {
- struct ppc440spe_adma_desc_slot *unmap =
- desc->group_head;
- /* assume 1 slot per op always */
- u32 slot_count = unmap->slot_cnt;
-
- /* Run through the group list and unmap addresses */
- for (i = 0; i < slot_count; i++) {
- BUG_ON(!unmap);
- ppc440spe_adma_unmap(chan, unmap);
- unmap = unmap->hw_next;
- }
- }
+ dma_descriptor_unmap(&desc->async_tx);
}
/* run dependent operations */
ppc440spe_chan = to_ppc440spe_adma_chan(chan);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
ppc440spe_adma_slot_cleanup(ppc440spe_chan);
ppc440spe_adma_prep_dma_interrupt;
}
pr_info("%s: AMCC(R) PPC440SP(E) ADMA Engine: "
- "( %s%s%s%s%s%s%s)\n",
+ "( %s%s%s%s%s%s)\n",
dev_name(adev->dev),
dma_has_cap(DMA_PQ, adev->common.cap_mask) ? "pq " : "",
dma_has_cap(DMA_PQ_VAL, adev->common.cap_mask) ? "pq_val " : "",
s3cchan->state = S3C24XX_DMA_CHAN_IDLE;
}
-static void s3c24xx_dma_unmap_buffers(struct s3c24xx_txd *txd)
-{
- struct device *dev = txd->vd.tx.chan->device->dev;
- struct s3c24xx_sg *dsg;
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- else {
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- }
- }
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- else
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- }
-}
-
static void s3c24xx_dma_desc_free(struct virt_dma_desc *vd)
{
struct s3c24xx_txd *txd = to_s3c24xx_txd(&vd->tx);
struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(vd->tx.chan);
if (!s3cchan->slave)
- s3c24xx_dma_unmap_buffers(txd);
+ dma_descriptor_unmap(&vd->tx);
s3c24xx_dma_free_txd(txd);
}
spin_lock_irqsave(&s3cchan->vc.lock, flags);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS) {
+ if (ret == DMA_COMPLETE) {
spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
return ret;
}
enum dma_status ret;
ret = dma_cookie_status(&c->vc.chan, cookie, state);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
if (!state)
#define HPB_DMAE_DSTPR_DMSTP BIT(0)
/* DMA status register (DSTSR) bits */
+#define HPB_DMAE_DSTSR_DQSTS BIT(2)
#define HPB_DMAE_DSTSR_DMSTS BIT(0)
/* DMA common registers */
ch_reg_write(chan, HPB_DMAE_DCMDR_DQEND, HPB_DMAE_DCMDR);
ch_reg_write(chan, HPB_DMAE_DSTPR_DMSTP, HPB_DMAE_DSTPR);
+
+ chan->plane_idx = 0;
+ chan->first_desc = true;
}
static const struct hpb_dmae_slave_config *
struct hpb_dmae_chan *chan = to_chan(schan);
u32 dstsr = ch_reg_read(chan, HPB_DMAE_DSTSR);
- return (dstsr & HPB_DMAE_DSTSR_DMSTS) == HPB_DMAE_DSTSR_DMSTS;
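+ /* Double-plane (XFER_DOUBLE) transfers report busy via the queue status bit. */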
+ if (chan->xfer_mode == XFER_DOUBLE)
+ return dstsr & HPB_DMAE_DSTSR_DQSTS;
+ else
+ return dstsr & HPB_DMAE_DSTSR_DMSTS;
}
static int
}
schan = &new_hpb_chan->shdma_chan;
+ schan->max_xfer_len = HPB_DMA_TCR_MAX;
+
shdma_chan_probe(sdev, schan, id);
if (pdev->id >= 0)
* If we don't find cookie on the queue, it has been aborted and we have
* to report error
*/
- if (status != DMA_SUCCESS) {
+ if (status != DMA_COMPLETE) {
struct shdma_desc *sdesc;
status = DMA_ERROR;
list_for_each_entry(sdesc, &schan->ld_queue, node)
static int sh_dmae_probe(struct platform_device *pdev)
{
const struct sh_dmae_pdata *pdata;
- unsigned long irqflags = IRQF_DISABLED,
+ unsigned long irqflags = 0,
chan_flag[SH_DMAE_MAX_CHANNELS] = {};
int errirq, chan_irq[SH_DMAE_MAX_CHANNELS];
int err, i, irq_cnt = 0, irqres = 0, irq_cap = 0;
IORESOURCE_IRQ_SHAREABLE)
chan_flag[irq_cnt] = IRQF_SHARED;
else
- chan_flag[irq_cnt] = IRQF_DISABLED;
+ chan_flag[irq_cnt] = 0;
dev_dbg(&pdev->dev,
"Found IRQ %d for channel %d\n",
i, irq_cnt);
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/delay.h>
+#include <linux/log2.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/err.h>
}
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret != DMA_SUCCESS)
+ if (ret != DMA_COMPLETE)
dma_set_residue(txstate, stedma40_residue(chan));
if (d40_is_paused(d40c))
src_addr_width > DMA_SLAVE_BUSWIDTH_8_BYTES ||
dst_addr_width <= DMA_SLAVE_BUSWIDTH_UNDEFINED ||
dst_addr_width > DMA_SLAVE_BUSWIDTH_8_BYTES ||
- ((src_addr_width > 1) && (src_addr_width & 1)) ||
- ((dst_addr_width > 1) && (dst_addr_width & 1)))
+ !is_power_of_2(src_addr_width) ||
+ !is_power_of_2(dst_addr_width))
return -EINVAL;
cfg->src_info.data_width = src_addr_width;
list_del(&sgreq->node);
if (sgreq->last_sg) {
- dma_desc->dma_status = DMA_SUCCESS;
+ dma_desc->dma_status = DMA_COMPLETE;
dma_cookie_complete(&dma_desc->txd);
if (!dma_desc->cb_count)
list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
unsigned int residual;
ret = dma_cookie_status(dc, cookie, txstate);
- if (ret == DMA_SUCCESS)
+ if (ret == DMA_COMPLETE)
return ret;
spin_lock_irqsave(&tdc->lock, flags);
return &dma_desc->txd;
}
-struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
+static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
struct dma_chan *dc, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
return done;
}
-static void __td_unmap_desc(struct timb_dma_chan *td_chan, const u8 *dma_desc,
- bool single)
-{
- dma_addr_t addr;
- int len;
-
- addr = (dma_desc[7] << 24) | (dma_desc[6] << 16) | (dma_desc[5] << 8) |
- dma_desc[4];
-
- len = (dma_desc[3] << 8) | dma_desc[2];
-
- if (single)
- dma_unmap_single(chan2dev(&td_chan->chan), addr, len,
- DMA_TO_DEVICE);
- else
- dma_unmap_page(chan2dev(&td_chan->chan), addr, len,
- DMA_TO_DEVICE);
-}
-
-static void __td_unmap_descs(struct timb_dma_desc *td_desc, bool single)
-{
- struct timb_dma_chan *td_chan = container_of(td_desc->txd.chan,
- struct timb_dma_chan, chan);
- u8 *descs;
-
- for (descs = td_desc->desc_list; ; descs += TIMB_DMA_DESC_SIZE) {
- __td_unmap_desc(td_chan, descs, single);
- if (descs[0] & 0x02)
- break;
- }
-}
-
static int td_fill_desc(struct timb_dma_chan *td_chan, u8 *dma_desc,
struct scatterlist *sg, bool last)
{
list_move(&td_desc->desc_node, &td_chan->free_list);
- if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP))
- __td_unmap_descs(td_desc,
- txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE);
-
+ dma_descriptor_unmap(txd);
/*
* The API requires that no submissions are done from a
* callback, so we don't need to drop the lock here
dma_async_tx_callback callback;
void *param;
struct dma_async_tx_descriptor *txd = &desc->txd;
- struct txx9dmac_slave *ds = dc->chan.private;
dev_vdbg(chan2dev(&dc->chan), "descriptor %u %p complete\n",
txd->cookie, desc);
list_splice_init(&desc->tx_list, &dc->free_list);
list_move(&desc->desc_node, &dc->free_list);
- if (!ds) {
- dma_addr_t dmaaddr;
- if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- dmaaddr = is_dmac64(dc) ?
- desc->hwdesc.DAR : desc->hwdesc32.DAR;
- if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- dma_unmap_single(chan2parent(&dc->chan),
- dmaaddr, desc->len, DMA_FROM_DEVICE);
- else
- dma_unmap_page(chan2parent(&dc->chan),
- dmaaddr, desc->len, DMA_FROM_DEVICE);
- }
- if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- dmaaddr = is_dmac64(dc) ?
- desc->hwdesc.SAR : desc->hwdesc32.SAR;
- if (txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- dma_unmap_single(chan2parent(&dc->chan),
- dmaaddr, desc->len, DMA_TO_DEVICE);
- else
- dma_unmap_page(chan2parent(&dc->chan),
- dmaaddr, desc->len, DMA_TO_DEVICE);
- }
- }
-
+ dma_descriptor_unmap(txd);
/*
* The API requires that no submissions are done from a
* callback, so we don't need to drop the lock here
enum dma_status ret;
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS)
- return DMA_SUCCESS;
+ if (ret == DMA_COMPLETE)
+ return DMA_COMPLETE;
spin_lock_bh(&dc->lock);
txx9dmac_scan_descriptors(dc);
u32 tad_offset;
u32 rir_way;
u32 mb, kb;
- u64 ch_addr, offset, limit, prv = 0;
+ u64 ch_addr, offset, limit = 0, prv = 0;
/*
static int arizona_extcon_probe(struct platform_device *pdev)
{
struct arizona *arizona = dev_get_drvdata(pdev->dev.parent);
- struct arizona_pdata *pdata;
+ struct arizona_pdata *pdata = &arizona->pdata;
struct arizona_extcon_info *info;
unsigned int val;
int jack_irq_fall, jack_irq_rise;
if (!arizona->dapm || !arizona->dapm->card)
return -EPROBE_DEFER;
- pdata = dev_get_platdata(arizona->dev);
-
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info) {
dev_err(&pdev->dev, "Failed to allocate memory\n");
return;
}
+ device_unregister(&edev->dev);
+
if (edev->mutually_exclusive && edev->max_supported) {
for (index = 0; edev->mutually_exclusive[index];
index++)
if (switch_class)
class_compat_remove_link(switch_class, &edev->dev, NULL);
#endif
- device_unregister(&edev->dev);
put_device(&edev->dev);
}
EXPORT_SYMBOL_GPL(extcon_dev_unregister);
obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
obj-$(CONFIG_EFI) += efi/
+obj-$(CONFIG_UEFI_CPER) += efi/
backend for pstore by default. This setting can be overridden
using the efivars module's pstore_disable parameter.
-config UEFI_CPER
- def_bool n
-
endmenu
+
+config UEFI_CPER
+ bool
#
# Makefile for linux kernel
#
-obj-y += efi.o vars.o
+obj-$(CONFIG_EFI) += efi.o vars.o
obj-$(CONFIG_EFI_VARS) += efivars.o
obj-$(CONFIG_EFI_VARS_PSTORE) += efi-pstore.o
obj-$(CONFIG_UEFI_CPER) += cper.o
static int efi_pstore_open(struct pstore_info *psi)
{
- efivar_entry_iter_begin();
psi->data = NULL;
return 0;
}
static int efi_pstore_close(struct pstore_info *psi)
{
- efivar_entry_iter_end();
psi->data = NULL;
return 0;
}
char **buf;
};
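+/*
+ * Pack timestamp, part and count into a single record id: count takes
+ * the three lowest decimal digits, part the next two, and the timestamp
+ * the rest. efi_pstore_erase() unpacks this again with do_div().
+ */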
+static inline u64 generic_id(unsigned long timestamp,
+ unsigned int part, int count)
+{
+ return (timestamp * 100 + part) * 1000 + count;
+}
+
static int efi_pstore_read_func(struct efivar_entry *entry, void *data)
{
efi_guid_t vendor = LINUX_EFI_CRASH_GUID;
if (sscanf(name, "dump-type%u-%u-%d-%lu-%c",
cb_data->type, &part, &cnt, &time, &data_type) == 5) {
- *cb_data->id = part;
+ *cb_data->id = generic_id(time, part, cnt);
*cb_data->count = cnt;
cb_data->timespec->tv_sec = time;
cb_data->timespec->tv_nsec = 0;
*cb_data->compressed = false;
} else if (sscanf(name, "dump-type%u-%u-%d-%lu",
cb_data->type, &part, &cnt, &time) == 4) {
- *cb_data->id = part;
+ *cb_data->id = generic_id(time, part, cnt);
*cb_data->count = cnt;
cb_data->timespec->tv_sec = time;
cb_data->timespec->tv_nsec = 0;
* which doesn't support holding
* multiple logs, remains.
*/
- *cb_data->id = part;
+ *cb_data->id = generic_id(time, part, 0);
*cb_data->count = 0;
cb_data->timespec->tv_sec = time;
cb_data->timespec->tv_nsec = 0;
__efivar_entry_get(entry, &entry->var.Attributes,
&entry->var.DataSize, entry->var.Data);
size = entry->var.DataSize;
+ memcpy(*cb_data->buf, entry->var.Data,
+ (size_t)min_t(unsigned long, EFIVARS_DATA_SIZE_MAX, size));
- *cb_data->buf = kmemdup(entry->var.Data, size, GFP_KERNEL);
- if (*cb_data->buf == NULL)
- return -ENOMEM;
return size;
}
+/**
+ * efi_pstore_scan_sysfs_enter
+ * @pos: scanning entry
+ * @next: next entry
+ * @head: list head
+ */
+static void efi_pstore_scan_sysfs_enter(struct efivar_entry *pos,
+ struct efivar_entry *next,
+ struct list_head *head)
+{
+ pos->scanning = true;
+ if (&next->list != head)
+ next->scanning = true;
+}
+
+/**
+ * __efi_pstore_scan_sysfs_exit
+ * @entry: deleting entry
+ * @turn_off_scanning: Check if a scanning flag should be turned off
+ */
+static inline void __efi_pstore_scan_sysfs_exit(struct efivar_entry *entry,
+ bool turn_off_scanning)
+{
+ if (entry->deleting) {
+ list_del(&entry->list);
+ efivar_entry_iter_end();
+ efivar_unregister(entry);
+ efivar_entry_iter_begin();
+ } else if (turn_off_scanning)
+ entry->scanning = false;
+}
+
+/**
+ * efi_pstore_scan_sysfs_exit
+ * @pos: scanning entry
+ * @next: next entry
+ * @head: list head
+ * @stop: a flag indicating whether scanning should stop
+ */
+static void efi_pstore_scan_sysfs_exit(struct efivar_entry *pos,
+ struct efivar_entry *next,
+ struct list_head *head, bool stop)
+{
+ __efi_pstore_scan_sysfs_exit(pos, true);
+ if (stop)
+ __efi_pstore_scan_sysfs_exit(next, &next->list != head);
+}
+
+/**
+ * efi_pstore_sysfs_entry_iter
+ *
+ * @data: function-specific data to pass to callback
+ * @pos: entry to begin iterating from
+ *
+ * You MUST call efivar_entry_iter_begin() before this function, and
+ * efivar_entry_iter_end() afterwards.
+ *
+ * It is possible to begin iteration from an arbitrary entry within
+ * the list by passing @pos. @pos is updated on return to point to
+ * the entry following the last one passed to efi_pstore_read_func().
+ * To begin iterating from the beginning of the list @pos must be %NULL.
+ */
+static int efi_pstore_sysfs_entry_iter(void *data, struct efivar_entry **pos)
+{
+ struct efivar_entry *entry, *n;
+ struct list_head *head = &efivar_sysfs_list;
+ int size = 0;
+
+ if (!*pos) {
+ list_for_each_entry_safe(entry, n, head, list) {
+ efi_pstore_scan_sysfs_enter(entry, n, head);
+
+ size = efi_pstore_read_func(entry, data);
+ efi_pstore_scan_sysfs_exit(entry, n, head, size < 0);
+ if (size)
+ break;
+ }
+ *pos = n;
+ return size;
+ }
+
+ list_for_each_entry_safe_from((*pos), n, head, list) {
+ efi_pstore_scan_sysfs_enter((*pos), n, head);
+
+ size = efi_pstore_read_func((*pos), data);
+ efi_pstore_scan_sysfs_exit((*pos), n, head, size < 0);
+ if (size)
+ break;
+ }
+ *pos = n;
+ return size;
+}
+
+/**
+ * efi_pstore_read
+ *
+ * This function returns the size of an NVRAM entry logged via
+ * efi_pstore_write(). The meaning and behavior of efi_pstore/pstore
+ * are as follows.
+ *
+ * size > 0: Got the data of an entry logged via efi_pstore_write()
+ * successfully, and the pstore filesystem will continue reading
+ * subsequent entries.
+ * size == 0: Entry was not logged via efi_pstore_write(), and the
+ * efi_pstore driver will continue reading subsequent entries.
+ * size < 0: Failed to get the data of an entry logged via
+ * efi_pstore_write(), and pstore will stop reading entries.
+ */
static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type,
int *count, struct timespec *timespec,
char **buf, bool *compressed,
struct pstore_info *psi)
{
struct pstore_read_data data;
+ ssize_t size;
data.id = id;
data.type = type;
data.compressed = compressed;
data.buf = buf;
- return __efivar_entry_iter(efi_pstore_read_func, &efivar_sysfs_list, &data,
- (struct efivar_entry **)&psi->data);
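+ /* Preallocate the largest possible buffer; freed below if nothing is read. */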
+ *data.buf = kzalloc(EFIVARS_DATA_SIZE_MAX, GFP_KERNEL);
+ if (!*data.buf)
+ return -ENOMEM;
+
+ efivar_entry_iter_begin();
+ size = efi_pstore_sysfs_entry_iter(&data,
+ (struct efivar_entry **)&psi->data);
+ efivar_entry_iter_end();
+ if (size <= 0)
+ kfree(*data.buf);
+ return size;
}
static int efi_pstore_write(enum pstore_type_id type,
return 0;
}
+ if (entry->scanning) {
+ /*
+ * Skip deletion because this entry will be deleted
+ * after scanning is completed.
+ */
+ entry->deleting = true;
+ } else
+ list_del(&entry->list);
+
/* found */
__efivar_entry_delete(entry);
- list_del(&entry->list);
return 1;
}
char name[DUMP_NAME_LEN];
efi_char16_t efi_name[DUMP_NAME_LEN];
int found, i;
+ unsigned int part;
- sprintf(name, "dump-type%u-%u-%d-%lu", type, (unsigned int)id, count,
- time.tv_sec);
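+ /* Undo generic_id(): drop the count digits, then extract the part number. */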
+ do_div(id, 1000);
+ part = do_div(id, 100);
+ sprintf(name, "dump-type%u-%u-%d-%lu", type, part, count, time.tv_sec);
for (i = 0; i < DUMP_NAME_LEN; i++)
efi_name[i] = name[i];
- edata.id = id;
+ edata.id = part;
edata.type = type;
edata.count = count;
edata.time = time;
efivar_entry_iter_begin();
found = __efivar_entry_iter(efi_pstore_erase_func, &efivar_sysfs_list, &edata, &entry);
- efivar_entry_iter_end();
- if (found)
+ if (found && !entry->scanning) {
+ efivar_entry_iter_end();
efivar_unregister(entry);
+ } else
+ efivar_entry_iter_end();
return 0;
}
static struct pstore_info efi_pstore_info = {
.owner = THIS_MODULE,
.name = "efi",
+ .flags = PSTORE_FLAGS_FRAGILE,
.open = efi_pstore_open,
.close = efi_pstore_close,
.read = efi_pstore_read,
else if (__efivar_entry_delete(entry))
err = -EIO;
- efivar_entry_iter_end();
-
- if (err)
+ if (err) {
+ efivar_entry_iter_end();
return err;
+ }
- efivar_unregister(entry);
+ if (!entry->scanning) {
+ efivar_entry_iter_end();
+ efivar_unregister(entry);
+ } else
+ efivar_entry_iter_end();
/* It's dead Jim.... */
return count;
if (!found)
return NULL;
- if (remove)
- list_del(&entry->list);
+ if (remove) {
+ if (entry->scanning) {
+ /*
+ * The entry will be deleted
+ * after scanning is completed.
+ */
+ entry->deleting = true;
+ } else
+ list_del(&entry->list);
+ }
return entry;
}
spin_unlock_irqrestore(&kona_gpio->lock, flags);
/* return the specified bit status */
- return !!(val & bit);
+ return !!(val & BIT(bit));
}
static int bcm_kona_gpio_direction_input(struct gpio_chip *chip, unsigned gpio)
* NOTE: we assume for now that only irqs in the first gpio_chip
* can provide direct-mapped IRQs to AINTC (up to 32 GPIOs).
*/
- if (offset < d->irq_base)
+ if (offset < d->gpio_unbanked)
return d->gpio_irq + offset;
else
return -ENODEV;
/* pass "bank 0" GPIO IRQs to AINTC */
chips[0].chip.to_irq = gpio_to_irq_unbanked;
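+ /* gpio_to_irq_unbanked() needs the base IRQ and the unbanked GPIO count. */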
+ chips[0].gpio_irq = bank_irq;
+ chips[0].gpio_unbanked = pdata->gpio_unbanked;
binten = BIT(0);
/* AINTC handles mask/unmask; GPIO handles triggering */
u32 val;
struct of_mm_gpio_chip *mm = to_of_mm_gpio_chip(gc);
struct mpc8xxx_gpio_chip *mpc8xxx_gc = to_mpc8xxx_gpio_chip(mm);
+ u32 out_mask, out_shadow;
- val = in_be32(mm->regs + GPIO_DAT) & ~in_be32(mm->regs + GPIO_DIR);
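+ /* Read input pins from GPIO_DAT; output pins come from the shadow copy. */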
+ out_mask = in_be32(mm->regs + GPIO_DIR);
- return (val | mpc8xxx_gc->data) & mpc8xxx_gpio2mask(gpio);
+ val = in_be32(mm->regs + GPIO_DAT) & ~out_mask;
+ out_shadow = mpc8xxx_gc->data & out_mask;
+
+ return (val | out_shadow) & mpc8xxx_gpio2mask(gpio);
}
static int mpc8xxx_gpio_get(struct gpio_chip *gc, unsigned int gpio)
DECLARE_BITMAP(wake_irqs, MAX_NR_GPIO);
DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO);
struct irq_domain *domain;
- unsigned int summary_irq;
+ int summary_irq;
void __iomem *msm_tlmm_base;
};
spin_lock_irqsave(&tlmm_lock, irq_flags);
writel(TARGET_PROC_NONE, GPIO_INTR_CFG_SU(gpio));
- clear_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio));
+ clear_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio));
__clear_bit(gpio, msm_gpio.enabled_irqs);
spin_unlock_irqrestore(&tlmm_lock, irq_flags);
}
spin_lock_irqsave(&tlmm_lock, irq_flags);
__set_bit(gpio, msm_gpio.enabled_irqs);
- set_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio));
+ set_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio));
writel(TARGET_PROC_SCORPION, GPIO_INTR_CFG_SU(gpio));
spin_unlock_irqrestore(&tlmm_lock, irq_flags);
}
spinlock_t lock;
void __iomem *membase;
void __iomem *percpu_membase;
- unsigned int irqbase;
+ int irqbase;
struct irq_domain *domain;
int soc_variant;
};
if (!chip->base)
return -ENOMEM;
- chip->domain = irq_domain_add_simple(adev->dev.of_node, PL061_GPIO_NR,
- irq_base, &pl061_domain_ops, chip);
- if (!chip->domain)
- return -ENODEV;
-
spin_lock_init(&chip->lock);
chip->gc.request = pl061_gpio_request;
irq_set_chained_handler(irq, pl061_irq_handler);
irq_set_handler_data(irq, chip);
+ chip->domain = irq_domain_add_simple(adev->dev.of_node, PL061_GPIO_NR,
+ irq_base, &pl061_domain_ops, chip);
+ if (!chip->domain)
+ return -ENODEV;
+
for (i = 0; i < PL061_GPIO_NR; i++) {
if (pdata) {
if (pdata->directions & (1 << i))
u32 pending;
unsigned int offset, irqs_handled = 0;
- while ((pending = gpio_rcar_read(p, INTDT))) {
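+ /* Only handle interrupts that are both pending and unmasked. */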
+ while ((pending = gpio_rcar_read(p, INTDT) &
+ gpio_rcar_read(p, INTMSK))) {
offset = __ffs(pending);
gpio_rcar_write(p, INTCLR, BIT(offset));
generic_handle_irq(irq_find_mapping(p->irq_domain, offset));
if (!p->irq_domain) {
ret = -ENXIO;
dev_err(&pdev->dev, "cannot initialize irq domain\n");
- goto err1;
+ goto err0;
}
if (devm_request_irq(&pdev->dev, irq->start,
int mask = BIT(offset);
int val = TB10X_GPIO_DIR_OUT << offset;
+ tb10x_gpio_set(chip, offset, value);
tb10x_set_bits(tb10x_gpio, OFFSET_TO_REG_DDR, mask, val);
return 0;
if (offset < TWL4030_GPIO_MAX)
ret = twl4030_set_gpio_direction(offset, 1);
else
- ret = -EINVAL;
+ ret = -EINVAL; /* LED outputs can't be set as input */
if (!ret)
priv->direction &= ~BIT(offset);
static int twl_direction_out(struct gpio_chip *chip, unsigned offset, int value)
{
struct gpio_twl4030_priv *priv = to_gpio_twl4030(chip);
+ int ret = 0;
mutex_lock(&priv->mutex);
- if (offset < TWL4030_GPIO_MAX)
- twl4030_set_gpio_dataout(offset, value);
+ if (offset < TWL4030_GPIO_MAX) {
+ ret = twl4030_set_gpio_direction(offset, 0);
+ if (ret) {
+ mutex_unlock(&priv->mutex);
+ return ret;
+ }
+ }
+
+ /*
+ * LED GPIOs, i.e. offset >= TWL4030_GPIO_MAX, are always outputs
+ */
priv->direction |= BIT(offset);
mutex_unlock(&priv->mutex);
twl_set(chip, offset, value);
- return 0;
+ return ret;
}
static int twl_to_irq(struct gpio_chip *chip, unsigned offset)
static int gpio_twl4030_remove(struct platform_device *pdev);
-static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev)
+static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev,
+ struct twl4030_gpio_platform_data *pdata)
{
struct twl4030_gpio_platform_data *omap_twl_info;
if (!omap_twl_info)
return NULL;
+ if (pdata)
+ *omap_twl_info = *pdata;
+
omap_twl_info->use_leds = of_property_read_bool(dev->of_node,
"ti,use-leds");
mutex_init(&priv->mutex);
if (node)
- pdata = of_gpio_twl4030(&pdev->dev);
+ pdata = of_gpio_twl4030(&pdev->dev, pdata);
if (pdata == NULL) {
dev_err(&pdev->dev, "Platform data is missing\n");
MODULE_DESCRIPTION("Philips UCB1400 GPIO driver");
MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:ucb1400_gpio");
#include <linux/acpi_gpio.h>
#include <linux/idr.h>
#include <linux/slab.h>
+#include <linux/acpi.h>
+#include <linux/gpio/driver.h>
#define CREATE_TRACE_POINTS
#include <trace/events/gpio.h>
}
EXPORT_SYMBOL_GPL(gpiochip_find);
+static int gpiochip_match_name(struct gpio_chip *chip, void *data)
+{
+ const char *name = data;
+
+ return !strcmp(chip->label, name);
+}
+
+static struct gpio_chip *find_chip_by_name(const char *name)
+{
+ return gpiochip_find((void *)name, gpiochip_match_name);
+}
+
#ifdef CONFIG_PINCTRL
/**
ret = pinctrl_get_group_pins(pctldev, pin_group,
&pin_range->range.pins,
&pin_range->range.npins);
- if (ret < 0)
+ if (ret < 0) {
+ kfree(pin_range);
return ret;
+ }
pinctrl_add_gpio_range(pctldev, &pin_range->range);
mutex_unlock(&gpio_lookup_lock);
}
-/*
- * Caller must have a acquired gpio_lookup_lock
- */
-static struct gpio_chip *find_chip_by_name(const char *name)
-{
- struct gpio_chip *chip = NULL;
-
- list_for_each_entry(chip, &gpio_lookup_list, list) {
- if (chip->label == NULL)
- continue;
- if (!strcmp(chip->label, name))
- break;
- }
-
- return chip;
-}
-
#ifdef CONFIG_OF
static struct gpio_desc *of_find_gpio(struct device *dev, const char *con_id,
- unsigned int idx, unsigned long *flags)
+ unsigned int idx,
+ enum gpio_lookup_flags *flags)
{
char prop_name[32]; /* 32 is max size of property name */
enum of_gpio_flags of_flags;
return desc;
if (of_flags & OF_GPIO_ACTIVE_LOW)
- *flags |= GPIOF_ACTIVE_LOW;
+ *flags |= GPIO_ACTIVE_LOW;
return desc;
}
#else
static struct gpio_desc *of_find_gpio(struct device *dev, const char *con_id,
- unsigned int idx, unsigned long *flags)
+ unsigned int idx,
+ enum gpio_lookup_flags *flags)
{
return ERR_PTR(-ENODEV);
}
#endif
static struct gpio_desc *acpi_find_gpio(struct device *dev, const char *con_id,
- unsigned int idx, unsigned long *flags)
+ unsigned int idx,
+ enum gpio_lookup_flags *flags)
{
struct acpi_gpio_info info;
struct gpio_desc *desc;
return desc;
if (info.gpioint && info.active_low)
- *flags |= GPIOF_ACTIVE_LOW;
+ *flags |= GPIO_ACTIVE_LOW;
return desc;
}
static struct gpio_desc *gpiod_find(struct device *dev, const char *con_id,
- unsigned int idx, unsigned long *flags)
+ unsigned int idx,
+ enum gpio_lookup_flags *flags)
{
const char *dev_id = dev ? dev_name(dev) : NULL;
struct gpio_desc *desc = ERR_PTR(-ENODEV);
continue;
}
- if (chip->ngpio >= p->chip_hwnum) {
+ if (chip->ngpio <= p->chip_hwnum) {
dev_warn(dev, "GPIO chip %s has %d GPIOs\n",
chip->label, chip->ngpio);
continue;
const char *con_id,
unsigned int idx)
{
- struct gpio_desc *desc;
+ struct gpio_desc *desc = NULL;
int status;
- unsigned long flags = 0;
+ enum gpio_lookup_flags flags = 0;
dev_dbg(dev, "GPIO lookup for consumer %s\n", con_id);
} else if (IS_ENABLED(CONFIG_ACPI) && dev && ACPI_HANDLE(dev)) {
dev_dbg(dev, "using ACPI for GPIO lookup\n");
desc = acpi_find_gpio(dev, con_id, idx, &flags);
- } else {
+ }
+
+ /*
+ * Either we are not using DT or ACPI, or their lookup did not return
+ * a result. In that case, use platform lookup as a fallback.
+ */
+ if (!desc || IS_ERR(desc)) {
+ struct gpio_desc *pdesc;
dev_dbg(dev, "using lookup tables for GPIO lookup");
- desc = gpiod_find(dev, con_id, idx, &flags);
+ pdesc = gpiod_find(dev, con_id, idx, &flags);
+ /* If used as fallback, do not replace the previous error */
+ if (!IS_ERR(pdesc) || !desc)
+ desc = pdesc;
}
if (IS_ERR(desc)) {
- dev_warn(dev, "lookup for GPIO %s failed\n", con_id);
+ dev_dbg(dev, "lookup for GPIO %s failed\n", con_id);
return desc;
}
if (status < 0)
return ERR_PTR(status);
- if (flags & GPIOF_ACTIVE_LOW)
+ if (flags & GPIO_ACTIVE_LOW)
set_bit(FLAG_ACTIVE_LOW, &desc->flags);
+ if (flags & GPIO_OPEN_DRAIN)
+ set_bit(FLAG_OPEN_DRAIN, &desc->flags);
+ if (flags & GPIO_OPEN_SOURCE)
+ set_bit(FLAG_OPEN_SOURCE, &desc->flags);
return desc;
}
extern const struct drm_mode_config_funcs armada_drm_mode_config_funcs;
int armada_fbdev_init(struct drm_device *);
+void armada_fbdev_lastclose(struct drm_device *);
void armada_fbdev_fini(struct drm_device *);
int armada_overlay_plane_create(struct drm_device *, unsigned long);
DRM_UNLOCKED),
};
+static void armada_drm_lastclose(struct drm_device *dev)
+{
+ armada_fbdev_lastclose(dev);
+}
+
static const struct file_operations armada_drm_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = NULL,
.preclose = NULL,
.postclose = NULL,
- .lastclose = NULL,
+ .lastclose = armada_drm_lastclose,
.unload = armada_drm_unload,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = armada_drm_enable_vblank,
drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth);
drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height);
- DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08x\n",
- dfb->fb.width, dfb->fb.height,
- dfb->fb.bits_per_pixel, obj->phys_addr);
+ DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n",
+ dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel,
+ (unsigned long long)obj->phys_addr);
return 0;
return ret;
}
+void armada_fbdev_lastclose(struct drm_device *dev)
+{
+ struct armada_private *priv = dev->dev_private;
+
+ drm_modeset_lock_all(dev);
+ if (priv->fbdev)
+ drm_fb_helper_restore_fbdev_mode(priv->fbdev);
+ drm_modeset_unlock_all(dev);
+}
+
void armada_fbdev_fini(struct drm_device *dev)
{
struct armada_private *priv = dev->dev_private;
framebuffer_release(info);
}
+ drm_fb_helper_fini(fbh);
+
if (fbh->fb)
fbh->fb->funcs->destroy(fbh->fb);
- drm_fb_helper_fini(fbh);
-
priv->fbdev = NULL;
}
}
obj->dev_addr = obj->linear->start;
}
- DRM_DEBUG_DRIVER("obj %p phys %#x dev %#x\n",
- obj, obj->phys_addr, obj->dev_addr);
+ DRM_DEBUG_DRIVER("obj %p phys %#llx dev %#llx\n", obj,
+ (unsigned long long)obj->phys_addr,
+ (unsigned long long)obj->dev_addr);
return 0;
}
* refcount on the gem object itself.
*/
drm_gem_object_reference(obj);
- dma_buf_put(buf);
return obj;
}
}
}
dobj->obj.import_attach = attach;
+ get_dma_buf(buf);
/*
* Don't call dma_buf_map_attachment() here - it maps the
#define EDID_QUIRK_DETAILED_SYNC_PP (1 << 6)
/* Force reduced-blanking timings for detailed modes */
#define EDID_QUIRK_FORCE_REDUCED_BLANKING (1 << 7)
+/* Force 8bpc */
+#define EDID_QUIRK_FORCE_8BPC (1 << 8)
struct detailed_mode_closure {
struct drm_connector *connector;
/* Medion MD 30217 PG */
{ "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 },
+
+ /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+ { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },
};
/*
int modes = 0;
u8 cea_mode;
- if (video_db == NULL || video_index > video_len)
+ if (video_db == NULL || video_index >= video_len)
return 0;
/* CEA modes are numbered 1..127 */
if (structure & (1 << 8)) {
newmode = drm_mode_duplicate(dev, &edid_cea_modes[cea_mode]);
if (newmode) {
- newmode->flags = DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF;
+ newmode->flags |= DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF;
drm_mode_probed_add(connector, newmode);
modes++;
}
drm_add_display_info(edid, &connector->display_info);
+ if (quirks & EDID_QUIRK_FORCE_8BPC)
+ connector->display_info.bpc = 8;
+
return num_modes;
}
EXPORT_SYMBOL(drm_add_edid_modes);
if (dev->driver->unload)
dev->driver->unload(dev);
err_primary_node:
- drm_put_minor(dev->primary);
+ drm_unplug_minor(dev->primary);
err_render_node:
- drm_put_minor(dev->render);
+ drm_unplug_minor(dev->render);
err_control_node:
- drm_put_minor(dev->control);
+ drm_unplug_minor(dev->control);
err_agp:
if (dev->driver->bus->agp_destroy)
dev->driver->bus->agp_destroy(dev);
}
EXPORT_SYMBOL(drm_sysfs_hotplug_event);
+static void drm_sysfs_release(struct device *dev)
+{
+ kfree(dev);
+}
+
/**
* drm_sysfs_device_add - adds a class device to sysfs for a character driver
* @dev: DRM device to be added
int drm_sysfs_device_add(struct drm_minor *minor)
{
char *minor_str;
+ int r;
if (minor->type == DRM_MINOR_CONTROL)
minor_str = "controlD%d";
else
minor_str = "card%d";
- minor->kdev = device_create(drm_class, minor->dev->dev,
- MKDEV(DRM_MAJOR, minor->index),
- minor, minor_str, minor->index);
- if (IS_ERR(minor->kdev)) {
- DRM_ERROR("device create failed %ld\n", PTR_ERR(minor->kdev));
- return PTR_ERR(minor->kdev);
+ minor->kdev = kzalloc(sizeof(*minor->kdev), GFP_KERNEL);
+ if (!minor->kdev) {
+ r = -ENOMEM;
+ goto error;
}
+
+ device_initialize(minor->kdev);
+ minor->kdev->devt = MKDEV(DRM_MAJOR, minor->index);
+ minor->kdev->class = drm_class;
+ minor->kdev->type = &drm_sysfs_device_minor;
+ minor->kdev->parent = minor->dev->dev;
+ minor->kdev->release = drm_sysfs_release;
+ dev_set_drvdata(minor->kdev, minor);
+
+ r = dev_set_name(minor->kdev, minor_str, minor->index);
+ if (r < 0)
+ goto error;
+
+ r = device_add(minor->kdev);
+ if (r < 0)
+ goto error;
+
return 0;
+
+error:
+ DRM_ERROR("device create failed %d\n", r);
+ put_device(minor->kdev);
+ return r;
}
/**
void drm_sysfs_device_remove(struct drm_minor *minor)
{
if (minor->kdev)
- device_destroy(drm_class, MKDEV(DRM_MAJOR, minor->index));
+ device_unregister(minor->kdev);
minor->kdev = NULL;
}
static void exynos_drm_preclose(struct drm_device *dev,
struct drm_file *file)
+{
+ exynos_drm_subdrv_close(dev, file);
+}
+
+static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
{
struct exynos_drm_private *private = dev->dev_private;
- struct drm_pending_vblank_event *e, *t;
+ struct drm_pending_vblank_event *v, *vt;
+ struct drm_pending_event *e, *et;
unsigned long flags;
- /* release events of current file */
+ if (!file->driver_priv)
+ return;
+
+ /* Release all events not handled by the page flip handler. */
spin_lock_irqsave(&dev->event_lock, flags);
- list_for_each_entry_safe(e, t, &private->pageflip_event_list,
+ list_for_each_entry_safe(v, vt, &private->pageflip_event_list,
base.link) {
- if (e->base.file_priv == file) {
- list_del(&e->base.link);
- e->base.destroy(&e->base);
+ if (v->base.file_priv == file) {
+ list_del(&v->base.link);
+ drm_vblank_put(dev, v->pipe);
+ v->base.destroy(&v->base);
}
}
- spin_unlock_irqrestore(&dev->event_lock, flags);
- exynos_drm_subdrv_close(dev, file);
-}
+ /* Release all events handled by page flip handler but not freed. */
+ list_for_each_entry_safe(e, et, &file->event_list, link) {
+ list_del(&e->link);
+ e->destroy(e);
+ }
+ spin_unlock_irqrestore(&dev->event_lock, flags);
-static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
-{
- if (!file->driver_priv)
- return;
kfree(file->driver_priv);
file->driver_priv = NULL;
#include "exynos_drm_iommu.h"
/*
- * FIMD is stand for Fully Interactive Mobile Display and
+ * FIMD stands for Fully Interactive Mobile Display and
* as a display controller, it transfers contents drawn on memory
* to a LCD Panel through Display Interfaces such as RGB or
* CPU Interface.
g2d_userptr->npages,
g2d_userptr->vma);
+ exynos_gem_put_vma(g2d_userptr->vma);
+
if (!g2d_userptr->out_of_list)
list_del_init(&g2d_userptr->list);
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_master_private *master_priv;
+ /*
+ * The DRI breadcrumb update races against the DRM master disappearing.
+ * Instead of trying to fix this (it is far from the only UMS issue),
+ * just don't do the update in KMS mode.
+ */
+ if (drm_core_check_feature(dev, DRIVER_MODESET))
+ return;
+
if (dev->primary->master) {
master_priv = dev->primary->master->driver_priv;
if (master_priv->sarea_priv)
spin_lock_init(&dev_priv->uncore.lock);
spin_lock_init(&dev_priv->mm.object_stat_lock);
mutex_init(&dev_priv->dpio_lock);
- mutex_init(&dev_priv->rps.hw_lock);
mutex_init(&dev_priv->modeset_restore_lock);
- mutex_init(&dev_priv->pc8.lock);
- dev_priv->pc8.requirements_met = false;
- dev_priv->pc8.gpu_idle = false;
- dev_priv->pc8.irqs_disabled = false;
- dev_priv->pc8.enabled = false;
- dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */
- INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work);
+ intel_pm_setup(dev);
intel_display_crc_init(dev);
}
intel_irq_init(dev);
- intel_pm_init(dev);
intel_uncore_sanitize(dev);
/* Try to make sure MCHBAR is enabled before poking at it */
void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv)
{
+ mutex_lock(&dev->struct_mutex);
i915_gem_context_close(dev, file_priv);
i915_gem_release(dev, file_priv);
+ mutex_unlock(&dev->struct_mutex);
}
void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
* Disable CRTCs directly since we want to preserve sw state
* for _thaw.
*/
+ mutex_lock(&dev->mode_config.mutex);
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
dev_priv->display.crtc_disable(crtc);
+ mutex_unlock(&dev->mode_config.mutex);
intel_modeset_suspend_hw(dev);
}
intel_modeset_init_hw(dev);
drm_modeset_lock_all(dev);
+ drm_mode_config_reset(dev);
intel_modeset_setup_hw_state(dev, true);
drm_modeset_unlock_all(dev);
#define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile)
#define IS_HSW_EARLY_SDV(dev) (IS_HASWELL(dev) && \
((dev)->pdev->device & 0xFF00) == 0x0C00)
-#define IS_ULT(dev) (IS_HASWELL(dev) && \
+#define IS_BDW_ULT(dev) (IS_BROADWELL(dev) && \
+ (((dev)->pdev->device & 0xf) == 0x2 || \
+ ((dev)->pdev->device & 0xf) == 0x6 || \
+ ((dev)->pdev->device & 0xf) == 0xe))
+#define IS_HSW_ULT(dev) (IS_HASWELL(dev) && \
((dev)->pdev->device & 0xFF00) == 0x0A00)
+#define IS_ULT(dev) (IS_HSW_ULT(dev) || IS_BDW_ULT(dev))
#define IS_HSW_GT3(dev) (IS_HASWELL(dev) && \
((dev)->pdev->device & 0x00F0) == 0x0020)
#define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary)
#define HAS_POWER_WELL(dev) (IS_HASWELL(dev) || IS_BROADWELL(dev))
#define HAS_FPGA_DBG_UNCLAIMED(dev) (INTEL_INFO(dev)->has_fpga_dbg)
#define HAS_PSR(dev) (IS_HASWELL(dev) || IS_BROADWELL(dev))
+#define HAS_PC8(dev) (IS_HASWELL(dev)) /* XXX HSW:ULX */
#define INTEL_PCH_DEVICE_ID_MASK 0xff00
#define INTEL_PCH_IBX_DEVICE_ID_TYPE 0x3b00
void i915_handle_error(struct drm_device *dev, bool wedged);
extern void intel_irq_init(struct drm_device *dev);
-extern void intel_pm_init(struct drm_device *dev);
extern void intel_hpd_init(struct drm_device *dev);
-extern void intel_pm_init(struct drm_device *dev);
extern void intel_uncore_sanitize(struct drm_device *dev);
extern void intel_uncore_early_sanitize(struct drm_device *dev);
kfree(request);
}
-static void i915_gem_reset_ring_lists(struct drm_i915_private *dev_priv,
- struct intel_ring_buffer *ring)
+static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
+ struct intel_ring_buffer *ring)
{
- u32 completed_seqno;
- u32 acthd;
+ u32 completed_seqno = ring->get_seqno(ring, false);
+ u32 acthd = intel_ring_get_active_head(ring);
+ struct drm_i915_gem_request *request;
+
+ list_for_each_entry(request, &ring->request_list, list) {
+ if (i915_seqno_passed(completed_seqno, request->seqno))
+ continue;
- acthd = intel_ring_get_active_head(ring);
- completed_seqno = ring->get_seqno(ring, false);
+ i915_set_reset_status(ring, request, acthd);
+ }
+}
+static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
+ struct intel_ring_buffer *ring)
+{
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
struct drm_i915_gem_request,
list);
- if (request->seqno > completed_seqno)
- i915_set_reset_status(ring, request, acthd);
-
i915_gem_free_request(request);
}
struct intel_ring_buffer *ring;
int i;
+ /*
+ * Before we free the objects from the requests, we need to inspect
+ * Before we free the objects from the requests, we need to inspect
+ * them to find the guilty party. As the requests only borrow
+ */
+ for_each_ring(ring, dev_priv, i)
+ i915_gem_reset_ring_status(dev_priv, ring);
+
for_each_ring(ring, dev_priv, i)
- i915_gem_reset_ring_lists(dev_priv, ring);
+ i915_gem_reset_ring_cleanup(dev_priv, ring);
i915_gem_cleanup_ringbuffer(dev);
if (dev_priv->ellc_size)
I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf));
- if (IS_HSW_GT3(dev))
- I915_WRITE(MI_PREDICATE_RESULT_2, LOWER_SLICE_ENABLED);
- else
- I915_WRITE(MI_PREDICATE_RESULT_2, LOWER_SLICE_DISABLED);
+ if (IS_HASWELL(dev))
+ I915_WRITE(MI_PREDICATE_RESULT_2, IS_HSW_GT3(dev) ?
+ LOWER_SLICE_ENABLED : LOWER_SLICE_DISABLED);
if (HAS_PCH_NOP(dev)) {
u32 temp = I915_READ(GEN7_MSG_CTL);
{
struct drm_i915_file_private *file_priv = file->driver_priv;
- mutex_lock(&dev->struct_mutex);
idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL);
idr_destroy(&file_priv->context_idr);
- mutex_unlock(&dev->struct_mutex);
}
static struct i915_hw_context *
if (ret)
return ret;
- /* Clear this page out of any CPU caches for coherent swap-in/out. Note
+ /*
+ * Pin can switch back to the default context if we end up calling into
+ * evict_everything - as a last-ditch GTT defrag effort that also
+ * switches to the default context. Hence we need to reload from here.
+ */
+ from = ring->last_context;
+
+ /*
+ * Clear this page out of any CPU caches for coherent swap-in/out. Note
* that thanks to write = false in this call and us not setting any gpu
* write domains when putting a context object onto the active list
* (when switching away from it), this won't block.
- * XXX: We need a real interface to do this instead of trickery. */
+ *
+ * XXX: We need a real interface to do this instead of trickery.
+ */
ret = i915_gem_object_set_to_gtt_domain(to->obj, false);
if (ret) {
i915_gem_object_unpin(to->obj);
ret = i915_gem_object_get_pages(obj);
if (ret)
- goto error;
+ goto err;
+
+ i915_gem_object_pin_pages(obj);
ret = -ENOMEM;
pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
if (pages == NULL)
- goto error;
+ goto err_unpin;
i = 0;
for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
drm_free_large(pages);
if (!obj->dma_buf_vmapping)
- goto error;
+ goto err_unpin;
obj->vmapping_count = 1;
- i915_gem_object_pin_pages(obj);
out_unlock:
mutex_unlock(&dev->struct_mutex);
return obj->dma_buf_vmapping;
-error:
+err_unpin:
+ i915_gem_object_unpin_pages(obj);
+err:
mutex_unlock(&dev->struct_mutex);
return ERR_PTR(ret);
}
} else
drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level);
+search_again:
/* First see if there is a large enough contiguous idle region... */
list_for_each_entry(vma, &vm->inactive_list, mm_list) {
if (mark_free(vma, &unwind_list))
list_del_init(&vma->exec_list);
}
- /* We expect the caller to unpin, evict all and try again, or give up.
- * So calling i915_gem_evict_vm() is unnecessary.
+ /* Can we unpin some objects such as idle hw contexts,
+ * or pending flips?
*/
- return -ENOSPC;
+ ret = nonblocking ? -ENOSPC : i915_gpu_idle(dev);
+ if (ret)
+ return ret;
+
+ /* Only idle the GPU and repeat the search once */
+ i915_gem_retire_requests(dev);
+ nonblocking = true;
+ goto search_again;
found:
/* drm_mm doesn't allow any other other operations while
#include "intel_drv.h"
#include <linux/dma_remapping.h>
+#define __EXEC_OBJECT_HAS_PIN (1<<31)
+#define __EXEC_OBJECT_HAS_FENCE (1<<30)
+
struct eb_vmas {
struct list_head vmas;
int and;
{
struct drm_i915_gem_object *obj;
struct list_head objects;
- int i, ret = 0;
+ int i, ret;
INIT_LIST_HEAD(&objects);
spin_lock(&file->table_lock);
DRM_DEBUG("Invalid object handle %d at index %d\n",
exec[i].handle, i);
ret = -ENOENT;
- goto out;
+ goto err;
}
if (!list_empty(&obj->obj_exec_link)) {
DRM_DEBUG("Object %p [handle %d, index %d] appears more than once in object list\n",
obj, exec[i].handle, i);
ret = -EINVAL;
- goto out;
+ goto err;
}
drm_gem_object_reference(&obj->base);
spin_unlock(&file->table_lock);
i = 0;
- list_for_each_entry(obj, &objects, obj_exec_link) {
+ while (!list_empty(&objects)) {
struct i915_vma *vma;
+ obj = list_first_entry(&objects,
+ struct drm_i915_gem_object,
+ obj_exec_link);
+
/*
* NOTE: We can leak any vmas created here when something fails
* later on. But that's no issue since vma_unbind can deal with
if (IS_ERR(vma)) {
DRM_DEBUG("Failed to lookup VMA\n");
ret = PTR_ERR(vma);
- goto out;
+ goto err;
}
+ /* Transfer ownership from the objects list to the vmas list. */
list_add_tail(&vma->exec_list, &eb->vmas);
+ list_del_init(&obj->obj_exec_link);
vma->exec_entry = &exec[i];
if (eb->and < 0) {
++i;
}
+ return 0;
-out:
+
+err:
while (!list_empty(&objects)) {
obj = list_first_entry(&objects,
struct drm_i915_gem_object,
obj_exec_link);
list_del_init(&obj->obj_exec_link);
- if (ret)
- drm_gem_object_unreference(&obj->base);
+ drm_gem_object_unreference(&obj->base);
}
+ /*
+ * Objects already transferred to the vmas list will be unreferenced by
+ * eb_destroy.
+ */
+
return ret;
}
}
}
-static void eb_destroy(struct eb_vmas *eb) {
+static void
+i915_gem_execbuffer_unreserve_vma(struct i915_vma *vma)
+{
+ struct drm_i915_gem_exec_object2 *entry;
+ struct drm_i915_gem_object *obj = vma->obj;
+
+ if (!drm_mm_node_allocated(&vma->node))
+ return;
+
+ entry = vma->exec_entry;
+
+ if (entry->flags & __EXEC_OBJECT_HAS_FENCE)
+ i915_gem_object_unpin_fence(obj);
+
+ if (entry->flags & __EXEC_OBJECT_HAS_PIN)
+ i915_gem_object_unpin(obj);
+
+ entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
+}
+
+static void eb_destroy(struct eb_vmas *eb)
+{
while (!list_empty(&eb->vmas)) {
struct i915_vma *vma;
struct i915_vma,
exec_list);
list_del_init(&vma->exec_list);
+ i915_gem_execbuffer_unreserve_vma(vma);
drm_gem_object_unreference(&vma->obj->base);
}
kfree(eb);
return ret;
}
-#define __EXEC_OBJECT_HAS_PIN (1<<31)
-#define __EXEC_OBJECT_HAS_FENCE (1<<30)
-
static int
need_reloc_mappable(struct i915_vma *vma)
{
return 0;
}
-static void
-i915_gem_execbuffer_unreserve_vma(struct i915_vma *vma)
-{
- struct drm_i915_gem_exec_object2 *entry;
- struct drm_i915_gem_object *obj = vma->obj;
-
- if (!drm_mm_node_allocated(&vma->node))
- return;
-
- entry = vma->exec_entry;
-
- if (entry->flags & __EXEC_OBJECT_HAS_FENCE)
- i915_gem_object_unpin_fence(obj);
-
- if (entry->flags & __EXEC_OBJECT_HAS_PIN)
- i915_gem_object_unpin(obj);
-
- entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
-}
-
static int
i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
struct list_head *vmas,
goto err;
}
-err: /* Decrement pin count for bound objects */
- list_for_each_entry(vma, vmas, exec_list)
- i915_gem_execbuffer_unreserve_vma(vma);
-
+err:
if (ret != -ENOSPC || retry++)
return ret;
+ /* Decrement pin count for bound objects */
+ list_for_each_entry(vma, vmas, exec_list)
+ i915_gem_execbuffer_unreserve_vma(vma);
+
ret = i915_gem_evict_vm(vm, true);
if (ret)
return ret;
while (!list_empty(&eb->vmas)) {
vma = list_first_entry(&eb->vmas, struct i915_vma, exec_list);
list_del_init(&vma->exec_list);
+ i915_gem_execbuffer_unreserve_vma(vma);
drm_gem_object_unreference(&vma->obj->base);
}
#define HSW_WB_LLC_AGE3 HSW_CACHEABILITY_CONTROL(0x2)
#define HSW_WB_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0x3)
#define HSW_WB_ELLC_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0xb)
+#define HSW_WB_ELLC_LLC_AGE3 HSW_CACHEABILITY_CONTROL(0x8)
#define HSW_WT_ELLC_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0x6)
+#define HSW_WT_ELLC_LLC_AGE3 HSW_CACHEABILITY_CONTROL(0x7)
#define GEN8_PTES_PER_PAGE (PAGE_SIZE / sizeof(gen8_gtt_pte_t))
#define GEN8_PDES_PER_PAGE (PAGE_SIZE / sizeof(gen8_ppgtt_pde_t))
case I915_CACHE_NONE:
break;
case I915_CACHE_WT:
- pte |= HSW_WT_ELLC_LLC_AGE0;
+ pte |= HSW_WT_ELLC_LLC_AGE3;
break;
default:
- pte |= HSW_WB_ELLC_LLC_AGE0;
+ pte |= HSW_WB_ELLC_LLC_AGE3;
break;
}
kfree(ppgtt->gen8_pt_dma_addr[i]);
}
- __free_pages(ppgtt->gen8_pt_pages, ppgtt->num_pt_pages << PAGE_SHIFT);
- __free_pages(ppgtt->pd_pages, ppgtt->num_pd_pages << PAGE_SHIFT);
+ __free_pages(ppgtt->gen8_pt_pages, get_order(ppgtt->num_pt_pages << PAGE_SHIFT));
+ __free_pages(ppgtt->pd_pages, get_order(ppgtt->num_pd_pages << PAGE_SHIFT));
}
/**
bdw_gmch_ctl &= BDW_GMCH_GGMS_MASK;
if (bdw_gmch_ctl)
bdw_gmch_ctl = 1 << bdw_gmch_ctl;
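+ /* Encodings above 4 (>4MB of GGTT) are unexpected; warn and clamp. */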
+ if (bdw_gmch_ctl > 4) {
+ WARN_ON(!i915_preliminary_hw_support);
+ return 4<<20;
+ }
+
return bdw_gmch_ctl << 20;
}
*/
#define MI_LOAD_REGISTER_IMM(x) MI_INSTR(0x22, 2*x-1)
#define MI_STORE_REGISTER_MEM(x) MI_INSTR(0x24, 2*x-1)
+#define MI_SRM_LRM_GLOBAL_GTT (1<<22)
#define MI_FLUSH_DW MI_INSTR(0x26, 1) /* for GEN6 */
#define MI_FLUSH_DW_STORE_INDEX (1<<21)
#define MI_INVALIDATE_TLB (1<<18)
acpi_handle dhandle;
int ret;
- dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return false;
/* Default to using SSC */
dev_priv->vbt.lvds_use_ssc = 1;
- dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev, 1);
+ /*
+ * Core/SandyBridge/IvyBridge use alternative (120MHz) reference
+ * clock for LVDS.
+ */
+ dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev,
+ !HAS_PCH_SPLIT(dev));
DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->vbt.lvds_ssc_freq);
for (port = PORT_A; port < I915_MAX_PORTS; port++) {
ddi_translations = ddi_translations_dp;
break;
case PORT_D:
- if (intel_dpd_is_edp(dev))
+ if (intel_dp_is_edp(dev, PORT_D))
ddi_translations = ddi_translations_edp;
else
ddi_translations = ddi_translations_dp;
if (wait)
intel_wait_ddi_buf_idle(dev_priv, port);
- if (type == INTEL_OUTPUT_EDP) {
+ if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
ironlake_edp_panel_vdd_on(intel_dp);
+ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
ironlake_edp_panel_off(intel_dp);
}
default:
break;
}
+
+ if (encoder->type == INTEL_OUTPUT_EDP && dev_priv->vbt.edp_bpp &&
+ pipe_config->pipe_bpp > dev_priv->vbt.edp_bpp) {
+ /*
+ * This is a big fat ugly hack.
+ *
+ * Some machines in UEFI boot mode provide us a VBT that has 18
+ * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
+ * unknown we fail to light up. Yet the same BIOS boots up with
+ * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
+ * max, not what it tells us to use.
+ *
+ * Note: This will still be broken if the eDP panel is not lit
+ * up by the BIOS, and thus we can't get the mode at module
+ * load.
+ */
+ DRM_DEBUG_KMS("pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
+ pipe_config->pipe_bpp, dev_priv->vbt.edp_bpp);
+ dev_priv->vbt.edp_bpp = pipe_config->pipe_bpp;
+ }
}
static void intel_ddi_destroy(struct drm_encoder *encoder)
uint16_t postoff = 0;
if (intel_crtc->config.limited_color_range)
- postoff = (16 * (1 << 13) / 255) & 0x1fff;
+ postoff = (16 * (1 << 12) / 255) & 0x1fff;
I915_WRITE(PIPE_CSC_POSTOFF_HI(pipe), postoff);
I915_WRITE(PIPE_CSC_POSTOFF_ME(pipe), postoff);
uint32_t val;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, base.head)
- WARN(crtc->base.enabled, "CRTC for pipe %c enabled\n",
+ WARN(crtc->active, "CRTC for pipe %c enabled\n",
pipe_name(crtc->pipe));
WARN(I915_READ(HSW_PWR_WELL_DRIVER), "Power well on\n");
/* Make sure we're not on PC8 state before disabling PC8, otherwise
* we'll hang the machine! */
- dev_priv->uncore.funcs.force_wake_get(dev_priv);
+ gen6_gt_force_wake_get(dev_priv);
if (val & LCPLL_POWER_DOWN_ALLOW) {
val &= ~LCPLL_POWER_DOWN_ALLOW;
DRM_ERROR("Switching back to LCPLL failed\n");
}
- dev_priv->uncore.funcs.force_wake_put(dev_priv);
+ gen6_gt_force_wake_put(dev_priv);
}
void hsw_enable_pc8_work(struct work_struct *__work)
void hsw_enable_package_c8(struct drm_i915_private *dev_priv)
{
+ if (!HAS_PC8(dev_priv->dev))
+ return;
+
mutex_lock(&dev_priv->pc8.lock);
__hsw_enable_package_c8(dev_priv);
mutex_unlock(&dev_priv->pc8.lock);
void hsw_disable_package_c8(struct drm_i915_private *dev_priv)
{
+ if (!HAS_PC8(dev_priv->dev))
+ return;
+
mutex_lock(&dev_priv->pc8.lock);
__hsw_disable_package_c8(dev_priv);
mutex_unlock(&dev_priv->pc8.lock);
struct drm_i915_private *dev_priv = dev->dev_private;
bool allow;
+ if (!HAS_PC8(dev_priv->dev))
+ return;
+
if (!i915_enable_pc8)
return;
static void hsw_package_c8_gpu_idle(struct drm_i915_private *dev_priv)
{
+ if (!HAS_PC8(dev_priv->dev))
+ return;
+
+ mutex_lock(&dev_priv->pc8.lock);
if (!dev_priv->pc8.gpu_idle) {
dev_priv->pc8.gpu_idle = true;
- hsw_enable_package_c8(dev_priv);
+ __hsw_enable_package_c8(dev_priv);
}
+ mutex_unlock(&dev_priv->pc8.lock);
}
static void hsw_package_c8_gpu_busy(struct drm_i915_private *dev_priv)
{
+ if (!HAS_PC8(dev_priv->dev))
+ return;
+
+ mutex_lock(&dev_priv->pc8.lock);
if (dev_priv->pc8.gpu_idle) {
dev_priv->pc8.gpu_idle = false;
- hsw_disable_package_c8(dev_priv);
+ __hsw_disable_package_c8(dev_priv);
}
+ mutex_unlock(&dev_priv->pc8.lock);
}
#define for_each_power_domain(domain, mask) \
intel_crtc->cursor_visible = visible;
}
/* and commit changes on next vblank */
+ POSTING_READ(CURCNTR(pipe));
I915_WRITE(CURBASE(pipe), base);
+ POSTING_READ(CURBASE(pipe));
}
static void ivb_update_cursor(struct drm_crtc *crtc, u32 base)
intel_crtc->cursor_visible = visible;
}
/* and commit changes on next vblank */
+ POSTING_READ(CURCNTR_IVB(pipe));
I915_WRITE(CURBASE_IVB(pipe), base);
+ POSTING_READ(CURBASE_IVB(pipe));
}
/* If no part of the cursor is visible on the framebuffer, then the GPU may hang... */
intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
DERRMR_PIPEB_PRI_FLIP_DONE |
DERRMR_PIPEC_PRI_FLIP_DONE));
- intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1));
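+		/* scratch.gtt_offset is a global-GTT address, so the SRM must set the GGTT bit. */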
+ intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) |
+ MI_SRM_LRM_GLOBAL_GTT);
intel_ring_emit(ring, DERRMR);
intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
}
if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5)
PIPE_CONF_CHECK_I(pipe_bpp);
- if (!IS_HASWELL(dev)) {
+ if (!HAS_DDI(dev)) {
PIPE_CONF_CHECK_CLOCK_FUZZY(adjusted_mode.crtc_clock);
PIPE_CONF_CHECK_CLOCK_FUZZY(port_clock);
}
enum pipe pipe;
if (encoder->base.crtc != &crtc->base)
continue;
- if (encoder->get_config &&
- encoder->get_hw_state(encoder, &pipe))
+ if (encoder->get_hw_state(encoder, &pipe))
encoder->get_config(encoder, &pipe_config);
}
intel_ddi_init(dev, PORT_D);
} else if (HAS_PCH_SPLIT(dev)) {
int found;
- dpd_is_edp = intel_dpd_is_edp(dev);
+ dpd_is_edp = intel_dp_is_edp(dev, PORT_D);
if (has_edp_a(dev))
intel_dp_init(dev, DP_A, PORT_A);
intel_hdmi_init(dev, VLV_DISPLAY_BASE + GEN4_HDMIC,
PORT_C);
if (I915_READ(VLV_DISPLAY_BASE + DP_C) & DP_DETECTED)
- intel_dp_init(dev, VLV_DISPLAY_BASE + DP_C,
- PORT_C);
+ intel_dp_init(dev, VLV_DISPLAY_BASE + DP_C, PORT_C);
}
intel_dsi_init(dev);
if (encoder->get_hw_state(encoder, &pipe)) {
crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
encoder->base.crtc = &crtc->base;
- if (encoder->get_config)
- encoder->get_config(encoder, &crtc->config);
+ encoder->get_config(encoder, &crtc->config);
} else {
encoder->base.crtc = NULL;
}
}
intel_modeset_check_state(dev);
-
- drm_mode_config_reset(dev);
}
void intel_modeset_gem_init(struct drm_device *dev)
intel_setup_overlay(dev);
+ drm_modeset_lock_all(dev);
+ drm_mode_config_reset(dev);
intel_modeset_setup_hw_state(dev, false);
+ drm_modeset_unlock_all(dev);
}
void intel_modeset_cleanup(struct drm_device *dev)
int intel_modeset_vga_set_state(struct drm_device *dev, bool state)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned reg = INTEL_INFO(dev)->gen >= 6 ? SNB_GMCH_CTRL : INTEL_GMCH_CTRL;
u16 gmch_ctrl;
- pci_read_config_word(dev_priv->bridge_dev, INTEL_GMCH_CTRL, &gmch_ctrl);
+ pci_read_config_word(dev_priv->bridge_dev, reg, &gmch_ctrl);
if (state)
gmch_ctrl &= ~INTEL_GMCH_VGA_DISABLE;
else
gmch_ctrl |= INTEL_GMCH_VGA_DISABLE;
- pci_write_config_word(dev_priv->bridge_dev, INTEL_GMCH_CTRL, gmch_ctrl);
+ pci_write_config_word(dev_priv->bridge_dev, reg, gmch_ctrl);
return 0;
}
* ensure that we have vdd while we switch off the panel. */
ironlake_edp_panel_vdd_on(intel_dp);
ironlake_edp_backlight_off(intel_dp);
- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
ironlake_edp_panel_off(intel_dp);
/* cpu edp may only be disabled _after_ the cpu pipe/plane is disabled. */
}
/* check the VBT to see whether the eDP is on DP-D port */
-bool intel_dpd_is_edp(struct drm_device *dev)
+bool intel_dp_is_edp(struct drm_device *dev, enum port port)
{
struct drm_i915_private *dev_priv = dev->dev_private;
union child_device_config *p_child;
int i;
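+	/* VBT child-device port ids corresponding to the external DP ports. */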
+ static const short port_mapping[] = {
+ [PORT_B] = PORT_IDPB,
+ [PORT_C] = PORT_IDPC,
+ [PORT_D] = PORT_IDPD,
+ };
+
+ if (port == PORT_A)
+ return true;
if (!dev_priv->vbt.child_dev_num)
return false;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
p_child = dev_priv->vbt.child_dev + i;
- if (p_child->common.dvo_port == PORT_IDPD &&
+ if (p_child->common.dvo_port == port_mapping[port] &&
(p_child->common.device_type & DEVICE_TYPE_eDP_BITS) ==
(DEVICE_TYPE_eDP & DEVICE_TYPE_eDP_BITS))
return true;
intel_dp->DP = I915_READ(intel_dp->output_reg);
intel_dp->attached_connector = intel_connector;
- type = DRM_MODE_CONNECTOR_DisplayPort;
- /*
- * FIXME : We need to initialize built-in panels before external panels.
- * For X0, DP_C is fixed as eDP. Revisit this as part of VLV eDP cleanup
- */
- switch (port) {
- case PORT_A:
+ if (intel_dp_is_edp(dev, port))
type = DRM_MODE_CONNECTOR_eDP;
- break;
- case PORT_C:
- if (IS_VALLEYVIEW(dev))
- type = DRM_MODE_CONNECTOR_eDP;
- break;
- case PORT_D:
- if (HAS_PCH_SPLIT(dev) && intel_dpd_is_edp(dev))
- type = DRM_MODE_CONNECTOR_eDP;
- break;
- default: /* silence GCC warning */
- break;
- }
+ else
+ type = DRM_MODE_CONNECTOR_DisplayPort;
/*
* For eDP we always set the encoder type to INTEL_OUTPUT_EDP, but
void intel_dp_check_link_status(struct intel_dp *intel_dp);
bool intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config);
-bool intel_dpd_is_edp(struct drm_device *dev);
+bool intel_dp_is_edp(struct drm_device *dev, enum port port);
void ironlake_edp_backlight_on(struct intel_dp *intel_dp);
void ironlake_edp_backlight_off(struct intel_dp *intel_dp);
void ironlake_edp_panel_on(struct intel_dp *intel_dp);
uint32_t sprite_width, int pixel_size,
bool enabled, bool scaled);
void intel_init_pm(struct drm_device *dev);
+void intel_pm_setup(struct drm_device *dev);
bool intel_fbc_enabled(struct drm_device *dev);
void intel_update_fbc(struct drm_device *dev);
void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
u32 temp;
int i = 0;
- handle = DEVICE_ACPI_HANDLE(&dev->pdev->dev);
+ handle = ACPI_HANDLE(&dev->pdev->dev);
if (!handle || acpi_bus_get_device(handle, &acpi_dev))
return;
spin_lock_irqsave(&dev_priv->backlight.lock, flags);
- if (HAS_PCH_SPLIT(dev)) {
+ if (IS_BROADWELL(dev)) {
+ val = I915_READ(BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK;
+ } else if (HAS_PCH_SPLIT(dev)) {
val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
} else {
if (IS_VALLEYVIEW(dev))
return val;
}
+static void intel_bdw_panel_set_backlight(struct drm_device *dev, u32 level)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 val = I915_READ(BLC_PWM_PCH_CTL2) & ~BACKLIGHT_DUTY_CYCLE_MASK;
+ I915_WRITE(BLC_PWM_PCH_CTL2, val | level);
+}
+
static void intel_pch_panel_set_backlight(struct drm_device *dev, u32 level)
{
struct drm_i915_private *dev_priv = dev->dev_private;
DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level);
level = intel_panel_compute_brightness(dev, pipe, level);
- if (HAS_PCH_SPLIT(dev))
+ if (IS_BROADWELL(dev))
+ return intel_bdw_panel_set_backlight(dev, level);
+ else if (HAS_PCH_SPLIT(dev))
return intel_pch_panel_set_backlight(dev, level);
if (is_backlight_combination_mode(dev)) {
POSTING_READ(reg);
I915_WRITE(reg, tmp | BLM_PWM_ENABLE);
- if (HAS_PCH_SPLIT(dev) &&
+ if (IS_BROADWELL(dev)) {
+ /*
+ * Broadwell requires PCH override to drive the PCH
+ * backlight pin. The above will configure the CPU
+ * backlight pin, which we don't plan to use.
+ */
+ tmp = I915_READ(BLC_PWM_PCH_CTL1);
+ tmp |= BLM_PCH_OVERRIDE_ENABLE | BLM_PCH_PWM_ENABLE;
+ I915_WRITE(BLC_PWM_PCH_CTL1, tmp);
+ } else if (HAS_PCH_SPLIT(dev) &&
!(dev_priv->quirks & QUIRK_NO_PCH_PWM_ENABLE)) {
tmp = I915_READ(BLC_PWM_PCH_CTL1);
tmp |= BLM_PCH_PWM_ENABLE;
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
clock = adjusted_mode->crtc_clock;
- htotal = adjusted_mode->htotal;
+ htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
pixel_size = crtc->fb->bits_per_pixel / 8;
crtc = intel_get_crtc_for_plane(dev, plane);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
clock = adjusted_mode->crtc_clock;
- htotal = adjusted_mode->htotal;
+ htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
pixel_size = crtc->fb->bits_per_pixel / 8;
const struct drm_display_mode *adjusted_mode =
&to_intel_crtc(crtc)->config.adjusted_mode;
int clock = adjusted_mode->crtc_clock;
- int htotal = adjusted_mode->htotal;
+ int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
int pixel_size = crtc->fb->bits_per_pixel / 8;
unsigned long line_time_us;
const struct drm_display_mode *adjusted_mode =
&to_intel_crtc(enabled)->config.adjusted_mode;
int clock = adjusted_mode->crtc_clock;
- int htotal = adjusted_mode->htotal;
- int hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
+ int htotal = adjusted_mode->crtc_htotal;
+ int hdisplay = to_intel_crtc(enabled)->config.pipe_src_w;
int pixel_size = enabled->fb->bits_per_pixel / 8;
unsigned long line_time_us;
int entries;
crtc = intel_get_crtc_for_plane(dev, plane);
adjusted_mode = &to_intel_crtc(crtc)->config.adjusted_mode;
clock = adjusted_mode->crtc_clock;
- htotal = adjusted_mode->htotal;
+ htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config.pipe_src_w;
pixel_size = crtc->fb->bits_per_pixel / 8;
/* The WM are computed with base on how long it takes to fill a single
* row at the given clock rate, multiplied by 8.
* */
- linetime = DIV_ROUND_CLOSEST(mode->htotal * 1000 * 8, mode->clock);
- ips_linetime = DIV_ROUND_CLOSEST(mode->htotal * 1000 * 8,
+ linetime = DIV_ROUND_CLOSEST(mode->crtc_htotal * 1000 * 8,
+ mode->crtc_clock);
+ ips_linetime = DIV_ROUND_CLOSEST(mode->crtc_htotal * 1000 * 8,
intel_ddi_get_cdclk_freq(dev_priv));
return PIPE_WM_LINETIME_IPS_LINETIME(ips_linetime) |
I915_WRITE(GEN6_RC_SLEEP, 0);
I915_WRITE(GEN6_RC1e_THRESHOLD, 1000);
- if (INTEL_INFO(dev)->gen <= 6 || IS_IVYBRIDGE(dev))
+ if (IS_IVYBRIDGE(dev))
I915_WRITE(GEN6_RC6_THRESHOLD, 125000);
else
I915_WRITE(GEN6_RC6_THRESHOLD, 50000);
{
struct drm_i915_private *dev_priv = dev->dev_private;
bool is_enabled, enable_requested;
+ unsigned long irqflags;
uint32_t tmp;
+ WARN_ON(dev_priv->pc8.enabled);
+
tmp = I915_READ(HSW_PWR_WELL_DRIVER);
is_enabled = tmp & HSW_PWR_WELL_STATE_ENABLED;
enable_requested = tmp & HSW_PWR_WELL_ENABLE_REQUEST;
HSW_PWR_WELL_STATE_ENABLED), 20))
DRM_ERROR("Timeout enabling power well\n");
}
+
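+		/* The pipe B/C interrupt registers were lost while the well was
+		 * off; restore them, keeping vblank enabled. */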
+ if (IS_BROADWELL(dev)) {
+ spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+ I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_B),
+ dev_priv->de_irq_mask[PIPE_B]);
+ I915_WRITE(GEN8_DE_PIPE_IER(PIPE_B),
+ ~dev_priv->de_irq_mask[PIPE_B] |
+ GEN8_PIPE_VBLANK);
+ I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_C),
+ dev_priv->de_irq_mask[PIPE_C]);
+ I915_WRITE(GEN8_DE_PIPE_IER(PIPE_C),
+ ~dev_priv->de_irq_mask[PIPE_C] |
+ GEN8_PIPE_VBLANK);
+ POSTING_READ(GEN8_DE_PIPE_IER(PIPE_C));
+ spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ }
} else {
if (enable_requested) {
- unsigned long irqflags;
enum pipe p;
I915_WRITE(HSW_PWR_WELL_DRIVER, 0);
static void __intel_power_well_get(struct drm_device *dev,
struct i915_power_well *power_well)
{
- if (!power_well->count++)
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ if (!power_well->count++) {
+ hsw_disable_package_c8(dev_priv);
__intel_set_power_well(dev, true);
+ }
}
static void __intel_power_well_put(struct drm_device *dev,
struct i915_power_well *power_well)
{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
WARN_ON(!power_well->count);
- if (!--power_well->count && i915_disable_power_well)
+ if (!--power_well->count && i915_disable_power_well) {
__intel_set_power_well(dev, false);
+ hsw_enable_package_c8(dev_priv);
+ }
}
void intel_display_power_get(struct drm_device *dev,
return val;
}
-void intel_pm_init(struct drm_device *dev)
+void intel_pm_setup(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ mutex_init(&dev_priv->rps.hw_lock);
+
+ mutex_init(&dev_priv->pc8.lock);
+ dev_priv->pc8.requirements_met = false;
+ dev_priv->pc8.gpu_idle = false;
+ dev_priv->pc8.irqs_disabled = false;
+ dev_priv->pc8.enabled = false;
+ dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */
+ INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work);
INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,
intel_gen6_powersave_work);
}
} else if (IS_GEN6(ring->dev)) {
mmio = RING_HWS_PGA_GEN6(ring->mmio_base);
} else {
+ /* XXX: gen8 returns to sanity */
mmio = RING_HWS_PGA(ring->mmio_base);
}
}
+static void
+intel_tv_get_config(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config)
+{
+ pipe_config->adjusted_mode.crtc_clock = pipe_config->port_clock;
+}
+
static bool
intel_tv_compute_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
DRM_MODE_ENCODER_TVDAC);
intel_encoder->compute_config = intel_tv_compute_config;
+ intel_encoder->get_config = intel_tv_get_config;
intel_encoder->mode_set = intel_tv_mode_set;
intel_encoder->enable = intel_enable_tv;
intel_encoder->disable = intel_disable_tv;
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
}
+static void intel_uncore_forcewake_reset(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ if (IS_VALLEYVIEW(dev)) {
+ vlv_force_wake_reset(dev_priv);
+ } else if (INTEL_INFO(dev)->gen >= 6) {
+ __gen6_gt_force_wake_reset(dev_priv);
+ if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
+ __gen6_gt_force_wake_mt_reset(dev_priv);
+ }
+}
+
void intel_uncore_early_sanitize(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
dev_priv->ellc_size = 128;
DRM_INFO("Found %zuMB of eLLC\n", dev_priv->ellc_size);
}
-}
-static void intel_uncore_forcewake_reset(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (IS_VALLEYVIEW(dev)) {
- vlv_force_wake_reset(dev_priv);
- } else if (INTEL_INFO(dev)->gen >= 6) {
- __gen6_gt_force_wake_reset(dev_priv);
- if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
- __gen6_gt_force_wake_mt_reset(dev_priv);
- }
+ intel_uncore_forcewake_reset(dev);
}
void intel_uncore_sanitize(struct drm_device *dev)
int intel_gpu_reset(struct drm_device *dev)
{
switch (INTEL_INFO(dev)->gen) {
+ case 8:
case 7:
case 6: return gen6_do_reset(dev);
case 5: return ironlake_do_reset(dev);
nouveau-y += core/subdev/clock/nv50.o
nouveau-y += core/subdev/clock/nv84.o
nouveau-y += core/subdev/clock/nva3.o
+nouveau-y += core/subdev/clock/nvaa.o
nouveau-y += core/subdev/clock/nvc0.o
nouveau-y += core/subdev/clock/nve0.o
nouveau-y += core/subdev/clock/pllnv04.o
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = &nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = &nv94_i2c_oclass;
- device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
+ device->oclass[NVDEV_SUBDEV_CLOCK ] = nvaa_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_DEVINIT] = &nv50_devinit_oclass;
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
device->oclass[NVDEV_SUBDEV_GPIO ] = &nv50_gpio_oclass;
device->oclass[NVDEV_SUBDEV_I2C ] = &nv94_i2c_oclass;
- device->oclass[NVDEV_SUBDEV_CLOCK ] = nv84_clock_oclass;
+ device->oclass[NVDEV_SUBDEV_CLOCK ] = nvaa_clock_oclass;
device->oclass[NVDEV_SUBDEV_THERM ] = &nv84_therm_oclass;
device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
device->oclass[NVDEV_SUBDEV_DEVINIT] = &nv50_devinit_oclass;
#include <engine/dmaobj.h>
#include <engine/fifo.h>
+#include "nv04.h"
#include "nv50.h"
/*******************************************************************************
nv_subdev(priv)->intr = nv04_fifo_intr;
nv_engine(priv)->cclass = &nv50_fifo_cclass;
nv_engine(priv)->sclass = nv50_fifo_sclass;
+ priv->base.pause = nv04_fifo_pause;
+ priv->base.start = nv04_fifo_start;
return 0;
}
#include <engine/dmaobj.h>
#include <engine/fifo.h>
+#include "nv04.h"
#include "nv50.h"
/*******************************************************************************
nv_subdev(priv)->intr = nv04_fifo_intr;
nv_engine(priv)->cclass = &nv84_fifo_cclass;
nv_engine(priv)->sclass = nv84_fifo_sclass;
+ priv->base.pause = nv04_fifo_pause;
+ priv->base.start = nv04_fifo_start;
return 0;
}
if (ret)
return ret;
- chan->vblank.nr_event = pdisp->vblank->index_nr;
+ chan->vblank.nr_event = pdisp ? pdisp->vblank->index_nr : 0;
chan->vblank.event = kzalloc(chan->vblank.nr_event *
sizeof(*chan->vblank.event), GFP_KERNEL);
if (!chan->vblank.event)
nv_clk_src_hclk,
nv_clk_src_hclkm3,
nv_clk_src_hclkm3d2,
+ nv_clk_src_hclkm2d3, /* NVAA */
+ nv_clk_src_hclkm4, /* NVAA */
+ nv_clk_src_cclk, /* NVAA */
nv_clk_src_host,
extern struct nouveau_oclass nv40_clock_oclass;
extern struct nouveau_oclass *nv50_clock_oclass;
extern struct nouveau_oclass *nv84_clock_oclass;
+extern struct nouveau_oclass *nvaa_clock_oclass;
extern struct nouveau_oclass nva3_clock_oclass;
extern struct nouveau_oclass nvc0_clock_oclass;
extern struct nouveau_oclass nve0_clock_oclass;
return 0;
}
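+/* nv04 has no controllable clock domains; pass a terminator-only list
+ * rather than NULL. */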
+static struct nouveau_clocks
+nv04_domain[] = {
+ { nv_clk_src_max }
+};
+
static int
nv04_clock_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nv04_clock_priv *priv;
int ret;
- ret = nouveau_clock_create(parent, engine, oclass, NULL, &priv);
+ ret = nouveau_clock_create(parent, engine, oclass, nv04_domain, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
--- /dev/null
+/*
+ * Copyright 2012 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include <engine/fifo.h>
+#include <subdev/bios.h>
+#include <subdev/bios/pll.h>
+#include <subdev/timer.h>
+#include <subdev/clock.h>
+
+#include "pll.h"
+
+struct nvaa_clock_priv {
+ struct nouveau_clock base;
+ enum nv_clk_src csrc, ssrc, vsrc;
+ u32 cctrl, sctrl;
+ u32 ccoef, scoef;
+ u32 cpost, spost;
+ u32 vdiv;
+};
+
+static u32
+read_div(struct nouveau_clock *clk)
+{
+ return nv_rd32(clk, 0x004600);
+}
+
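+/*
+ * Read back a PLL: clock = href * N1 / M1 / post_div, where the
+ * post-divider comes from 0x4070 (shader, power of two) or 0x4040
+ * (core, integer divider).
+ */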
+static u32
+read_pll(struct nouveau_clock *clk, u32 base)
+{
+ u32 ctrl = nv_rd32(clk, base + 0);
+ u32 coef = nv_rd32(clk, base + 4);
+ u32 ref = clk->read(clk, nv_clk_src_href);
+ u32 post_div = 0;
+ u32 clock = 0;
+ int N1, M1;
+
+ switch (base) {
+ case 0x4020:
+ post_div = 1 << ((nv_rd32(clk, 0x4070) & 0x000f0000) >> 16);
+ break;
+ case 0x4028:
+ post_div = (nv_rd32(clk, 0x4040) & 0x000f0000) >> 16;
+ break;
+ default:
+ break;
+ }
+
+ N1 = (coef & 0x0000ff00) >> 8;
+ M1 = (coef & 0x000000ff);
+ if ((ctrl & 0x80000000) && M1) {
+ clock = ref * N1 / M1;
+ clock = clock / post_div;
+ }
+
+ return clock;
+}
+
+static int
+nvaa_clock_read(struct nouveau_clock *clk, enum nv_clk_src src)
+{
+ struct nvaa_clock_priv *priv = (void *)clk;
+ u32 mast = nv_rd32(clk, 0x00c054);
+ u32 P = 0;
+
+ switch (src) {
+ case nv_clk_src_crystal:
+ return nv_device(priv)->crystal;
+ case nv_clk_src_href:
+ return 100000; /* PCIE reference clock */
+ case nv_clk_src_hclkm4:
+ return clk->read(clk, nv_clk_src_href) * 4;
+ case nv_clk_src_hclkm2d3:
+ return clk->read(clk, nv_clk_src_href) * 2 / 3;
+ case nv_clk_src_host:
+ switch (mast & 0x000c0000) {
+ case 0x00000000: return clk->read(clk, nv_clk_src_hclkm2d3);
+ case 0x00040000: break;
+ case 0x00080000: return clk->read(clk, nv_clk_src_hclkm4);
+ case 0x000c0000: return clk->read(clk, nv_clk_src_cclk);
+ }
+ break;
+ case nv_clk_src_core:
+ P = (nv_rd32(clk, 0x004028) & 0x00070000) >> 16;
+
+ switch (mast & 0x00000003) {
+ case 0x00000000: return clk->read(clk, nv_clk_src_crystal) >> P;
+ case 0x00000001: return 0;
+ case 0x00000002: return clk->read(clk, nv_clk_src_hclkm4) >> P;
+ case 0x00000003: return read_pll(clk, 0x004028) >> P;
+ }
+ break;
+ case nv_clk_src_cclk:
+ if ((mast & 0x03000000) != 0x03000000)
+ return clk->read(clk, nv_clk_src_core);
+
+ if ((mast & 0x00000200) == 0x00000000)
+ return clk->read(clk, nv_clk_src_core);
+
+ switch (mast & 0x00000c00) {
+ case 0x00000000: return clk->read(clk, nv_clk_src_href);
+ case 0x00000400: return clk->read(clk, nv_clk_src_hclkm4);
+ case 0x00000800: return clk->read(clk, nv_clk_src_hclkm2d3);
+ default: return 0;
+ }
+ case nv_clk_src_shader:
+ P = (nv_rd32(clk, 0x004020) & 0x00070000) >> 16;
+ switch (mast & 0x00000030) {
+ case 0x00000000:
+ if (mast & 0x00000040)
+ return clk->read(clk, nv_clk_src_href) >> P;
+ return clk->read(clk, nv_clk_src_crystal) >> P;
+ case 0x00000010: break;
+ case 0x00000020: return read_pll(clk, 0x004028) >> P;
+ case 0x00000030: return read_pll(clk, 0x004020) >> P;
+ }
+ break;
+ case nv_clk_src_mem:
+ return 0;
+ break;
+ case nv_clk_src_vdec:
+ P = (read_div(clk) & 0x00000700) >> 8;
+
+ switch (mast & 0x00400000) {
+ case 0x00400000:
+ return clk->read(clk, nv_clk_src_core) >> P;
+ break;
+ default:
+ return 500000 >> P;
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+
+ nv_debug(priv, "unknown clock source %d 0x%08x\n", src, mast);
+ return 0;
+}
+
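+/*
+ * Find N/M/P coefficients for the PLL at @reg targeting @clock, within
+ * the limits given by the VBIOS PLL table; href is the reference clock.
+ */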
+static u32
+calc_pll(struct nvaa_clock_priv *priv, u32 reg,
+ u32 clock, int *N, int *M, int *P)
+{
+ struct nouveau_bios *bios = nouveau_bios(priv);
+ struct nvbios_pll pll;
+ struct nouveau_clock *clk = &priv->base;
+ int ret;
+
+ ret = nvbios_pll_parse(bios, reg, &pll);
+ if (ret)
+ return 0;
+
+ pll.vco2.max_freq = 0;
+ pll.refclk = clk->read(clk, nv_clk_src_href);
+ if (!pll.refclk)
+ return 0;
+
+ return nv04_pll_calc(nv_subdev(priv), &pll, clock, N, M, NULL, NULL, P);
+}
+
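+/*
+ * Choose the power-of-two divider of @src that lands closest to @target;
+ * the divider is returned in @div, the resulting clock directly.
+ */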
+static inline u32
+calc_P(u32 src, u32 target, int *div)
+{
+ u32 clk0 = src, clk1 = src;
+ for (*div = 0; *div <= 7; (*div)++) {
+ if (clk0 <= target) {
+ clk1 = clk0 << (*div ? 1 : 0);
+ break;
+ }
+ clk0 >>= 1;
+ }
+
+ if (target - clk0 <= clk1 - target)
+ return clk0;
+ (*div)--;
+ return clk1;
+}
+
+static int
+nvaa_clock_calc(struct nouveau_clock *clk, struct nouveau_cstate *cstate)
+{
+ struct nvaa_clock_priv *priv = (void *)clk;
+ const int shader = cstate->domain[nv_clk_src_shader];
+ const int core = cstate->domain[nv_clk_src_core];
+ const int vdec = cstate->domain[nv_clk_src_vdec];
+ u32 out = 0, clock = 0;
+ int N, M, P1, P2 = 0;
+ int divs = 0;
+
+ /* cclk: find suitable source, disable PLL if we can */
+ if (core < clk->read(clk, nv_clk_src_hclkm4))
+ out = calc_P(clk->read(clk, nv_clk_src_hclkm4), core, &divs);
+
+ /* Calculate clock * 2, so shader clock can use it too */
+ clock = calc_pll(priv, 0x4028, (core << 1), &N, &M, &P1);
+
+ if (abs(core - out) <=
+ abs(core - (clock >> 1))) {
+ priv->csrc = nv_clk_src_hclkm4;
+ priv->cctrl = divs << 16;
+ } else {
+ /* NVCTRL is actually used _after_ NVPOST, and after what we
+ * call NVPLL. To make matters worse, NVPOST is an integer
+ * divider instead of a right-shift number. */
+ if (P1 > 2) {
+ P2 = P1 - 2;
+ P1 = 2;
+ }
+
+ priv->csrc = nv_clk_src_core;
+ priv->ccoef = (N << 8) | M;
+
+ priv->cctrl = (P2 + 1) << 16;
+ priv->cpost = (1 << P1) << 16;
+ }
+
+ /* sclk: nvpll + divisor, href or spll */
+ out = 0;
+ if (shader == clk->read(clk, nv_clk_src_href)) {
+ priv->ssrc = nv_clk_src_href;
+ } else {
+ clock = calc_pll(priv, 0x4020, shader, &N, &M, &P1);
+ if (priv->csrc == nv_clk_src_core) {
+ out = calc_P((core << 1), shader, &divs);
+ }
+
+ if (abs(shader - out) <=
+ abs(shader - clock) &&
+ (divs + P2) <= 7) {
+ priv->ssrc = nv_clk_src_core;
+ priv->sctrl = (divs + P2) << 16;
+ } else {
+ priv->ssrc = nv_clk_src_shader;
+ priv->scoef = (N << 8) | M;
+ priv->sctrl = P1 << 16;
+ }
+ }
+
+ /* vclk */
+ out = calc_P(core, vdec, &divs);
+ clock = calc_P(500000, vdec, &P1);
+ if (abs(vdec - out) <=
+ abs(vdec - clock)) {
+ priv->vsrc = nv_clk_src_cclk;
+ priv->vdiv = divs << 16;
+ } else {
+ priv->vsrc = nv_clk_src_vdec;
+ priv->vdiv = P1 << 16;
+ }
+
+ /* Print strategy! */
+ nv_debug(priv, "nvpll: %08x %08x %08x\n",
+ priv->ccoef, priv->cpost, priv->cctrl);
+ nv_debug(priv, " spll: %08x %08x %08x\n",
+ priv->scoef, priv->spost, priv->sctrl);
+ nv_debug(priv, " vdiv: %08x\n", priv->vdiv);
+ if (priv->csrc == nv_clk_src_hclkm4)
+ nv_debug(priv, "core: hrefm4\n");
+ else
+ nv_debug(priv, "core: nvpll\n");
+
+ if (priv->ssrc == nv_clk_src_href)
+ nv_debug(priv, "shader: href\n");
+ else if (priv->ssrc == nv_clk_src_core)
+ nv_debug(priv, "shader: nvpll\n");
+ else
+ nv_debug(priv, "shader: spll\n");
+
+ if (priv->vsrc == nv_clk_src_vdec)
+ nv_debug(priv, "vdec: 500MHz\n");
+ else
+ nv_debug(priv, "vdec: core\n");
+
+ return 0;
+}
+
+static int
+nvaa_clock_prog(struct nouveau_clock *clk)
+{
+ struct nvaa_clock_priv *priv = (void *)clk;
+ struct nouveau_fifo *pfifo = nouveau_fifo(clk);
+ unsigned long flags;
+ u32 pllmask = 0, mast, ptherm_gate;
+ int ret = -EBUSY;
+
+ /* halt and idle execution engines */
+ ptherm_gate = nv_mask(clk, 0x020060, 0x00070000, 0x00000000);
+ nv_mask(clk, 0x002504, 0x00000001, 0x00000001);
+ /* Wait until the interrupt handler is finished */
+ if (!nv_wait(clk, 0x000100, 0xffffffff, 0x00000000))
+ goto resume;
+
+ if (pfifo)
+ pfifo->pause(pfifo, &flags);
+
+ if (!nv_wait(clk, 0x002504, 0x00000010, 0x00000010))
+ goto resume;
+ if (!nv_wait(clk, 0x00251c, 0x0000003f, 0x0000003f))
+ goto resume;
+
+ /* First switch to safe clocks: href */
+ mast = nv_mask(clk, 0xc054, 0x03400e70, 0x03400640);
+ mast &= ~0x00400e73;
+ mast |= 0x03000000;
+
+ switch (priv->csrc) {
+ case nv_clk_src_hclkm4:
+ nv_mask(clk, 0x4028, 0x00070000, priv->cctrl);
+ mast |= 0x00000002;
+ break;
+ case nv_clk_src_core:
+ nv_wr32(clk, 0x402c, priv->ccoef);
+ nv_wr32(clk, 0x4028, 0x80000000 | priv->cctrl);
+ nv_wr32(clk, 0x4040, priv->cpost);
+ pllmask |= (0x3 << 8);
+ mast |= 0x00000003;
+ break;
+ default:
+ nv_warn(priv,"Reclocking failed: unknown core clock\n");
+ goto resume;
+ }
+
+ switch (priv->ssrc) {
+ case nv_clk_src_href:
+ nv_mask(clk, 0x4020, 0x00070000, 0x00000000);
+ /* mast |= 0x00000000; */
+ break;
+ case nv_clk_src_core:
+ nv_mask(clk, 0x4020, 0x00070000, priv->sctrl);
+ mast |= 0x00000020;
+ break;
+ case nv_clk_src_shader:
+ nv_wr32(clk, 0x4024, priv->scoef);
+ nv_wr32(clk, 0x4020, 0x80000000 | priv->sctrl);
+ nv_wr32(clk, 0x4070, priv->spost);
+ pllmask |= (0x3 << 12);
+ mast |= 0x00000030;
+ break;
+ default:
+ nv_warn(priv,"Reclocking failed: unknown sclk clock\n");
+ goto resume;
+ }
+
+ if (!nv_wait(clk, 0x004080, pllmask, pllmask)) {
+ nv_warn(priv,"Reclocking failed: unstable PLLs\n");
+ goto resume;
+ }
+
+ switch (priv->vsrc) {
+ case nv_clk_src_cclk:
+ mast |= 0x00400000;
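+ /* fall through - the vdec divider in 0x4600 is written for both sources */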
+ default:
+ nv_wr32(clk, 0x4600, priv->vdiv);
+ }
+
+ nv_wr32(clk, 0xc054, mast);
+ ret = 0;
+
+resume:
+ if (pfifo)
+ pfifo->start(pfifo, &flags);
+
+ nv_mask(clk, 0x002504, 0x00000001, 0x00000000);
+ nv_wr32(clk, 0x020060, ptherm_gate);
+
+ /* Disable some PLLs and dividers when unused */
+ if (priv->csrc != nv_clk_src_core) {
+ nv_wr32(clk, 0x4040, 0x00000000);
+ nv_mask(clk, 0x4028, 0x80000000, 0x00000000);
+ }
+
+ if (priv->ssrc != nv_clk_src_shader) {
+ nv_wr32(clk, 0x4070, 0x00000000);
+ nv_mask(clk, 0x4020, 0x80000000, 0x00000000);
+ }
+
+ return ret;
+}
+
+static void
+nvaa_clock_tidy(struct nouveau_clock *clk)
+{
+}
+
+static struct nouveau_clocks
+nvaa_domains[] = {
+ { nv_clk_src_crystal, 0xff },
+ { nv_clk_src_href , 0xff },
+ { nv_clk_src_core , 0xff, 0, "core", 1000 },
+ { nv_clk_src_shader , 0xff, 0, "shader", 1000 },
+ { nv_clk_src_vdec , 0xff, 0, "vdec", 1000 },
+ { nv_clk_src_max }
+};
+
+static int
+nvaa_clock_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
+ struct nouveau_oclass *oclass, void *data, u32 size,
+ struct nouveau_object **pobject)
+{
+ struct nvaa_clock_priv *priv;
+ int ret;
+
+ ret = nouveau_clock_create(parent, engine, oclass, nvaa_domains, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ priv->base.read = nvaa_clock_read;
+ priv->base.calc = nvaa_clock_calc;
+ priv->base.prog = nvaa_clock_prog;
+ priv->base.tidy = nvaa_clock_tidy;
+ return 0;
+}
+
+struct nouveau_oclass *
+nvaa_clock_oclass = &(struct nouveau_oclass) {
+ .handle = NV_SUBDEV(CLOCK, 0xaa),
+ .ofuncs = &(struct nouveau_ofuncs) {
+ .ctor = nvaa_clock_ctor,
+ .dtor = _nouveau_clock_dtor,
+ .init = _nouveau_clock_init,
+ .fini = _nouveau_clock_fini,
+ },
+};
acpi_handle handle;
int ret;
- handle = DEVICE_ACPI_HANDLE(&device->pdev->dev);
+ handle = ACPI_HANDLE(&device->pdev->dev);
if (!handle)
return false;
};
static uint32_t formats[] = {
- DRM_FORMAT_NV12,
DRM_FORMAT_UYVY,
+ DRM_FORMAT_NV12,
};
/* Sine can be approximated with
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct nouveau_bo *cur = nv_plane->cur;
bool flip = nv_plane->flip;
- int format = ALIGN(src_w * 4, 0x100);
int soff = NV_PCRTC0_SIZE * nv_crtc->index;
int soff2 = NV_PCRTC0_SIZE * !nv_crtc->index;
- int ret;
+ int format, ret;
+
+ /* Source parameters given in 16.16 fixed point, ignore fractional. */
+ src_x >>= 16;
+ src_y >>= 16;
+ src_w >>= 16;
+ src_h >>= 16;
+
+ format = ALIGN(src_w * 4, 0x100);
if (format > 0xffff)
- return -EINVAL;
+ return -ERANGE;
+
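+	/* Overlay downscaling limits: at most 2x on nv30+, 8x on earlier chips. */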
+ if (dev->chipset >= 0x30) {
+ if (crtc_w < (src_w >> 1) || crtc_h < (src_h >> 1))
+ return -ERANGE;
+ } else {
+ if (crtc_w < (src_w >> 3) || crtc_h < (src_h >> 3))
+ return -ERANGE;
+ }
ret = nouveau_bo_pin(nv_fb->nvbo, TTM_PL_FLAG_VRAM);
if (ret)
nv_plane->cur = nv_fb->nvbo;
- /* Source parameters given in 16.16 fixed point, ignore fractional. */
- src_x = src_x >> 16;
- src_y = src_y >> 16;
- src_w = src_w >> 16;
- src_h = src_h >> 16;
-
nv_mask(dev, NV_PCRTC_ENGINE_CTRL + soff, NV_CRTC_FSEL_OVERLAY, NV_CRTC_FSEL_OVERLAY);
nv_mask(dev, NV_PCRTC_ENGINE_CTRL + soff2, NV_CRTC_FSEL_OVERLAY, 0);
{
struct nouveau_device *dev = nouveau_dev(device);
struct nouveau_plane *plane = kzalloc(sizeof(struct nouveau_plane), GFP_KERNEL);
+ int num_formats = ARRAY_SIZE(formats);
int ret;
if (!plane)
return;
+ switch (dev->chipset) {
+ case 0x10:
+ case 0x11:
+ case 0x15:
+ case 0x1a:
+ case 0x20:
+ num_formats = 1;
+ break;
+ }
+
ret = drm_plane_init(device, &plane->base, 3 /* both crtc's */,
&nv10_plane_funcs,
- formats, ARRAY_SIZE(formats), false);
+ formats, num_formats, false);
if (ret)
goto err;
bool dsm_detected;
bool optimus_detected;
acpi_handle dhandle;
+ acpi_handle other_handle;
acpi_handle rom_handle;
} nouveau_dsm_priv;
acpi_handle dhandle;
int retval = 0;
- dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return false;
- if (!acpi_has_method(dhandle, "_DSM"))
+ if (!acpi_has_method(dhandle, "_DSM")) {
+ nouveau_dsm_priv.other_handle = dhandle;
return false;
-
+ }
if (nouveau_test_dsm(dhandle, nouveau_dsm, NOUVEAU_DSM_POWER))
retval |= NOUVEAU_DSM_HAS_MUX;
printk(KERN_INFO "VGA switcheroo: detected DSM switching method %s handle\n",
acpi_method_name);
nouveau_dsm_priv.dsm_detected = true;
+ /*
+ * On some systems hotplug events are generated for the device
+ * being switched off when _DSM is executed. They cause ACPI
+ * hotplug to trigger and attempt to remove the device from
+ * the system, which causes it to break down. Prevent that from
+ * happening by setting the no_hotplug flag for the involved
+ * ACPI device objects.
+ */
+ acpi_bus_no_hotplug(nouveau_dsm_priv.dhandle);
+ acpi_bus_no_hotplug(nouveau_dsm_priv.other_handle);
ret = true;
}
if (!nouveau_dsm_priv.dsm_detected && !nouveau_dsm_priv.optimus_detected)
return false;
- dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return false;
return NULL;
}
- handle = DEVICE_ACPI_HANDLE(&dev->pdev->dev);
+ handle = ACPI_HANDLE(&dev->pdev->dev);
if (!handle)
return NULL;
fence = nouveau_fence_ref(new_bo->bo.sync_obj);
spin_unlock(&new_bo->bo.bdev->fence_lock);
ret = nouveau_fence_sync(fence, chan);
+ nouveau_fence_unref(&fence);
if (ret)
return ret;
s = list_first_entry(&fctx->flip, struct nouveau_page_flip_state, head);
if (s->event)
- drm_send_vblank_event(dev, -1, s->event);
+ drm_send_vblank_event(dev, s->crtc, s->event);
list_del(&s->head);
if (ps)
if (nouveau_runtime_pm == 0)
return -EINVAL;
+ /* are we optimus enabled? */
+ if (nouveau_runtime_pm == -1 && !nouveau_is_optimus() && !nouveau_is_v1_dsm()) {
+ DRM_DEBUG_DRIVER("failing to power off - not optimus\n");
+ return -EINVAL;
+ }
+
nv_debug_level(SILENT);
drm_kms_helper_poll_disable(drm_dev);
vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
hwmon->hwmon = NULL;
return ret;
#else
- hwmon->hwmon = NULL;
return 0;
#endif
}
uint32_t start, uint32_t size)
{
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
- u32 end = max(start + size, (u32)256);
+ u32 end = min_t(u32, start + size, 256);
u32 i;
for (i = start; i < end; i++) {
select DRM_KMS_HELPER
select DRM_KMS_FB_HELPER
select DRM_TTM
+ select CRC32
help
QXL virtual GPU for Spice virtualization desktop integration. Do not enable this driver unless your distro ships a corresponding X.org QXL driver that can handle kernel modesetting.
*/
-#include "linux/crc32.h"
+#include <linux/crc32.h>
#include "qxl_drv.h"
#include "qxl_object.h"
- DRM_FILE_OFFSET);
qxl_fence_remove_release(&bo->fence, release->id);
qxl_bo_unref(&bo);
+ kfree(entry);
}
spin_lock(&qdev->release_idr_lock);
idr_remove(&qdev->release_idr, release->id);
}
if (tiling_flags & RADEON_TILING_MACRO) {
- if (rdev->family >= CHIP_BONAIRE)
- tmp = rdev->config.cik.tile_config;
- else if (rdev->family >= CHIP_TAHITI)
- tmp = rdev->config.si.tile_config;
- else if (rdev->family >= CHIP_CAYMAN)
- tmp = rdev->config.cayman.tile_config;
- else
- tmp = rdev->config.evergreen.tile_config;
+ evergreen_tiling_fields(tiling_flags, &bankw, &bankh, &mtaspect, &tile_split);
- switch ((tmp & 0xf0) >> 4) {
- case 0: /* 4 banks */
- fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_4_BANK);
- break;
- case 1: /* 8 banks */
- default:
- fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_8_BANK);
- break;
- case 2: /* 16 banks */
- fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_16_BANK);
- break;
+ /* Set NUM_BANKS. */
+ if (rdev->family >= CHIP_BONAIRE) {
+ unsigned tileb, index, num_banks, tile_split_bytes;
+
+ /* Calculate the macrotile mode index. */
+ tile_split_bytes = 64 << tile_split;
+ tileb = 8 * 8 * target_fb->bits_per_pixel / 8;
+ tileb = min(tile_split_bytes, tileb);
+
+ for (index = 0; tileb > 64; index++) {
+ tileb >>= 1;
+ }
+
+ if (index >= 16) {
+ DRM_ERROR("Wrong screen bpp (%u) or tile split (%u)\n",
+ target_fb->bits_per_pixel, tile_split);
+ return -EINVAL;
+ }
+
+ num_banks = (rdev->config.cik.macrotile_mode_array[index] >> 6) & 0x3;
+ fb_format |= EVERGREEN_GRPH_NUM_BANKS(num_banks);
+ } else {
+ /* SI and older. */
+ if (rdev->family >= CHIP_TAHITI)
+ tmp = rdev->config.si.tile_config;
+ else if (rdev->family >= CHIP_CAYMAN)
+ tmp = rdev->config.cayman.tile_config;
+ else
+ tmp = rdev->config.evergreen.tile_config;
+
+ switch ((tmp & 0xf0) >> 4) {
+ case 0: /* 4 banks */
+ fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_4_BANK);
+ break;
+ case 1: /* 8 banks */
+ default:
+ fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_8_BANK);
+ break;
+ case 2: /* 16 banks */
+ fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_16_BANK);
+ break;
+ }
}
fb_format |= EVERGREEN_GRPH_ARRAY_MODE(EVERGREEN_GRPH_ARRAY_2D_TILED_THIN1);
-
- evergreen_tiling_fields(tiling_flags, &bankw, &bankh, &mtaspect, &tile_split);
fb_format |= EVERGREEN_GRPH_TILE_SPLIT(tile_split);
fb_format |= EVERGREEN_GRPH_BANK_WIDTH(bankw);
fb_format |= EVERGREEN_GRPH_BANK_HEIGHT(bankh);
fb_format |= EVERGREEN_GRPH_ARRAY_MODE(EVERGREEN_GRPH_ARRAY_1D_TILED_THIN1);
if (rdev->family >= CHIP_BONAIRE) {
- u32 num_pipe_configs = rdev->config.cik.max_tile_pipes;
- u32 num_rb = rdev->config.cik.max_backends_per_se;
- if (num_pipe_configs > 8)
- num_pipe_configs = 8;
- if (num_pipe_configs == 8)
- fb_format |= CIK_GRPH_PIPE_CONFIG(CIK_ADDR_SURF_P8_32x32_16x16);
- else if (num_pipe_configs == 4) {
- if (num_rb == 4)
- fb_format |= CIK_GRPH_PIPE_CONFIG(CIK_ADDR_SURF_P4_16x16);
- else if (num_rb < 4)
- fb_format |= CIK_GRPH_PIPE_CONFIG(CIK_ADDR_SURF_P4_8x16);
- } else if (num_pipe_configs == 2)
- fb_format |= CIK_GRPH_PIPE_CONFIG(CIK_ADDR_SURF_P2);
+ /* Read the pipe config from the 2D TILED SCANOUT mode.
+ * It should be the same for the other modes too, but not all
+ * modes set the pipe config field. */
+ u32 pipe_config = (rdev->config.cik.tile_mode_array[10] >> 6) & 0x1f;
+
+ fb_format |= CIK_GRPH_PIPE_CONFIG(pipe_config);
} else if ((rdev->family == CHIP_TAHITI) ||
(rdev->family == CHIP_PITCAIRN))
fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P8_32x32_8x16);
- else if (rdev->family == CHIP_VERDE)
+ else if ((rdev->family == CHIP_VERDE) ||
+ (rdev->family == CHIP_OLAND) ||
+ (rdev->family == CHIP_HAINAN)) /* for completeness. HAINAN has no display hw */
fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P4_8x16);
switch (radeon_crtc->crtc_id) {
PROCESS_I2C_CHANNEL_TRANSACTION_PS_ALLOCATION args;
int index = GetIndexIntoMasterTable(COMMAND, ProcessI2cChannelTransaction);
unsigned char *base;
- u16 out;
+ u16 out = cpu_to_le16(0);
memset(&args, 0, sizeof(args));
DRM_ERROR("hw i2c: tried to write too many bytes (%d vs 3)\n", num);
return -EINVAL;
}
- args.ucRegIndex = buf[0];
- if (num > 1)
- memcpy(&out, &buf[1], num - 1);
+ if (buf == NULL)
+ args.ucRegIndex = 0;
+ else
+ args.ucRegIndex = buf[0];
+ if (num)
+ num--;
+ if (num)
+ memcpy(&out, &buf[1], num);
args.lpI2CDataOut = cpu_to_le16(out);
} else {
if (num > ATOM_MAX_HW_I2C_READ) {
struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
struct i2c_msg *p;
int i, remaining, current_count, buffer_offset, max_bytes, ret;
- u8 buf = 0, flags;
+ u8 flags;
/* check for bus probe */
p = &msgs[0];
if ((num == 1) && (p->len == 0)) {
ret = radeon_process_i2c_ch(i2c,
p->addr, HW_I2C_WRITE,
- &buf, 1);
+ NULL, 0);
if (ret)
return ret;
else
* cik_mm_rdoorbell - read a doorbell dword
*
* @rdev: radeon_device pointer
- * @offset: byte offset into the aperture
+ * @index: doorbell index
*
* Returns the value in the doorbell aperture at the
- * requested offset (CIK).
+ * requested doorbell index (CIK).
*/
-u32 cik_mm_rdoorbell(struct radeon_device *rdev, u32 offset)
+u32 cik_mm_rdoorbell(struct radeon_device *rdev, u32 index)
{
- if (offset < rdev->doorbell.size) {
- return readl(((void __iomem *)rdev->doorbell.ptr) + offset);
+ if (index < rdev->doorbell.num_doorbells) {
+ return readl(rdev->doorbell.ptr + index);
} else {
- DRM_ERROR("reading beyond doorbell aperture: 0x%08x!\n", offset);
+ DRM_ERROR("reading beyond doorbell aperture: 0x%08x!\n", index);
return 0;
}
}
* cik_mm_wdoorbell - write a doorbell dword
*
* @rdev: radeon_device pointer
- * @offset: byte offset into the aperture
+ * @index: doorbell index
* @v: value to write
*
* Writes @v to the doorbell aperture at the
- * requested offset (CIK).
+ * requested doorbell index (CIK).
*/
-void cik_mm_wdoorbell(struct radeon_device *rdev, u32 offset, u32 v)
+void cik_mm_wdoorbell(struct radeon_device *rdev, u32 index, u32 v)
{
- if (offset < rdev->doorbell.size) {
- writel(v, ((void __iomem *)rdev->doorbell.ptr) + offset);
+ if (index < rdev->doorbell.num_doorbells) {
+ writel(v, rdev->doorbell.ptr + index);
} else {
- DRM_ERROR("writing beyond doorbell aperture: 0x%08x!\n", offset);
+ DRM_ERROR("writing beyond doorbell aperture: 0x%08x!\n", index);
}
}
gb_tile_moden = 0;
break;
}
+ rdev->config.cik.macrotile_mode_array[reg_offset] = gb_tile_moden;
WREG32(GB_MACROTILE_MODE0 + (reg_offset * 4), gb_tile_moden);
}
} else if (num_pipe_configs == 4) {
gb_tile_moden = 0;
break;
}
+ rdev->config.cik.macrotile_mode_array[reg_offset] = gb_tile_moden;
WREG32(GB_MACROTILE_MODE0 + (reg_offset * 4), gb_tile_moden);
}
} else if (num_pipe_configs == 2) {
gb_tile_moden = 0;
break;
}
+ rdev->config.cik.macrotile_mode_array[reg_offset] = gb_tile_moden;
WREG32(GB_MACROTILE_MODE0 + (reg_offset * 4), gb_tile_moden);
}
} else
* Returns the disabled RB bitmask.
*/
static u32 cik_get_rb_disabled(struct radeon_device *rdev,
- u32 max_rb_num, u32 se_num,
+ u32 max_rb_num_per_se,
u32 sh_per_se)
{
u32 data, mask;
data >>= BACKEND_DISABLE_SHIFT;
- mask = cik_create_bitmask(max_rb_num / se_num / sh_per_se);
+ mask = cik_create_bitmask(max_rb_num_per_se / sh_per_se);
return data & mask;
}
*/
static void cik_setup_rb(struct radeon_device *rdev,
u32 se_num, u32 sh_per_se,
- u32 max_rb_num)
+ u32 max_rb_num_per_se)
{
int i, j;
u32 data, mask;
for (i = 0; i < se_num; i++) {
for (j = 0; j < sh_per_se; j++) {
cik_select_se_sh(rdev, i, j);
- data = cik_get_rb_disabled(rdev, max_rb_num, se_num, sh_per_se);
+ data = cik_get_rb_disabled(rdev, max_rb_num_per_se, sh_per_se);
if (rdev->family == CHIP_HAWAII)
disabled_rbs |= data << ((i * sh_per_se + j) * HAWAII_RB_BITMAP_WIDTH_PER_SH);
else
cik_select_se_sh(rdev, 0xffffffff, 0xffffffff);
mask = 1;
- for (i = 0; i < max_rb_num; i++) {
+ for (i = 0; i < max_rb_num_per_se * se_num; i++) {
if (!(disabled_rbs & mask))
enabled_rbs |= mask;
mask <<= 1;
}
+ rdev->config.cik.backend_enable_mask = enabled_rbs;
+
for (i = 0; i < se_num; i++) {
cik_select_se_sh(rdev, i, 0xffffffff);
data = 0;
radeon_ring_write(ring, 0);
}
-void cik_semaphore_ring_emit(struct radeon_device *rdev,
+bool cik_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
{
+/* TODO: figure out why semaphore cause lockups */
+#if 0
uint64_t addr = semaphore->gpu_addr;
unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : PACKET3_SEM_SEL_SIGNAL;
radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1));
radeon_ring_write(ring, addr & 0xffffffff);
radeon_ring_write(ring, (upper_32_bits(addr) & 0xffff) | sel);
+
+ return true;
+#else
+ return false;
+#endif
}
/**
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
cur_size_in_bytes = size_in_bytes;
struct radeon_ring *ring)
{
rdev->wb.wb[ring->wptr_offs/4] = cpu_to_le32(ring->wptr);
- WDOORBELL32(ring->doorbell_offset, ring->wptr);
+ WDOORBELL32(ring->doorbell_index, ring->wptr);
}
/**
return r;
}
- /* doorbell offset */
- rdev->ring[idx].doorbell_offset =
- (rdev->ring[idx].doorbell_page_num * PAGE_SIZE) + 0;
-
/* init the mqd struct */
memset(buf, 0, sizeof(struct bonaire_mqd));
RREG32(CP_HQD_PQ_DOORBELL_CONTROL);
mqd->queue_state.cp_hqd_pq_doorbell_control &= ~DOORBELL_OFFSET_MASK;
mqd->queue_state.cp_hqd_pq_doorbell_control |=
- DOORBELL_OFFSET(rdev->ring[idx].doorbell_offset / 4);
+ DOORBELL_OFFSET(rdev->ring[idx].doorbell_index);
mqd->queue_state.cp_hqd_pq_doorbell_control |= DOORBELL_EN;
mqd->queue_state.cp_hqd_pq_doorbell_control &=
~(DOORBELL_SOURCE | DOORBELL_HIT);
ring = &rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX];
ring->ring_obj = NULL;
r600_ring_init(rdev, ring, 1024 * 1024);
- r = radeon_doorbell_get(rdev, &ring->doorbell_page_num);
+ r = radeon_doorbell_get(rdev, &ring->doorbell_index);
if (r)
return r;
ring = &rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX];
ring->ring_obj = NULL;
r600_ring_init(rdev, ring, 1024 * 1024);
- r = radeon_doorbell_get(rdev, &ring->doorbell_page_num);
+ r = radeon_doorbell_get(rdev, &ring->doorbell_index);
if (r)
return r;
* Add a DMA semaphore packet to the ring wait on or signal
* other rings (CIK).
*/
-void cik_sdma_semaphore_ring_emit(struct radeon_device *rdev,
+bool cik_sdma_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
radeon_ring_write(ring, SDMA_PACKET(SDMA_OPCODE_SEMAPHORE, 0, extra_bits));
radeon_ring_write(ring, addr & 0xfffffff8);
radeon_ring_write(ring, upper_32_bits(addr) & 0xffffffff);
+
+ return true;
}
/**
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
cur_size_in_bytes = size_in_bytes;
radeon_ring_write(ring, 0); /* src/dst endian swap */
radeon_ring_write(ring, src_offset & 0xffffffff);
radeon_ring_write(ring, upper_32_bits(src_offset) & 0xffffffff);
- radeon_ring_write(ring, dst_offset & 0xfffffffc);
+ radeon_ring_write(ring, dst_offset & 0xffffffff);
radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xffffffff);
src_offset += cur_size_in_bytes;
dst_offset += cur_size_in_bytes;
static int cypress_pcie_performance_request(struct radeon_device *rdev,
u8 perf_req, bool advertise)
{
+#if defined(CONFIG_ACPI)
struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+#endif
u32 tmp;
udelay(10);
struct radeon_device *rdev = encoder->dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
- u32 offset = dig->afmt->offset;
+ u32 offset;
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
+ offset = dig->afmt->offset;
+
WREG32(AFMT_AUDIO_SRC_CONTROL + offset,
AFMT_AUDIO_SRC_SELECT(dig->afmt->pin->id));
}
struct radeon_connector *radeon_connector = NULL;
u32 tmp = 0, offset;
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
offset = dig->afmt->pin->offset;
u8 *sadb;
int sad_count;
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
offset = dig->afmt->pin->offset;
}
sad_count = drm_edid_to_speaker_allocation(radeon_connector->edid, &sadb);
- if (sad_count < 0) {
+ if (sad_count <= 0) {
DRM_ERROR("Couldn't read Speaker Allocation Data Block: %d\n", sad_count);
return;
}
{ AZ_F0_CODEC_PIN_CONTROL_AUDIO_DESCRIPTOR13, HDMI_AUDIO_CODING_TYPE_WMA_PRO },
};
- if (!dig->afmt->pin)
+ if (!dig || !dig->afmt || !dig->afmt->pin)
return;
offset = dig->afmt->pin->offset;
}
sad_count = drm_edid_to_sad(radeon_connector->edid, &sads);
- if (sad_count < 0) {
+ if (sad_count <= 0) {
DRM_ERROR("Couldn't read SADs: %d\n", sad_count);
return;
}
rdev->audio.enabled = true;
if (ASIC_IS_DCE8(rdev))
- rdev->audio.num_pins = 7;
+ rdev->audio.num_pins = 6;
+ else if (ASIC_IS_DCE61(rdev))
+ rdev->audio.num_pins = 4;
else
rdev->audio.num_pins = 6;
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
cur_size_in_dw = size_in_dw;
}
sad_count = drm_edid_to_speaker_allocation(radeon_connector->edid, &sadb);
- if (sad_count < 0) {
+ if (sad_count <= 0) {
DRM_ERROR("Couldn't read Speaker Allocation Data Block: %d\n", sad_count);
return;
}
}
sad_count = drm_edid_to_sad(radeon_connector->edid, &sads);
- if (sad_count < 0) {
+ if (sad_count <= 0) {
DRM_ERROR("Couldn't read SADs: %d\n", sad_count);
return;
}
(rdev->pdev->device == 0x999C)) {
rdev->config.cayman.max_simds_per_se = 6;
rdev->config.cayman.max_backends_per_se = 2;
+ rdev->config.cayman.max_hw_contexts = 8;
+ rdev->config.cayman.sx_max_export_size = 256;
+ rdev->config.cayman.sx_max_export_pos_size = 64;
+ rdev->config.cayman.sx_max_export_smx_size = 192;
} else if ((rdev->pdev->device == 0x9903) ||
(rdev->pdev->device == 0x9904) ||
(rdev->pdev->device == 0x990A) ||
(rdev->pdev->device == 0x999D)) {
rdev->config.cayman.max_simds_per_se = 4;
rdev->config.cayman.max_backends_per_se = 2;
+ rdev->config.cayman.max_hw_contexts = 8;
+ rdev->config.cayman.sx_max_export_size = 256;
+ rdev->config.cayman.sx_max_export_pos_size = 64;
+ rdev->config.cayman.sx_max_export_smx_size = 192;
} else if ((rdev->pdev->device == 0x9919) ||
(rdev->pdev->device == 0x9990) ||
(rdev->pdev->device == 0x9991) ||
(rdev->pdev->device == 0x99A0)) {
rdev->config.cayman.max_simds_per_se = 3;
rdev->config.cayman.max_backends_per_se = 1;
+ rdev->config.cayman.max_hw_contexts = 4;
+ rdev->config.cayman.sx_max_export_size = 128;
+ rdev->config.cayman.sx_max_export_pos_size = 32;
+ rdev->config.cayman.sx_max_export_smx_size = 96;
} else {
rdev->config.cayman.max_simds_per_se = 2;
rdev->config.cayman.max_backends_per_se = 1;
+ rdev->config.cayman.max_hw_contexts = 4;
+ rdev->config.cayman.sx_max_export_size = 128;
+ rdev->config.cayman.sx_max_export_pos_size = 32;
+ rdev->config.cayman.sx_max_export_smx_size = 96;
}
rdev->config.cayman.max_texture_channel_caches = 2;
rdev->config.cayman.max_gprs = 256;
rdev->config.cayman.max_gs_threads = 32;
rdev->config.cayman.max_stack_entries = 512;
rdev->config.cayman.sx_num_of_sets = 8;
- rdev->config.cayman.sx_max_export_size = 256;
- rdev->config.cayman.sx_max_export_pos_size = 64;
- rdev->config.cayman.sx_max_export_smx_size = 192;
- rdev->config.cayman.max_hw_contexts = 8;
rdev->config.cayman.sq_num_cf_insts = 2;
rdev->config.cayman.sc_prim_fifo_size = 0x40;
struct ni_ps *ps = ni_get_ps(rps);
struct radeon_clock_and_voltage_limits *max_limits;
bool disable_mclk_switching;
- u32 mclk, sclk;
- u16 vddc, vddci;
+ u32 mclk;
+ u16 vddci;
u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
/* XXX validate the min clocks required for display */
+ /* adjust low state */
if (disable_mclk_switching) {
- mclk = ps->performance_levels[ps->performance_level_count - 1].mclk;
- sclk = ps->performance_levels[0].sclk;
- vddc = ps->performance_levels[0].vddc;
- vddci = ps->performance_levels[ps->performance_level_count - 1].vddci;
- } else {
- sclk = ps->performance_levels[0].sclk;
- mclk = ps->performance_levels[0].mclk;
- vddc = ps->performance_levels[0].vddc;
- vddci = ps->performance_levels[0].vddci;
+ ps->performance_levels[0].mclk =
+ ps->performance_levels[ps->performance_level_count - 1].mclk;
+ ps->performance_levels[0].vddci =
+ ps->performance_levels[ps->performance_level_count - 1].vddci;
}
- /* adjusted low state */
- ps->performance_levels[0].sclk = sclk;
- ps->performance_levels[0].mclk = mclk;
- ps->performance_levels[0].vddc = vddc;
- ps->performance_levels[0].vddci = vddci;
-
btc_skip_blacklist_clocks(rdev, max_limits->sclk, max_limits->mclk,
&ps->performance_levels[0].sclk,
&ps->performance_levels[0].mclk);
ps->performance_levels[i].vddc = ps->performance_levels[i - 1].vddc;
}
+ /* adjust remaining states */
if (disable_mclk_switching) {
mclk = ps->performance_levels[0].mclk;
+ vddci = ps->performance_levels[0].vddci;
for (i = 1; i < ps->performance_level_count; i++) {
if (mclk < ps->performance_levels[i].mclk)
mclk = ps->performance_levels[i].mclk;
+ if (vddci < ps->performance_levels[i].vddci)
+ vddci = ps->performance_levels[i].vddci;
}
for (i = 0; i < ps->performance_level_count; i++) {
ps->performance_levels[i].mclk = mclk;
static int ni_pcie_performance_request(struct radeon_device *rdev,
u8 perf_req, bool advertise)
{
+#if defined(CONFIG_ACPI)
struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
-#if defined(CONFIG_ACPI)
if ((perf_req == PCIE_PERF_REQ_PECI_GEN1) ||
(perf_req == PCIE_PERF_REQ_PECI_GEN2)) {
if (eg_pi->pcie_performance_request_registered == false)
radeon_ring_write(ring, RADEON_SW_INT_FIRE);
}
-void r100_semaphore_ring_emit(struct radeon_device *rdev,
+bool r100_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
{
/* Unused on older asics, since we don't have semaphores or multiple rings */
BUG();
+ return false;
}
int r100_copy_blit(struct radeon_device *rdev,
}
}
-void r600_semaphore_ring_emit(struct radeon_device *rdev,
+bool r600_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1));
radeon_ring_write(ring, addr & 0xffffffff);
radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+
+ return true;
}
/**
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
radeon_ring_write(ring, (WAIT_UNTIL - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
* Add a DMA semaphore packet to the ring wait on or signal
* other rings (r6xx-SI).
*/
-void r600_dma_semaphore_ring_emit(struct radeon_device *rdev,
+bool r600_dma_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_SEMAPHORE, 0, s, 0));
radeon_ring_write(ring, addr & 0xfffffffc);
radeon_ring_write(ring, upper_32_bits(addr) & 0xff);
+
+ return true;
}
/**
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
cur_size_in_dw = size_in_dw;
WREG32(DCCG_AUDIO_DTO1_MODULE, dto_modulo);
WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
}
- } else if (ASIC_IS_DCE3(rdev)) {
+ } else {
/* according to the reg specs, this should be DCE3.2 only, but in
- * practice it seems to cover DCE3.0/3.1 as well.
+ * practice it seems to cover DCE2.0/3.0/3.1 as well.
*/
if (dig->dig_encoder == 0) {
WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
WREG32(DCCG_AUDIO_DTO1_MODULE, clock * 100);
WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
}
- } else {
- /* according to the reg specs, this should be DCE2.0 and DCE3.0/3.1 */
- WREG32(AUDIO_DTO, AUDIO_DTO_PHASE(base_rate / 10) |
- AUDIO_DTO_MODULE(clock / 10));
}
}
void radeon_fence_process(struct radeon_device *rdev, int ring);
bool radeon_fence_signaled(struct radeon_fence *fence);
int radeon_fence_wait(struct radeon_fence *fence, bool interruptible);
+int radeon_fence_wait_locked(struct radeon_fence *fence);
int radeon_fence_wait_next_locked(struct radeon_device *rdev, int ring);
int radeon_fence_wait_empty_locked(struct radeon_device *rdev, int ring);
int radeon_fence_wait_any(struct radeon_device *rdev,
struct radeon_sa_bo *sa_bo;
signed waiters;
uint64_t gpu_addr;
+ struct radeon_fence *sync_to[RADEON_NUM_RINGS];
};
int radeon_semaphore_create(struct radeon_device *rdev,
struct radeon_semaphore **semaphore);
-void radeon_semaphore_emit_signal(struct radeon_device *rdev, int ring,
+bool radeon_semaphore_emit_signal(struct radeon_device *rdev, int ring,
struct radeon_semaphore *semaphore);
-void radeon_semaphore_emit_wait(struct radeon_device *rdev, int ring,
+bool radeon_semaphore_emit_wait(struct radeon_device *rdev, int ring,
struct radeon_semaphore *semaphore);
+void radeon_semaphore_sync_to(struct radeon_semaphore *semaphore,
+ struct radeon_fence *fence);
int radeon_semaphore_sync_rings(struct radeon_device *rdev,
struct radeon_semaphore *semaphore,
- int signaler, int waiter);
+ int waiting_ring);
void radeon_semaphore_free(struct radeon_device *rdev,
struct radeon_semaphore **semaphore,
struct radeon_fence *fence);
/*
* GPU doorbell structures, functions & helpers
*/
+#define RADEON_MAX_DOORBELLS 1024 /* Reserve at most 1024 doorbell slots for radeon-owned rings. */
+
struct radeon_doorbell {
- u32 num_pages;
- bool free[1024];
/* doorbell mmio */
- resource_size_t base;
- resource_size_t size;
- void __iomem *ptr;
+ resource_size_t base;
+ resource_size_t size;
+ u32 __iomem *ptr;
+ u32 num_doorbells; /* Number of doorbells actually reserved for radeon. */
+ unsigned long used[DIV_ROUND_UP(RADEON_MAX_DOORBELLS, BITS_PER_LONG)];
};
int radeon_doorbell_get(struct radeon_device *rdev, u32 *page);
struct radeon_fence *fence;
struct radeon_vm *vm;
bool is_const_ib;
- struct radeon_fence *sync_to[RADEON_NUM_RINGS];
struct radeon_semaphore *semaphore;
};
u32 pipe;
u32 queue;
struct radeon_bo *mqd_obj;
- u32 doorbell_page_num;
- u32 doorbell_offset;
+ u32 doorbell_index;
unsigned wptr_offs;
};
struct radeon_ib *ib, struct radeon_vm *vm,
unsigned size);
void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib);
-void radeon_ib_sync_to(struct radeon_ib *ib, struct radeon_fence *fence);
int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib,
struct radeon_ib *const_ib);
int radeon_ib_pool_init(struct radeon_device *rdev);
	/* command emit functions */
void (*ib_execute)(struct radeon_device *rdev, struct radeon_ib *ib);
void (*emit_fence)(struct radeon_device *rdev, struct radeon_fence *fence);
- void (*emit_semaphore)(struct radeon_device *rdev, struct radeon_ring *cp,
+ bool (*emit_semaphore)(struct radeon_device *rdev, struct radeon_ring *cp,
struct radeon_semaphore *semaphore, bool emit_wait);
void (*vm_flush)(struct radeon_device *rdev, int ridx, struct radeon_vm *vm);
unsigned sc_earlyz_tile_fifo_size;
unsigned num_tile_pipes;
- unsigned num_backends_per_se;
+ unsigned backend_enable_mask;
unsigned backend_disable_mask_per_asic;
unsigned backend_map;
unsigned num_texture_channel_caches;
unsigned sc_earlyz_tile_fifo_size;
unsigned num_tile_pipes;
- unsigned num_backends_per_se;
+ unsigned backend_enable_mask;
unsigned backend_disable_mask_per_asic;
unsigned backend_map;
unsigned num_texture_channel_caches;
unsigned tile_config;
uint32_t tile_mode_array[32];
+ uint32_t macrotile_mode_array[16];
};
union radeon_asic_config {
u32 r100_io_rreg(struct radeon_device *rdev, u32 reg);
void r100_io_wreg(struct radeon_device *rdev, u32 reg, u32 v);
-u32 cik_mm_rdoorbell(struct radeon_device *rdev, u32 offset);
-void cik_mm_wdoorbell(struct radeon_device *rdev, u32 offset, u32 v);
+u32 cik_mm_rdoorbell(struct radeon_device *rdev, u32 index);
+void cik_mm_wdoorbell(struct radeon_device *rdev, u32 index, u32 v);
/*
* Cast helper
#define RREG32_IO(reg) r100_io_rreg(rdev, (reg))
#define WREG32_IO(reg, v) r100_io_wreg(rdev, (reg), (v))
-#define RDOORBELL32(offset) cik_mm_rdoorbell(rdev, (offset))
-#define WDOORBELL32(offset, v) cik_mm_wdoorbell(rdev, (offset), (v))
+#define RDOORBELL32(index) cik_mm_rdoorbell(rdev, (index))
+#define WDOORBELL32(index, v) cik_mm_wdoorbell(rdev, (index), (v))
/*
* Indirect registers accessor
struct radeon_vm *vm,
struct radeon_fence *fence);
uint64_t radeon_vm_map_gart(struct radeon_device *rdev, uint64_t addr);
-int radeon_vm_bo_update_pte(struct radeon_device *rdev,
- struct radeon_vm *vm,
- struct radeon_bo *bo,
- struct ttm_mem_reg *mem);
+int radeon_vm_bo_update(struct radeon_device *rdev,
+ struct radeon_vm *vm,
+ struct radeon_bo *bo,
+ struct ttm_mem_reg *mem);
void radeon_vm_bo_invalidate(struct radeon_device *rdev,
struct radeon_bo *bo);
struct radeon_bo_va *radeon_vm_bo_find(struct radeon_vm *vm,
return NOTIFY_DONE;
/* Check pending SBIOS requests */
- handle = DEVICE_ACPI_HANDLE(&rdev->pdev->dev);
+ handle = ACPI_HANDLE(&rdev->pdev->dev);
count = radeon_atif_get_sbios_requests(handle, &req);
if (count <= 0)
struct radeon_atcs *atcs = &rdev->atcs;
/* Get the device handle */
- handle = DEVICE_ACPI_HANDLE(&rdev->pdev->dev);
+ handle = ACPI_HANDLE(&rdev->pdev->dev);
if (!handle)
return -EINVAL;
u32 retry = 3;
/* Get the device handle */
- handle = DEVICE_ACPI_HANDLE(&rdev->pdev->dev);
+ handle = ACPI_HANDLE(&rdev->pdev->dev);
if (!handle)
return -EINVAL;
int ret;
/* Get the device handle */
- handle = DEVICE_ACPI_HANDLE(&rdev->pdev->dev);
+ handle = ACPI_HANDLE(&rdev->pdev->dev);
/* No need to proceed if we're sure that ATIF is not supported */
if (!ASIC_IS_AVIVO(rdev) || !rdev->bios || !handle)
.bandwidth_update = &dce8_bandwidth_update,
.get_vblank_counter = &evergreen_get_vblank_counter,
.wait_for_vblank = &dce4_wait_for_vblank,
+ .set_backlight_level = &atombios_set_backlight_level,
+ .get_backlight_level = &atombios_get_backlight_level,
.hdmi_enable = &evergreen_hdmi_enable,
.hdmi_setmode = &evergreen_hdmi_setmode,
},
.copy = {
- .blit = NULL,
+ .blit = &cik_copy_cpdma,
.blit_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.dma = &cik_copy_dma,
.dma_ring_index = R600_RING_TYPE_DMA_INDEX,
.bandwidth_update = &dce8_bandwidth_update,
.get_vblank_counter = &evergreen_get_vblank_counter,
.wait_for_vblank = &dce4_wait_for_vblank,
+ .set_backlight_level = &atombios_set_backlight_level,
+ .get_backlight_level = &atombios_get_backlight_level,
.hdmi_enable = &evergreen_hdmi_enable,
.hdmi_setmode = &evergreen_hdmi_setmode,
},
.copy = {
- .blit = NULL,
+ .blit = &cik_copy_cpdma,
.blit_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.dma = &cik_copy_dma,
.dma_ring_index = R600_RING_TYPE_DMA_INDEX,
int r100_irq_process(struct radeon_device *rdev);
void r100_fence_ring_emit(struct radeon_device *rdev,
struct radeon_fence *fence);
-void r100_semaphore_ring_emit(struct radeon_device *rdev,
+bool r100_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *cp,
struct radeon_semaphore *semaphore,
bool emit_wait);
int r600_dma_cs_parse(struct radeon_cs_parser *p);
void r600_fence_ring_emit(struct radeon_device *rdev,
struct radeon_fence *fence);
-void r600_semaphore_ring_emit(struct radeon_device *rdev,
+bool r600_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *cp,
struct radeon_semaphore *semaphore,
bool emit_wait);
void r600_dma_fence_ring_emit(struct radeon_device *rdev,
struct radeon_fence *fence);
-void r600_dma_semaphore_ring_emit(struct radeon_device *rdev,
+bool r600_dma_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait);
*/
void cayman_fence_ring_emit(struct radeon_device *rdev,
struct radeon_fence *fence);
-void cayman_uvd_semaphore_emit(struct radeon_device *rdev,
- struct radeon_ring *ring,
- struct radeon_semaphore *semaphore,
- bool emit_wait);
void cayman_pcie_gart_tlb_flush(struct radeon_device *rdev);
int cayman_init(struct radeon_device *rdev);
void cayman_fini(struct radeon_device *rdev);
int cik_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk);
void cik_sdma_fence_ring_emit(struct radeon_device *rdev,
struct radeon_fence *fence);
-void cik_sdma_semaphore_ring_emit(struct radeon_device *rdev,
+bool cik_sdma_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait);
struct radeon_fence *fence);
void cik_fence_compute_ring_emit(struct radeon_device *rdev,
struct radeon_fence *fence);
-void cik_semaphore_ring_emit(struct radeon_device *rdev,
+bool cik_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *cp,
struct radeon_semaphore *semaphore,
bool emit_wait);
int uvd_v1_0_ring_test(struct radeon_device *rdev, struct radeon_ring *ring);
int uvd_v1_0_ib_test(struct radeon_device *rdev, struct radeon_ring *ring);
-void uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
+bool uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait);
struct radeon_fence *fence);
/* uvd v3.1 */
-void uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
+bool uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait);
mpll_param->dll_speed = args.ucDllSpeed;
mpll_param->bwcntl = args.ucBWCntl;
mpll_param->vco_mode =
- (args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK) ? 1 : 0;
+ (args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
mpll_param->yclk_sel =
(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
mpll_param->qdr =
*/
#include <linux/vga_switcheroo.h>
#include <linux/slab.h>
-#include <acpi/acpi.h>
-#include <acpi/acpi_bus.h>
+#include <linux/acpi.h>
#include <linux/pci.h>
#include "radeon_acpi.h"
bool atpx_detected;
/* handle for device - and atpx */
acpi_handle dhandle;
+ acpi_handle other_handle;
struct radeon_atpx atpx;
} radeon_atpx_priv;
acpi_handle dhandle, atpx_handle;
acpi_status status;
- dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return false;
status = acpi_get_handle(dhandle, "ATPX", &atpx_handle);
- if (ACPI_FAILURE(status))
+ if (ACPI_FAILURE(status)) {
+ radeon_atpx_priv.other_handle = dhandle;
return false;
-
+ }
radeon_atpx_priv.dhandle = dhandle;
radeon_atpx_priv.atpx.handle = atpx_handle;
return true;
*/
static int radeon_atpx_get_client_id(struct pci_dev *pdev)
{
- if (radeon_atpx_priv.dhandle == DEVICE_ACPI_HANDLE(&pdev->dev))
+ if (radeon_atpx_priv.dhandle == ACPI_HANDLE(&pdev->dev))
return VGA_SWITCHEROO_IGD;
else
return VGA_SWITCHEROO_DIS;
printk(KERN_INFO "VGA switcheroo: detected switching method %s handle\n",
acpi_method_name);
radeon_atpx_priv.atpx_detected = true;
+ /*
+ * On some systems hotplug events are generated for the device
+ * being switched off when ATPX is executed. They cause ACPI
+ * hotplug to trigger and attempt to remove the device from
+ * the system, which causes it to break down. Prevent that from
+ * happening by setting the no_hotplug flag for the involved
+ * ACPI device objects.
+ */
+ acpi_bus_no_hotplug(radeon_atpx_priv.dhandle);
+ acpi_bus_no_hotplug(radeon_atpx_priv.other_handle);
return true;
}
return false;
return false;
while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
- dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
continue;
if (!p->relocs[i].robj)
continue;
- radeon_ib_sync_to(&p->ib, p->relocs[i].robj->tbo.sync_obj);
+ radeon_semaphore_sync_to(p->ib.semaphore,
+ p->relocs[i].robj->tbo.sync_obj);
}
}
struct radeon_bo *bo;
int r;
- r = radeon_vm_bo_update_pte(rdev, vm, rdev->ring_tmp_bo.bo, &rdev->ring_tmp_bo.bo->tbo.mem);
+ r = radeon_vm_bo_update(rdev, vm, rdev->ring_tmp_bo.bo, &rdev->ring_tmp_bo.bo->tbo.mem);
if (r) {
return r;
}
list_for_each_entry(lobj, &parser->validated, tv.head) {
bo = lobj->bo;
- r = radeon_vm_bo_update_pte(parser->rdev, vm, bo, &bo->tbo.mem);
+ r = radeon_vm_bo_update(parser->rdev, vm, bo, &bo->tbo.mem);
if (r) {
return r;
}
goto out;
}
radeon_cs_sync_rings(parser);
- radeon_ib_sync_to(&parser->ib, vm->fence);
- radeon_ib_sync_to(&parser->ib, radeon_vm_grab_id(
- rdev, vm, parser->ring));
+ radeon_semaphore_sync_to(parser->ib.semaphore, vm->fence);
+ radeon_semaphore_sync_to(parser->ib.semaphore,
+ radeon_vm_grab_id(rdev, vm, parser->ring));
if ((rdev->family >= CHIP_TAHITI) &&
(parser->chunk_const_ib_idx != -1)) {
*/
int radeon_doorbell_init(struct radeon_device *rdev)
{
- int i;
-
/* doorbell bar mapping */
rdev->doorbell.base = pci_resource_start(rdev->pdev, 2);
rdev->doorbell.size = pci_resource_len(rdev->pdev, 2);
- /* limit to 4 MB for now */
- if (rdev->doorbell.size > (4 * 1024 * 1024))
- rdev->doorbell.size = 4 * 1024 * 1024;
+ rdev->doorbell.num_doorbells = min_t(u32, rdev->doorbell.size / sizeof(u32), RADEON_MAX_DOORBELLS);
+ if (rdev->doorbell.num_doorbells == 0)
+ return -EINVAL;
- rdev->doorbell.ptr = ioremap(rdev->doorbell.base, rdev->doorbell.size);
+ rdev->doorbell.ptr = ioremap(rdev->doorbell.base, rdev->doorbell.num_doorbells * sizeof(u32));
if (rdev->doorbell.ptr == NULL) {
return -ENOMEM;
}
DRM_INFO("doorbell mmio base: 0x%08X\n", (uint32_t)rdev->doorbell.base);
DRM_INFO("doorbell mmio size: %u\n", (unsigned)rdev->doorbell.size);
- rdev->doorbell.num_pages = rdev->doorbell.size / PAGE_SIZE;
+ memset(&rdev->doorbell.used, 0, sizeof(rdev->doorbell.used));
- for (i = 0; i < rdev->doorbell.num_pages; i++) {
- rdev->doorbell.free[i] = true;
- }
return 0;
}
}
/**
- * radeon_doorbell_get - Allocate a doorbell page
+ * radeon_doorbell_get - Allocate a doorbell entry
*
* @rdev: radeon_device pointer
- * @doorbell: doorbell page number
+ * @doorbell: doorbell index
*
- * Allocate a doorbell page for use by the driver (all asics).
+ * Allocate a doorbell for use by the driver (all asics).
* Returns 0 on success or -EINVAL on failure.
*/
int radeon_doorbell_get(struct radeon_device *rdev, u32 *doorbell)
{
- int i;
-
- for (i = 0; i < rdev->doorbell.num_pages; i++) {
- if (rdev->doorbell.free[i]) {
- rdev->doorbell.free[i] = false;
- *doorbell = i;
- return 0;
- }
+ unsigned long offset = find_first_zero_bit(rdev->doorbell.used, rdev->doorbell.num_doorbells);
+ if (offset < rdev->doorbell.num_doorbells) {
+ __set_bit(offset, rdev->doorbell.used);
+ *doorbell = offset;
+ return 0;
+ } else {
+ return -EINVAL;
}
- return -EINVAL;
}
/**
- * radeon_doorbell_free - Free a doorbell page
+ * radeon_doorbell_free - Free a doorbell entry
*
* @rdev: radeon_device pointer
- * @doorbell: doorbell page number
+ * @doorbell: doorbell index
*
- * Free a doorbell page allocated for use by the driver (all asics)
+ * Free a doorbell allocated for use by the driver (all asics)
*/
void radeon_doorbell_free(struct radeon_device *rdev, u32 doorbell)
{
- if (doorbell < rdev->doorbell.num_pages)
- rdev->doorbell.free[doorbell] = true;
+ if (doorbell < rdev->doorbell.num_doorbells)
+ __clear_bit(doorbell, rdev->doorbell.used);
}
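
The rewritten allocator above swaps a bool-per-page array for a fixed-size bitmap scanned with find_first_zero_bit(). The same allocate/free pattern in a self-contained userspace sketch (db_alloc, db_free and MAX_SLOTS are illustrative names, not driver API):

#include <limits.h>
#include <stdio.h>

#define MAX_SLOTS 1024
#define SLOT_BITS (sizeof(unsigned long) * CHAR_BIT)

static unsigned long used[(MAX_SLOTS + SLOT_BITS - 1) / SLOT_BITS];

/* open-coded find_first_zero_bit() + __set_bit() */
static int db_alloc(unsigned int *slot)
{
	unsigned int i;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (!(used[i / SLOT_BITS] & (1UL << (i % SLOT_BITS)))) {
			used[i / SLOT_BITS] |= 1UL << (i % SLOT_BITS);
			*slot = i;
			return 0;
		}
	}
	return -1;	/* all slots taken; the driver returns -EINVAL */
}

/* __clear_bit() analog, with the same bounds check as the driver */
static void db_free(unsigned int slot)
{
	if (slot < MAX_SLOTS)
		used[slot / SLOT_BITS] &= ~(1UL << (slot % SLOT_BITS));
}

int main(void)
{
	unsigned int a, b;

	db_alloc(&a);
	db_alloc(&b);
	db_free(a);
	db_alloc(&a);			/* reuses the freed slot */
	printf("a=%u b=%u\n", a, b);	/* a=0 b=1 */
	return 0;
}
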
/*
* 2.32.0 - new info request for rings working
* 2.33.0 - Add SI tiling mode array query
* 2.34.0 - Add CIK tiling mode array query
+ * 2.35.0 - Add CIK macrotile mode array query
+ * 2.36.0 - Fix CIK DCE tiling setup
*/
#define KMS_DRIVER_MAJOR 2
-#define KMS_DRIVER_MINOR 34
+#define KMS_DRIVER_MINOR 36
#define KMS_DRIVER_PATCHLEVEL 0
int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
int radeon_driver_unload_kms(struct drm_device *dev);
#endif
};
-
-static void
-radeon_pci_shutdown(struct pci_dev *pdev)
-{
- struct drm_device *dev = pci_get_drvdata(pdev);
-
- radeon_driver_unload_kms(dev);
-}
-
static struct drm_driver kms_driver = {
.driver_features =
DRIVER_USE_AGP |
.probe = radeon_pci_probe,
.remove = radeon_pci_remove,
.driver.pm = &radeon_pm_ops,
- .shutdown = radeon_pci_shutdown,
};
static int __init radeon_init(void)
* 1.31- Add support for num Z pipes from GET_PARAM
* 1.32- fixes for rv740 setup
* 1.33- Add r6xx/r7xx const buffer support
+ * 1.34- fix evergreen/cayman GS register
*/
#define DRIVER_MAJOR 1
-#define DRIVER_MINOR 33
+#define DRIVER_MINOR 34
#define DRIVER_PATCHLEVEL 0
long radeon_drm_ioctl(struct file *filp,
return 0;
}
+/**
+ * radeon_fence_wait_locked - wait for a fence to signal
+ *
+ * @fence: radeon fence object
+ *
+ * Wait for the requested fence to signal (all asics).
+ * Returns 0 if the fence has passed, error for all other cases.
+ */
+int radeon_fence_wait_locked(struct radeon_fence *fence)
+{
+ uint64_t seq[RADEON_NUM_RINGS] = {};
+ int r;
+
+ if (fence == NULL) {
+ WARN(1, "Querying an invalid fence : %p !\n", fence);
+ return -EINVAL;
+ }
+
+ seq[fence->ring] = fence->seq;
+ if (seq[fence->ring] == RADEON_FENCE_SIGNALED_SEQ)
+ return 0;
+
+ r = radeon_fence_wait_seq(fence->rdev, seq, false, false);
+ if (r)
+ return r;
+
+ fence->seq = RADEON_FENCE_SIGNALED_SEQ;
+ return 0;
+}
+
/**
* radeon_fence_wait_next_locked - wait for the next fence to signal
*
#include <drm/radeon_drm.h>
#include "radeon.h"
#include "radeon_reg.h"
+#include "radeon_trace.h"
/*
* GART
radeon_asic_vm_set_page(rdev, &ib, vm->pd_gpu_addr,
0, pd_entries, 0, 0);
- radeon_ib_sync_to(&ib, vm->fence);
+ radeon_semaphore_sync_to(ib.semaphore, vm->fence);
r = radeon_ib_schedule(rdev, &ib, NULL);
if (r) {
radeon_ib_free(rdev, &ib);
for (i = 0; i < 2; ++i) {
if (choices[i]) {
vm->id = choices[i];
+ trace_radeon_vm_grab_id(vm->id, ring);
return rdev->vm_manager.active[choices[i]];
}
}
}
/**
- * radeon_vm_bo_update_pte - map a bo into the vm page table
+ * radeon_vm_bo_update - map a bo into the vm page table
*
* @rdev: radeon_device pointer
* @vm: requested vm
*
 * The object has to be reserved, and the global and local mutex must be locked!
*/
-int radeon_vm_bo_update_pte(struct radeon_device *rdev,
- struct radeon_vm *vm,
- struct radeon_bo *bo,
- struct ttm_mem_reg *mem)
+int radeon_vm_bo_update(struct radeon_device *rdev,
+ struct radeon_vm *vm,
+ struct radeon_bo *bo,
+ struct ttm_mem_reg *mem)
{
struct radeon_ib ib;
struct radeon_bo_va *bo_va;
bo_va->valid = false;
}
+ trace_radeon_vm_bo_update(bo_va);
+
nptes = radeon_bo_ngpu_pages(bo);
/* assume two extra pdes in case the mapping overlaps the borders */
return -ENOMEM;
r = radeon_ib_get(rdev, R600_RING_TYPE_DMA_INDEX, &ib, NULL, ndw * 4);
+ if (r)
+ return r;
ib.length_dw = 0;
r = radeon_vm_update_pdes(rdev, vm, &ib, bo_va->soffset, bo_va->eoffset);
radeon_vm_update_ptes(rdev, vm, &ib, bo_va->soffset, bo_va->eoffset,
addr, radeon_vm_page_flags(bo_va->flags));
- radeon_ib_sync_to(&ib, vm->fence);
+ radeon_semaphore_sync_to(ib.semaphore, vm->fence);
r = radeon_ib_schedule(rdev, &ib, NULL);
if (r) {
radeon_ib_free(rdev, &ib);
mutex_lock(&rdev->vm_manager.lock);
mutex_lock(&bo_va->vm->mutex);
if (bo_va->soffset) {
- r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL);
+ r = radeon_vm_bo_update(rdev, bo_va->vm, bo_va->bo, NULL);
}
mutex_unlock(&rdev->vm_manager.lock);
list_del(&bo_va->vm_list);
break;
case RADEON_INFO_BACKEND_MAP:
if (rdev->family >= CHIP_BONAIRE)
- return -EINVAL;
+ *value = rdev->config.cik.backend_map;
else if (rdev->family >= CHIP_TAHITI)
*value = rdev->config.si.backend_map;
else if (rdev->family >= CHIP_CAYMAN)
return -EINVAL;
}
break;
+ case RADEON_INFO_CIK_MACROTILE_MODE_ARRAY:
+ if (rdev->family >= CHIP_BONAIRE) {
+ value = rdev->config.cik.macrotile_mode_array;
+ value_size = sizeof(uint32_t)*16;
+ } else {
+ DRM_DEBUG_KMS("macrotile mode array is cik+ only!\n");
+ return -EINVAL;
+ }
+ break;
case RADEON_INFO_SI_CP_DMA_COMPUTE:
*value = 1;
break;
+ case RADEON_INFO_SI_BACKEND_ENABLED_MASK:
+ if (rdev->family >= CHIP_BONAIRE) {
+ *value = rdev->config.cik.backend_enable_mask;
+ } else if (rdev->family >= CHIP_TAHITI) {
+ *value = rdev->config.si.backend_enable_mask;
+ } else {
+ DRM_DEBUG_KMS("BACKEND_ENABLED_MASK is si+ only!\n");
+ }
+ break;
default:
DRM_DEBUG_KMS("Invalid request %d\n", info->request);
return -EINVAL;
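
From userspace, the two new queries follow the usual RADEON_INFO convention: the caller stores a user pointer in drm_radeon_info.value and the kernel copies the result back through it. A hypothetical libdrm-based probe, assuming a radeon_drm.h new enough to define both request codes (error handling trimmed):

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <radeon_drm.h>

/* Query the CIK macrotile mode array (16 dwords, cik+ only). */
static int query_macrotile(int fd, uint32_t macrotile[16])
{
	struct drm_radeon_info info;

	memset(&info, 0, sizeof(info));
	info.request = RADEON_INFO_CIK_MACROTILE_MODE_ARRAY;
	info.value = (uintptr_t)macrotile;	/* kernel copies 16 u32s here */
	return drmCommandWriteRead(fd, DRM_RADEON_INFO, &info, sizeof(info));
}

/* Query the render-backend enable mask (si+ only, single u32). */
static int query_backend_mask(int fd, uint32_t *mask)
{
	struct drm_radeon_info info;

	memset(&info, 0, sizeof(info));
	info.request = RADEON_INFO_SI_BACKEND_ENABLED_MASK;
	info.value = (uintptr_t)mask;
	return drmCommandWriteRead(fd, DRM_RADEON_INFO, &info, sizeof(info));
}
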
/* Pin framebuffer & get tilling informations */
obj = radeon_fb->obj;
rbo = gem_to_radeon_bo(obj);
+retry:
r = radeon_bo_reserve(rbo, false);
if (unlikely(r != 0))
return r;
&base);
if (unlikely(r != 0)) {
radeon_bo_unreserve(rbo);
+
+ /* On old GPUs like the RN50 with little vram, pinning can fail because
+ * the current fb is taking all the space needed. So instead of unpinning
+ * the old buffer after pinning the new one, first unpin the old one
+ * and then retry pinning the new one.
+ *
+ * As only the master can set a mode, only the master can pin, and it is
+ * unlikely the master client will race with itself, especially
+ * on those old GPUs with a single crtc.
+ *
+ * We don't shut down the display controller because the new buffer
+ * will end up in the same spot.
+ */
+ if (!atomic && fb && fb != crtc->fb) {
+ struct radeon_bo *old_rbo;
+ unsigned long nsize, osize;
+
+ old_rbo = gem_to_radeon_bo(to_radeon_framebuffer(fb)->obj);
+ osize = radeon_bo_size(old_rbo);
+ nsize = radeon_bo_size(rbo);
+ if (nsize <= osize && !radeon_bo_reserve(old_rbo, false)) {
+ radeon_bo_unpin(old_rbo);
+ radeon_bo_unreserve(old_rbo);
+ fb = NULL;
+ goto retry;
+ }
+ }
return -EINVAL;
}
radeon_bo_get_tiling_flags(rbo, &tiling_flags, NULL);
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = dev_get_drvdata(dev);
- struct radeon_device *rdev = ddev->dev_private;
+ struct radeon_device *rdev = dev_get_drvdata(dev);
int temp;
if (rdev->asic->pm.get_temperature)
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = dev_get_drvdata(dev);
- struct radeon_device *rdev = ddev->dev_private;
+ struct radeon_device *rdev = dev_get_drvdata(dev);
int hyst = to_sensor_dev_attr(attr)->index;
int temp;
return snprintf(buf, PAGE_SIZE, "%d\n", temp);
}
-static ssize_t radeon_hwmon_show_name(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- return sprintf(buf, "radeon\n");
-}
-
static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, radeon_hwmon_show_temp, NULL, 0);
static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, radeon_hwmon_show_temp_thresh, NULL, 0);
static SENSOR_DEVICE_ATTR(temp1_crit_hyst, S_IRUGO, radeon_hwmon_show_temp_thresh, NULL, 1);
-static SENSOR_DEVICE_ATTR(name, S_IRUGO, radeon_hwmon_show_name, NULL, 0);
static struct attribute *hwmon_attributes[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_crit.dev_attr.attr,
&sensor_dev_attr_temp1_crit_hyst.dev_attr.attr,
- &sensor_dev_attr_name.dev_attr.attr,
NULL
};
struct attribute *attr, int index)
{
struct device *dev = container_of(kobj, struct device, kobj);
- struct drm_device *ddev = dev_get_drvdata(dev);
- struct radeon_device *rdev = ddev->dev_private;
+ struct radeon_device *rdev = dev_get_drvdata(dev);
/* Skip limit attributes if DPM is not enabled */
if (rdev->pm.pm_method != PM_METHOD_DPM &&
.is_visible = hwmon_attributes_visible,
};
+static const struct attribute_group *hwmon_groups[] = {
+ &hwmon_attrgroup,
+ NULL
+};
+
static int radeon_hwmon_init(struct radeon_device *rdev)
{
int err = 0;
-
- rdev->pm.int_hwmon_dev = NULL;
+ struct device *hwmon_dev;
switch (rdev->pm.int_thermal_type) {
case THERMAL_TYPE_RV6XX:
case THERMAL_TYPE_KV:
if (rdev->asic->pm.get_temperature == NULL)
return err;
- rdev->pm.int_hwmon_dev = hwmon_device_register(rdev->dev);
- if (IS_ERR(rdev->pm.int_hwmon_dev)) {
- err = PTR_ERR(rdev->pm.int_hwmon_dev);
+ hwmon_dev = hwmon_device_register_with_groups(rdev->dev,
+ "radeon", rdev,
+ hwmon_groups);
+ if (IS_ERR(hwmon_dev)) {
+ err = PTR_ERR(hwmon_dev);
dev_err(rdev->dev,
"Unable to register hwmon device: %d\n", err);
- break;
- }
- dev_set_drvdata(rdev->pm.int_hwmon_dev, rdev->ddev);
- err = sysfs_create_group(&rdev->pm.int_hwmon_dev->kobj,
- &hwmon_attrgroup);
- if (err) {
- dev_err(rdev->dev,
- "Unable to create hwmon sysfs file: %d\n", err);
- hwmon_device_unregister(rdev->dev);
}
break;
default:
return err;
}
-static void radeon_hwmon_fini(struct radeon_device *rdev)
-{
- if (rdev->pm.int_hwmon_dev) {
- sysfs_remove_group(&rdev->pm.int_hwmon_dev->kobj, &hwmon_attrgroup);
- hwmon_device_unregister(rdev->pm.int_hwmon_dev);
- }
-}
-
static void radeon_dpm_thermal_work_handler(struct work_struct *work)
{
struct radeon_device *rdev =
case CHIP_RS780:
case CHIP_RS880:
case CHIP_CAYMAN:
- case CHIP_ARUBA:
case CHIP_BONAIRE:
case CHIP_KABINI:
case CHIP_KAVERI:
case CHIP_BARTS:
case CHIP_TURKS:
case CHIP_CAICOS:
+ case CHIP_ARUBA:
case CHIP_TAHITI:
case CHIP_PITCAIRN:
case CHIP_VERDE:
if (rdev->pm.power_state)
kfree(rdev->pm.power_state);
-
- radeon_hwmon_fini(rdev);
}
static void radeon_pm_fini_dpm(struct radeon_device *rdev)
if (rdev->pm.power_state)
kfree(rdev->pm.power_state);
-
- radeon_hwmon_fini(rdev);
}
void radeon_pm_fini(struct radeon_device *rdev)
struct radeon_ib *ib, struct radeon_vm *vm,
unsigned size)
{
- int i, r;
+ int r;
r = radeon_sa_bo_new(rdev, &rdev->ring_tmp_bo, &ib->sa_bo, size, 256, true);
if (r) {
ib->gpu_addr = radeon_sa_bo_gpu_addr(ib->sa_bo);
}
ib->is_const_ib = false;
- for (i = 0; i < RADEON_NUM_RINGS; ++i)
- ib->sync_to[i] = NULL;
return 0;
}
radeon_fence_unref(&ib->fence);
}
-/**
- * radeon_ib_sync_to - sync to fence before executing the IB
- *
- * @ib: IB object to add fence to
- * @fence: fence to sync to
- *
- * Sync to the fence before executing the IB
- */
-void radeon_ib_sync_to(struct radeon_ib *ib, struct radeon_fence *fence)
-{
- struct radeon_fence *other;
-
- if (!fence)
- return;
-
- other = ib->sync_to[fence->ring];
- ib->sync_to[fence->ring] = radeon_fence_later(fence, other);
-}
-
/**
* radeon_ib_schedule - schedule an IB (Indirect Buffer) on the ring
*
struct radeon_ib *const_ib)
{
struct radeon_ring *ring = &rdev->ring[ib->ring];
- bool need_sync = false;
- int i, r = 0;
+ int r = 0;
if (!ib->length_dw || !ring->ready) {
/* TODO: Nothing in the ib we should report. */
dev_err(rdev->dev, "scheduling IB failed (%d).\n", r);
return r;
}
- for (i = 0; i < RADEON_NUM_RINGS; ++i) {
- struct radeon_fence *fence = ib->sync_to[i];
- if (radeon_fence_need_sync(fence, ib->ring)) {
- need_sync = true;
- radeon_semaphore_sync_rings(rdev, ib->semaphore,
- fence->ring, ib->ring);
- radeon_fence_note_sync(fence, ib->ring);
- }
- }
- /* immediately free semaphore when we don't need to sync */
- if (!need_sync) {
- radeon_semaphore_free(rdev, &ib->semaphore, NULL);
+
+ /* sync with other rings */
+ r = radeon_semaphore_sync_rings(rdev, ib->semaphore, ib->ring);
+ if (r) {
+ dev_err(rdev->dev, "failed to sync rings (%d)\n", r);
+ radeon_ring_unlock_undo(rdev, ring);
+ return r;
}
+
/* if we can't remember our last VM flush then flush now! */
/* XXX figure out why we have to flush for every IB */
if (ib->vm /*&& !ib->vm->last_flush*/) {
*/
#include <drm/drmP.h>
#include "radeon.h"
-
+#include "radeon_trace.h"
int radeon_semaphore_create(struct radeon_device *rdev,
struct radeon_semaphore **semaphore)
{
- int r;
+ int i, r;
*semaphore = kmalloc(sizeof(struct radeon_semaphore), GFP_KERNEL);
if (*semaphore == NULL) {
(*semaphore)->waiters = 0;
(*semaphore)->gpu_addr = radeon_sa_bo_gpu_addr((*semaphore)->sa_bo);
*((uint64_t*)radeon_sa_bo_cpu_addr((*semaphore)->sa_bo)) = 0;
+
+ for (i = 0; i < RADEON_NUM_RINGS; ++i)
+ (*semaphore)->sync_to[i] = NULL;
+
return 0;
}
-void radeon_semaphore_emit_signal(struct radeon_device *rdev, int ring,
+bool radeon_semaphore_emit_signal(struct radeon_device *rdev, int ridx,
struct radeon_semaphore *semaphore)
{
- --semaphore->waiters;
- radeon_semaphore_ring_emit(rdev, ring, &rdev->ring[ring], semaphore, false);
+ struct radeon_ring *ring = &rdev->ring[ridx];
+
+ trace_radeon_semaphore_signale(ridx, semaphore);
+
+ if (radeon_semaphore_ring_emit(rdev, ridx, ring, semaphore, false)) {
+ --semaphore->waiters;
+
+ /* for debugging lockup only, used by sysfs debug files */
+ ring->last_semaphore_signal_addr = semaphore->gpu_addr;
+ return true;
+ }
+ return false;
}
-void radeon_semaphore_emit_wait(struct radeon_device *rdev, int ring,
+bool radeon_semaphore_emit_wait(struct radeon_device *rdev, int ridx,
struct radeon_semaphore *semaphore)
{
- ++semaphore->waiters;
- radeon_semaphore_ring_emit(rdev, ring, &rdev->ring[ring], semaphore, true);
+ struct radeon_ring *ring = &rdev->ring[ridx];
+
+ trace_radeon_semaphore_wait(ridx, semaphore);
+
+ if (radeon_semaphore_ring_emit(rdev, ridx, ring, semaphore, true)) {
+ ++semaphore->waiters;
+
+ /* for debugging lockup only, used by sysfs debug files */
+ ring->last_semaphore_wait_addr = semaphore->gpu_addr;
+ return true;
+ }
+ return false;
+}
+
+/**
+ * radeon_semaphore_sync_to - use the semaphore to sync to a fence
+ *
+ * @semaphore: semaphore object to add fence to
+ * @fence: fence to sync to
+ *
+ * Sync to the fence using this semaphore object
+ */
+void radeon_semaphore_sync_to(struct radeon_semaphore *semaphore,
+ struct radeon_fence *fence)
+{
+ struct radeon_fence *other;
+
+ if (!fence)
+ return;
+
+ other = semaphore->sync_to[fence->ring];
+ semaphore->sync_to[fence->ring] = radeon_fence_later(fence, other);
}
-/* caller must hold ring lock */
+/**
+ * radeon_semaphore_sync_rings - sync ring to all registered fences
+ *
+ * @rdev: radeon_device pointer
+ * @semaphore: semaphore object to use for sync
+ * @ring: ring that needs sync
+ *
+ * Ensure that all registered fences are signaled before letting
+ * the ring continue. The caller must hold the ring lock.
+ */
int radeon_semaphore_sync_rings(struct radeon_device *rdev,
struct radeon_semaphore *semaphore,
- int signaler, int waiter)
+ int ring)
{
- int r;
+ int i, r;
- /* no need to signal and wait on the same ring */
- if (signaler == waiter) {
- return 0;
- }
+ for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+ struct radeon_fence *fence = semaphore->sync_to[i];
- /* prevent GPU deadlocks */
- if (!rdev->ring[signaler].ready) {
- dev_err(rdev->dev, "Trying to sync to a disabled ring!");
- return -EINVAL;
- }
+ /* check if we really need to sync */
+ if (!radeon_fence_need_sync(fence, ring))
+ continue;
- r = radeon_ring_alloc(rdev, &rdev->ring[signaler], 8);
- if (r) {
- return r;
- }
- radeon_semaphore_emit_signal(rdev, signaler, semaphore);
- radeon_ring_commit(rdev, &rdev->ring[signaler]);
+ /* prevent GPU deadlocks */
+ if (!rdev->ring[i].ready) {
+ dev_err(rdev->dev, "Syncing to a disabled ring!");
+ return -EINVAL;
+ }
- /* we assume caller has already allocated space on waiters ring */
- radeon_semaphore_emit_wait(rdev, waiter, semaphore);
+ /* allocate enough space for sync command */
+ r = radeon_ring_alloc(rdev, &rdev->ring[i], 16);
+ if (r) {
+ return r;
+ }
- /* for debugging lockup only, used by sysfs debug files */
- rdev->ring[signaler].last_semaphore_signal_addr = semaphore->gpu_addr;
- rdev->ring[waiter].last_semaphore_wait_addr = semaphore->gpu_addr;
+ /* emit the signal semaphore */
+ if (!radeon_semaphore_emit_signal(rdev, i, semaphore)) {
+ /* signaling wasn't successful, wait manually */
+ radeon_ring_undo(&rdev->ring[i]);
+ radeon_fence_wait_locked(fence);
+ continue;
+ }
+
+ /* we assume the caller has already allocated space on the waiter's ring */
+ if (!radeon_semaphore_emit_wait(rdev, ring, semaphore)) {
+ /* waiting wasn't successful, wait manually */
+ radeon_ring_undo(&rdev->ring[i]);
+ radeon_fence_wait_locked(fence);
+ continue;
+ }
+
+ radeon_ring_commit(rdev, &rdev->ring[i]);
+ radeon_fence_note_sync(fence, ring);
+ }
return 0;
}
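
The rewrite above moves dependency tracking from the IB into the semaphore object: callers register every fence with radeon_semaphore_sync_to(), which keeps only the latest fence per ring, and a single radeon_semaphore_sync_rings() call then emits the signal/wait pairs where needed, falling back to a CPU-side radeon_fence_wait_locked() when a ring cannot emit semaphores. A userspace model of the "latest fence per ring" bookkeeping (struct fence, NUM_RINGS and sem_sync_to are illustrative, not the driver's types):

#include <stdint.h>
#include <stdio.h>

#define NUM_RINGS 8

struct fence {
	int ring;
	uint64_t seq;
};

static struct fence *sync_to[NUM_RINGS];

/* analog of sync_to[] + radeon_fence_later(): keep whichever signals last */
static void sem_sync_to(struct fence *f)
{
	if (!f)
		return;
	if (!sync_to[f->ring] || sync_to[f->ring]->seq < f->seq)
		sync_to[f->ring] = f;
}

int main(void)
{
	struct fence a = { .ring = 2, .seq = 10 };
	struct fence b = { .ring = 2, .seq = 7 };

	sem_sync_to(&a);
	sem_sync_to(&b);	/* older fence on the same ring is ignored */
	printf("ring 2 waits for seq %llu\n",
	       (unsigned long long)sync_to[2]->seq);	/* prints 10 */
	return 0;
}
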
__entry->fences)
);
+TRACE_EVENT(radeon_vm_grab_id,
+ TP_PROTO(unsigned vmid, int ring),
+ TP_ARGS(vmid, ring),
+ TP_STRUCT__entry(
+ __field(u32, vmid)
+ __field(u32, ring)
+ ),
+
+ TP_fast_assign(
+ __entry->vmid = vmid;
+ __entry->ring = ring;
+ ),
+ TP_printk("vmid=%u, ring=%u", __entry->vmid, __entry->ring)
+);
+
+TRACE_EVENT(radeon_vm_bo_update,
+ TP_PROTO(struct radeon_bo_va *bo_va),
+ TP_ARGS(bo_va),
+ TP_STRUCT__entry(
+ __field(u64, soffset)
+ __field(u64, eoffset)
+ __field(u32, flags)
+ ),
+
+ TP_fast_assign(
+ __entry->soffset = bo_va->soffset;
+ __entry->eoffset = bo_va->eoffset;
+ __entry->flags = bo_va->flags;
+ ),
+ TP_printk("soffs=%010llx, eoffs=%010llx, flags=%08x",
+ __entry->soffset, __entry->eoffset, __entry->flags)
+);
+
TRACE_EVENT(radeon_vm_set_page,
TP_PROTO(uint64_t pe, uint64_t addr, unsigned count,
uint32_t incr, uint32_t flags),
TP_ARGS(dev, seqno)
);
+DECLARE_EVENT_CLASS(radeon_semaphore_request,
+
+ TP_PROTO(int ring, struct radeon_semaphore *sem),
+
+ TP_ARGS(ring, sem),
+
+ TP_STRUCT__entry(
+ __field(int, ring)
+ __field(signed, waiters)
+ __field(uint64_t, gpu_addr)
+ ),
+
+ TP_fast_assign(
+ __entry->ring = ring;
+ __entry->waiters = sem->waiters;
+ __entry->gpu_addr = sem->gpu_addr;
+ ),
+
+ TP_printk("ring=%u, waiters=%d, addr=%010Lx", __entry->ring,
+ __entry->waiters, __entry->gpu_addr)
+);
+
+DEFINE_EVENT(radeon_semaphore_request, radeon_semaphore_signale,
+
+ TP_PROTO(int ring, struct radeon_semaphore *sem),
+
+ TP_ARGS(ring, sem)
+);
+
+DEFINE_EVENT(radeon_semaphore_request, radeon_semaphore_wait,
+
+ TP_PROTO(int ring, struct radeon_semaphore *sem),
+
+ TP_ARGS(ring, sem)
+);
+
#endif
/* This part must be outside protection */
return -EINVAL;
}
- if ((start >> 28) != (end >> 28)) {
+ if ((start >> 28) != ((end - 1) >> 28)) {
DRM_ERROR("reloc %LX-%LX crossing 256MB boundary!\n",
start, end);
return -EINVAL;
0x000089AC VGT_COMPUTE_THREAD_GOURP_SIZE
0x000089B0 VGT_HS_OFFCHIP_PARAM
0x00008A14 PA_CL_ENHANCE
-0x00008A60 PA_SC_LINE_STIPPLE_VALUE
+0x00008A60 PA_SU_LINE_STIPPLE_VALUE
0x00008B10 PA_SC_LINE_STIPPLE_STATE
0x00008BF0 PA_SC_ENHANCE
0x00008D8C SQ_DYN_GPR_CNTL_PS_FLUSH_REQ
0x00028B84 PA_SU_POLY_OFFSET_FRONT_OFFSET
0x00028B88 PA_SU_POLY_OFFSET_BACK_SCALE
0x00028B8C PA_SU_POLY_OFFSET_BACK_OFFSET
-0x00028B74 VGT_GS_INSTANCE_CNT
+0x00028B90 VGT_GS_INSTANCE_CNT
0x00028BD4 PA_SC_CENTROID_PRIORITY_0
0x00028BD8 PA_SC_CENTROID_PRIORITY_1
0x00028BDC PA_SC_LINE_CNTL
0x000089A4 VGT_COMPUTE_START_Z
0x000089AC VGT_COMPUTE_THREAD_GOURP_SIZE
0x00008A14 PA_CL_ENHANCE
-0x00008A60 PA_SC_LINE_STIPPLE_VALUE
+0x00008A60 PA_SU_LINE_STIPPLE_VALUE
0x00008B10 PA_SC_LINE_STIPPLE_STATE
0x00008BF0 PA_SC_ENHANCE
0x00008D8C SQ_DYN_GPR_CNTL_PS_FLUSH_REQ
0x00028B84 PA_SU_POLY_OFFSET_FRONT_OFFSET
0x00028B88 PA_SU_POLY_OFFSET_BACK_SCALE
0x00028B8C PA_SU_POLY_OFFSET_BACK_OFFSET
-0x00028B74 VGT_GS_INSTANCE_CNT
+0x00028B90 VGT_GS_INSTANCE_CNT
0x00028C00 PA_SC_LINE_CNTL
0x00028C08 PA_SU_VTX_CNTL
0x00028C0C PA_CL_GB_VERT_CLIP_ADJ
base = RREG32_MC(R_000100_MCCFG_FB_LOCATION);
base = G_000100_MC_FB_START(base) << 16;
rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
+ /* Some boards seem to be configured for 128MB of sideport memory,
+ * but really only have 64MB. Just skip the sideport and use
+ * UMA memory.
+ */
+ if (rdev->mc.igp_sideport_enabled &&
+ (rdev->mc.real_vram_size == (384 * 1024 * 1024))) {
+ base += 128 * 1024 * 1024;
+ rdev->mc.real_vram_size -= 128 * 1024 * 1024;
+ rdev->mc.mc_vram_size = rdev->mc.real_vram_size;
+ }
/* Use K8 direct mapping for fast fb access. */
rdev->fastfb_working = false;
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
cur_size_in_dw = size_in_dw;
pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss,
ASIC_INTERNAL_MEMORY_SS, 0);
+ /* disable ss, causes hangs on some cayman boards */
+ if (rdev->family == CHIP_CAYMAN) {
+ pi->sclk_ss = false;
+ pi->mclk_ss = false;
+ }
+
if (pi->sclk_ss || pi->mclk_ss)
pi->dynamic_ss = true;
else
}
static u32 si_get_rb_disabled(struct radeon_device *rdev,
- u32 max_rb_num, u32 se_num,
+ u32 max_rb_num_per_se,
u32 sh_per_se)
{
u32 data, mask;
data >>= BACKEND_DISABLE_SHIFT;
- mask = si_create_bitmask(max_rb_num / se_num / sh_per_se);
+ mask = si_create_bitmask(max_rb_num_per_se / sh_per_se);
return data & mask;
}
static void si_setup_rb(struct radeon_device *rdev,
u32 se_num, u32 sh_per_se,
- u32 max_rb_num)
+ u32 max_rb_num_per_se)
{
int i, j;
u32 data, mask;
for (i = 0; i < se_num; i++) {
for (j = 0; j < sh_per_se; j++) {
si_select_se_sh(rdev, i, j);
- data = si_get_rb_disabled(rdev, max_rb_num, se_num, sh_per_se);
+ data = si_get_rb_disabled(rdev, max_rb_num_per_se, sh_per_se);
disabled_rbs |= data << ((i * sh_per_se + j) * TAHITI_RB_BITMAP_WIDTH_PER_SH);
}
}
si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
mask = 1;
- for (i = 0; i < max_rb_num; i++) {
+ for (i = 0; i < max_rb_num_per_se * se_num; i++) {
if (!(disabled_rbs & mask))
enabled_rbs |= mask;
mask <<= 1;
}
+ rdev->config.si.backend_enable_mask = enabled_rbs;
+
for (i = 0; i < se_num; i++) {
si_select_se_sh(rdev, i, 0xffffffff);
data = 0;
rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0);
rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0);
/* size in MB on si */
- rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
- rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
+ tmp = RREG32(CONFIG_MEMSIZE);
+ /* some boards may have garbage in the upper 16 bits */
+ if (tmp & 0xffff0000) {
+ DRM_INFO("Probable bad vram size: 0x%08x\n", tmp);
+ if (tmp & 0xffff)
+ tmp &= 0xffff;
+ }
+ rdev->mc.mc_vram_size = tmp * 1024ULL * 1024ULL;
+ rdev->mc.real_vram_size = rdev->mc.mc_vram_size;
rdev->mc.visible_vram_size = rdev->mc.aper_size;
si_vram_gtt_location(rdev, &rdev->mc);
radeon_update_bandwidth_info(rdev);
return r;
}
- if (radeon_fence_need_sync(*fence, ring->idx)) {
- radeon_semaphore_sync_rings(rdev, sem, (*fence)->ring,
- ring->idx);
- radeon_fence_note_sync(*fence, ring->idx);
- } else {
- radeon_semaphore_free(rdev, &sem, NULL);
- }
+ radeon_semaphore_sync_to(sem, *fence);
+ radeon_semaphore_sync_rings(rdev, sem, ring->idx);
for (i = 0; i < num_loops; i++) {
cur_size_in_bytes = size_in_bytes;
pi->enable_sclk_ds = true;
pi->enable_gfx_power_gating = true;
pi->enable_gfx_clock_gating = true;
- pi->enable_mg_clock_gating = true;
- pi->enable_gfx_dynamic_mgpg = true; /* ??? */
- pi->override_dynamic_mgpg = true;
+ pi->enable_mg_clock_gating = false;
+ pi->enable_gfx_dynamic_mgpg = false;
+ pi->override_dynamic_mgpg = false;
pi->enable_auto_thermal_throttling = true;
pi->voltage_drop_in_dce = false; /* need to restructure dpm/modeset interaction */
pi->uvd_dpm = true; /* ??? */
*
* Emit a semaphore command (either wait or signal) to the UVD ring.
*/
-void uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
+bool uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0));
radeon_ring_write(ring, emit_wait ? 1 : 0);
+
+ return true;
}
/**
*
* Emit a semaphore command (either wait or signal) to the UVD ring.
*/
-void uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
+bool uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
bool emit_wait)
radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0));
radeon_ring_write(ring, 0x80 | (emit_wait ? 1 : 0));
+
+ return true;
}
unsigned int num_relocs = args->num_relocs;
unsigned int num_waitchks = args->num_waitchks;
struct drm_tegra_cmdbuf __user *cmdbufs =
- (void * __user)(uintptr_t)args->cmdbufs;
+ (void __user *)(uintptr_t)args->cmdbufs;
struct drm_tegra_reloc __user *relocs =
- (void * __user)(uintptr_t)args->relocs;
+ (void __user *)(uintptr_t)args->relocs;
struct drm_tegra_waitchk __user *waitchks =
- (void * __user)(uintptr_t)args->waitchks;
+ (void __user *)(uintptr_t)args->waitchks;
struct drm_tegra_syncpt syncpt;
struct host1x_job *job;
int err;
struct drm_tegra_cmdbuf cmdbuf;
struct host1x_bo *bo;
- err = copy_from_user(&cmdbuf, cmdbufs, sizeof(cmdbuf));
- if (err)
+ if (copy_from_user(&cmdbuf, cmdbufs, sizeof(cmdbuf))) {
+ err = -EFAULT;
goto fail;
+ }
bo = host1x_bo_lookup(drm, file, cmdbuf.handle);
if (!bo) {
cmdbufs++;
}
- err = copy_from_user(job->relocarray, relocs,
- sizeof(*relocs) * num_relocs);
- if (err)
+ if (copy_from_user(job->relocarray, relocs,
+ sizeof(*relocs) * num_relocs)) {
+ err = -EFAULT;
goto fail;
+ }
while (num_relocs--) {
struct host1x_reloc *reloc = &job->relocarray[num_relocs];
}
}
- err = copy_from_user(job->waitchk, waitchks,
- sizeof(*waitchks) * num_waitchks);
- if (err)
+ if (copy_from_user(job->waitchk, waitchks,
+ sizeof(*waitchks) * num_waitchks)) {
+ err = -EFAULT;
goto fail;
+ }
- err = copy_from_user(&syncpt, (void * __user)(uintptr_t)args->syncpts,
- sizeof(syncpt));
- if (err)
+ if (copy_from_user(&syncpt, (void __user *)(uintptr_t)args->syncpts,
+ sizeof(syncpt))) {
+ err = -EFAULT;
goto fail;
+ }
job->is_addr_reg = context->client->ops->is_addr_reg;
job->syncpt_incrs = syncpt.incrs;
}
#endif
-struct drm_driver tegra_drm_driver = {
+static struct drm_driver tegra_drm_driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.load = tegra_drm_load,
.unload = tegra_drm_unload,
static inline struct tegra_dc *to_tegra_dc(struct drm_crtc *crtc)
{
- return container_of(crtc, struct tegra_dc, base);
+ return crtc ? container_of(crtc, struct tegra_dc, base) : NULL;
}
static inline void tegra_dc_writel(struct tegra_dc *dc, unsigned long value,
info->var.yoffset * fb->pitches[0];
drm->mode_config.fb_base = (resource_size_t)bo->paddr;
- info->screen_base = bo->vaddr + offset;
+ info->screen_base = (void __iomem *)bo->vaddr + offset;
info->screen_size = size;
info->fix.smem_start = (unsigned long)(bo->paddr + offset);
info->fix.smem_len = size;
struct tegra_rgb {
struct tegra_output output;
+ struct tegra_dc *dc;
+
struct clk *clk_parent;
struct clk *clk;
};
static int tegra_output_rgb_enable(struct tegra_output *output)
{
- struct tegra_dc *dc = to_tegra_dc(output->encoder.crtc);
+ struct tegra_rgb *rgb = to_rgb(output);
- tegra_dc_write_regs(dc, rgb_enable, ARRAY_SIZE(rgb_enable));
+ tegra_dc_write_regs(rgb->dc, rgb_enable, ARRAY_SIZE(rgb_enable));
return 0;
}
static int tegra_output_rgb_disable(struct tegra_output *output)
{
- struct tegra_dc *dc = to_tegra_dc(output->encoder.crtc);
+ struct tegra_rgb *rgb = to_rgb(output);
- tegra_dc_write_regs(dc, rgb_disable, ARRAY_SIZE(rgb_disable));
+ tegra_dc_write_regs(rgb->dc, rgb_disable, ARRAY_SIZE(rgb_disable));
return 0;
}
rgb->output.dev = dc->dev;
rgb->output.of_node = np;
+ rgb->dc = dc;
err = tegra_output_probe(&rgb->output);
if (err < 0)
atomic_dec(&bo->glob->bo_count);
if (bo->resv == &bo->ttm_resv)
reservation_object_fini(&bo->ttm_resv);
-
+ mutex_destroy(&bo->wu_mutex);
if (bo->destroy)
bo->destroy(bo);
else {
INIT_LIST_HEAD(&bo->ddestroy);
INIT_LIST_HEAD(&bo->swap);
INIT_LIST_HEAD(&bo->io_reserve_lru);
+ mutex_init(&bo->wu_mutex);
bo->bdev = bdev;
bo->glob = bdev->glob;
bo->type = type;
;
}
EXPORT_SYMBOL(ttm_bo_swapout_all);
+
+/**
+ * ttm_bo_wait_unreserved - interruptible wait for a buffer object to become
+ * unreserved
+ *
+ * @bo: Pointer to buffer
+ */
+int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo)
+{
+ int ret;
+
+ /*
+ * In the absence of a wait_unlocked API,
+ * use the bo::wu_mutex to avoid triggering livelocks due to
+ * concurrent use of this function. Note that this use of
+ * bo::wu_mutex can go away if we change locking order to
+ * mmap_sem -> bo::reserve.
+ */
+ ret = mutex_lock_interruptible(&bo->wu_mutex);
+ if (unlikely(ret != 0))
+ return -ERESTARTSYS;
+ if (!ww_mutex_is_locked(&bo->resv->lock))
+ goto out_unlock;
+ ret = ttm_bo_reserve_nolru(bo, true, false, false, NULL);
+ if (unlikely(ret != 0))
+ goto out_unlock;
+ ww_mutex_unlock(&bo->resv->lock);
+
+out_unlock:
+ mutex_unlock(&bo->wu_mutex);
+ return ret;
+}
goto out2;
/*
- * Move nonexistent data. NOP.
+ * Don't move nonexistent data. Clear destination instead.
*/
- if (old_iomap == NULL && ttm == NULL)
+ if (old_iomap == NULL &&
+ (ttm == NULL || (ttm->state == tt_unpopulated &&
+ !(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)))) {
+ memset_io(new_iomap, 0, new_mem->num_pages*PAGE_SIZE);
goto out2;
+ }
/*
* TTM might be null for moves within the same region.
/*
* Work around locking order reversal in fault / nopfn
* between mmap_sem and bo_reserve: Perform a trylock operation
- * for reserve, and if it fails, retry the fault after scheduling.
+ * for reserve, and if it fails, retry the fault after waiting
+ * for the buffer to become unreserved.
*/
-
- ret = ttm_bo_reserve(bo, true, true, false, 0);
+ ret = ttm_bo_reserve(bo, true, true, false, NULL);
if (unlikely(ret != 0)) {
- if (ret == -EBUSY)
- set_need_resched();
+ if (ret != -EBUSY)
+ return VM_FAULT_NOPAGE;
+
+ if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
+ if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
+ up_read(&vma->vm_mm->mmap_sem);
+ (void) ttm_bo_wait_unreserved(bo);
+ }
+
+ return VM_FAULT_RETRY;
+ }
+
+ /*
+ * If we'd want to change locking order to
+ * mmap_sem -> bo::reserve, we'd use a blocking reserve here
+ * instead of retrying the fault...
+ */
return VM_FAULT_NOPAGE;
}
case 0:
break;
case -EBUSY:
- set_need_resched();
case -ERESTARTSYS:
retval = VM_FAULT_NOPAGE;
goto out_unlock;
}
page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) +
- drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff;
- page_last = vma_pages(vma) +
- drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff;
+ vma->vm_pgoff - drm_vma_node_start(&bo->vma_node);
+ page_last = vma_pages(vma) + vma->vm_pgoff -
+ drm_vma_node_start(&bo->vma_node);
if (unlikely(page_offset >= bo->num_pages)) {
retval = VM_FAULT_SIGBUS;
#include <linux/sched.h>
#include <linux/module.h>
-static void ttm_eu_backoff_reservation_locked(struct list_head *list,
- struct ww_acquire_ctx *ticket)
+static void ttm_eu_backoff_reservation_locked(struct list_head *list)
{
struct ttm_validate_buffer *entry;
entry = list_first_entry(list, struct ttm_validate_buffer, head);
glob = entry->bo->glob;
spin_lock(&glob->lru_lock);
- ttm_eu_backoff_reservation_locked(list, ticket);
- ww_acquire_fini(ticket);
+ ttm_eu_backoff_reservation_locked(list);
+ if (ticket)
+ ww_acquire_fini(ticket);
spin_unlock(&glob->lru_lock);
}
EXPORT_SYMBOL(ttm_eu_backoff_reservation);
entry = list_first_entry(list, struct ttm_validate_buffer, head);
glob = entry->bo->glob;
- ww_acquire_init(ticket, &reservation_ww_class);
+ if (ticket)
+ ww_acquire_init(ticket, &reservation_ww_class);
retry:
list_for_each_entry(entry, list, head) {
struct ttm_buffer_object *bo = entry->bo;
if (entry->reserved)
continue;
-
- ret = ttm_bo_reserve_nolru(bo, true, false, true, ticket);
+ ret = ttm_bo_reserve_nolru(bo, true, (ticket == NULL), true,
+ ticket);
if (ret == -EDEADLK) {
/* uh oh, we lost out, drop every reservation and try
* to only reserve this buffer, then start over if
* this succeeds.
*/
+ BUG_ON(ticket == NULL);
spin_lock(&glob->lru_lock);
- ttm_eu_backoff_reservation_locked(list, ticket);
+ ttm_eu_backoff_reservation_locked(list);
spin_unlock(&glob->lru_lock);
ttm_eu_list_ref_sub(list);
ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
}
}
- ww_acquire_done(ticket);
+ if (ticket)
+ ww_acquire_done(ticket);
spin_lock(&glob->lru_lock);
ttm_eu_del_from_lru_locked(list);
spin_unlock(&glob->lru_lock);
err:
spin_lock(&glob->lru_lock);
- ttm_eu_backoff_reservation_locked(list, ticket);
+ ttm_eu_backoff_reservation_locked(list);
spin_unlock(&glob->lru_lock);
ttm_eu_list_ref_sub(list);
err_fini:
- ww_acquire_done(ticket);
- ww_acquire_fini(ticket);
+ if (ticket) {
+ ww_acquire_done(ticket);
+ ww_acquire_fini(ticket);
+ }
return ret;
}
EXPORT_SYMBOL(ttm_eu_reserve_buffers);
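
The net effect of the ttm_execbuf_util changes is that the ww acquire ticket becomes optional: reading the diff, a NULL ticket makes ttm_bo_reserve_nolru() run in trylock mode (the third argument becomes true), and the -EDEADLK backoff path can then never be taken, as the BUG_ON above guards. A kernel-context sketch of the two calling conventions (not a complete function; list setup and validation are omitted):

	struct ww_acquire_ctx ticket;
	LIST_HEAD(list);
	int ret;

	/* multi-buffer path: a ticket enables wound/wait deadlock recovery */
	ret = ttm_eu_reserve_buffers(&ticket, &list);
	if (ret == 0) {
		/* ... validate, submit, fence ... */
		ttm_eu_backoff_reservation(&ticket, &list);
	}

	/* trivially contended path (e.g. a single buffer): no ticket,
	 * so reservation is attempted without deadlock ticketing */
	ret = ttm_eu_reserve_buffers(NULL, &list);
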
}
spin_unlock(&bdev->fence_lock);
spin_unlock(&glob->lru_lock);
- ww_acquire_fini(ticket);
+ if (ticket)
+ ww_acquire_fini(ticket);
list_for_each_entry(entry, list, head) {
if (entry->old_sync_obj)
/**************************************************************************
*
- * Copyright (c) 2009 VMware, Inc., Palo Alto, CA., USA
+ * Copyright (c) 2009-2013 VMware, Inc., Palo Alto, CA., USA
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
**************************************************************************/
/*
* Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ *
+ * While no substantial code is shared, the prime code is inspired by
+ * drm_prime.c, with
+ * Authors:
+ * Dave Airlie <airlied@redhat.com>
+ * Rob Clark <rob.clark@linaro.org>
*/
/** @file ttm_ref_object.c
*
* and release on file close.
*/
+
/**
* struct ttm_object_file
*
struct drm_open_hash object_hash;
atomic_t object_count;
struct ttm_mem_global *mem_glob;
+ struct dma_buf_ops ops;
+ void (*dmabuf_release)(struct dma_buf *dma_buf);
+ size_t dma_buf_size;
};
/**
struct ttm_object_file *tfile;
};
+static void ttm_prime_dmabuf_release(struct dma_buf *dma_buf);
+
static inline struct ttm_object_file *
ttm_object_file_ref(struct ttm_object_file *tfile)
{
}
EXPORT_SYMBOL(ttm_object_file_init);
-struct ttm_object_device *ttm_object_device_init(struct ttm_mem_global
- *mem_glob,
- unsigned int hash_order)
+struct ttm_object_device *
+ttm_object_device_init(struct ttm_mem_global *mem_glob,
+ unsigned int hash_order,
+ const struct dma_buf_ops *ops)
{
struct ttm_object_device *tdev = kmalloc(sizeof(*tdev), GFP_KERNEL);
int ret;
spin_lock_init(&tdev->object_lock);
atomic_set(&tdev->object_count, 0);
ret = drm_ht_create(&tdev->object_hash, hash_order);
+ if (ret != 0)
+ goto out_no_object_hash;
- if (likely(ret == 0))
- return tdev;
+ tdev->ops = *ops;
+ tdev->dmabuf_release = tdev->ops.release;
+ tdev->ops.release = ttm_prime_dmabuf_release;
+ tdev->dma_buf_size = ttm_round_pot(sizeof(struct dma_buf)) +
+ ttm_round_pot(sizeof(struct file));
+ return tdev;
+out_no_object_hash:
kfree(tdev);
return NULL;
}
kfree(tdev);
}
EXPORT_SYMBOL(ttm_object_device_release);
+
+/**
+ * get_dma_buf_unless_doomed - get a dma_buf reference if possible.
+ *
+ * @dma_buf: Non-refcounted pointer to a struct dma-buf.
+ *
+ * Obtain a file reference from a lookup structure that doesn't refcount
+ * the file, but synchronizes with its release method to make sure it has
+ * not been freed yet. See for example kref_get_unless_zero documentation.
+ * Returns true if refcounting succeeds, false otherwise.
+ *
+ * Nobody really wants this as a public API yet, so let it mature here
+ * for some time...
+ */
+static bool __must_check get_dma_buf_unless_doomed(struct dma_buf *dmabuf)
+{
+ return atomic_long_inc_not_zero(&dmabuf->file->f_count) != 0L;
+}
+
+/**
+ * ttm_prime_refcount_release - refcount release method for a prime object.
+ *
+ * @p_base: Pointer to ttm_base_object pointer.
+ *
+ * This is a wrapper that calls the refcount_release function of the
+ * underlying object. At the same time it cleans up the prime object.
+ * This function is called when all references to the base object we
+ * derive from are gone.
+ */
+static void ttm_prime_refcount_release(struct ttm_base_object **p_base)
+{
+ struct ttm_base_object *base = *p_base;
+ struct ttm_prime_object *prime;
+
+ *p_base = NULL;
+ prime = container_of(base, struct ttm_prime_object, base);
+ BUG_ON(prime->dma_buf != NULL);
+ mutex_destroy(&prime->mutex);
+ if (prime->refcount_release)
+ prime->refcount_release(&base);
+}
+
+/**
+ * ttm_prime_dmabuf_release - Release method for the dma-bufs we export
+ *
+ * @dma_buf: The struct dma_buf being released.
+ *
+ * This function first calls the dma_buf release method the driver
+ * provides. Then it cleans up our dma_buf pointer used for lookup,
+ * and finally releases the reference the dma_buf has on our base
+ * object.
+ */
+static void ttm_prime_dmabuf_release(struct dma_buf *dma_buf)
+{
+ struct ttm_prime_object *prime =
+ (struct ttm_prime_object *) dma_buf->priv;
+ struct ttm_base_object *base = &prime->base;
+ struct ttm_object_device *tdev = base->tfile->tdev;
+
+ if (tdev->dmabuf_release)
+ tdev->dmabuf_release(dma_buf);
+ mutex_lock(&prime->mutex);
+ if (prime->dma_buf == dma_buf)
+ prime->dma_buf = NULL;
+ mutex_unlock(&prime->mutex);
+ ttm_mem_global_free(tdev->mem_glob, tdev->dma_buf_size);
+ ttm_base_object_unref(&base);
+}
+
+/**
+ * ttm_prime_fd_to_handle - Get a base object handle from a prime fd
+ *
+ * @tfile: A struct ttm_object_file identifying the caller.
+ * @fd: The prime / dmabuf fd.
+ * @handle: The returned handle.
+ *
+ * This function returns a handle to an object that previously exported
+ * a dma-buf. Note that we don't handle imports yet, because we simply
+ * have no consumers of that implementation.
+ */
+int ttm_prime_fd_to_handle(struct ttm_object_file *tfile,
+ int fd, u32 *handle)
+{
+ struct ttm_object_device *tdev = tfile->tdev;
+ struct dma_buf *dma_buf;
+ struct ttm_prime_object *prime;
+ struct ttm_base_object *base;
+ int ret;
+
+ dma_buf = dma_buf_get(fd);
+ if (IS_ERR(dma_buf))
+ return PTR_ERR(dma_buf);
+
+ if (dma_buf->ops != &tdev->ops)
+ return -ENOSYS;
+
+ prime = (struct ttm_prime_object *) dma_buf->priv;
+ base = &prime->base;
+ *handle = base->hash.key;
+ ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL);
+
+ dma_buf_put(dma_buf);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ttm_prime_fd_to_handle);
+
+/**
+ * ttm_prime_handle_to_fd - Return a dma_buf fd from a ttm prime object
+ *
+ * @tfile: Struct ttm_object_file identifying the caller.
+ * @handle: Handle to the object we're exporting from.
+ * @flags: flags for dma-buf creation. We just pass them on.
+ * @prime_fd: The returned file descriptor.
+ *
+ */
+int ttm_prime_handle_to_fd(struct ttm_object_file *tfile,
+ uint32_t handle, uint32_t flags,
+ int *prime_fd)
+{
+ struct ttm_object_device *tdev = tfile->tdev;
+ struct ttm_base_object *base;
+ struct dma_buf *dma_buf;
+ struct ttm_prime_object *prime;
+ int ret;
+
+ base = ttm_base_object_lookup(tfile, handle);
+ if (unlikely(base == NULL ||
+ base->object_type != ttm_prime_type)) {
+ ret = -ENOENT;
+ goto out_unref;
+ }
+
+ prime = container_of(base, struct ttm_prime_object, base);
+ if (unlikely(!base->shareable)) {
+ ret = -EPERM;
+ goto out_unref;
+ }
+
+ ret = mutex_lock_interruptible(&prime->mutex);
+ if (unlikely(ret != 0)) {
+ ret = -ERESTARTSYS;
+ goto out_unref;
+ }
+
+ dma_buf = prime->dma_buf;
+ if (!dma_buf || !get_dma_buf_unless_doomed(dma_buf)) {
+
+ /*
+ * Need to create a new dma_buf, with memory accounting.
+ */
+ ret = ttm_mem_global_alloc(tdev->mem_glob, tdev->dma_buf_size,
+ false, true);
+ if (unlikely(ret != 0)) {
+ mutex_unlock(&prime->mutex);
+ goto out_unref;
+ }
+
+ dma_buf = dma_buf_export(prime, &tdev->ops,
+ prime->size, flags);
+ if (IS_ERR(dma_buf)) {
+ ret = PTR_ERR(dma_buf);
+ ttm_mem_global_free(tdev->mem_glob,
+ tdev->dma_buf_size);
+ mutex_unlock(&prime->mutex);
+ goto out_unref;
+ }
+
+ /*
+ * dma_buf has taken the base object reference
+ */
+ base = NULL;
+ prime->dma_buf = dma_buf;
+ }
+ mutex_unlock(&prime->mutex);
+
+ ret = dma_buf_fd(dma_buf, flags);
+ if (ret >= 0) {
+ *prime_fd = ret;
+ ret = 0;
+ } else
+ dma_buf_put(dma_buf);
+
+out_unref:
+ if (base)
+ ttm_base_object_unref(&base);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ttm_prime_handle_to_fd);
+
+/**
+ * ttm_prime_object_init - Initialize a ttm_prime_object
+ *
+ * @tfile: struct ttm_object_file identifying the caller
+ * @size: The size of the dma_bufs we export.
+ * @prime: The object to be initialized.
+ * @shareable: See ttm_base_object_init
+ * @type: See ttm_base_object_init
+ * @refcount_release: See ttm_base_object_init
+ * @ref_obj_release: See ttm_base_object_init
+ *
+ * Initializes an object which is compatible with the drm_prime model
+ * for data sharing between processes and devices.
+ */
+int ttm_prime_object_init(struct ttm_object_file *tfile, size_t size,
+ struct ttm_prime_object *prime, bool shareable,
+ enum ttm_object_type type,
+ void (*refcount_release) (struct ttm_base_object **),
+ void (*ref_obj_release) (struct ttm_base_object *,
+ enum ttm_ref_type ref_type))
+{
+ mutex_init(&prime->mutex);
+ prime->size = PAGE_ALIGN(size);
+ prime->real_type = type;
+ prime->dma_buf = NULL;
+ prime->refcount_release = refcount_release;
+ return ttm_base_object_init(tfile, &prime->base, shareable,
+ ttm_prime_type,
+ ttm_prime_refcount_release,
+ ref_obj_release);
+}
+EXPORT_SYMBOL(ttm_prime_object_init);
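
For orientation, a sketch of how a driver-side wrapper is expected to use the new prime API. The struct, function names, and the ttm_driver_type0 object type below are invented or assumed for illustration, not code from this patch; the vmwgfx hunks further down do the real equivalent with struct vmw_user_dma_buffer and struct vmw_user_surface. A wrapper embeds struct ttm_prime_object where a struct ttm_base_object used to live, and the fd/handle conversions above then apply to it unchanged:

#include <linux/slab.h>
#include <drm/ttm/ttm_object.h>

struct demo_user_object {
	struct ttm_prime_object prime;	/* embedded prime object */
	/* ... driver payload ... */
};

static void demo_release(struct ttm_base_object **p_base)
{
	struct demo_user_object *obj =
		container_of(*p_base, struct demo_user_object, prime.base);

	*p_base = NULL;
	ttm_prime_object_kfree(obj, prime);
}

static int demo_object_init(struct ttm_object_file *tfile,
			    struct demo_user_object *obj, size_t size)
{
	/* shareable = true, so ttm_prime_handle_to_fd() won't hit -EPERM */
	return ttm_prime_object_init(tfile, size, &obj->prime, true,
				     ttm_driver_type0 /* assumed type */,
				     &demo_release, NULL);
}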
static void udl_gem_put_pages(struct udl_gem_object *obj)
{
+ if (obj->base.import_attach) {
+ drm_free_large(obj->pages);
+ obj->pages = NULL;
+ return;
+ }
+
drm_gem_put_pages(&obj->base, obj->pages, false, false);
obj->pages = NULL;
}
vmwgfx_fifo.o vmwgfx_irq.o vmwgfx_ldu.o vmwgfx_ttm_glue.o \
vmwgfx_overlay.o vmwgfx_marker.o vmwgfx_gmrid_manager.o \
vmwgfx_fence.o vmwgfx_dmabuf.o vmwgfx_scrn.o vmwgfx_context.o \
- vmwgfx_surface.o
+ vmwgfx_surface.o vmwgfx_prime.o
obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o
bool mapped;
};
+const size_t vmw_tt_size = sizeof(struct vmw_ttm_tt);
+
/**
* Helper functions to advance a struct vmw_piter iterator.
*
}
dev_priv->tdev = ttm_object_device_init
- (dev_priv->mem_global_ref.object, 12);
+ (dev_priv->mem_global_ref.object, 12, &vmw_prime_dmabuf_ops);
if (unlikely(dev_priv->tdev == NULL)) {
DRM_ERROR("Unable to initialize TTM object management.\n");
static struct drm_driver driver = {
.driver_features = DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED |
- DRIVER_MODESET,
+ DRIVER_MODESET | DRIVER_PRIME,
.load = vmw_driver_load,
.unload = vmw_driver_unload,
.lastclose = vmw_lastclose,
.dumb_map_offset = vmw_dumb_map_offset,
.dumb_destroy = vmw_dumb_destroy,
+ .prime_fd_to_handle = vmw_prime_fd_to_handle,
+ .prime_handle_to_fd = vmw_prime_handle_to_fd,
+
.fops = &vmwgfx_driver_fops,
.name = VMWGFX_DRIVER_NAME,
.desc = VMWGFX_DRIVER_DESC,
* TTM buffer object driver - vmwgfx_buffer.c
*/
+extern const size_t vmw_tt_size;
extern struct ttm_placement vmw_vram_placement;
extern struct ttm_placement vmw_vram_ne_placement;
extern struct ttm_placement vmw_vram_sys_placement;
extern const struct ttm_mem_type_manager_func vmw_gmrid_manager_func;
+/**
+ * Prime - vmwgfx_prime.c
+ */
+
+extern const struct dma_buf_ops vmw_prime_dmabuf_ops;
+extern int vmw_prime_fd_to_handle(struct drm_device *dev,
+ struct drm_file *file_priv,
+ int fd, u32 *handle);
+extern int vmw_prime_handle_to_fd(struct drm_device *dev,
+ struct drm_file *file_priv,
+ uint32_t handle, uint32_t flags,
+ int *prime_fd);
+
+
/**
* Inline helper functions
*/
SVGA_FIFO_3D_HWVERSION));
break;
}
+ case DRM_VMW_PARAM_MAX_SURF_MEMORY:
+ param->value = dev_priv->memory_size;
+ break;
default:
DRM_ERROR("Illegal vmwgfx get param request: %d\n",
param->param);
vmw_surface_unreference(&du->cursor_surface);
if (du->cursor_dmabuf)
vmw_dmabuf_unreference(&du->cursor_dmabuf);
+ drm_sysfs_connector_remove(&du->connector);
drm_crtc_cleanup(&du->crtc);
drm_encoder_cleanup(&du->encoder);
drm_connector_cleanup(&du->connector);
connector->encoder = NULL;
encoder->crtc = NULL;
crtc->fb = NULL;
+ crtc->enabled = false;
vmw_ldu_del_active(dev_priv, ldu);
crtc->x = set->x;
crtc->y = set->y;
crtc->mode = *mode;
+ crtc->enabled = true;
vmw_ldu_add_active(dev_priv, ldu, vfb);
encoder->possible_crtcs = (1 << unit);
encoder->possible_clones = 0;
+ (void) drm_sysfs_connector_add(connector);
+
drm_crtc_init(dev, crtc, &vmw_legacy_crtc_funcs);
drm_mode_crtc_set_gamma_size(crtc, 256);
--- /dev/null
+/**************************************************************************
+ *
+ * Copyright © 2013 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors:
+ * Thomas Hellstrom <thellstrom@vmware.com>
+ *
+ */
+
+#include "vmwgfx_drv.h"
+#include <linux/dma-buf.h>
+#include <drm/ttm/ttm_object.h>
+
+/*
+ * DMA-BUF attach and mapping methods. No need to implement
+ * these until other virtual devices start using them.
+ */
+
+static int vmw_prime_map_attach(struct dma_buf *dma_buf,
+ struct device *target_dev,
+ struct dma_buf_attachment *attach)
+{
+ return -ENOSYS;
+}
+
+static void vmw_prime_map_detach(struct dma_buf *dma_buf,
+ struct dma_buf_attachment *attach)
+{
+}
+
+static struct sg_table *vmw_prime_map_dma_buf(struct dma_buf_attachment *attach,
+ enum dma_data_direction dir)
+{
+ return ERR_PTR(-ENOSYS);
+}
+
+static void vmw_prime_unmap_dma_buf(struct dma_buf_attachment *attach,
+ struct sg_table *sgb,
+ enum dma_data_direction dir)
+{
+}
+
+static void *vmw_prime_dmabuf_vmap(struct dma_buf *dma_buf)
+{
+ return NULL;
+}
+
+static void vmw_prime_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
+{
+}
+
+static void *vmw_prime_dmabuf_kmap_atomic(struct dma_buf *dma_buf,
+ unsigned long page_num)
+{
+ return NULL;
+}
+
+static void vmw_prime_dmabuf_kunmap_atomic(struct dma_buf *dma_buf,
+ unsigned long page_num, void *addr)
+{
+
+}
+static void *vmw_prime_dmabuf_kmap(struct dma_buf *dma_buf,
+ unsigned long page_num)
+{
+ return NULL;
+}
+
+static void vmw_prime_dmabuf_kunmap(struct dma_buf *dma_buf,
+ unsigned long page_num, void *addr)
+{
+
+}
+
+static int vmw_prime_dmabuf_mmap(struct dma_buf *dma_buf,
+ struct vm_area_struct *vma)
+{
+ WARN_ONCE(true, "Attempted use of dmabuf mmap. Bad.\n");
+ return -ENOSYS;
+}
+
+const struct dma_buf_ops vmw_prime_dmabuf_ops = {
+ .attach = vmw_prime_map_attach,
+ .detach = vmw_prime_map_detach,
+ .map_dma_buf = vmw_prime_map_dma_buf,
+ .unmap_dma_buf = vmw_prime_unmap_dma_buf,
+ .release = NULL,
+ .kmap = vmw_prime_dmabuf_kmap,
+ .kmap_atomic = vmw_prime_dmabuf_kmap_atomic,
+ .kunmap = vmw_prime_dmabuf_kunmap,
+ .kunmap_atomic = vmw_prime_dmabuf_kunmap_atomic,
+ .mmap = vmw_prime_dmabuf_mmap,
+ .vmap = vmw_prime_dmabuf_vmap,
+ .vunmap = vmw_prime_dmabuf_vunmap,
+};
+
+int vmw_prime_fd_to_handle(struct drm_device *dev,
+ struct drm_file *file_priv,
+ int fd, u32 *handle)
+{
+ struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+
+ return ttm_prime_fd_to_handle(tfile, fd, handle);
+}
+
+int vmw_prime_handle_to_fd(struct drm_device *dev,
+ struct drm_file *file_priv,
+ uint32_t handle, uint32_t flags,
+ int *prime_fd)
+{
+ struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+
+ return ttm_prime_handle_to_fd(tfile, handle, flags, prime_fd);
+}
#define VMW_RES_EVICT_ERR_COUNT 10
struct vmw_user_dma_buffer {
- struct ttm_base_object base;
+ struct ttm_prime_object prime;
struct vmw_dma_buffer dma;
};
if (unlikely(base == NULL))
return -EINVAL;
- if (unlikely(base->object_type != converter->object_type))
+ if (unlikely(ttm_base_object_type(base) != converter->object_type))
goto out_bad_resource;
res = converter->base_obj_to_res(base);
/**
* Buffer management.
*/
+
+/**
+ * vmw_dmabuf_acc_size - Calculate the pinned memory usage of buffers
+ *
+ * @dev_priv: Pointer to a struct vmw_private identifying the device.
+ * @size: The requested buffer size.
+ * @user: Whether this is an ordinary dma buffer or a user dma buffer.
+ */
+static size_t vmw_dmabuf_acc_size(struct vmw_private *dev_priv, size_t size,
+ bool user)
+{
+ static size_t struct_size, user_struct_size;
+ size_t num_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ size_t page_array_size = ttm_round_pot(num_pages * sizeof(void *));
+
+ if (unlikely(struct_size == 0)) {
+ size_t backend_size = ttm_round_pot(vmw_tt_size);
+
+ struct_size = backend_size +
+ ttm_round_pot(sizeof(struct vmw_dma_buffer));
+ user_struct_size = backend_size +
+ ttm_round_pot(sizeof(struct vmw_user_dma_buffer));
+ }
+
+ if (dev_priv->map_mode == vmw_dma_alloc_coherent)
+ page_array_size +=
+ ttm_round_pot(num_pages * sizeof(dma_addr_t));
+
+ return ((user) ? user_struct_size : struct_size) +
+ page_array_size;
+}
+
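
To put numbers on this accounting (a sketch assuming 4 KiB pages, 8-byte pointers, and an 8-byte dma_addr_t): a 1 MiB request gives num_pages = 256 and page_array_size = ttm_round_pot(256 * 8) = 2048 bytes; the vmw_dma_alloc_coherent mode charges another ttm_round_pot(256 * 8) = 2048 bytes for the DMA address array; and the cached struct_size or user_struct_size (rounded object struct plus TTM backend) is added on top.
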
void vmw_dmabuf_bo_free(struct ttm_buffer_object *bo)
{
struct vmw_dma_buffer *vmw_bo = vmw_dma_buffer(bo);
kfree(vmw_bo);
}
+static void vmw_user_dmabuf_destroy(struct ttm_buffer_object *bo)
+{
+ struct vmw_user_dma_buffer *vmw_user_bo = vmw_user_dma_buffer(bo);
+
+ ttm_prime_object_kfree(vmw_user_bo, prime);
+}
+
int vmw_dmabuf_init(struct vmw_private *dev_priv,
struct vmw_dma_buffer *vmw_bo,
size_t size, struct ttm_placement *placement,
struct ttm_bo_device *bdev = &dev_priv->bdev;
size_t acc_size;
int ret;
+ bool user = (bo_free == &vmw_user_dmabuf_destroy);
- BUG_ON(!bo_free);
+ BUG_ON(!bo_free && (!user && (bo_free != vmw_dmabuf_bo_free)));
- acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct vmw_dma_buffer));
+ acc_size = vmw_dmabuf_acc_size(dev_priv, size, user);
memset(vmw_bo, 0, sizeof(*vmw_bo));
INIT_LIST_HEAD(&vmw_bo->res_list);
ret = ttm_bo_init(bdev, &vmw_bo->base, size,
- ttm_bo_type_device, placement,
+ (user) ? ttm_bo_type_device :
+ ttm_bo_type_kernel, placement,
0, interruptible,
NULL, acc_size, NULL, bo_free);
return ret;
}
-static void vmw_user_dmabuf_destroy(struct ttm_buffer_object *bo)
-{
- struct vmw_user_dma_buffer *vmw_user_bo = vmw_user_dma_buffer(bo);
-
- ttm_base_object_kfree(vmw_user_bo, base);
-}
-
static void vmw_user_dmabuf_release(struct ttm_base_object **p_base)
{
struct vmw_user_dma_buffer *vmw_user_bo;
if (unlikely(base == NULL))
return;
- vmw_user_bo = container_of(base, struct vmw_user_dma_buffer, base);
+ vmw_user_bo = container_of(base, struct vmw_user_dma_buffer,
+ prime.base);
bo = &vmw_user_bo->dma.base;
ttm_bo_unref(&bo);
}
return ret;
tmp = ttm_bo_reference(&user_bo->dma.base);
- ret = ttm_base_object_init(tfile,
- &user_bo->base,
- shareable,
- ttm_buffer_type,
- &vmw_user_dmabuf_release, NULL);
+ ret = ttm_prime_object_init(tfile,
+ size,
+ &user_bo->prime,
+ shareable,
+ ttm_buffer_type,
+ &vmw_user_dmabuf_release, NULL);
if (unlikely(ret != 0)) {
ttm_bo_unref(&tmp);
goto out_no_base_object;
}
*p_dma_buf = &user_bo->dma;
- *handle = user_bo->base.hash.key;
+ *handle = user_bo->prime.base.hash.key;
out_no_base_object:
return ret;
return -EPERM;
vmw_user_bo = vmw_user_dma_buffer(bo);
- return (vmw_user_bo->base.tfile == tfile ||
- vmw_user_bo->base.shareable) ? 0 : -EPERM;
+ return (vmw_user_bo->prime.base.tfile == tfile ||
+ vmw_user_bo->prime.base.shareable) ? 0 : -EPERM;
}
int vmw_dmabuf_alloc_ioctl(struct drm_device *dev, void *data,
return -ESRCH;
}
- if (unlikely(base->object_type != ttm_buffer_type)) {
+ if (unlikely(ttm_base_object_type(base) != ttm_buffer_type)) {
ttm_base_object_unref(&base);
printk(KERN_ERR "Invalid buffer object handle 0x%08lx.\n",
(unsigned long)handle);
return -EINVAL;
}
- vmw_user_bo = container_of(base, struct vmw_user_dma_buffer, base);
+ vmw_user_bo = container_of(base, struct vmw_user_dma_buffer,
+ prime.base);
(void)ttm_bo_reference(&vmw_user_bo->dma.base);
ttm_base_object_unref(&base);
*out = &vmw_user_bo->dma;
return -EINVAL;
user_bo = container_of(dma_buf, struct vmw_user_dma_buffer, dma);
- return ttm_ref_object_add(tfile, &user_bo->base, TTM_REF_USAGE, NULL);
+ return ttm_ref_object_add(tfile, &user_bo->prime.base,
+ TTM_REF_USAGE, NULL);
}
/*
}
+/**
+ * vmw_dumb_create - Create a dumb kms buffer
+ *
+ * @file_priv: Pointer to a struct drm_file identifying the caller.
+ * @dev: Pointer to the drm device.
+ * @args: Pointer to a struct drm_mode_create_dumb structure
+ *
+ * This is a driver callback for the core drm create_dumb functionality.
+ * Note that this is very similar to the vmw_dmabuf_alloc ioctl, except
+ * that the arguments have a different format.
+ */
int vmw_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
struct vmw_private *dev_priv = vmw_priv(dev);
struct vmw_master *vmaster = vmw_master(file_priv->master);
- struct vmw_user_dma_buffer *vmw_user_bo;
- struct ttm_buffer_object *tmp;
+ struct vmw_dma_buffer *dma_buf;
int ret;
args->pitch = args->width * ((args->bpp + 7) / 8);
args->size = args->pitch * args->height;
- vmw_user_bo = kzalloc(sizeof(*vmw_user_bo), GFP_KERNEL);
- if (vmw_user_bo == NULL)
- return -ENOMEM;
-
ret = ttm_read_lock(&vmaster->lock, true);
- if (ret != 0) {
- kfree(vmw_user_bo);
+ if (unlikely(ret != 0))
return ret;
- }
- ret = vmw_dmabuf_init(dev_priv, &vmw_user_bo->dma, args->size,
- &vmw_vram_sys_placement, true,
- &vmw_user_dmabuf_destroy);
- if (ret != 0)
- goto out_no_dmabuf;
-
- tmp = ttm_bo_reference(&vmw_user_bo->dma.base);
- ret = ttm_base_object_init(vmw_fpriv(file_priv)->tfile,
- &vmw_user_bo->base,
- false,
- ttm_buffer_type,
- &vmw_user_dmabuf_release, NULL);
+ ret = vmw_user_dmabuf_alloc(dev_priv, vmw_fpriv(file_priv)->tfile,
+ args->size, false, &args->handle,
+ &dma_buf);
if (unlikely(ret != 0))
- goto out_no_base_object;
-
- args->handle = vmw_user_bo->base.hash.key;
+ goto out_no_dmabuf;
-out_no_base_object:
- ttm_bo_unref(&tmp);
+ vmw_dmabuf_unreference(&dma_buf);
out_no_dmabuf:
ttm_read_unlock(&vmaster->lock);
return ret;
}
+/**
+ * vmw_dumb_map_offset - Return the address space offset of a dumb buffer
+ *
+ * @file_priv: Pointer to a struct drm_file identifying the caller.
+ * @dev: Pointer to the drm device.
+ * @handle: Handle identifying the dumb buffer.
+ * @offset: The address space offset returned.
+ *
+ * This is a driver callback for the core drm dumb_map_offset functionality.
+ */
int vmw_dumb_map_offset(struct drm_file *file_priv,
struct drm_device *dev, uint32_t handle,
uint64_t *offset)
return 0;
}
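
The dumb-buffer callbacks documented here serve the generic DRM dumb interface, so their effect is easiest to see from userspace. A minimal sketch of that flow (plain DRM ioctls, nothing vmwgfx-specific; error handling is omitted and the libdrm/uapi headers are assumed):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xf86drm.h>	/* pulls in the drm.h / drm_mode.h uapi headers */

/* fd is an open DRM device node, e.g. /dev/dri/card0 */
static void *map_dumb_fb(int fd, uint32_t w, uint32_t h, uint32_t bpp)
{
	struct drm_mode_create_dumb create;
	struct drm_mode_map_dumb map;

	memset(&create, 0, sizeof(create));
	create.width = w;
	create.height = h;
	create.bpp = bpp;
	ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);	/* -> vmw_dumb_create */

	memset(&map, 0, sizeof(map));
	map.handle = create.handle;
	ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);	/* -> vmw_dumb_map_offset */

	return mmap(NULL, create.size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, map.offset);
}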
+/**
+ * vmw_dumb_destroy - Destroy a dumb buffer
+ *
+ * @file_priv: Pointer to a struct drm_file identifying the caller.
+ * @dev: Pointer to the drm device.
+ * @handle: Handle identifying the dumb buffer.
+ *
+ * This is a driver callback for the core drm dumb_destroy functionality.
+ */
int vmw_dumb_destroy(struct drm_file *file_priv,
struct drm_device *dev,
uint32_t handle)
*/
static int
vmw_resource_check_buffer(struct vmw_resource *res,
- struct ww_acquire_ctx *ticket,
bool interruptible,
struct ttm_validate_buffer *val_buf)
{
INIT_LIST_HEAD(&val_list);
val_buf->bo = ttm_bo_reference(&res->backup->base);
list_add_tail(&val_buf->head, &val_list);
- ret = ttm_eu_reserve_buffers(ticket, &val_list);
+ ret = ttm_eu_reserve_buffers(NULL, &val_list);
if (unlikely(ret != 0))
goto out_no_reserve;
return 0;
out_no_validate:
- ttm_eu_backoff_reservation(ticket, &val_list);
+ ttm_eu_backoff_reservation(NULL, &val_list);
out_no_reserve:
ttm_bo_unref(&val_buf->bo);
if (backup_dirty)
* @val_buf: Backup buffer information.
*/
static void
-vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket,
- struct ttm_validate_buffer *val_buf)
+vmw_resource_backoff_reservation(struct ttm_validate_buffer *val_buf)
{
struct list_head val_list;
INIT_LIST_HEAD(&val_list);
list_add_tail(&val_buf->head, &val_list);
- ttm_eu_backoff_reservation(ticket, &val_list);
+ ttm_eu_backoff_reservation(NULL, &val_list);
ttm_bo_unref(&val_buf->bo);
}
{
struct ttm_validate_buffer val_buf;
const struct vmw_res_func *func = res->func;
- struct ww_acquire_ctx ticket;
int ret;
BUG_ON(!func->may_evict);
val_buf.bo = NULL;
- ret = vmw_resource_check_buffer(res, &ticket, interruptible,
- &val_buf);
+ ret = vmw_resource_check_buffer(res, interruptible, &val_buf);
if (unlikely(ret != 0))
return ret;
res->backup_dirty = true;
res->res_dirty = false;
out_no_unbind:
- vmw_resource_backoff_reservation(&ticket, &val_buf);
+ vmw_resource_backoff_reservation(&val_buf);
return ret;
}
crtc->fb = NULL;
crtc->x = 0;
crtc->y = 0;
+ crtc->enabled = false;
vmw_sou_del_active(dev_priv, sou);
crtc->fb = NULL;
crtc->x = 0;
crtc->y = 0;
+ crtc->enabled = false;
return ret;
}
crtc->fb = fb;
crtc->x = set->x;
crtc->y = set->y;
+ crtc->enabled = true;
return 0;
}
encoder->possible_crtcs = (1 << unit);
encoder->possible_clones = 0;
+ (void) drm_sysfs_connector_add(connector);
+
drm_crtc_init(dev, crtc, &vmw_screen_object_crtc_funcs);
drm_mode_crtc_set_gamma_size(crtc, 256);
* @size: TTM accounting size for the surface.
*/
struct vmw_user_surface {
- struct ttm_base_object base;
+ struct ttm_prime_object prime;
struct vmw_surface srf;
uint32_t size;
uint32_t backup_handle;
static struct vmw_resource *
vmw_user_surface_base_to_res(struct ttm_base_object *base)
{
- return &(container_of(base, struct vmw_user_surface, base)->srf.res);
+ return &(container_of(base, struct vmw_user_surface,
+ prime.base)->srf.res);
}
/**
kfree(srf->offsets);
kfree(srf->sizes);
kfree(srf->snooper.image);
- ttm_base_object_kfree(user_srf, base);
+ ttm_prime_object_kfree(user_srf, prime);
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
}
{
struct ttm_base_object *base = *p_base;
struct vmw_user_surface *user_srf =
- container_of(base, struct vmw_user_surface, base);
+ container_of(base, struct vmw_user_surface, prime.base);
struct vmw_resource *res = &user_srf->srf.res;
*p_base = NULL;
}
srf->snooper.crtc = NULL;
- user_srf->base.shareable = false;
- user_srf->base.tfile = NULL;
+ user_srf->prime.base.shareable = false;
+ user_srf->prime.base.tfile = NULL;
/**
* From this point, the generic resource management functions
goto out_unlock;
tmp = vmw_resource_reference(&srf->res);
- ret = ttm_base_object_init(tfile, &user_srf->base,
- req->shareable, VMW_RES_SURFACE,
- &vmw_user_surface_base_release, NULL);
+ ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime,
+ req->shareable, VMW_RES_SURFACE,
+ &vmw_user_surface_base_release, NULL);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&tmp);
goto out_unlock;
}
- rep->sid = user_srf->base.hash.key;
+ rep->sid = user_srf->prime.base.hash.key;
vmw_resource_unreference(&res);
ttm_read_unlock(&vmaster->lock);
out_no_offsets:
kfree(srf->sizes);
out_no_sizes:
- ttm_base_object_kfree(user_srf, base);
+ ttm_prime_object_kfree(user_srf, prime);
out_no_user_srf:
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
out_unlock:
return -EINVAL;
}
- if (unlikely(base->object_type != VMW_RES_SURFACE))
+ if (unlikely(ttm_base_object_type(base) != VMW_RES_SURFACE))
goto out_bad_resource;
- user_srf = container_of(base, struct vmw_user_surface, base);
+ user_srf = container_of(base, struct vmw_user_surface, prime.base);
srf = &user_srf->srf;
- ret = ttm_ref_object_add(tfile, &user_srf->base, TTM_REF_USAGE, NULL);
+ ret = ttm_ref_object_add(tfile, &user_srf->prime.base,
+ TTM_REF_USAGE, NULL);
if (unlikely(ret != 0)) {
DRM_ERROR("Could not add a reference to a surface.\n");
goto out_no_reference;
#include <linux/of.h>
#include <linux/slab.h>
+#include "bus.h"
#include "dev.h"
static DEFINE_MUTEX(clients_lock);
return -ENODEV;
}
-struct bus_type host1x_bus_type = {
+static struct bus_type host1x_bus_type = {
.name = "host1x",
};
device->dev.coherent_dma_mask = host1x->dev->coherent_dma_mask;
device->dev.dma_mask = &device->dev.coherent_dma_mask;
device->dev.release = host1x_device_release;
- dev_set_name(&device->dev, driver->name);
+ dev_set_name(&device->dev, "%s", driver->name);
device->dev.bus = &host1x_bus_type;
device->dev.parent = host1x->dev;
u32 *p = (u32 *)((u32)pb->mapped + getptr);
*(p++) = HOST1X_OPCODE_NOP;
*(p++) = HOST1X_OPCODE_NOP;
- dev_dbg(host1x->dev, "%s: NOP at 0x%x\n", __func__,
- pb->phys + getptr);
+ dev_dbg(host1x->dev, "%s: NOP at %#llx\n", __func__,
+ (u64)pb->phys + getptr);
getptr = (getptr + 8) & (pb->size_bytes - 1);
}
wmb();
continue;
}
- host1x_debug_output(o, " GATHER at %08x+%04x, %d words\n",
- g->base, g->offset, g->words);
+ host1x_debug_output(o, " GATHER at %#llx+%04x, %d words\n",
+ (u64)g->base, g->offset, g->words);
show_gather(o, g->base + g->offset, g->words, cdma,
g->base, mapped);
- Stantum multitouch panels
- Touch International Panels
- Unitec Panels
+ - Wistron optical touch panels
- XAT optical touch panels
- Xiroku optical touch panels
- Zytronic touch panels
appleir->hid = hid;
+ /* force input as some remotes bypass the input registration */
+ hid->quirks |= HID_QUIRK_HIDINPUT_FORCE;
+
spin_lock_init(&appleir->lock);
setup_timer(&appleir->key_up_timer,
key_up_tick, (unsigned long) appleir);
{ HID_USB_DEVICE(USB_VENDOR_ID_KENSINGTON, USB_DEVICE_ID_KS_SLIMBLADE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KEYTOUCH, USB_DEVICE_ID_KEYTOUCH_IEC) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_MANTICORE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_GX_IMPERATOR) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_I405X) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE) },
- { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, USB_DEVICE_ID_NINTENDO_WIIMOTE) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE2) },
{ }
};
#define USB_VENDOR_ID_KYE 0x0458
#define USB_DEVICE_ID_KYE_ERGO_525V 0x0087
#define USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE 0x0138
+#define USB_DEVICE_ID_GENIUS_MANTICORE 0x0153
#define USB_DEVICE_ID_GENIUS_GX_IMPERATOR 0x4018
#define USB_DEVICE_ID_KYE_GPEN_560 0x5003
#define USB_DEVICE_ID_KYE_EASYPEN_I405X 0x5010
#define USB_DEVICE_ID_NEXTWINDOW_TOUCHSCREEN 0x0003
#define USB_VENDOR_ID_NINTENDO 0x057e
-#define USB_VENDOR_ID_NINTENDO2 0x054c
#define USB_DEVICE_ID_NINTENDO_WIIMOTE 0x0306
#define USB_DEVICE_ID_NINTENDO_WIIMOTE2 0x0330
#define USB_DEVICE_ID_SUPER_DUAL_BOX_PRO 0x8802
#define USB_DEVICE_ID_SUPER_JOY_BOX_5_PRO 0x8804
+#define USB_VENDOR_ID_WISTRON 0x0fb8
+#define USB_DEVICE_ID_WISTRON_OPTICAL_TOUCH 0x1109
+
#define USB_VENDOR_ID_X_TENSIONS 0x1ae7
#define USB_DEVICE_ID_SPEEDLINK_VAD_CEZANNE 0x9001
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,
"Genius Gx Imperator Keyboard");
break;
+ case USB_DEVICE_ID_GENIUS_MANTICORE:
+ rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
+ "Genius Manticore Keyboard");
+ break;
}
return rdesc;
}
goto enabling_err;
}
break;
+ case USB_DEVICE_ID_GENIUS_MANTICORE:
+ /*
+ * The manticore keyboard needs to have all the interfaces
+ * opened at least once to be fully functional.
+ */
+ if (hid_hw_open(hdev))
+ hid_hw_close(hdev);
+ break;
}
return 0;
USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE,
USB_DEVICE_ID_GENIUS_GX_IMPERATOR) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,
+ USB_DEVICE_ID_GENIUS_MANTICORE) },
{ }
};
MODULE_DEVICE_TABLE(hid, kye_devices);
{ .driver_data = MT_CLS_NSMU,
MT_USB_DEVICE(USB_VENDOR_ID_UNITEC,
USB_DEVICE_ID_UNITEC_USB_TOUCH_0A19) },
+
+ /* Wistron panels */
+ { .driver_data = MT_CLS_NSMU,
+ MT_USB_DEVICE(USB_VENDOR_ID_WISTRON,
+ USB_DEVICE_ID_WISTRON_OPTICAL_TOUCH) },
+
/* XAT */
{ .driver_data = MT_CLS_NSMU,
MT_USB_DEVICE(USB_VENDOR_ID_XAT,
static void sensor_hub_fill_attr_info(
struct hid_sensor_hub_attribute_info *info,
- s32 index, s32 report_id, s32 units, s32 unit_expo, s32 size)
+ s32 index, s32 report_id, struct hid_field *field)
{
info->index = index;
info->report_id = report_id;
- info->units = units;
- info->unit_expo = unit_expo;
- info->size = size/8;
+ info->units = field->unit;
+ info->unit_expo = field->unit_exponent;
+ info->size = (field->report_size * field->report_count)/8;
+ info->logical_minimum = field->logical_minimum;
+ info->logical_maximum = field->logical_maximum;
}
static struct hid_sensor_hub_callbacks *sensor_hub_get_callback(
if (field->physical == usage_id &&
field->logical == attr_usage_id) {
sensor_hub_fill_attr_info(info, i, report->id,
- field->unit, field->unit_exponent,
- field->report_size *
- field->report_count);
+ field);
ret = 0;
} else {
for (j = 0; j < field->maxusage; ++j) {
field->usage[j].collection_index ==
collection_index) {
sensor_hub_fill_attr_info(info,
- i, report->id,
- field->unit,
- field->unit_exponent,
- field->report_size *
- field->report_count);
+ i, report->id, field);
ret = 0;
break;
}
ret = -ENOMEM;
goto err_free_names;
}
+ sd->hid_sensor_hub_client_devs[
+ sd->hid_sensor_client_cnt].id = PLATFORM_DEVID_AUTO;
sd->hid_sensor_hub_client_devs[
sd->hid_sensor_client_cnt].name = name;
sd->hid_sensor_hub_client_devs[
struct sony_sc {
unsigned long quirks;
+#ifdef CONFIG_SONY_FF
+ struct work_struct rumble_worker;
+ struct hid_device *hdev;
+ __u8 left;
+ __u8 right;
+#endif
+
void *extra;
};
}
#ifdef CONFIG_SONY_FF
-static int sony_play_effect(struct input_dev *dev, void *data,
- struct ff_effect *effect)
+static void sony_rumble_worker(struct work_struct *work)
{
+ struct sony_sc *sc = container_of(work, struct sony_sc, rumble_worker);
unsigned char buf[] = {
0x01,
0x00, 0xff, 0x00, 0xff, 0x00,
0xff, 0x27, 0x10, 0x00, 0x32,
0x00, 0x00, 0x00, 0x00, 0x00
};
- __u8 left;
- __u8 right;
+
+ buf[3] = sc->right;
+ buf[5] = sc->left;
+
+ sc->hdev->hid_output_raw_report(sc->hdev, buf, sizeof(buf),
+ HID_OUTPUT_REPORT);
+}
+
+static int sony_play_effect(struct input_dev *dev, void *data,
+ struct ff_effect *effect)
+{
struct hid_device *hid = input_get_drvdata(dev);
+ struct sony_sc *sc = hid_get_drvdata(hid);
if (effect->type != FF_RUMBLE)
return 0;
- left = effect->u.rumble.strong_magnitude / 256;
- right = effect->u.rumble.weak_magnitude ? 1 : 0;
-
- buf[3] = right;
- buf[5] = left;
+ sc->left = effect->u.rumble.strong_magnitude / 256;
+ sc->right = effect->u.rumble.weak_magnitude ? 1 : 0;
- return hid->hid_output_raw_report(hid, buf, sizeof(buf),
- HID_OUTPUT_REPORT);
+ schedule_work(&sc->rumble_worker);
+ return 0;
}
static int sony_init_ff(struct hid_device *hdev)
struct hid_input *hidinput = list_entry(hdev->inputs.next,
struct hid_input, list);
struct input_dev *input_dev = hidinput->input;
+ struct sony_sc *sc = hid_get_drvdata(hdev);
+
+ sc->hdev = hdev;
+ INIT_WORK(&sc->rumble_worker, sony_rumble_worker);
input_set_capability(input_dev, EV_FF, FF_RUMBLE);
return input_ff_create_memless(input_dev, NULL, sony_play_effect);
}
+static void sony_destroy_ff(struct hid_device *hdev)
+{
+ struct sony_sc *sc = hid_get_drvdata(hdev);
+
+ cancel_work_sync(&sc->rumble_worker);
+}
+
#else
static int sony_init_ff(struct hid_device *hdev)
{
return 0;
}
+
+static void sony_destroy_ff(struct hid_device *hdev)
+{
+}
#endif
static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
if (sc->quirks & BUZZ_CONTROLLER)
buzz_remove(hdev);
+ sony_destroy_ff(hdev);
+
hid_hw_stop(hdev);
}
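
The rumble rework above is an instance of a common pattern: the force-feedback callback can run in atomic context, where the potentially sleeping output report must not be sent, so the request is only recorded and the I/O is deferred to process context. A distilled sketch of the pattern, with invented names:

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct demo_ff {
	struct work_struct work;
	u8 left, right;			/* latest requested magnitudes */
};

static void demo_ff_worker(struct work_struct *work)
{
	struct demo_ff *d = container_of(work, struct demo_ff, work);

	/* sleeping I/O using d->left / d->right goes here */
}

/* may be called from atomic context: record and defer */
static void demo_ff_request(struct demo_ff *d, u8 left, u8 right)
{
	d->left = left;
	d->right = right;
	schedule_work(&d->work);
}

As in the sony hunks, INIT_WORK() at setup is paired with cancel_work_sync() at teardown so no worker touches freed state.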
goto done;
}
- if (vendor == USB_VENDOR_ID_NINTENDO ||
- vendor == USB_VENDOR_ID_NINTENDO2) {
+ if (vendor == USB_VENDOR_ID_NINTENDO) {
if (product == USB_DEVICE_ID_NINTENDO_WIIMOTE) {
devtype = WIIMOTE_DEV_GEN10;
goto done;
static const struct hid_device_id wiimote_hid_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO,
USB_DEVICE_ID_NINTENDO_WIIMOTE) },
- { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2,
- USB_DEVICE_ID_NINTENDO_WIIMOTE) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO,
USB_DEVICE_ID_NINTENDO_WIIMOTE2) },
{ }
hid->hid_get_raw_report = i2c_hid_get_raw_report;
hid->hid_output_raw_report = i2c_hid_output_raw_report;
hid->dev.parent = &client->dev;
- ACPI_HANDLE_SET(&hid->dev, ACPI_HANDLE(&client->dev));
+ ACPI_COMPANION_SET(&hid->dev, ACPI_COMPANION(&client->dev));
hid->bus = BUS_I2C;
hid->version = le16_to_cpu(ihid->hdesc.bcdVersion);
hid->vendor = le16_to_cpu(ihid->hdesc.wVendorID);
*/
struct uhid_create_req_compat *compat;
- compat = kmalloc(sizeof(*compat), GFP_KERNEL);
+ compat = kzalloc(sizeof(*compat), GFP_KERNEL);
if (!compat)
return -ENOMEM;
- Analog Devices ADT75
- Dallas Semiconductor DS75, DS1775 and DS7505
+ - Global Mixed-mode Technology (GMT) G751
- Maxim MAX6625 and MAX6626
- Microchip MCP980x
- National Semiconductor LM75, LM75A
/* Create a symlink to domain objects */
resource->domain_devices[i] = NULL;
- status = acpi_bus_get_device(element->reference.handle,
- &resource->domain_devices[i]);
- if (ACPI_FAILURE(status))
+ if (acpi_bus_get_device(element->reference.handle,
+ &resource->domain_devices[i]))
continue;
obj = resource->domain_devices[i];
#include <linux/err.h>
#include <acpi/acpi.h>
-#include <acpi/acpixf.h>
#include <acpi/acpi_drivers.h>
#include <acpi/acpi_bus.h>
* @last_update: time of last update (jiffies)
* @temperature: cached temperature measurement value
* @humidity: cached humidity measurement value
+ * @write_length: length for I2C measurement request
*/
struct hih6130 {
struct device *hwmon_dev;
unsigned long last_update;
int temperature;
int humidity;
+ size_t write_length;
};
/**
*/
if (time_after(jiffies, hih6130->last_update + HZ) || !hih6130->valid) {
- /* write to slave address, no data, to request a measurement */
- ret = i2c_master_send(client, tmp, 0);
+ /*
+ * Write to the slave address to request a measurement.
+ * According to the datasheet this should be a zero-length
+ * write, but some I2C bus drivers do not allow zero-length
+ * packets, so on those systems we write one dummy byte to
+ * keep sensor measurements working.
+ */
+ tmp[0] = 0;
+ ret = i2c_master_send(client, tmp, hih6130->write_length);
if (ret < 0)
goto out;
goto fail_remove_sysfs;
}
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_QUICK))
+ hih6130->write_length = 1;
+
return 0;
fail_remove_sysfs:
ds1775,
ds75,
ds7505,
+ g751,
lm75,
lm75a,
max6625,
data->resolution = 12;
data->sample_time = HZ / 4;
break;
+ case g751:
case lm75:
case lm75a:
data->resolution = 9;
{ "ds1775", ds1775, },
{ "ds75", ds75, },
{ "ds7505", ds7505, },
+ { "g751", g751, },
{ "lm75", lm75, },
{ "lm75a", lm75a, },
{ "max6625", max6625, },
{
if (rpm <= 0)
return 255;
+ if (rpm > 1350000)
+ return 1;
return clamp_val((1350000 + rpm * div / 2) / (rpm * div), 1, 254);
}
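
A sketch of why the bound sits at 1350000: this register format encodes roughly 1350000 / (rpm * div), so any limit above 1350000 RPM already rounds down to the minimum register value of 1, and the early return also keeps the intermediate 1350000 + rpm * div / 2 from overflowing when userspace writes an absurdly large limit.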
"lm90", client);
if (err < 0) {
dev_err(dev, "cannot request IRQ %d\n", client->irq);
- goto exit_remove_files;
+ goto exit_unregister;
}
}
return 0;
+exit_unregister:
+ hwmon_device_unregister(data->hwmon_dev);
exit_remove_files:
lm90_remove_files(client, data);
exit_restore:
static const u16 NCT6775_REG_TEMP[] = {
0x27, 0x150, 0x250, 0x62b, 0x62c, 0x62d };
+static const u16 NCT6775_REG_TEMP_MON[] = { 0x73, 0x75, 0x77 };
+
static const u16 NCT6775_REG_TEMP_CONFIG[ARRAY_SIZE(NCT6775_REG_TEMP)] = {
0, 0x152, 0x252, 0x628, 0x629, 0x62A };
static const u16 NCT6775_REG_TEMP_HYST[ARRAY_SIZE(NCT6775_REG_TEMP)] = {
0x137, 0x237, 0x337, 0x837, 0x937, 0xa37 };
static const u16 NCT6779_REG_TEMP[] = { 0x27, 0x150 };
+static const u16 NCT6779_REG_TEMP_MON[] = { 0x73, 0x75, 0x77, 0x79, 0x7b };
static const u16 NCT6779_REG_TEMP_CONFIG[ARRAY_SIZE(NCT6779_REG_TEMP)] = {
0x18, 0x152 };
static const u16 NCT6779_REG_TEMP_HYST[ARRAY_SIZE(NCT6779_REG_TEMP)] = {
#define NCT6791_REG_HM_IO_SPACE_LOCK_ENABLE 0x28
+static const u16 NCT6791_REG_WEIGHT_TEMP_SEL[6] = { 0, 0x239 };
+static const u16 NCT6791_REG_WEIGHT_TEMP_STEP[6] = { 0, 0x23a };
+static const u16 NCT6791_REG_WEIGHT_TEMP_STEP_TOL[6] = { 0, 0x23b };
+static const u16 NCT6791_REG_WEIGHT_DUTY_STEP[6] = { 0, 0x23c };
+static const u16 NCT6791_REG_WEIGHT_TEMP_BASE[6] = { 0, 0x23d };
+static const u16 NCT6791_REG_WEIGHT_DUTY_BASE[6] = { 0, 0x23e };
+
static const u16 NCT6791_REG_ALARM[NUM_REG_ALARM] = {
0x459, 0x45A, 0x45B, 0x568, 0x45D };
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x07, 0x08, 0x09 };
static const u16 NCT6106_REG_TEMP[] = { 0x10, 0x11, 0x12, 0x13, 0x14, 0x15 };
+static const u16 NCT6106_REG_TEMP_MON[] = { 0x18, 0x19, 0x1a };
static const u16 NCT6106_REG_TEMP_HYST[] = {
0xc3, 0xc7, 0xcb, 0xcf, 0xd3, 0xd7 };
static const u16 NCT6106_REG_TEMP_OVER[] = {
if (reg & 0x80)
data->pwm[2][i] = 0;
+ if (!data->REG_WEIGHT_TEMP_SEL[i])
+ continue;
+
reg = nct6775_read_value(data, data->REG_WEIGHT_TEMP_SEL[i]);
data->pwm_weight_temp_sel[i] = reg & 0x1f;
/* If weight is disabled, report weight source as 0 */
if (!(data->has_pwm & (1 << pwm)))
return 0;
+ if ((nr >= 14 && nr <= 18) || nr == 21) /* weight */
+ if (!data->REG_WEIGHT_TEMP_SEL[pwm])
+ return 0;
if (nr == 19 && data->REG_PWM[3] == NULL) /* pwm_max */
return 0;
if (nr == 20 && data->REG_PWM[4] == NULL) /* pwm_step */
&sensor_dev_template_pwm_step_down_time,
&sensor_dev_template_pwm_start,
&sensor_dev_template_pwm_floor,
- &sensor_dev_template_pwm_weight_temp_sel,
+ &sensor_dev_template_pwm_weight_temp_sel, /* 14 */
&sensor_dev_template_pwm_weight_temp_step,
&sensor_dev_template_pwm_weight_temp_step_tol,
&sensor_dev_template_pwm_weight_temp_step_base,
- &sensor_dev_template_pwm_weight_duty_step,
+ &sensor_dev_template_pwm_weight_duty_step, /* 18 */
&sensor_dev_template_pwm_max, /* 19 */
&sensor_dev_template_pwm_step, /* 20 */
&sensor_dev_template_pwm_weight_duty_base, /* 21 */
int i, s, err = 0;
int src, mask, available;
const u16 *reg_temp, *reg_temp_over, *reg_temp_hyst, *reg_temp_config;
- const u16 *reg_temp_alternate, *reg_temp_crit;
+ const u16 *reg_temp_mon, *reg_temp_alternate, *reg_temp_crit;
const u16 *reg_temp_crit_l = NULL, *reg_temp_crit_h = NULL;
- int num_reg_temp;
+ int num_reg_temp, num_reg_temp_mon;
u8 cr2a;
struct attribute_group *group;
struct device *hwmon_dev;
data->BEEP_BITS = NCT6106_BEEP_BITS;
reg_temp = NCT6106_REG_TEMP;
+ reg_temp_mon = NCT6106_REG_TEMP_MON;
num_reg_temp = ARRAY_SIZE(NCT6106_REG_TEMP);
+ num_reg_temp_mon = ARRAY_SIZE(NCT6106_REG_TEMP_MON);
reg_temp_over = NCT6106_REG_TEMP_OVER;
reg_temp_hyst = NCT6106_REG_TEMP_HYST;
reg_temp_config = NCT6106_REG_TEMP_CONFIG;
data->REG_BEEP = NCT6775_REG_BEEP;
reg_temp = NCT6775_REG_TEMP;
+ reg_temp_mon = NCT6775_REG_TEMP_MON;
num_reg_temp = ARRAY_SIZE(NCT6775_REG_TEMP);
+ num_reg_temp_mon = ARRAY_SIZE(NCT6775_REG_TEMP_MON);
reg_temp_over = NCT6775_REG_TEMP_OVER;
reg_temp_hyst = NCT6775_REG_TEMP_HYST;
reg_temp_config = NCT6775_REG_TEMP_CONFIG;
data->REG_BEEP = NCT6776_REG_BEEP;
reg_temp = NCT6775_REG_TEMP;
+ reg_temp_mon = NCT6775_REG_TEMP_MON;
num_reg_temp = ARRAY_SIZE(NCT6775_REG_TEMP);
+ num_reg_temp_mon = ARRAY_SIZE(NCT6775_REG_TEMP_MON);
reg_temp_over = NCT6775_REG_TEMP_OVER;
reg_temp_hyst = NCT6775_REG_TEMP_HYST;
reg_temp_config = NCT6776_REG_TEMP_CONFIG;
data->REG_BEEP = NCT6776_REG_BEEP;
reg_temp = NCT6779_REG_TEMP;
+ reg_temp_mon = NCT6779_REG_TEMP_MON;
num_reg_temp = ARRAY_SIZE(NCT6779_REG_TEMP);
+ num_reg_temp_mon = ARRAY_SIZE(NCT6779_REG_TEMP_MON);
reg_temp_over = NCT6779_REG_TEMP_OVER;
reg_temp_hyst = NCT6779_REG_TEMP_HYST;
reg_temp_config = NCT6779_REG_TEMP_CONFIG;
data->REG_PWM[0] = NCT6775_REG_PWM;
data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
data->REG_PWM[2] = NCT6775_REG_FAN_STOP_OUTPUT;
- data->REG_PWM[5] = NCT6775_REG_WEIGHT_DUTY_STEP;
- data->REG_PWM[6] = NCT6776_REG_WEIGHT_DUTY_BASE;
+ data->REG_PWM[5] = NCT6791_REG_WEIGHT_DUTY_STEP;
+ data->REG_PWM[6] = NCT6791_REG_WEIGHT_DUTY_BASE;
data->REG_PWM_READ = NCT6775_REG_PWM_READ;
data->REG_PWM_MODE = NCT6776_REG_PWM_MODE;
data->PWM_MODE_MASK = NCT6776_PWM_MODE_MASK;
data->REG_TEMP_OFFSET = NCT6779_REG_TEMP_OFFSET;
data->REG_TEMP_SOURCE = NCT6775_REG_TEMP_SOURCE;
data->REG_TEMP_SEL = NCT6775_REG_TEMP_SEL;
- data->REG_WEIGHT_TEMP_SEL = NCT6775_REG_WEIGHT_TEMP_SEL;
- data->REG_WEIGHT_TEMP[0] = NCT6775_REG_WEIGHT_TEMP_STEP;
- data->REG_WEIGHT_TEMP[1] = NCT6775_REG_WEIGHT_TEMP_STEP_TOL;
- data->REG_WEIGHT_TEMP[2] = NCT6775_REG_WEIGHT_TEMP_BASE;
+ data->REG_WEIGHT_TEMP_SEL = NCT6791_REG_WEIGHT_TEMP_SEL;
+ data->REG_WEIGHT_TEMP[0] = NCT6791_REG_WEIGHT_TEMP_STEP;
+ data->REG_WEIGHT_TEMP[1] = NCT6791_REG_WEIGHT_TEMP_STEP_TOL;
+ data->REG_WEIGHT_TEMP[2] = NCT6791_REG_WEIGHT_TEMP_BASE;
data->REG_ALARM = NCT6791_REG_ALARM;
data->REG_BEEP = NCT6776_REG_BEEP;
reg_temp = NCT6779_REG_TEMP;
+ reg_temp_mon = NCT6779_REG_TEMP_MON;
num_reg_temp = ARRAY_SIZE(NCT6779_REG_TEMP);
+ num_reg_temp_mon = ARRAY_SIZE(NCT6779_REG_TEMP_MON);
reg_temp_over = NCT6779_REG_TEMP_OVER;
reg_temp_hyst = NCT6779_REG_TEMP_HYST;
reg_temp_config = NCT6779_REG_TEMP_CONFIG;
s++;
}
+ /*
+ * Repeat with temperatures used for fan control.
+ * This set of registers does not support limits.
+ */
+ for (i = 0; i < num_reg_temp_mon; i++) {
+ if (reg_temp_mon[i] == 0)
+ continue;
+
+ src = nct6775_read_value(data, data->REG_TEMP_SEL[i]) & 0x1f;
+ if (!src || (mask & (1 << src)))
+ continue;
+
+ if (src >= data->temp_label_num ||
+ !strlen(data->temp_label[src])) {
+ dev_info(dev,
+ "Invalid temperature source %d at index %d, source register 0x%x, temp register 0x%x\n",
+ src, i, data->REG_TEMP_SEL[i],
+ reg_temp_mon[i]);
+ continue;
+ }
+
+ mask |= 1 << src;
+
+ /* Use fixed index for SYSTIN(1), CPUTIN(2), AUXTIN(3) */
+ if (src <= data->temp_fixed_num) {
+ if (data->have_temp & (1 << (src - 1)))
+ continue;
+ data->have_temp |= 1 << (src - 1);
+ data->have_temp_fixed |= 1 << (src - 1);
+ data->reg_temp[0][src - 1] = reg_temp_mon[i];
+ data->temp_src[src - 1] = src;
+ continue;
+ }
+
+ if (s >= NUM_TEMP)
+ continue;
+
+ /* Use dynamic index for other sources */
+ data->have_temp |= 1 << s;
+ data->reg_temp[0][s] = reg_temp_mon[i];
+ data->temp_src[s] = src;
+ s++;
+ }
+
#ifdef USE_ALTERNATE
/*
* Go through the list of alternate temp registers and enable
{
if (rpm <= 0)
return 255;
+ if (rpm > 1350000)
+ return 1;
return clamp_val((1350000 + rpm * div / 2) / (rpm * div), 1, 254);
}
*/
static inline u8 FAN_TO_REG(long rpm, int div)
{
- if (rpm == 0)
+ if (rpm <= 0 || rpm > 1310720)
return 0;
return clamp_val(1310720 / (rpm * div), 1, 255);
}
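
Worked through: rpm = 2400 with div = 2 gives 1310720 / 4800 = 273, which the clamp caps at 255. Anything above 1310720 RPM would divide to 0, which the old clamp then distorted into 1; the new test stores 0 directly instead, reusing the encoding the function already used for rpm <= 0.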
if (err)
return err;
val = clamp_val(val, 0, 255);
+ val = DIV_ROUND_CLOSEST(val, 0x11);
mutex_lock(&data->update_lock);
- data->pwm[nr] = val;
+ data->pwm[nr] = val * 0x11;
+ val |= w83l786ng_read_value(client, W83L786NG_REG_PWM[nr]) & 0xf0;
w83l786ng_write_value(client, W83L786NG_REG_PWM[nr], val);
mutex_unlock(&data->update_lock);
return count;
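
The arithmetic quantizes the sysfs 0-255 scale into the chip's 4-bit duty field: writing 200 stores DIV_ROUND_CLOSEST(200, 0x11) = 12 in the low nibble, the cached value reads back as 12 * 0x11 = 204, and OR-ing in the register's old upper nibble preserves the unrelated control bits sharing the byte.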
mutex_lock(&data->update_lock);
reg = w83l786ng_read_value(client, W83L786NG_REG_FAN_CFG);
data->pwm_enable[nr] = val;
- reg &= ~(0x02 << W83L786NG_PWM_ENABLE_SHIFT[nr]);
+ reg &= ~(0x03 << W83L786NG_PWM_ENABLE_SHIFT[nr]);
reg |= (val - 1) << W83L786NG_PWM_ENABLE_SHIFT[nr];
w83l786ng_write_value(client, W83L786NG_REG_FAN_CFG, reg);
mutex_unlock(&data->update_lock);
((pwmcfg >> W83L786NG_PWM_MODE_SHIFT[i]) & 1)
? 0 : 1;
data->pwm_enable[i] =
- ((pwmcfg >> W83L786NG_PWM_ENABLE_SHIFT[i]) & 2) + 1;
- data->pwm[i] = w83l786ng_read_value(client,
- W83L786NG_REG_PWM[i]);
+ ((pwmcfg >> W83L786NG_PWM_ENABLE_SHIFT[i]) & 3) + 1;
+ data->pwm[i] =
+ (w83l786ng_read_value(client, W83L786NG_REG_PWM[i])
+ & 0x0f) * 0x11;
}
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/io.h>
-#include <linux/clk.h>
#include <linux/slab.h>
/* Hardware register offsets and field definitions */
{.compatible = "brcm,kona-i2c",},
{},
};
-MODULE_DEVICE_TABLE(of, kona_i2c_of_match);
+MODULE_DEVICE_TABLE(of, bcm_kona_i2c_of_match);
static struct platform_driver bcm_kona_i2c_driver = {
.driver = {
strlcpy(adap->name, "bcm2835 I2C adapter", sizeof(adap->name));
adap->algo = &bcm2835_i2c_algo;
adap->dev.parent = &pdev->dev;
+ adap->dev.of_node = pdev->dev.of_node;
bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, 0);
static inline void davinci_i2c_write_reg(struct davinci_i2c_dev *i2c_dev,
int reg, u16 val)
{
- __raw_writew(val, i2c_dev->base + reg);
+ writew_relaxed(val, i2c_dev->base + reg);
}
static inline u16 davinci_i2c_read_reg(struct davinci_i2c_dev *i2c_dev, int reg)
{
- return __raw_readw(i2c_dev->base + reg);
+ return readw_relaxed(i2c_dev->base + reg);
}
/* Generate a pulse on the i2c clock pin. */
#define USB_VENDOR_ID_DIOLAN 0x0abf
#define USB_DEVICE_ID_DIOLAN_U2C 0x3370
-#define DIOLAN_OUT_EP 0x02
-#define DIOLAN_IN_EP 0x84
/* commands via USB, must match command ids in the firmware */
#define CMD_I2C_READ 0x01
struct i2c_diolan_u2c {
u8 obuffer[DIOLAN_OUTBUF_LEN]; /* output buffer */
u8 ibuffer[DIOLAN_INBUF_LEN]; /* input buffer */
+ int ep_in, ep_out; /* Endpoints */
struct usb_device *usb_dev; /* the usb device for this device */
struct usb_interface *interface;/* the interface for this device */
struct i2c_adapter adapter; /* i2c related things */
return -EINVAL;
ret = usb_bulk_msg(dev->usb_dev,
- usb_sndbulkpipe(dev->usb_dev, DIOLAN_OUT_EP),
+ usb_sndbulkpipe(dev->usb_dev, dev->ep_out),
dev->obuffer, dev->olen, &actual,
DIOLAN_USB_TIMEOUT);
if (!ret) {
tmpret = usb_bulk_msg(dev->usb_dev,
usb_rcvbulkpipe(dev->usb_dev,
- DIOLAN_IN_EP),
+ dev->ep_in),
dev->ibuffer,
sizeof(dev->ibuffer), &actual,
DIOLAN_USB_TIMEOUT);
int ret;
ret = usb_bulk_msg(dev->usb_dev,
- usb_rcvbulkpipe(dev->usb_dev, DIOLAN_IN_EP),
+ usb_rcvbulkpipe(dev->usb_dev, dev->ep_in),
dev->ibuffer, sizeof(dev->ibuffer), &actual,
DIOLAN_USB_TIMEOUT);
if (ret < 0 || actual == 0)
static int diolan_u2c_probe(struct usb_interface *interface,
const struct usb_device_id *id)
{
+ struct usb_host_interface *hostif = interface->cur_altsetting;
struct i2c_diolan_u2c *dev;
int ret;
+ if (hostif->desc.bInterfaceNumber != 0
+ || hostif->desc.bNumEndpoints < 2)
+ return -ENODEV;
+
/* allocate memory for our device state and initialize it */
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (dev == NULL) {
ret = -ENOMEM;
goto error;
}
+ dev->ep_out = hostif->endpoint[0].desc.bEndpointAddress;
+ dev->ep_in = hostif->endpoint[1].desc.bEndpointAddress;
dev->usb_dev = usb_get_dev(interface_to_usbdev(interface));
dev->interface = interface;
dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__);
- clk_prepare_enable(i2c_imx->clk);
+ result = clk_prepare_enable(i2c_imx->clk);
+ if (result)
+ return result;
imx_i2c_write_reg(i2c_imx->ifdr, i2c_imx, IMX_I2C_IFDR);
/* Enable I2C controller */
imx_i2c_write_reg(i2c_imx->hwdata->i2sr_clr_opcode, i2c_imx, IMX_I2C_I2SR);
static inline void omap_i2c_write_reg(struct omap_i2c_dev *i2c_dev,
int reg, u16 val)
{
- __raw_writew(val, i2c_dev->base +
+ writew_relaxed(val, i2c_dev->base +
(i2c_dev->regs[reg] << i2c_dev->reg_shift));
}
static inline u16 omap_i2c_read_reg(struct omap_i2c_dev *i2c_dev, int reg)
{
- return __raw_readw(i2c_dev->base +
+ return readw_relaxed(i2c_dev->base +
(i2c_dev->regs[reg] << i2c_dev->reg_shift));
}
};
#ifdef CONFIG_OF
+static struct omap_i2c_bus_platform_data omap2420_pdata = {
+ .rev = OMAP_I2C_IP_VERSION_1,
+ .flags = OMAP_I2C_FLAG_NO_FIFO |
+ OMAP_I2C_FLAG_SIMPLE_CLOCK |
+ OMAP_I2C_FLAG_16BIT_DATA_REG |
+ OMAP_I2C_FLAG_BUS_SHIFT_2,
+};
+
+static struct omap_i2c_bus_platform_data omap2430_pdata = {
+ .rev = OMAP_I2C_IP_VERSION_1,
+ .flags = OMAP_I2C_FLAG_BUS_SHIFT_2 |
+ OMAP_I2C_FLAG_FORCE_19200_INT_CLK,
+};
+
static struct omap_i2c_bus_platform_data omap3_pdata = {
.rev = OMAP_I2C_IP_VERSION_1,
.flags = OMAP_I2C_FLAG_BUS_SHIFT_2,
.compatible = "ti,omap3-i2c",
.data = &omap3_pdata,
},
+ {
+ .compatible = "ti,omap2430-i2c",
+ .data = &omap2430_pdata,
+ },
+ {
+ .compatible = "ti,omap2420-i2c",
+ .data = &omap2420_pdata,
+ },
{ },
};
MODULE_DEVICE_TABLE(of, omap_i2c_of_match);
* Read the Rev register's high bits [15:14], i.e. the scheme field;
* a value of 1 indicates version 2. On omap1/2/3, offset 4 is the
* IE register, whose bits [15:14] are 0 at reset.
* Also since the omap_i2c_read_reg uses reg_map_ip_* a
- * raw_readw is done.
+ * readw_relaxed is done.
*/
- rev = __raw_readw(dev->base + 0x04);
+ rev = readw_relaxed(dev->base + 0x04);
dev->scheme = OMAP_I2C_SCHEME(rev);
switch (dev->scheme) {
}
EXPORT_SYMBOL_GPL(i2c_unlock_adapter);
+static void i2c_dev_set_name(struct i2c_adapter *adap,
+ struct i2c_client *client)
+{
+ struct acpi_device *adev = ACPI_COMPANION(&client->dev);
+
+ if (adev) {
+ dev_set_name(&client->dev, "i2c-%s", acpi_dev_name(adev));
+ return;
+ }
+
+ /* For 10-bit clients, add an arbitrary offset to avoid collisions */
+ dev_set_name(&client->dev, "%d-%04x", i2c_adapter_id(adap),
+ client->addr | ((client->flags & I2C_CLIENT_TEN)
+ ? 0xa000 : 0));
+}
+
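
For concreteness, the names this helper produces (the ACPI identifier is an invented example): a 7-bit client at address 0x50 on adapter 1 keeps the classic "1-0050"; the same address with I2C_CLIENT_TEN set becomes "1-a050"; and a client with an ACPI companion is now named after it, e.g. "i2c-INT33F4:00".
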
/**
* i2c_new_device - instantiate an i2c device
* @adap: the adapter managing the device
client->dev.bus = &i2c_bus_type;
client->dev.type = &i2c_client_type;
client->dev.of_node = info->of_node;
- ACPI_HANDLE_SET(&client->dev, info->acpi_node.handle);
+ ACPI_COMPANION_SET(&client->dev, info->acpi_node.companion);
- /* For 10-bit clients, add an arbitrary offset to avoid collisions */
- dev_set_name(&client->dev, "%d-%04x", i2c_adapter_id(adap),
- client->addr | ((client->flags & I2C_CLIENT_TEN)
- ? 0xa000 : 0));
+ i2c_dev_set_name(adap, client);
status = device_register(&client->dev);
if (status)
goto out_err;
return AE_OK;
memset(&info, 0, sizeof(info));
- info.acpi_node.handle = handle;
+ info.acpi_node.companion = adev;
info.irq = -1;
INIT_LIST_HEAD(&resource_list);
priv->adap.algo = &priv->algo;
priv->adap.algo_data = priv;
priv->adap.dev.parent = &parent->dev;
+ priv->adap.retries = parent->retries;
+ priv->adap.timeout = parent->timeout;
/* Sanity check on class */
if (i2c_mux_parent_classes(parent) & class)
* Copyright (C) 2006 Hannes Reinecke
*/
+#include <linux/acpi.h>
#include <linux/ata.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dmi.h>
#include <linux/module.h>
-#include <acpi/acpi_bus.h>
-
#define REGS_PER_GTF 7
struct GTM_buffer {
DEBPRINT("ENTER: pci %02x:%02x.%01x\n", bus, devnum, func);
- dev_handle = DEVICE_ACPI_HANDLE(dev);
+ dev_handle = ACPI_HANDLE(dev);
if (!dev_handle) {
DEBPRINT("no acpi handle for device\n");
goto err;
/*
* intel_idle.c - native hardware idle loop for modern Intel processors
*
- * Copyright (c) 2010, Intel Corporation.
+ * Copyright (c) 2013, Intel Corporation.
* Len Brown <len.brown@intel.com>
*
* This program is free software; you can redistribute it and/or modify it
{
.enter = NULL }
};
+static struct cpuidle_state avn_cstates[] __initdata = {
+ {
+ .name = "C1-AVN",
+ .desc = "MWAIT 0x00",
+ .flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_TIME_VALID,
+ .exit_latency = 2,
+ .target_residency = 2,
+ .enter = &intel_idle },
+ {
+ .name = "C6-AVN",
+ .desc = "MWAIT 0x51",
+ .flags = MWAIT2flg(0x51) | CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED,
+ .exit_latency = 15,
+ .target_residency = 45,
+ .enter = &intel_idle },
+};
/**
* intel_idle
if (!current_set_polling_and_test()) {
+ if (this_cpu_has(X86_FEATURE_CLFLUSH_MONITOR))
+ clflush((void *)&current_thread_info()->flags);
+
__monitor((void *)&current_thread_info()->flags, 0, 0);
smp_mb();
if (!need_resched())
.disable_promotion_to_c1e = true,
};
+static const struct idle_cpu idle_cpu_avn = {
+ .state_table = avn_cstates,
+ .disable_promotion_to_c1e = true,
+};
+
#define ICPU(model, cpu) \
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu }
ICPU(0x3f, idle_cpu_hsw),
ICPU(0x45, idle_cpu_hsw),
ICPU(0x46, idle_cpu_hsw),
+ ICPU(0x4D, idle_cpu_avn),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids);
error_iio_unreg:
iio_device_unregister(indio_dev);
error_remove_trigger:
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&accel_state->common_attributes);
error_unreg_buffer_funcs:
iio_triggered_buffer_cleanup(indio_dev);
error_free_dev_mem:
{
struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data;
struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct accel_3d_state *accel_state = iio_priv(indio_dev);
sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_ACCEL_3D);
iio_device_unregister(indio_dev);
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&accel_state->common_attributes);
iio_triggered_buffer_cleanup(indio_dev);
kfree(indio_dev->channels);
mutex_lock(&st->buf_lock);
st->tx[0] = KXSD9_READ(address);
ret = spi_sync_transfer(st->us, xfers, ARRAY_SIZE(xfers));
- if (ret)
- return ret;
- return (((u16)(st->rx[0])) << 8) | (st->rx[1] & 0xF0);
+ if (!ret)
+ ret = (((u16)(st->rx[0])) << 8) | (st->rx[1] & 0xF0);
+ mutex_unlock(&st->buf_lock);
+ return ret;
}
static IIO_CONST_ATTR(accel_scale_available,
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = 1,
.scan_index = 1,
- .scan_type = IIO_ST('u', 12, 16, 0),
+ .scan_type = {
+ .sign = 'u',
+ .realbits = 12,
+ .storagebits = 16,
+ .shift = 0,
+ .endianness = IIO_BE,
+ },
},
.channel[1] = {
.type = IIO_VOLTAGE,
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = 0,
.scan_index = 0,
- .scan_type = IIO_ST('u', 12, 16, 0),
+ .scan_type = {
+ .sign = 'u',
+ .realbits = 12,
+ .storagebits = 16,
+ .shift = 0,
+ .endianness = IIO_BE,
+ },
},
.channel[2] = IIO_CHAN_SOFT_TIMESTAMP(2),
.int_vref_mv = 2500,
} else {
if (!st->caps->has_tsmr) {
dev_err(&pdev->dev, "We don't support non-TSMR adc\n");
+ ret = -ENODEV;
goto error_disable_adc_clk;
}
/* sample rates to sign extension table */
static const int mcp3422_sign_extend[4] = {
- [MCP3422_SRATE_240] = 12,
- [MCP3422_SRATE_60] = 14,
- [MCP3422_SRATE_15] = 16,
- [MCP3422_SRATE_3] = 18 };
+ [MCP3422_SRATE_240] = 11,
+ [MCP3422_SRATE_60] = 13,
+ [MCP3422_SRATE_15] = 15,
+ [MCP3422_SRATE_3] = 17 };
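
The off-by-one matters because sign_extend32() takes the zero-based index of the sign bit, not the sample width; a 12-bit conversion keeps its sign in bit 11. A standalone userspace sketch of the same arithmetic (the helper mirrors the kernel's sign_extend32()):

#include <stdint.h>
#include <stdio.h>

/* index is the 0-based position of the sign bit, as in the kernel helper */
static int32_t sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}

int main(void)
{
	/* 0x800 is the most negative 12-bit two's-complement sample */
	printf("%d\n", (int)sign_extend32(0x800, 11));	/* -2048: correct */
	printf("%d\n", (int)sign_extend32(0x800, 12));	/*  2048: the old bug */
	return 0;
}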
/* Client data (each client gets its own) */
struct mcp3422 {
unsigned long flags,
const struct iio_buffer_setup_ops *setup_ops)
{
+ struct iio_buffer *buffer;
int ret;
- indio_dev->buffer = iio_kfifo_allocate(indio_dev);
- if (!indio_dev->buffer)
+ buffer = iio_kfifo_allocate(indio_dev);
+ if (!buffer)
return -ENOMEM;
+ iio_device_attach_buffer(indio_dev, buffer);
+
ret = request_threaded_irq(irq, pollfunc_th, pollfunc_bh,
flags, indio_dev->name, indio_dev);
if (ret)
If this driver is compiled as a module, it will be named
hid-sensor-trigger.
-config HID_SENSOR_ENUM_BASE_QUIRKS
- bool "ENUM base quirks for HID Sensor IIO drivers"
- depends on HID_SENSOR_IIO_COMMON
- help
- Say yes here to build support for sensor hub FW using
- enumeration, which is using 1 as base instead of 0.
- Since logical minimum is still set 0 instead of 1,
- there is no easy way to differentiate.
-
endmenu
{
struct hid_sensor_common *st = iio_trigger_get_drvdata(trig);
int state_val;
+ int report_val;
if (state) {
if (sensor_hub_device_open(st->hsdev))
return -EIO;
- } else
+ state_val =
+ HID_USAGE_SENSOR_PROP_POWER_STATE_D0_FULL_POWER_ENUM;
+ report_val =
+ HID_USAGE_SENSOR_PROP_REPORTING_STATE_ALL_EVENTS_ENUM;
+
+ } else {
sensor_hub_device_close(st->hsdev);
+ state_val =
+ HID_USAGE_SENSOR_PROP_POWER_STATE_D4_POWER_OFF_ENUM;
+ report_val =
+ HID_USAGE_SENSOR_PROP_REPORTING_STATE_NO_EVENTS_ENUM;
+ }
- state_val = state ? 1 : 0;
- if (IS_ENABLED(CONFIG_HID_SENSOR_ENUM_BASE_QUIRKS))
- ++state_val;
st->data_ready = state;
+ state_val += st->power_state.logical_minimum;
+ report_val += st->report_state.logical_minimum;
sensor_hub_set_feature(st->hsdev, st->power_state.report_id,
st->power_state.index,
(s32)state_val);
sensor_hub_set_feature(st->hsdev, st->report_state.report_id,
st->report_state.index,
- (s32)state_val);
+ (s32)report_val);
return 0;
}
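
The net effect is that the build-time quirk removed above becomes unnecessary: the enum base is now taken from the report descriptor itself, so adding logical_minimum yields 1-based values for firmware whose enumerations start at 1 and 0-based values for firmware starting at 0, with no compile-time guess.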
-void hid_sensor_remove_trigger(struct iio_dev *indio_dev)
+void hid_sensor_remove_trigger(struct hid_sensor_common *attrb)
{
- iio_trigger_unregister(indio_dev->trig);
- iio_trigger_free(indio_dev->trig);
- indio_dev->trig = NULL;
+ iio_trigger_unregister(attrb->trigger);
+ iio_trigger_free(attrb->trigger);
}
EXPORT_SYMBOL(hid_sensor_remove_trigger);
dev_err(&indio_dev->dev, "Trigger Register Failed\n");
goto error_free_trig;
}
- indio_dev->trig = trig;
+ indio_dev->trig = attrb->trigger = trig;
return ret;
int hid_sensor_setup_trigger(struct iio_dev *indio_dev, const char *name,
struct hid_sensor_common *attrb);
-void hid_sensor_remove_trigger(struct iio_dev *indio_dev);
+void hid_sensor_remove_trigger(struct hid_sensor_common *attrb);
#endif
error_iio_unreg:
iio_device_unregister(indio_dev);
error_remove_trigger:
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&gyro_state->common_attributes);
error_unreg_buffer_funcs:
iio_triggered_buffer_cleanup(indio_dev);
error_free_dev_mem:
{
struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data;
struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct gyro_3d_state *gyro_state = iio_priv(indio_dev);
sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_GYRO_3D);
iio_device_unregister(indio_dev);
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&gyro_state->common_attributes);
iio_triggered_buffer_cleanup(indio_dev);
kfree(indio_dev->channels);
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = ADIS16448_BARO_OUT,
.scan_index = ADIS16400_SCAN_BARO,
- .scan_type = IIO_ST('s', 16, 16, 0),
+ .scan_type = {
+ .sign = 's',
+ .realbits = 16,
+ .storagebits = 16,
+ .endianness = IIO_BE,
+ },
},
ADIS16400_TEMP_CHAN(ADIS16448_TEMP_OUT, 12),
IIO_CHAN_SOFT_TIMESTAMP(11)
depends on I2C
select IIO_BUFFER
select IIO_TRIGGERED_BUFFER
+ select IRQ_WORK
help
Say Y here if you have a Sharp GP2AP020A00F proximity/ALS combo-chip
hooked to an I2C bus.
config TCS3472
tristate "TAOS TCS3472 color light-to-digital converter"
depends on I2C
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
help
If you say yes here you get support for the TAOS TCS3472
family of color light-to-digital converters with IR filter.
return -EINVAL;
}
- return IIO_VAL_INT_PLUS_MICRO;
+ return IIO_VAL_INT;
}
static int cm36651_write_int_time(struct cm36651_data *cm36651,
error_iio_unreg:
iio_device_unregister(indio_dev);
error_remove_trigger:
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&als_state->common_attributes);
error_unreg_buffer_funcs:
iio_triggered_buffer_cleanup(indio_dev);
error_free_dev_mem:
{
struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data;
struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct als_state *als_state = iio_priv(indio_dev);
sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_ALS);
iio_device_unregister(indio_dev);
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&als_state->common_attributes);
iio_triggered_buffer_cleanup(indio_dev);
kfree(indio_dev->channels);
config MAG3110
tristate "Freescale MAG3110 3-Axis Magnetometer"
depends on I2C
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
help
Say yes here to build support for the Freescale MAG3110 3-Axis
magnetometer.
error_iio_unreg:
iio_device_unregister(indio_dev);
error_remove_trigger:
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&magn_state->common_attributes);
error_unreg_buffer_funcs:
iio_triggered_buffer_cleanup(indio_dev);
error_free_dev_mem:
{
struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data;
struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct magn_3d_state *magn_state = iio_priv(indio_dev);
sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_COMPASS_3D);
iio_device_unregister(indio_dev);
- hid_sensor_remove_trigger(indio_dev);
+ hid_sensor_remove_trigger(&magn_state->common_attributes);
iio_triggered_buffer_cleanup(indio_dev);
kfree(indio_dev->channels);
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SAMP_FREQ) | \
BIT(IIO_CHAN_INFO_SCALE), \
.scan_index = idx, \
- .scan_type = IIO_ST('s', 16, 16, IIO_BE), \
+ .scan_type = { \
+ .sign = 's', \
+ .realbits = 16, \
+ .storagebits = 16, \
+ .endianness = IIO_BE, \
+ }, \
}
static const struct iio_chan_spec mag3110_channels[] = {
static void rem_ref(struct iw_cm_id *cm_id)
{
struct iwcm_id_private *cm_id_priv;
+ int cb_destroy;
+
cm_id_priv = container_of(cm_id, struct iwcm_id_private, id);
- if (iwcm_deref_id(cm_id_priv) &&
- test_bit(IWCM_F_CALLBACK_DESTROY, &cm_id_priv->flags)) {
+
+ /*
+ * Test bit before deref in case the cm_id gets freed on another
+ * thread.
+ */
+ cb_destroy = test_bit(IWCM_F_CALLBACK_DESTROY, &cm_id_priv->flags);
+ if (iwcm_deref_id(cm_id_priv) && cb_destroy) {
BUG_ON(!list_empty(&cm_id_priv->work_list));
free_cm_id(cm_id_priv);
}
#define INIT_UDATA(udata, ibuf, obuf, ilen, olen) \
do { \
- (udata)->inbuf = (void __user *) (ibuf); \
+ (udata)->inbuf = (const void __user *) (ibuf); \
(udata)->outbuf = (void __user *) (obuf); \
(udata)->inlen = (ilen); \
(udata)->outlen = (olen); \
} while (0)
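+/*
+ * Like INIT_UDATA(), but a zero length maps the matching buffer pointer
+ * to NULL so handlers can tell "no data" apart from a stale user pointer.
+ */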
+#define INIT_UDATA_BUF_OR_NULL(udata, ibuf, obuf, ilen, olen) \
+ do { \
+ (udata)->inbuf = (ilen) ? (const void __user *) (ibuf) : NULL; \
+ (udata)->outbuf = (olen) ? (void __user *) (obuf) : NULL; \
+ (udata)->inlen = (ilen); \
+ (udata)->outlen = (olen); \
+ } while (0)
+
/*
* Our lifetime rules for these structs are the following:
*
static int kern_spec_to_ib_spec(struct ib_uverbs_flow_spec *kern_spec,
union ib_flow_spec *ib_spec)
{
+ if (kern_spec->reserved)
+ return -EINVAL;
+
ib_spec->type = kern_spec->type;
switch (ib_spec->type) {
void *ib_spec;
int i;
+ if (ucore->inlen < sizeof(cmd))
+ return -EINVAL;
+
if (ucore->outlen < sizeof(resp))
return -ENOSPC;
(cmd.flow_attr.num_of_specs * sizeof(struct ib_uverbs_flow_spec)))
return -EINVAL;
+ if (cmd.flow_attr.reserved[0] ||
+ cmd.flow_attr.reserved[1])
+ return -EINVAL;
+
if (cmd.flow_attr.num_of_specs) {
kern_flow_attr = kmalloc(sizeof(*kern_flow_attr) + cmd.flow_attr.size,
GFP_KERNEL);
if (cmd.flow_attr.size || (i != flow_attr->num_of_specs)) {
pr_warn("create flow failed, flow %d: %d bytes left from uverb cmd\n",
i, cmd.flow_attr.size);
+ err = -EINVAL;
goto err_free;
}
flow_id = ib_create_flow(qp, flow_attr, IB_FLOW_DOMAIN_USER);
struct ib_uobject *uobj;
int ret;
+ if (ucore->inlen < sizeof(cmd))
+ return -EINVAL;
+
ret = ib_copy_from_udata(&cmd, ucore, sizeof(cmd));
if (ret)
return ret;
+ if (cmd.comp_mask)
+ return -EINVAL;
+
uobj = idr_write_uobj(&ib_uverbs_rule_idr, cmd.flow_handle,
file->ucontext);
if (!uobj)
if ((hdr.in_words + ex_hdr.provider_in_words) * 8 != count)
return -EINVAL;
+ if (ex_hdr.cmd_hdr_reserved)
+ return -EINVAL;
+
if (ex_hdr.response) {
if (!hdr.out_words && !ex_hdr.provider_out_words)
return -EINVAL;
+
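+		/* Verify the whole response area is writable before dispatching. */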
+ if (!access_ok(VERIFY_WRITE,
+ (void __user *) (unsigned long) ex_hdr.response,
+ (hdr.out_words + ex_hdr.provider_out_words) * 8))
+ return -EFAULT;
} else {
if (hdr.out_words || ex_hdr.provider_out_words)
return -EINVAL;
}
- INIT_UDATA(&ucore,
- (hdr.in_words) ? buf : 0,
- (unsigned long)ex_hdr.response,
- hdr.in_words * 8,
- hdr.out_words * 8);
-
- INIT_UDATA(&uhw,
- (ex_hdr.provider_in_words) ? buf + ucore.inlen : 0,
- (ex_hdr.provider_out_words) ? (unsigned long)ex_hdr.response + ucore.outlen : 0,
- ex_hdr.provider_in_words * 8,
- ex_hdr.provider_out_words * 8);
+ INIT_UDATA_BUF_OR_NULL(&ucore, buf, (unsigned long) ex_hdr.response,
+ hdr.in_words * 8, hdr.out_words * 8);
+
+ INIT_UDATA_BUF_OR_NULL(&uhw,
+ buf + ucore.inlen,
+ (unsigned long) ex_hdr.response + ucore.outlen,
+ ex_hdr.provider_in_words * 8,
+ ex_hdr.provider_out_words * 8);
err = uverbs_ex_cmd_table[command](file,
&ucore,
return c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
}
-#define VLAN_NONE 0xfff
-#define FILTER_SEL_VLAN_NONE 0xffff
-#define FILTER_SEL_WIDTH_P_FC (3+1) /* port uses 3 bits, FCoE one bit */
-#define FILTER_SEL_WIDTH_VIN_P_FC \
- (6 + 7 + FILTER_SEL_WIDTH_P_FC) /* 6 bits are unused, VF uses 7 bits*/
-#define FILTER_SEL_WIDTH_TAG_P_FC \
- (3 + FILTER_SEL_WIDTH_VIN_P_FC) /* PF uses 3 bits */
-#define FILTER_SEL_WIDTH_VLD_TAG_P_FC (1 + FILTER_SEL_WIDTH_TAG_P_FC)
-
-static unsigned int select_ntuple(struct c4iw_dev *dev, struct dst_entry *dst,
- struct l2t_entry *l2t)
-{
- unsigned int ntuple = 0;
- u32 viid;
-
- switch (dev->rdev.lldi.filt_mode) {
-
- /* default filter mode */
- case HW_TPL_FR_MT_PR_IV_P_FC:
- if (l2t->vlan == VLAN_NONE)
- ntuple |= FILTER_SEL_VLAN_NONE << FILTER_SEL_WIDTH_P_FC;
- else {
- ntuple |= l2t->vlan << FILTER_SEL_WIDTH_P_FC;
- ntuple |= 1 << FILTER_SEL_WIDTH_TAG_P_FC;
- }
- ntuple |= l2t->lport << S_PORT | IPPROTO_TCP <<
- FILTER_SEL_WIDTH_VLD_TAG_P_FC;
- break;
- case HW_TPL_FR_MT_PR_OV_P_FC: {
- viid = cxgb4_port_viid(l2t->neigh->dev);
-
- ntuple |= FW_VIID_VIN_GET(viid) << FILTER_SEL_WIDTH_P_FC;
- ntuple |= FW_VIID_PFN_GET(viid) << FILTER_SEL_WIDTH_VIN_P_FC;
- ntuple |= FW_VIID_VIVLD_GET(viid) << FILTER_SEL_WIDTH_TAG_P_FC;
- ntuple |= l2t->lport << S_PORT | IPPROTO_TCP <<
- FILTER_SEL_WIDTH_VLD_TAG_P_FC;
- break;
- }
- default:
- break;
- }
- return ntuple;
-}
-
static int send_connect(struct c4iw_ep *ep)
{
struct cpl_act_open_req *req;
req->local_ip = la->sin_addr.s_addr;
req->peer_ip = ra->sin_addr.s_addr;
req->opt0 = cpu_to_be64(opt0);
- req->params = cpu_to_be32(select_ntuple(ep->com.dev,
- ep->dst, ep->l2t));
+ req->params = cpu_to_be32(cxgb4_select_ntuple(
+ ep->com.dev->rdev.lldi.ports[0],
+ ep->l2t));
req->opt2 = cpu_to_be32(opt2);
} else {
req6 = (struct cpl_act_open_req6 *)skb_put(skb, wrlen);
req6->peer_ip_lo = *((__be64 *)
(ra6->sin6_addr.s6_addr + 8));
req6->opt0 = cpu_to_be64(opt0);
- req6->params = cpu_to_be32(
- select_ntuple(ep->com.dev, ep->dst,
- ep->l2t));
+ req6->params = cpu_to_be32(cxgb4_select_ntuple(
+ ep->com.dev->rdev.lldi.ports[0],
+ ep->l2t));
req6->opt2 = cpu_to_be32(opt2);
}
} else {
t5_req->peer_ip = ra->sin_addr.s_addr;
t5_req->opt0 = cpu_to_be64(opt0);
t5_req->params = cpu_to_be64(V_FILTER_TUPLE(
- select_ntuple(ep->com.dev,
- ep->dst, ep->l2t)));
+ cxgb4_select_ntuple(
+ ep->com.dev->rdev.lldi.ports[0],
+ ep->l2t)));
t5_req->opt2 = cpu_to_be32(opt2);
} else {
t5_req6 = (struct cpl_t5_act_open_req6 *)
(ra6->sin6_addr.s6_addr + 8));
t5_req6->opt0 = cpu_to_be64(opt0);
t5_req6->params = (__force __be64)cpu_to_be32(
- select_ntuple(ep->com.dev, ep->dst, ep->l2t));
+ cxgb4_select_ntuple(
+ ep->com.dev->rdev.lldi.ports[0],
+ ep->l2t));
t5_req6->opt2 = cpu_to_be32(opt2);
}
}
memset(req, 0, sizeof(*req));
req->op_compl = htonl(V_WR_OP(FW_OFLD_CONNECTION_WR));
req->len16_pkd = htonl(FW_WR_LEN16(DIV_ROUND_UP(sizeof(*req), 16)));
- req->le.filter = cpu_to_be32(select_ntuple(ep->com.dev, ep->dst,
+ req->le.filter = cpu_to_be32(cxgb4_select_ntuple(
+ ep->com.dev->rdev.lldi.ports[0],
ep->l2t));
sin = (struct sockaddr_in *)&ep->com.local_addr;
req->le.lport = sin->sin_port;
/*
* Allocate a server TID.
*/
- if (dev->rdev.lldi.enable_fw_ofld_conn)
+ if (dev->rdev.lldi.enable_fw_ofld_conn &&
+ ep->com.local_addr.ss_family == AF_INET)
ep->stid = cxgb4_alloc_sftid(dev->rdev.lldi.tids,
cm_id->local_addr.ss_family, ep);
else
/*
* Calculate the server tid from filter hit index from cpl_rx_pkt.
*/
- stid = (__force int) cpu_to_be32((__force u32) rss->hash_val)
- - dev->rdev.lldi.tids->sftid_base
- + dev->rdev.lldi.tids->nstids;
+ stid = (__force int) cpu_to_be32((__force u32) rss->hash_val);
lep = (struct c4iw_ep *)lookup_stid(dev->rdev.lldi.tids, stid);
if (!lep) {
window = (__force u16) htons((__force u16)tcph->window);
	/* Calculate the filter portion for the LE region. */
- filter = (__force unsigned int) cpu_to_be32(select_ntuple(dev, dst, e));
+ filter = (__force unsigned int) cpu_to_be32(cxgb4_select_ntuple(
+ dev->rdev.lldi.ports[0],
+ e));
/*
* Synthesize the cpl_pass_accept_req. We have everything except the
return ret;
}
-int _c4iw_write_mem_dma(struct c4iw_rdev *rdev, u32 addr, u32 len, void *data)
+static int _c4iw_write_mem_dma(struct c4iw_rdev *rdev, u32 addr, u32 len, void *data)
{
u32 remain = len;
u32 dmalen;
*/
#include <linux/netdevice.h>
+#include <linux/if_arp.h> /* For ARPHRD_xxx */
#include <linux/module.h>
#include <net/rtnetlink.h>
#include "ipoib.h"
return -EINVAL;
pdev = __dev_get_by_index(src_net, nla_get_u32(tb[IFLA_LINK]));
- if (!pdev)
+ if (!pdev || pdev->type != ARPHRD_INFINIBAND)
return -ENODEV;
ppriv = netdev_priv(pdev);
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/in6.h>
+#include <linux/llist.h>
#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>
#include <target/target_core_base.h>
isert_conn->conn_rx_descs = NULL;
}
+static void isert_cq_tx_work(struct work_struct *);
static void isert_cq_tx_callback(struct ib_cq *, void *);
+static void isert_cq_rx_work(struct work_struct *);
static void isert_cq_rx_callback(struct ib_cq *, void *);
static int
cq_desc[i].device = device;
cq_desc[i].cq_index = i;
+ INIT_WORK(&cq_desc[i].cq_rx_work, isert_cq_rx_work);
device->dev_rx_cq[i] = ib_create_cq(device->ib_device,
isert_cq_rx_callback,
isert_cq_event_callback,
(void *)&cq_desc[i],
ISER_MAX_RX_CQ_LEN, i);
- if (IS_ERR(device->dev_rx_cq[i]))
+ if (IS_ERR(device->dev_rx_cq[i])) {
+ ret = PTR_ERR(device->dev_rx_cq[i]);
+ device->dev_rx_cq[i] = NULL;
goto out_cq;
+ }
+ INIT_WORK(&cq_desc[i].cq_tx_work, isert_cq_tx_work);
device->dev_tx_cq[i] = ib_create_cq(device->ib_device,
isert_cq_tx_callback,
isert_cq_event_callback,
(void *)&cq_desc[i],
ISER_MAX_TX_CQ_LEN, i);
- if (IS_ERR(device->dev_tx_cq[i]))
+ if (IS_ERR(device->dev_tx_cq[i])) {
+ ret = PTR_ERR(device->dev_tx_cq[i]);
+ device->dev_tx_cq[i] = NULL;
goto out_cq;
+ }
- if (ib_req_notify_cq(device->dev_rx_cq[i], IB_CQ_NEXT_COMP))
+ ret = ib_req_notify_cq(device->dev_rx_cq[i], IB_CQ_NEXT_COMP);
+ if (ret)
goto out_cq;
- if (ib_req_notify_cq(device->dev_tx_cq[i], IB_CQ_NEXT_COMP))
+ ret = ib_req_notify_cq(device->dev_tx_cq[i], IB_CQ_NEXT_COMP);
+ if (ret)
goto out_cq;
}
kref_init(&isert_conn->conn_kref);
kref_get(&isert_conn->conn_kref);
mutex_init(&isert_conn->conn_mutex);
+ mutex_init(&isert_conn->conn_comp_mutex);
spin_lock_init(&isert_conn->conn_lock);
cma_id->context = isert_conn;
}
static void
-isert_init_send_wr(struct isert_cmd *isert_cmd, struct ib_send_wr *send_wr)
+isert_init_send_wr(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd,
+ struct ib_send_wr *send_wr, bool coalesce)
{
+ struct iser_tx_desc *tx_desc = &isert_cmd->tx_desc;
+
isert_cmd->rdma_wr.iser_ib_op = ISER_IB_SEND;
send_wr->wr_id = (unsigned long)&isert_cmd->tx_desc;
send_wr->opcode = IB_WR_SEND;
- send_wr->send_flags = IB_SEND_SIGNALED;
- send_wr->sg_list = &isert_cmd->tx_desc.tx_sg[0];
+ send_wr->sg_list = &tx_desc->tx_sg[0];
send_wr->num_sge = isert_cmd->tx_desc.num_sge;
+ /*
+ * Coalesce send completion interrupts by only setting IB_SEND_SIGNALED
+ * bit for every ISERT_COMP_BATCH_COUNT number of ib_post_send() calls.
+ */
+ mutex_lock(&isert_conn->conn_comp_mutex);
+ if (coalesce &&
+ ++isert_conn->conn_comp_batch < ISERT_COMP_BATCH_COUNT) {
+ llist_add(&tx_desc->comp_llnode, &isert_conn->conn_comp_llist);
+ mutex_unlock(&isert_conn->conn_comp_mutex);
+ return;
+ }
+ isert_conn->conn_comp_batch = 0;
+ tx_desc->comp_llnode_batch = llist_del_all(&isert_conn->conn_comp_llist);
+ mutex_unlock(&isert_conn->conn_comp_mutex);
+
+ send_wr->send_flags = IB_SEND_SIGNALED;
}
static int
}
static void
-isert_send_completion(struct iser_tx_desc *tx_desc,
- struct isert_conn *isert_conn)
+__isert_send_completion(struct iser_tx_desc *tx_desc,
+ struct isert_conn *isert_conn)
{
struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
struct isert_cmd *isert_cmd = tx_desc->isert_cmd;
}
}
+static void
+isert_send_completion(struct iser_tx_desc *tx_desc,
+ struct isert_conn *isert_conn)
+{
+ struct llist_node *llnode = tx_desc->comp_llnode_batch;
+ struct iser_tx_desc *t;
+ /*
+ * Drain coalesced completion llist starting from comp_llnode_batch
+ * setup in isert_init_send_wr(), and then complete trailing tx_desc.
+ */
+ while (llnode) {
+ t = llist_entry(llnode, struct iser_tx_desc, comp_llnode);
+ llnode = llist_next(llnode);
+ __isert_send_completion(t, isert_conn);
+ }
+ __isert_send_completion(tx_desc, isert_conn);
+}
+
static void
isert_cq_comp_err(struct iser_tx_desc *tx_desc, struct isert_conn *isert_conn)
{
{
struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context;
- INIT_WORK(&cq_desc->cq_tx_work, isert_cq_tx_work);
queue_work(isert_comp_wq, &cq_desc->cq_tx_work);
}
{
struct isert_cq_desc *cq_desc = (struct isert_cq_desc *)context;
- INIT_WORK(&cq_desc->cq_rx_work, isert_cq_rx_work);
queue_work(isert_rx_wq, &cq_desc->cq_rx_work);
}
isert_cmd->tx_desc.num_sge = 2;
}
- isert_init_send_wr(isert_cmd, send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd, send_wr, true);
pr_debug("Posting SCSI Response IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
&isert_cmd->tx_desc.iscsi_header,
nopout_response);
isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
- isert_init_send_wr(isert_cmd, send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd, send_wr, false);
pr_debug("Posting NOPIN Response IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
iscsit_build_logout_rsp(cmd, conn, (struct iscsi_logout_rsp *)
&isert_cmd->tx_desc.iscsi_header);
isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
- isert_init_send_wr(isert_cmd, send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd, send_wr, false);
pr_debug("Posting Logout Response IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
iscsit_build_task_mgt_rsp(cmd, conn, (struct iscsi_tm_rsp *)
&isert_cmd->tx_desc.iscsi_header);
isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
- isert_init_send_wr(isert_cmd, send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd, send_wr, false);
pr_debug("Posting Task Management Response IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
tx_dsg->lkey = isert_conn->conn_mr->lkey;
isert_cmd->tx_desc.num_sge = 2;
- isert_init_send_wr(isert_cmd, send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd, send_wr, false);
pr_debug("Posting Reject IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
tx_dsg->lkey = isert_conn->conn_mr->lkey;
isert_cmd->tx_desc.num_sge = 2;
}
- isert_init_send_wr(isert_cmd, send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd, send_wr, false);
pr_debug("Posting Text Response IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
if (wr->iser_ib_op == ISER_IB_RDMA_WRITE) {
data_left = se_cmd->data_length;
- iscsit_increment_maxcmdsn(cmd, conn->sess);
- cmd->stat_sn = conn->stat_sn++;
} else {
sg_off = cmd->write_data_done / PAGE_SIZE;
data_left = se_cmd->data_length - cmd->write_data_done;
if (wr->iser_ib_op == ISER_IB_RDMA_WRITE) {
data_left = se_cmd->data_length;
- iscsit_increment_maxcmdsn(cmd, conn->sess);
- cmd->stat_sn = conn->stat_sn++;
} else {
sg_off = cmd->write_data_done / PAGE_SIZE;
data_left = se_cmd->data_length - cmd->write_data_done;
data_len = min(data_left, rdma_write_max);
wr->cur_rdma_length = data_len;
- spin_lock_irqsave(&isert_conn->conn_lock, flags);
- fr_desc = list_first_entry(&isert_conn->conn_frwr_pool,
- struct fast_reg_descriptor, list);
- list_del(&fr_desc->list);
- spin_unlock_irqrestore(&isert_conn->conn_lock, flags);
- wr->fr_desc = fr_desc;
+	/* If there is a single DMA entry, the local DMA MR is sufficient. */
+ if (count == 1) {
+ ib_sge->addr = ib_sg_dma_address(ib_dev, &sg_start[0]);
+ ib_sge->length = ib_sg_dma_len(ib_dev, &sg_start[0]);
+ ib_sge->lkey = isert_conn->conn_mr->lkey;
+ wr->fr_desc = NULL;
+ } else {
+ spin_lock_irqsave(&isert_conn->conn_lock, flags);
+ fr_desc = list_first_entry(&isert_conn->conn_frwr_pool,
+ struct fast_reg_descriptor, list);
+ list_del(&fr_desc->list);
+ spin_unlock_irqrestore(&isert_conn->conn_lock, flags);
+ wr->fr_desc = fr_desc;
- ret = isert_fast_reg_mr(fr_desc, isert_cmd, isert_conn,
- ib_sge, offset, data_len);
- if (ret) {
- list_add_tail(&fr_desc->list, &isert_conn->conn_frwr_pool);
- goto unmap_sg;
+ ret = isert_fast_reg_mr(fr_desc, isert_cmd, isert_conn,
+ ib_sge, offset, data_len);
+ if (ret) {
+ list_add_tail(&fr_desc->list, &isert_conn->conn_frwr_pool);
+ goto unmap_sg;
+ }
}
return 0;
* Build isert_conn->tx_desc for iSCSI response PDU and attach
*/
isert_create_send_desc(isert_conn, isert_cmd, &isert_cmd->tx_desc);
- iscsit_build_rsp_pdu(cmd, conn, false, (struct iscsi_scsi_rsp *)
+ iscsit_build_rsp_pdu(cmd, conn, true, (struct iscsi_scsi_rsp *)
&isert_cmd->tx_desc.iscsi_header);
isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
- isert_init_send_wr(isert_cmd, &isert_cmd->tx_desc.send_wr);
+ isert_init_send_wr(isert_conn, isert_cmd,
+ &isert_cmd->tx_desc.send_wr, true);
atomic_inc(&isert_conn->post_send_buf_count);
struct ib_sge tx_sg[2];
int num_sge;
struct isert_cmd *isert_cmd;
+ struct llist_node *comp_llnode_batch;
+ struct llist_node comp_llnode;
struct ib_send_wr send_wr;
} __packed;
int conn_frwr_pool_size;
/* lock to protect frwr_pool */
spinlock_t conn_lock;
+#define ISERT_COMP_BATCH_COUNT 8
+ int conn_comp_batch;
+ struct llist_head conn_comp_llist;
+ struct mutex conn_comp_mutex;
};
#define ISERT_MAX_CQ 64
/* XXX(hch): this is a horrible layering violation.. */
spin_lock_irqsave(&ioctx->cmd.t_state_lock, flags);
- ioctx->cmd.transport_state |= CMD_T_LUN_STOP;
ioctx->cmd.transport_state &= ~CMD_T_ACTIVE;
spin_unlock_irqrestore(&ioctx->cmd.t_state_lock, flags);
-
- complete(&ioctx->cmd.transport_lun_stop_comp);
break;
case SRPT_STATE_CMD_RSP_SENT:
/*
* not been received in time.
*/
srpt_unmap_sg_to_ib_sge(ioctx->ch, ioctx);
- spin_lock_irqsave(&ioctx->cmd.t_state_lock, flags);
- ioctx->cmd.transport_state |= CMD_T_LUN_STOP;
- spin_unlock_irqrestore(&ioctx->cmd.t_state_lock, flags);
target_put_sess_cmd(ioctx->ch->sess, &ioctx->cmd);
break;
case SRPT_STATE_MGMT_RSP_SENT:
{
struct se_cmd *cmd;
enum srpt_command_state state;
- unsigned long flags;
cmd = &ioctx->cmd;
state = srpt_get_cmd_state(ioctx);
__func__, __LINE__, state);
break;
case SRPT_RDMA_WRITE_LAST:
- spin_lock_irqsave(&ioctx->cmd.t_state_lock, flags);
- ioctx->cmd.transport_state |= CMD_T_LUN_STOP;
- spin_unlock_irqrestore(&ioctx->cmd.t_state_lock, flags);
break;
default:
printk(KERN_ERR "%s[%d]: opcode = %u\n", __func__,
break;
case EV_ABS:
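+		/* Allocate dev->absinfo before advertising any ABS axis; bail on OOM. */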
+ input_alloc_absinfo(dev);
+ if (!dev->absinfo)
+ return;
+
__set_bit(code, dev->absbit);
break;
__set_bit(EV_REP, input->evbit);
for (i = 0; i < input->keycodemax; i++)
- __set_bit(kpad->keycode[i] & KEY_MAX, input->keybit);
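+		/* Skip out-of-range keycodes instead of aliasing them with & KEY_MAX. */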
+ if (kpad->keycode[i] <= KEY_MAX)
+ __set_bit(kpad->keycode[i], input->keybit);
__clear_bit(KEY_RESERVED, input->keybit);
if (kpad->gpimapsize)
__set_bit(EV_REP, input->evbit);
for (i = 0; i < input->keycodemax; i++)
- __set_bit(kpad->keycode[i] & KEY_MAX, input->keybit);
+ if (kpad->keycode[i] <= KEY_MAX)
+ __set_bit(kpad->keycode[i], input->keybit);
__clear_bit(KEY_RESERVED, input->keybit);
if (kpad->gpimapsize)
__set_bit(EV_REP, input->evbit);
for (i = 0; i < input->keycodemax; i++)
- __set_bit(bf54x_kpad->keycode[i] & KEY_MAX, input->keybit);
+ if (bf54x_kpad->keycode[i] <= KEY_MAX)
+ __set_bit(bf54x_kpad->keycode[i], input->keybit);
__clear_bit(KEY_RESERVED, input->keybit);
error = input_register_device(input);
/* ORIENT ADXL346 only */
#define ADXL346_2D_VALID (1 << 6)
-#define ADXL346_2D_ORIENT(x) (((x) & 0x3) >> 4)
+#define ADXL346_2D_ORIENT(x) (((x) & 0x30) >> 4)
#define ADXL346_3D_VALID (1 << 3)
#define ADXL346_3D_ORIENT(x) ((x) & 0x7)
#define ADXL346_2D_PORTRAIT_POS 0 /* +X */
if (WARN_ON(down_interruptible(&i8042tregs)))
return -1;
- if (hp_sdc_enqueue_transaction(&t)) return -1;
+ if (hp_sdc_enqueue_transaction(&t)) {
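+		/* Drop the i8042tregs semaphore taken above before bailing out. */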
+ up(&i8042tregs);
+ return -1;
+ }
/* Sleep until results come back. */
if (WARN_ON(down_interruptible(&i8042tregs)))
idev->keycodemax = ARRAY_SIZE(lp->btncode);
for (i = 0; i < ARRAY_SIZE(pcf8574_kp_btncode); i++) {
- lp->btncode[i] = pcf8574_kp_btncode[i];
- __set_bit(lp->btncode[i] & KEY_MAX, idev->keybit);
+ if (lp->btncode[i] <= KEY_MAX) {
+ lp->btncode[i] = pcf8574_kp_btncode[i];
+ __set_bit(lp->btncode[i], idev->keybit);
+ }
}
+ __clear_bit(KEY_RESERVED, idev->keybit);
sprintf(lp->name, DRV_NAME);
sprintf(lp->phys, "kp_data/input0");
{ PSMOUSE_CMD_SETSCALE11, 0x00 }, /* f */
};
+static const struct alps_nibble_commands alps_v6_nibble_commands[] = {
+ { PSMOUSE_CMD_ENABLE, 0x00 }, /* 0 */
+ { PSMOUSE_CMD_SETRATE, 0x0a }, /* 1 */
+ { PSMOUSE_CMD_SETRATE, 0x14 }, /* 2 */
+ { PSMOUSE_CMD_SETRATE, 0x28 }, /* 3 */
+ { PSMOUSE_CMD_SETRATE, 0x3c }, /* 4 */
+ { PSMOUSE_CMD_SETRATE, 0x50 }, /* 5 */
+ { PSMOUSE_CMD_SETRATE, 0x64 }, /* 6 */
+ { PSMOUSE_CMD_SETRATE, 0xc8 }, /* 7 */
+ { PSMOUSE_CMD_GETID, 0x00 }, /* 8 */
+ { PSMOUSE_CMD_GETINFO, 0x00 }, /* 9 */
+ { PSMOUSE_CMD_SETRES, 0x00 }, /* a */
+ { PSMOUSE_CMD_SETRES, 0x01 }, /* b */
+ { PSMOUSE_CMD_SETRES, 0x02 }, /* c */
+ { PSMOUSE_CMD_SETRES, 0x03 }, /* d */
+ { PSMOUSE_CMD_SETSCALE21, 0x00 }, /* e */
+ { PSMOUSE_CMD_SETSCALE11, 0x00 }, /* f */
+};
+
#define ALPS_DUALPOINT 0x02 /* touchpad has trackstick */
#define ALPS_PASS 0x04 /* device has a pass-through port */
/* Dell Latitude E5500, E6400, E6500, Precision M4400 */
{ { 0x62, 0x02, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED },
+ { { 0x73, 0x00, 0x14 }, 0x00, ALPS_PROTO_V6, 0xff, 0xff, ALPS_DUALPOINT }, /* Dell XT2 */
{ { 0x73, 0x02, 0x50 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_FOUR_BUTTONS }, /* Dell Vostro 1400 */
{ { 0x52, 0x01, 0x14 }, 0x00, ALPS_PROTO_V2, 0xff, 0xff,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */
alps_process_touchpad_packet_v3(psmouse);
}
+static void alps_process_packet_v6(struct psmouse *psmouse)
+{
+ struct alps_data *priv = psmouse->private;
+ unsigned char *packet = psmouse->packet;
+ struct input_dev *dev = psmouse->dev;
+ struct input_dev *dev2 = priv->dev2;
+ int x, y, z, left, right, middle;
+
+ /*
+	 * Byte 5 distinguishes whether the packet comes from the touchpad
+	 * or the trackpoint.
+ * Touchpad: 0 - 0x7E
+ * Trackpoint: 0x7F
+ */
+ if (packet[5] == 0x7F) {
+		/* Only DualPoint devices should ever send trackpoint packets */
+ if (!(priv->flags & ALPS_DUALPOINT))
+ return;
+
+ /* Trackpoint packet */
+ x = packet[1] | ((packet[3] & 0x20) << 2);
+ y = packet[2] | ((packet[3] & 0x40) << 1);
+ z = packet[4];
+ left = packet[3] & 0x01;
+ right = packet[3] & 0x02;
+ middle = packet[3] & 0x04;
+
+		/* Prevent the cursor jumping when the finger is lifted */
+ if (x == 0x7F && y == 0x7F && z == 0x7F)
+ x = y = z = 0;
+
+		/* Divide by 4 since the trackpoint otherwise moves too fast */
+ input_report_rel(dev2, REL_X, (char)x / 4);
+ input_report_rel(dev2, REL_Y, -((char)y / 4));
+
+ input_report_key(dev2, BTN_LEFT, left);
+ input_report_key(dev2, BTN_RIGHT, right);
+ input_report_key(dev2, BTN_MIDDLE, middle);
+
+ input_sync(dev2);
+ return;
+ }
+
+ /* Touchpad packet */
+ x = packet[1] | ((packet[3] & 0x78) << 4);
+ y = packet[2] | ((packet[4] & 0x78) << 4);
+ z = packet[5];
+ left = packet[3] & 0x01;
+ right = packet[3] & 0x02;
+
+ if (z > 30)
+ input_report_key(dev, BTN_TOUCH, 1);
+ if (z < 25)
+ input_report_key(dev, BTN_TOUCH, 0);
+
+ if (z > 0) {
+ input_report_abs(dev, ABS_X, x);
+ input_report_abs(dev, ABS_Y, y);
+ }
+
+ input_report_abs(dev, ABS_PRESSURE, z);
+ input_report_key(dev, BTN_TOOL_FINGER, z > 0);
+
+ /* v6 touchpad does not have middle button */
+ input_report_key(dev, BTN_LEFT, left);
+ input_report_key(dev, BTN_RIGHT, right);
+
+ input_sync(dev);
+}
+
static void alps_process_packet_v4(struct psmouse *psmouse)
{
struct alps_data *priv = psmouse->private;
}
/* Bytes 2 - pktsize should have 0 in the highest bit */
- if (priv->proto_version != ALPS_PROTO_V5 &&
+ if ((priv->proto_version < ALPS_PROTO_V5) &&
psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize &&
(psmouse->packet[psmouse->pktcnt - 1] & 0x80)) {
psmouse_dbg(psmouse, "refusing packet[%i] = %x\n",
return ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETPOLL);
}
+static int alps_monitor_mode_send_word(struct psmouse *psmouse, u16 word)
+{
+ int i, nibble;
+
+ /*
+	 * Bits 0-11 are valid; nibbles are sent lowest first, e.g. for
+	 * word = 0x0123 the send sequence is 3, 2, 1.
+ */
+ for (i = 0; i <= 8; i += 4) {
+ nibble = (word >> i) & 0xf;
+ if (alps_command_mode_send_nibble(psmouse, nibble))
+ return -1;
+ }
+
+ return 0;
+}
+
+static int alps_monitor_mode_write_reg(struct psmouse *psmouse,
+ u16 addr, u16 value)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+ /* 0x0A0 is the command to write the word */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_ENABLE) ||
+ alps_monitor_mode_send_word(psmouse, 0x0A0) ||
+ alps_monitor_mode_send_word(psmouse, addr) ||
+ alps_monitor_mode_send_word(psmouse, value) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE))
+ return -1;
+
+ return 0;
+}
+
+static int alps_monitor_mode(struct psmouse *psmouse, bool enable)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+ if (enable) {
+ /* EC E9 F5 F5 E7 E6 E7 E9 to enter monitor mode */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_RESET_WRAP) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_GETINFO) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_GETINFO))
+ return -1;
+ } else {
+ /* EC to exit monitor mode */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_RESET_WRAP))
+ return -1;
+ }
+
+ return 0;
+}
+
+static int alps_absolute_mode_v6(struct psmouse *psmouse)
+{
+ u16 reg_val = 0x181;
+ int ret = -1;
+
+	/* Enter monitor mode to write the register */
+ if (alps_monitor_mode(psmouse, true))
+ return -1;
+
+ ret = alps_monitor_mode_write_reg(psmouse, 0x000, reg_val);
+
+ if (alps_monitor_mode(psmouse, false))
+ ret = -1;
+
+ return ret;
+}
+
static int alps_get_status(struct psmouse *psmouse, char *param)
{
/* Get status: 0xF5 0xF5 0xF5 0xE9 */
return 0;
}
+static int alps_hw_init_v6(struct psmouse *psmouse)
+{
+ unsigned char param[2] = {0xC8, 0x14};
+
+	/* Enter passthrough mode to let the trackpoint enter 6-byte raw mode */
+ if (alps_passthrough_mode_v2(psmouse, true))
+ return -1;
+
+ if (ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+	    ps2_command(&psmouse->ps2dev, &param[0], PSMOUSE_CMD_SETRATE) ||
+	    ps2_command(&psmouse->ps2dev, &param[1], PSMOUSE_CMD_SETRATE))
+ return -1;
+
+ if (alps_passthrough_mode_v2(psmouse, false))
+ return -1;
+
+ if (alps_absolute_mode_v6(psmouse)) {
+ psmouse_err(psmouse, "Failed to enable absolute mode\n");
+ return -1;
+ }
+
+ return 0;
+}
+
/*
* Enable or disable passthrough mode to the trackstick.
*/
priv->hw_init = alps_hw_init_v1_v2;
priv->process_packet = alps_process_packet_v1_v2;
priv->set_abs_params = alps_set_abs_params_st;
+ priv->x_max = 1023;
+ priv->y_max = 767;
break;
case ALPS_PROTO_V3:
priv->hw_init = alps_hw_init_v3;
priv->x_bits = 23;
priv->y_bits = 12;
break;
+ case ALPS_PROTO_V6:
+ priv->hw_init = alps_hw_init_v6;
+ priv->process_packet = alps_process_packet_v6;
+ priv->set_abs_params = alps_set_abs_params_st;
+ priv->nibble_commands = alps_v6_nibble_commands;
+ priv->x_max = 2047;
+ priv->y_max = 1535;
+ break;
}
}
static void alps_set_abs_params_st(struct alps_data *priv,
struct input_dev *dev1)
{
- input_set_abs_params(dev1, ABS_X, 0, 1023, 0, 0);
- input_set_abs_params(dev1, ABS_Y, 0, 767, 0, 0);
+ input_set_abs_params(dev1, ABS_X, 0, priv->x_max, 0, 0);
+ input_set_abs_params(dev1, ABS_Y, 0, priv->y_max, 0, 0);
}
static void alps_set_abs_params_mt(struct alps_data *priv,
#define ALPS_PROTO_V3 3
#define ALPS_PROTO_V4 4
#define ALPS_PROTO_V5 5
+#define ALPS_PROTO_V6 6
/**
* struct alps_model_info - touchpad ID table
break;
case 6:
case 7:
+ case 8:
etd->hw_version = 4;
break;
default:
static DEVICE_ATTR_RO(proto);
static DEVICE_ATTR_RO(id);
static DEVICE_ATTR_RO(extra);
-static DEVICE_ATTR_RO(modalias);
-static DEVICE_ATTR_WO(drvctl);
-static DEVICE_ATTR(description, S_IRUGO, serio_show_description, NULL);
-static DEVICE_ATTR(bind_mode, S_IWUSR | S_IRUGO, serio_show_bind_mode, serio_set_bind_mode);
static struct attribute *serio_device_id_attrs[] = {
&dev_attr_type.attr,
&dev_attr_proto.attr,
&dev_attr_id.attr,
&dev_attr_extra.attr,
+ NULL
+};
+
+static struct attribute_group serio_id_attr_group = {
+ .name = "id",
+ .attrs = serio_device_id_attrs,
+};
+
+static DEVICE_ATTR_RO(modalias);
+static DEVICE_ATTR_WO(drvctl);
+static DEVICE_ATTR(description, S_IRUGO, serio_show_description, NULL);
+static DEVICE_ATTR(bind_mode, S_IWUSR | S_IRUGO, serio_show_bind_mode, serio_set_bind_mode);
+
+static struct attribute *serio_device_attrs[] = {
&dev_attr_modalias.attr,
&dev_attr_description.attr,
&dev_attr_drvctl.attr,
NULL
};
-static struct attribute_group serio_id_attr_group = {
- .name = "id",
- .attrs = serio_device_id_attrs,
+static struct attribute_group serio_device_attr_group = {
+ .attrs = serio_device_attrs,
};
static const struct attribute_group *serio_device_attr_groups[] = {
&serio_id_attr_group,
+ &serio_device_attr_group,
NULL
};
To compile this driver as a module, choose M here: the
module will be called stmpe-ts.
+config TOUCHSCREEN_SUR40
+ tristate "Samsung SUR40 (Surface 2.0/PixelSense) touchscreen"
+ depends on USB
+ select INPUT_POLLDEV
+ help
+ Say Y here if you want support for the Samsung SUR40 touchscreen
+ (also known as Microsoft Surface 2.0 or Microsoft PixelSense).
+
+ To compile this driver as a module, choose M here: the
+ module will be called sur40.
+
config TOUCHSCREEN_TPS6507X
tristate "TPS6507x based touchscreens"
depends on I2C
obj-$(CONFIG_TOUCHSCREEN_S3C2410) += s3c2410_ts.o
obj-$(CONFIG_TOUCHSCREEN_ST1232) += st1232.o
obj-$(CONFIG_TOUCHSCREEN_STMPE) += stmpe-ts.o
+obj-$(CONFIG_TOUCHSCREEN_SUR40) += sur40.o
obj-$(CONFIG_TOUCHSCREEN_TI_AM335X_TSC) += ti_am335x_tsc.o
obj-$(CONFIG_TOUCHSCREEN_TNETV107X) += tnetv107x-ts.o
obj-$(CONFIG_TOUCHSCREEN_TOUCHIT213) += touchit213.o
}
#ifdef CONFIG_PM_SLEEP
-static int atmel_wm97xx_suspend(struct *dev)
+static int atmel_wm97xx_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct atmel_wm97xx *atmel_wm97xx = platform_get_drvdata(pdev);
dev_vdbg(cd->dev, "%s: Watchdog timer triggered\n", __func__);
- if (!work_pending(&cd->watchdog_work))
- schedule_work(&cd->watchdog_work);
+ schedule_work(&cd->watchdog_work);
return;
}
--- /dev/null
+/*
+ * Surface2.0/SUR40/PixelSense input driver
+ *
+ * Copyright (c) 2013 by Florian 'floe' Echtler <floe@butterbrot.org>
+ *
+ * Derived from the USB Skeleton driver 1.1,
+ * Copyright (c) 2003 Greg Kroah-Hartman (greg@kroah.com)
+ *
+ * and from the Apple USB BCM5974 multitouch driver,
+ * Copyright (c) 2008 Henrik Rydberg (rydberg@euromail.se)
+ *
+ * and from the generic hid-multitouch driver,
+ * Copyright (c) 2010-2012 Stephane Chatty <chatty@enac.fr>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/uaccess.h>
+#include <linux/usb.h>
+#include <linux/printk.h>
+#include <linux/input-polldev.h>
+#include <linux/input/mt.h>
+#include <linux/usb/input.h>
+
+/* read 512 bytes from endpoint 0x86 -> get header + blobs */
+struct sur40_header {
+
+ __le16 type; /* always 0x0001 */
+ __le16 count; /* count of blobs (if 0: continue prev. packet) */
+
+ __le32 packet_id; /* unique ID for all packets in one frame */
+
+ __le32 timestamp; /* milliseconds (inc. by 16 or 17 each frame) */
+ __le32 unknown; /* "epoch?" always 02/03 00 00 00 */
+
+} __packed;
+
+struct sur40_blob {
+
+ __le16 blob_id;
+
+ u8 action; /* 0x02 = enter/exit, 0x03 = update (?) */
+ u8 unknown; /* always 0x01 or 0x02 (no idea what this is?) */
+
+ __le16 bb_pos_x; /* upper left corner of bounding box */
+ __le16 bb_pos_y;
+
+ __le16 bb_size_x; /* size of bounding box */
+ __le16 bb_size_y;
+
+ __le16 pos_x; /* finger tip position */
+ __le16 pos_y;
+
+ __le16 ctr_x; /* centroid position */
+ __le16 ctr_y;
+
+ __le16 axis_x; /* somehow related to major/minor axis, mostly: */
+ __le16 axis_y; /* axis_x == bb_size_y && axis_y == bb_size_x */
+
+ __le32 angle; /* orientation in radians relative to x axis -
+ actually an IEEE754 float, don't use in kernel */
+
+ __le32 area; /* size in pixels/pressure (?) */
+
+ u8 padding[32];
+
+} __packed;
+
+/* combined header/blob data */
+struct sur40_data {
+ struct sur40_header header;
+ struct sur40_blob blobs[];
+} __packed;
+
+
+/* version information */
+#define DRIVER_SHORT "sur40"
+#define DRIVER_AUTHOR "Florian 'floe' Echtler <floe@butterbrot.org>"
+#define DRIVER_DESC "Surface2.0/SUR40/PixelSense input driver"
+
+/* vendor and device IDs */
+#define ID_MICROSOFT 0x045e
+#define ID_SUR40 0x0775
+
+/* sensor resolution */
+#define SENSOR_RES_X 1920
+#define SENSOR_RES_Y 1080
+
+/* touch data endpoint */
+#define TOUCH_ENDPOINT 0x86
+
+/* polling interval (ms) */
+#define POLL_INTERVAL 10
+
+/* maximum number of contacts FIXME: this is a guess? */
+#define MAX_CONTACTS 64
+
+/* control commands */
+#define SUR40_GET_VERSION 0xb0 /* 12 bytes string */
+#define SUR40_UNKNOWN1 0xb3 /* 5 bytes */
+#define SUR40_UNKNOWN2 0xc1 /* 24 bytes */
+
+#define SUR40_GET_STATE 0xc5 /* 4 bytes state (?) */
+#define SUR40_GET_SENSORS 0xb1 /* 8 bytes sensors */
+
+/*
+ * Note: an earlier, non-public version of this driver used USB_RECIP_ENDPOINT
+ * here by mistake which is very likely to have corrupted the firmware EEPROM
+ * on two separate SUR40 devices. Thanks to Alan Stern who spotted this bug.
+ * Should you ever run into a similar problem, the background story to this
+ * incident and instructions on how to fix the corrupted EEPROM are available
+ * at https://floe.butterbrot.org/matrix/hacking/surface/brick.html
+ */
+
+struct sur40_state {
+
+ struct usb_device *usbdev;
+ struct device *dev;
+ struct input_polled_dev *input;
+
+ struct sur40_data *bulk_in_buffer;
+ size_t bulk_in_size;
+ u8 bulk_in_epaddr;
+
+ char phys[64];
+};
+
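+/* Issue a vendor IN control request on endpoint 0 and read the reply into buffer. */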
+static int sur40_command(struct sur40_state *dev,
+ u8 command, u16 index, void *buffer, u16 size)
+{
+ return usb_control_msg(dev->usbdev, usb_rcvctrlpipe(dev->usbdev, 0),
+ command,
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ 0x00, index, buffer, size, 1000);
+}
+
+/* Initialization routine, called from sur40_open */
+static int sur40_init(struct sur40_state *dev)
+{
+ int result;
+ u8 buffer[24];
+
+ /* stupidly replay the original MS driver init sequence */
+ result = sur40_command(dev, SUR40_GET_VERSION, 0x00, buffer, 12);
+ if (result < 0)
+ return result;
+
+ result = sur40_command(dev, SUR40_GET_VERSION, 0x01, buffer, 12);
+ if (result < 0)
+ return result;
+
+ result = sur40_command(dev, SUR40_GET_VERSION, 0x02, buffer, 12);
+ if (result < 0)
+ return result;
+
+ result = sur40_command(dev, SUR40_UNKNOWN2, 0x00, buffer, 24);
+ if (result < 0)
+ return result;
+
+ result = sur40_command(dev, SUR40_UNKNOWN1, 0x00, buffer, 5);
+ if (result < 0)
+ return result;
+
+ result = sur40_command(dev, SUR40_GET_VERSION, 0x03, buffer, 12);
+
+ /*
+ * Discard the result buffer - no known data inside except
+ * some version strings, maybe extract these sometime...
+ */
+
+ return result;
+}
+
+/*
+ * Callback routines from input_polled_dev
+ */
+
+/* Enable the device, polling will now start. */
+static void sur40_open(struct input_polled_dev *polldev)
+{
+ struct sur40_state *sur40 = polldev->private;
+
+ dev_dbg(sur40->dev, "open\n");
+ sur40_init(sur40);
+}
+
+/* Disable device, polling has stopped. */
+static void sur40_close(struct input_polled_dev *polldev)
+{
+ struct sur40_state *sur40 = polldev->private;
+
+ dev_dbg(sur40->dev, "close\n");
+ /*
+ * There is no known way to stop the device, so we simply
+ * stop polling.
+ */
+}
+
+/*
+ * This function is called when a whole contact has been processed,
+ * so that it can assign it to a slot and store the data there.
+ */
+static void sur40_report_blob(struct sur40_blob *blob, struct input_dev *input)
+{
+ int wide, major, minor;
+
+ int bb_size_x = le16_to_cpu(blob->bb_size_x);
+ int bb_size_y = le16_to_cpu(blob->bb_size_y);
+
+ int pos_x = le16_to_cpu(blob->pos_x);
+ int pos_y = le16_to_cpu(blob->pos_y);
+
+ int ctr_x = le16_to_cpu(blob->ctr_x);
+ int ctr_y = le16_to_cpu(blob->ctr_y);
+
+ int slotnum = input_mt_get_slot_by_key(input, blob->blob_id);
+ if (slotnum < 0 || slotnum >= MAX_CONTACTS)
+ return;
+
+ input_mt_slot(input, slotnum);
+ input_mt_report_slot_state(input, MT_TOOL_FINGER, 1);
+ wide = (bb_size_x > bb_size_y);
+ major = max(bb_size_x, bb_size_y);
+ minor = min(bb_size_x, bb_size_y);
+
+ input_report_abs(input, ABS_MT_POSITION_X, pos_x);
+ input_report_abs(input, ABS_MT_POSITION_Y, pos_y);
+ input_report_abs(input, ABS_MT_TOOL_X, ctr_x);
+ input_report_abs(input, ABS_MT_TOOL_Y, ctr_y);
+
+ /* TODO: use a better orientation measure */
+ input_report_abs(input, ABS_MT_ORIENTATION, wide);
+ input_report_abs(input, ABS_MT_TOUCH_MAJOR, major);
+ input_report_abs(input, ABS_MT_TOUCH_MINOR, minor);
+}
+
+/* core function: poll for new input data */
+static void sur40_poll(struct input_polled_dev *polldev)
+{
+
+ struct sur40_state *sur40 = polldev->private;
+ struct input_dev *input = polldev->input;
+ int result, bulk_read, need_blobs, packet_blobs, i;
+ u32 uninitialized_var(packet_id);
+
+ struct sur40_header *header = &sur40->bulk_in_buffer->header;
+ struct sur40_blob *inblob = &sur40->bulk_in_buffer->blobs[0];
+
+ dev_dbg(sur40->dev, "poll\n");
+
+ need_blobs = -1;
+
+ do {
+
+ /* perform a blocking bulk read to get data from the device */
+ result = usb_bulk_msg(sur40->usbdev,
+ usb_rcvbulkpipe(sur40->usbdev, sur40->bulk_in_epaddr),
+ sur40->bulk_in_buffer, sur40->bulk_in_size,
+ &bulk_read, 1000);
+
+ dev_dbg(sur40->dev, "received %d bytes\n", bulk_read);
+
+ if (result < 0) {
+ dev_err(sur40->dev, "error in usb_bulk_read\n");
+ return;
+ }
+
+ result = bulk_read - sizeof(struct sur40_header);
+
+ if (result % sizeof(struct sur40_blob) != 0) {
+ dev_err(sur40->dev, "transfer size mismatch\n");
+ return;
+ }
+
+ /* first packet? */
+ if (need_blobs == -1) {
+ need_blobs = le16_to_cpu(header->count);
+ dev_dbg(sur40->dev, "need %d blobs\n", need_blobs);
+ packet_id = le32_to_cpu(header->packet_id);
+ }
+
+ /*
+	 * Sanity check: when video data is also being retrieved, the
+ * packet ID will usually increase in the middle of a series
+ * instead of at the end.
+ */
+ if (packet_id != header->packet_id)
+ dev_warn(sur40->dev, "packet ID mismatch\n");
+
+ packet_blobs = result / sizeof(struct sur40_blob);
+ dev_dbg(sur40->dev, "received %d blobs\n", packet_blobs);
+
+ /* packets always contain at least 4 blobs, even if empty */
+ if (packet_blobs > need_blobs)
+ packet_blobs = need_blobs;
+
+ for (i = 0; i < packet_blobs; i++) {
+ need_blobs--;
+ dev_dbg(sur40->dev, "processing blob\n");
+ sur40_report_blob(&(inblob[i]), input);
+ }
+
+ } while (need_blobs > 0);
+
+ input_mt_sync_frame(input);
+ input_sync(input);
+}
+
+/* Initialize input device parameters. */
+static void sur40_input_setup(struct input_dev *input_dev)
+{
+ __set_bit(EV_KEY, input_dev->evbit);
+ __set_bit(EV_ABS, input_dev->evbit);
+
+ input_set_abs_params(input_dev, ABS_MT_POSITION_X,
+ 0, SENSOR_RES_X, 0, 0);
+ input_set_abs_params(input_dev, ABS_MT_POSITION_Y,
+ 0, SENSOR_RES_Y, 0, 0);
+
+ input_set_abs_params(input_dev, ABS_MT_TOOL_X,
+ 0, SENSOR_RES_X, 0, 0);
+ input_set_abs_params(input_dev, ABS_MT_TOOL_Y,
+ 0, SENSOR_RES_Y, 0, 0);
+
+	/* max value unknown, but the major/minor axes
+	 * can never be larger than the screen */
+ input_set_abs_params(input_dev, ABS_MT_TOUCH_MAJOR,
+ 0, SENSOR_RES_X, 0, 0);
+ input_set_abs_params(input_dev, ABS_MT_TOUCH_MINOR,
+ 0, SENSOR_RES_Y, 0, 0);
+
+ input_set_abs_params(input_dev, ABS_MT_ORIENTATION, 0, 1, 0, 0);
+
+ input_mt_init_slots(input_dev, MAX_CONTACTS,
+ INPUT_MT_DIRECT | INPUT_MT_DROP_UNUSED);
+}
+
+/* Check candidate USB interface. */
+static int sur40_probe(struct usb_interface *interface,
+ const struct usb_device_id *id)
+{
+ struct usb_device *usbdev = interface_to_usbdev(interface);
+ struct sur40_state *sur40;
+ struct usb_host_interface *iface_desc;
+ struct usb_endpoint_descriptor *endpoint;
+ struct input_polled_dev *poll_dev;
+ int error;
+
+ /* Check if we really have the right interface. */
+ iface_desc = &interface->altsetting[0];
+ if (iface_desc->desc.bInterfaceClass != 0xFF)
+ return -ENODEV;
+
+ /* Use endpoint #4 (0x86). */
+ endpoint = &iface_desc->endpoint[4].desc;
+ if (endpoint->bEndpointAddress != TOUCH_ENDPOINT)
+ return -ENODEV;
+
+ /* Allocate memory for our device state and initialize it. */
+ sur40 = kzalloc(sizeof(struct sur40_state), GFP_KERNEL);
+ if (!sur40)
+ return -ENOMEM;
+
+ poll_dev = input_allocate_polled_device();
+ if (!poll_dev) {
+ error = -ENOMEM;
+ goto err_free_dev;
+ }
+
+ /* Set up polled input device control structure */
+ poll_dev->private = sur40;
+ poll_dev->poll_interval = POLL_INTERVAL;
+ poll_dev->open = sur40_open;
+ poll_dev->poll = sur40_poll;
+ poll_dev->close = sur40_close;
+
+ /* Set up regular input device structure */
+ sur40_input_setup(poll_dev->input);
+
+ poll_dev->input->name = "Samsung SUR40";
+ usb_to_input_id(usbdev, &poll_dev->input->id);
+ usb_make_path(usbdev, sur40->phys, sizeof(sur40->phys));
+ strlcat(sur40->phys, "/input0", sizeof(sur40->phys));
+ poll_dev->input->phys = sur40->phys;
+ poll_dev->input->dev.parent = &interface->dev;
+
+ sur40->usbdev = usbdev;
+ sur40->dev = &interface->dev;
+ sur40->input = poll_dev;
+
+ /* use the bulk-in endpoint tested above */
+ sur40->bulk_in_size = usb_endpoint_maxp(endpoint);
+ sur40->bulk_in_epaddr = endpoint->bEndpointAddress;
+ sur40->bulk_in_buffer = kmalloc(sur40->bulk_in_size, GFP_KERNEL);
+ if (!sur40->bulk_in_buffer) {
+ dev_err(&interface->dev, "Unable to allocate input buffer.");
+ error = -ENOMEM;
+ goto err_free_polldev;
+ }
+
+ error = input_register_polled_device(poll_dev);
+ if (error) {
+ dev_err(&interface->dev,
+ "Unable to register polled input device.");
+ goto err_free_buffer;
+ }
+
+ /* we can register the device now, as it is ready */
+ usb_set_intfdata(interface, sur40);
+ dev_dbg(&interface->dev, "%s is now attached\n", DRIVER_DESC);
+
+ return 0;
+
+err_free_buffer:
+ kfree(sur40->bulk_in_buffer);
+err_free_polldev:
+ input_free_polled_device(sur40->input);
+err_free_dev:
+ kfree(sur40);
+
+ return error;
+}
+
+/* Unregister device & clean up. */
+static void sur40_disconnect(struct usb_interface *interface)
+{
+ struct sur40_state *sur40 = usb_get_intfdata(interface);
+
+ input_unregister_polled_device(sur40->input);
+ input_free_polled_device(sur40->input);
+ kfree(sur40->bulk_in_buffer);
+ kfree(sur40);
+
+ usb_set_intfdata(interface, NULL);
+ dev_dbg(&interface->dev, "%s is now disconnected\n", DRIVER_DESC);
+}
+
+static const struct usb_device_id sur40_table[] = {
+ { USB_DEVICE(ID_MICROSOFT, ID_SUR40) }, /* Samsung SUR40 */
+ { } /* terminating null entry */
+};
+MODULE_DEVICE_TABLE(usb, sur40_table);
+
+/* USB-specific object needed to register this driver with the USB subsystem. */
+static struct usb_driver sur40_driver = {
+ .name = DRIVER_SHORT,
+ .probe = sur40_probe,
+ .disconnect = sur40_disconnect,
+ .id_table = sur40_table,
+};
+
+module_usb_driver(sur40_driver);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
struct usbtouch_usb {
unsigned char *data;
dma_addr_t data_dma;
+ int data_size;
unsigned char *buffer;
int buf_len;
struct urb *irq;
static void usbtouch_free_buffers(struct usb_device *udev,
struct usbtouch_usb *usbtouch)
{
- usb_free_coherent(udev, usbtouch->type->rept_size,
+ usb_free_coherent(udev, usbtouch->data_size,
usbtouch->data, usbtouch->data_dma);
kfree(usbtouch->buffer);
}
if (!type->process_pkt)
type->process_pkt = usbtouch_process_pkt;
- usbtouch->data = usb_alloc_coherent(udev, type->rept_size,
+ usbtouch->data_size = type->rept_size;
+ if (type->get_pkt_len) {
+ /*
+ * When dealing with variable-length packets we should
+ * not request more than wMaxPacketSize bytes at once
+ * as we do not know if there is more data coming or
+ * we filled exactly wMaxPacketSize bytes and there is
+ * nothing else.
+ */
+ usbtouch->data_size = min(usbtouch->data_size,
+ usb_endpoint_maxp(endpoint));
+ }
+
+ usbtouch->data = usb_alloc_coherent(udev, usbtouch->data_size,
GFP_KERNEL, &usbtouch->data_dma);
if (!usbtouch->data)
goto out_free;
if (usb_endpoint_type(endpoint) == USB_ENDPOINT_XFER_INT)
usb_fill_int_urb(usbtouch->irq, udev,
usb_rcvintpipe(udev, endpoint->bEndpointAddress),
- usbtouch->data, type->rept_size,
+ usbtouch->data, usbtouch->data_size,
usbtouch_irq, usbtouch, endpoint->bInterval);
else
usb_fill_bulk_urb(usbtouch->irq, udev,
usb_rcvbulkpipe(udev, endpoint->bEndpointAddress),
- usbtouch->data, type->rept_size,
+ usbtouch->data, usbtouch->data_size,
usbtouch_irq, usbtouch);
usbtouch->irq->dev = udev;
}
}
-static irqreturn_t zforce_interrupt(int irq, void *dev_id)
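+/*
+ * Hard IRQ handler: when suspended, just flag a wakeup event; the real
+ * work is deferred to the threaded handler below.
+ */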
+static irqreturn_t zforce_irq(int irq, void *dev_id)
+{
+ struct zforce_ts *ts = dev_id;
+ struct i2c_client *client = ts->client;
+
+ if (ts->suspended && device_may_wakeup(&client->dev))
+ pm_wakeup_event(&client->dev, 500);
+
+ return IRQ_WAKE_THREAD;
+}
+
+static irqreturn_t zforce_irq_thread(int irq, void *dev_id)
{
struct zforce_ts *ts = dev_id;
struct i2c_client *client = ts->client;
u8 *payload;
/*
- * When suspended, emit a wakeup signal if necessary and return.
+ * When still suspended, return.
* Due to the level-interrupt we will get re-triggered later.
*/
if (ts->suspended) {
- if (device_may_wakeup(&client->dev))
- pm_wakeup_event(&client->dev, 500);
msleep(20);
return IRQ_HANDLED;
}
* Therefore we can trigger the interrupt anytime it is low and do
* not need to limit it to the interrupt edge.
*/
- ret = devm_request_threaded_irq(&client->dev, client->irq, NULL,
- zforce_interrupt,
+ ret = devm_request_threaded_irq(&client->dev, client->irq,
+ zforce_irq, zforce_irq_thread,
IRQF_TRIGGER_LOW | IRQF_ONESHOT,
input_dev->name, ts);
if (ret) {
struct arm_smmu_cfg root_cfg;
phys_addr_t output_mask;
- spinlock_t lock;
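+	/* Serialises page-table updates; a mutex because map/unmap may sleep. */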
+ struct mutex lock;
};
static DEFINE_SPINLOCK(arm_smmu_devices_lock);
goto out_free_domain;
smmu_domain->root_cfg.pgd = pgd;
- spin_lock_init(&smmu_domain->lock);
+ mutex_init(&smmu_domain->lock);
domain->priv = smmu_domain;
return 0;
* Sanity check the domain. We don't currently support domains
* that cross between different SMMU chains.
*/
- spin_lock(&smmu_domain->lock);
+ mutex_lock(&smmu_domain->lock);
if (!smmu_domain->leaf_smmu) {
/* Now that we have a master, we can finalise the domain */
ret = arm_smmu_init_domain_context(domain, dev);
dev_name(device_smmu->dev));
goto err_unlock;
}
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
/* Looks ok, so add the device to the domain */
master = find_smmu_master(smmu_domain->leaf_smmu, dev->of_node);
return arm_smmu_domain_add_master(smmu_domain, master);
err_unlock:
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
return ret;
}
if (paddr & ~output_mask)
return -ERANGE;
- spin_lock(&smmu_domain->lock);
+ mutex_lock(&smmu_domain->lock);
pgd += pgd_index(iova);
end = iova + size;
do {
} while (pgd++, iova != end);
out_unlock:
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
/* Ensure new page tables are visible to the hardware walker */
if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
phys_addr_t paddr, size_t size, int flags)
{
struct arm_smmu_domain *smmu_domain = domain->priv;
- struct arm_smmu_device *smmu = smmu_domain->leaf_smmu;
- if (!smmu_domain || !smmu)
+ if (!smmu_domain)
return -ENODEV;
/* Check for silent address truncation up the SMMU chain. */
static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
+ pgd_t *pgdp, pgd;
+ pud_t pud;
+ pmd_t pmd;
+ pte_t pte;
struct arm_smmu_domain *smmu_domain = domain->priv;
struct arm_smmu_cfg *root_cfg = &smmu_domain->root_cfg;
- struct arm_smmu_device *smmu = root_cfg->smmu;
- spin_lock(&smmu_domain->lock);
- pgd = root_cfg->pgd;
- if (!pgd)
- goto err_unlock;
+ pgdp = root_cfg->pgd;
+ if (!pgdp)
+ return 0;
- pgd += pgd_index(iova);
- if (pgd_none_or_clear_bad(pgd))
- goto err_unlock;
+ pgd = *(pgdp + pgd_index(iova));
+ if (pgd_none(pgd))
+ return 0;
- pud = pud_offset(pgd, iova);
- if (pud_none_or_clear_bad(pud))
- goto err_unlock;
+ pud = *pud_offset(&pgd, iova);
+ if (pud_none(pud))
+ return 0;
- pmd = pmd_offset(pud, iova);
- if (pmd_none_or_clear_bad(pmd))
- goto err_unlock;
+ pmd = *pmd_offset(&pud, iova);
+ if (pmd_none(pmd))
+ return 0;
- pte = pmd_page_vaddr(*pmd) + pte_index(iova);
+ pte = *(pmd_page_vaddr(pmd) + pte_index(iova));
if (pte_none(pte))
- goto err_unlock;
-
- spin_unlock(&smmu_domain->lock);
- return __pfn_to_phys(pte_pfn(*pte)) | (iova & ~PAGE_MASK);
+ return 0;
-err_unlock:
- spin_unlock(&smmu_domain->lock);
- dev_warn(smmu->dev,
- "invalid (corrupt?) page tables detected for iova 0x%llx\n",
- (unsigned long long)iova);
- return -EINVAL;
+ return __pfn_to_phys(pte_pfn(pte)) | (iova & ~PAGE_MASK);
}
static int arm_smmu_domain_has_cap(struct iommu_domain *domain,
dev_err(dev,
"found only %d context interrupt(s) but %d required\n",
smmu->num_context_irqs, smmu->num_context_banks);
+ err = -ENODEV;
goto out_put_parent;
}
if (WARN_ON(!gic->domain))
return;
+ if (gic_nr == 0) {
#ifdef CONFIG_SMP
- set_smp_cross_call(gic_raise_softirq);
- register_cpu_notifier(&gic_cpu_notifier);
+ set_smp_cross_call(gic_raise_softirq);
+ register_cpu_notifier(&gic_cpu_notifier);
#endif
-
- set_handle_irq(gic_handle_irq);
+ set_handle_irq(gic_handle_irq);
+ }
gic_chip.flags |= gic_arch_extn.flags;
gic_dist_init(gic);
static void intc_irqpin_mask_unmask_prio(struct intc_irqpin_priv *p,
int irq, int do_mask)
{
- int bitfield_width = 4; /* PRIO assumed to have fixed bitfield width */
- int shift = (7 - irq) * bitfield_width; /* PRIO assumed to be 32-bit */
+ /* The PRIO register is assumed to be 32-bit with fixed 4-bit fields. */
+ int bitfield_width = 4;
+ int shift = 32 - (irq + 1) * bitfield_width;
intc_irqpin_read_modify_write(p, INTC_IRQPIN_REG_PRIO,
shift, bitfield_width,
static int intc_irqpin_set_sense(struct intc_irqpin_priv *p, int irq, int value)
{
+ /* The SENSE register is assumed to be 32-bit. */
int bitfield_width = p->config.sense_bitfield_width;
- int shift = (7 - irq) * bitfield_width; /* SENSE assumed to be 32-bit */
+ int shift = 32 - (irq + 1) * bitfield_width;
dev_dbg(&p->pdev->dev, "sense irq = %d, mode = %d\n", irq, value);
int i;
struct pci_dev *tmp_hfcpci = NULL;
-#ifdef __BIG_ENDIAN
-#error "not running on big endian machines now"
-#endif
-
strcpy(tmp, hfcpci_revision);
printk(KERN_INFO "HiSax: HFC-PCI driver Rev. %s\n", HiSax_getrev(tmp));
struct IsdnCardState *cs = card->cs;
char tmp[64];
-#ifdef __BIG_ENDIAN
-#error "not running on big endian machines now"
-#endif
-
strcpy(tmp, telespci_revision);
printk(KERN_INFO "HiSax: Teles/PCI driver Rev. %s\n", HiSax_getrev(tmp));
if (cs->typ != ISDN_CTYPE_TELESPCI)
(sizeof(struct led_pwm_data) * num_leds);
}
-static struct led_pwm_priv *led_pwm_create_of(struct platform_device *pdev)
+static int led_pwm_create_of(struct platform_device *pdev,
+ struct led_pwm_priv *priv)
{
struct device_node *node = pdev->dev.of_node;
struct device_node *child;
- struct led_pwm_priv *priv;
- int count, ret;
-
- /* count LEDs in this device, so we know how much to allocate */
- count = of_get_child_count(node);
- if (!count)
- return NULL;
-
- priv = devm_kzalloc(&pdev->dev, sizeof_pwm_leds_priv(count),
- GFP_KERNEL);
- if (!priv)
- return NULL;
+ int ret;
for_each_child_of_node(node, child) {
struct led_pwm_data *led_dat = &priv->leds[priv->num_leds];
if (IS_ERR(led_dat->pwm)) {
dev_err(&pdev->dev, "unable to request PWM for %s\n",
led_dat->cdev.name);
+ ret = PTR_ERR(led_dat->pwm);
goto err;
}
		/* Get the period from the PWM core. */
priv->num_leds++;
}
- return priv;
+ return 0;
err:
while (priv->num_leds--)
led_classdev_unregister(&priv->leds[priv->num_leds].cdev);
- return NULL;
+ return ret;
}
static int led_pwm_probe(struct platform_device *pdev)
{
struct led_pwm_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct led_pwm_priv *priv;
- int i, ret = 0;
+ int count, i;
+ int ret = 0;
+
+ if (pdata)
+ count = pdata->num_leds;
+ else
+ count = of_get_child_count(pdev->dev.of_node);
+
+ if (!count)
+ return -EINVAL;
- if (pdata && pdata->num_leds) {
- priv = devm_kzalloc(&pdev->dev,
- sizeof_pwm_leds_priv(pdata->num_leds),
- GFP_KERNEL);
- if (!priv)
- return -ENOMEM;
+ priv = devm_kzalloc(&pdev->dev, sizeof_pwm_leds_priv(count),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
- for (i = 0; i < pdata->num_leds; i++) {
+ if (pdata) {
+ for (i = 0; i < count; i++) {
struct led_pwm *cur_led = &pdata->leds[i];
struct led_pwm_data *led_dat = &priv->leds[i];
if (ret < 0)
goto err;
}
- priv->num_leds = pdata->num_leds;
+ priv->num_leds = count;
} else {
- priv = led_pwm_create_of(pdev);
- if (!priv)
- return -ENODEV;
+ ret = led_pwm_create_of(pdev, priv);
+ if (ret)
+ return ret;
}
platform_set_drvdata(pdev, priv);
windfarm_ad7417_sensor.o \
windfarm_lm75_sensor.o \
windfarm_lm87_sensor.o \
+ windfarm_max6690_sensor.o \
windfarm_pid.o \
windfarm_cpufreq_clamp.o \
windfarm_rm31.o
if (watermark <= WATERMARK_METADATA) {
SET_GC_MARK(b, GC_MARK_METADATA);
+ SET_GC_MOVE(b, 0);
b->prio = BTREE_PRIO;
} else {
SET_GC_MARK(b, GC_MARK_RECLAIMABLE);
+ SET_GC_MOVE(b, 0);
b->prio = INITIAL_PRIO;
}
uint8_t disk_gen;
uint8_t last_gc; /* Most out of date gen in the btree */
uint8_t gc_gen;
- uint16_t gc_mark;
+ uint16_t gc_mark; /* Bitfield used by GC. See below for field */
};
/*
#define GC_MARK_RECLAIMABLE 0
#define GC_MARK_DIRTY 1
#define GC_MARK_METADATA 2
-BITMASK(GC_SECTORS_USED, struct bucket, gc_mark, 2, 14);
+BITMASK(GC_SECTORS_USED, struct bucket, gc_mark, 2, 13);
+BITMASK(GC_MOVE, struct bucket, gc_mark, 15, 1);
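+/* gc_mark layout: bits 0-1 GC mark, bits 2-14 sectors used, bit 15 move flag */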
#include "journal.h"
#include "stats.h"
unsigned char writeback_percent;
unsigned writeback_delay;
- int writeback_rate_change;
- int64_t writeback_rate_derivative;
uint64_t writeback_rate_target;
+ int64_t writeback_rate_proportional;
+ int64_t writeback_rate_derivative;
+ int64_t writeback_rate_change;
unsigned writeback_rate_update_seconds;
unsigned writeback_rate_d_term;
unsigned writeback_rate_p_term_inverse;
- unsigned writeback_rate_d_smooth;
};
enum alloc_watermarks {
* call prio_write() to keep gens from wrapping.
*/
uint8_t need_save_prio;
- unsigned gc_move_threshold;
/*
* If nonzero, we know we aren't going to find any buckets to invalidate
SET_GC_MARK(PTR_BUCKET(c, &c->uuid_bucket, i),
GC_MARK_METADATA);
+ /* don't reclaim buckets to which writeback keys point */
+ rcu_read_lock();
+ for (i = 0; i < c->nr_uuids; i++) {
+ struct bcache_device *d = c->devices[i];
+ struct cached_dev *dc;
+ struct keybuf_key *w, *n;
+ unsigned j;
+
+ if (!d || UUID_FLASH_ONLY(&c->uuids[i]))
+ continue;
+ dc = container_of(d, struct cached_dev, disk);
+
+ spin_lock(&dc->writeback_keys.lock);
+ rbtree_postorder_for_each_entry_safe(w, n,
+ &dc->writeback_keys.keys, node)
+ for (j = 0; j < KEY_PTRS(&w->key); j++)
+ SET_GC_MARK(PTR_BUCKET(c, &w->key, j),
+ GC_MARK_DIRTY);
+ spin_unlock(&dc->writeback_keys.lock);
+ }
+ rcu_read_unlock();
+
for_each_cache(ca, c, i) {
uint64_t *i;
if (KEY_START(k) > KEY_START(insert) + sectors_found)
goto check_failed;
- if (KEY_PTRS(replace_key) != KEY_PTRS(k))
+ if (KEY_PTRS(k) != KEY_PTRS(replace_key) ||
+ KEY_DIRTY(k) != KEY_DIRTY(replace_key))
goto check_failed;
/* skip past gen */
struct bkey *replace_key;
};
-int btree_insert_fn(struct btree_op *b_op, struct btree *b)
+static int btree_insert_fn(struct btree_op *b_op, struct btree *b)
{
struct btree_insert_op *op = container_of(b_op,
struct btree_insert_op, op);
unsigned i;
for (i = 0; i < KEY_PTRS(k); i++) {
- struct cache *ca = PTR_CACHE(c, k, i);
struct bucket *g = PTR_BUCKET(c, k, i);
- if (GC_SECTORS_USED(g) < ca->gc_move_threshold)
+ if (GC_MOVE(g))
return true;
}
static void read_moving_endio(struct bio *bio, int error)
{
+ struct bbio *b = container_of(bio, struct bbio, bio);
struct moving_io *io = container_of(bio->bi_private,
struct moving_io, cl);
if (error)
io->op.error = error;
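+	/* A clean key whose pointer went stale while reading: abort this move. */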
+ else if (!KEY_DIRTY(&b->key) &&
+ ptr_stale(io->op.c, &b->key, 0)) {
+ io->op.error = -EINTR;
+ }
bch_bbio_endio(io->op.c, bio, error, "reading data to move");
}
if (!w)
break;
+ if (ptr_stale(c, &w->key, 0)) {
+ bch_keybuf_del(&c->moving_gc_keys, w);
+ continue;
+ }
+
io = kzalloc(sizeof(struct moving_io) + sizeof(struct bio_vec)
* DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS),
GFP_KERNEL);
static unsigned bucket_heap_top(struct cache *ca)
{
- return GC_SECTORS_USED(heap_peek(&ca->heap));
+ struct bucket *b;
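+	/* heap_peek() returns NULL when the heap is empty; report zero then. */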
+ return (b = heap_peek(&ca->heap)) ? GC_SECTORS_USED(b) : 0;
}
void bch_moving_gc(struct cache_set *c)
sectors_to_move -= GC_SECTORS_USED(b);
}
- ca->gc_move_threshold = bucket_heap_top(ca);
-
- pr_debug("threshold %u", ca->gc_move_threshold);
+ while (heap_pop(&ca->heap, b, bucket_cmp))
+ SET_GC_MOVE(b, 1);
}
mutex_unlock(&c->bucket_lock);
static bool can_attach_cache(struct cache *ca, struct cache_set *c)
{
return ca->sb.block_size == c->sb.block_size &&
- ca->sb.bucket_size == c->sb.block_size &&
+ ca->sb.bucket_size == c->sb.bucket_size &&
ca->sb.nr_in_set == c->sb.nr_in_set;
}
rw_attribute(writeback_rate_update_seconds);
rw_attribute(writeback_rate_d_term);
rw_attribute(writeback_rate_p_term_inverse);
-rw_attribute(writeback_rate_d_smooth);
read_attribute(writeback_rate_debug);
read_attribute(stripe_size);
var_printf(writeback_running, "%i");
var_print(writeback_delay);
var_print(writeback_percent);
- sysfs_print(writeback_rate, dc->writeback_rate.rate);
+ sysfs_hprint(writeback_rate, dc->writeback_rate.rate << 9);
var_print(writeback_rate_update_seconds);
var_print(writeback_rate_d_term);
var_print(writeback_rate_p_term_inverse);
- var_print(writeback_rate_d_smooth);
if (attr == &sysfs_writeback_rate_debug) {
+ char rate[20];
char dirty[20];
- char derivative[20];
char target[20];
- bch_hprint(dirty,
- bcache_dev_sectors_dirty(&dc->disk) << 9);
- bch_hprint(derivative, dc->writeback_rate_derivative << 9);
+ char proportional[20];
+ char derivative[20];
+ char change[20];
+ s64 next_io;
+
+ bch_hprint(rate, dc->writeback_rate.rate << 9);
+ bch_hprint(dirty, bcache_dev_sectors_dirty(&dc->disk) << 9);
bch_hprint(target, dc->writeback_rate_target << 9);
+ bch_hprint(proportional,dc->writeback_rate_proportional << 9);
+ bch_hprint(derivative, dc->writeback_rate_derivative << 9);
+ bch_hprint(change, dc->writeback_rate_change << 9);
+
+ next_io = div64_s64(dc->writeback_rate.next - local_clock(),
+ NSEC_PER_MSEC);
return sprintf(buf,
- "rate:\t\t%u\n"
- "change:\t\t%i\n"
+ "rate:\t\t%s/sec\n"
"dirty:\t\t%s\n"
+ "target:\t\t%s\n"
+ "proportional:\t%s\n"
"derivative:\t%s\n"
- "target:\t\t%s\n",
- dc->writeback_rate.rate,
- dc->writeback_rate_change,
- dirty, derivative, target);
+ "change:\t\t%s/sec\n"
+ "next io:\t%llims\n",
+ rate, dirty, target, proportional,
+ derivative, change, next_io);
}
sysfs_hprint(dirty_data,
struct kobj_uevent_env *env;
#define d_strtoul(var) sysfs_strtoul(var, dc->var)
+#define d_strtoul_nonzero(var) sysfs_strtoul_clamp(var, dc->var, 1, INT_MAX)
#define d_strtoi_h(var) sysfs_hatoi(var, dc->var)
sysfs_strtoul(data_csum, dc->disk.data_csum);
d_strtoul(writeback_metadata);
d_strtoul(writeback_running);
d_strtoul(writeback_delay);
- sysfs_strtoul_clamp(writeback_rate,
- dc->writeback_rate.rate, 1, 1000000);
+
sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent, 0, 40);
- d_strtoul(writeback_rate_update_seconds);
+ sysfs_strtoul_clamp(writeback_rate,
+ dc->writeback_rate.rate, 1, INT_MAX);
+
+ d_strtoul_nonzero(writeback_rate_update_seconds);
d_strtoul(writeback_rate_d_term);
- d_strtoul(writeback_rate_p_term_inverse);
- sysfs_strtoul_clamp(writeback_rate_p_term_inverse,
- dc->writeback_rate_p_term_inverse, 1, INT_MAX);
- d_strtoul(writeback_rate_d_smooth);
+ d_strtoul_nonzero(writeback_rate_p_term_inverse);
d_strtoi_h(sequential_cutoff);
d_strtoi_h(readahead);
&sysfs_writeback_rate_update_seconds,
&sysfs_writeback_rate_d_term,
&sysfs_writeback_rate_p_term_inverse,
- &sysfs_writeback_rate_d_smooth,
&sysfs_writeback_rate_debug,
&sysfs_dirty_data,
&sysfs_stripe_size,
{
uint64_t now = local_clock();
- d->next += div_u64(done, d->rate);
+ d->next += div_u64(done * NSEC_PER_SEC, d->rate);
+
+ if (time_before64(now + NSEC_PER_SEC, d->next))
+ d->next = now + NSEC_PER_SEC;
+
+ if (time_after64(now - NSEC_PER_SEC * 2, d->next))
+ d->next = now - NSEC_PER_SEC * 2;
return time_after64(d->next, now)
? div_u64(d->next - now, NSEC_PER_SEC / HZ)
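The clamp added to bch_next_delay() above is easiest to check with units attached: done is in sectors and d->rate in sectors per second, so done * NSEC_PER_SEC / rate is the time that IO is worth, and d->next is then pinned inside [now - 2s, now + 1s] so an idle stretch cannot bank unbounded credit and a backlog cannot push the deadline arbitrarily far out. A minimal user-space sketch of the same scheduler, with illustrative names that are not bcache's:

	#include <stdint.h>
	#include <stdio.h>

	#define NSEC_PER_SEC 1000000000ULL

	struct rate_limiter {
		uint64_t next;	/* ns timestamp the next IO may issue at */
		unsigned rate;	/* sectors per second */
	};

	/* Returns how long (ns) the caller should sleep before issuing. */
	static uint64_t next_delay(struct rate_limiter *d, uint64_t done,
				   uint64_t now)
	{
		d->next += done * NSEC_PER_SEC / d->rate;

		/* Cap the credit an idle period can accumulate at 2s... */
		if (d->next < now - 2 * NSEC_PER_SEC)
			d->next = now - 2 * NSEC_PER_SEC;
		/* ...and never schedule more than 1s into the future. */
		if (d->next > now + NSEC_PER_SEC)
			d->next = now + NSEC_PER_SEC;

		return d->next > now ? d->next - now : 0;
	}

	int main(void)
	{
		struct rate_limiter d = { .next = 0, .rate = 1024 };
		uint64_t now = 10 * NSEC_PER_SEC;

		/* After a long idle stretch the first IOs pass immediately
		 * (credit is capped); later ones are spaced at 1024 sectors/s,
		 * and the deadline never runs more than 1s ahead of now. */
		for (int i = 0; i < 5; i++)
			printf("io %d: delay %llu ns\n", i,
			       (unsigned long long)next_delay(&d, 1024, now));
		return 0;
	}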
_r; \
})
-#define heap_peek(h) ((h)->size ? (h)->data[0] : NULL)
+#define heap_peek(h) ((h)->used ? (h)->data[0] : NULL)
#define heap_full(h) ((h)->used == (h)->size)
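The heap_peek() change is semantic, not cosmetic: in this heap, size is the allocated capacity and used is the number of live elements, so peeking must test used or an allocated-but-empty heap hands back whatever stale pointer is left in data[0]. A tiny stand-alone model of the distinction (hypothetical struct, not the kernel's):

	#include <stdio.h>

	/* Minimal model: 'size' is capacity, 'used' is the element count. */
	struct heap {
		unsigned size;	/* slots allocated in data[] */
		unsigned used;	/* slots actually occupied   */
		void **data;
	};

	/* An allocated (size > 0) but empty (used == 0) heap must yield
	 * NULL, not the leftovers of a previously popped element. */
	static void *heap_peek(struct heap *h)
	{
		return h->used ? h->data[0] : NULL;
	}

	int main(void)
	{
		void *stale = (void *)0x1;	/* residue of a popped element */
		struct heap h = { .size = 1, .used = 0, .data = &stale };

		printf("%p\n", heap_peek(&h));	/* NULL, not 0x1 */
		return 0;
	}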
/* PD controller */
- int change = 0;
- int64_t error;
int64_t dirty = bcache_dev_sectors_dirty(&dc->disk);
int64_t derivative = dirty - dc->disk.sectors_dirty_last;
+ int64_t proportional = dirty - target;
+ int64_t change;
dc->disk.sectors_dirty_last = dirty;
- derivative *= dc->writeback_rate_d_term;
- derivative = clamp(derivative, -dirty, dirty);
+ /* Scale to sectors per second */
- derivative = ewma_add(dc->disk.sectors_dirty_derivative, derivative,
- dc->writeback_rate_d_smooth, 0);
+ proportional *= dc->writeback_rate_update_seconds;
+ proportional = div_s64(proportional, dc->writeback_rate_p_term_inverse);
- /* Avoid divide by zero */
- if (!target)
- goto out;
+ derivative = div_s64(derivative, dc->writeback_rate_update_seconds);
- error = div64_s64((dirty + derivative - target) << 8, target);
+ derivative = ewma_add(dc->disk.sectors_dirty_derivative, derivative,
+ (dc->writeback_rate_d_term /
+ dc->writeback_rate_update_seconds) ?: 1, 0);
+
+ derivative *= dc->writeback_rate_d_term;
+ derivative = div_s64(derivative, dc->writeback_rate_p_term_inverse);
- change = div_s64((dc->writeback_rate.rate * error) >> 8,
- dc->writeback_rate_p_term_inverse);
+ change = proportional + derivative;
/* Don't increase writeback rate if the device isn't keeping up */
if (change > 0 &&
time_after64(local_clock(),
- dc->writeback_rate.next + 10 * NSEC_PER_MSEC))
+ dc->writeback_rate.next + NSEC_PER_MSEC))
change = 0;
dc->writeback_rate.rate =
- clamp_t(int64_t, dc->writeback_rate.rate + change,
+ clamp_t(int64_t, (int64_t) dc->writeback_rate.rate + change,
1, NSEC_PER_MSEC);
-out:
+
+ dc->writeback_rate_proportional = proportional;
dc->writeback_rate_derivative = derivative;
dc->writeback_rate_change = change;
dc->writeback_rate_target = target;
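With the defaults installed below (an update every 5 seconds, p_term_inverse of 6000, d_term of 30), the rewritten controller can be traced by hand: sitting 60000 dirty sectors over target contributes 60000 * 5 / 6000 = 50 sectors/s of rate change from the proportional term, and the smoothed dirty-growth rate contributes 30 / 6000 = 0.5% of itself from the derivative term. A stand-alone sketch of that arithmetic, with the EWMA folded into a single input for brevity (names are illustrative, not bcache's):

	#include <stdint.h>
	#include <stdio.h>

	/* One controller update: how much to add to the writeback rate,
	 * in sectors per second. */
	static int64_t rate_change(int64_t dirty, int64_t target,
				   int64_t smoothed_growth, /* sectors/s */
				   int64_t update_seconds,  /* default 5 */
				   int64_t p_term_inverse,  /* default 6000 */
				   int64_t d_term)          /* default 30 */
	{
		int64_t proportional = (dirty - target) * update_seconds
				       / p_term_inverse;
		int64_t derivative = smoothed_growth * d_term / p_term_inverse;

		return proportional + derivative;
	}

	int main(void)
	{
		/* 60000 sectors over target, dirty set growing 2000 sectors/s:
		 * 50 + 10 = 60 sectors/s faster. */
		printf("%lld\n", (long long)rate_change(160000, 100000, 2000,
							5, 6000, 30));
		return 0;
	}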
static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors)
{
- uint64_t ret;
-
if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) ||
!dc->writeback_percent)
return 0;
- ret = bch_next_delay(&dc->writeback_rate, sectors * 10000000ULL);
-
- return min_t(uint64_t, ret, HZ);
+ return bch_next_delay(&dc->writeback_rate, sectors);
}
struct dirty_io {
if (KEY_START(&w->key) != dc->last_read ||
jiffies_to_msecs(delay) > 50)
while (!kthread_should_stop() && delay)
- delay = schedule_timeout_interruptible(delay);
+ delay = schedule_timeout_uninterruptible(delay);
dc->last_read = KEY_OFFSET(&w->key);
while (delay &&
!kthread_should_stop() &&
!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
- delay = schedule_timeout_interruptible(delay);
+ delay = schedule_timeout_uninterruptible(delay);
}
}
bch_btree_map_keys(&op.op, dc->disk.c, &KEY(op.inode, 0, 0),
sectors_dirty_init_fn, 0);
+
+ dc->disk.sectors_dirty_last = bcache_dev_sectors_dirty(&dc->disk);
}
int bch_cached_dev_writeback_init(struct cached_dev *dc)
dc->writeback_delay = 30;
dc->writeback_rate.rate = 1024;
- dc->writeback_rate_update_seconds = 30;
- dc->writeback_rate_d_term = 16;
- dc->writeback_rate_p_term_inverse = 64;
- dc->writeback_rate_d_smooth = 8;
+ dc->writeback_rate_update_seconds = 5;
+ dc->writeback_rate_d_term = 30;
+ dc->writeback_rate_p_term_inverse = 6000;
dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
"bcache_writeback");
if (IS_ERR(dc->writeback_thread))
return PTR_ERR(dc->writeback_thread);
- set_task_state(dc->writeback_thread, TASK_INTERRUPTIBLE);
-
INIT_DELAYED_WORK(&dc->writeback_rate_update, update_writeback_rate);
schedule_delayed_work(&dc->writeback_rate_update,
dc->writeback_rate_update_seconds * HZ);
{
__u64 mem;
+ dm_bufio_allocated_kmem_cache = 0;
+ dm_bufio_allocated_get_free_pages = 0;
+ dm_bufio_allocated_vmalloc = 0;
+ dm_bufio_current_allocated = 0;
+
memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
int r = 0;
bool updated = updated_this_tick(mq, e);
- requeue_and_update_tick(mq, e);
-
if ((!discarded_oblock && updated) ||
- !should_promote(mq, e, discarded_oblock, data_dir))
+ !should_promote(mq, e, discarded_oblock, data_dir)) {
+ requeue_and_update_tick(mq, e);
result->op = POLICY_MISS;
- else if (!can_migrate)
+
+ } else if (!can_migrate)
r = -EWOULDBLOCK;
- else
+
+ else {
+ requeue_and_update_tick(mq, e);
r = pre_cache_to_cache(mq, e, result);
+ }
return r;
}
{
int r;
- r = dm_cache_resize(cache->cmd, cache->cache_size);
+ r = dm_cache_resize(cache->cmd, new_size);
if (r) {
DMERR("could not resize cache metadata");
return r;
struct delay_c {
struct timer_list delay_timer;
struct mutex timer_lock;
+ struct workqueue_struct *kdelayd_wq;
struct work_struct flush_expired_bios;
struct list_head delayed_bios;
atomic_t may_delay;
static DEFINE_MUTEX(delayed_bios_lock);
-static struct workqueue_struct *kdelayd_wq;
static struct kmem_cache *delayed_cache;
static void handle_delayed_timer(unsigned long data)
{
struct delay_c *dc = (struct delay_c *)data;
- queue_work(kdelayd_wq, &dc->flush_expired_bios);
+ queue_work(dc->kdelayd_wq, &dc->flush_expired_bios);
}
static void queue_timeout(struct delay_c *dc, unsigned long expires)
goto bad_dev_write;
}
+ dc->kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
+ if (!dc->kdelayd_wq) {
+ DMERR("Couldn't start kdelayd");
+ goto bad_queue;
+ }
+
setup_timer(&dc->delay_timer, handle_delayed_timer, (unsigned long)dc);
INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
ti->private = dc;
return 0;
+bad_queue:
+ mempool_destroy(dc->delayed_pool);
bad_dev_write:
if (dc->dev_write)
dm_put_device(ti, dc->dev_write);
{
struct delay_c *dc = ti->private;
- flush_workqueue(kdelayd_wq);
+ destroy_workqueue(dc->kdelayd_wq);
dm_put_device(ti, dc->dev_read);
{
int r = -ENOMEM;
- kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
- if (!kdelayd_wq) {
- DMERR("Couldn't start kdelayd");
- goto bad_queue;
- }
-
delayed_cache = KMEM_CACHE(dm_delay_info, 0);
if (!delayed_cache) {
DMERR("Couldn't create delayed bio cache.");
bad_register:
kmem_cache_destroy(delayed_cache);
bad_memcache:
- destroy_workqueue(kdelayd_wq);
-bad_queue:
return r;
}
{
dm_unregister_target(&delay_target);
kmem_cache_destroy(delayed_cache);
- destroy_workqueue(kdelayd_wq);
}
/* Module hooks */
atomic_t pending_exceptions_count;
+ /* Protected by "lock" */
+ sector_t exception_start_sequence;
+
+ /* Protected by kcopyd single-threaded callback */
+ sector_t exception_complete_sequence;
+
+ /*
+ * A list of pending exceptions that completed out of order.
+ * Protected by kcopyd single-threaded callback.
+ */
+ struct list_head out_of_order_list;
+
mempool_t *pending_pool;
struct dm_exception_table pending;
*/
int started;
+ /* There was a copying error. */
+ int copy_error;
+
+ /* A sequence number, used for in-order completion. */
+ sector_t exception_sequence;
+
+ struct list_head out_of_order_entry;
+
/*
* For writing a complete chunk, bypassing the copy.
*/
s->valid = 1;
s->active = 0;
atomic_set(&s->pending_exceptions_count, 0);
+ s->exception_start_sequence = 0;
+ s->exception_complete_sequence = 0;
+ INIT_LIST_HEAD(&s->out_of_order_list);
init_rwsem(&s->lock);
INIT_LIST_HEAD(&s->list);
spin_lock_init(&s->pe_lock);
pending_complete(pe, success);
}
+static void complete_exception(struct dm_snap_pending_exception *pe)
+{
+ struct dm_snapshot *s = pe->snap;
+
+ if (unlikely(pe->copy_error))
+ pending_complete(pe, 0);
+
+ else
+ /* Update the metadata if we are persistent */
+ s->store->type->commit_exception(s->store, &pe->e,
+ commit_callback, pe);
+}
+
/*
* Called when the copy I/O has finished. kcopyd actually runs
* this code so don't block.
struct dm_snap_pending_exception *pe = context;
struct dm_snapshot *s = pe->snap;
- if (read_err || write_err)
- pending_complete(pe, 0);
+ pe->copy_error = read_err || write_err;
- else
- /* Update the metadata if we are persistent */
- s->store->type->commit_exception(s->store, &pe->e,
- commit_callback, pe);
+ if (pe->exception_sequence == s->exception_complete_sequence) {
+ s->exception_complete_sequence++;
+ complete_exception(pe);
+
+ while (!list_empty(&s->out_of_order_list)) {
+ pe = list_entry(s->out_of_order_list.next,
+ struct dm_snap_pending_exception, out_of_order_entry);
+ if (pe->exception_sequence != s->exception_complete_sequence)
+ break;
+ s->exception_complete_sequence++;
+ list_del(&pe->out_of_order_entry);
+ complete_exception(pe);
+ }
+ } else {
+ struct list_head *lh;
+ struct dm_snap_pending_exception *pe2;
+
+ list_for_each_prev(lh, &s->out_of_order_list) {
+ pe2 = list_entry(lh, struct dm_snap_pending_exception, out_of_order_entry);
+ if (pe2->exception_sequence < pe->exception_sequence)
+ break;
+ }
+ list_add(&pe->out_of_order_entry, lh);
+ }
}
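What the new fields implement is an in-order completion gate: every pending exception is stamped from exception_start_sequence when it is created, only the exception whose stamp equals exception_complete_sequence may retire, and anything that finishes early parks on out_of_order_list until its turn comes around. A self-contained model of the gate, simplified to a flag array where the kernel keeps a sorted list (purely illustrative):

	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_PENDING 8

	static unsigned complete_seq;		/* next sequence allowed to retire */
	static bool parked[MAX_PENDING];	/* early arrivals waiting their turn */

	static void retire(unsigned seq)
	{
		printf("retired exception %u\n", seq);
	}

	/* Called when the copy for 'seq' finishes, possibly out of order. */
	static void copy_done(unsigned seq)
	{
		if (seq != complete_seq) {
			parked[seq] = true;	/* arrived early: park it */
			return;
		}
		retire(complete_seq++);
		/* Drain whatever the retirement just unblocked. */
		while (complete_seq < MAX_PENDING && parked[complete_seq]) {
			parked[complete_seq] = false;
			retire(complete_seq++);
		}
	}

	int main(void)
	{
		/* Completions arrive 2, 0, 1; retirement still runs 0, 1, 2. */
		copy_done(2);
		copy_done(0);
		copy_done(1);
		return 0;
	}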
/*
return NULL;
}
+ pe->exception_sequence = s->exception_start_sequence++;
+
dm_insert_exception(&s->pending, &pe->e);
return pe;
static struct target_type snapshot_target = {
.name = "snapshot",
- .version = {1, 11, 1},
+ .version = {1, 12, 0},
.module = THIS_MODULE,
.ctr = snapshot_ctr,
.dtr = snapshot_dtr,
int __init dm_statistics_init(void)
{
+ shared_memory_amount = 0;
dm_stat_need_rcu_barrier = 0;
return 0;
}
num_targets = dm_round_up(num_targets, KEYS_PER_NODE);
+ if (!num_targets) {
+ kfree(t);
+ return -ENOMEM;
+ }
+
if (alloc_targets(t, num_targets)) {
kfree(t);
return -ENOMEM;
up_write(&pmd->root_lock);
}
+void dm_pool_metadata_read_write(struct dm_pool_metadata *pmd)
+{
+ down_write(&pmd->root_lock);
+ pmd->read_only = false;
+ dm_bm_set_read_write(pmd->bm);
+ up_write(&pmd->root_lock);
+}
+
int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
dm_block_t threshold,
dm_sm_threshold_fn fn,
* that nothing is changing.
*/
void dm_pool_metadata_read_only(struct dm_pool_metadata *pmd);
+void dm_pool_metadata_read_write(struct dm_pool_metadata *pmd);
int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
dm_block_t threshold,
*/
r = dm_thin_insert_block(tc->td, m->virt_block, m->data_block);
if (r) {
- DMERR_LIMIT("dm_thin_insert_block() failed");
+ DMERR_LIMIT("%s: dm_thin_insert_block() failed: error = %d",
+ dm_device_name(pool->pool_md), r);
+ set_pool_mode(pool, PM_READ_ONLY);
cell_error(pool, m->cell);
goto out;
}
}
}
-static int commit(struct pool *pool)
-{
- int r;
-
- r = dm_pool_commit_metadata(pool->pmd);
- if (r)
- DMERR_LIMIT("%s: commit failed: error = %d",
- dm_device_name(pool->pool_md), r);
-
- return r;
-}
-
/*
* A non-zero return indicates read_only or fail_io mode.
* Many callers don't care about the return value.
*/
-static int commit_or_fallback(struct pool *pool)
+static int commit(struct pool *pool)
{
int r;
if (get_pool_mode(pool) != PM_WRITE)
return -EINVAL;
- r = commit(pool);
- if (r)
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r) {
+ DMERR_LIMIT("%s: dm_pool_commit_metadata failed: error = %d",
+ dm_device_name(pool->pool_md), r);
set_pool_mode(pool, PM_READ_ONLY);
+ }
return r;
}
* Try to commit to see if that will free up some
* more space.
*/
- (void) commit_or_fallback(pool);
+ r = commit(pool);
+ if (r)
+ return r;
r = dm_pool_get_free_block_count(pool->pmd, &free_blocks);
if (r)
* table reload).
*/
if (!free_blocks) {
- DMWARN("%s: no free space available.",
+ DMWARN("%s: no free data space available.",
dm_device_name(pool->pool_md));
spin_lock_irqsave(&pool->lock, flags);
pool->no_free_space = 1;
}
r = dm_pool_alloc_data_block(pool->pmd, result);
- if (r)
+ if (r) {
+ if (r == -ENOSPC &&
+ !dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks) &&
+ !free_blocks) {
+ DMWARN("%s: no free metadata space available.",
+ dm_device_name(pool->pool_md));
+ set_pool_mode(pool, PM_READ_ONLY);
+ }
return r;
+ }
return 0;
}
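Together with the commit() consolidation above, data allocation now degrades along a deliberate ladder: when no data blocks are free, commit and re-check, because a commit can release blocks still pinned by the previous transaction; if allocation afterwards reports -ENOSPC and the metadata device is exhausted too, the pool drops to read-only rather than limping on. A compressed sketch of that ladder, in which every callee is a stand-in for the corresponding dm-thin primitive rather than the real API:

	#include <errno.h>
	#include <stdio.h>

	enum pool_mode { PM_WRITE, PM_READ_ONLY };

	static unsigned data_free, meta_free = 1;
	static enum pool_mode mode = PM_WRITE;

	static int commit(void)
	{
		if (mode != PM_WRITE)
			return -EINVAL;
		data_free += 4;	/* pretend the commit released some blocks */
		return 0;
	}

	static int alloc_data_block(unsigned *result)
	{
		if (!data_free) {
			int r = commit();	/* may free old-transaction blocks */
			if (r)
				return r;
			if (!data_free)
				return -ENOSPC;	/* genuinely out of data space */
		}
		if (!meta_free) {
			mode = PM_READ_ONLY;	/* demote instead of corrupting */
			return -ENOSPC;
		}
		*result = --data_free;
		return 0;
	}

	int main(void)
	{
		unsigned block;

		printf("alloc: %d\n", alloc_data_block(&block));
		return 0;
	}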
if (bio_list_empty(&bios) && !need_commit_due_to_time(pool))
return;
- if (commit_or_fallback(pool)) {
+ if (commit(pool)) {
while ((bio = bio_list_pop(&bios)))
bio_io_error(bio);
return;
case PM_FAIL:
DMERR("%s: switching pool to failure mode",
dm_device_name(pool->pool_md));
+ dm_pool_metadata_read_only(pool->pmd);
pool->process_bio = process_bio_fail;
pool->process_discard = process_bio_fail;
pool->process_prepared_mapping = process_prepared_mapping_fail;
break;
case PM_WRITE:
+ dm_pool_metadata_read_write(pool->pmd);
pool->process_bio = process_bio;
pool->process_discard = process_discard;
pool->process_prepared_mapping = process_prepared_mapping;
struct pool_c *pt = ti->private;
/*
- * We want to make sure that degraded pools are never upgraded.
+ * We want to make sure that a pool in PM_FAIL mode is never upgraded.
*/
enum pool_mode old_mode = pool->pf.mode;
enum pool_mode new_mode = pt->adjusted_pf.mode;
- if (old_mode > new_mode)
+ /*
+ * If we were in PM_FAIL mode, rollback of metadata failed. We're
+ * not going to recover without a thin_repair. So we never let the
+ * pool move out of the old mode. On the other hand, a PM_READ_ONLY
+ * mode may have been due to a lack of metadata or data space, and may
+ * now work (i.e. if the underlying devices have been resized).
+ */
+ if (old_mode == PM_FAIL)
new_mode = old_mode;
pool->ti = ti;
return r;
if (need_commit1 || need_commit2)
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
return 0;
}
cancel_delayed_work(&pool->waker);
flush_workqueue(pool->wq);
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
}
static int check_arg_count(unsigned argc, unsigned args_required)
if (r)
return r;
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
r = dm_pool_reserve_metadata_snap(pool->pmd);
if (r)
DMWARN("Unrecognised thin pool target message received: %s", argv[0]);
if (!r)
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
return r;
}
/* Commit to ensure statistics aren't out-of-date */
if (!(status_flags & DM_STATUS_NOFLUSH_FLAG) && !dm_suspended(ti))
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
r = dm_pool_get_metadata_transaction_id(pool->pmd, &transaction_id);
if (r) {
static struct ctl_table_header *raid_table_header;
-static ctl_table raid_table[] = {
+static struct ctl_table raid_table[] = {
{
.procname = "speed_limit_min",
.data = &sysctl_speed_limit_min,
{ }
};
-static ctl_table raid_dir_table[] = {
+static struct ctl_table raid_dir_table[] = {
{
.procname = "raid",
.maxlen = 0,
{ }
};
-static ctl_table raid_root_table[] = {
+static struct ctl_table raid_root_table[] = {
{
.procname = "dev",
.maxlen = 0,
goto retry;
}
-static inline int mddev_lock(struct mddev * mddev)
+static inline int __must_check mddev_lock(struct mddev * mddev)
{
return mutex_lock_interruptible(&mddev->reconfig_mutex);
}
+/* Sometimes we need to take the lock in a situation where
+ * failure due to interrupts is not acceptable.
+ */
+static inline void mddev_lock_nointr(struct mddev * mddev)
+{
+ mutex_lock(&mddev->reconfig_mutex);
+}
+
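The __must_check on mddev_lock() is what keeps this split honest: callers of the interruptible variant are forced by the compiler to handle a -EINTR return, while the _nointr variant is reserved for paths that cannot back out and has nothing to check by design. The same guard is available in plain C through warn_unused_result; a minimal demonstration, none of which is md code:

	#include <stdio.h>

	#define __must_check __attribute__((warn_unused_result))

	/* Interruptible acquire: may fail, so the result must be looked at. */
	static __must_check int lock_interruptible(int *lock)
	{
		if (*lock)
			return -4;	/* stand-in for -EINTR */
		*lock = 1;
		return 0;
	}

	/* Uninterruptible acquire: cannot fail, nothing to ignore. */
	static void lock_nointr(int *lock)
	{
		*lock = 1;
	}

	int main(void)
	{
		int lock = 0;

		if (lock_interruptible(&lock))	/* checked: no warning */
			return 1;
		lock = 0;
		lock_nointr(&lock);
		/* lock_interruptible(&lock); on its own would warn with
		 * -Wunused-result under gcc or clang. */
		printf("locked\n");
		return 0;
	}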
static inline int mddev_is_locked(struct mddev *mddev)
{
return mutex_is_locked(&mddev->reconfig_mutex);
finish_wait(&mddev->sb_wait, &wq);
}
-static void bi_complete(struct bio *bio, int error)
-{
- complete((struct completion*)bio->bi_private);
-}
-
int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
struct page *page, int rw, bool metadata_op)
{
struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, rdev->mddev);
- struct completion event;
int ret;
rw |= REQ_SYNC;
else
bio->bi_sector = sector + rdev->data_offset;
bio_add_page(bio, page, size, 0);
- init_completion(&event);
- bio->bi_private = &event;
- bio->bi_end_io = bi_complete;
- submit_bio(rw, bio);
- wait_for_completion(&event);
+ submit_bio_wait(rw, bio);
ret = test_bit(BIO_UPTODATE, &bio->bi_flags);
bio_put(bio);
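The deleted lines are the classic hand-rolled synchronous wrapper: park a completion in ->bi_private, complete it from the end_io callback, and wait, which is exactly what submit_bio_wait() now encapsulates. The pattern generalizes to any asynchronous interface; a user-space rendition with pthreads (illustrative, not kernel API):

	#include <pthread.h>
	#include <stdio.h>

	/* A 'completion': the submitter sleeps here until the callback fires. */
	struct completion {
		pthread_mutex_t lock;
		pthread_cond_t cond;
		int done;
	};

	static void complete(struct completion *c)
	{
		pthread_mutex_lock(&c->lock);
		c->done = 1;
		pthread_cond_signal(&c->cond);
		pthread_mutex_unlock(&c->lock);
	}

	static void wait_for_completion(struct completion *c)
	{
		pthread_mutex_lock(&c->lock);
		while (!c->done)
			pthread_cond_wait(&c->cond, &c->lock);
		pthread_mutex_unlock(&c->lock);
	}

	/* Stand-in for an async submission whose callback runs elsewhere. */
	static void *async_io(void *arg)
	{
		complete(arg);	/* the "end_io" callback */
		return NULL;
	}

	int main(void)
	{
		struct completion c = { PTHREAD_MUTEX_INITIALIZER,
					PTHREAD_COND_INITIALIZER, 0 };
		pthread_t t;

		pthread_create(&t, NULL, async_io, &c);	/* "submit" */
		wait_for_completion(&c);		/* the sync wrapper */
		pthread_join(t, NULL);
		printf("io done\n");
		return 0;
	}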
for_each_mddev(mddev, tmp) {
struct md_rdev *rdev2;
- mddev_lock(mddev);
+ mddev_lock_nointr(mddev);
rdev_for_each(rdev2, mddev)
if (rdev->bdev == rdev2->bdev &&
rdev != rdev2 &&
break;
}
}
- mddev_lock(my_mddev);
+ mddev_lock_nointr(my_mddev);
if (overlap) {
/* Someone else could have slipped in a size
* change here, but doing so is just silly.
mddev->in_sync = 1;
del_timer_sync(&mddev->safemode_timer);
}
+ blk_set_stacking_limits(&mddev->queue->limits);
pers->run(mddev);
set_bit(MD_CHANGE_DEVS, &mddev->flags);
mddev_resume(mddev);
void md_stop_writes(struct mddev *mddev)
{
- mddev_lock(mddev);
+ mddev_lock_nointr(mddev);
__md_stop_writes(mddev);
mddev_unlock(mddev);
}
static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
{
int err = 0;
+ int did_freeze = 0;
+
+ if (!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
+ did_freeze = 1;
+ set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+ }
+ if (mddev->sync_thread) {
+ set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+ /* Thread might be blocked waiting for metadata update
+ * which will now never happen */
+ wake_up_process(mddev->sync_thread->tsk);
+ }
+ mddev_unlock(mddev);
+ wait_event(resync_wait, mddev->sync_thread == NULL);
+ mddev_lock_nointr(mddev);
+
mutex_lock(&mddev->open_mutex);
- if (atomic_read(&mddev->openers) > !!bdev) {
+ if (atomic_read(&mddev->openers) > !!bdev ||
+ mddev->sync_thread ||
+ (bdev && !test_bit(MD_STILL_CLOSED, &mddev->flags))) {
printk("md: %s still in use.\n",mdname(mddev));
+ if (did_freeze) {
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+ }
err = -EBUSY;
goto out;
}
- if (bdev && !test_bit(MD_STILL_CLOSED, &mddev->flags)) {
- /* Someone opened the device since we flushed it
- * so page cache could be dirty and it is too late
- * to flush. So abort
- */
- mutex_unlock(&mddev->open_mutex);
- return -EBUSY;
- }
if (mddev->pers) {
__md_stop_writes(mddev);
set_disk_ro(mddev->gendisk, 1);
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
sysfs_notify_dirent_safe(mddev->sysfs_state);
- err = 0;
+ err = 0;
}
out:
mutex_unlock(&mddev->open_mutex);
{
struct gendisk *disk = mddev->gendisk;
struct md_rdev *rdev;
+ int did_freeze = 0;
+
+ if (!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
+ did_freeze = 1;
+ set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+ }
+ if (mddev->sync_thread) {
+ set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+ /* Thread might be blocked waiting for metadata update
+ * which will now never happen */
+ wake_up_process(mddev->sync_thread->tsk);
+ }
+ mddev_unlock(mddev);
+ wait_event(resync_wait, mddev->sync_thread == NULL);
+ mddev_lock_nointr(mddev);
mutex_lock(&mddev->open_mutex);
if (atomic_read(&mddev->openers) > !!bdev ||
- mddev->sysfs_active) {
+ mddev->sysfs_active ||
+ mddev->sync_thread ||
+ (bdev && !test_bit(MD_STILL_CLOSED, &mddev->flags))) {
printk("md: %s still in use.\n",mdname(mddev));
mutex_unlock(&mddev->open_mutex);
- return -EBUSY;
- }
- if (bdev && !test_bit(MD_STILL_CLOSED, &mddev->flags)) {
- /* Someone opened the device since we flushed it
- * so page cache could be dirty and it is too late
- * to flush. So abort
- */
- mutex_unlock(&mddev->open_mutex);
+ if (did_freeze) {
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+ }
return -EBUSY;
}
if (mddev->pers) {
wait_event(mddev->sb_wait,
!test_bit(MD_CHANGE_DEVS, &mddev->flags) &&
!test_bit(MD_CHANGE_PENDING, &mddev->flags));
- mddev_lock(mddev);
+ mddev_lock_nointr(mddev);
}
} else {
err = -EROFS;
mddev->curr_resync = 2;
try_again:
- if (kthread_should_stop())
- set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-
if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
goto skip;
for_each_mddev(mddev2, tmp) {
* be caught by 'softlockup'
*/
prepare_to_wait(&resync_wait, &wq, TASK_INTERRUPTIBLE);
- if (!kthread_should_stop() &&
+ if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
mddev2->curr_resync >= mddev->curr_resync) {
printk(KERN_INFO "md: delaying %s of %s"
" until %s has finished (they"
last_check = 0;
if (j>2) {
- printk(KERN_INFO
+ printk(KERN_INFO
"md: resuming %s of %s from checkpoint.\n",
desc, mdname(mddev));
mddev->curr_resync = j;
sysfs_notify(&mddev->kobj, NULL, "sync_completed");
}
- while (j >= mddev->resync_max && !kthread_should_stop()) {
+ while (j >= mddev->resync_max &&
+ !test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
/* As this condition is controlled by user-space,
* we can block indefinitely, so use '_interruptible'
* to avoid triggering warnings.
flush_signals(current); /* just in case */
wait_event_interruptible(mddev->recovery_wait,
mddev->resync_max > j
- || kthread_should_stop());
+ || test_bit(MD_RECOVERY_INTR,
+ &mddev->recovery));
}
- if (kthread_should_stop())
- goto interrupted;
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
+ break;
sectors = mddev->pers->sync_request(mddev, j, &skipped,
currspeed < speed_min(mddev));
if (sectors == 0) {
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
- goto out;
+ break;
}
if (!skipped) { /* actual IO requested */
last_mark = next;
}
-
- if (kthread_should_stop())
- goto interrupted;
-
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
+ break;
/*
* this loop exits only if either when we are slower than
}
}
}
- printk(KERN_INFO "md: %s: %s done.\n",mdname(mddev), desc);
+ printk(KERN_INFO "md: %s: %s %s.\n",mdname(mddev), desc,
+ test_bit(MD_RECOVERY_INTR, &mddev->recovery)
+ ? "interrupted" : "done");
/*
* this also signals 'finished resyncing' to md_stop
*/
- out:
blk_finish_plug(&plug);
wait_event(mddev->recovery_wait, !atomic_read(&mddev->recovery_active));
set_bit(MD_RECOVERY_DONE, &mddev->recovery);
md_wakeup_thread(mddev->thread);
return;
-
- interrupted:
- /*
- * got a signal, exit.
- */
- printk(KERN_INFO
- "md: md_do_sync() got signal ... exiting\n");
- set_bit(MD_RECOVERY_INTR, &mddev->recovery);
- goto out;
-
}
EXPORT_SYMBOL_GPL(md_do_sync);
if (mddev->ro && !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
return;
if ( ! (
- (mddev->flags & ~ (1<<MD_CHANGE_PENDING)) ||
+ (mddev->flags & MD_UPDATE_SB_FLAGS & ~ (1<<MD_CHANGE_PENDING)) ||
test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
test_bit(MD_RECOVERY_DONE, &mddev->recovery) ||
(mddev->external == 0 && mddev->safemode == 1) ||
/* resync has finished, collect result */
md_unregister_thread(&mddev->sync_thread);
+ wake_up(&resync_wait);
if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
/* success...*/
* The shadow op will often be a noop. Only insert if it really
* copied data.
*/
- if (dm_block_location(*block) != b)
+ if (dm_block_location(*block) != b) {
+ /*
+ * dm_tm_shadow_block will have already decremented the old
+ * block, but it is still referenced by the btree. We
+ * increment to stop the insert decrementing it below zero
+ * when overwriting the old value.
+ */
+ dm_tm_inc(info->btree_info.tm, b);
r = insert_ablock(info, index, *block, root);
+ }
return r;
}
}
EXPORT_SYMBOL_GPL(dm_bm_set_read_only);
+void dm_bm_set_read_write(struct dm_block_manager *bm)
+{
+ bm->read_only = false;
+}
+EXPORT_SYMBOL_GPL(dm_bm_set_read_write);
+
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor)
{
return crc32c(~(u32) 0, data, len) ^ init_xor;
int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
struct dm_block *superblock);
- /*
- * Request data be prefetched into the cache.
- */
+/*
+ * Request that data be prefetched into the cache.
+ */
void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b);
/*
* be returned if you do.
*/
void dm_bm_set_read_only(struct dm_block_manager *bm);
+void dm_bm_set_read_write(struct dm_block_manager *bm);
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor);
}
static int sm_ll_mutate(struct ll_disk *ll, dm_block_t b,
- uint32_t (*mutator)(void *context, uint32_t old),
+ int (*mutator)(void *context, uint32_t old, uint32_t *new),
void *context, enum allocation_event *ev)
{
int r;
if (old > 2) {
r = sm_ll_lookup_big_ref_count(ll, b, &old);
- if (r < 0)
+ if (r < 0) {
+ dm_tm_unlock(ll->tm, nb);
return r;
+ }
}
- ref_count = mutator(context, old);
+ r = mutator(context, old, &ref_count);
+ if (r) {
+ dm_tm_unlock(ll->tm, nb);
+ return r;
+ }
if (ref_count <= 2) {
sm_set_bitmap(bm_le, bit, ref_count);
return ll->save_ie(ll, index, &ie_disk);
}
-static uint32_t set_ref_count(void *context, uint32_t old)
+static int set_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return *((uint32_t *) context);
+ *new = *((uint32_t *) context);
+ return 0;
}
int sm_ll_insert(struct ll_disk *ll, dm_block_t b,
return sm_ll_mutate(ll, b, set_ref_count, &ref_count, ev);
}
-static uint32_t inc_ref_count(void *context, uint32_t old)
+static int inc_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return old + 1;
+ *new = old + 1;
+ return 0;
}
int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
return sm_ll_mutate(ll, b, inc_ref_count, NULL, ev);
}
-static uint32_t dec_ref_count(void *context, uint32_t old)
+static int dec_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return old - 1;
+ if (!old) {
+ DMERR_LIMIT("unable to decrement a reference count below 0");
+ return -EINVAL;
+ }
+
+ *new = old - 1;
+ return 0;
}
int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
int r = sm_metadata_new_block_(sm, b);
- if (r)
+ if (r) {
DMERR("unable to allocate new metadata block");
+ return r;
+ }
r = sm_metadata_get_nr_free(sm, &count);
- if (r)
+ if (r) {
DMERR("couldn't get free block count");
+ return r;
+ }
check_threshold(&smm->threshold, count);
*/
static int max_queued_requests = 1024;
-static void allow_barrier(struct r1conf *conf);
+static void allow_barrier(struct r1conf *conf, sector_t start_next_window,
+ sector_t bi_sector);
static void lower_barrier(struct r1conf *conf);
static void * r1bio_pool_alloc(gfp_t gfp_flags, void *data)
}
#define RESYNC_BLOCK_SIZE (64*1024)
-//#define RESYNC_BLOCK_SIZE PAGE_SIZE
+#define RESYNC_DEPTH 32
#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
#define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
-#define RESYNC_WINDOW (2048*1024)
+#define RESYNC_WINDOW (RESYNC_BLOCK_SIZE * RESYNC_DEPTH)
+#define RESYNC_WINDOW_SECTORS (RESYNC_WINDOW >> 9)
+#define NEXT_NORMALIO_DISTANCE (3 * RESYNC_WINDOW_SECTORS)
static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
{
struct bio *bio = r1_bio->master_bio;
int done;
struct r1conf *conf = r1_bio->mddev->private;
+ sector_t start_next_window = r1_bio->start_next_window;
+ sector_t bi_sector = bio->bi_sector;
if (bio->bi_phys_segments) {
unsigned long flags;
bio->bi_phys_segments--;
done = (bio->bi_phys_segments == 0);
spin_unlock_irqrestore(&conf->device_lock, flags);
+ /*
+ * make_request() might be waiting for
+ * bi_phys_segments to decrease
+ */
+ wake_up(&conf->wait_barrier);
} else
done = 1;
* Wake up any possible resync thread that waits for the device
* to go idle.
*/
- allow_barrier(conf);
+ allow_barrier(conf, start_next_window, bi_sector);
}
}
* there is no normal IO happening. It must arrange to call
* lower_barrier when the particular background IO completes.
*/
-#define RESYNC_DEPTH 32
-
static void raise_barrier(struct r1conf *conf)
{
spin_lock_irq(&conf->resync_lock);
/* block any new IO from starting */
conf->barrier++;
- /* Now wait for all pending IO to complete */
+ /* For these conditions we must wait:
+ * A: while the array is in frozen state
+ * B: while barrier >= RESYNC_DEPTH, meaning resync has reached
+ * the maximum depth allowed.
+ * C: while next_resync + RESYNC_SECTORS > start_next_window, meaning
+ * the next resync will reach the window which normal bios are
+ * handling.
+ */
wait_event_lock_irq(conf->wait_barrier,
- !conf->nr_pending && conf->barrier < RESYNC_DEPTH,
+ !conf->array_frozen &&
+ conf->barrier < RESYNC_DEPTH &&
+ (conf->start_next_window >=
+ conf->next_resync + RESYNC_SECTORS),
conf->resync_lock);
spin_unlock_irq(&conf->resync_lock);
wake_up(&conf->wait_barrier);
}
-static void wait_barrier(struct r1conf *conf)
+static bool need_to_wait_for_sync(struct r1conf *conf, struct bio *bio)
{
+ bool wait = false;
+
+ if (conf->array_frozen || !bio)
+ wait = true;
+ else if (conf->barrier && bio_data_dir(bio) == WRITE) {
+ if (conf->next_resync < RESYNC_WINDOW_SECTORS)
+ wait = true;
+ else if ((conf->next_resync - RESYNC_WINDOW_SECTORS
+ >= bio_end_sector(bio)) ||
+ (conf->next_resync + NEXT_NORMALIO_DISTANCE
+ <= bio->bi_sector))
+ wait = false;
+ else
+ wait = true;
+ }
+
+ return wait;
+}
+
+static sector_t wait_barrier(struct r1conf *conf, struct bio *bio)
+{
+ sector_t sector = 0;
+
spin_lock_irq(&conf->resync_lock);
- if (conf->barrier) {
+ if (need_to_wait_for_sync(conf, bio)) {
conf->nr_waiting++;
/* Wait for the barrier to drop.
* However if there are already pending
* count down.
*/
wait_event_lock_irq(conf->wait_barrier,
- !conf->barrier ||
- (conf->nr_pending &&
+ !conf->array_frozen &&
+ (!conf->barrier ||
+ ((conf->start_next_window <
+ conf->next_resync + RESYNC_SECTORS) &&
current->bio_list &&
- !bio_list_empty(current->bio_list)),
+ !bio_list_empty(current->bio_list))),
conf->resync_lock);
conf->nr_waiting--;
}
+
+ if (bio && bio_data_dir(bio) == WRITE) {
+ if (conf->next_resync + NEXT_NORMALIO_DISTANCE
+ <= bio->bi_sector) {
+ if (conf->start_next_window == MaxSector)
+ conf->start_next_window =
+ conf->next_resync +
+ NEXT_NORMALIO_DISTANCE;
+
+ if ((conf->start_next_window + NEXT_NORMALIO_DISTANCE)
+ <= bio->bi_sector)
+ conf->next_window_requests++;
+ else
+ conf->current_window_requests++;
+ }
+ if (bio->bi_sector >= conf->start_next_window)
+ sector = conf->start_next_window;
+ }
+
conf->nr_pending++;
spin_unlock_irq(&conf->resync_lock);
+ return sector;
}
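The window bookkeeping in wait_barrier()/allow_barrier() reduces to a two-bucket counter that slides forward: writes landing at or beyond start_next_window + NEXT_NORMALIO_DISTANCE are charged to the next window, nearer writes to the current one, and when the current bucket drains the window either advances by NEXT_NORMALIO_DISTANCE or collapses back to MaxSector if nothing is queued ahead. A reduced model of that slide, sector arithmetic only and no locking (constants and names are illustrative):

	#include <stdint.h>
	#include <stdio.h>

	#define MaxSector	(~(uint64_t)0)
	#define WINDOW		96	/* stands in for NEXT_NORMALIO_DISTANCE */

	static uint64_t start_next_window = MaxSector;
	static int current_reqs, next_reqs;

	static void io_start(uint64_t sector, uint64_t next_resync)
	{
		if (sector >= next_resync + WINDOW) {
			if (start_next_window == MaxSector)
				start_next_window = next_resync + WINDOW;
			if (sector >= start_next_window + WINDOW)
				next_reqs++;
			else
				current_reqs++;
		}
	}

	static void io_end(uint64_t sector)
	{
		if (sector >= start_next_window + WINDOW)
			next_reqs--;
		else
			current_reqs--;
		/* Current window drained: slide forward, or retire it. */
		if (!current_reqs) {
			if (next_reqs) {
				current_reqs = next_reqs;
				next_reqs = 0;
				start_next_window += WINDOW;
			} else {
				start_next_window = MaxSector;
			}
		}
	}

	int main(void)
	{
		io_start(100, 0);	/* charged to the current window */
		io_start(300, 0);	/* beyond it: charged to the next */
		io_end(100);		/* drains current: window slides */
		printf("start=%llu current=%d next=%d\n",
		       (unsigned long long)start_next_window,
		       current_reqs, next_reqs);
		return 0;
	}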
-static void allow_barrier(struct r1conf *conf)
+static void allow_barrier(struct r1conf *conf, sector_t start_next_window,
+ sector_t bi_sector)
{
unsigned long flags;
+
spin_lock_irqsave(&conf->resync_lock, flags);
conf->nr_pending--;
+ if (start_next_window) {
+ if (start_next_window == conf->start_next_window) {
+ if (conf->start_next_window + NEXT_NORMALIO_DISTANCE
+ <= bi_sector)
+ conf->next_window_requests--;
+ else
+ conf->current_window_requests--;
+ } else
+ conf->current_window_requests--;
+
+ if (!conf->current_window_requests) {
+ if (conf->next_window_requests) {
+ conf->current_window_requests =
+ conf->next_window_requests;
+ conf->next_window_requests = 0;
+ conf->start_next_window +=
+ NEXT_NORMALIO_DISTANCE;
+ } else
+ conf->start_next_window = MaxSector;
+ }
+ }
spin_unlock_irqrestore(&conf->resync_lock, flags);
wake_up(&conf->wait_barrier);
}
{
/* stop syncio and normal IO and wait for everything to
* go quiet.
- * We increment barrier and nr_waiting, and then
- * wait until nr_pending match nr_queued+extra
+ * We wait until nr_pending matches nr_queued+extra.
* This is called in the context of one normal IO request
* that has failed. Thus any sync request that might be pending
* will be blocked by nr_pending, and we need to wait for
* pending IO requests to complete or be queued for re-try.
* Thus the number queued (nr_queued) plus this request (extra)
* must match the number of pending IOs (nr_pending) before
* we continue.
*/
spin_lock_irq(&conf->resync_lock);
- conf->barrier++;
- conf->nr_waiting++;
+ conf->array_frozen = 1;
wait_event_lock_irq_cmd(conf->wait_barrier,
conf->nr_pending == conf->nr_queued+extra,
conf->resync_lock,
{
/* reverse the effect of the freeze */
spin_lock_irq(&conf->resync_lock);
- conf->barrier--;
- conf->nr_waiting--;
+ conf->array_frozen = 0;
wake_up(&conf->wait_barrier);
spin_unlock_irq(&conf->resync_lock);
}
int first_clone;
int sectors_handled;
int max_sectors;
+ sector_t start_next_window;
/*
* Register the new request and wait if the reconstruction
finish_wait(&conf->wait_barrier, &w);
}
- wait_barrier(conf);
+ start_next_window = wait_barrier(conf, bio);
bitmap = mddev->bitmap;
disks = conf->raid_disks * 2;
retry_write:
+ r1_bio->start_next_window = start_next_window;
blocked_rdev = NULL;
rcu_read_lock();
max_sectors = r1_bio->sectors;
if (unlikely(blocked_rdev)) {
/* Wait for this device to become unblocked */
int j;
+ sector_t old = start_next_window;
for (j = 0; j < i; j++)
if (r1_bio->bios[j])
rdev_dec_pending(conf->mirrors[j].rdev, mddev);
r1_bio->state = 0;
- allow_barrier(conf);
+ allow_barrier(conf, start_next_window, bio->bi_sector);
md_wait_for_blocked_rdev(blocked_rdev, mddev);
- wait_barrier(conf);
+ start_next_window = wait_barrier(conf, bio);
+ /*
+ * We must make sure the multi r1bios of bio have
+ * the same value of bi_phys_segments
+ */
+ if (bio->bi_phys_segments && old &&
+ old != start_next_window)
+ /* Wait for the former r1bio(s) to complete */
+ wait_event(conf->wait_barrier,
+ bio->bi_phys_segments == 1);
goto retry_write;
}
static void close_sync(struct r1conf *conf)
{
- wait_barrier(conf);
- allow_barrier(conf);
+ wait_barrier(conf, NULL);
+ allow_barrier(conf, 0, 0);
mempool_destroy(conf->r1buf_pool);
conf->r1buf_pool = NULL;
+
+ conf->next_resync = 0;
+ conf->start_next_window = MaxSector;
}
static int raid1_spare_active(struct mddev *mddev)
conf->pending_count = 0;
conf->recovery_disabled = mddev->recovery_disabled - 1;
+ conf->start_next_window = MaxSector;
+ conf->current_window_requests = conf->next_window_requests = 0;
+
err = -EIO;
for (i = 0; i < conf->raid_disks * 2; i++) {
atomic_read(&bitmap->behind_writes) == 0);
}
- raise_barrier(conf);
- lower_barrier(conf);
+ freeze_array(conf, 0);
+ unfreeze_array(conf);
md_unregister_thread(&mddev->thread);
if (conf->r1bio_pool)
wake_up(&conf->wait_barrier);
break;
case 1:
- raise_barrier(conf);
+ freeze_array(conf, 0);
break;
case 0:
- lower_barrier(conf);
+ unfreeze_array(conf);
break;
}
}
mddev->new_chunk_sectors = 0;
conf = setup_conf(mddev);
if (!IS_ERR(conf))
- conf->barrier = 1;
+ /* Array must appear to be quiesced */
+ conf->array_frozen = 1;
return conf;
}
return ERR_PTR(-EINVAL);
*/
sector_t next_resync;
+ /* When raid1 starts resync, we divide the array into four partitions:
+ * |---------|--------------|---------------------|-------------|
+ *       next_resync   start_next_window       end_window
+ * start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
+ * end_window = start_next_window + NEXT_NORMALIO_DISTANCE
+ * current_window_requests is the count of normal IO between
+ * start_next_window and end_window.
+ * next_window_requests is the count of normal IO after end_window.
+ */
+ sector_t start_next_window;
+ int current_window_requests;
+ int next_window_requests;
+
spinlock_t device_lock;
/* list of 'struct r1bio' that need to be processed by raid1d,
int nr_waiting;
int nr_queued;
int barrier;
+ int array_frozen;
/* Set to 1 if a full sync is needed, (fresh device added).
* Cleared when a sync completes.
* in this BehindIO request
*/
sector_t sector;
+ sector_t start_next_window;
int sectors;
unsigned long state;
struct mddev *mddev;
set_bit(MD_CHANGE_DEVS, &mddev->flags);
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait, mddev->flags == 0 ||
- kthread_should_stop());
+ test_bit(MD_RECOVERY_INTR, &mddev->recovery));
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
+ allow_barrier(conf);
+ return sectors_done;
+ }
conf->reshape_safe = mddev->reshape_position;
allow_barrier(conf);
}
return &conf->stripe_hashtbl[hash];
}
+static inline int stripe_hash_locks_hash(sector_t sect)
+{
+ return (sect >> STRIPE_SHIFT) & STRIPE_HASH_LOCKS_MASK;
+}
+
+static inline void lock_device_hash_lock(struct r5conf *conf, int hash)
+{
+ spin_lock_irq(conf->hash_locks + hash);
+ spin_lock(&conf->device_lock);
+}
+
+static inline void unlock_device_hash_lock(struct r5conf *conf, int hash)
+{
+ spin_unlock(&conf->device_lock);
+ spin_unlock_irq(conf->hash_locks + hash);
+}
+
+static inline void lock_all_device_hash_locks_irq(struct r5conf *conf)
+{
+ int i;
+ local_irq_disable();
+ spin_lock(conf->hash_locks);
+ for (i = 1; i < NR_STRIPE_HASH_LOCKS; i++)
+ spin_lock_nest_lock(conf->hash_locks + i, conf->hash_locks);
+ spin_lock(&conf->device_lock);
+}
+
+static inline void unlock_all_device_hash_locks_irq(struct r5conf *conf)
+{
+ int i;
+ spin_unlock(&conf->device_lock);
+ for (i = NR_STRIPE_HASH_LOCKS; i; i--)
+ spin_unlock(conf->hash_locks + i - 1);
+ local_irq_enable();
+}
+
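These helpers encode a strict hierarchy: a hash lock is always taken before the device lock, and the take-everything variants acquire the hash locks in ascending index order, so no two code paths can ever request the same pair of locks in opposite orders. A reduced user-space model of that discipline, with pthread mutexes standing in for the spinlocks:

	#include <pthread.h>

	#define NR_HASH_LOCKS 8

	static pthread_mutex_t hash_locks[NR_HASH_LOCKS] =
		{ [0 ... NR_HASH_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER };
	static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;

	/* One bucket: its hash lock first, the global device lock second. */
	static void lock_one(int hash)
	{
		pthread_mutex_lock(&hash_locks[hash]);
		pthread_mutex_lock(&device_lock);
	}

	static void unlock_one(int hash)
	{
		pthread_mutex_unlock(&device_lock);
		pthread_mutex_unlock(&hash_locks[hash]);
	}

	/* All buckets: ascending order, device lock strictly last, so this
	 * can never deadlock against lock_one() on any bucket. */
	static void lock_all(void)
	{
		for (int i = 0; i < NR_HASH_LOCKS; i++)
			pthread_mutex_lock(&hash_locks[i]);
		pthread_mutex_lock(&device_lock);
	}

	static void unlock_all(void)
	{
		pthread_mutex_unlock(&device_lock);
		for (int i = NR_HASH_LOCKS - 1; i >= 0; i--)
			pthread_mutex_unlock(&hash_locks[i]);
	}

	int main(void)
	{
		lock_one(3);
		unlock_one(3);
		lock_all();
		unlock_all();
		return 0;
	}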
/* bio's attached to a stripe+device for I/O are linked together in bi_sector
* order without overlap. There may be several bio's per stripe+device, and
* a bio could span several devices.
}
}
-static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh)
+static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
+ struct list_head *temp_inactive_list)
{
BUG_ON(!list_empty(&sh->lru));
BUG_ON(atomic_read(&conf->active_stripes)==0);
< IO_THRESHOLD)
md_wakeup_thread(conf->mddev->thread);
atomic_dec(&conf->active_stripes);
- if (!test_bit(STRIPE_EXPANDING, &sh->state)) {
- list_add_tail(&sh->lru, &conf->inactive_list);
- wake_up(&conf->wait_for_stripe);
- if (conf->retry_read_aligned)
- md_wakeup_thread(conf->mddev->thread);
- }
+ if (!test_bit(STRIPE_EXPANDING, &sh->state))
+ list_add_tail(&sh->lru, temp_inactive_list);
}
}
-static void __release_stripe(struct r5conf *conf, struct stripe_head *sh)
+static void __release_stripe(struct r5conf *conf, struct stripe_head *sh,
+ struct list_head *temp_inactive_list)
{
if (atomic_dec_and_test(&sh->count))
- do_release_stripe(conf, sh);
+ do_release_stripe(conf, sh, temp_inactive_list);
+}
+
+/*
+ * @hash may be NR_STRIPE_HASH_LOCKS, in which case we work on all the
+ * inactive_lists at once.
+ *
+ * Be careful: only one task may add/delete stripes from temp_inactive_list
+ * at any given time. Adding stripes only takes the device lock, while
+ * deleting stripes only takes the hash lock.
+ */
+static void release_inactive_stripe_list(struct r5conf *conf,
+ struct list_head *temp_inactive_list,
+ int hash)
+{
+ int size;
+ bool do_wakeup = false;
+ unsigned long flags;
+
+ if (hash == NR_STRIPE_HASH_LOCKS) {
+ size = NR_STRIPE_HASH_LOCKS;
+ hash = NR_STRIPE_HASH_LOCKS - 1;
+ } else
+ size = 1;
+ while (size) {
+ struct list_head *list = &temp_inactive_list[size - 1];
+
+ /*
+ * We don't hold any lock here yet, so get_active_stripe() might
+ * remove stripes from the list.
+ */
+ if (!list_empty_careful(list)) {
+ spin_lock_irqsave(conf->hash_locks + hash, flags);
+ if (list_empty(conf->inactive_list + hash) &&
+ !list_empty(list))
+ atomic_dec(&conf->empty_inactive_list_nr);
+ list_splice_tail_init(list, conf->inactive_list + hash);
+ do_wakeup = true;
+ spin_unlock_irqrestore(conf->hash_locks + hash, flags);
+ }
+ size--;
+ hash--;
+ }
+
+ if (do_wakeup) {
+ wake_up(&conf->wait_for_stripe);
+ if (conf->retry_read_aligned)
+ md_wakeup_thread(conf->mddev->thread);
+ }
}
/* should hold conf->device_lock already */
-static int release_stripe_list(struct r5conf *conf)
+static int release_stripe_list(struct r5conf *conf,
+ struct list_head *temp_inactive_list)
{
struct stripe_head *sh;
int count = 0;
head = llist_del_all(&conf->released_stripes);
head = llist_reverse_order(head);
while (head) {
+ int hash;
+
sh = llist_entry(head, struct stripe_head, release_list);
head = llist_next(head);
/* sh could be re-added after STRIPE_ON_RELEASE_LIST is cleared */
* again, the count is always > 1. This is true for
* STRIPE_ON_UNPLUG_LIST bit too.
*/
- __release_stripe(conf, sh);
+ hash = sh->hash_lock_index;
+ __release_stripe(conf, sh, &temp_inactive_list[hash]);
count++;
}
{
struct r5conf *conf = sh->raid_conf;
unsigned long flags;
+ struct list_head list;
+ int hash;
bool wakeup;
- if (test_and_set_bit(STRIPE_ON_RELEASE_LIST, &sh->state))
+ if (unlikely(!conf->mddev->thread) ||
+ test_and_set_bit(STRIPE_ON_RELEASE_LIST, &sh->state))
goto slow_path;
wakeup = llist_add(&sh->release_list, &conf->released_stripes);
if (wakeup)
local_irq_save(flags);
/* we are ok here if STRIPE_ON_RELEASE_LIST is set or not */
if (atomic_dec_and_lock(&sh->count, &conf->device_lock)) {
- do_release_stripe(conf, sh);
+ INIT_LIST_HEAD(&list);
+ hash = sh->hash_lock_index;
+ do_release_stripe(conf, sh, &list);
spin_unlock(&conf->device_lock);
+ release_inactive_stripe_list(conf, &list, hash);
}
local_irq_restore(flags);
}
/* find an idle stripe, make sure it is unhashed, and return it. */
-static struct stripe_head *get_free_stripe(struct r5conf *conf)
+static struct stripe_head *get_free_stripe(struct r5conf *conf, int hash)
{
struct stripe_head *sh = NULL;
struct list_head *first;
- if (list_empty(&conf->inactive_list))
+ if (list_empty(conf->inactive_list + hash))
goto out;
- first = conf->inactive_list.next;
+ first = (conf->inactive_list + hash)->next;
sh = list_entry(first, struct stripe_head, lru);
list_del_init(first);
remove_hash(sh);
atomic_inc(&conf->active_stripes);
+ BUG_ON(hash != sh->hash_lock_index);
+ if (list_empty(conf->inactive_list + hash))
+ atomic_inc(&conf->empty_inactive_list_nr);
out:
return sh;
}
static void init_stripe(struct stripe_head *sh, sector_t sector, int previous)
{
struct r5conf *conf = sh->raid_conf;
- int i;
+ int i, seq;
BUG_ON(atomic_read(&sh->count) != 0);
BUG_ON(test_bit(STRIPE_HANDLE, &sh->state));
(unsigned long long)sh->sector);
remove_hash(sh);
-
+retry:
+ seq = read_seqcount_begin(&conf->gen_lock);
sh->generation = conf->generation - previous;
sh->disks = previous ? conf->previous_raid_disks : conf->raid_disks;
sh->sector = sector;
dev->flags = 0;
raid5_build_block(sh, i, previous);
}
+ if (read_seqcount_retry(&conf->gen_lock, seq))
+ goto retry;
insert_hash(conf, sh);
sh->cpu = smp_processor_id();
}
int previous, int noblock, int noquiesce)
{
struct stripe_head *sh;
+ int hash = stripe_hash_locks_hash(sector);
pr_debug("get_stripe, sector %llu\n", (unsigned long long)sector);
- spin_lock_irq(&conf->device_lock);
+ spin_lock_irq(conf->hash_locks + hash);
do {
wait_event_lock_irq(conf->wait_for_stripe,
conf->quiesce == 0 || noquiesce,
- conf->device_lock);
+ *(conf->hash_locks + hash));
sh = __find_stripe(conf, sector, conf->generation - previous);
if (!sh) {
if (!conf->inactive_blocked)
- sh = get_free_stripe(conf);
+ sh = get_free_stripe(conf, hash);
if (noblock && sh == NULL)
break;
if (!sh) {
conf->inactive_blocked = 1;
- wait_event_lock_irq(conf->wait_for_stripe,
- !list_empty(&conf->inactive_list) &&
- (atomic_read(&conf->active_stripes)
- < (conf->max_nr_stripes *3/4)
- || !conf->inactive_blocked),
- conf->device_lock);
+ wait_event_lock_irq(
+ conf->wait_for_stripe,
+ !list_empty(conf->inactive_list + hash) &&
+ (atomic_read(&conf->active_stripes)
+ < (conf->max_nr_stripes * 3 / 4)
+ || !conf->inactive_blocked),
+ *(conf->hash_locks + hash));
conf->inactive_blocked = 0;
} else
init_stripe(sh, sector, previous);
} else {
+ spin_lock(&conf->device_lock);
if (atomic_read(&sh->count)) {
BUG_ON(!list_empty(&sh->lru)
&& !test_bit(STRIPE_EXPANDING, &sh->state)
&& !test_bit(STRIPE_ON_UNPLUG_LIST, &sh->state)
- && !test_bit(STRIPE_ON_RELEASE_LIST, &sh->state));
+ );
} else {
if (!test_bit(STRIPE_HANDLE, &sh->state))
atomic_inc(&conf->active_stripes);
- if (list_empty(&sh->lru) &&
- !test_bit(STRIPE_EXPANDING, &sh->state))
- BUG();
+ BUG_ON(list_empty(&sh->lru));
list_del_init(&sh->lru);
if (sh->group) {
sh->group->stripes_cnt--;
sh->group = NULL;
}
}
+ spin_unlock(&conf->device_lock);
}
} while (sh == NULL);
if (sh)
atomic_inc(&sh->count);
- spin_unlock_irq(&conf->device_lock);
+ spin_unlock_irq(conf->hash_locks + hash);
return sh;
}
bi->bi_sector = (sh->sector
+ rdev->data_offset);
if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags))
- bi->bi_rw |= REQ_FLUSH;
+ bi->bi_rw |= REQ_NOMERGE;
bi->bi_vcnt = 1;
bi->bi_io_vec[0].bv_len = STRIPE_SIZE;
put_cpu();
}
-static int grow_one_stripe(struct r5conf *conf)
+static int grow_one_stripe(struct r5conf *conf, int hash)
{
struct stripe_head *sh;
sh = kmem_cache_zalloc(conf->slab_cache, GFP_KERNEL);
kmem_cache_free(conf->slab_cache, sh);
return 0;
}
+ sh->hash_lock_index = hash;
/* we just created an active stripe so... */
atomic_set(&sh->count, 1);
atomic_inc(&conf->active_stripes);
{
struct kmem_cache *sc;
int devs = max(conf->raid_disks, conf->previous_raid_disks);
+ int hash;
if (conf->mddev->gendisk)
sprintf(conf->cache_name[0],
return 1;
conf->slab_cache = sc;
conf->pool_size = devs;
- while (num--)
- if (!grow_one_stripe(conf))
+ hash = conf->max_nr_stripes % NR_STRIPE_HASH_LOCKS;
+ while (num--) {
+ if (!grow_one_stripe(conf, hash))
return 1;
+ conf->max_nr_stripes++;
+ hash = (hash + 1) % NR_STRIPE_HASH_LOCKS;
+ }
return 0;
}
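Seeding hash from max_nr_stripes % NR_STRIPE_HASH_LOCKS and stepping round-robin keeps the per-bucket stripe counts within one of each other across repeated grows, because each walk resumes exactly where the previous one stopped. A quick stand-alone check of that invariant (constants and chunk sizes are arbitrary):

	#include <stdio.h>

	#define NR_HASH 8

	int main(void)
	{
		int counts[NR_HASH] = { 0 };
		int chunks[] = { 5, 13, 3, 20 };
		int total = 0;

		/* Each grow resumes at total % NR_HASH, mirroring how
		 * grow_stripes() seeds hash from max_nr_stripes. */
		for (int c = 0; c < 4; c++) {
			int hash = total % NR_HASH;

			for (int i = 0; i < chunks[c]; i++) {
				counts[hash]++;
				total++;
				hash = (hash + 1) % NR_HASH;
			}
		}

		int min = counts[0], max = counts[0];
		for (int i = 1; i < NR_HASH; i++) {
			if (counts[i] < min) min = counts[i];
			if (counts[i] > max) max = counts[i];
		}
		printf("total=%d min=%d max=%d\n", total, min, max);
		return 0;
	}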
int err;
struct kmem_cache *sc;
int i;
+ int hash, cnt;
if (newsize <= conf->pool_size)
return 0; /* never bother to shrink */
* OK, we have enough stripes, start collecting inactive
* stripes and copying them over
*/
+ hash = 0;
+ cnt = 0;
list_for_each_entry(nsh, &newstripes, lru) {
- spin_lock_irq(&conf->device_lock);
- wait_event_lock_irq(conf->wait_for_stripe,
- !list_empty(&conf->inactive_list),
- conf->device_lock);
- osh = get_free_stripe(conf);
- spin_unlock_irq(&conf->device_lock);
+ lock_device_hash_lock(conf, hash);
+ wait_event_cmd(conf->wait_for_stripe,
+ !list_empty(conf->inactive_list + hash),
+ unlock_device_hash_lock(conf, hash),
+ lock_device_hash_lock(conf, hash));
+ osh = get_free_stripe(conf, hash);
+ unlock_device_hash_lock(conf, hash);
atomic_set(&nsh->count, 1);
for(i=0; i<conf->pool_size; i++)
nsh->dev[i].page = osh->dev[i].page;
for( ; i<newsize; i++)
nsh->dev[i].page = NULL;
+ nsh->hash_lock_index = hash;
kmem_cache_free(conf->slab_cache, osh);
+ cnt++;
+ if (cnt >= conf->max_nr_stripes / NR_STRIPE_HASH_LOCKS +
+ !!((conf->max_nr_stripes % NR_STRIPE_HASH_LOCKS) > hash)) {
+ hash++;
+ cnt = 0;
+ }
}
kmem_cache_destroy(conf->slab_cache);
return err;
}
-static int drop_one_stripe(struct r5conf *conf)
+static int drop_one_stripe(struct r5conf *conf, int hash)
{
struct stripe_head *sh;
- spin_lock_irq(&conf->device_lock);
- sh = get_free_stripe(conf);
- spin_unlock_irq(&conf->device_lock);
+ spin_lock_irq(conf->hash_locks + hash);
+ sh = get_free_stripe(conf, hash);
+ spin_unlock_irq(conf->hash_locks + hash);
if (!sh)
return 0;
BUG_ON(atomic_read(&sh->count));
static void shrink_stripes(struct r5conf *conf)
{
- while (drop_one_stripe(conf))
- ;
+ int hash;
+ for (hash = 0; hash < NR_STRIPE_HASH_LOCKS; hash++)
+ while (drop_one_stripe(conf, hash))
+ ;
if (conf->slab_cache)
kmem_cache_destroy(conf->slab_cache);
mdname(conf->mddev), bdn);
else
retry = 1;
+ if (set_bad && test_bit(In_sync, &rdev->flags)
+ && !test_bit(R5_ReadNoMerge, &sh->dev[i].flags))
+ retry = 1;
if (retry)
if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) {
set_bit(R5_ReadError, &sh->dev[i].flags);
}
}
-static void activate_bit_delay(struct r5conf *conf)
+static void activate_bit_delay(struct r5conf *conf,
+ struct list_head *temp_inactive_list)
{
/* device_lock is held */
struct list_head head;
list_del_init(&conf->bitmap_list);
while (!list_empty(&head)) {
struct stripe_head *sh = list_entry(head.next, struct stripe_head, lru);
+ int hash;
list_del_init(&sh->lru);
atomic_inc(&sh->count);
- __release_stripe(conf, sh);
+ hash = sh->hash_lock_index;
+ __release_stripe(conf, sh, &temp_inactive_list[hash]);
}
}
return 1;
if (conf->quiesce)
return 1;
- if (list_empty_careful(&conf->inactive_list))
+ if (atomic_read(&conf->empty_inactive_list_nr))
return 1;
return 0;
struct raid5_plug_cb {
struct blk_plug_cb cb;
struct list_head list;
+ struct list_head temp_inactive_list[NR_STRIPE_HASH_LOCKS];
};
static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
struct mddev *mddev = cb->cb.data;
struct r5conf *conf = mddev->private;
int cnt = 0;
+ int hash;
if (cb->list.next && !list_empty(&cb->list)) {
spin_lock_irq(&conf->device_lock);
* STRIPE_ON_RELEASE_LIST could be set here. In that
* case, the count is always > 1 here
*/
- __release_stripe(conf, sh);
+ hash = sh->hash_lock_index;
+ __release_stripe(conf, sh, &cb->temp_inactive_list[hash]);
cnt++;
}
spin_unlock_irq(&conf->device_lock);
}
+ release_inactive_stripe_list(conf, cb->temp_inactive_list,
+ NR_STRIPE_HASH_LOCKS);
if (mddev->queue)
trace_block_unplug(mddev->queue, cnt, !from_schedule);
kfree(cb);
cb = container_of(blk_cb, struct raid5_plug_cb, cb);
- if (cb->list.next == NULL)
+ if (cb->list.next == NULL) {
+ int i;
INIT_LIST_HEAD(&cb->list);
+ for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++)
+ INIT_LIST_HEAD(cb->temp_inactive_list + i);
+ }
if (!test_and_set_bit(STRIPE_ON_UNPLUG_LIST, &sh->state))
list_add_tail(&sh->lru, &cb->list);
time_after(jiffies, conf->reshape_checkpoint + 10*HZ)) {
/* Cannot proceed until we've updated the superblock... */
wait_event(conf->wait_for_overlap,
- atomic_read(&conf->reshape_stripes)==0);
+ atomic_read(&conf->reshape_stripes)==0
+ || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
+ if (atomic_read(&conf->reshape_stripes) != 0)
+ return 0;
mddev->reshape_position = conf->reshape_progress;
mddev->curr_resync_completed = sector_nr;
conf->reshape_checkpoint = jiffies;
set_bit(MD_CHANGE_DEVS, &mddev->flags);
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait, mddev->flags == 0 ||
- kthread_should_stop());
+ test_bit(MD_RECOVERY_INTR, &mddev->recovery));
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
+ return 0;
spin_lock_irq(&conf->device_lock);
conf->reshape_safe = mddev->reshape_position;
spin_unlock_irq(&conf->device_lock);
>= mddev->resync_max - mddev->curr_resync_completed) {
/* Cannot proceed until we've updated the superblock... */
wait_event(conf->wait_for_overlap,
- atomic_read(&conf->reshape_stripes) == 0);
+ atomic_read(&conf->reshape_stripes) == 0
+ || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
+ if (atomic_read(&conf->reshape_stripes) != 0)
+ goto ret;
mddev->reshape_position = conf->reshape_progress;
mddev->curr_resync_completed = sector_nr;
conf->reshape_checkpoint = jiffies;
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait,
!test_bit(MD_CHANGE_DEVS, &mddev->flags)
- || kthread_should_stop());
+ || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
+ goto ret;
spin_lock_irq(&conf->device_lock);
conf->reshape_safe = mddev->reshape_position;
spin_unlock_irq(&conf->device_lock);
wake_up(&conf->wait_for_overlap);
sysfs_notify(&mddev->kobj, NULL, "sync_completed");
}
+ret:
return reshape_sectors;
}
}
static int handle_active_stripes(struct r5conf *conf, int group,
- struct r5worker *worker)
+ struct r5worker *worker,
+ struct list_head *temp_inactive_list)
{
struct stripe_head *batch[MAX_STRIPE_BATCH], *sh;
- int i, batch_size = 0;
+ int i, batch_size = 0, hash;
+ bool release_inactive = false;
while (batch_size < MAX_STRIPE_BATCH &&
(sh = __get_priority_stripe(conf, group)) != NULL)
batch[batch_size++] = sh;
- if (batch_size == 0)
- return batch_size;
+ if (batch_size == 0) {
+ for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++)
+ if (!list_empty(temp_inactive_list + i))
+ break;
+ if (i == NR_STRIPE_HASH_LOCKS)
+ return batch_size;
+ release_inactive = true;
+ }
spin_unlock_irq(&conf->device_lock);
+ release_inactive_stripe_list(conf, temp_inactive_list,
+ NR_STRIPE_HASH_LOCKS);
+
+ if (release_inactive) {
+ spin_lock_irq(&conf->device_lock);
+ return 0;
+ }
+
for (i = 0; i < batch_size; i++)
handle_stripe(batch[i]);
cond_resched();
spin_lock_irq(&conf->device_lock);
- for (i = 0; i < batch_size; i++)
- __release_stripe(conf, batch[i]);
+ for (i = 0; i < batch_size; i++) {
+ hash = batch[i]->hash_lock_index;
+ __release_stripe(conf, batch[i], &temp_inactive_list[hash]);
+ }
return batch_size;
}
while (1) {
int batch_size, released;
- released = release_stripe_list(conf);
+ released = release_stripe_list(conf, worker->temp_inactive_list);
- batch_size = handle_active_stripes(conf, group_id, worker);
+ batch_size = handle_active_stripes(conf, group_id, worker,
+ worker->temp_inactive_list);
worker->working = false;
if (!batch_size && !released)
break;
struct bio *bio;
int batch_size, released;
- released = release_stripe_list(conf);
+ released = release_stripe_list(conf, conf->temp_inactive_list);
if (
!list_empty(&conf->bitmap_list)) {
bitmap_unplug(mddev->bitmap);
spin_lock_irq(&conf->device_lock);
conf->seq_write = conf->seq_flush;
- activate_bit_delay(conf);
+ activate_bit_delay(conf, conf->temp_inactive_list);
}
raid5_activate_delayed(conf);
handled++;
}
- batch_size = handle_active_stripes(conf, ANY_GROUP, NULL);
+ batch_size = handle_active_stripes(conf, ANY_GROUP, NULL,
+ conf->temp_inactive_list);
if (!batch_size && !released)
break;
handled += batch_size;
{
struct r5conf *conf = mddev->private;
int err;
+ int hash;
if (size <= 16 || size > 32768)
return -EINVAL;
+ hash = (conf->max_nr_stripes - 1) % NR_STRIPE_HASH_LOCKS;
while (size < conf->max_nr_stripes) {
- if (drop_one_stripe(conf))
+ if (drop_one_stripe(conf, hash))
conf->max_nr_stripes--;
else
break;
+ hash--;
+ if (hash < 0)
+ hash = NR_STRIPE_HASH_LOCKS - 1;
}
err = md_allow_write(mddev);
if (err)
return err;
+ hash = conf->max_nr_stripes % NR_STRIPE_HASH_LOCKS;
while (size > conf->max_nr_stripes) {
- if (grow_one_stripe(conf))
+ if (grow_one_stripe(conf, hash))
conf->max_nr_stripes++;
else break;
+ hash = (hash + 1) % NR_STRIPE_HASH_LOCKS;
}
return 0;
}
return 0;
}
-static int alloc_thread_groups(struct r5conf *conf, int cnt);
+static int alloc_thread_groups(struct r5conf *conf, int cnt,
+ int *group_cnt,
+ int *worker_cnt_per_group,
+ struct r5worker_group **worker_groups);
static ssize_t
raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
{
struct r5conf *conf = mddev->private;
unsigned long new;
int err;
- struct r5worker_group *old_groups;
- int old_group_cnt;
+ struct r5worker_group *new_groups, *old_groups;
+ int group_cnt, worker_cnt_per_group;
if (len >= PAGE_SIZE)
return -EINVAL;
mddev_suspend(mddev);
old_groups = conf->worker_groups;
- old_group_cnt = conf->worker_cnt_per_group;
+ if (old_groups)
+ flush_workqueue(raid5_wq);
+
+ err = alloc_thread_groups(conf, new,
+ &group_cnt, &worker_cnt_per_group,
+ &new_groups);
+ if (!err) {
+ spin_lock_irq(&conf->device_lock);
+ conf->group_cnt = group_cnt;
+ conf->worker_cnt_per_group = worker_cnt_per_group;
+ conf->worker_groups = new_groups;
+ spin_unlock_irq(&conf->device_lock);
- conf->worker_groups = NULL;
- err = alloc_thread_groups(conf, new);
- if (err) {
- conf->worker_groups = old_groups;
- conf->worker_cnt_per_group = old_group_cnt;
- } else {
if (old_groups)
kfree(old_groups[0].workers);
kfree(old_groups);
.attrs = raid5_attrs,
};
-static int alloc_thread_groups(struct r5conf *conf, int cnt)
+static int alloc_thread_groups(struct r5conf *conf, int cnt,
+ int *group_cnt,
+ int *worker_cnt_per_group,
+ struct r5worker_group **worker_groups)
{
- int i, j;
+ int i, j, k;
ssize_t size;
struct r5worker *workers;
- conf->worker_cnt_per_group = cnt;
+ *worker_cnt_per_group = cnt;
if (cnt == 0) {
- conf->worker_groups = NULL;
+ *group_cnt = 0;
+ *worker_groups = NULL;
return 0;
}
- conf->group_cnt = num_possible_nodes();
+ *group_cnt = num_possible_nodes();
size = sizeof(struct r5worker) * cnt;
- workers = kzalloc(size * conf->group_cnt, GFP_NOIO);
- conf->worker_groups = kzalloc(sizeof(struct r5worker_group) *
- conf->group_cnt, GFP_NOIO);
- if (!conf->worker_groups || !workers) {
+ workers = kzalloc(size * *group_cnt, GFP_NOIO);
+ *worker_groups = kzalloc(sizeof(struct r5worker_group) *
+ *group_cnt, GFP_NOIO);
+ if (!*worker_groups || !workers) {
kfree(workers);
- kfree(conf->worker_groups);
- conf->worker_groups = NULL;
+ kfree(*worker_groups);
return -ENOMEM;
}
- for (i = 0; i < conf->group_cnt; i++) {
+ for (i = 0; i < *group_cnt; i++) {
struct r5worker_group *group;
- group = &conf->worker_groups[i];
+ group = &(*worker_groups)[i];
INIT_LIST_HEAD(&group->handle_list);
group->conf = conf;
group->workers = workers + i * cnt;
for (j = 0; j < cnt; j++) {
- group->workers[j].group = group;
- INIT_WORK(&group->workers[j].work, raid5_do_work);
+ struct r5worker *worker = group->workers + j;
+ worker->group = group;
+ INIT_WORK(&worker->work, raid5_do_work);
+
+ for (k = 0; k < NR_STRIPE_HASH_LOCKS; k++)
+ INIT_LIST_HEAD(worker->temp_inactive_list + k);
}
}
struct md_rdev *rdev;
struct disk_info *disk;
char pers_name[6];
+ int i;
+ int group_cnt, worker_cnt_per_group;
+ struct r5worker_group *new_group;
if (mddev->new_level != 5
&& mddev->new_level != 4
if (conf == NULL)
goto abort;
/* Don't enable multi-threading by default */
- if (alloc_thread_groups(conf, 0))
+ if (!alloc_thread_groups(conf, 0, &group_cnt, &worker_cnt_per_group,
+ &new_group)) {
+ conf->group_cnt = group_cnt;
+ conf->worker_cnt_per_group = worker_cnt_per_group;
+ conf->worker_groups = new_group;
+ } else
goto abort;
spin_lock_init(&conf->device_lock);
seqcount_init(&conf->gen_lock);
INIT_LIST_HEAD(&conf->hold_list);
INIT_LIST_HEAD(&conf->delayed_list);
INIT_LIST_HEAD(&conf->bitmap_list);
- INIT_LIST_HEAD(&conf->inactive_list);
init_llist_head(&conf->released_stripes);
atomic_set(&conf->active_stripes, 0);
atomic_set(&conf->preread_active_stripes, 0);
if ((conf->stripe_hashtbl = kzalloc(PAGE_SIZE, GFP_KERNEL)) == NULL)
goto abort;
+	/* We init hash_locks[0] separately so that it can be used
+ * as the reference lock in the spin_lock_nest_lock() call
+ * in lock_all_device_hash_locks_irq in order to convince
+ * lockdep that we know what we are doing.
+ */
+ spin_lock_init(conf->hash_locks);
+ for (i = 1; i < NR_STRIPE_HASH_LOCKS; i++)
+ spin_lock_init(conf->hash_locks + i);
+
+ for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++)
+ INIT_LIST_HEAD(conf->inactive_list + i);
+
+ for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++)
+ INIT_LIST_HEAD(conf->temp_inactive_list + i);
+
conf->level = mddev->new_level;
if (raid5_alloc_percpu(conf) != 0)
goto abort;
else
conf->max_degraded = 1;
conf->algorithm = mddev->new_layout;
- conf->max_nr_stripes = NR_STRIPES;
conf->reshape_progress = mddev->reshape_position;
if (conf->reshape_progress != MaxSector) {
conf->prev_chunk_sectors = mddev->chunk_sectors;
memory = conf->max_nr_stripes * (sizeof(struct stripe_head) +
max_disks * ((sizeof(struct bio) + PAGE_SIZE))) / 1024;
- if (grow_stripes(conf, conf->max_nr_stripes)) {
+ atomic_set(&conf->empty_inactive_list_nr, NR_STRIPE_HASH_LOCKS);
+ if (grow_stripes(conf, NR_STRIPES)) {
printk(KERN_ERR
"md/raid:%s: couldn't allocate %dkB for buffers\n",
mdname(mddev), memory);
if (!mddev->sync_thread) {
mddev->recovery = 0;
spin_lock_irq(&conf->device_lock);
+ write_seqcount_begin(&conf->gen_lock);
mddev->raid_disks = conf->raid_disks = conf->previous_raid_disks;
+ mddev->new_chunk_sectors =
+ conf->chunk_sectors = conf->prev_chunk_sectors;
+ mddev->new_layout = conf->algorithm = conf->prev_algo;
rdev_for_each(rdev, mddev)
rdev->new_data_offset = rdev->data_offset;
smp_wmb();
+ conf->generation --;
conf->reshape_progress = MaxSector;
mddev->reshape_position = MaxSector;
+ write_seqcount_end(&conf->gen_lock);
spin_unlock_irq(&conf->device_lock);
return -EAGAIN;
}
break;
case 1: /* stop all writes */
- spin_lock_irq(&conf->device_lock);
+ lock_all_device_hash_locks_irq(conf);
/* '2' tells resync/reshape to pause so that all
* active stripes can drain
*/
conf->quiesce = 2;
- wait_event_lock_irq(conf->wait_for_stripe,
+ wait_event_cmd(conf->wait_for_stripe,
atomic_read(&conf->active_stripes) == 0 &&
atomic_read(&conf->active_aligned_reads) == 0,
- conf->device_lock);
+ unlock_all_device_hash_locks_irq(conf),
+ lock_all_device_hash_locks_irq(conf));
conf->quiesce = 1;
- spin_unlock_irq(&conf->device_lock);
+ unlock_all_device_hash_locks_irq(conf);
/* allow reshape to continue */
wake_up(&conf->wait_for_overlap);
break;
case 0: /* re-enable writes */
- spin_lock_irq(&conf->device_lock);
+ lock_all_device_hash_locks_irq(conf);
conf->quiesce = 0;
wake_up(&conf->wait_for_stripe);
wake_up(&conf->wait_for_overlap);
- spin_unlock_irq(&conf->device_lock);
+ unlock_all_device_hash_locks_irq(conf);
break;
}
}
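/*
 * wait_event_cmd(wq, cond, cmd1, cmd2) runs cmd1 before every sleep and
 * cmd2 right after waking, which is what lets the quiesce path above
 * drop all of the hash locks while it waits, instead of holding a single
 * device_lock across the schedule(). Minimal sketch with an ordinary
 * spinlock (hypothetical names, needs <linux/wait.h>):
 */
static int example_done;
static DEFINE_SPINLOCK(example_lock);
static DECLARE_WAIT_QUEUE_HEAD(example_wq);

static void example_wait_under_lock(void)
{
	spin_lock_irq(&example_lock);
	wait_event_cmd(example_wq, example_done,
		       spin_unlock_irq(&example_lock),	/* before sleeping */
		       spin_lock_irq(&example_lock));	/* after waking */
	spin_unlock_irq(&example_lock);
}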
short pd_idx; /* parity disk index */
short qd_idx; /* 'Q' disk index for raid6 */
short ddf_layout;/* use DDF ordering to calculate Q */
+ short hash_lock_index;
unsigned long state; /* state flags */
atomic_t count; /* nr of active thread/requests */
int bm_seq; /* sequence number for bitmap flushes */
struct md_rdev *rdev, *replacement;
};
+/* NOTE NR_STRIPE_HASH_LOCKS must remain below 64.
+ * This is because we sometimes take all the spinlocks
+ * and creating that much locking depth can cause
+ * problems.
+ */
+#define NR_STRIPE_HASH_LOCKS 8
+#define STRIPE_HASH_LOCKS_MASK (NR_STRIPE_HASH_LOCKS - 1)
+
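/*
 * With NR_STRIPE_HASH_LOCKS a power of two, a stripe's lock index can be
 * derived by masking, so every hash chain and inactive_list bucket maps
 * to exactly one entry of hash_locks. Minimal sketch (the in-tree helper
 * may differ in how it reduces the sector):
 */
static inline int example_stripe_hash_locks_hash(sector_t sect)
{
	return (int)(sect & STRIPE_HASH_LOCKS_MASK);
}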
struct r5worker {
struct work_struct work;
struct r5worker_group *group;
+ struct list_head temp_inactive_list[NR_STRIPE_HASH_LOCKS];
bool working;
};
struct r5conf {
struct hlist_head *stripe_hashtbl;
+	/* each lock protects only the corresponding hash list and inactive_list */
+ spinlock_t hash_locks[NR_STRIPE_HASH_LOCKS];
struct mddev *mddev;
int chunk_sectors;
int level, algorithm;
* Free stripes pool
*/
atomic_t active_stripes;
- struct list_head inactive_list;
+ struct list_head inactive_list[NR_STRIPE_HASH_LOCKS];
+ atomic_t empty_inactive_list_nr;
struct llist_head released_stripes;
wait_queue_head_t wait_for_stripe;
wait_queue_head_t wait_for_overlap;
* the new thread here until we fully activate the array.
*/
struct md_thread *thread;
+ struct list_head temp_inactive_list[NR_STRIPE_HASH_LOCKS];
struct r5worker_group *worker_groups;
int group_cnt;
int worker_cnt_per_group;
u32 modem_state; /* from SMSHOSTLIB_DVB_MODEM_STATE_ET */
s32 SNR; /* dB */
u32 ber; /* Post Viterbi ber [1E-5] */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
u32 ber_bit_count; /* Total number of SYNC bits. */
u32 ts_per; /* Transport stream PER,
0xFFFFFFFF indicates N/A */
u32 modem_state; /* from SMSHOSTLIB_DVB_MODEM_STATE_ET */
s32 SNR; /* dB */
u32 ber; /* Post Viterbi ber [1E-5] */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
u32 ber_bit_count; /* Total number of SYNC bits. */
u32 ts_per; /* Transport stream PER,
0xFFFFFFFF indicates N/A */
u32 is_demod_locked; /* 0 - not locked, 1 - locked */
u32 ber_bit_count; /* Total number of SYNC bits. */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
s32 MRC_SNR; /* dB */
s32 mrc_in_band_pwr; /* In band power in dBM */
dprintk_tscheck("TEI detected. "
"PID=0x%x data1=0x%x\n",
pid, buf[1]);
- /* data in this packet cant be trusted - drop it unless
+ /* data in this packet can't be trusted - drop it unless
* module option dvb_demux_feed_err_pkts is set */
if (!dvb_demux_feed_err_pkts)
return;
return -EINVAL;
}
- if (feed->is_filtering)
+ if (feed->is_filtering) {
+		/* release dvbdmx->mutex, since it is
+		   acquired by stop_filtering() itself */
+ mutex_unlock(&dvbdmx->mutex);
feed->stop_filtering(feed);
+ mutex_lock(&dvbdmx->mutex);
+ }
spin_lock_irq(&dvbdmx->lock);
f = dvbdmxfeed->filter;
static int af9033_wr_reg_val_tab(struct af9033_state *state,
const struct reg_val *tab, int tab_len)
{
+#define MAX_TAB_LEN 212
int ret, i, j;
- u8 buf[MAX_XFER_SIZE];
+ u8 buf[1 + MAX_TAB_LEN];
+
+ dev_dbg(&state->i2c->dev, "%s: tab_len=%d\n", __func__, tab_len);
if (tab_len > sizeof(buf)) {
- dev_warn(&state->i2c->dev,
- "%s: i2c wr len=%d is too big!\n",
- KBUILD_MODNAME, tab_len);
+ dev_warn(&state->i2c->dev, "%s: tab len %d is too big\n",
+ KBUILD_MODNAME, tab_len);
return -EINVAL;
}
- dev_dbg(&state->i2c->dev, "%s: tab_len=%d\n", __func__, tab_len);
-
for (i = 0, j = 0; i < tab_len; i++) {
buf[j] = tab[i].val;
num = if_freq / 1000; /* Hz => kHz */
num *= 0x4000;
- if_ctl = cxd2820r_div_u64_round_closest(num, 41000);
+ if_ctl = 0x4000 - cxd2820r_div_u64_round_closest(num, 41000);
buf[0] = (if_ctl >> 8) & 0x3f;
buf[1] = (if_ctl >> 0) & 0xff;
dib8000_set_diversity_in(state->fe[0], state->diversity_onoff);
locks = (dib8000_read_word(state, 180) >> 6) & 0x3f; /* P_coff_winlen ? */
- /* coff should lock over P_coff_winlen ofdm symbols : give 3 times this lenght to lock */
+ /* coff should lock over P_coff_winlen ofdm symbols : give 3 times this length to lock */
*timeout = dib8000_get_timeout(state, 2 * locks, SYMBOL_DEPENDENT_ON);
*tune_state = CT_DEMOD_STEP_5;
break;
case CT_DEMOD_STEP_9: /* 39 */
if ((state->revision == 0x8090) || ((dib8000_read_word(state, 1291) >> 9) & 0x1)) { /* fe capable of deinterleaving : esram */
- /* defines timeout for mpeg lock depending on interleaver lenght of longest layer */
+ /* defines timeout for mpeg lock depending on interleaver length of longest layer */
for (i = 0; i < 3; i++) {
if (c->layer[i].interleaving >= deeper_interleaver) {
dprintk("layer%i: time interleaver = %d ", i, c->layer[i].interleaving);
goto error;
if (state->m_enable_parallel == true) {
- /* paralel -> enable MD1 to MD7 */
+ /* parallel -> enable MD1 to MD7 */
status = write16(state, SIO_PDR_MD1_CFG__A,
sio_pdr_mdx_cfg);
if (status < 0)
dprintk(1, "\n");
- /* Gracefull shutdown (byte boundaries) */
+ /* Graceful shutdown (byte boundaries) */
status = read16(state, FEC_OC_SNC_MODE__A, &fec_oc_snc_mode);
if (status < 0)
goto error;
fec_oc_dto_burst_len = 204;
}
- /* Check serial or parrallel output */
+ /* Check serial or parallel output */
fec_oc_reg_ipr_mode &= (~(FEC_OC_IPR_MODE_SERIAL__M));
if (state->m_enable_parallel == false) {
/* MPEG data output is serial -> set ipr_mode[0] */
goto error;
if (count == 1) {
- /* Try sampling on a diffrent edge */
+ /* Try sampling on a different edge */
u16 clk_neg = 0;
status = read16(state, IQM_AF_CLKNEG__A, &clk_neg);
if (status < 0)
goto error;
- /* Retreive results parameters from SC */
+ /* Retrieve results parameters from SC */
switch (cmd) {
/* All commands yielding 5 results */
/* All commands yielding 4 results */
break;
}
#if 0
- /* No hierachical channels support in BDA */
+ /* No hierarchical channels support in BDA */
/* Priority (only for hierarchical channels) */
switch (channel->priority) {
case DRX_PRIORITY_LOW:
/*============================================================================*/
/**
-* \brief Retreive lock status .
+* \brief Retrieve lock status.
* \param demod Pointer to demodulator instance.
* \param lockStat Pointer to lock status structure.
* \return DRXStatus_t.
goto error;
/* Stamp driver version number in SCU data RAM in BCD code
- Done to enable field application engineers to retreive drxdriver version
+ Done to enable field application engineers to retrieve drxdriver version
via I2C from SCU RAM.
Not using SCU command interface for SCU register access since no
microcode may be present.
fe->ops.tuner_ops.get_if_frequency(fe, &IF);
start(state, 0, IF);
- /* After set_frontend, stats aren't avaliable */
+ /* After set_frontend, stats aren't available */
p->strength.stat[0].scale = FE_SCALE_RELATIVE;
p->cnr.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
p->block_error.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
sizeof(priv->tuner_i2c_adapter.name));
priv->tuner_i2c_adapter.algo = &rtl2830_tuner_i2c_algo;
priv->tuner_i2c_adapter.algo_data = NULL;
+ priv->tuner_i2c_adapter.dev.parent = &i2c->dev;
i2c_set_adapdata(&priv->tuner_i2c_adapter, priv);
if (i2c_add_adapter(&priv->tuner_i2c_adapter) < 0) {
dev_err(&i2c->dev,
#define ADV7183_VS_FIELD_CTRL_1 0x31 /* Vsync field control 1 */
#define ADV7183_VS_FIELD_CTRL_2 0x32 /* Vsync field control 2 */
#define ADV7183_VS_FIELD_CTRL_3 0x33 /* Vsync field control 3 */
-#define ADV7183_HS_POS_CTRL_1 0x34 /* Hsync positon control 1 */
-#define ADV7183_HS_POS_CTRL_2 0x35 /* Hsync positon control 2 */
-#define ADV7183_HS_POS_CTRL_3 0x36 /* Hsync positon control 3 */
+#define ADV7183_HS_POS_CTRL_1 0x34 /* Hsync position control 1 */
+#define ADV7183_HS_POS_CTRL_2 0x35 /* Hsync position control 2 */
+#define ADV7183_HS_POS_CTRL_3 0x36 /* Hsync position control 3 */
#define ADV7183_POLARITY 0x37 /* Polarity */
#define ADV7183_NTSC_COMB_CTRL 0x38 /* NTSC comb control */
#define ADV7183_PAL_COMB_CTRL 0x39 /* PAL comb control */
break;
case ADV7604_MODE_HDMI:
/* set default prim_mode/vid_std for HDMI
- accoring to [REF_03, c. 4.2] */
+ according to [REF_03, c. 4.2] */
io_write(sd, 0x00, 0x02); /* video std */
io_write(sd, 0x01, 0x06); /* prim mode */
break;
break;
case ADV7842_MODE_HDMI:
/* set default prim_mode/vid_std for HDMI
- accoring to [REF_03, c. 4.2] */
+ according to [REF_03, c. 4.2] */
io_write(sd, 0x00, 0x02); /* video std */
io_write(sd, 0x01, 0x06); /* prim mode */
break;
if (!rc) {
/*
- * If platform_data doesn't specify rc_dev, initilize it
+ * If platform_data doesn't specify rc_dev, initialize it
* internally
*/
rc = rc_allocate_device();
u16 zoom_step;
int ret;
- /* Determine the firmware dependant control range and step values */
+ /* Determine the firmware dependent control range and step values */
ret = m5mols_read_u16(sd, AE_MAX_GAIN_MON, &exposure_max);
if (ret < 0)
return ret;
#include <linux/i2c.h>
#include <linux/log2.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pm.h>
#include <linux/regulator/consumer.h>
mutex_unlock(&state->lock);
v4l2_dbg(1, s5c73m3_dbg, sd, "%s: Booting %s (%d)\n",
- __func__, ret ? "failed" : "succeded", ret);
+ __func__, ret ? "failed" : "succeeded", ret);
return ret;
}
/* External master clock frequency */
u32 mclk_frequency;
- /* Video bus type - MIPI-CSI2/paralell */
+ /* Video bus type - MIPI-CSI2/parallel */
enum v4l2_mbus_type bus_type;
const struct s5c73m3_frame_size *sensor_pix_size[2];
* the analog demod.
* If the tuner is not found, it returns -ENODEV.
* If auto-detection is disabled and the tuner doesn't match what it was
- * requred, it returns -EINVAL and fills 'name'.
+ * required, it returns -EINVAL and fills 'name'.
* If the chip is found, it returns the chip ID and fills 'name'.
*/
static int saa711x_detect_chip(struct i2c_client *client,
static int reg_read(struct i2c_client *client, u16 reg, u8 *val)
{
int ret;
- /* We have 16-bit i2c addresses - care for endianess */
+ /* We have 16-bit i2c addresses - care for endianness */
unsigned char data[2] = { reg >> 8, reg & 0xff };
ret = i2c_master_send(client, data, 2);
}
/* following function is used to set ths7303 */
-int ths7303_setval(struct v4l2_subdev *sd, enum ths7303_filter_mode mode)
+static int ths7303_setval(struct v4l2_subdev *sd,
+ enum ths7303_filter_mode mode)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
struct ths7303_state *state = to_state(sd);
return -EINVAL;
}
state->input = input;
- if (!v4l2_ctrl_g_ctrl(state->mute))
+ if (v4l2_ctrl_g_ctrl(state->mute))
return 0;
if (!v4l2_ctrl_g_ctrl(state->vol))
return 0;
- if (!v4l2_ctrl_g_ctrl(state->bal))
- return 0;
wm8775_set_audio(sd, 1);
return 0;
}
}
btv->std = V4L2_STD_PAL;
init_irqreg(btv);
- v4l2_ctrl_handler_setup(hdl);
+ if (!bttv_tvcards[btv->c.type].no_video)
+ v4l2_ctrl_handler_setup(hdl);
if (hdl->error) {
result = hdl->error;
goto fail2;
};
/* per-mdl bit flags */
-#define CX18_F_M_NEED_SWAP 0 /* mdl buffer data must be endianess swapped */
+#define CX18_F_M_NEED_SWAP 0 /* mdl buffer data must be endianness swapped */
/* per-stream, s_flags */
#define CX18_F_S_CLAIMED 3 /* this stream is claimed */
cx_write(MC417_RWD, regval);
/* Transition RD to effect read transaction across bus.
- * Transtion 0x5000 -> 0x9000 correct (RD/RDY -> WR/RDY)?
+ * Transition 0x5000 -> 0x9000 correct (RD/RDY -> WR/RDY)?
* Should it be 0x9000 -> 0xF000 (also why is RDY being set, it's
* input only...)
*/
/* set automatic LED control by FPGA */
pluto_rw(pluto, REG_MISC, MISC_ALED, MISC_ALED);
- /* set data endianess */
+ /* set data endianness */
#ifdef __LITTLE_ENDIAN
pluto_rw(pluto, REG_PIDn(0), PID0_END, PID0_END);
#else
if (fw_debug) {
dev->kthread = kthread_run(saa7164_thread_function, dev,
"saa7164 debug");
- if (!dev->kthread)
+ if (IS_ERR(dev->kthread)) {
+ dev->kthread = NULL;
printk(KERN_ERR "%s() Failed to create "
"debug kernel thread\n", __func__);
+ }
}
} /* != BOARD_UNKNOWN */
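/*
 * The check above matters because kthread_run() never returns NULL: on
 * failure it returns an ERR_PTR() value, so an "if (!dev->kthread)" test
 * can never fire. Minimal sketch of the correct pattern:
 */
static struct task_struct *example_run_thread(int (*fn)(void *), void *data)
{
	struct task_struct *t = kthread_run(fn, data, "example");

	if (IS_ERR(t))
		return NULL;	/* or propagate PTR_ERR(t) to the caller */
	return t;
}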
if (q_data->fourcc == V4L2_PIX_FMT_H264 &&
vb->vb2_queue->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
/*
- * For backwards compatiblity, queuing an empty buffer marks
+ * For backwards compatibility, queuing an empty buffer marks
* the stream end
*/
if (vb2_get_plane_payload(vb, 0) == 0)
dbg("fimc%d: state: 0x%lx", fimc->id, fimc->state);
- /* Enable clocks and perform basic initalization */
+ /* Enable clocks and perform basic initialization */
clk_enable(fimc->clock[CLK_GATE]);
fimc_hw_reset(fimc);
goto dev_unlock;
drvdata = dev_get_drvdata(dev);
- /* Some subdev didn't probe succesfully id drvdata is NULL */
+	/* drvdata is NULL if some subdev didn't probe successfully */
if (drvdata) {
switch (plat_entity) {
case IDX_FIMC:
ctx->xt->dir = DMA_MEM_TO_MEM;
ctx->xt->src_sgl = false;
ctx->xt->dst_sgl = true;
- flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT |
- DMA_COMPL_SKIP_DEST_UNMAP | DMA_COMPL_SKIP_SRC_UNMAP;
+ flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
tx = dmadev->device_prep_interleaved_dma(chan, ctx->xt, flags);
if (tx == NULL) {
struct mmp_camera *cam = mcam_to_cam(mcam);
struct mmp_camera_platform_data *pdata;
- if (mcam->bus_type == V4L2_MBUS_CSI2) {
- cam->mipi_clk = devm_clk_get(mcam->dev, "mipi");
- if ((IS_ERR(cam->mipi_clk) && mcam->dphy[2] == 0))
- return PTR_ERR(cam->mipi_clk);
- }
-
/*
* Turn on power and clocks to the controller.
*/
gpio_set_value(pdata->sensor_power_gpio, 0);
gpio_set_value(pdata->sensor_reset_gpio, 0);
- if (mcam->bus_type == V4L2_MBUS_CSI2 && !IS_ERR(cam->mipi_clk)) {
- if (cam->mipi_clk)
- devm_clk_put(mcam->dev, cam->mipi_clk);
- cam->mipi_clk = NULL;
- }
-
mcam_clk_disable(mcam);
}
return;
/* get the escape clk, this is hard coded */
+ clk_prepare_enable(cam->mipi_clk);
tx_clk_esc = (clk_get_rate(cam->mipi_clk) / 1000000) / 12;
-
+ clk_disable_unprepare(cam->mipi_clk);
/*
* dphy[2] - CSI2_DPHY6:
* bit 0 ~ bit 7: CK Term Enable
return IRQ_RETVAL(handled);
}
-static void mcam_deinit_clk(struct mcam_camera *mcam)
-{
- unsigned int i;
-
- for (i = 0; i < NR_MCAM_CLK; i++) {
- if (!IS_ERR(mcam->clk[i])) {
- if (mcam->clk[i])
- devm_clk_put(mcam->dev, mcam->clk[i]);
- }
- mcam->clk[i] = NULL;
- }
-}
-
static void mcam_init_clk(struct mcam_camera *mcam)
{
unsigned int i;
if (cam == NULL)
return -ENOMEM;
cam->pdev = pdev;
- cam->mipi_clk = NULL;
INIT_LIST_HEAD(&cam->devlist);
mcam = &cam->mcam;
mcam->mclk_div = pdata->mclk_div;
mcam->bus_type = pdata->bus_type;
mcam->dphy = pdata->dphy;
+ if (mcam->bus_type == V4L2_MBUS_CSI2) {
+ cam->mipi_clk = devm_clk_get(mcam->dev, "mipi");
+ if ((IS_ERR(cam->mipi_clk) && mcam->dphy[2] == 0))
+ return PTR_ERR(cam->mipi_clk);
+ }
mcam->mipi_enabled = false;
mcam->lane = pdata->lane;
mcam->chip_id = MCAM_ARMADA610;
*/
ret = mmpcam_power_up(mcam);
if (ret)
- goto out_deinit_clk;
+ return ret;
ret = mccic_register(mcam);
if (ret)
goto out_power_down;
mccic_shutdown(mcam);
out_power_down:
mmpcam_power_down(mcam);
-out_deinit_clk:
- mcam_deinit_clk(mcam);
return ret;
}
static int mmpcam_remove(struct mmp_camera *cam)
{
struct mcam_camera *mcam = &cam->mcam;
- struct mmp_camera_platform_data *pdata;
mmpcam_remove_device(cam);
mccic_shutdown(mcam);
mmpcam_power_down(mcam);
- pdata = cam->pdev->dev.platform_data;
- gpio_free(pdata->sensor_reset_gpio);
- gpio_free(pdata->sensor_power_gpio);
- mcam_deinit_clk(mcam);
- iounmap(cam->power_regs);
- iounmap(mcam->regs);
- kfree(cam);
return 0;
}
* ISP clocks get disabled in suspend(). Similarly, the clocks are reenabled in
* resume(), and the pipelines are restarted in complete().
*
- * TODO: PM dependencies between the ISP and sensors are not modeled explicitly
+ * TODO: PM dependencies between the ISP and sensors are not modelled explicitly
* yet.
*/
static int isp_pm_prepare(struct device *dev)
if (subdev == NULL)
return -EINVAL;
- mutex_lock(&video->mutex);
-
fmt.pad = pad;
fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
- ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
- if (ret == -ENOIOCTLCMD)
- ret = -EINVAL;
+ mutex_lock(&video->mutex);
+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
mutex_unlock(&video->mutex);
if (ret)
#define S5P_FIMV_R2H_CMD_EDFU_INIT_RET 16
#define S5P_FIMV_R2H_CMD_ERR_RET 32
-/* Dummy definition for MFCv6 compatibilty */
+/* Dummy definition for MFCv6 compatibility */
#define S5P_FIMV_CODEC_H264_MVC_DEC -1
#define S5P_FIMV_R2H_CMD_FIELD_DONE_RET -1
#define S5P_FIMV_MFC_RESET -1
frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
/* Copy timestamp / timecode from decoded src to dst and set
- appropraite flags */
+ appropriate flags */
src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
if (vb2_dma_contig_plane_dma_addr(dst_buf->b, 0) == dec_y_addr) {
case MFCINST_FINISHING:
case MFCINST_FINISHED:
case MFCINST_RUNNING:
- /* It is higly probable that an error occured
+ /* It is highly probable that an error occurred
* while decoding a frame */
clear_work_bit(ctx);
ctx->state = MFCINST_ERROR;
mfc_debug(1, "Int reason: %d (err: %08x)\n", reason, err);
switch (reason) {
case S5P_MFC_R2H_CMD_ERR_RET:
- /* An error has occured */
+ /* An error has occurred */
if (ctx->state == MFCINST_RUNNING &&
s5p_mfc_hw_call(dev->mfc_ops, err_dec, err) >=
dev->warn_start)
mutex_unlock(&dev->mfc_mutex);
mfc_debug_leave();
return ret;
- /* Deinit when failure occured */
+ /* Deinit when failure occurred */
err_queue_init:
if (dev->num_inst == 1)
s5p_mfc_deinit_hw(dev);
/* Mark context as idle */
clear_work_bit_irqsave(ctx);
/* If instance was initialised then
- * return instance and free reosurces */
+ * return instance and free resources */
if (ctx->inst_no != MFC_NO_INSTANCE_SET) {
mfc_debug(2, "Has to free instance\n");
ctx->state = MFCINST_RETURN_INST;
set_work_bit_irqsave(ctx);
s5p_mfc_clean_ctx_int_flags(ctx);
s5p_mfc_hw_call(dev->mfc_ops, try_run, dev);
- /* Wait until instance is returned or timeout occured */
+ /* Wait until instance is returned or timeout occurred */
if (s5p_mfc_wait_for_done_ctx
(ctx, S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0)) {
s5p_mfc_clock_off();
} else {
/* In this case bank2 can point to the same address as bank1.
- * Firmware will always occupy the beggining of this area so it is
+ * Firmware will always occupy the beginning of this area so it is
* impossible to have a video frame buffer with zero address. */
dev->bank2 = dev->bank1;
}
int num_subframes;
/** specifies to which subframe belong given plane */
int plane2subframe[MXR_MAX_PLANES];
- /** internal code, driver dependant */
+ /** internal code, driver dependent */
unsigned long cookie;
};
mutex_lock(&mdev->mutex);
/* timings change cannot be done while there is an entity
- * dependant on output configuration
+ * dependent on output configuration
*/
if (mdev->n_output > 0) {
mutex_unlock(&mdev->mutex);
mutex_lock(&mdev->mutex);
/* standard change cannot be done while there is an entity
- * dependant on output configuration
+ * dependent on output configuration
*/
if (mdev->n_output > 0) {
mutex_unlock(&mdev->mutex);
if (ctrlclock & LCLK_EN)
CAM_WRITE(pcdev, CTRLCLOCK, ctrlclock);
- /* select bus endianess */
+ /* select bus endianness */
xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
fmt = xlate->host_fmt;
desc = dmaengine_prep_slave_sg(fh->chan,
buf->sg, sg_elems, DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP);
+ DMA_PREP_INTERRUPT);
if (!desc) {
spin_lock_irq(&fh->queue_lock);
list_del_init(&vb->queue);
return 0;
}
-/* timeperframe is arbitrary and continous */
+/* timeperframe is arbitrary and continuous */
static int vidioc_enum_frameintervals(struct file *file, void *priv,
struct v4l2_frmivalenum *fival)
{
fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
- /* fill in stepwise (step=1.0 is requred by V4L2 spec) */
+ /* fill in stepwise (step=1.0 is required by V4L2 spec) */
fival->stepwise.min = tpf_min;
fival->stepwise.max = tpf_max;
fival->stepwise.step = (struct v4l2_fract) {1, 1};
* Increment the VSP1 reference count and initialize the device if the first
* reference is taken.
*
- * Return a pointer to the VSP1 device or NULL if an error occured.
+ * Return a pointer to the VSP1 device or NULL if an error occurred.
*/
struct vsp1_device *vsp1_device_get(struct vsp1_device *vsp1)
{
/* ... and the buffers queue... */
video->alloc_ctx = vb2_dma_contig_init_ctx(video->vsp1->dev);
- if (IS_ERR(video->alloc_ctx))
+ if (IS_ERR(video->alloc_ctx)) {
+ ret = PTR_ERR(video->alloc_ctx);
goto error;
+ }
video->queue.type = video->type;
video->queue.io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
cancel_work_sync(&shark->led_work);
}
-#ifdef CONFIG_PM
-static void shark_resume_leds(struct shark_device *shark)
+static inline void shark_resume_leds(struct shark_device *shark)
{
if (test_bit(BLUE_IS_PULSE, &shark->brightness_new))
set_bit(BLUE_PULSE_LED, &shark->brightness_new);
set_bit(RED_LED, &shark->brightness_new);
schedule_work(&shark->led_work);
}
-#endif
#else
static int shark_register_leds(struct shark_device *shark, struct device *dev)
{
cancel_work_sync(&shark->led_work);
}
-#ifdef CONFIG_PM
-static void shark_resume_leds(struct shark_device *shark)
+static inline void shark_resume_leds(struct shark_device *shark)
{
int i;
schedule_work(&shark->led_work);
}
-#endif
#else
static int shark_register_leds(struct shark_device *shark, struct device *dev)
{
*
* @tune_freq: Tune chip to a specific frequency
* @seek_start: Start station seeking
- * @rsq_status: Get Recieved Signal Quality(RSQ) status
- * @rds_blckcnt: Get recived RDS blocks count
+ * @rsq_status: Get Received Signal Quality(RSQ) status
+ * @rds_blckcnt: Get received RDS blocks count
* @phase_diversity: Change phase diversity mode of the tuner
* @phase_div_status: Get phase diversity mode status
* @acf_status: Get the status of Automatically Controlled
So we keep it as-is. */
return -EINVAL;
}
- clamp(freq, FREQ_MIN * FREQ_MUL, FREQ_MAX * FREQ_MUL);
+ freq = clamp(freq, FREQ_MIN * FREQ_MUL, FREQ_MAX * FREQ_MUL);
tea5764_power_up(radio);
tea5764_tune(radio, (freq * 125) / 2);
return 0;
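/*
 * This fix (and the identical TEF6862 one below) addresses the same bug:
 * clamp() does not modify its argument, it returns the clamped value, so
 * the result has to be assigned back. Sketch:
 */
static unsigned int example_clamp_khz(unsigned int freq)
{
	/* wrong:  clamp(freq, 76000u, 108000u);  -- result discarded  */
	freq = clamp(freq, 76000u, 108000u);	/* correct: assign back */
	return freq;
}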
if (f->tuner != 0)
return -EINVAL;
- clamp(freq, TEF6862_LO_FREQ, TEF6862_HI_FREQ);
+ freq = clamp(freq, TEF6862_LO_FREQ, TEF6862_HI_FREQ);
pll = 1964 + ((freq - TEF6862_LO_FREQ) * 20) / FREQ_MUL;
i2cmsg[0] = (MSA_MODE_PRESET << MSA_MODE_SHIFT) | WM_SUB_PLLM;
i2cmsg[1] = (pll >> 8) & 0xff;
* 0x68nnnnB7 to 0x6AnnnnB7, the left mouse button generates
* 0x688301b7 and the right one 0x688481b7. All other keys generate
* 0x2nnnnnnn. Position coordinate is encoded in buf[1] and buf[2] with
- * reversed endianess. Extract direction from buffer, rotate endianess,
+ * reversed endianness. Extract direction from buffer, rotate endianness,
* adjust sign and feed the values into stabilize(). The resulting codes
* will be 0x01008000, 0x01007F00, which match the newer devices.
*/
#define RR3_IR_IO_LENGTH_FUZZ 0x04
/* Timeout for end of signal detection */
#define RR3_IR_IO_SIG_TIMEOUT 0x05
-/* Minumum value for pause recognition. */
+/* Minimum value for pause recognition. */
#define RR3_IR_IO_MIN_PAUSE 0x06
/* Clock freq. of EZ-USB chip */
* DNC Output is selected, the other is always off)
*
* @state: ptr to mt2063_state structure
- * @Mode: desired reciever delivery system
+ * @Mode: desired receiver delivery system
*
* Note: Register cache must be valid for it to work
*/
/*
* As defined on EN 300 429, the DVB-C roll-off factor is 0.15.
- * So, the amount of the needed bandwith is given by:
+ * So, the amount of the needed bandwidth is given by:
* Bw = Symbol_rate * (1 + 0.15)
* As such, the maximum symbol rate supported by 6 MHz is given by:
* max_symbol_rate = 6 MHz / 1.15 = 5217391 Bauds
#define V4L2_STD_A2 (V4L2_STD_A2_A | V4L2_STD_A2_B)
#define V4L2_STD_NICAM (V4L2_STD_NICAM_A | V4L2_STD_NICAM_B)
-/* To preserve backward compatibilty,
+/* To preserve backward compatibility,
(std & V4L2_STD_AUDIO) = 0 means that ALL audio stds are supported
*/
usb_set_intfdata(interface, NULL);
err_if:
usb_put_dev(udev);
- kfree(dev);
clear_bit(dev->devno, &cx231xx_devused);
+ kfree(dev);
return retval;
}
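/*
 * The reorder above is a use-after-free fix: dev->devno was being read
 * after kfree(dev). Any access to the object's fields must come before
 * the free. Sketch of the safe ordering (hypothetical structure):
 */
struct example_dev { unsigned long devno; };

static void example_teardown(struct example_dev *dev, unsigned long *used)
{
	clear_bit(dev->devno, used);	/* touch fields first ...     */
	kfree(dev);			/* ... only then free the dev */
}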
{
u8 wbuf[MAX_XFER_SIZE];
u8 mbox = (reg >> 16) & 0xff;
- struct usb_req req = { CMD_MEM_WR, mbox, sizeof(wbuf), wbuf, 0, NULL };
+ struct usb_req req = { CMD_MEM_WR, mbox, 6 + len, wbuf, 0, NULL };
if (6 + len > sizeof(wbuf)) {
dev_warn(&d->udev->dev, "%s: i2c wr: len=%d is too big!\n",
} else {
/* I2C */
u8 buf[MAX_XFER_SIZE];
- struct usb_req req = { CMD_I2C_RD, 0, sizeof(buf),
+ struct usb_req req = { CMD_I2C_RD, 0, 5 + msg[0].len,
buf, msg[1].len, msg[1].buf };
if (5 + msg[0].len > sizeof(buf)) {
dev_warn(&d->udev->dev,
"%s: i2c xfer: len=%d is too big!\n",
KBUILD_MODNAME, msg[0].len);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto unlock;
}
req.mbox |= ((msg[0].addr & 0x80) >> 3);
buf[0] = msg[1].len;
} else {
/* I2C */
u8 buf[MAX_XFER_SIZE];
- struct usb_req req = { CMD_I2C_WR, 0, sizeof(buf), buf,
- 0, NULL };
+ struct usb_req req = { CMD_I2C_WR, 0, 5 + msg[0].len,
+ buf, 0, NULL };
if (5 + msg[0].len > sizeof(buf)) {
dev_warn(&d->udev->dev,
"%s: i2c xfer: len=%d is too big!\n",
KBUILD_MODNAME, msg[0].len);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto unlock;
}
req.mbox |= ((msg[0].addr & 0x80) >> 3);
buf[0] = msg[0].len;
ret = -EOPNOTSUPP;
}
+unlock:
mutex_unlock(&d->i2c_mutex);
if (ret < 0)
/* XXX: that same ID [0ccd:0099] is used by af9015 driver too */
{ DVB_USB_DEVICE(USB_VID_TERRATEC, 0x0099,
&af9035_props, "TerraTec Cinergy T Stick Dual RC (rev. 2)", NULL) },
+ { DVB_USB_DEVICE(USB_VID_LEADTEK, 0x6a05,
+ &af9035_props, "Leadtek WinFast DTV Dongle Dual", NULL) },
{ }
};
MODULE_DEVICE_TABLE(usb, af9035_id_table);
struct mxl111sf_adap_state *adap_state = &state->adap_state[fe->id];
int err;
- /* exit if we didnt initialize the driver yet */
+ /* exit if we didn't initialize the driver yet */
if (!state->chip_id) {
mxl_debug("driver not yet initialized, exit.");
goto fail;
struct mxl111sf_adap_state *adap_state = &state->adap_state[fe->id];
int err;
- /* exit if we didnt initialize the driver yet */
+ /* exit if we didn't initialize the driver yet */
if (!state->chip_id) {
mxl_debug("driver not yet initialized, exit.");
goto fail;
if (rxlen > 62) {
err("i2c RX buffer can't exceed 62 bytes (dev 0x%02x)",
device_addr);
- txlen = 62;
+ rxlen = 62;
}
b[0] = I2C_SPEED_100KHZ_BIT;
em28xx_videodbg("users=%d\n", dev->users);
- mutex_lock(&dev->lock);
vb2_fop_release(filp);
+ mutex_lock(&dev->lock);
if (dev->users == 1) {
/* the device is already disconnected,
s32 nToSkip =
sd->swapRB * (gspca_dev->cam.cam_mode[mode].bytesperline + 1);
- /* Test only against 0202h, so endianess does not matter */
+ /* Test only against 0202h, so endianness does not matter */
switch (*(s16 *) data) {
case 0x0202: /* End of frame, start a new one */
gspca_frame_add(gspca_dev, LAST_PACKET, NULL, 0);
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
int ret = -EINVAL;
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
int ret = -EINVAL;
u8 data0, data1;
/* set serial interface clock divider: 30 MHz / (0x1f * 16 + 2) = 60240 Hz */
reg_w(gspca_dev, STK1135_REG_SICTL + 2, 0x1f);
+
+ /* wait a while for sensor to catch up */
+ udelay(1000);
}
static void stk1135_camera_disable(struct gspca_dev *gspca_dev)
struct sd *sd = (struct sd *) gspca_dev;
struct cam *cam = &gspca_dev->cam;
- /* Give the camera some time to settle, otherwise initalization will
+ /* Give the camera some time to settle, otherwise initialization will
fail on hotplug, and yes it really needs a full second. */
msleep(1000);
{USB_DEVICE(0x055f, 0xc650), BS(SPCA533, 0)},
{USB_DEVICE(0x05da, 0x1018), BS(SPCA504B, 0)},
{USB_DEVICE(0x06d6, 0x0031), BS(SPCA533, 0)},
+ {USB_DEVICE(0x06d6, 0x0041), BS(SPCA504B, 0)},
{USB_DEVICE(0x0733, 0x1311), BS(SPCA533, 0)},
{USB_DEVICE(0x0733, 0x1314), BS(SPCA533, 0)},
{USB_DEVICE(0x0733, 0x2211), BS(SPCA533, 0)},
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
if (len == 8 && data[4] == 1) {
input_report_key(gspca_dev->input_dev, KEY_CAMERA, 1);
/* Set the leds off */
pwc_set_leds(pdev, 0, 0);
- /* Setup intial videomode */
+ /* Setup initial videomode */
rc = pwc_set_video_mode(pdev, MAX_WIDTH, MAX_HEIGHT,
V4L2_PIX_FMT_YUV420, 30, &compression, 1);
if (rc)
#define USBTV_ISOC_TRANSFERS 16
#define USBTV_ISOC_PACKETS 8
-#define USBTV_WIDTH 720
-#define USBTV_HEIGHT 480
-
#define USBTV_CHUNK_SIZE 256
#define USBTV_CHUNK 240
-#define USBTV_CHUNKS (USBTV_WIDTH * USBTV_HEIGHT \
- / 4 / USBTV_CHUNK)
/* Chunk header. */
#define USBTV_MAGIC_OK(chunk) ((be32_to_cpu(chunk[0]) & 0xff000000) \
#define USBTV_ODD(chunk) ((be32_to_cpu(chunk[0]) & 0x0000f000) >> 15)
#define USBTV_CHUNK_NO(chunk) (be32_to_cpu(chunk[0]) & 0x00000fff)
+#define USBTV_TV_STD (V4L2_STD_525_60 | V4L2_STD_PAL)
+
+/* parameters for supported TV norms */
+struct usbtv_norm_params {
+ v4l2_std_id norm;
+ int cap_width, cap_height;
+};
+
+static struct usbtv_norm_params norm_params[] = {
+ {
+ .norm = V4L2_STD_525_60,
+ .cap_width = 720,
+ .cap_height = 480,
+ },
+ {
+ .norm = V4L2_STD_PAL,
+ .cap_width = 720,
+ .cap_height = 576,
+ }
+};
+
/* A single videobuf2 frame buffer. */
struct usbtv_buf {
struct vb2_buffer vb;
USBTV_COMPOSITE_INPUT,
USBTV_SVIDEO_INPUT,
} input;
+ v4l2_std_id norm;
+ int width, height;
+ int n_chunks;
int iso_size;
unsigned int sequence;
struct urb *isoc_urbs[USBTV_ISOC_TRANSFERS];
};
+static int usbtv_configure_for_norm(struct usbtv *usbtv, v4l2_std_id norm)
+{
+ int i, ret = 0;
+ struct usbtv_norm_params *params = NULL;
+
+ for (i = 0; i < ARRAY_SIZE(norm_params); i++) {
+ if (norm_params[i].norm & norm) {
+ params = &norm_params[i];
+ break;
+ }
+ }
+
+ if (params) {
+ usbtv->width = params->cap_width;
+ usbtv->height = params->cap_height;
+ usbtv->n_chunks = usbtv->width * usbtv->height
+ / 4 / USBTV_CHUNK;
+ usbtv->norm = params->norm;
+ } else
+ ret = -EINVAL;
+
+ return ret;
+}
+
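/*
 * Worked numbers for n_chunks above: each chunk carries USBTV_CHUNK
 * (240) 32-bit payload words = 960 bytes = 480 YUYV pixels, and a frame
 * is two interlaced fields. So for 525/60: 720 * 480 / 4 / 240 = 360
 * chunks per field, and for PAL: 720 * 576 / 4 / 240 = 432. That also
 * matches the vb2 buffer size computed later as
 * USBTV_CHUNK * n_chunks * 2 * sizeof(u32) = width * height * 2 bytes.
 */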
static int usbtv_set_regs(struct usbtv *usbtv, const u16 regs[][2], int size)
{
int ret;
return ret;
}
+static int usbtv_select_norm(struct usbtv *usbtv, v4l2_std_id norm)
+{
+ int ret;
+ static const u16 pal[][2] = {
+ { USBTV_BASE + 0x001a, 0x0068 },
+ { USBTV_BASE + 0x010e, 0x0072 },
+ { USBTV_BASE + 0x010f, 0x00a2 },
+ { USBTV_BASE + 0x0112, 0x00b0 },
+ { USBTV_BASE + 0x0117, 0x0001 },
+ { USBTV_BASE + 0x0118, 0x002c },
+ { USBTV_BASE + 0x012d, 0x0010 },
+ { USBTV_BASE + 0x012f, 0x0020 },
+ { USBTV_BASE + 0x024f, 0x0002 },
+ { USBTV_BASE + 0x0254, 0x0059 },
+ { USBTV_BASE + 0x025a, 0x0016 },
+ { USBTV_BASE + 0x025b, 0x0035 },
+ { USBTV_BASE + 0x0263, 0x0017 },
+ { USBTV_BASE + 0x0266, 0x0016 },
+ { USBTV_BASE + 0x0267, 0x0036 }
+ };
+
+ static const u16 ntsc[][2] = {
+ { USBTV_BASE + 0x001a, 0x0079 },
+ { USBTV_BASE + 0x010e, 0x0068 },
+ { USBTV_BASE + 0x010f, 0x009c },
+ { USBTV_BASE + 0x0112, 0x00f0 },
+ { USBTV_BASE + 0x0117, 0x0000 },
+ { USBTV_BASE + 0x0118, 0x00fc },
+ { USBTV_BASE + 0x012d, 0x0004 },
+ { USBTV_BASE + 0x012f, 0x0008 },
+ { USBTV_BASE + 0x024f, 0x0001 },
+ { USBTV_BASE + 0x0254, 0x005f },
+ { USBTV_BASE + 0x025a, 0x0012 },
+ { USBTV_BASE + 0x025b, 0x0001 },
+ { USBTV_BASE + 0x0263, 0x001c },
+ { USBTV_BASE + 0x0266, 0x0011 },
+ { USBTV_BASE + 0x0267, 0x0005 }
+ };
+
+ ret = usbtv_configure_for_norm(usbtv, norm);
+
+ if (!ret) {
+ if (norm & V4L2_STD_525_60)
+ ret = usbtv_set_regs(usbtv, ntsc, ARRAY_SIZE(ntsc));
+ else if (norm & V4L2_STD_PAL)
+ ret = usbtv_set_regs(usbtv, pal, ARRAY_SIZE(pal));
+ }
+
+ return ret;
+}
+
static int usbtv_setup_capture(struct usbtv *usbtv)
{
int ret;
{ USBTV_BASE + 0x0284, 0x0088 },
{ USBTV_BASE + 0x0003, 0x0004 },
- { USBTV_BASE + 0x001a, 0x0079 },
{ USBTV_BASE + 0x0100, 0x00d3 },
- { USBTV_BASE + 0x010e, 0x0068 },
- { USBTV_BASE + 0x010f, 0x009c },
- { USBTV_BASE + 0x0112, 0x00f0 },
{ USBTV_BASE + 0x0115, 0x0015 },
- { USBTV_BASE + 0x0117, 0x0000 },
- { USBTV_BASE + 0x0118, 0x00fc },
- { USBTV_BASE + 0x012d, 0x0004 },
- { USBTV_BASE + 0x012f, 0x0008 },
{ USBTV_BASE + 0x0220, 0x002e },
{ USBTV_BASE + 0x0225, 0x0008 },
{ USBTV_BASE + 0x024e, 0x0002 },
- { USBTV_BASE + 0x024f, 0x0001 },
- { USBTV_BASE + 0x0254, 0x005f },
- { USBTV_BASE + 0x025a, 0x0012 },
- { USBTV_BASE + 0x025b, 0x0001 },
- { USBTV_BASE + 0x0263, 0x001c },
- { USBTV_BASE + 0x0266, 0x0011 },
- { USBTV_BASE + 0x0267, 0x0005 },
{ USBTV_BASE + 0x024e, 0x0002 },
{ USBTV_BASE + 0x024f, 0x0002 },
};
if (ret)
return ret;
+ ret = usbtv_select_norm(usbtv, usbtv->norm);
+ if (ret)
+ return ret;
+
ret = usbtv_select_input(usbtv, usbtv->input);
if (ret)
return ret;
frame_id = USBTV_FRAME_ID(chunk);
odd = USBTV_ODD(chunk);
chunk_no = USBTV_CHUNK_NO(chunk);
- if (chunk_no >= USBTV_CHUNKS)
+ if (chunk_no >= usbtv->n_chunks)
return;
/* Beginning of a frame. */
usbtv->chunks_done++;
/* Last chunk in a frame, signalling an end */
- if (odd && chunk_no == USBTV_CHUNKS-1) {
+ if (odd && chunk_no == usbtv->n_chunks-1) {
int size = vb2_plane_size(&buf->vb, 0);
enum vb2_buffer_state state = usbtv->chunks_done ==
- USBTV_CHUNKS ?
+ usbtv->n_chunks ?
VB2_BUF_STATE_DONE :
VB2_BUF_STATE_ERROR;
static int usbtv_enum_input(struct file *file, void *priv,
struct v4l2_input *i)
{
+ struct usbtv *dev = video_drvdata(file);
+
switch (i->index) {
case USBTV_COMPOSITE_INPUT:
strlcpy(i->name, "Composite", sizeof(i->name));
}
i->type = V4L2_INPUT_TYPE_CAMERA;
- i->std = V4L2_STD_525_60;
+ i->std = dev->vdev.tvnorms;
return 0;
}
static int usbtv_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_format *f)
{
- f->fmt.pix.width = USBTV_WIDTH;
- f->fmt.pix.height = USBTV_HEIGHT;
+ struct usbtv *usbtv = video_drvdata(file);
+
+ f->fmt.pix.width = usbtv->width;
+ f->fmt.pix.height = usbtv->height;
f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
f->fmt.pix.field = V4L2_FIELD_INTERLACED;
- f->fmt.pix.bytesperline = USBTV_WIDTH * 2;
+ f->fmt.pix.bytesperline = usbtv->width * 2;
f->fmt.pix.sizeimage = (f->fmt.pix.bytesperline * f->fmt.pix.height);
f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
- f->fmt.pix.priv = 0;
+
return 0;
}
static int usbtv_g_std(struct file *file, void *priv, v4l2_std_id *norm)
{
- *norm = V4L2_STD_525_60;
+ struct usbtv *usbtv = video_drvdata(file);
+ *norm = usbtv->norm;
return 0;
}
+static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
+{
+ int ret = -EINVAL;
+ struct usbtv *usbtv = video_drvdata(file);
+
+ if ((norm & V4L2_STD_525_60) || (norm & V4L2_STD_PAL))
+ ret = usbtv_select_norm(usbtv, norm);
+
+ return ret;
+}
+
static int usbtv_g_input(struct file *file, void *priv, unsigned int *i)
{
struct usbtv *usbtv = video_drvdata(file);
return usbtv_select_input(usbtv, i);
}
-static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
-{
- if (norm & V4L2_STD_525_60)
- return 0;
- return -EINVAL;
-}
-
struct v4l2_ioctl_ops usbtv_ioctl_ops = {
.vidioc_querycap = usbtv_querycap,
.vidioc_enum_input = usbtv_enum_input,
const struct v4l2_format *v4l_fmt, unsigned int *nbuffers,
unsigned int *nplanes, unsigned int sizes[], void *alloc_ctxs[])
{
+ struct usbtv *usbtv = vb2_get_drv_priv(vq);
+
if (*nbuffers < 2)
*nbuffers = 2;
*nplanes = 1;
- sizes[0] = USBTV_WIDTH * USBTV_HEIGHT / 2 * sizeof(u32);
+ sizes[0] = USBTV_CHUNK * usbtv->n_chunks * 2 * sizeof(u32);
return 0;
}
return -ENOMEM;
usbtv->dev = dev;
usbtv->udev = usb_get_dev(interface_to_usbdev(intf));
+
usbtv->iso_size = size;
+
+ (void)usbtv_configure_for_norm(usbtv, V4L2_STD_525_60);
+
spin_lock_init(&usbtv->buflock);
mutex_init(&usbtv->v4l2_lock);
mutex_init(&usbtv->vb2q_lock);
usbtv->vdev.release = video_device_release_empty;
usbtv->vdev.fops = &usbtv_fops;
usbtv->vdev.ioctl_ops = &usbtv_ioctl_ops;
- usbtv->vdev.tvnorms = V4L2_STD_525_60;
+ usbtv->vdev.tvnorms = USBTV_TV_STD;
usbtv->vdev.queue = &usbtv->vb2q;
usbtv->vdev.lock = &usbtv->v4l2_lock;
set_bit(V4L2_FL_USE_FH_PRIO, &usbtv->vdev.flags);
*
* SOF = ((SOF2 - SOF1) * PTS + SOF1 * STC2 - SOF2 * STC1) / (STC2 - STC1) (1)
*
- * to avoid loosing precision in the division. Similarly, the host timestamp is
+ * to avoid losing precision in the division. Similarly, the host timestamp is
* computed with
*
* TS = ((TS2 - TS1) * PTS + TS1 * SOF2 - TS2 * SOF1) / (SOF2 - SOF1) (2)
"Advanced Simple",
"Core",
"Simple Scalable",
- "Advanced Coding Efficency",
+ "Advanced Coding Efficiency",
NULL,
};
__vb2_plane_dmabuf_put(q, &vb->planes[plane]);
}
+/**
+ * __setup_lengths() - setup initial lengths for every plane in
+ * every buffer on the queue
+ */
+static void __setup_lengths(struct vb2_queue *q, unsigned int n)
+{
+ unsigned int buffer, plane;
+ struct vb2_buffer *vb;
+
+ for (buffer = q->num_buffers; buffer < q->num_buffers + n; ++buffer) {
+ vb = q->bufs[buffer];
+ if (!vb)
+ continue;
+
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ vb->v4l2_planes[plane].length = q->plane_sizes[plane];
+ }
+}
+
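/*
 * Splitting the length initialization out of __setup_offsets() is needed
 * because the mem_offset cookies only exist for V4L2_MEMORY_MMAP queues,
 * while plane lengths must be set up for every memory type; the caller
 * below therefore always runs __setup_lengths() and only conditionally
 * __setup_offsets().
 */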
/**
* __setup_offsets() - setup unique offsets ("cookies") for every plane in
* every buffer on the queue
continue;
for (plane = 0; plane < vb->num_planes; ++plane) {
- vb->v4l2_planes[plane].length = q->plane_sizes[plane];
vb->v4l2_planes[plane].m.mem_offset = off;
dprintk(3, "Buffer %d, plane %d offset 0x%08lx\n",
q->bufs[q->num_buffers + buffer] = vb;
}
+ __setup_lengths(q, buffer);
if (memory == V4L2_MEMORY_MMAP)
__setup_offsets(q, buffer);
return -EINVAL;
}
- if (eb->flags & ~O_CLOEXEC) {
- dprintk(1, "Queue does support only O_CLOEXEC flag\n");
+ if (eb->flags & ~(O_CLOEXEC | O_ACCMODE)) {
+ dprintk(1, "Queue does support only O_CLOEXEC and access mode flags\n");
return -EINVAL;
}
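/*
 * O_ACCMODE masks the access-mode bits (O_RDONLY/O_WRONLY/O_RDWR), so
 * the check above now admits an access mode alongside O_CLOEXEC. The
 * lines below route the two halves differently: the access mode goes to
 * the allocator's get_dmabuf(), everything else (O_CLOEXEC) to
 * dma_buf_fd(). Sketch of the split (illustrative only):
 */
static void example_split_expbuf_flags(__u32 flags,
				       unsigned long *memop_flags,
				       int *fd_flags)
{
	*memop_flags = flags & O_ACCMODE;	/* for get_dmabuf() */
	*fd_flags = flags & ~O_ACCMODE;		/* for dma_buf_fd() */
}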
vb_plane = &vb->planes[eb->plane];
- dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv);
+ dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv, eb->flags & O_ACCMODE);
if (IS_ERR_OR_NULL(dbuf)) {
dprintk(1, "Failed to export buffer %d, plane %d\n",
eb->index, eb->plane);
return -EINVAL;
}
- ret = dma_buf_fd(dbuf, eb->flags);
+ ret = dma_buf_fd(dbuf, eb->flags & ~O_ACCMODE);
if (ret < 0) {
dprintk(3, "buffer %d, plane %d failed to export (%d)\n",
eb->index, eb->plane, ret);
return sgt;
}
-static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv)
+static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags)
{
struct vb2_dc_buf *buf = buf_priv;
struct dma_buf *dbuf;
if (WARN_ON(!buf->sgt_base))
return NULL;
- dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, 0);
+ dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags);
if (IS_ERR(dbuf))
return NULL;
buf->pages = kzalloc(buf->num_pages * sizeof(struct page *),
GFP_KERNEL);
if (!buf->pages)
- return NULL;
+ goto userptr_fail_alloc_pages;
num_pages_from_user = get_user_pages(current, current->mm,
vaddr & PAGE_MASK,
while (--num_pages_from_user >= 0)
put_page(buf->pages[num_pages_from_user]);
kfree(buf->pages);
+userptr_fail_alloc_pages:
kfree(buf);
return NULL;
}
select MFD_CORE
select REGMAP_I2C
select REGMAP_IRQ
- depends on I2C && OF
+ depends on I2C=y && OF
help
The ams AS3722 is a compact system PMU suitable for mobile phones,
tablets etc. It has 4 DC/DC step-down regulators, 3 DC/DC step-down
.iTCO_version = 2,
},
[LPC_WPT_LP] = {
- .name = "Lynx Point_LP",
+ .name = "Wildcat Point_LP",
.iTCO_version = 2,
},
};
int sec_reg_read(struct sec_pmic_dev *sec_pmic, u8 reg, void *dest)
{
- return regmap_read(sec_pmic->regmap, reg, dest);
+ return regmap_read(sec_pmic->regmap_pmic, reg, dest);
}
EXPORT_SYMBOL_GPL(sec_reg_read);
int sec_bulk_read(struct sec_pmic_dev *sec_pmic, u8 reg, int count, u8 *buf)
{
- return regmap_bulk_read(sec_pmic->regmap, reg, buf, count);
+ return regmap_bulk_read(sec_pmic->regmap_pmic, reg, buf, count);
}
EXPORT_SYMBOL_GPL(sec_bulk_read);
int sec_reg_write(struct sec_pmic_dev *sec_pmic, u8 reg, u8 value)
{
- return regmap_write(sec_pmic->regmap, reg, value);
+ return regmap_write(sec_pmic->regmap_pmic, reg, value);
}
EXPORT_SYMBOL_GPL(sec_reg_write);
int sec_bulk_write(struct sec_pmic_dev *sec_pmic, u8 reg, int count, u8 *buf)
{
- return regmap_raw_write(sec_pmic->regmap, reg, buf, count);
+ return regmap_raw_write(sec_pmic->regmap_pmic, reg, buf, count);
}
EXPORT_SYMBOL_GPL(sec_bulk_write);
int sec_reg_update(struct sec_pmic_dev *sec_pmic, u8 reg, u8 val, u8 mask)
{
- return regmap_update_bits(sec_pmic->regmap, reg, mask, val);
+ return regmap_update_bits(sec_pmic->regmap_pmic, reg, mask, val);
}
EXPORT_SYMBOL_GPL(sec_reg_update);
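/*
 * The regmap rename reflects a split into two maps: PMIC registers sit
 * behind the main I2C client (regmap_pmic) while the RTC block gets its
 * own dummy client and map (regmap_rtc, set up below), so every accessor
 * must name its map explicitly. A matching RTC-side read could look like
 * this (hypothetical helper, not part of the patch):
 */
static int example_sec_rtc_reg_read(struct sec_pmic_dev *sec_pmic,
				    u8 reg, unsigned int *dest)
{
	return regmap_read(sec_pmic->regmap_rtc, reg, dest);
}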
.cache_type = REGCACHE_FLAT,
};
+static const struct regmap_config sec_rtc_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+};
+
#ifdef CONFIG_OF
/*
* Only the common platform data elements for s5m8767 are parsed here from the
break;
}
- sec_pmic->regmap = devm_regmap_init_i2c(i2c, regmap);
- if (IS_ERR(sec_pmic->regmap)) {
- ret = PTR_ERR(sec_pmic->regmap);
+ sec_pmic->regmap_pmic = devm_regmap_init_i2c(i2c, regmap);
+ if (IS_ERR(sec_pmic->regmap_pmic)) {
+ ret = PTR_ERR(sec_pmic->regmap_pmic);
dev_err(&i2c->dev, "Failed to allocate register map: %d\n",
ret);
return ret;
sec_pmic->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR);
i2c_set_clientdata(sec_pmic->rtc, sec_pmic);
+ sec_pmic->regmap_rtc = devm_regmap_init_i2c(sec_pmic->rtc,
+ &sec_rtc_regmap_config);
+ if (IS_ERR(sec_pmic->regmap_rtc)) {
+ ret = PTR_ERR(sec_pmic->regmap_rtc);
+ dev_err(&i2c->dev, "Failed to allocate RTC register map: %d\n",
+ ret);
+ return ret;
+ }
+
if (pdata && pdata->cfg_pmic_irq)
pdata->cfg_pmic_irq();
switch (type) {
case S5M8763X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s5m8763_irq_chip,
&sec_pmic->irq_data);
break;
case S5M8767X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s5m8767_irq_chip,
&sec_pmic->irq_data);
break;
case S2MPS11X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s2mps11_irq_chip,
&sec_pmic->irq_data);
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/io.h>
+#include <linux/sched.h>
#include <linux/mfd/core.h>
#include <linux/mfd/ti_ssp.h>
cells[id].id = id;
cells[id].name = data->dev_name;
cells[id].platform_data = data->pdata;
- cells[id].data_size = data->pdata_size;
}
error = mfd_add_devices(dev, 0, cells, 2, NULL, 0, NULL);
struct dma_async_tx_descriptor *tx;
dma_cookie_t cookie;
dma_addr_t dst, src;
- unsigned long dma_flags = DMA_COMPL_SKIP_DEST_UNMAP |
- DMA_COMPL_SKIP_SRC_UNMAP;
+ unsigned long dma_flags = 0;
dst_sg = buf->vb.sglist;
dst_nents = buf->vb.sglen;
{
char name[ENCLOSURE_NAME_SIZE];
+ /*
+ * In odd circumstances, like multipath devices, something else may
+ * already have removed the links, so check for this condition first.
+ */
+ if (!cdev->dev->kobj.sd)
+ return;
+
enclosure_link_name(cdev, name);
sysfs_remove_link(&cdev->dev->kobj, name);
sysfs_remove_link(&cdev->cdev.kobj, "device");
#define MEI_DEV_ID_PPT_2 0x1CBA /* Panther Point */
#define MEI_DEV_ID_PPT_3 0x1DBA /* Panther Point */
-#define MEI_DEV_ID_LPT 0x8C3A /* Lynx Point */
+#define MEI_DEV_ID_LPT_H 0x8C3A /* Lynx Point H */
#define MEI_DEV_ID_LPT_W 0x8D3A /* Lynx Point - Wellsburg */
#define MEI_DEV_ID_LPT_LP 0x9C3A /* Lynx Point LP */
+#define MEI_DEV_ID_LPT_HR 0x8CBA /* Lynx Point H Refresh */
+
+#define MEI_DEV_ID_WPT_LP 0x9CBA /* Wildcat Point LP */
/*
* MEI HW Section
*/
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_PPT_1)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_PPT_2)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_PPT_3)},
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_H)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_W)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_LP)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_LPT_HR)},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_WPT_LP)},
/* required last entry */
{0, }
{
struct mic_vdev *mvdev = to_micvdev(vdev);
struct mic_device_ctrl __iomem *dc = mvdev->dc;
- int retry = 100, i;
+ int retry;
iowrite8(0, &dc->host_ack);
iowrite8(1, &dc->vdev_reset);
mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
/* Wait till host completes all card accesses and acks the reset */
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
if (ioread8(&dc->host_ack))
break;
msleep(100);
/*
* The virtio_ring code calls this API when it wants to notify the Host.
*/
-static void mic_notify(struct virtqueue *vq)
+static bool mic_notify(struct virtqueue *vq)
{
struct mic_vdev *mvdev = vq->priv;
mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
+ return true;
}
static void mic_del_vq(struct virtqueue *vq, int n)
/* First assign the vring's allocated in host memory */
vqconfig = mic_vq_config(mvdev->desc) + index;
memcpy_fromio(&config, vqconfig, sizeof(config));
- _vr_size = vring_size(config.num, MIC_VIRTIO_RING_ALIGN);
+ _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN);
vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info));
- va = mic_card_map(mvdev->mdev, config.address, vr_size);
+ va = mic_card_map(mvdev->mdev, le64_to_cpu(config.address), vr_size);
if (!va)
return ERR_PTR(-ENOMEM);
mvdev->vr[index] = va;
memset_io(va, 0x0, _vr_size);
- vq = vring_new_virtqueue(index,
- config.num, MIC_VIRTIO_RING_ALIGN, vdev,
- false,
- va, mic_notify, callback, name);
+ vq = vring_new_virtqueue(index, le16_to_cpu(config.num),
+ MIC_VIRTIO_RING_ALIGN, vdev, false,
+ (void __force *)va, mic_notify, callback,
+ name);
if (!vq) {
err = -ENOMEM;
goto unmap;
/* Allocate and reassign used ring now */
mvdev->used_size[index] = PAGE_ALIGN(sizeof(__u16) * 3 +
- sizeof(struct vring_used_elem) * config.num);
+ sizeof(struct vring_used_elem) *
+ le16_to_cpu(config.num));
used = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
get_order(mvdev->used_size[index]));
if (!used) {
{
struct mic_vdev *mvdev = to_micvdev(vdev);
struct mic_device_ctrl __iomem *dc = mvdev->dc;
- int i, err, retry = 100;
+ int i, err, retry;
/* We must have this many virtqueues. */
if (nvqs > ioread8(&mvdev->desc->num_vq))
* rings have been re-assigned.
*/
mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
if (!ioread8(&dc->used_address_updated))
break;
msleep(100);
struct device *dev;
int ret;
- for (i = mic_aligned_size(struct mic_bootparam);
- i < MIC_DP_SIZE; i += mic_total_desc_size(d)) {
+ for (i = sizeof(struct mic_bootparam); i < MIC_DP_SIZE;
+ i += mic_total_desc_size(d)) {
d = mdrv->dp + i;
dc = (void __iomem *)d + mic_aligned_desc_size(d);
/*
continue;
/* device already exists */
- dev = device_find_child(mdrv->dev, d, mic_match_desc);
+ dev = device_find_child(mdrv->dev, (void __force *)d,
+ mic_match_desc);
if (dev) {
if (remove)
iowrite8(MIC_VIRTIO_PARAM_DEV_REMOVE,
static inline unsigned mic_desc_size(struct mic_device_desc __iomem *desc)
{
- return mic_aligned_size(*desc)
- + ioread8(&desc->num_vq) * mic_aligned_size(struct mic_vqconfig)
+ return sizeof(*desc)
+ + ioread8(&desc->num_vq) * sizeof(struct mic_vqconfig)
+ ioread8(&desc->feature_len) * 2
+ ioread8(&desc->config_len);
}
}
static inline unsigned mic_total_desc_size(struct mic_device_desc __iomem *desc)
{
- return mic_aligned_desc_size(desc) +
- mic_aligned_size(struct mic_device_ctrl);
+ return mic_aligned_desc_size(desc) + sizeof(struct mic_device_ctrl);
}
int mic_devices_init(struct mic_driver *mdrv);
{
struct mic_bootparam *bootparam = mdev->dp;
- bootparam->magic = MIC_MAGIC;
+ bootparam->magic = cpu_to_le32(MIC_MAGIC);
bootparam->c2h_shutdown_db = mdev->shutdown_db;
bootparam->h2c_shutdown_db = -1;
bootparam->h2c_config_db = -1;
* We are copying from IO below and should ideally use something
* like copy_to_user_fromio(..) if it existed.
*/
- if (copy_to_user(ubuf, dbuf, len)) {
+ if (copy_to_user(ubuf, (void __force *)dbuf, len)) {
err = -EFAULT;
dev_err(mic_dev(mvdev), "%s %d err %d\n",
__func__, __LINE__, err);
* We are copying to IO below and should ideally use something
* like copy_from_user_toio(..) if it existed.
*/
- if (copy_from_user(dbuf, ubuf, len)) {
+ if (copy_from_user((void __force *)dbuf, ubuf, len)) {
err = -EFAULT;
dev_err(mic_dev(mvdev), "%s %d err %d\n",
__func__, __LINE__, err);
continue;
}
mvdev->mvr[i].vrh.vring.used =
- mvdev->mdev->aper.va +
+ (void __force *)mvdev->mdev->aper.va +
le64_to_cpu(vqconfig[i].used_address);
}
void __user *argp)
{
DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
- int ret = 0, retry = 100, i;
+ int ret = 0, retry, i;
struct mic_bootparam *bootparam = mvdev->mdev->dp;
s8 db = bootparam->h2c_config_db;
mvdev->dc->config_change = MIC_VIRTIO_PARAM_CONFIG_CHANGED;
mvdev->mdev->ops->send_intr(mvdev->mdev, db);
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
ret = wait_event_timeout(wake,
mvdev->dc->guest_ack, msecs_to_jiffies(100));
if (ret)
}
/* Find the first free device page entry */
- for (i = mic_aligned_size(struct mic_bootparam);
+ for (i = sizeof(struct mic_bootparam);
i < MIC_DP_SIZE - mic_total_desc_size(dd_config);
i += mic_total_desc_size(devp)) {
devp = mdev->dp + i;
char irqname[10];
struct mic_bootparam *bootparam = mdev->dp;
u16 num;
+ dma_addr_t vr_addr;
mutex_lock(&mdev->mic_mutex);
}
vr->len = vr_size;
vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN);
- vr->info->magic = MIC_MAGIC + mvdev->virtio_id + i;
- vqconfig[i].address = mic_map_single(mdev,
- vr->va, vr_size);
- if (mic_map_error(vqconfig[i].address)) {
+ vr->info->magic = cpu_to_le32(MIC_MAGIC + mvdev->virtio_id + i);
+ vr_addr = mic_map_single(mdev, vr->va, vr_size);
+ if (mic_map_error(vr_addr)) {
free_pages((unsigned long)vr->va, get_order(vr_size));
ret = -ENOMEM;
dev_err(mic_dev(mvdev), "%s %d err %d\n",
__func__, __LINE__, ret);
goto err;
}
- vqconfig[i].address = cpu_to_le64(vqconfig[i].address);
+ vqconfig[i].address = cpu_to_le64(vr_addr);
vring_init(&vr->vr, num, vr->va, MIC_VIRTIO_RING_ALIGN);
ret = vringh_init_kern(&mvr->vrh,
struct mic_vdev *tmp_mvdev;
struct mic_device *mdev = mvdev->mdev;
DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
- int i, ret, retry = 100;
+ int i, ret, retry;
struct mic_vqconfig *vqconfig;
struct mic_bootparam *bootparam = mdev->dp;
s8 db;
"Requesting hot remove id %d\n", mvdev->virtio_id);
mvdev->dc->config_change = MIC_VIRTIO_PARAM_DEV_REMOVE;
mdev->ops->send_intr(mdev, db);
- for (i = retry; i--;) {
+ for (retry = 100; retry--;) {
ret = wait_event_timeout(wake,
mvdev->dc->guest_ack, msecs_to_jiffies(100));
if (ret)
break;
}
dev_dbg(mdev->sdev->parent,
- "Device id %d config_change %d guest_ack %d\n",
+ "Device id %d config_change %d guest_ack %d retry %d\n",
mvdev->virtio_id, mvdev->dc->config_change,
- mvdev->dc->guest_ack);
+ mvdev->dc->guest_ack, retry);
mvdev->dc->config_change = 0;
mvdev->dc->guest_ack = 0;
skip_hot_remove:
* so copy over the ramdisk @ 128M.
*/
memcpy_toio(mdev->aper.va + (mdev->bootaddr << 1), fw->data, fw->size);
- iowrite32(cpu_to_le32(mdev->bootaddr << 1), &bp->hdr.ramdisk_image);
- iowrite32(cpu_to_le32(fw->size), &bp->hdr.ramdisk_size);
+ iowrite32(mdev->bootaddr << 1, &bp->hdr.ramdisk_image);
+ iowrite32(fw->size, &bp->hdr.ramdisk_size);
release_firmware(fw);
error:
return rc;
struct mmc_host *host = func->card->host;
u64 addr = (host->slotno << 16) | func->num;
- ACPI_HANDLE_SET(&func->dev,
- acpi_get_child(ACPI_HANDLE(host->parent), addr));
+ acpi_preset_companion(&func->dev, ACPI_HANDLE(host->parent), addr);
}
#else
static inline void sdio_acpi_set_handle(struct sdio_func *func) {}
#include <linux/delay.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
+#include <linux/of.h>
#include <linux/omap-dma.h>
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#define OMAP_MMC_CMDTYPE_AC 2
#define OMAP_MMC_CMDTYPE_ADTC 3
-#define OMAP_DMA_MMC_TX 21
-#define OMAP_DMA_MMC_RX 22
-#define OMAP_DMA_MMC2_TX 54
-#define OMAP_DMA_MMC2_RX 55
-
-#define OMAP24XX_DMA_MMC2_TX 47
-#define OMAP24XX_DMA_MMC2_RX 48
-#define OMAP24XX_DMA_MMC1_TX 61
-#define OMAP24XX_DMA_MMC1_RX 62
-
-
#define DRIVER_NAME "mmci-omap"
/* Specifies how often in millisecs to poll for card status changes
struct mmc_omap_host *host = NULL;
struct resource *res;
dma_cap_mask_t mask;
- unsigned sig;
+ unsigned sig = 0;
int i, ret = 0;
int irq;
}
if (pdata->nr_slots == 0) {
dev_err(&pdev->dev, "no slots\n");
- return -ENXIO;
+ return -EPROBE_DEFER;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
host->dma_tx_burst = -1;
host->dma_rx_burst = -1;
- if (mmc_omap2())
- sig = host->id == 0 ? OMAP24XX_DMA_MMC1_TX : OMAP24XX_DMA_MMC2_TX;
- else
- sig = host->id == 0 ? OMAP_DMA_MMC_TX : OMAP_DMA_MMC2_TX;
- host->dma_tx = dma_request_channel(mask, omap_dma_filter_fn, &sig);
+ res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "tx");
+ if (res)
+ sig = res->start;
+ host->dma_tx = dma_request_slave_channel_compat(mask,
+ omap_dma_filter_fn, &sig, &pdev->dev, "tx");
if (!host->dma_tx)
dev_warn(host->dev, "unable to obtain TX DMA engine channel %u\n",
sig);
- if (mmc_omap2())
- sig = host->id == 0 ? OMAP24XX_DMA_MMC1_RX : OMAP24XX_DMA_MMC2_RX;
- else
- sig = host->id == 0 ? OMAP_DMA_MMC_RX : OMAP_DMA_MMC2_RX;
- host->dma_rx = dma_request_channel(mask, omap_dma_filter_fn, &sig);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "rx");
+ if (res)
+ sig = res->start;
+ host->dma_rx = dma_request_slave_channel_compat(mask,
+ omap_dma_filter_fn, &sig, &pdev->dev, "rx");
if (!host->dma_rx)
dev_warn(host->dev, "unable to obtain RX DMA engine channel %u\n",
sig);
return 0;
}
+#if IS_BUILTIN(CONFIG_OF)
+static const struct of_device_id mmc_omap_match[] = {
+ { .compatible = "ti,omap2420-mmc", },
+ { },
+};
+#endif
+
static struct platform_driver mmc_omap_driver = {
.probe = mmc_omap_probe,
.remove = mmc_omap_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
+ .of_match_table = of_match_ptr(mmc_omap_match),
},
};
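The OF-matching hunk above follows the usual pattern for drivers that must still build on non-DT kernels. A self-contained sketch under that assumption (driver and device names are made up):

#include <linux/mod_devicetable.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/*
 * of_match_ptr() evaluates to NULL when CONFIG_OF is disabled, and the
 * IS_BUILTIN(CONFIG_OF) guard compiles the table out entirely in that
 * configuration, avoiding a "defined but unused" warning.
 */
#if IS_BUILTIN(CONFIG_OF)
static const struct of_device_id example_match[] = {
        { .compatible = "ti,omap2420-mmc" },
        { /* sentinel */ },
};
#endif

static struct platform_driver example_driver = {
        /* .probe and .remove omitted for brevity */
        .driver = {
                .name           = "example-mmc",
                .of_match_table = of_match_ptr(example_match),
        },
};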
return -ENOMEM;
}
info->map.cached =
- ioremap_cached(info->map.phys, info->map.size);
+ ioremap_cache(info->map.phys, info->map.size);
if (!info->map.cached)
printk(KERN_WARNING "Failed to ioremap cached %s\n",
info->map.name);
dma_dev = host->dma_chan->device;
- flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP |
- DMA_COMPL_SKIP_DEST_UNMAP;
+ flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
phys_addr = dma_map_single(dma_dev->dev, p, len, dir);
if (dma_mapping_error(dma_dev->dev, phys_addr)) {
dma_dev = chan->device;
dma_addr = dma_map_single(dma_dev->dev, buffer, len, direction);
- flags |= DMA_COMPL_SKIP_SRC_UNMAP | DMA_COMPL_SKIP_DEST_UNMAP;
-
if (direction == DMA_TO_DEVICE) {
dma_src = dma_addr;
dma_dst = host->data_pa;
static void pxa3xx_nand_free_buff(struct pxa3xx_nand_info *info)
{
struct platform_device *pdev = info->pdev;
- if (use_dma) {
+ if (info->use_dma) {
pxa_free_dma(info->data_dma_ch);
dma_free_coherent(&pdev->dev, info->buf_size,
info->data_buff, info->data_buff_phys);
.compatible = "marvell,pxa3xx-nand",
.data = (void *)PXA3XX_NAND_VARIANT_PXA,
},
- {
- .compatible = "marvell,armada370-nand",
- .data = (void *)PXA3XX_NAND_VARIANT_ARMADA370,
- },
{}
};
MODULE_DEVICE_TABLE(of, pxa3xx_nand_dt_ids);
port = &(SLAVE_AD_INFO(slave).port);
- // if slave is null, the whole port is not initialized
+ /* if slave is null, the whole port is not initialized */
if (!port->slave) {
pr_warning("Warning: %s: speed changed for uninitialized port on %s\n",
slave->bond->dev->name, slave->dev->name);
return;
}
+ __get_state_machine_lock(port);
+
port->actor_admin_port_key &= ~AD_SPEED_KEY_BITS;
port->actor_oper_port_key = port->actor_admin_port_key |=
(__get_link_speed(port) << 1);
pr_debug("Port %d changed speed\n", port->actor_port_number);
- // there is no need to reselect a new aggregator, just signal the
- // state machines to reinitialize
+ /* there is no need to reselect a new aggregator, just signal the
+ * state machines to reinitialize
+ */
port->sm_vars |= AD_PORT_BEGIN;
+
+ __release_state_machine_lock(port);
}
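The locking added in this and the two following hunks follows one rule: key updates and the AD_PORT_BEGIN signal must be atomic with respect to the periodic state machine. A generic sketch of that discipline (types and names are stand-ins for the bonding internals):

#include <linux/spinlock.h>
#include <linux/types.h>

struct example_port {
        spinlock_t machine_lock;        /* stand-in for the state machine lock */
        u16 admin_key;
        u16 oper_key;
        bool begin;                     /* stand-in for AD_PORT_BEGIN */
};

/* The state machine runs under machine_lock, so it either sees the old
 * keys or the new keys plus the begin flag, never a half-done update.
 */
static void example_speed_changed(struct example_port *port, u16 speed_bits)
{
        spin_lock_bh(&port->machine_lock);
        port->oper_key = port->admin_key |= speed_bits;
        port->begin = true;
        spin_unlock_bh(&port->machine_lock);
}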
/**
port = &(SLAVE_AD_INFO(slave).port);
- // if slave is null, the whole port is not initialized
+ /* if slave is null, the whole port is not initialized */
if (!port->slave) {
pr_warning("%s: Warning: duplex changed for uninitialized port on %s\n",
slave->bond->dev->name, slave->dev->name);
return;
}
+ __get_state_machine_lock(port);
+
port->actor_admin_port_key &= ~AD_DUPLEX_KEY_BITS;
port->actor_oper_port_key = port->actor_admin_port_key |=
__get_duplex(port);
pr_debug("Port %d changed duplex\n", port->actor_port_number);
- // there is no need to reselect a new aggregator, just signal the
- // state machines to reinitialize
+ /* there is no need to reselect a new aggregator, just signal the
+ * state machines to reinitialize
+ */
port->sm_vars |= AD_PORT_BEGIN;
+
+ __release_state_machine_lock(port);
}
/**
port = &(SLAVE_AD_INFO(slave).port);
- // if slave is null, the whole port is not initialized
+ /* if slave is null, the whole port is not initialized */
if (!port->slave) {
pr_warning("Warning: %s: link status changed for uninitialized port on %s\n",
slave->bond->dev->name, slave->dev->name);
return;
}
- // on link down we are zeroing duplex and speed since some of the adaptors(ce1000.lan) report full duplex/speed instead of N/A(duplex) / 0(speed)
- // on link up we are forcing recheck on the duplex and speed since some of he adaptors(ce1000.lan) report
+ __get_state_machine_lock(port);
+ /* on link down we are zeroing duplex and speed since
+ * some of the adaptors (ce1000.lan) report full duplex/speed
+ * instead of N/A (duplex) / 0 (speed).
+ *
+ * on link up we are forcing a recheck of the duplex and speed since
+ * some of the adaptors (ce1000.lan) report.
+ */
if (link == BOND_LINK_UP) {
port->is_enabled = true;
port->actor_admin_port_key &= ~AD_DUPLEX_KEY_BITS;
port->actor_oper_port_key = (port->actor_admin_port_key &=
~AD_SPEED_KEY_BITS);
}
- //BOND_PRINT_DBG(("Port %d changed link status to %s", port->actor_port_number, ((link == BOND_LINK_UP)?"UP":"DOWN")));
- // there is no need to reselect a new aggregator, just signal the
- // state machines to reinitialize
+ pr_debug("Port %d changed link status to %s",
+ port->actor_port_number,
+ (link == BOND_LINK_UP) ? "UP" : "DOWN");
+ /* there is no need to reselect a new aggregator, just signal the
+ * state machines to reinitialize
+ */
port->sm_vars |= AD_PORT_BEGIN;
+
+ __release_state_machine_lock(port);
}
/*
if (!miimon) {
pr_warning("Warning: miimon must be specified, otherwise bonding will not detect link failure, speed and duplex which are essential for 802.3ad operation\n");
pr_warning("Forcing miimon to 100msec\n");
- miimon = 100;
+ miimon = BOND_DEFAULT_MIIMON;
}
}
if (!miimon) {
pr_warning("Warning: miimon must be specified, otherwise bonding will not detect link failure and link speed which are essential for TLB/ALB load balancing\n");
pr_warning("Forcing miimon to 100msec\n");
- miimon = 100;
+ miimon = BOND_DEFAULT_MIIMON;
}
}
(arp_ip_count < BOND_MAX_ARP_TARGETS) && arp_ip_target[i]; i++) {
/* not complete check, but should be good enough to
catch mistakes */
- __be32 ip = in_aton(arp_ip_target[i]);
- if (!isdigit(arp_ip_target[i][0]) || ip == 0 ||
- ip == htonl(INADDR_BROADCAST)) {
+ __be32 ip;
+ if (!in4_pton(arp_ip_target[i], -1, (u8 *)&ip, -1, NULL) ||
+ IS_IP_TARGET_UNUSABLE_ADDRESS(ip)) {
pr_warning("Warning: bad arp_ip_target module parameter (%s), ARP monitoring will not be performed\n",
arp_ip_target[i]);
arp_interval = 0;
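The switch to in4_pton() above tightens the validation; the driver's IS_IP_TARGET_UNUSABLE_ADDRESS() macro presumably wraps address checks like the ones spelled out in this sketch (the helper name is made up):

#include <linux/in.h>
#include <linux/inet.h>
#include <linux/kernel.h>

/*
 * in4_pton() with srclen -1 (NUL-terminated) and delim -1 (no trailing
 * delimiter allowed) rejects strings such as "1.2.3.4junk" that the
 * old isdigit() + in_aton() check happily accepted.
 */
static bool example_valid_arp_target(const char *s)
{
        __be32 ip;

        if (!in4_pton(s, -1, (u8 *)&ip, -1, NULL))
                return false;
        return ip != 0 && ip != htonl(INADDR_BROADCAST);
}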
return -EPERM;
}
- if (BOND_MODE_IS_LB(mode) && bond->params.arp_interval) {
- pr_err("%s: %s mode is incompatible with arp monitoring.\n",
- bond->dev->name, bond_mode_tbl[mode].modename);
- return -EINVAL;
+ if (BOND_NO_USES_ARP(mode) && bond->params.arp_interval) {
+ pr_info("%s: %s mode is incompatible with arp monitoring, start mii monitoring\n",
+ bond->dev->name, bond_mode_tbl[mode].modename);
+ /* disable arp monitoring */
+ bond->params.arp_interval = 0;
+ /* set miimon to default value */
+ bond->params.miimon = BOND_DEFAULT_MIIMON;
+ pr_info("%s: Setting MII monitoring interval to %d.\n",
+ bond->dev->name, bond->params.miimon);
}
/* don't cache arp_validate between modes */
ret = -EINVAL;
goto out;
}
- if (bond->params.mode == BOND_MODE_ALB ||
- bond->params.mode == BOND_MODE_TLB ||
- bond->params.mode == BOND_MODE_8023AD) {
+ if (BOND_NO_USES_ARP(bond->params.mode)) {
pr_info("%s: ARP monitoring cannot be used with ALB/TLB/802.3ad. Only MII monitoring is supported on %s.\n",
bond->dev->name, bond->dev->name);
ret = -EINVAL;
char *buf)
{
struct bonding *bond = to_bond(d);
- int packets_per_slave = bond->params.packets_per_slave;
+ unsigned int packets_per_slave = bond->params.packets_per_slave;
if (packets_per_slave > 1)
packets_per_slave = reciprocal_value(packets_per_slave);
- return sprintf(buf, "%d\n", packets_per_slave);
+ return sprintf(buf, "%u\n", packets_per_slave);
}
static ssize_t bonding_store_packets_per_slave(struct device *d,
#define BOND_MAX_ARP_TARGETS 16
+#define BOND_DEFAULT_MIIMON 100
+
#define IS_UP(dev) \
((((dev)->flags & IFF_UP) == IFF_UP) && \
netif_running(dev) && \
((mode) == BOND_MODE_TLB) || \
((mode) == BOND_MODE_ALB))
+#define BOND_NO_USES_ARP(mode) \
+ (((mode) == BOND_MODE_8023AD) || \
+ ((mode) == BOND_MODE_TLB) || \
+ ((mode) == BOND_MODE_ALB))
+
#define TX_QUEUE_OVERRIDE(mode) \
(((mode) == BOND_MODE_ACTIVEBACKUP) || \
((mode) == BOND_MODE_ROUNDROBIN))
return 0;
}
-static int c_can_get_berr_counter(const struct net_device *dev,
- struct can_berr_counter *bec)
+static int __c_can_get_berr_counter(const struct net_device *dev,
+ struct can_berr_counter *bec)
{
unsigned int reg_err_counter;
struct c_can_priv *priv = netdev_priv(dev);
- c_can_pm_runtime_get_sync(priv);
-
reg_err_counter = priv->read_reg(priv, C_CAN_ERR_CNT_REG);
bec->rxerr = (reg_err_counter & ERR_CNT_REC_MASK) >>
ERR_CNT_REC_SHIFT;
bec->txerr = reg_err_counter & ERR_CNT_TEC_MASK;
+ return 0;
+}
+
+static int c_can_get_berr_counter(const struct net_device *dev,
+ struct can_berr_counter *bec)
+{
+ struct c_can_priv *priv = netdev_priv(dev);
+ int err;
+
+ c_can_pm_runtime_get_sync(priv);
+ err = __c_can_get_berr_counter(dev, bec);
c_can_pm_runtime_put_sync(priv);
- return 0;
+ return err;
}
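The c_can split above is the common locked/unlocked helper pattern. A minimal sketch, assuming runtime PM is the only resource being bracketed (everything except the pm_runtime_* calls is hypothetical):

#include <linux/pm_runtime.h>

/* The __ variant may be called from IRQ context, where the device is
 * necessarily powered; the public wrapper takes and drops a runtime-PM
 * reference around it for all other callers.
 */
static int __example_read_counters(struct device *dev)
{
        return 0;       /* read hardware counters here */
}

static int example_read_counters(struct device *dev)
{
        int err;

        pm_runtime_get_sync(dev);
        err = __example_read_counters(dev);
        pm_runtime_put_sync(dev);
        return err;
}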
/*
if (!(val & (1 << (msg_obj_no - 1)))) {
can_get_echo_skb(dev,
msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST);
+ c_can_object_get(dev, 0, msg_obj_no, IF_COMM_ALL);
stats->tx_bytes += priv->read_reg(priv,
C_CAN_IFACE(MSGCTRL_REG, 0))
& IF_MCONT_DLC_MASK;
if (unlikely(!skb))
return 0;
- c_can_get_berr_counter(dev, &bec);
+ __c_can_get_berr_counter(dev, &bec);
reg_err_counter = priv->read_reg(priv, C_CAN_ERR_CNT_REG);
rx_err_passive = (reg_err_counter & ERR_CNT_RP_MASK) >>
ERR_CNT_RP_SHIFT;
dev_err(&pdev->dev, "no ipg clock defined\n");
return PTR_ERR(clk_ipg);
}
- clock_freq = clk_get_rate(clk_ipg);
clk_per = devm_clk_get(&pdev->dev, "per");
if (IS_ERR(clk_per)) {
dev_err(&pdev->dev, "no per clock defined\n");
return PTR_ERR(clk_per);
}
+ clock_freq = clk_get_rate(clk_per);
}
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
uint8_t isrc, status;
int n = 0;
- /* Shared interrupts and IRQ off? */
- if (priv->read_reg(priv, SJA1000_IER) == IRQ_OFF)
- return IRQ_NONE;
-
if (priv->pre_irq)
priv->pre_irq(priv);
+ /* Shared interrupts and IRQ off? */
+ if (priv->read_reg(priv, SJA1000_IER) == IRQ_OFF)
+ goto out;
+
while ((isrc = priv->read_reg(priv, SJA1000_IR)) &&
(n < SJA1000_MAX_IRQ)) {
- n++;
+
status = priv->read_reg(priv, SJA1000_SR);
/* check for absent controller due to hw unplug */
if (status == 0xFF && sja1000_is_absent(priv))
- return IRQ_NONE;
+ goto out;
if (isrc & IRQ_WUI)
netdev_warn(dev, "wakeup interrupt\n");
status = priv->read_reg(priv, SJA1000_SR);
/* check for absent controller */
if (status == 0xFF && sja1000_is_absent(priv))
- return IRQ_NONE;
+ goto out;
}
}
if (isrc & (IRQ_DOI | IRQ_EI | IRQ_BEI | IRQ_EPI | IRQ_ALI)) {
if (sja1000_err(dev, isrc, status))
break;
}
+ n++;
}
-
+out:
if (priv->post_irq)
priv->post_irq(priv);
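The sja1000 hunk above converts early returns into a single exit path so the pre-IRQ and post-IRQ hooks always pair up. A sketch of the same shape with stand-in types (nothing here is the sja1000 API):

#include <linux/interrupt.h>

struct example_priv {
        void (*pre_irq)(struct example_priv *priv);
        void (*post_irq)(struct example_priv *priv);
        int (*service)(struct example_priv *priv);      /* >0 while busy */
};

/* Once pre_irq has run, every path funnels through the tail that calls
 * post_irq; the interrupt is claimed only if any work was done.
 */
static irqreturn_t example_isr(int irq, void *dev_id)
{
        struct example_priv *priv = dev_id;
        int n = 0;

        if (priv->pre_irq)
                priv->pre_irq(priv);

        while (priv->service(priv) > 0)
                n++;

        if (priv->post_irq)
                priv->post_irq(priv);
        return n ? IRQ_HANDLED : IRQ_NONE;
}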
usb_unanchor_urb(urb);
usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
urb->transfer_dma);
+ usb_free_urb(urb);
break;
}
* allowed (MAX_TX_URBS).
*/
if (!context) {
- usb_unanchor_urb(urb);
usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
+ usb_free_urb(urb);
netdev_warn(netdev, "couldn't find free context\n");
/* set LED in default state (end of init phase) */
pcan_usb_pro_set_led(dev, 0, 1);
+ kfree(bi);
+ kfree(fi);
+
return 0;
err_out:
if (netif_msg_ifup(db))
dev_dbg(db->dev, "enabling %s\n", dev->name);
- if (devm_request_irq(db->dev, dev->irq, &emac_interrupt,
- 0, dev->name, dev))
+ if (request_irq(dev->irq, &emac_interrupt, 0, dev->name, dev))
return -EAGAIN;
/* Initialize EMAC board */
emac_shutdown(ndev);
+ free_irq(ndev->irq, ndev);
+
return 0;
}
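The emac change above trades devm-managed IRQs for explicit pairing, because the request happens in .ndo_open rather than at probe. A sketch under that assumption (the handler is a stub):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

static irqreturn_t example_interrupt(int irq, void *dev_id)
{
        return IRQ_HANDLED;
}

/* devm_request_irq() is only undone at driver detach, so an interface
 * closed and re-opened would find its IRQ still claimed; requesting in
 * open therefore requires freeing in stop.
 */
static int example_open(struct net_device *ndev)
{
        if (request_irq(ndev->irq, example_interrupt, 0, ndev->name, ndev))
                return -EAGAIN;
        return 0;
}

static int example_stop(struct net_device *ndev)
{
        free_irq(ndev->irq, ndev);
        return 0;
}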
/* Make sure pointer to data buffer is set */
wmb();
+ skb_tx_timestamp(skb);
+
*info = cpu_to_le32(FOR_EMAC | FIRST_OR_LAST_MASK | len);
/* Increment index to point to the next BD */
arc_reg_set(priv, R_STATUS, TXPL_MASK);
- skb_tx_timestamp(skb);
-
return NETDEV_TX_OK;
}
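The arc emac hunk above moves skb_tx_timestamp() ahead of the doorbell write. A sketch of why the ordering matters, with a hypothetical doorbell helper:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void example_ring_doorbell(struct net_device *dev) { }

/* The software timestamp must be taken while the driver still owns the
 * skb; once the doorbell hands the frame to hardware, TX completion
 * may free it at any moment, turning a later skb_tx_timestamp() into a
 * use-after-free.
 */
static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
        wmb();                          /* descriptor fields visible first */
        skb_tx_timestamp(skb);
        example_ring_doorbell(dev);
        return NETDEV_TX_OK;
}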
* Mask some pcie error bits
*/
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
- pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, &data);
- data &= ~(PCI_ERR_UNC_DLP | PCI_ERR_UNC_FCP);
- pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, data);
+ if (pos) {
+ pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, &data);
+ data &= ~(PCI_ERR_UNC_DLP | PCI_ERR_UNC_FCP);
+ pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, data);
+ }
/* clear error status */
pcie_capability_write_word(pdev, PCI_EXP_DEVSTA,
PCI_EXP_DEVSTA_NFED |
* Therefore, if they had been defined in the same union,
* data could get corrupted.
*/
- struct afex_vif_list_ramrod_data func_afex_rdata;
+ union {
+ struct afex_vif_list_ramrod_data viflist_data;
+ struct function_update_data func_update;
+ } func_afex_rdata;
/* used by dmae command executer */
struct dmae_command dmae[MAX_DMAE_C];
#define MCPR_SCRATCH_BASE(bp) \
(CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH)
+#define E1H_MAX_MF_SB_COUNT (HC_SB_MAX_SB_E1X/(E1HVN_MAX * PORT_MAX))
+
#endif /* bnx2x.h */
bnx2x_warpcore_enable_AN_KR2(phy, params, vars);
} else {
+ /* Enable Auto-Detect to support 1G over CL37 as well */
+ bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
+ MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X1, 0x10);
+
+ /* Force cl48 sync_status LOW to avoid getting stuck in CL73
+ * parallel-detect loop when CL73 and CL37 are enabled.
+ */
+ CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_AER_BLOCK,
+ MDIO_AER_BLOCK_AER_REG, 0);
+ bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
+ MDIO_WC_REG_RXB_ANA_RX_CONTROL_PCI, 0x0800);
+ bnx2x_set_aer_mmd(params, phy);
+
bnx2x_disable_kr2(params, vars, phy);
}
*edc_mode = EDC_MODE_ACTIVE_DAC;
else
check_limiting_mode = 1;
- } else if (copper_module_type &
- SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_PASSIVE) {
+ } else {
+ *edc_mode = EDC_MODE_PASSIVE_DAC;
+ /* Even if the PASSIVE_DAC indication is not set,
+ * treat it as a passive DAC cable, since some cables
+ * don't have this indication.
+ */
+ if (copper_module_type &
+ SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_PASSIVE) {
DP(NETIF_MSG_LINK,
"Passive Copper cable detected\n");
- *edc_mode =
- EDC_MODE_PASSIVE_DAC;
- } else {
- DP(NETIF_MSG_LINK,
- "Unknown copper-cable-type 0x%x !!!\n",
- copper_module_type);
- return -EINVAL;
+ } else {
+ DP(NETIF_MSG_LINK,
+ "Unknown copper-cable-type\n");
+ }
}
break;
}
(1<<11));
if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
- (phy->speed_cap_mask &
- PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) ||
- (phy->req_line_speed == SPEED_1000)) {
+ (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) ||
+ (phy->req_line_speed == SPEED_1000)) {
an_1000_val |= (1<<8);
autoneg_val |= (1<<9 | 1<<12);
if (phy->req_duplex == DUPLEX_FULL)
0x09,
&an_1000_val);
- /* Set 100 speed advertisement */
- if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
- (phy->speed_cap_mask &
- (PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL |
- PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF)))) {
- an_10_100_val |= (1<<7);
- /* Enable autoneg and restart autoneg for legacy speeds */
- autoneg_val |= (1<<9 | 1<<12);
-
- if (phy->req_duplex == DUPLEX_FULL)
- an_10_100_val |= (1<<8);
- DP(NETIF_MSG_LINK, "Advertising 100M\n");
- }
-
- /* Set 10 speed advertisement */
- if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
- (phy->speed_cap_mask &
- (PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL |
- PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF)))) {
- an_10_100_val |= (1<<5);
- autoneg_val |= (1<<9 | 1<<12);
- if (phy->req_duplex == DUPLEX_FULL)
+ /* Advertise 10/100 link speed */
+ if (phy->req_line_speed == SPEED_AUTO_NEG) {
+ if (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF) {
+ an_10_100_val |= (1<<5);
+ autoneg_val |= (1<<9 | 1<<12);
+ DP(NETIF_MSG_LINK, "Advertising 10M-HD\n");
+ }
+ if (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL) {
an_10_100_val |= (1<<6);
- DP(NETIF_MSG_LINK, "Advertising 10M\n");
+ autoneg_val |= (1<<9 | 1<<12);
+ DP(NETIF_MSG_LINK, "Advertising 10M-FD\n");
+ }
+ if (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF) {
+ an_10_100_val |= (1<<7);
+ autoneg_val |= (1<<9 | 1<<12);
+ DP(NETIF_MSG_LINK, "Advertising 100M-HD\n");
+ }
+ if (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL) {
+ an_10_100_val |= (1<<8);
+ autoneg_val |= (1<<9 | 1<<12);
+ DP(NETIF_MSG_LINK, "Advertising 100M-FD\n");
+ }
}
/* Only 10/100 are allowed to work in FORCE mode */
DP(NETIF_MSG_LINK, "Link changed:[%x %x]->%x\n", vars->link_up,
old_status, status);
+ /* Do not touch the link in case physical link down */
+ if ((vars->phy_flags & PHY_PHYSICAL_LINK_FLAG) == 0)
+ return 1;
+
/* a. Update shmem->link_status accordingly
* b. Update link_vars->link_up
*/
*/
not_kr2_device = (((base_page & 0x8000) == 0) ||
(((base_page & 0x8000) &&
- ((next_page & 0xe0) == 0x2))));
+ ((next_page & 0xe0) == 0x20))));
/* In case KR2 is already disabled, check if we need to re-enable it */
if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
}
}
- /* adjust igu_sb_cnt to MF for E1x */
- if (CHIP_IS_E1x(bp) && IS_MF(bp))
- bp->igu_sb_cnt /= E1HVN_MAX;
+ /* adjust igu_sb_cnt to MF for E1H */
+ if (CHIP_IS_E1H(bp) && IS_MF(bp))
+ bp->igu_sb_cnt = min_t(u8, bp->igu_sb_cnt, E1H_MAX_MF_SB_COUNT);
/* port info */
bnx2x_get_port_hwinfo(bp);
#define MDIO_WC_REG_RX1_PCI_CTRL 0x80ca
#define MDIO_WC_REG_RX2_PCI_CTRL 0x80da
#define MDIO_WC_REG_RX3_PCI_CTRL 0x80ea
+#define MDIO_WC_REG_RXB_ANA_RX_CONTROL_PCI 0x80fa
#define MDIO_WC_REG_XGXSBLK2_UNICORE_MODE_10G 0x8104
#define MDIO_WC_REG_XGXS_STATUS3 0x8129
#define MDIO_WC_REG_PAR_DET_10G_STATUS 0x8130
struct bnx2x_vlan_mac_ramrod_params p;
struct bnx2x_exe_queue_obj *exeq = &o->exe_queue;
struct bnx2x_exeq_elem *exeq_pos, *exeq_pos_n;
+ unsigned long flags;
int read_lock;
int rc = 0;
spin_lock_bh(&exeq->lock);
list_for_each_entry_safe(exeq_pos, exeq_pos_n, &exeq->exe_queue, link) {
- if (exeq_pos->cmd_data.vlan_mac.vlan_mac_flags ==
- *vlan_mac_flags) {
+ flags = exeq_pos->cmd_data.vlan_mac.vlan_mac_flags;
+ if (BNX2X_VLAN_MAC_CMP_FLAGS(flags) ==
+ BNX2X_VLAN_MAC_CMP_FLAGS(*vlan_mac_flags)) {
rc = exeq->remove(bp, exeq->owner, exeq_pos);
if (rc) {
BNX2X_ERR("Failed to remove command\n");
return read_lock;
list_for_each_entry(pos, &o->head, link) {
- if (pos->vlan_mac_flags == *vlan_mac_flags) {
+ flags = pos->vlan_mac_flags;
+ if (BNX2X_VLAN_MAC_CMP_FLAGS(flags) ==
+ BNX2X_VLAN_MAC_CMP_FLAGS(*vlan_mac_flags)) {
p.user_req.vlan_mac_flags = pos->vlan_mac_flags;
memcpy(&p.user_req.u, &pos->u, sizeof(pos->u));
rc = bnx2x_config_vlan_mac(bp, &p);
struct bnx2x_raw_obj *r = &o->raw;
/* Do nothing if only driver cleanup was requested */
- if (test_bit(RAMROD_DRV_CLR_ONLY, &p->ramrod_flags))
+ if (test_bit(RAMROD_DRV_CLR_ONLY, &p->ramrod_flags)) {
+ DP(BNX2X_MSG_SP, "Not configuring RSS ramrod_flags=%lx\n",
+ p->ramrod_flags);
return 0;
+ }
r->set_pending(r);
BNX2X_DONT_CONSUME_CAM_CREDIT,
BNX2X_DONT_CONSUME_CAM_CREDIT_DEST,
};
+/* When looking for matching filters, some flags are not interesting */
+#define BNX2X_VLAN_MAC_CMP_MASK (1 << BNX2X_UC_LIST_MAC | \
+ 1 << BNX2X_ETH_MAC | \
+ 1 << BNX2X_ISCSI_ETH_MAC | \
+ 1 << BNX2X_NETQ_ETH_MAC)
+#define BNX2X_VLAN_MAC_CMP_FLAGS(flags) \
+ ((flags) & BNX2X_VLAN_MAC_CMP_MASK)
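The comparison mask above exists so that matching ignores control bits. A toy sketch of the idea (bit positions are illustrative, not the bnx2x ones):

#include <linux/types.h>

#define EXAMPLE_ETH_MAC         0
#define EXAMPLE_UC_LIST_MAC     1
#define EXAMPLE_DONT_CONSUME    7       /* control bit, ignored in matching */
#define EXAMPLE_CMP_MASK        (1 << EXAMPLE_ETH_MAC | 1 << EXAMPLE_UC_LIST_MAC)

/* Entries added with extra control flags still match a delete-all for
 * the same classification, because only the masked bits are compared.
 */
static bool example_filters_match(unsigned long a, unsigned long b)
{
        return (a & EXAMPLE_CMP_MASK) == (b & EXAMPLE_CMP_MASK);
}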
struct bnx2x_vlan_mac_ramrod_params {
/* Object to run the command from */
/* next state */
vfop->state = BNX2X_VFOP_RXMODE_DONE;
+ /* record the accept flags in vfdb so the hypervisor can modify them
+ * if necessary
+ */
+ bnx2x_vfq(vf, ramrod->cl_id - vf->igu_base_id, accept_flags) =
+ ramrod->rx_accept_flags;
vfop->rc = bnx2x_config_rx_mode(bp, ramrod);
bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
op_err:
return;
}
+static void bnx2x_vf_prep_rx_mode(struct bnx2x *bp, u8 qid,
+ struct bnx2x_rx_mode_ramrod_params *ramrod,
+ struct bnx2x_virtf *vf,
+ unsigned long accept_flags)
+{
+ struct bnx2x_vf_queue *vfq = vfq_get(vf, qid);
+
+ memset(ramrod, 0, sizeof(*ramrod));
+ ramrod->cid = vfq->cid;
+ ramrod->cl_id = vfq_cl_id(vf, vfq);
+ ramrod->rx_mode_obj = &bp->rx_mode_obj;
+ ramrod->func_id = FW_VF_HANDLE(vf->abs_vfid);
+ ramrod->rx_accept_flags = accept_flags;
+ ramrod->tx_accept_flags = accept_flags;
+ ramrod->pstate = &vf->filter_state;
+ ramrod->state = BNX2X_FILTER_RX_MODE_PENDING;
+
+ set_bit(BNX2X_FILTER_RX_MODE_PENDING, &vf->filter_state);
+ set_bit(RAMROD_RX, &ramrod->ramrod_flags);
+ set_bit(RAMROD_TX, &ramrod->ramrod_flags);
+
+ ramrod->rdata = bnx2x_vf_sp(bp, vf, rx_mode_rdata.e2);
+ ramrod->rdata_mapping = bnx2x_vf_sp_map(bp, vf, rx_mode_rdata.e2);
+}
+
int bnx2x_vfop_rxmode_cmd(struct bnx2x *bp,
struct bnx2x_virtf *vf,
struct bnx2x_vfop_cmd *cmd,
int qid, unsigned long accept_flags)
{
- struct bnx2x_vf_queue *vfq = vfq_get(vf, qid);
struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
if (vfop) {
struct bnx2x_rx_mode_ramrod_params *ramrod =
&vf->op_params.rx_mode;
- memset(ramrod, 0, sizeof(*ramrod));
-
- /* Prepare ramrod parameters */
- ramrod->cid = vfq->cid;
- ramrod->cl_id = vfq_cl_id(vf, vfq);
- ramrod->rx_mode_obj = &bp->rx_mode_obj;
- ramrod->func_id = FW_VF_HANDLE(vf->abs_vfid);
-
- ramrod->rx_accept_flags = accept_flags;
- ramrod->tx_accept_flags = accept_flags;
- ramrod->pstate = &vf->filter_state;
- ramrod->state = BNX2X_FILTER_RX_MODE_PENDING;
-
- set_bit(BNX2X_FILTER_RX_MODE_PENDING, &vf->filter_state);
- set_bit(RAMROD_RX, &ramrod->ramrod_flags);
- set_bit(RAMROD_TX, &ramrod->ramrod_flags);
-
- ramrod->rdata =
- bnx2x_vf_sp(bp, vf, rx_mode_rdata.e2);
- ramrod->rdata_mapping =
- bnx2x_vf_sp_map(bp, vf, rx_mode_rdata.e2);
+ bnx2x_vf_prep_rx_mode(bp, qid, ramrod, vf, accept_flags);
bnx2x_vfop_opset(BNX2X_VFOP_RXMODE_CONFIG,
bnx2x_vfop_rxmode, cmd->done);
{
struct bnx2x *bp = netdev_priv(pci_get_drvdata(dev));
+ if (!IS_SRIOV(bp)) {
+ BNX2X_ERR("failed to configure SR-IOV since vfdb was not allocated. Check dmesg for errors in probe stage\n");
+ return -EINVAL;
+ }
+
DP(BNX2X_MSG_IOV, "bnx2x_sriov_configure called with %d, BNX2X_NR_VIRTFN(bp) was %d\n",
num_vfs_param, BNX2X_NR_VIRTFN(bp));
bnx2x_iov_static_resc(bp, vf);
}
- /* prepare msix vectors in VF configuration space */
+ /* prepare msix vectors in VF configuration space - the value in the
+ * PCI configuration space should be the index of the last entry,
+ * namely one less than the actual size of the table
+ */
for (vf_idx = first_vf; vf_idx < first_vf + req_vfs; vf_idx++) {
bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf_idx));
REG_WR(bp, PCICFG_OFFSET + GRC_CONFIG_REG_VF_MSIX_CONTROL,
- num_vf_queues);
+ num_vf_queues - 1);
DP(BNX2X_MSG_IOV, "set msix vec num in VF %d cfg space to %d\n",
- vf_idx, num_vf_queues);
+ vf_idx, num_vf_queues - 1);
}
bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
int bnx2x_set_vf_vlan(struct net_device *dev, int vfidx, u16 vlan, u8 qos)
{
+ struct bnx2x_queue_state_params q_params = {NULL};
+ struct bnx2x_vlan_mac_ramrod_params ramrod_param;
+ struct bnx2x_queue_update_params *update_params;
+ struct pf_vf_bulletin_content *bulletin = NULL;
+ struct bnx2x_rx_mode_ramrod_params rx_ramrod;
struct bnx2x *bp = netdev_priv(dev);
- int rc, q_logical_state;
+ struct bnx2x_vlan_mac_obj *vlan_obj;
+ unsigned long vlan_mac_flags = 0;
+ unsigned long ramrod_flags = 0;
struct bnx2x_virtf *vf = NULL;
- struct pf_vf_bulletin_content *bulletin = NULL;
+ unsigned long accept_flags;
+ int rc;
/* sanity and init */
rc = bnx2x_vf_ndo_prep(bp, vfidx, &vf, &bulletin);
/* update PF's copy of the VF's bulletin. No point in posting the vlan
* to the VF since it doesn't have anything to do with it. But it is useful
* to store it here in case the VF is not up yet and we can only
- * configure the vlan later when it does.
+ * configure the vlan later when it does. Treat vlan id 0 as a request
+ * to remove the host tag.
*/
- bulletin->valid_bitmap |= 1 << VLAN_VALID;
+ if (vlan > 0)
+ bulletin->valid_bitmap |= 1 << VLAN_VALID;
+ else
+ bulletin->valid_bitmap &= ~(1 << VLAN_VALID);
bulletin->vlan = vlan;
/* is vf initialized and queue set up? */
- q_logical_state =
- bnx2x_get_q_logical_state(bp, &bnx2x_leading_vfq(vf, sp_obj));
- if (vf->state == VF_ENABLED &&
- q_logical_state == BNX2X_Q_LOGICAL_STATE_ACTIVE) {
- /* configure the vlan in device on this vf's queue */
- unsigned long ramrod_flags = 0;
- unsigned long vlan_mac_flags = 0;
- struct bnx2x_vlan_mac_obj *vlan_obj =
- &bnx2x_leading_vfq(vf, vlan_obj);
- struct bnx2x_vlan_mac_ramrod_params ramrod_param;
- struct bnx2x_queue_state_params q_params = {NULL};
- struct bnx2x_queue_update_params *update_params;
+ if (vf->state != VF_ENABLED ||
+ bnx2x_get_q_logical_state(bp, &bnx2x_leading_vfq(vf, sp_obj)) !=
+ BNX2X_Q_LOGICAL_STATE_ACTIVE)
+ return rc;
- rc = validate_vlan_mac(bp, &bnx2x_leading_vfq(vf, mac_obj));
- if (rc)
- return rc;
- memset(&ramrod_param, 0, sizeof(ramrod_param));
+ /* configure the vlan in device on this vf's queue */
+ vlan_obj = &bnx2x_leading_vfq(vf, vlan_obj);
+ rc = validate_vlan_mac(bp, &bnx2x_leading_vfq(vf, mac_obj));
+ if (rc)
+ return rc;
- /* must lock vfpf channel to protect against vf flows */
- bnx2x_lock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
+ /* must lock vfpf channel to protect against vf flows */
+ bnx2x_lock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
- /* remove existing vlans */
- __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
- rc = vlan_obj->delete_all(bp, vlan_obj, &vlan_mac_flags,
- &ramrod_flags);
- if (rc) {
- BNX2X_ERR("failed to delete vlans\n");
- rc = -EINVAL;
- goto out;
- }
+ /* remove existing vlans */
+ __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
+ rc = vlan_obj->delete_all(bp, vlan_obj, &vlan_mac_flags,
+ &ramrod_flags);
+ if (rc) {
+ BNX2X_ERR("failed to delete vlans\n");
+ rc = -EINVAL;
+ goto out;
+ }
+
+ /* need to remove/add the VF's accept_any_vlan bit */
+ accept_flags = bnx2x_leading_vfq(vf, accept_flags);
+ if (vlan)
+ clear_bit(BNX2X_ACCEPT_ANY_VLAN, &accept_flags);
+ else
+ set_bit(BNX2X_ACCEPT_ANY_VLAN, &accept_flags);
+
+ bnx2x_vf_prep_rx_mode(bp, LEADING_IDX, &rx_ramrod, vf,
+ accept_flags);
+ bnx2x_leading_vfq(vf, accept_flags) = accept_flags;
+ bnx2x_config_rx_mode(bp, &rx_ramrod);
+
+ /* configure the new vlan to device */
+ memset(&ramrod_param, 0, sizeof(ramrod_param));
+ __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
+ ramrod_param.vlan_mac_obj = vlan_obj;
+ ramrod_param.ramrod_flags = ramrod_flags;
+ set_bit(BNX2X_DONT_CONSUME_CAM_CREDIT,
+ &ramrod_param.user_req.vlan_mac_flags);
+ ramrod_param.user_req.u.vlan.vlan = vlan;
+ ramrod_param.user_req.cmd = BNX2X_VLAN_MAC_ADD;
+ rc = bnx2x_config_vlan_mac(bp, &ramrod_param);
+ if (rc) {
+ BNX2X_ERR("failed to configure vlan\n");
+ rc = -EINVAL;
+ goto out;
+ }
- /* send queue update ramrod to configure default vlan and silent
- * vlan removal
+ /* send queue update ramrod to configure default vlan and silent
+ * vlan removal
+ */
+ __set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags);
+ q_params.cmd = BNX2X_Q_CMD_UPDATE;
+ q_params.q_obj = &bnx2x_leading_vfq(vf, sp_obj);
+ update_params = &q_params.params.update;
+ __set_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN_CHNG,
+ &update_params->update_flags);
+ __set_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM_CHNG,
+ &update_params->update_flags);
+ if (vlan == 0) {
+ /* if vlan is 0 then we want to leave the VF traffic
+ * untagged, and leave the incoming traffic untouched
+ * (i.e. do not remove any vlan tags).
+ */
+ __clear_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN,
+ &update_params->update_flags);
+ __clear_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM,
+ &update_params->update_flags);
+ } else {
+ /* configure default vlan to vf queue and set silent
+ * vlan removal (the vf remains unaware of this vlan).
*/
- __set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags);
- q_params.cmd = BNX2X_Q_CMD_UPDATE;
- q_params.q_obj = &bnx2x_leading_vfq(vf, sp_obj);
- update_params = &q_params.params.update;
- __set_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN_CHNG,
+ __set_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN,
&update_params->update_flags);
- __set_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM_CHNG,
+ __set_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM,
&update_params->update_flags);
+ update_params->def_vlan = vlan;
+ update_params->silent_removal_value =
+ vlan & VLAN_VID_MASK;
+ update_params->silent_removal_mask = VLAN_VID_MASK;
+ }
- if (vlan == 0) {
- /* if vlan is 0 then we want to leave the VF traffic
- * untagged, and leave the incoming traffic untouched
- * (i.e. do not remove any vlan tags).
- */
- __clear_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN,
- &update_params->update_flags);
- __clear_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM,
- &update_params->update_flags);
- } else {
- /* configure the new vlan to device */
- __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
- ramrod_param.vlan_mac_obj = vlan_obj;
- ramrod_param.ramrod_flags = ramrod_flags;
- ramrod_param.user_req.u.vlan.vlan = vlan;
- ramrod_param.user_req.cmd = BNX2X_VLAN_MAC_ADD;
- rc = bnx2x_config_vlan_mac(bp, &ramrod_param);
- if (rc) {
- BNX2X_ERR("failed to configure vlan\n");
- rc = -EINVAL;
- goto out;
- }
-
- /* configure default vlan to vf queue and set silent
- * vlan removal (the vf remains unaware of this vlan).
- */
- update_params = &q_params.params.update;
- __set_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN,
- &update_params->update_flags);
- __set_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM,
- &update_params->update_flags);
- update_params->def_vlan = vlan;
- }
+ /* Update the Queue state */
+ rc = bnx2x_queue_state_change(bp, &q_params);
+ if (rc) {
+ BNX2X_ERR("Failed to configure default VLAN\n");
+ goto out;
+ }
- /* Update the Queue state */
- rc = bnx2x_queue_state_change(bp, &q_params);
- if (rc) {
- BNX2X_ERR("Failed to configure default VLAN\n");
- goto out;
- }
- /* clear the flag indicating that this VF needs its vlan
- * (will only be set if the HV configured the Vlan before vf was
- * up and we were called because the VF came up later
- */
+ /* clear the flag indicating that this VF needs its vlan
+ * (it will only be set if the HV configured the vlan before the VF
+ * was up and we were called because the VF came up later).
+ */
out:
- vf->cfg_flags &= ~VF_CFG_VLAN;
- bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
- }
+ vf->cfg_flags &= ~VF_CFG_VLAN;
+ bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
+
return rc;
}
/* VLANs object */
struct bnx2x_vlan_mac_obj vlan_obj;
atomic_t vlan_count; /* 0 means vlan-0 is set ~ untagged */
+ unsigned long accept_flags; /* last accept flags configured */
/* Queue Slow-path State object */
struct bnx2x_queue_sp_obj sp_obj;
return -EINVAL;
}
- BNX2X_ERR("valid ME register value: 0x%08x\n", me_reg);
+ DP(BNX2X_MSG_IOV, "valid ME register value: 0x%08x\n", me_reg);
*vf_id = (me_reg & ME_REG_VF_NUM_MASK) >> ME_REG_VF_NUM_SHIFT;
if (msg->flags & VFPF_SET_Q_FILTERS_RX_MASK_CHANGED) {
unsigned long accept = 0;
+ struct pf_vf_bulletin_content *bulletin =
+ BP_VF_BULLETIN(bp, vf->index);
/* convert VF-PF if mask to bnx2x accept flags */
if (msg->rx_mask & VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST)
__set_bit(BNX2X_ACCEPT_BROADCAST, &accept);
/* A packet arriving at the vf's mac should be accepted
- * with any vlan
+ * with any vlan, unless a vlan has already been
+ * configured.
*/
- __set_bit(BNX2X_ACCEPT_ANY_VLAN, &accept);
+ if (!(bulletin->valid_bitmap & (1 << VLAN_VALID)))
+ __set_bit(BNX2X_ACCEPT_ANY_VLAN, &accept);
/* set rx-mode */
rc = bnx2x_vfop_rxmode_cmd(bp, vf, &cmd,
goto response;
}
}
+ /* if vlan was set by hypervisor we don't allow guest to config vlan */
+ if (bulletin->valid_bitmap & 1 << VLAN_VALID) {
+ int i;
+
+ /* search for vlan filters */
+ for (i = 0; i < filters->n_mac_vlan_filters; i++) {
+ if (filters->filters[i].flags &
+ VFPF_Q_FILTER_VLAN_TAG_VALID) {
+ BNX2X_ERR("VF[%d] attempted to configure vlan but one was already set by Hypervisor. Aborting request\n",
+ vf->abs_vfid);
+ vf->op_rc = -EPERM;
+ goto response;
+ }
+ }
+ }
/* verify vf_qid */
if (filters->vf_qid > vf_rxq_count(vf))
vf_op_params->rss_result_mask = rss_tlv->rss_result_mask;
/* flags handled individually for backward/forward compatibility */
+ vf_op_params->rss_flags = 0;
+ vf_op_params->ramrod_flags = 0;
+
if (rss_tlv->rss_flags & VFPF_RSS_MODE_DISABLED)
__set_bit(BNX2X_RSS_MODE_DISABLED, &vf_op_params->rss_flags);
if (rss_tlv->rss_flags & VFPF_RSS_MODE_REGULAR)
{
u32 base = (u32) mapping & 0xffffffff;
- return (base > 0xffffdcc0) && (base + len + 8 < base);
+ return base + len + 8 < base;
}
/* Test for TSO DMA buffers that cross into regions which are within MSS bytes
void (*write_op)(struct tg3 *, u32, u32);
int i, err;
+ if (!pci_device_is_present(tp->pdev))
+ return -ENODEV;
+
tg3_nvram_lock(tp);
tg3_ape_lock(tp, TG3_APE_LOCK_GRC);
static ssize_t tg3_show_temp(struct device *dev,
struct device_attribute *devattr, char *buf)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct tg3 *tp = netdev_priv(netdev);
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ struct tg3 *tp = dev_get_drvdata(dev);
u32 temperature;
spin_lock_bh(&tp->lock);
static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO, tg3_show_temp, NULL,
TG3_TEMP_MAX_OFFSET);
-static struct attribute *tg3_attributes[] = {
+static struct attribute *tg3_attrs[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_crit.dev_attr.attr,
&sensor_dev_attr_temp1_max.dev_attr.attr,
NULL
};
-
-static const struct attribute_group tg3_group = {
- .attrs = tg3_attributes,
-};
+ATTRIBUTE_GROUPS(tg3);
static void tg3_hwmon_close(struct tg3 *tp)
{
if (tp->hwmon_dev) {
hwmon_device_unregister(tp->hwmon_dev);
tp->hwmon_dev = NULL;
- sysfs_remove_group(&tp->pdev->dev.kobj, &tg3_group);
}
}
static void tg3_hwmon_open(struct tg3 *tp)
{
- int i, err;
+ int i;
u32 size = 0;
struct pci_dev *pdev = tp->pdev;
struct tg3_ocir ocirs[TG3_SD_NUM_RECS];
if (!size)
return;
- /* Register hwmon sysfs hooks */
- err = sysfs_create_group(&pdev->dev.kobj, &tg3_group);
- if (err) {
- dev_err(&pdev->dev, "Cannot create sysfs group, aborting\n");
- return;
- }
-
- tp->hwmon_dev = hwmon_device_register(&pdev->dev);
+ tp->hwmon_dev = hwmon_device_register_with_groups(&pdev->dev, "tg3",
+ tp, tg3_groups);
if (IS_ERR(tp->hwmon_dev)) {
tp->hwmon_dev = NULL;
dev_err(&pdev->dev, "Cannot register hwmon device, aborting\n");
- sysfs_remove_group(&pdev->dev.kobj, &tg3_group);
}
}
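The tg3 hwmon conversion above leans on two helpers that remove all manual sysfs bookkeeping. A compact sketch with a made-up driver struct (only the hwmon/sysfs calls are real API):

#include <linux/device.h>
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>
#include <linux/kernel.h>

struct example_data {
        u32 temp_mdeg;
};

static ssize_t example_show_temp(struct device *dev,
                                 struct device_attribute *attr, char *buf)
{
        struct example_data *data = dev_get_drvdata(dev);

        return sprintf(buf, "%u\n", data->temp_mdeg);
}
static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, example_show_temp, NULL, 0);

static struct attribute *example_attrs[] = {
        &sensor_dev_attr_temp1_input.dev_attr.attr,
        NULL
};
ATTRIBUTE_GROUPS(example);      /* generates example_groups[] */

/* Registration hands the groups to the hwmon core, which creates and
 * removes the attributes itself:
 *
 *   hwmon_device_register_with_groups(parent, "example", data,
 *                                     example_groups);
 */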
memset(&tp->net_stats_prev, 0, sizeof(tp->net_stats_prev));
memset(&tp->estats_prev, 0, sizeof(tp->estats_prev));
- tg3_power_down_prepare(tp);
-
- tg3_carrier_off(tp);
+ if (pci_device_is_present(tp->pdev)) {
+ tg3_power_down_prepare(tp);
+ tg3_carrier_off(tp);
+ }
return 0;
}
/* Clear this out for sanity. */
tw32(TG3PCI_MEM_WIN_BASE_ADDR, 0);
+ /* Clear TG3PCI_REG_BASE_ADDR to prevent hangs. */
+ tw32(TG3PCI_REG_BASE_ADDR, 0);
+
pci_read_config_dword(tp->pdev, TG3PCI_PCISTATE,
&pci_state_reg);
if ((pci_state_reg & PCISTATE_CONV_PCI_MODE) == 0 &&
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *dev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(dev);
- int err;
+ int err = 0;
+
+ rtnl_lock();
if (!netif_running(dev))
- return 0;
+ goto unlock;
tg3_reset_task_cancel(tp);
tg3_phy_stop(tp);
tg3_phy_start(tp);
}
+unlock:
+ rtnl_unlock();
return err;
}
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *dev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(dev);
- int err;
+ int err = 0;
+
+ rtnl_lock();
if (!netif_running(dev))
- return 0;
+ goto unlock;
netif_device_attach(dev);
if (!err)
tg3_phy_start(tp);
+unlock:
+ rtnl_unlock();
return err;
}
#endif /* CONFIG_PM_SLEEP */
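The rtnl_lock() added to the tg3 suspend/resume handlers above follows a simple shape. A sketch of the suspend side, assuming netif_device_detach() is all the quiescing needed:

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/* Holding RTNL across the handler keeps the interface from being
 * opened or closed mid-suspend; the single unlock label guarantees the
 * lock is dropped on every path, including the !netif_running() one.
 */
static int example_suspend(struct net_device *dev)
{
        int err = 0;

        rtnl_lock();
        if (!netif_running(dev))
                goto unlock;

        netif_device_detach(dev);
unlock:
        rtnl_unlock();
        return err;
}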
#include <asm/io.h>
#include "cxgb4_uld.h"
-#define FW_VERSION_MAJOR 1
-#define FW_VERSION_MINOR 4
-#define FW_VERSION_MICRO 0
+#define T4FW_VERSION_MAJOR 0x01
+#define T4FW_VERSION_MINOR 0x06
+#define T4FW_VERSION_MICRO 0x18
+#define T4FW_VERSION_BUILD 0x00
-#define FW_VERSION_MAJOR_T5 0
-#define FW_VERSION_MINOR_T5 0
-#define FW_VERSION_MICRO_T5 0
+#define T5FW_VERSION_MAJOR 0x01
+#define T5FW_VERSION_MINOR 0x08
+#define T5FW_VERSION_MICRO 0x1C
+#define T5FW_VERSION_BUILD 0x00
#define CH_WARN(adap, fmt, ...) dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__)
uint32_t dack_re; /* DACK timer resolution */
unsigned short tx_modq[NCHAN]; /* channel to modulation queue map */
+
+ u32 vlan_pri_map; /* cached TP_VLAN_PRI_MAP */
+ u32 ingress_config; /* cached TP_INGRESS_CONFIG */
+
+ /* TP_VLAN_PRI_MAP Compressed Filter Tuple field offsets. This is a
+ * subset of the set of fields which may be present in the Compressed
+ * Filter Tuple portion of filters and TCP TCB connections. The
+ * fields which are present are controlled by the TP_VLAN_PRI_MAP.
+ * Since a variable number of fields may or may not be present, their
+ * shifted field positions within the Compressed Filter Tuple may
+ * vary, or not even be present if the field isn't selected in
+ * TP_VLAN_PRI_MAP. Since some of these fields are needed in various
+ * places we store their offsets here, or a -1 if the field isn't
+ * present.
+ */
+ int vlan_shift;
+ int vnic_shift;
+ int port_shift;
+ int protocol_shift;
};
struct vpd_params {
unsigned char width;
};
+#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
+#define CHELSIO_CHIP_FPGA 0x100
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf)
+#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
+
+#define CHELSIO_T4 0x4
+#define CHELSIO_T5 0x5
+
+enum chip_type {
+ T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
+ T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
+ T4_FIRST_REV = T4_A1,
+ T4_LAST_REV = T4_A2,
+
+ T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+ T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+ T5_FIRST_REV = T5_A0,
+ T5_LAST_REV = T5_A1,
+};
+
struct adapter_params {
struct tp_params tp;
struct vpd_params vpd;
unsigned char nports; /* # of ethernet ports */
unsigned char portvec;
- unsigned char rev; /* chip revision */
+ enum chip_type chip; /* chip code */
unsigned char offload;
unsigned char bypass;
unsigned int ofldq_wr_cred;
};
+#include "t4fw_api.h"
+
+#define FW_VERSION(chip) ( \
+ FW_HDR_FW_VER_MAJOR_GET(chip##FW_VERSION_MAJOR) | \
+ FW_HDR_FW_VER_MINOR_GET(chip##FW_VERSION_MINOR) | \
+ FW_HDR_FW_VER_MICRO_GET(chip##FW_VERSION_MICRO) | \
+ FW_HDR_FW_VER_BUILD_GET(chip##FW_VERSION_BUILD))
+#define FW_INTFVER(chip, intf) (FW_HDR_INTFVER_##intf)
+
+struct fw_info {
+ u8 chip;
+ char *fs_name;
+ char *fw_mod_name;
+ struct fw_hdr fw_hdr;
+};
+
+
struct trace_params {
u32 data[TRACE_LEN / 4];
u32 mask[TRACE_LEN / 4];
struct l2t_data;
-#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
-#define CHELSIO_CHIP_VERSION(code) ((code) >> 4)
-#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
-
-#define CHELSIO_T4 0x4
-#define CHELSIO_T5 0x5
-
-enum chip_type {
- T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0),
- T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
- T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
- T4_FIRST_REV = T4_A1,
- T4_LAST_REV = T4_A3,
-
- T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
- T5_FIRST_REV = T5_A1,
- T5_LAST_REV = T5_A1,
-};
-
#ifdef CONFIG_PCI_IOV
/* T4 supports SRIOV on PF0-3 and T5 on PF0-7. However, the Serial
static inline int is_t5(enum chip_type chip)
{
- return (chip >= T5_FIRST_REV && chip <= T5_LAST_REV);
+ return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T5;
}
static inline int is_t4(enum chip_type chip)
{
- return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV);
+ return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4;
}
static inline u32 t4_read_reg(struct adapter *adap, u32 reg_addr)
int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size);
unsigned int t4_flash_cfg_addr(struct adapter *adapter);
int t4_load_cfg(struct adapter *adapter, const u8 *cfg_data, unsigned int size);
-int t4_check_fw_version(struct adapter *adapter);
+int t4_get_fw_version(struct adapter *adapter, u32 *vers);
+int t4_get_tp_version(struct adapter *adapter, u32 *vers);
+int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ const u8 *fw_data, unsigned int fw_size,
+ struct fw_hdr *card_fw, enum dev_state state, int *reset);
int t4_prep_adapter(struct adapter *adapter);
+int t4_init_tp_params(struct adapter *adap);
+int t4_filter_field_shift(const struct adapter *adap, int filter_sel);
int t4_port_init(struct adapter *adap, int mbox, int pf, int vf);
void t4_fatal_err(struct adapter *adapter);
int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
{ 0, }
};
-#define FW_FNAME "cxgb4/t4fw.bin"
+#define FW4_FNAME "cxgb4/t4fw.bin"
#define FW5_FNAME "cxgb4/t5fw.bin"
-#define FW_CFNAME "cxgb4/t4-config.txt"
+#define FW4_CFNAME "cxgb4/t4-config.txt"
#define FW5_CFNAME "cxgb4/t5-config.txt"
MODULE_DESCRIPTION(DRV_DESC);
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(DRV_VERSION);
MODULE_DEVICE_TABLE(pci, cxgb4_pci_tbl);
-MODULE_FIRMWARE(FW_FNAME);
+MODULE_FIRMWARE(FW4_FNAME);
MODULE_FIRMWARE(FW5_FNAME);
/*
return 0;
}
-/*
- * Returns 0 if new FW was successfully loaded, a positive errno if a load was
- * started but failed, and a negative errno if flash load couldn't start.
- */
-static int upgrade_fw(struct adapter *adap)
-{
- int ret;
- u32 vers, exp_major;
- const struct fw_hdr *hdr;
- const struct firmware *fw;
- struct device *dev = adap->pdev_dev;
- char *fw_file_name;
-
- switch (CHELSIO_CHIP_VERSION(adap->chip)) {
- case CHELSIO_T4:
- fw_file_name = FW_FNAME;
- exp_major = FW_VERSION_MAJOR;
- break;
- case CHELSIO_T5:
- fw_file_name = FW5_FNAME;
- exp_major = FW_VERSION_MAJOR_T5;
- break;
- default:
- dev_err(dev, "Unsupported chip type, %x\n", adap->chip);
- return -EINVAL;
- }
-
- ret = request_firmware(&fw, fw_file_name, dev);
- if (ret < 0) {
- dev_err(dev, "unable to load firmware image %s, error %d\n",
- fw_file_name, ret);
- return ret;
- }
-
- hdr = (const struct fw_hdr *)fw->data;
- vers = ntohl(hdr->fw_ver);
- if (FW_HDR_FW_VER_MAJOR_GET(vers) != exp_major) {
- ret = -EINVAL; /* wrong major version, won't do */
- goto out;
- }
-
- /*
- * If the flash FW is unusable or we found something newer, load it.
- */
- if (FW_HDR_FW_VER_MAJOR_GET(adap->params.fw_vers) != exp_major ||
- vers > adap->params.fw_vers) {
- dev_info(dev, "upgrading firmware ...\n");
- ret = t4_fw_upgrade(adap, adap->mbox, fw->data, fw->size,
- /*force=*/false);
- if (!ret)
- dev_info(dev,
- "firmware upgraded to version %pI4 from %s\n",
- &hdr->fw_ver, fw_file_name);
- else
- dev_err(dev, "firmware upgrade failed! err=%d\n", -ret);
- } else {
- /*
- * Tell our caller that we didn't upgrade the firmware.
- */
- ret = -EINVAL;
- }
-
-out: release_firmware(fw);
- return ret;
-}
-
/*
* Allocate a chunk of memory using kmalloc or, if that fails, vmalloc.
* The allocated memory is cleared.
static int get_regs_len(struct net_device *dev)
{
struct adapter *adap = netdev2adap(dev);
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
return T4_REGMAP_SIZE;
else
return T5_REGMAP_SIZE;
data += sizeof(struct port_stats) / sizeof(u64);
collect_sge_port_stats(adapter, pi, (struct queue_port_stats *)data);
data += sizeof(struct queue_port_stats) / sizeof(u64);
- if (!is_t4(adapter->chip)) {
+ if (!is_t4(adapter->params.chip)) {
t4_write_reg(adapter, SGE_STAT_CFG, STATSOURCE_T5(7));
val1 = t4_read_reg(adapter, SGE_STAT_TOTAL);
val2 = t4_read_reg(adapter, SGE_STAT_MATCH);
*/
static inline unsigned int mk_adap_vers(const struct adapter *ap)
{
- return CHELSIO_CHIP_VERSION(ap->chip) |
- (CHELSIO_CHIP_RELEASE(ap->chip) << 10) | (1 << 16);
+ return CHELSIO_CHIP_VERSION(ap->params.chip) |
+ (CHELSIO_CHIP_RELEASE(ap->params.chip) << 10) | (1 << 16);
}
static void reg_block_dump(struct adapter *ap, void *buf, unsigned int start,
static const unsigned int *reg_ranges;
int arr_size = 0, buf_size = 0;
- if (is_t4(ap->chip)) {
+ if (is_t4(ap->params.chip)) {
reg_ranges = &t4_reg_ranges[0];
arr_size = ARRAY_SIZE(t4_reg_ranges);
buf_size = T4_REGMAP_SIZE;
size = t4_read_reg(adap, MA_EDRAM1_BAR);
add_debugfs_mem(adap, "edc1", MEM_EDC1, EDRAM_SIZE_GET(size));
}
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
size = t4_read_reg(adap, MA_EXT_MEMORY_BAR);
if (i & EXT_MEM_ENABLE)
add_debugfs_mem(adap, "mc", MEM_MC,
if (stid >= 0) {
t->stid_tab[stid].data = data;
stid += t->stid_base;
- t->stids_in_use++;
+ /* IPv6 requires a max of 520 bits or 16 cells in the TCAM.
+ * This is equivalent to 4 TIDs. With CLIP enabled it
+ * needs 2 TIDs.
+ */
+ if (family == PF_INET)
+ t->stids_in_use++;
+ else
+ t->stids_in_use += 4;
}
spin_unlock_bh(&t->stid_lock);
return stid;
}
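The stid accounting above is symmetric by design. A one-liner sketch of the per-family cost (the helper name is made up):

#include <linux/socket.h>

/* An IPv4 server consumes one stid; an IPv6 server the equivalent of
 * four (up to 520 match bits = 16 TCAM cells). Alloc and free must use
 * the same figure or stids_in_use drifts.
 */
static inline unsigned int example_stids_per_server(int family)
{
        return family == PF_INET ? 1 : 4;
}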
if (stid >= 0) {
t->stid_tab[stid].data = data;
- stid += t->stid_base;
+ stid -= t->nstids;
+ stid += t->sftid_base;
t->stids_in_use++;
}
spin_unlock_bh(&t->stid_lock);
*/
void cxgb4_free_stid(struct tid_info *t, unsigned int stid, int family)
{
- stid -= t->stid_base;
+ /* Is it a server filter TID? */
+ if (t->nsftids && (stid >= t->sftid_base)) {
+ stid -= t->sftid_base;
+ stid += t->nstids;
+ } else {
+ stid -= t->stid_base;
+ }
+
spin_lock_bh(&t->stid_lock);
if (family == PF_INET)
__clear_bit(stid, t->stid_bmap);
else
bitmap_release_region(t->stid_bmap, stid, 2);
t->stid_tab[stid].data = NULL;
- t->stids_in_use--;
+ if (family == PF_INET)
+ t->stids_in_use--;
+ else
+ t->stids_in_use -= 4;
spin_unlock_bh(&t->stid_lock);
}
EXPORT_SYMBOL(cxgb4_free_stid);
size_t size;
unsigned int stid_bmap_size;
unsigned int natids = t->natids;
+ struct adapter *adap = container_of(t, struct adapter, tids);
stid_bmap_size = BITS_TO_LONGS(t->nstids + t->nsftids);
size = t->ntids * sizeof(*t->tid_tab) +
t->afree = t->atid_tab;
}
bitmap_zero(t->stid_bmap, t->nstids + t->nsftids);
+ /* Reserve stid 0 for T4/T5 adapters */
+ if (!t->stid_base &&
+ (is_t4(adap->params.chip) || is_t5(adap->params.chip)))
+ __set_bit(0, t->stid_bmap);
+
return 0;
}
v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
lp_count = G_LP_COUNT(v1);
hp_count = G_HP_COUNT(v1);
} else {
do {
v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
lp_count = G_LP_COUNT(v1);
hp_count = G_HP_COUNT(v1);
} else {
adap = container_of(work, struct adapter, db_drop_task);
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
disable_dbs(adap);
notify_rdma_uld(adap, CXGB4_CONTROL_DB_DROP);
drain_db_fifo(adap, 1);
void t4_db_full(struct adapter *adap)
{
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
t4_set_reg_field(adap, SGE_INT_ENABLE3,
DBFIFO_HP_INT | DBFIFO_LP_INT, 0);
queue_work(workq, &adap->db_full_task);
void t4_db_dropped(struct adapter *adap)
{
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
queue_work(workq, &adap->db_drop_task);
}
lli.nchan = adap->params.nports;
lli.nports = adap->params.nports;
lli.wr_cred = adap->params.ofldq_wr_cred;
- lli.adapter_type = adap->params.rev;
+ lli.adapter_type = adap->params.chip;
lli.iscsi_iolen = MAXRXDATA_GET(t4_read_reg(adap, TP_PARA_REG2));
lli.udb_density = 1 << QUEUESPERPAGEPF0_GET(
t4_read_reg(adap, SGE_EGRESS_QUEUES_PER_PAGE_PF) >>
lli.ucq_density = 1 << QUEUESPERPAGEPF0_GET(
t4_read_reg(adap, SGE_INGRESS_QUEUES_PER_PAGE_PF) >>
(adap->fn * 4));
- lli.filt_mode = adap->filter_mode;
+ lli.filt_mode = adap->params.tp.vlan_pri_map;
/* MODQ_REQ_MAP sets queues 0-3 to chan 0-3 */
for (i = 0; i < NCHAN; i++)
lli.tx_modq[i] = i;
adap = netdev2adap(dev);
/* Adjust stid to correct filter index */
- stid -= adap->tids.nstids;
+ stid -= adap->tids.sftid_base;
stid += adap->tids.nftids;
/* Check to make sure the filter requested is writable ...
f->fs.val.lip[i] = val[i];
f->fs.mask.lip[i] = ~0;
}
- if (adap->filter_mode & F_PORT) {
+ if (adap->params.tp.vlan_pri_map & F_PORT) {
f->fs.val.iport = port;
f->fs.mask.iport = mask;
}
}
+ if (adap->params.tp.vlan_pri_map & F_PROTOCOL) {
+ f->fs.val.proto = IPPROTO_TCP;
+ f->fs.mask.proto = ~0;
+ }
+
f->fs.dirsteer = 1;
f->fs.iq = queue;
/* Mark filter as locked */
adap = netdev2adap(dev);
/* Adjust stid to correct filter index */
- stid -= adap->tids.nstids;
+ stid -= adap->tids.sftid_base;
stid += adap->tids.nftids;
f = &adap->tids.ftid_tab[stid];
u32 bar0, mem_win0_base, mem_win1_base, mem_win2_base;
bar0 = pci_resource_start(adap->pdev, 0); /* truncation intentional */
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
mem_win0_base = bar0 + MEMWIN0_BASE;
mem_win1_base = bar0 + MEMWIN1_BASE;
mem_win2_base = bar0 + MEMWIN2_BASE;
const struct firmware *cf;
unsigned long mtype = 0, maddr = 0;
u32 finiver, finicsum, cfcsum;
- int ret, using_flash;
+ int ret;
+ int config_issued = 0;
char *fw_config_file, fw_config_file_path[256];
+ char *config_name = NULL;
/*
* Reset device if necessary.
* then use that. Otherwise, use the configuration file stored
* in the adapter flash ...
*/
- switch (CHELSIO_CHIP_VERSION(adapter->chip)) {
+ switch (CHELSIO_CHIP_VERSION(adapter->params.chip)) {
case CHELSIO_T4:
- fw_config_file = FW_CFNAME;
+ fw_config_file = FW4_CFNAME;
break;
case CHELSIO_T5:
fw_config_file = FW5_CFNAME;
ret = request_firmware(&cf, fw_config_file, adapter->pdev_dev);
if (ret < 0) {
- using_flash = 1;
+ config_name = "On FLASH";
mtype = FW_MEMTYPE_CF_FLASH;
maddr = t4_flash_cfg_addr(adapter);
} else {
u32 params[7], val[7];
- using_flash = 0;
+ sprintf(fw_config_file_path,
+ "/lib/firmware/%s", fw_config_file);
+ config_name = fw_config_file_path;
+
if (cf->size >= FLASH_CFG_MAX_SIZE)
ret = -ENOMEM;
else {
FW_LEN16(caps_cmd));
ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
&caps_cmd);
+
+ /* If the CAPS_CONFIG failed with an ENOENT (for a Firmware
+ * Configuration File in FLASH), our last gasp effort is to use the
+ * Firmware Configuration File which is embedded in the firmware. A
+ * very few early versions of the firmware didn't have one embedded
+ * but we can ignore those.
+ */
+ if (ret == -ENOENT) {
+ memset(&caps_cmd, 0, sizeof(caps_cmd));
+ caps_cmd.op_to_write =
+ htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
+ FW_CMD_REQUEST |
+ FW_CMD_READ);
+ caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
+ ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd,
+ sizeof(caps_cmd), &caps_cmd);
+ config_name = "Firmware Default";
+ }
+
+ config_issued = 1;
if (ret < 0)
goto bye;
if (ret < 0)
goto bye;
- sprintf(fw_config_file_path, "/lib/firmware/%s", fw_config_file);
/*
* Return successfully and note that we're operating with parameters
* not supplied by the driver, rather than from hard-wired
*/
adapter->flags |= USING_SOFT_PARAMS;
dev_info(adapter->pdev_dev, "Successfully configured using Firmware "\
- "Configuration File %s, version %#x, computed checksum %#x\n",
- (using_flash
- ? "in device FLASH"
- : fw_config_file_path),
- finiver, cfcsum);
+ "Configuration File \"%s\", version %#x, computed checksum %#x\n",
+ config_name, finiver, cfcsum);
return 0;
/*
* want to issue a warning since this is fairly common.)
*/
bye:
- if (ret != -ENOENT)
- dev_warn(adapter->pdev_dev, "Configuration file error %d\n",
- -ret);
+ if (config_issued && ret != -ENOENT)
+ dev_warn(adapter->pdev_dev, "\"%s\" configuration file error %d\n",
+ config_name, -ret);
return ret;
}
return ret;
}
+static struct fw_info fw_info_array[] = {
+ {
+ .chip = CHELSIO_T4,
+ .fs_name = FW4_CFNAME,
+ .fw_mod_name = FW4_FNAME,
+ .fw_hdr = {
+ .chip = FW_HDR_CHIP_T4,
+ .fw_ver = __cpu_to_be32(FW_VERSION(T4)),
+ .intfver_nic = FW_INTFVER(T4, NIC),
+ .intfver_vnic = FW_INTFVER(T4, VNIC),
+ .intfver_ri = FW_INTFVER(T4, RI),
+ .intfver_iscsi = FW_INTFVER(T4, ISCSI),
+ .intfver_fcoe = FW_INTFVER(T4, FCOE),
+ },
+ }, {
+ .chip = CHELSIO_T5,
+ .fs_name = FW5_CFNAME,
+ .fw_mod_name = FW5_FNAME,
+ .fw_hdr = {
+ .chip = FW_HDR_CHIP_T5,
+ .fw_ver = __cpu_to_be32(FW_VERSION(T5)),
+ .intfver_nic = FW_INTFVER(T5, NIC),
+ .intfver_vnic = FW_INTFVER(T5, VNIC),
+ .intfver_ri = FW_INTFVER(T5, RI),
+ .intfver_iscsi = FW_INTFVER(T5, ISCSI),
+ .intfver_fcoe = FW_INTFVER(T5, FCOE),
+ },
+ }
+};
+
+static struct fw_info *find_fw_info(int chip)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(fw_info_array); i++) {
+ if (fw_info_array[i].chip == chip)
+ return &fw_info_array[i];
+ }
+ return NULL;
+}
+
/*
* Phase 0 of initialization: contact FW, obtain config, perform basic init.
*/
enum dev_state state;
u32 params[7], val[7];
struct fw_caps_config_cmd caps_cmd;
- int reset = 1, j;
+ int reset = 1;
/*
* Contact FW, advertising Master capability (and potentially forcing
* later reporting and B. to warn if the currently loaded firmware
* is excessively mismatched relative to the driver.)
*/
- ret = t4_check_fw_version(adap);
-
- /* The error code -EFAULT is returned by t4_check_fw_version() if
- * firmware on adapter < supported firmware. If firmware on adapter
- * is too old (not supported by driver) and we're the MASTER_PF set
- * adapter state to DEV_STATE_UNINIT to force firmware upgrade
- * and reinitialization.
- */
- if ((adap->flags & MASTER_PF) && ret == -EFAULT)
- state = DEV_STATE_UNINIT;
+ t4_get_fw_version(adap, &adap->params.fw_vers);
+ t4_get_tp_version(adap, &adap->params.tp_vers);
if ((adap->flags & MASTER_PF) && state != DEV_STATE_INIT) {
- if (ret == -EINVAL || ret == -EFAULT || ret > 0) {
- if (upgrade_fw(adap) >= 0) {
- /*
- * Note that the chip was reset as part of the
- * firmware upgrade so we don't reset it again
- * below and grab the new firmware version.
- */
- reset = 0;
- ret = t4_check_fw_version(adap);
- } else
- if (ret == -EFAULT) {
- /*
- * Firmware is old but still might
- * work if we force reinitialization
- * of the adapter. Ignoring FW upgrade
- * failure.
- */
- dev_warn(adap->pdev_dev,
- "Ignoring firmware upgrade "
- "failure, and forcing driver "
- "to reinitialize the "
- "adapter.\n");
- ret = 0;
- }
+ struct fw_info *fw_info;
+ struct fw_hdr *card_fw;
+ const struct firmware *fw;
+ const u8 *fw_data = NULL;
+ unsigned int fw_size = 0;
+
+ /* This is the firmware whose headers the driver was compiled
+ * against
+ */
+ fw_info = find_fw_info(CHELSIO_CHIP_VERSION(adap->params.chip));
+ if (fw_info == NULL) {
+ dev_err(adap->pdev_dev,
+ "unable to get firmware info for chip %d.\n",
+ CHELSIO_CHIP_VERSION(adap->params.chip));
+ return -EINVAL;
}
+
+ /* allocate memory to read the header of the firmware on the
+ * card
+ */
+ card_fw = t4_alloc_mem(sizeof(*card_fw));
+
+ /* Get FW from /lib/firmware/ */
+ ret = request_firmware(&fw, fw_info->fw_mod_name,
+ adap->pdev_dev);
+ if (ret < 0) {
+ dev_err(adap->pdev_dev,
+ "unable to load firmware image %s, error %d\n",
+ fw_info->fw_mod_name, ret);
+ } else {
+ fw_data = fw->data;
+ fw_size = fw->size;
+ }
+
+ /* upgrade FW logic */
+ ret = t4_prep_fw(adap, fw_info, fw_data, fw_size, card_fw,
+ state, &reset);
+
+ /* Cleaning up */
+ if (fw != NULL)
+ release_firmware(fw);
+ t4_free_mem(card_fw);
+
if (ret < 0)
- return ret;
+ goto bye;
}
/*
if (ret == -ENOENT) {
dev_info(adap->pdev_dev,
"No Configuration File present "
- "on adapter. Using hard-wired "
+ "on adapter. Using hard-wired "
"configuration parameters.\n");
ret = adap_init0_no_config(adap, reset);
}
/*
* These are finalized by FW initialization, load their values now.
*/
- v = t4_read_reg(adap, TP_TIMER_RESOLUTION);
- adap->params.tp.tre = TIMERRESOLUTION_GET(v);
- adap->params.tp.dack_re = DELAYEDACKRESOLUTION_GET(v);
t4_read_mtu_tbl(adap, adap->params.mtus, NULL);
t4_load_mtus(adap, adap->params.mtus, adap->params.a_wnd,
adap->params.b_wnd);
- /* MODQ_REQ_MAP defaults to setting queues 0-3 to chan 0-3 */
- for (j = 0; j < NCHAN; j++)
- adap->params.tp.tx_modq[j] = j;
-
- t4_read_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
- &adap->filter_mode, 1,
- TP_VLAN_PRI_MAP);
-
+ t4_init_tp_params(adap);
adap->flags |= FW_OK;
return 0;
netdev_info(dev, "Chelsio %s rev %d %s %sNIC PCIe x%d%s%s\n",
adap->params.vpd.id,
- CHELSIO_CHIP_RELEASE(adap->params.rev), buf,
+ CHELSIO_CHIP_RELEASE(adap->params.chip), buf,
is_offload(adap) ? "R" : "", adap->params.pci.width, spd,
(adap->flags & USING_MSIX) ? " MSI-X" :
(adap->flags & USING_MSI) ? " MSI" : "");
if (err)
goto out_unmap_bar0;
- if (!is_t4(adapter->chip)) {
+ if (!is_t4(adapter->params.chip)) {
s_qpp = QUEUESPERPAGEPF1 * adapter->fn;
qpp = 1 << QUEUESPERPAGEPF0_GET(t4_read_reg(adapter,
SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp);
out_free_dev:
free_some_resources(adapter);
out_unmap_bar:
- if (!is_t4(adapter->chip))
+ if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
out_unmap_bar0:
iounmap(adapter->regs);
free_some_resources(adapter);
iounmap(adapter->regs);
- if (!is_t4(adapter->chip))
+ if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
kfree(adapter);
pci_disable_pcie_error_reporting(pdev);
static inline void *lookup_stid(const struct tid_info *t, unsigned int stid)
{
- stid -= t->stid_base;
+ /* Is it a server filter TID? */
+ if (t->nsftids && (stid >= t->sftid_base)) {
+ stid -= t->sftid_base;
+ stid += t->nstids;
+ } else {
+ stid -= t->stid_base;
+ }
+
return stid < (t->nstids + t->nsftids) ? t->stid_tab[stid].data : NULL;
}
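
The remapping above keeps a single stid_tab[] for two ID ranges: regular
server TIDs occupy slots [0, nstids) and server filter TIDs are stored
behind them at [nstids, nstids + nsftids). A minimal user-space sketch of
the same arithmetic; STID_BASE, NSTIDS, SFTID_BASE and NSFTIDS below are
illustrative values, not taken from any real adapter:

    #include <stdio.h>

    /* Illustrative constants -- real values come from the adapter's TID info. */
    #define STID_BASE   128   /* first regular server TID */
    #define NSTIDS      256   /* number of regular server TIDs */
    #define SFTID_BASE  1024  /* first server filter TID */
    #define NSFTIDS     64    /* number of server filter TIDs */

    /* Map a hardware stid to its slot in the single stid_tab[] array. */
    static int stid_to_index(unsigned int stid)
    {
            if (NSFTIDS && stid >= SFTID_BASE)
                    return stid - SFTID_BASE + NSTIDS; /* filters follow servers */
            return stid - STID_BASE;
    }

    int main(void)
    {
            printf("stid %u -> index %d\n", 128u, stid_to_index(128));   /* 0   */
            printf("stid %u -> index %d\n", 1024u, stid_to_index(1024)); /* 256 */
            printf("stid %u -> index %d\n", 1087u, stid_to_index(1087)); /* 319 */
            return 0;
    }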
#include "l2t.h"
#include "t4_msg.h"
#include "t4fw_api.h"
+#include "t4_regs.h"
#define VLAN_NONE 0xfff
}
EXPORT_SYMBOL(cxgb4_l2t_get);
+u64 cxgb4_select_ntuple(struct net_device *dev,
+ const struct l2t_entry *l2t)
+{
+ struct adapter *adap = netdev2adap(dev);
+ struct tp_params *tp = &adap->params.tp;
+ u64 ntuple = 0;
+
+	/* Initialize each of the fields we care about that are present in
+	 * the Compressed Filter Tuple.
+ */
+ if (tp->vlan_shift >= 0 && l2t->vlan != VLAN_NONE)
+ ntuple |= (F_FT_VLAN_VLD | l2t->vlan) << tp->vlan_shift;
+
+ if (tp->port_shift >= 0)
+ ntuple |= (u64)l2t->lport << tp->port_shift;
+
+ if (tp->protocol_shift >= 0)
+ ntuple |= (u64)IPPROTO_TCP << tp->protocol_shift;
+
+ if (tp->vnic_shift >= 0) {
+ u32 viid = cxgb4_port_viid(dev);
+ u32 vf = FW_VIID_VIN_GET(viid);
+ u32 pf = FW_VIID_PFN_GET(viid);
+ u32 vld = FW_VIID_VIVLD_GET(viid);
+
+ ntuple |= (u64)(V_FT_VNID_ID_VF(vf) |
+ V_FT_VNID_ID_PF(pf) |
+ V_FT_VNID_ID_VLD(vld)) << tp->vnic_shift;
+ }
+
+ return ntuple;
+}
+EXPORT_SYMBOL(cxgb4_select_ntuple);
+
/*
* Called when address resolution fails for an L2T entry to handle packets
* on the arpq head. If a packet specifies a failure handler it is invoked,
struct l2t_entry *cxgb4_l2t_get(struct l2t_data *d, struct neighbour *neigh,
const struct net_device *physdev,
unsigned int priority);
-
+u64 cxgb4_select_ntuple(struct net_device *dev,
+ const struct l2t_entry *l2t);
void t4_l2t_update(struct adapter *adap, struct neighbour *neigh);
struct l2t_entry *t4_l2t_alloc_switching(struct l2t_data *d);
int t4_l2t_set_switching(struct adapter *adap, struct l2t_entry *e, u16 vlan,
u32 val;
if (q->pend_cred >= 8) {
val = PIDX(q->pend_cred / 8);
- if (!is_t4(adap->chip))
+ if (!is_t4(adap->params.chip))
val |= DBTYPE(1);
wmb();
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) |
wmb(); /* write descriptors before telling HW */
spin_lock(&q->db_lock);
if (!q->db_disabled) {
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
QID(q->cntxt_id) | PIDX(n));
} else {
return 0;
}
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
__skb_pull(skb, sizeof(struct cpl_trace_pkt));
else
__skb_pull(skb, sizeof(struct cpl_t5_trace_pkt));
const struct cpl_rx_pkt *pkt;
struct sge_eth_rxq *rxq = container_of(q, struct sge_eth_rxq, rspq);
struct sge *s = &q->adap->sge;
- int cpl_trace_pkt = is_t4(q->adap->chip) ?
+ int cpl_trace_pkt = is_t4(q->adap->params.chip) ?
CPL_TRACE_PKT : CPL_TRACE_PKT_T5;
if (unlikely(*(u8 *)rsp == cpl_trace_pkt))
static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id)
{
q->cntxt_id = id;
- if (!is_t4(adap->chip)) {
+ if (!is_t4(adap->params.chip)) {
unsigned int s_qpp;
unsigned short udb_density;
unsigned long qpshift;
#undef READ_FL_BUF
if (fl_small_pg != PAGE_SIZE ||
- (fl_large_pg != 0 && (fl_large_pg <= fl_small_pg ||
+ (fl_large_pg != 0 && (fl_large_pg < fl_small_pg ||
(fl_large_pg & (fl_large_pg-1)) != 0))) {
dev_err(adap->pdev_dev, "bad SGE FL page buffer sizes [%d, %d]\n",
fl_small_pg, fl_large_pg);
* Set up to drop DOORBELL writes when the DOORBELL FIFO overflows
* and generate an interrupt when this occurs so we can recover.
*/
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
t4_set_reg_field(adap, A_SGE_DBFIFO_STATUS,
V_HP_INT_THRESH(M_HP_INT_THRESH) |
V_LP_INT_THRESH(M_LP_INT_THRESH),
u32 mc_bist_cmd, mc_bist_cmd_addr, mc_bist_cmd_len;
u32 mc_bist_status_rdata, mc_bist_data_pattern;
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
mc_bist_cmd = MC_BIST_CMD;
mc_bist_cmd_addr = MC_BIST_CMD_ADDR;
mc_bist_cmd_len = MC_BIST_CMD_LEN;
u32 edc_bist_cmd, edc_bist_cmd_addr, edc_bist_cmd_len;
u32 edc_bist_cmd_data_pattern, edc_bist_status_rdata;
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
edc_bist_cmd = EDC_REG(EDC_BIST_CMD, idx);
edc_bist_cmd_addr = EDC_REG(EDC_BIST_CMD_ADDR, idx);
edc_bist_cmd_len = EDC_REG(EDC_BIST_CMD_LEN, idx);
static int t4_mem_win_rw(struct adapter *adap, u32 addr, __be32 *data, int dir)
{
int i;
- u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn);
+ u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
/*
* Setup offset into PCIE memory window. Address must be a
}
/**
- * get_fw_version - read the firmware version
+ * t4_get_fw_version - read the firmware version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the FW version from flash.
*/
-static int get_fw_version(struct adapter *adapter, u32 *vers)
+int t4_get_fw_version(struct adapter *adapter, u32 *vers)
{
- return t4_read_flash(adapter, adapter->params.sf_fw_start +
- offsetof(struct fw_hdr, fw_ver), 1, vers, 0);
+ return t4_read_flash(adapter, FLASH_FW_START +
+ offsetof(struct fw_hdr, fw_ver), 1,
+ vers, 0);
}
/**
- * get_tp_version - read the TP microcode version
+ * t4_get_tp_version - read the TP microcode version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the TP microcode version from flash.
*/
-static int get_tp_version(struct adapter *adapter, u32 *vers)
+int t4_get_tp_version(struct adapter *adapter, u32 *vers)
{
- return t4_read_flash(adapter, adapter->params.sf_fw_start +
+ return t4_read_flash(adapter, FLASH_FW_START +
offsetof(struct fw_hdr, tp_microcode_ver),
1, vers, 0);
}
-/**
- * t4_check_fw_version - check if the FW is compatible with this driver
- * @adapter: the adapter
- *
- * Checks if an adapter's FW is compatible with the driver. Returns 0
- * if there's exact match, a negative error if the version could not be
- * read or there's a major version mismatch, and a positive value if the
- * expected major version is found but there's a minor version mismatch.
+/* Is the given firmware API compatible with the one the driver was compiled
+ * with?
*/
-int t4_check_fw_version(struct adapter *adapter)
+static int fw_compatible(const struct fw_hdr *hdr1, const struct fw_hdr *hdr2)
{
- u32 api_vers[2];
- int ret, major, minor, micro;
- int exp_major, exp_minor, exp_micro;
- ret = get_fw_version(adapter, &adapter->params.fw_vers);
- if (!ret)
- ret = get_tp_version(adapter, &adapter->params.tp_vers);
- if (!ret)
- ret = t4_read_flash(adapter, adapter->params.sf_fw_start +
- offsetof(struct fw_hdr, intfver_nic),
- 2, api_vers, 1);
- if (ret)
- return ret;
+ /* short circuit if it's the exact same firmware version */
+ if (hdr1->chip == hdr2->chip && hdr1->fw_ver == hdr2->fw_ver)
+ return 1;
- major = FW_HDR_FW_VER_MAJOR_GET(adapter->params.fw_vers);
- minor = FW_HDR_FW_VER_MINOR_GET(adapter->params.fw_vers);
- micro = FW_HDR_FW_VER_MICRO_GET(adapter->params.fw_vers);
+#define SAME_INTF(x) (hdr1->intfver_##x == hdr2->intfver_##x)
+ if (hdr1->chip == hdr2->chip && SAME_INTF(nic) && SAME_INTF(vnic) &&
+ SAME_INTF(ri) && SAME_INTF(iscsi) && SAME_INTF(fcoe))
+ return 1;
+#undef SAME_INTF
- switch (CHELSIO_CHIP_VERSION(adapter->chip)) {
- case CHELSIO_T4:
- exp_major = FW_VERSION_MAJOR;
- exp_minor = FW_VERSION_MINOR;
- exp_micro = FW_VERSION_MICRO;
- break;
- case CHELSIO_T5:
- exp_major = FW_VERSION_MAJOR_T5;
- exp_minor = FW_VERSION_MINOR_T5;
- exp_micro = FW_VERSION_MICRO_T5;
- break;
- default:
- dev_err(adapter->pdev_dev, "Unsupported chip type, %x\n",
- adapter->chip);
- return -EINVAL;
- }
+ return 0;
+}
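
The fw_compatible() test above accepts either an exact chip/version match
or agreement on every per-interface API version. A hedged user-space
rendering of the same rule; the struct below is a simplified stand-in for
struct fw_hdr (the real header uses big-endian fields and more members),
and the version numbers are invented:

    #include <stdio.h>

    /* Simplified stand-in for struct fw_hdr -- illustrative only. */
    struct hdr {
            unsigned char chip;
            unsigned int  fw_ver;
            unsigned char intfver_nic, intfver_vnic, intfver_ri,
                          intfver_iscsi, intfver_fcoe;
    };

    static int compatible(const struct hdr *a, const struct hdr *b)
    {
            if (a->chip == b->chip && a->fw_ver == b->fw_ver)
                    return 1;                          /* exact match */
            return a->chip == b->chip &&               /* same API on every */
                   a->intfver_nic == b->intfver_nic && /* exported interface */
                   a->intfver_vnic == b->intfver_vnic &&
                   a->intfver_ri == b->intfver_ri &&
                   a->intfver_iscsi == b->intfver_iscsi &&
                   a->intfver_fcoe == b->intfver_fcoe;
    }

    int main(void)
    {
            struct hdr drv  = { 5, 0x01060200, 1, 1, 1, 1, 1 };
            struct hdr card = { 5, 0x01060300, 1, 1, 1, 1, 1 }; /* newer build, same API */

            printf("compatible: %d\n", compatible(&drv, &card)); /* 1 */
            return 0;
    }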
- memcpy(adapter->params.api_vers, api_vers,
- sizeof(adapter->params.api_vers));
+/* The firmware in the filesystem is usable, but should it be installed?
+ * This routine logs the reason in detail when it indicates that the
+ * filesystem firmware should be installed.
+ */
+static int should_install_fs_fw(struct adapter *adap, int card_fw_usable,
+ int k, int c)
+{
+ const char *reason;
- if (major < exp_major || (major == exp_major && minor < exp_minor) ||
- (major == exp_major && minor == exp_minor && micro < exp_micro)) {
- dev_err(adapter->pdev_dev,
- "Card has firmware version %u.%u.%u, minimum "
- "supported firmware is %u.%u.%u.\n", major, minor,
- micro, exp_major, exp_minor, exp_micro);
- return -EFAULT;
+ if (!card_fw_usable) {
+ reason = "incompatible or unusable";
+ goto install;
}
- if (major != exp_major) { /* major mismatch - fail */
- dev_err(adapter->pdev_dev,
- "card FW has major version %u, driver wants %u\n",
- major, exp_major);
- return -EINVAL;
+ if (k > c) {
+ reason = "older than the version supported with this driver";
+ goto install;
}
- if (minor == exp_minor && micro == exp_micro)
- return 0; /* perfect match */
+ return 0;
+
+install:
+ dev_err(adap->pdev_dev, "firmware on card (%u.%u.%u.%u) is %s, "
+ "installing firmware %u.%u.%u.%u on card.\n",
+ FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c),
+ FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c), reason,
+ FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k),
+ FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k));
- /* Minor/micro version mismatch. Report it but often it's OK. */
return 1;
}
+int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ const u8 *fw_data, unsigned int fw_size,
+ struct fw_hdr *card_fw, enum dev_state state,
+ int *reset)
+{
+ int ret, card_fw_usable, fs_fw_usable;
+ const struct fw_hdr *fs_fw;
+ const struct fw_hdr *drv_fw;
+
+ drv_fw = &fw_info->fw_hdr;
+
+ /* Read the header of the firmware on the card */
+	ret = t4_read_flash(adap, FLASH_FW_START,
+ sizeof(*card_fw) / sizeof(uint32_t),
+ (uint32_t *)card_fw, 1);
+ if (ret == 0) {
+ card_fw_usable = fw_compatible(drv_fw, (const void *)card_fw);
+ } else {
+ dev_err(adap->pdev_dev,
+ "Unable to read card's firmware header: %d\n", ret);
+ card_fw_usable = 0;
+ }
+
+ if (fw_data != NULL) {
+ fs_fw = (const void *)fw_data;
+ fs_fw_usable = fw_compatible(drv_fw, fs_fw);
+ } else {
+ fs_fw = NULL;
+ fs_fw_usable = 0;
+ }
+
+ if (card_fw_usable && card_fw->fw_ver == drv_fw->fw_ver &&
+ (!fs_fw_usable || fs_fw->fw_ver == drv_fw->fw_ver)) {
+ /* Common case: the firmware on the card is an exact match and
+ * the filesystem one is an exact match too, or the filesystem
+ * one is absent/incompatible.
+ */
+ } else if (fs_fw_usable && state == DEV_STATE_UNINIT &&
+ should_install_fs_fw(adap, card_fw_usable,
+ be32_to_cpu(fs_fw->fw_ver),
+ be32_to_cpu(card_fw->fw_ver))) {
+		ret = t4_fw_upgrade(adap, adap->mbox, fw_data,
+ fw_size, 0);
+ if (ret != 0) {
+ dev_err(adap->pdev_dev,
+ "failed to install firmware: %d\n", ret);
+ goto bye;
+ }
+
+ /* Installed successfully, update the cached header too. */
+ memcpy(card_fw, fs_fw, sizeof(*card_fw));
+ card_fw_usable = 1;
+ *reset = 0; /* already reset as part of load_fw */
+ }
+
+ if (!card_fw_usable) {
+ uint32_t d, c, k;
+
+ d = be32_to_cpu(drv_fw->fw_ver);
+ c = be32_to_cpu(card_fw->fw_ver);
+ k = fs_fw ? be32_to_cpu(fs_fw->fw_ver) : 0;
+
+ dev_err(adap->pdev_dev, "Cannot find a usable firmware: "
+ "chip state %d, "
+ "driver compiled with %d.%d.%d.%d, "
+ "card has %d.%d.%d.%d, filesystem has %d.%d.%d.%d\n",
+ state,
+ FW_HDR_FW_VER_MAJOR_GET(d), FW_HDR_FW_VER_MINOR_GET(d),
+ FW_HDR_FW_VER_MICRO_GET(d), FW_HDR_FW_VER_BUILD_GET(d),
+ FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c),
+ FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c),
+ FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k),
+ FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k));
+		ret = -EINVAL;
+ goto bye;
+ }
+
+ /* We're using whatever's on the card and it's known to be good. */
+ adap->params.fw_vers = be32_to_cpu(card_fw->fw_ver);
+ adap->params.tp_vers = be32_to_cpu(card_fw->tp_microcode_ver);
+
+bye:
+ return ret;
+}
+
/**
* t4_flash_erase_sectors - erase a range of flash sectors
* @adapter: the adapter
PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
pcie_port_intr_info) +
t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
- is_t4(adapter->chip) ?
+ is_t4(adapter->params.chip) ?
pcie_intr_info : t5_pcie_intr_info);
if (fat)
{
u32 v, int_cause_reg;
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
int_cause_reg = PORT_REG(port, XGMAC_PORT_INT_CAUSE);
else
int_cause_reg = T5_PORT_REG(port, MAC_PORT_INT_CAUSE);
#define GET_STAT(name) \
t4_read_reg64(adap, \
- (is_t4(adap->chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \
+ (is_t4(adap->params.chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \
T5_PORT_REG(idx, MPS_PORT_STAT_##name##_L)))
#define GET_STAT_COM(name) t4_read_reg64(adap, MPS_STAT_##name##_L)
{
u32 mag_id_reg_l, mag_id_reg_h, port_cfg_reg;
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
mag_id_reg_l = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_LO);
mag_id_reg_h = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_HI);
port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
int i;
u32 port_cfg_reg;
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
else
port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2);
return -EINVAL;
#define EPIO_REG(name) \
- (is_t4(adap->chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
+ (is_t4(adap->params.chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
T5_PORT_REG(port, MAC_PORT_EPIO_##name))
t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32);
int t4_mem_win_read_len(struct adapter *adap, u32 addr, __be32 *data, int len)
{
int i, off;
- u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn);
+ u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
/* Align on a 2KB boundary.
*/
int i, ret;
struct fw_vi_mac_cmd c;
struct fw_vi_mac_exact *p;
- unsigned int max_naddr = is_t4(adap->chip) ?
+ unsigned int max_naddr = is_t4(adap->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
int ret, mode;
struct fw_vi_mac_cmd c;
struct fw_vi_mac_exact *p = c.u.exact;
- unsigned int max_mac_addr = is_t4(adap->chip) ?
+ unsigned int max_mac_addr = is_t4(adap->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
{
int ret, ver;
uint16_t device_id;
+ u32 pl_rev;
ret = t4_wait_dev_ready(adapter);
if (ret < 0)
return ret;
get_pci_mode(adapter, &adapter->params.pci);
- adapter->params.rev = t4_read_reg(adapter, PL_REV);
+ pl_rev = G_REV(t4_read_reg(adapter, PL_REV));
ret = get_flash_params(adapter);
if (ret < 0) {
*/
pci_read_config_word(adapter->pdev, PCI_DEVICE_ID, &device_id);
ver = device_id >> 12;
+ adapter->params.chip = 0;
switch (ver) {
case CHELSIO_T4:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4,
- adapter->params.rev);
+ adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T4, pl_rev);
break;
case CHELSIO_T5:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5,
- adapter->params.rev);
+ adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, pl_rev);
break;
default:
dev_err(adapter->pdev_dev, "Device %d is not supported\n",
return -EINVAL;
}
- /* Reassign the updated revision field */
- adapter->params.rev = adapter->chip;
-
init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd);
/*
return 0;
}
+/**
+ * t4_init_tp_params - initialize adap->params.tp
+ * @adap: the adapter
+ *
+ * Initialize various fields of the adapter's TP Parameters structure.
+ */
+int t4_init_tp_params(struct adapter *adap)
+{
+ int chan;
+ u32 v;
+
+ v = t4_read_reg(adap, TP_TIMER_RESOLUTION);
+ adap->params.tp.tre = TIMERRESOLUTION_GET(v);
+ adap->params.tp.dack_re = DELAYEDACKRESOLUTION_GET(v);
+
+ /* MODQ_REQ_MAP defaults to setting queues 0-3 to chan 0-3 */
+ for (chan = 0; chan < NCHAN; chan++)
+ adap->params.tp.tx_modq[chan] = chan;
+
+	/* Cache the adapter's Compressed Filter Mode and global Ingress
+ * Configuration.
+ */
+ t4_read_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
+ &adap->params.tp.vlan_pri_map, 1,
+ TP_VLAN_PRI_MAP);
+ t4_read_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
+ &adap->params.tp.ingress_config, 1,
+ TP_INGRESS_CONFIG);
+
+ /* Now that we have TP_VLAN_PRI_MAP cached, we can calculate the field
+ * shift positions of several elements of the Compressed Filter Tuple
+ * for this adapter which we need frequently ...
+ */
+ adap->params.tp.vlan_shift = t4_filter_field_shift(adap, F_VLAN);
+ adap->params.tp.vnic_shift = t4_filter_field_shift(adap, F_VNIC_ID);
+ adap->params.tp.port_shift = t4_filter_field_shift(adap, F_PORT);
+ adap->params.tp.protocol_shift = t4_filter_field_shift(adap,
+ F_PROTOCOL);
+
+ /* If TP_INGRESS_CONFIG.VNID == 0, then TP_VLAN_PRI_MAP.VNIC_ID
+	 * represents the presence of an Outer VLAN instead of a VNIC ID.
+ */
+ if ((adap->params.tp.ingress_config & F_VNIC) == 0)
+ adap->params.tp.vnic_shift = -1;
+
+ return 0;
+}
+
+/**
+ * t4_filter_field_shift - calculate filter field shift
+ * @adap: the adapter
+ * @filter_sel: the desired field (from TP_VLAN_PRI_MAP bits)
+ *
+ * Return the shift position of a filter field within the Compressed
+ * Filter Tuple. The filter field is specified via its selection bit
+ *	within TP_VLAN_PRI_MAP (filter mode). E.g. F_VLAN.
+ */
+int t4_filter_field_shift(const struct adapter *adap, int filter_sel)
+{
+ unsigned int filter_mode = adap->params.tp.vlan_pri_map;
+ unsigned int sel;
+ int field_shift;
+
+ if ((filter_mode & filter_sel) == 0)
+ return -1;
+
+ for (sel = 1, field_shift = 0; sel < filter_sel; sel <<= 1) {
+ switch (filter_mode & sel) {
+ case F_FCOE:
+ field_shift += W_FT_FCOE;
+ break;
+ case F_PORT:
+ field_shift += W_FT_PORT;
+ break;
+ case F_VNIC_ID:
+ field_shift += W_FT_VNIC_ID;
+ break;
+ case F_VLAN:
+ field_shift += W_FT_VLAN;
+ break;
+ case F_TOS:
+ field_shift += W_FT_TOS;
+ break;
+ case F_PROTOCOL:
+ field_shift += W_FT_PROTOCOL;
+ break;
+ case F_ETHERTYPE:
+ field_shift += W_FT_ETHERTYPE;
+ break;
+ case F_MACMATCH:
+ field_shift += W_FT_MACMATCH;
+ break;
+ case F_MPSHITTYPE:
+ field_shift += W_FT_MPSHITTYPE;
+ break;
+ case F_FRAGMENTATION:
+ field_shift += W_FT_FRAGMENTATION;
+ break;
+ }
+ }
+ return field_shift;
+}
+
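To see the accumulation in t4_filter_field_shift() at work: fields pack
upward from bit 0 in selection-bit order, so each field's shift is the sum
of the widths of the enabled fields below it. A small sketch mirroring the
W_FT_* widths added to t4_regs.h below; the filter mode chosen here is
illustrative:

    #include <stdio.h>

    /* Selection bits and field widths, mirroring F_* and W_FT_* values. */
    enum { FCOE = 1 << 0, PORT = 1 << 1, VNIC_ID = 1 << 2, VLAN = 1 << 3,
           TOS = 1 << 4, PROTOCOL = 1 << 5 };
    static const int width[] = { 1, 3, 17, 17, 8, 8 }; /* FCOE..PROTOCOL */

    static int field_shift(unsigned int mode, unsigned int field)
    {
            int bit, shift = 0;

            if (!(mode & field))
                    return -1;                      /* field not in the tuple */
            for (bit = 0; (1u << bit) < field; bit++)
                    if (mode & (1u << bit))
                            shift += width[bit];    /* skip enabled lower fields */
            return shift;
    }

    int main(void)
    {
            unsigned int mode = PORT | VLAN | PROTOCOL; /* illustrative */

            printf("PORT at %d, VLAN at %d, PROTOCOL at %d\n",
                   field_shift(mode, PORT),       /* 0  */
                   field_shift(mode, VLAN),       /* 3  */
                   field_shift(mode, PROTOCOL));  /* 20 */
            return 0;
    }
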
int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
{
u8 addr[6];
#define PL_REV 0x1943c
+#define S_REV 0
+#define M_REV 0xfU
+#define V_REV(x) ((x) << S_REV)
+#define G_REV(x) (((x) >> S_REV) & M_REV)
+
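The S_/M_/V_/G_ quartet is the driver's stock idiom for a register
bitfield: shift, mask, value-builder and extractor. A standalone round-trip
of the REV field defined above; the register value is made up:

    #include <stdio.h>

    #define S_REV    0
    #define M_REV    0xfU
    #define V_REV(x) ((x) << S_REV)
    #define G_REV(x) (((x) >> S_REV) & M_REV)

    int main(void)
    {
            unsigned int reg = 0xabcd0005u;  /* pretend PL_REV register value */

            /* G_REV() extracts the 4-bit revision; V_REV() rebuilds the field. */
            printf("rev = %u\n", G_REV(reg));               /* 5   */
            printf("field = %#x\n", V_REV(G_REV(reg)));     /* 0x5 */
            return 0;
    }
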
#define LE_DB_CONFIG 0x19c04
#define HASHEN 0x00100000U
#define A_TP_TX_SCHED_PCMD 0x25
+#define S_VNIC 11
+#define V_VNIC(x) ((x) << S_VNIC)
+#define F_VNIC V_VNIC(1U)
+
+#define S_FRAGMENTATION 9
+#define V_FRAGMENTATION(x) ((x) << S_FRAGMENTATION)
+#define F_FRAGMENTATION V_FRAGMENTATION(1U)
+
+#define S_MPSHITTYPE 8
+#define V_MPSHITTYPE(x) ((x) << S_MPSHITTYPE)
+#define F_MPSHITTYPE V_MPSHITTYPE(1U)
+
+#define S_MACMATCH 7
+#define V_MACMATCH(x) ((x) << S_MACMATCH)
+#define F_MACMATCH V_MACMATCH(1U)
+
+#define S_ETHERTYPE 6
+#define V_ETHERTYPE(x) ((x) << S_ETHERTYPE)
+#define F_ETHERTYPE V_ETHERTYPE(1U)
+
+#define S_PROTOCOL 5
+#define V_PROTOCOL(x) ((x) << S_PROTOCOL)
+#define F_PROTOCOL V_PROTOCOL(1U)
+
+#define S_TOS 4
+#define V_TOS(x) ((x) << S_TOS)
+#define F_TOS V_TOS(1U)
+
+#define S_VLAN 3
+#define V_VLAN(x) ((x) << S_VLAN)
+#define F_VLAN V_VLAN(1U)
+
+#define S_VNIC_ID 2
+#define V_VNIC_ID(x) ((x) << S_VNIC_ID)
+#define F_VNIC_ID V_VNIC_ID(1U)
+
#define S_PORT 1
#define V_PORT(x) ((x) << S_PORT)
#define F_PORT V_PORT(1U)
+#define S_FCOE 0
+#define V_FCOE(x) ((x) << S_FCOE)
+#define F_FCOE V_FCOE(1U)
+
#define NUM_MPS_CLS_SRAM_L_INSTANCES 336
#define NUM_MPS_T5_CLS_SRAM_L_INSTANCES 512
#define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
#define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
+#define A_PL_VF_REV 0x4
+#define A_PL_VF_WHOAMI 0x0
+#define A_PL_VF_REVISION 0x8
+
+#define S_CHIPID 4
+#define M_CHIPID 0xfU
+#define V_CHIPID(x) ((x) << S_CHIPID)
+#define G_CHIPID(x) (((x) >> S_CHIPID) & M_CHIPID)
+
+/* TP_VLAN_PRI_MAP controls which subset of fields will be present in the
+ * Compressed Filter Tuple for LE filters. Each bit set in TP_VLAN_PRI_MAP
+ * selects for a particular field being present. These fields, when present
+ * in the Compressed Filter Tuple, have the following widths in bits.
+ */
+#define W_FT_FCOE 1
+#define W_FT_PORT 3
+#define W_FT_VNIC_ID 17
+#define W_FT_VLAN 17
+#define W_FT_TOS 8
+#define W_FT_PROTOCOL 8
+#define W_FT_ETHERTYPE 16
+#define W_FT_MACMATCH 9
+#define W_FT_MPSHITTYPE 3
+#define W_FT_FRAGMENTATION 1
+
+/* Some of the Compressed Filter Tuple fields have internal structure. These
+ * bit shifts/masks describe those structures. All shifts are relative to the
+ * base position of the fields within the Compressed Filter Tuple.
+ */
+#define S_FT_VLAN_VLD 16
+#define V_FT_VLAN_VLD(x) ((x) << S_FT_VLAN_VLD)
+#define F_FT_VLAN_VLD V_FT_VLAN_VLD(1U)
+
+#define S_FT_VNID_ID_VF 0
+#define V_FT_VNID_ID_VF(x) ((x) << S_FT_VNID_ID_VF)
+
+#define S_FT_VNID_ID_PF 7
+#define V_FT_VNID_ID_PF(x) ((x) << S_FT_VNID_ID_PF)
+
+#define S_FT_VNID_ID_VLD 16
+#define V_FT_VNID_ID_VLD(x) ((x) << S_FT_VNID_ID_VLD)
+
#endif /* __T4_REGS_H */
struct fw_hdr {
u8 ver;
- u8 reserved1;
+ u8 chip; /* terminator chip type */
__be16 len512; /* bin length in units of 512-bytes */
__be32 fw_ver; /* firmware version */
__be32 tp_microcode_ver;
__be32 reserved6[23];
};
+enum fw_hdr_chip {
+ FW_HDR_CHIP_T4,
+ FW_HDR_CHIP_T5
+};
+
#define FW_HDR_FW_VER_MAJOR_GET(x) (((x) >> 24) & 0xff)
#define FW_HDR_FW_VER_MINOR_GET(x) (((x) >> 16) & 0xff)
#define FW_HDR_FW_VER_MICRO_GET(x) (((x) >> 8) & 0xff)
unsigned long registered_device_map;
unsigned long open_device_map;
unsigned long flags;
- enum chip_type chip;
struct adapter_params params;
/* queue and interrupt resources */
/*
* Chip version 4, revision 0x3f (cxgb4vf).
*/
- return CHELSIO_CHIP_VERSION(adapter->chip) | (0x3f << 10);
+ return CHELSIO_CHIP_VERSION(adapter->params.chip) | (0x3f << 10);
}
/*
reg_block_dump(adapter, regbuf,
T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_FIRST,
T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_LAST);
+
+ /* T5 adds new registers in the PL Register map.
+ */
reg_block_dump(adapter, regbuf,
T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_FIRST,
- T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_LAST);
+ T4VF_PL_BASE_ADDR + (is_t4(adapter->params.chip)
+ ? A_PL_VF_WHOAMI : A_PL_VF_REVISION));
reg_block_dump(adapter, regbuf,
T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_FIRST,
T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_LAST);
unsigned int ethqsets;
int err;
u32 param, val = 0;
+ unsigned int chipid;
/*
* Wait for the device to become ready before proceeding ...
return err;
}
+ adapter->params.chip = 0;
switch (adapter->pdev->device >> 12) {
case CHELSIO_T4:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0);
+ adapter->params.chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0);
break;
case CHELSIO_T5:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5, 0);
+ chipid = G_REV(t4_read_reg(adapter, A_PL_VF_REV));
+ adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, chipid);
break;
}
*/
if (fl->pend_cred >= FL_PER_EQ_UNIT) {
val = PIDX(fl->pend_cred / FL_PER_EQ_UNIT);
- if (!is_t4(adapter->chip))
+ if (!is_t4(adapter->params.chip))
val |= DBTYPE(1);
wmb();
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
#include "../cxgb4/t4fw_api.h"
#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
-#define CHELSIO_CHIP_VERSION(code) ((code) >> 4)
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf)
#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
+/* All T4 and later chips have their PCI-E Device IDs encoded as 0xVFPP where:
+ *
+ * V = "4" for T4; "5" for T5, etc. or
+ * = "a" for T4 FPGA; "b" for T4 FPGA, etc.
+ * F = "0" for PF 0..3; "4".."7" for PF4..7; and "8" for VFs
+ * PP = adapter product designation
+ */
#define CHELSIO_T4 0x4
#define CHELSIO_T5 0x5
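
Given the 0xVFPP layout, the top nibble of the PCI Device ID names the chip
generation, which is what the device_id >> 12 switches in t4_prep_adapter()
and the cxgb4vf probe key on. A small decode sketch; the sample ID 0x5801
is invented:

    #include <stdio.h>

    int main(void)
    {
            unsigned short device_id = 0x5801;       /* illustrative: T5 VF */
            unsigned int v  = device_id >> 12;       /* chip: 4 = T4, 5 = T5 */
            unsigned int f  = (device_id >> 8) & 0xf;/* 0-7 = PF, 8 = VF     */
            unsigned int pp = device_id & 0xff;      /* product designation  */

            printf("chip T%u, function code %u, product %02x\n", v, f, pp);
            return 0;
    }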
enum chip_type {
- T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0),
- T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
- T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
+ T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
+ T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
T4_FIRST_REV = T4_A1,
- T4_LAST_REV = T4_A3,
+ T4_LAST_REV = T4_A2,
- T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
- T5_FIRST_REV = T5_A1,
+ T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+ T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+ T5_FIRST_REV = T5_A0,
T5_LAST_REV = T5_A1,
};
struct vpd_params vpd; /* Vital Product Data */
struct rss_params rss; /* Receive Side Scaling */
struct vf_resources vfres; /* Virtual Function Resource limits */
+ enum chip_type chip; /* chip code */
u8 nports; /* # of Ethernet "ports" */
};
static inline int is_t4(enum chip_type chip)
{
- return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV);
+ return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4;
}
int t4vf_wait_dev_ready(struct adapter *);
unsigned nfilters = 0;
unsigned int rem = naddr;
struct fw_vi_mac_cmd cmd, rpl;
- unsigned int max_naddr = is_t4(adapter->chip) ?
+ unsigned int max_naddr = is_t4(adapter->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
struct fw_vi_mac_exact *p = &cmd.u.exact[0];
size_t len16 = DIV_ROUND_UP(offsetof(struct fw_vi_mac_cmd,
u.exact[1]), 16);
- unsigned int max_naddr = is_t4(adapter->chip) ?
+ unsigned int max_naddr = is_t4(adapter->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
#define BE3_MAX_RSS_QS 16
#define BE3_MAX_TX_QS 16
#define BE3_MAX_EVT_QS 16
+#define BE3_SRIOV_MAX_EVT_QS 8
#define MAX_RX_QS 32
#define MAX_EVT_QS 32
struct list_head entry;
u32 flash_status;
- struct completion flash_compl;
+ struct completion et_cmd_compl;
struct be_resources res; /* resources available for the func */
u16 num_vfs; /* Number of VFs provisioned by PF */
};
#define be_physfn(adapter) (!adapter->virtfn)
+#define be_virtfn(adapter) (adapter->virtfn)
#define sriov_enabled(adapter) (adapter->num_vfs > 0)
#define sriov_want(adapter) (be_physfn(adapter) && \
(num_vfs || pci_num_vf(adapter->pdev)))
subsystem = resp_hdr->subsystem;
}
+ if (opcode == OPCODE_LOWLEVEL_LOOPBACK_TEST &&
+ subsystem == CMD_SUBSYSTEM_LOWLEVEL) {
+ complete(&adapter->et_cmd_compl);
+ return 0;
+ }
+
if (((opcode == OPCODE_COMMON_WRITE_FLASHROM) ||
(opcode == OPCODE_COMMON_WRITE_OBJECT)) &&
(subsystem == CMD_SUBSYSTEM_COMMON)) {
adapter->flash_status = compl_status;
- complete(&adapter->flash_compl);
+ complete(&adapter->et_cmd_compl);
}
if (compl_status == MCC_STATUS_SUCCESS) {
} else {
req->hdr.version = 2;
req->page_size = 1; /* 1 for 4K */
+
+ /* coalesce-wm field in this cmd is not relevant to Lancer.
+ * Lancer uses COMMON_MODIFY_CQ to set this field
+ */
+ if (!lancer_chip(adapter))
+ AMAP_SET_BITS(struct amap_cq_context_v2, coalescwm,
+ ctxt, coalesce_wm);
AMAP_SET_BITS(struct amap_cq_context_v2, nodelay, ctxt,
no_delay);
AMAP_SET_BITS(struct amap_cq_context_v2, count, ctxt,
0x3ea83c02, 0x4a110304};
int status;
+ if (!(be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS))
+ return 0;
+
if (mutex_lock_interruptible(&adapter->mbox_lock))
return -1;
be_mcc_notify(adapter);
spin_unlock_bh(&adapter->mcc_lock);
- if (!wait_for_completion_timeout(&adapter->flash_compl,
+ if (!wait_for_completion_timeout(&adapter->et_cmd_compl,
msecs_to_jiffies(60000)))
status = -1;
else
be_mcc_notify(adapter);
spin_unlock_bh(&adapter->mcc_lock);
- if (!wait_for_completion_timeout(&adapter->flash_compl,
- msecs_to_jiffies(40000)))
+ if (!wait_for_completion_timeout(&adapter->et_cmd_compl,
+ msecs_to_jiffies(40000)))
status = -1;
else
status = adapter->flash_status;
{
struct be_mcc_wrb *wrb;
struct be_cmd_req_loopback_test *req;
+ struct be_cmd_resp_loopback_test *resp;
int status;
spin_lock_bh(&adapter->mcc_lock);
be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_LOWLEVEL,
OPCODE_LOWLEVEL_LOOPBACK_TEST, sizeof(*req), wrb, NULL);
- req->hdr.timeout = cpu_to_le32(4);
+ req->hdr.timeout = cpu_to_le32(15);
req->pattern = cpu_to_le64(pattern);
req->src_port = cpu_to_le32(port_num);
req->dest_port = cpu_to_le32(port_num);
req->num_pkts = cpu_to_le32(num_pkts);
req->loopback_type = cpu_to_le32(loopback_type);
- status = be_mcc_notify_wait(adapter);
- if (!status) {
- struct be_cmd_resp_loopback_test *resp = embedded_payload(wrb);
- status = le32_to_cpu(resp->status);
- }
+ be_mcc_notify(adapter);
+
+ spin_unlock_bh(&adapter->mcc_lock);
+
+ wait_for_completion(&adapter->et_cmd_compl);
+ resp = embedded_payload(wrb);
+ status = le32_to_cpu(resp->status);
+ return status;
err:
spin_unlock_bh(&adapter->mcc_lock);
return status;
#define SLIPORT_ERROR_NO_RESOURCE1 0x2
#define SLIPORT_ERROR_NO_RESOURCE2 0x9
+#define SLIPORT_ERROR_FW_RESET1 0x2
+#define SLIPORT_ERROR_FW_RESET2 0x0
+
/********* Memory BAR register ************/
#define PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET 0xfc
/* Host Interrupt Enable, if set interrupts are enabled although "PCI Interrupt
*/
if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
adapter->hw_error = true;
- dev_err(&adapter->pdev->dev,
- "Error detected in the card\n");
+		/* Do not log error messages if it's a FW reset */
+ if (sliport_err1 == SLIPORT_ERROR_FW_RESET1 &&
+ sliport_err2 == SLIPORT_ERROR_FW_RESET2) {
+ dev_info(&adapter->pdev->dev,
+ "Firmware update in progress\n");
+ return;
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "Error detected in the card\n");
+ }
}
if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
be_roce_dev_close(adapter);
- for_all_evt_queues(adapter, eqo, i) {
- if (adapter->flags & BE_FLAGS_NAPI_ENABLED) {
+ if (adapter->flags & BE_FLAGS_NAPI_ENABLED) {
+ for_all_evt_queues(adapter, eqo, i) {
napi_disable(&eqo->napi);
be_disable_busy_poll(eqo);
}
if (!BEx_chip(adapter))
adapter->rss_flags |= RSS_ENABLE_UDP_IPV4 |
RSS_ENABLE_UDP_IPV6;
+ } else {
+		/* Disable RSS if only the default RX queue is created */
+ adapter->rss_flags = RSS_ENABLE_NONE;
+ }
- rc = be_cmd_rss_config(adapter, rsstable, adapter->rss_flags,
- 128);
- if (rc) {
- adapter->rss_flags = 0;
- return rc;
- }
+ rc = be_cmd_rss_config(adapter, rsstable, adapter->rss_flags,
+ 128);
+ if (rc) {
+ adapter->rss_flags = RSS_ENABLE_NONE;
+ return rc;
}
/* First time posting */
}
}
-static int be_clear(struct be_adapter *adapter)
+static void be_mac_clear(struct be_adapter *adapter)
{
int i;
+ if (adapter->pmac_id) {
+ for (i = 0; i < (adapter->uc_macs + 1); i++)
+ be_cmd_pmac_del(adapter, adapter->if_handle,
+ adapter->pmac_id[i], 0);
+ adapter->uc_macs = 0;
+
+ kfree(adapter->pmac_id);
+ adapter->pmac_id = NULL;
+ }
+}
+
+static int be_clear(struct be_adapter *adapter)
+{
be_cancel_worker(adapter);
if (sriov_enabled(adapter))
be_vf_clear(adapter);
/* delete the primary mac along with the uc-mac list */
- for (i = 0; i < (adapter->uc_macs + 1); i++)
- be_cmd_pmac_del(adapter, adapter->if_handle,
- adapter->pmac_id[i], 0);
- adapter->uc_macs = 0;
+ be_mac_clear(adapter);
be_cmd_if_destroy(adapter, adapter->if_handle, 0);
be_clear_queues(adapter);
- kfree(adapter->pmac_id);
- adapter->pmac_id = NULL;
-
be_msix_disable(adapter);
return 0;
}
{
struct pci_dev *pdev = adapter->pdev;
bool use_sriov = false;
+ int max_vfs;
- if (BE3_chip(adapter) && sriov_want(adapter)) {
- int max_vfs;
+ max_vfs = pci_sriov_get_totalvfs(pdev);
- max_vfs = pci_sriov_get_totalvfs(pdev);
+ if (BE3_chip(adapter) && sriov_want(adapter)) {
res->max_vfs = max_vfs > 0 ? min(MAX_VFS, max_vfs) : 0;
use_sriov = res->max_vfs;
}
BE3_MAX_RSS_QS : BE2_MAX_RSS_QS;
res->max_rx_qs = res->max_rss_qs + 1;
- res->max_evt_qs = be_physfn(adapter) ? BE3_MAX_EVT_QS : 1;
+ if (be_physfn(adapter))
+ res->max_evt_qs = (max_vfs > 0) ?
+ BE3_SRIOV_MAX_EVT_QS : BE3_MAX_EVT_QS;
+ else
+ res->max_evt_qs = 1;
res->if_cap_flags = BE_IF_CAP_FLAGS_WANT;
if (!(adapter->function_caps & BE_FUNCTION_CAPS_RSS))
memcpy(mac, adapter->netdev->dev_addr, ETH_ALEN);
}
- /* On BE3 VFs this cmd may fail due to lack of privilege.
- * Ignore the failure as in this case pmac_id is fetched
- * in the IFACE_CREATE cmd.
- */
- be_cmd_pmac_add(adapter, mac, adapter->if_handle,
- &adapter->pmac_id[0], 0);
+ /* For BE3-R VFs, the PF programs the initial MAC address */
+ if (!(BEx_chip(adapter) && be_virtfn(adapter)))
+ be_cmd_pmac_add(adapter, mac, adapter->if_handle,
+ &adapter->pmac_id[0], 0);
return 0;
}
}
if (change_status == LANCER_FW_RESET_NEEDED) {
+ dev_info(&adapter->pdev->dev,
+ "Resetting adapter to activate new FW\n");
status = lancer_physdev_ctrl(adapter,
PHYSDEV_CONTROL_FW_RESET_MASK);
if (status) {
spin_lock_init(&adapter->mcc_lock);
spin_lock_init(&adapter->mcc_cq_lock);
- init_completion(&adapter->flash_compl);
+ init_completion(&adapter->et_cmd_compl);
pci_save_state(adapter->pdev);
return 0;
goto err;
}
- dev_err(dev, "Error recovery successful\n");
+ dev_err(dev, "Adapter recovery successful\n");
return 0;
err:
if (status == -EAGAIN)
dev_err(dev, "Waiting for resource provisioning\n");
else
- dev_err(dev, "Error recovery failed\n");
+ dev_err(dev, "Adapter recovery failed\n");
return status;
}
if (adapter->wol)
be_setup_wol(adapter, true);
+ be_intr_set(adapter, false);
cancel_delayed_work_sync(&adapter->func_recovery_work);
netif_device_detach(netdev);
if (status)
return status;
+ be_intr_set(adapter, true);
/* tell fw we're ready to fire cmds */
status = be_cmd_fw_init(adapter);
if (status)
* detected as not set during a prior frame transmission, then the
* ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs
* were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in
- * If the ready bit in the transmit buffer descriptor (TxBD[R]) is previously
- * detected as not set during a prior frame transmission, then the
- * ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs
- * were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in
* frames not being transmitted until there is a 0-to-1 transition on
* ENET_TDAR[TDAR].
*/
* data.
*/
bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, bufaddr,
- FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
+ skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) {
bdp->cbd_bufaddr = 0;
fep->tx_skbuff[index] = NULL;
/* If this was the last BD in the ring, start at the beginning again. */
bdp = fec_enet_get_nextdesc(bdp, fep);
+ skb_tx_timestamp(skb);
+
fep->cur_tx = bdp;
if (fep->cur_tx == fep->dirty_tx)
/* Trigger transmission start */
writel(0, fep->hwp + FEC_X_DES_ACTIVE);
- skb_tx_timestamp(skb);
-
return NETDEV_TX_OK;
}
else
index = bdp - fep->tx_bd_base;
- dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
- bdp->cbd_bufaddr = 0;
-
skb = fep->tx_skbuff[index];
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, skb->len,
+ DMA_TO_DEVICE);
+ bdp->cbd_bufaddr = 0;
/* Check for errors. */
if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
dev->hw_features = NETIF_F_SG | NETIF_F_TSO |
NETIF_F_IP_CSUM | NETIF_F_HW_VLAN_CTAG_TX;
- dev->features = NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_TSO |
+ dev->features = NETIF_F_SG | NETIF_F_TSO |
NETIF_F_HIGHDMA | NETIF_F_IP_CSUM |
NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM;
#define E1000_MAX_INTR 10
+/*
+ * Number of 10-20 msec polls of the __E1000_RESETTING flag to make before
+ * giving up and proceeding anyway.
+ */
+#define E1000_CHECK_RESET_COUNT 50
+
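The constant bounds a polite wait for an in-flight reset: up to 50 polls at
10-20 ms each, roughly 0.5-1 s, after which the caller proceeds anyway (and
WARNs if the flag is still set). A user-space sketch of the pattern, where
usleep() stands in for usleep_range() and a plain variable for the
__E1000_RESETTING bit:

    #include <stdio.h>
    #include <unistd.h>

    #define CHECK_RESET_COUNT 50    /* ~0.5-1 s total at 10-20 ms per poll */

    static volatile int resetting = 1; /* stand-in for __E1000_RESETTING */

    int main(void)
    {
            int count = CHECK_RESET_COUNT;

            /* Poll until the reset completes or the budget runs out. */
            while (resetting && count--) {
                    usleep(15000);  /* kernel uses usleep_range(10000, 20000) */
                    if (count == 45)
                            resetting = 0; /* simulate the reset finishing */
            }
            printf("reset %s, polls left: %d\n",
                   resetting ? "pending" : "done", count);
            return 0;
    }
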
/* TX/RX descriptor defines */
#define E1000_DEFAULT_TXD 256
#define E1000_MAX_TXD 256
struct delayed_work watchdog_task;
struct delayed_work fifo_stall_task;
struct delayed_work phy_info_task;
-
- struct mutex mutex;
};
enum e1000_state_t {
{
set_bit(__E1000_DOWN, &adapter->flags);
- /* Only kill reset task if adapter is not resetting */
- if (!test_bit(__E1000_RESETTING, &adapter->flags))
- cancel_work_sync(&adapter->reset_task);
-
cancel_delayed_work_sync(&adapter->watchdog_task);
+
+ /*
+ * Since the watchdog task can reschedule other tasks, we should cancel
+	 * it first; otherwise we can run into a situation where a work item
+	 * is still running after the adapter has been brought down.
+ */
+
cancel_delayed_work_sync(&adapter->phy_info_task);
cancel_delayed_work_sync(&adapter->fifo_stall_task);
+
+ /* Only kill reset task if adapter is not resetting */
+ if (!test_bit(__E1000_RESETTING, &adapter->flags))
+ cancel_work_sync(&adapter->reset_task);
}
void e1000_down(struct e1000_adapter *adapter)
e1000_clean_all_rx_rings(adapter);
}
-static void e1000_reinit_safe(struct e1000_adapter *adapter)
-{
- while (test_and_set_bit(__E1000_RESETTING, &adapter->flags))
- msleep(1);
- mutex_lock(&adapter->mutex);
- e1000_down(adapter);
- e1000_up(adapter);
- mutex_unlock(&adapter->mutex);
- clear_bit(__E1000_RESETTING, &adapter->flags);
-}
-
void e1000_reinit_locked(struct e1000_adapter *adapter)
{
- /* if rtnl_lock is not held the call path is bogus */
- ASSERT_RTNL();
WARN_ON(in_interrupt());
while (test_and_set_bit(__E1000_RESETTING, &adapter->flags))
msleep(1);
e1000_irq_disable(adapter);
spin_lock_init(&adapter->stats_lock);
- mutex_init(&adapter->mutex);
set_bit(__E1000_DOWN, &adapter->flags);
{
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
+ int count = E1000_CHECK_RESET_COUNT;
+
+ while (test_bit(__E1000_RESETTING, &adapter->flags) && count--)
+ usleep_range(10000, 20000);
WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
e1000_down(adapter);
struct e1000_adapter *adapter = container_of(work,
struct e1000_adapter,
phy_info_task.work);
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
- mutex_lock(&adapter->mutex);
+
e1000_phy_get_info(&adapter->hw, &adapter->phy_info);
- mutex_unlock(&adapter->mutex);
}
/**
struct net_device *netdev = adapter->netdev;
u32 tctl;
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
- mutex_lock(&adapter->mutex);
if (atomic_read(&adapter->tx_fifo_stall)) {
if ((er32(TDT) == er32(TDH)) &&
(er32(TDFT) == er32(TDFH)) &&
schedule_delayed_work(&adapter->fifo_stall_task, 1);
}
}
- mutex_unlock(&adapter->mutex);
}
bool e1000_has_link(struct e1000_adapter *adapter)
struct e1000_tx_ring *txdr = adapter->tx_ring;
u32 link, tctl;
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
-
- mutex_lock(&adapter->mutex);
link = e1000_has_link(adapter);
if ((netif_carrier_ok(netdev)) && link)
goto link_up;
adapter->tx_timeout_count++;
schedule_work(&adapter->reset_task);
/* exit immediately since reset is imminent */
- goto unlock;
+ return;
}
}
/* Reschedule the task */
if (!test_bit(__E1000_DOWN, &adapter->flags))
schedule_delayed_work(&adapter->watchdog_task, 2 * HZ);
-
-unlock:
- mutex_unlock(&adapter->mutex);
}
enum latency_range {
struct e1000_adapter *adapter =
container_of(work, struct e1000_adapter, reset_task);
- if (test_bit(__E1000_DOWN, &adapter->flags))
- return;
e_err(drv, "Reset adapter\n");
- e1000_reinit_safe(adapter);
+ e1000_reinit_locked(adapter);
}
/**
netif_device_detach(netdev);
if (netif_running(netdev)) {
+ int count = E1000_CHECK_RESET_COUNT;
+
+ while (test_bit(__E1000_RESETTING, &adapter->flags) && count--)
+ usleep_range(10000, 20000);
+
WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
e1000_down(adapter);
}
e1000_release_phy_80003es2lan(hw);
/* Disable IBIST slave mode (far-end loopback) */
- e1000_read_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
- &kum_reg_data);
+ ret_val =
+ e1000_read_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
+ &kum_reg_data);
+ if (ret_val)
+ return ret_val;
kum_reg_data |= E1000_KMRNCTRLSTA_IBIST_DISABLE;
e1000_write_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
kum_reg_data);
return 0;
}
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
static int e1000_suspend(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
return __e1000_resume(pdev);
}
-#endif /* CONFIG_PM_SLEEP */
+#endif /* CONFIG_PM */
#ifdef CONFIG_PM_RUNTIME
static int e1000_runtime_suspend(struct device *dev)
* it across the board.
*/
ret_val = e1e_rphy(hw, MII_BMSR, &phy_status);
- if (ret_val)
+ if (ret_val) {
/* If the first read fails, another entity may have
* ownership of the resources, wait and try again to
* see if they have relinquished the resources yet.
*/
- udelay(usec_interval);
+ if (usec_interval >= 1000)
+ msleep(usec_interval / 1000);
+ else
+ udelay(usec_interval);
+ }
ret_val = e1e_rphy(hw, MII_BMSR, &phy_status);
if (ret_val)
break;
if (phy_status & BMSR_LSTATUS)
break;
if (usec_interval >= 1000)
- mdelay(usec_interval / 1000);
+ msleep(usec_interval / 1000);
else
udelay(usec_interval);
}
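
The rule applied by this fix: busy-waiting with udelay() is only acceptable
for sub-millisecond pauses; once the interval reaches a millisecond the CPU
should sleep instead, hence msleep(usec_interval / 1000). A user-space
analogue of the split, where nanosleep() stands in for msleep() and a timed
spin for udelay():

    #include <stdio.h>
    #include <time.h>

    /* Pause for 'usec' microseconds: sleep for >= 1 ms, busy-wait below
     * that, mirroring the msleep()/udelay() selection in the fix above.
     */
    static void pause_usec(unsigned int usec)
    {
            if (usec >= 1000) {
                    struct timespec ts = { usec / 1000000,
                                           (usec % 1000000) * 1000L };
                    nanosleep(&ts, NULL);           /* kernel: msleep() */
            } else {
                    struct timespec start, now;     /* kernel: udelay() */
                    clock_gettime(CLOCK_MONOTONIC, &start);
                    do {
                            clock_gettime(CLOCK_MONOTONIC, &now);
                    } while ((now.tv_sec - start.tv_sec) * 1000000L +
                             (now.tv_nsec - start.tv_nsec) / 1000L < usec);
            }
    }

    int main(void)
    {
            pause_usec(500);        /* short: spins  */
            pause_usec(10000);      /* long: sleeps  */
            puts("done");
            return 0;
    }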
struct rtnl_link_stats64 *vsi_stats = i40e_get_vsi_stats_struct(vsi);
int i;
+ if (!vsi->tx_rings)
+ return stats;
+
rcu_read_lock();
for (i = 0; i < vsi->num_queue_pairs; i++) {
struct i40e_ring *tx_ring, *rx_ring;
* ownership of the resources, wait and try again to
* see if they have relinquished the resources yet.
*/
- udelay(usec_interval);
+ if (usec_interval >= 1000)
+ mdelay(usec_interval/1000);
+ else
+ udelay(usec_interval);
}
ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, &phy_status);
if (ret_val)
{
struct igb_adapter *adapter = netdev_priv(netdev);
- wol->supported = WAKE_UCAST | WAKE_MCAST |
- WAKE_BCAST | WAKE_MAGIC |
- WAKE_PHY;
wol->wolopts = 0;
if (!(adapter->flags & IGB_FLAG_WOL_SUPPORTED))
return;
+ wol->supported = WAKE_UCAST | WAKE_MCAST |
+ WAKE_BCAST | WAKE_MAGIC |
+ WAKE_PHY;
+
/* apply any specific unsupported masks here */
switch (adapter->hw.device_id) {
default:
rx_ring->l2_accel_priv = NULL;
}
-int ixgbe_fwd_ring_down(struct net_device *vdev,
- struct ixgbe_fwd_adapter *accel)
+static int ixgbe_fwd_ring_down(struct net_device *vdev,
+ struct ixgbe_fwd_adapter *accel)
{
struct ixgbe_adapter *adapter = accel->real_adapter;
unsigned int rxbase = accel->rx_base_queue;
NETIF_F_TSO |
NETIF_F_TSO6 |
NETIF_F_RXHASH |
- NETIF_F_RXCSUM |
- NETIF_F_HW_L2FW_DOFFLOAD;
+ NETIF_F_RXCSUM;
- netdev->hw_features = netdev->features;
+ netdev->hw_features = netdev->features | NETIF_F_HW_L2FW_DOFFLOAD;
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw);
static enum ixgbe_phy_type ixgbe_get_phy_type_from_id(u32 phy_id);
static s32 ixgbe_get_phy_id(struct ixgbe_hw *hw);
+static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw);
/**
* ixgbe_identify_phy_generic - Get physical layer module
*
* Searches for and identifies the QSFP module and assigns appropriate PHY type
**/
-s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
{
struct ixgbe_adapter *adapter = hw->back;
s32 status = IXGBE_ERR_PHY_ADDR_INVALID;
s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw);
s32 ixgbe_identify_module_generic(struct ixgbe_hw *hw);
s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw);
-s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw);
s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
u16 *list_offset,
u16 *data_offset);
{
struct ixgbe_adapter *adapter = pci_get_drvdata(dev);
int err;
+#ifdef CONFIG_PCI_IOV
u32 current_flags = adapter->flags;
+#endif
err = ixgbe_disable_sriov(adapter);
if (time_is_before_jiffies(end))
++timedout;
} else {
+ /* wait_event_timeout does not guarantee a delay of at
+			 * least one whole jiffy, so the timeout must be no less
+ * than two.
+ */
+ if (timeout < 2)
+ timeout = 2;
wait_event_timeout(dev->smi_busy_wait,
orion_mdio_smi_is_done(dev),
timeout);
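
The clamp works because wait_event_timeout() counts whole jiffies: a
1-jiffy timeout may expire almost immediately if the current tick is nearly
over, while 2 jiffies guarantees at least one full tick of waiting. A
sketch of the conversion and clamp; the HZ value is illustrative:

    #include <stdio.h>

    #define HZ 100                          /* illustrative; 10 ms per jiffy */

    /* Convert a millisecond budget to jiffies, never returning less than 2,
     * since a 1-jiffy timeout may expire after an arbitrarily short delay.
     */
    static unsigned long poll_timeout_jiffies(unsigned int msec)
    {
            unsigned long timeout = (msec * HZ + 999) / 1000; /* round up */

            return timeout < 2 ? 2 : timeout;
    }

    int main(void)
    {
            printf("%lu\n", poll_timeout_jiffies(1));   /* 2, clamped */
            printf("%lu\n", poll_timeout_jiffies(100)); /* 10 jiffies */
            return 0;
    }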
dev_kfree_skb_any(skb);
dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
- rx_desc->data_size, DMA_FROM_DEVICE);
+ MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
}
if (rx_done)
}
dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
- rx_desc->data_size, DMA_FROM_DEVICE);
+ MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
rx_bytes = rx_desc->data_size -
(ETH_FCS_LEN + MVNETA_MH_SIZE);
{
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_en_dev *mdev = priv->mdev;
- struct mlx4_en_tx_ring *tx_ring;
int i, carrier_ok;
memset(buf, 0, sizeof(u64) * MLX4_EN_NUM_SELF_TEST);
carrier_ok = netif_carrier_ok(dev);
netif_carrier_off(dev);
-retry_tx:
/* Wait until all tx queues are empty.
* there should not be any additional incoming traffic
* since we turned the carrier off */
msleep(200);
- for (i = 0; i < priv->tx_ring_num && carrier_ok; i++) {
- tx_ring = priv->tx_ring[i];
- if (tx_ring->prod != (tx_ring->cons + tx_ring->last_nr_txbb))
- goto retry_tx;
- }
if (priv->mdev->dev->caps.flags &
MLX4_DEV_CAP_FLAG_UC_LOOPBACK) {
return -ENOMEM;
ret = pci_register_driver(&mlx4_driver);
+ if (ret < 0)
+ destroy_workqueue(mlx4_wq);
return ret < 0 ? ret : 0;
}
sg_dma_len(&ctl->sg) += 4 - sg_dma_len(&ctl->sg) % 4;
ctl->adesc = dmaengine_prep_slave_sg(ctl->chan,
- &ctl->sg, 1, DMA_MEM_TO_DEV,
- DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP);
+ &ctl->sg, 1, DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
if (!ctl->adesc)
return NETDEV_TX_BUSY;
sg_dma_len(sg) = DMA_BUFFER_SIZE;
ctl->adesc = dmaengine_prep_slave_sg(ctl->chan,
- sg, 1, DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_SRC_UNMAP);
+ sg, 1, DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
if (!ctl->adesc)
goto out;
{
struct fe_priv *np = netdev_priv(dev);
u8 __iomem *base = get_hwbase(dev);
- int result;
- memset(buffer, 0, nv_get_sset_count(dev, ETH_SS_TEST)*sizeof(u64));
+ int result, count;
+
+ count = nv_get_sset_count(dev, ETH_SS_TEST);
+ memset(buffer, 0, count * sizeof(u64));
if (!nv_link_test(dev)) {
test->flags |= ETH_TEST_FL_FAILED;
return;
}
- if (!nv_loopback_test(dev)) {
+ if (count > NV_TEST_COUNT_BASE && !nv_loopback_test(dev)) {
test->flags |= ETH_TEST_FL_FAILED;
buffer[3] = 1;
}
u32 seq_number;
u8 vhdr_len = 0;
- if (unlikely(ring > adapter->max_rds_rings))
+ if (unlikely(ring >= adapter->max_rds_rings))
return NULL;
rds_ring = &recv_ctx->rds_rings[ring];
index = netxen_get_lro_sts_refhandle(sts_data0);
- if (unlikely(index > rds_ring->num_desc))
+ if (unlikely(index >= rds_ring->num_desc))
return NULL;
buffer = &rds_ring->rx_buf_arr[index];
struct qlcnic_mailbox *mailbox;
u8 extend_lb_time;
u8 phys_port_id[ETH_ALEN];
+ u8 lb_mode;
};
struct qlcnic_adapter_stats {
dma_addr_t phys_addr;
dma_addr_t hw_cons_phys_addr;
struct netdev_queue *txq;
+ /* Lock to protect Tx descriptors cleanup */
+ spinlock_t tx_clean_lock;
} ____cacheline_internodealigned_in_smp;
/*
#define QLCNIC_ILB_MODE 0x1
#define QLCNIC_ELB_MODE 0x2
+#define QLCNIC_LB_MODE_MASK 0x3
#define QLCNIC_LINKEVENT 0x1
#define QLCNIC_LB_RESPONSE 0x2
struct qlcnic_filter_hash rx_fhash;
struct list_head vf_mc_list;
- spinlock_t tx_clean_lock;
spinlock_t mac_learn_lock;
/* spinlock for catching rcv filters for eswitch traffic */
spinlock_t rx_mac_learn_lock;
qlcnic_83xx_poll_process_aen(adapter);
- if (ahw->diag_test == QLCNIC_INTERRUPT_TEST) {
- ahw->diag_cnt++;
+ if (ahw->diag_test) {
+ if (ahw->diag_test == QLCNIC_INTERRUPT_TEST)
+ ahw->diag_cnt++;
qlcnic_83xx_enable_legacy_msix_mbx_intr(adapter);
return IRQ_HANDLED;
}
}
if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) {
- /* disable and free mailbox interrupt */
- if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) {
- qlcnic_83xx_enable_mbx_poll(adapter);
- qlcnic_83xx_free_mbx_intr(adapter);
- }
adapter->ahw->loopback_state = 0;
adapter->ahw->hw_ops->setup_link_event(adapter, 1);
}
{
struct qlcnic_adapter *adapter = netdev_priv(netdev);
struct qlcnic_host_sds_ring *sds_ring;
- int ring, err;
+ int ring;
clear_bit(__QLCNIC_DEV_UP, &adapter->state);
if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) {
for (ring = 0; ring < adapter->drv_sds_rings; ring++) {
sds_ring = &adapter->recv_ctx->sds_rings[ring];
- qlcnic_83xx_disable_intr(adapter, sds_ring);
- if (!(adapter->flags & QLCNIC_MSIX_ENABLED))
- qlcnic_83xx_enable_mbx_poll(adapter);
+ if (adapter->flags & QLCNIC_MSIX_ENABLED)
+ qlcnic_83xx_disable_intr(adapter, sds_ring);
}
}
qlcnic_fw_destroy_ctx(adapter);
qlcnic_detach(adapter);
- if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) {
- if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) {
- err = qlcnic_83xx_setup_mbx_intr(adapter);
- qlcnic_83xx_disable_mbx_poll(adapter);
- if (err) {
- dev_err(&adapter->pdev->dev,
- "%s: failed to setup mbx interrupt\n",
- __func__);
- goto out;
- }
- }
- }
adapter->ahw->diag_test = 0;
adapter->drv_sds_rings = drv_sds_rings;
if (netif_running(netdev))
__qlcnic_up(adapter, netdev);
- if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST &&
- !(adapter->flags & QLCNIC_MSIX_ENABLED))
- qlcnic_83xx_disable_mbx_poll(adapter);
out:
netif_device_attach(netdev);
}
}
} while ((adapter->ahw->linkup && ahw->has_link_events) != 1);
- /* Make sure carrier is off and queue is stopped during loopback */
- if (netif_running(netdev)) {
- netif_carrier_off(netdev);
- netif_tx_stop_all_queues(netdev);
- }
-
ret = qlcnic_do_lb_test(adapter, mode);
qlcnic_83xx_clear_lb_mode(adapter, mode);
ahw->link_autoneg = MSB(MSW(data[3]));
ahw->module_type = MSB(LSW(data[3]));
ahw->has_link_events = 1;
+ ahw->lb_mode = data[4] & QLCNIC_LB_MODE_MASK;
qlcnic_advert_link_change(adapter, link_status);
}
return;
}
+static inline void qlcnic_dump_mailbox_registers(struct qlcnic_adapter *adapter)
+{
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
+ u32 offset;
+
+ offset = QLCRDX(ahw, QLCNIC_DEF_INT_MASK);
+ dev_info(&adapter->pdev->dev, "Mbx interrupt mask=0x%x, Mbx interrupt enable=0x%x, Host mbx control=0x%x, Fw mbx control=0x%x",
+ readl(ahw->pci_base0 + offset),
+ QLCRDX(ahw, QLCNIC_MBX_INTR_ENBL),
+ QLCRDX(ahw, QLCNIC_HOST_MBX_CTRL),
+ QLCRDX(ahw, QLCNIC_FW_MBX_CTRL));
+}
+
static void qlcnic_83xx_mailbox_worker(struct work_struct *work)
{
struct qlcnic_mailbox *mbx = container_of(work, struct qlcnic_mailbox,
__func__, cmd->cmd_op, cmd->type, ahw->pci_func,
ahw->op_mode);
clear_bit(QLC_83XX_MBX_READY, &mbx->status);
+ qlcnic_dump_mailbox_registers(adapter);
+ qlcnic_83xx_get_mbx_data(adapter, cmd);
qlcnic_dump_mbx(adapter, cmd);
qlcnic_83xx_idc_request_reset(adapter,
QLCNIC_FORCE_FW_DUMP_KEY);
pci_channel_state_t);
pci_ers_result_t qlcnic_83xx_io_slot_reset(struct pci_dev *);
void qlcnic_83xx_io_resume(struct pci_dev *);
+void qlcnic_83xx_stop_hw(struct qlcnic_adapter *);
#endif
adapter->ahw->idc.err_code = -EIO;
dev_err(&adapter->pdev->dev,
"%s: Device in unknown state\n", __func__);
+ clear_bit(__QLCNIC_RESETTING, &adapter->state);
return 0;
}
struct qlcnic_hardware_context *ahw = adapter->ahw;
struct qlcnic_mailbox *mbx = ahw->mailbox;
int ret = 0;
- u32 owner;
u32 val;
/* Perform NIC configuration based ready state entry actions */
set_bit(__QLCNIC_RESETTING, &adapter->state);
qlcnic_83xx_idc_enter_need_reset_state(adapter, 1);
} else {
- owner = qlcnic_83xx_idc_find_reset_owner_id(adapter);
- if (ahw->pci_func == owner)
- qlcnic_dump_fw(adapter);
+ netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n",
+ __func__);
+ qlcnic_83xx_idc_enter_failed_state(adapter, 1);
}
return -EIO;
}
return 0;
}
-static int qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter)
+static void qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter)
{
- dev_err(&adapter->pdev->dev, "%s: please restart!!\n", __func__);
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
+ u32 val, owner;
+
+ val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL);
+ if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) {
+ owner = qlcnic_83xx_idc_find_reset_owner_id(adapter);
+ if (ahw->pci_func == owner) {
+ qlcnic_83xx_stop_hw(adapter);
+ qlcnic_dump_fw(adapter);
+ }
+ }
+
+ netdev_warn(adapter->netdev, "%s: Reboot will be required to recover the adapter!!\n",
+ __func__);
clear_bit(__QLCNIC_RESETTING, &adapter->state);
- adapter->ahw->idc.err_code = -EIO;
+ ahw->idc.err_code = -EIO;
- return 0;
+ return;
}
static int qlcnic_83xx_idc_quiesce_state(struct qlcnic_adapter *adapter)
adapter->ahw->idc.prev_state = adapter->ahw->idc.curr_state;
qlcnic_83xx_periodic_tasks(adapter);
- /* Do not reschedule if firmaware is in hanged state and auto
- * recovery is disabled
- */
- if ((adapter->flags & QLCNIC_FW_HANG) && !qlcnic_auto_fw_reset)
- return;
-
/* Re-schedule the function */
if (test_bit(QLC_83XX_MODULE_LOADED, &adapter->ahw->idc.status))
qlcnic_schedule_work(adapter, qlcnic_83xx_idc_poll_dev_state,
}
val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL);
- if ((val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) ||
- !qlcnic_auto_fw_reset) {
- dev_err(&adapter->pdev->dev,
- "%s:failed, device in non reset mode\n", __func__);
+ if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) {
+ netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n",
+ __func__);
+ qlcnic_83xx_idc_enter_failed_state(adapter, 0);
qlcnic_83xx_unlock_driver(adapter);
return;
}
if (size & 0xF)
size = (size + 16) & ~0xF;
- p_cache = kzalloc(size, GFP_KERNEL);
+ p_cache = vzalloc(size);
if (p_cache == NULL)
return -ENOMEM;
ret = qlcnic_83xx_lockless_flash_read32(adapter, src, p_cache,
size / sizeof(u32));
if (ret) {
- kfree(p_cache);
+ vfree(p_cache);
return ret;
}
/* 16 byte write to MS memory */
ret = qlcnic_83xx_ms_mem_write128(adapter, dest, (u32 *)p_cache,
size / 16);
if (ret) {
- kfree(p_cache);
+ vfree(p_cache);
return ret;
}
- kfree(p_cache);
+ vfree(p_cache);
return ret;
}
p_dev->ahw->reset.seq_index = index;
}
-static void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev)
+void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev)
{
p_dev->ahw->reset.seq_index = 0;
val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL);
if (!(val & QLC_83XX_IDC_GRACEFULL_RESET))
qlcnic_dump_fw(adapter);
+
+ if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) {
+ netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n",
+ __func__);
+ qlcnic_83xx_idc_enter_failed_state(adapter, 1);
+ return err;
+ }
+
qlcnic_83xx_init_hw(adapter);
if (qlcnic_83xx_copy_bootloader(adapter))
ahw->nic_mode = QLCNIC_DEFAULT_MODE;
adapter->nic_ops->init_driver = qlcnic_83xx_init_default_driver;
ahw->idc.state_entry = qlcnic_83xx_idc_ready_state_entry;
- adapter->max_sds_rings = ahw->max_rx_ques;
- adapter->max_tx_rings = ahw->max_tx_ques;
+ adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS;
+ adapter->max_tx_rings = QLCNIC_MAX_TX_RINGS;
} else {
return -EIO;
}
static int qlcnic_validate_ring_count(struct qlcnic_adapter *adapter,
u8 rx_ring, u8 tx_ring)
{
+ if (rx_ring == 0 || tx_ring == 0)
+ return -EINVAL;
+
if (rx_ring != 0) {
if (rx_ring > adapter->max_sds_rings) {
- netdev_err(adapter->netdev, "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n",
+ netdev_err(adapter->netdev,
+ "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n",
rx_ring, adapter->max_sds_rings);
return -EINVAL;
}
}
if (tx_ring != 0) {
- if (qlcnic_82xx_check(adapter) &&
- (tx_ring > adapter->max_tx_rings)) {
+ if (tx_ring > adapter->max_tx_rings) {
netdev_err(adapter->netdev,
"Invalid ring count, Tx ring count %d should not be greater than max %d driver Tx rings.\n",
tx_ring, adapter->max_tx_rings);
return -EINVAL;
}
-
- if (qlcnic_83xx_check(adapter) &&
- (tx_ring > QLCNIC_SINGLE_RING)) {
- netdev_err(adapter->netdev,
- "Invalid ring count, Tx ring count %d should not be greater than %d driver Tx rings.\n",
- tx_ring, QLCNIC_SINGLE_RING);
- return -EINVAL;
- }
}
return 0;
struct qlcnic_hardware_context *ahw = adapter->ahw;
struct qlcnic_cmd_args cmd;
int ret, drv_sds_rings = adapter->drv_sds_rings;
+ int drv_tx_rings = adapter->drv_tx_rings;
if (qlcnic_83xx_check(adapter))
return qlcnic_83xx_interrupt_test(netdev);
clear_diag_irq:
adapter->drv_sds_rings = drv_sds_rings;
+ adapter->drv_tx_rings = drv_tx_rings;
clear_bit(__QLCNIC_RESETTING, &adapter->state);
return ret;
struct qlcnic_skb_frag *buffrag;
int i, j;
+ spin_lock(&tx_ring->tx_clean_lock);
+
cmd_buf = tx_ring->cmd_buf_arr;
for (i = 0; i < tx_ring->num_desc; i++) {
buffrag = cmd_buf->frag_array;
}
cmd_buf++;
}
+
+ spin_unlock(&tx_ring->tx_clean_lock);
}
void qlcnic_free_sw_resources(struct qlcnic_adapter *adapter)
if (adapter->ahw->linkup && !linkup) {
netdev_info(netdev, "NIC Link is down\n");
adapter->ahw->linkup = 0;
- if (netif_running(netdev)) {
- netif_carrier_off(netdev);
- netif_tx_stop_all_queues(netdev);
- }
+ netif_carrier_off(netdev);
} else if (!adapter->ahw->linkup && linkup) {
+ /* Do not advertise Link up if the port is in loopback mode */
+ if (qlcnic_83xx_check(adapter) && adapter->ahw->lb_mode)
+ return;
+
netdev_info(netdev, "NIC Link is up\n");
adapter->ahw->linkup = 1;
- if (netif_running(netdev)) {
- netif_carrier_on(netdev);
- netif_wake_queue(netdev);
- }
+ netif_carrier_on(netdev);
}
}
struct net_device *netdev = adapter->netdev;
struct qlcnic_skb_frag *frag;
- if (!spin_trylock(&adapter->tx_clean_lock))
+ if (!spin_trylock(&tx_ring->tx_clean_lock))
return 1;
sw_consumer = tx_ring->sw_consumer;
break;
}
+ tx_ring->sw_consumer = sw_consumer;
+
if (count && netif_running(netdev)) {
- tx_ring->sw_consumer = sw_consumer;
smp_mb();
if (netif_tx_queue_stopped(tx_ring->txq) &&
netif_carrier_ok(netdev)) {
*/
hw_consumer = le32_to_cpu(*(tx_ring->hw_consumer));
done = (sw_consumer == hw_consumer);
- spin_unlock(&adapter->tx_clean_lock);
+
+ spin_unlock(&tx_ring->tx_clean_lock);
return done;
}
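
These hunks move tx_clean_lock from the adapter into each Tx ring (a later hunk adds the per-ring spin_lock_init()), so cleaning completions on one ring no longer serialises against every other ring. A minimal sketch of the per-ring pattern, with hypothetical names:

	struct demo_tx_ring {
		spinlock_t tx_clean_lock;	/* protects this ring only */
		/* ... descriptors, sw/hw consumer indices ... */
	};

	static bool demo_clean_tx_ring(struct demo_tx_ring *ring)
	{
		if (!spin_trylock(&ring->tx_clean_lock))
			return false;	/* another CPU is cleaning this ring */
		/* ... reclaim completed Tx buffers for this ring ... */
		spin_unlock(&ring->tx_clean_lock);
		return true;
	}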
} else {
adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE;
adapter->max_tx_rings = QLCNIC_MAX_HW_TX_RINGS;
+ adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS;
adapter->flags &= ~QLCNIC_ESWITCH_ENABLED;
}
if (qlcnic_sriov_vf_check(adapter))
qlcnic_sriov_cleanup_async_list(&adapter->ahw->sriov->bc);
smp_mb();
- spin_lock(&adapter->tx_clean_lock);
netif_carrier_off(netdev);
adapter->ahw->linkup = 0;
netif_tx_disable(netdev);
for (ring = 0; ring < adapter->drv_tx_rings; ring++)
qlcnic_release_tx_buffers(adapter, &adapter->tx_ring[ring]);
- spin_unlock(&adapter->tx_clean_lock);
}
/* Usage: During suspend and firmware recovery module */
qlcnic_detach(adapter);
adapter->drv_sds_rings = QLCNIC_SINGLE_RING;
- adapter->drv_tx_rings = QLCNIC_SINGLE_RING;
adapter->ahw->diag_test = test;
adapter->ahw->linkup = 0;
}
memset(cmd_buf_arr, 0, TX_BUFF_RINGSIZE(tx_ring));
tx_ring->cmd_buf_arr = cmd_buf_arr;
+ spin_lock_init(&tx_ring->tx_clean_lock);
}
if (qlcnic_83xx_check(adapter) ||
rwlock_init(&adapter->ahw->crb_lock);
mutex_init(&adapter->ahw->mem_lock);
- spin_lock_init(&adapter->tx_clean_lock);
INIT_LIST_HEAD(&adapter->mac_list);
qlcnic_register_dcb(adapter);
num_vfs = sriov->num_vfs;
max = num_vfs + 1;
info->bit_offsets = 0xffff;
- info->max_tx_ques = res->num_tx_queues / max;
info->max_rx_mcast_mac_filters = res->num_rx_mcast_mac_filters;
num_vf_macs = QLCNIC_SRIOV_VF_MAX_MAC;
info->max_tx_mac_filters = temp;
info->min_tx_bw = 0;
info->max_tx_bw = MAX_BW;
+ info->max_tx_ques = res->num_tx_queues - sriov->num_vfs;
} else {
id = qlcnic_sriov_func_to_index(adapter, func);
if (id < 0)
info->max_tx_bw = vp->max_tx_bw;
info->max_rx_ucast_mac_filters = num_vf_macs;
info->max_tx_mac_filters = num_vf_macs;
+ info->max_tx_ques = QLCNIC_SINGLE_RING;
}
info->max_rx_ip_addr = res->num_destip / max;
*/
#define DRV_NAME "qlge"
#define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver "
-#define DRV_VERSION "1.00.00.33"
+#define DRV_VERSION "1.00.00.34"
#define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */
};
#define QLGE_TEST_LEN (sizeof(ql_gstrings_test) / ETH_GSTRING_LEN)
#define QLGE_STATS_LEN ARRAY_SIZE(ql_gstrings_stats)
+#define QLGE_RCV_MAC_ERR_STATS 7
static int ql_update_ring_coalescing(struct ql_adapter *qdev)
{
iter++;
}
+ /* Update receive mac error statistics */
+ iter += QLGE_RCV_MAC_ERR_STATS;
+
/*
* Get Per-priority TX pause frame counter statistics.
*/
netdev_features_t features)
{
int err;
- /*
- * Since there is no support for separate rx/tx vlan accel
- * enable/disable make sure tx flag is always in same state as rx.
- */
- if (features & NETIF_F_HW_VLAN_CTAG_RX)
- features |= NETIF_F_HW_VLAN_CTAG_TX;
- else
- features &= ~NETIF_F_HW_VLAN_CTAG_TX;
/* Update the behavior of vlan accel in the adapter */
err = qlge_update_hw_vlan_features(ndev, features);
le32_to_cpu(txd->opts1) & 0xffff,
PCI_DMA_TODEVICE);
- bytes_compl += skb->len;
- pkts_compl++;
-
if (status & LastFrag) {
if (status & (TxError | TxFIFOUnder)) {
netif_dbg(cp, tx_err, cp->dev,
netif_dbg(cp, tx_done, cp->dev,
"tx done, slot %d\n", tx_tail);
}
+ bytes_compl += skb->len;
+ pkts_compl++;
dev_kfree_skb_irq(skb);
}
rtl_writephy(tp, 0x14, 0x9065);
rtl_writephy(tp, 0x14, 0x1065);
+ /* Check ALDPS bit, disable it if enabled */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ if (rtl_readphy(tp, 0x10) & 0x0004)
+ rtl_w1w0_phy(tp, 0x10, 0x0000, 0x0004);
+
rtl_writephy(tp, 0x1f, 0x0000);
}
EFX_MAX_FRAME_LEN(efx->net_dev->mtu) +
efx->type->rx_buffer_padding);
rx_buf_len = (sizeof(struct efx_rx_page_state) +
- NET_IP_ALIGN + efx->rx_dma_len);
+ efx->rx_ip_align + efx->rx_dma_len);
if (rx_buf_len <= PAGE_SIZE) {
efx->rx_scatter = efx->type->always_rx_scatter;
efx->rx_buffer_order = 0;
WARN_ON(channel->rx_pkt_n_frags);
}
+ efx_ptp_start_datapath(efx);
+
if (netif_device_present(efx->net_dev))
netif_tx_wake_all_queues(efx->net_dev);
}
EFX_ASSERT_RESET_SERIALISED(efx);
BUG_ON(efx->port_enabled);
+ efx_ptp_stop_datapath(efx);
+
/* Stop RX refill */
efx_for_each_channel(channel, efx) {
efx_for_each_channel_rx_queue(rx_queue, channel)
efx->net_dev = net_dev;
efx->rx_prefix_size = efx->type->rx_prefix_size;
+ efx->rx_ip_align =
+ NET_IP_ALIGN ? (efx->rx_prefix_size + NET_IP_ALIGN) % 4 : 0;
efx->rx_packet_hash_offset =
efx->type->rx_hash_offset - efx->type->rx_prefix_size;
spin_lock_init(&efx->stats_lock);
static void efx_mcdi_timeout_async(unsigned long context);
static int efx_mcdi_drv_attach(struct efx_nic *efx, bool driver_operating,
bool *was_attached_out);
+static bool efx_mcdi_poll_once(struct efx_nic *efx);
static inline struct efx_mcdi_iface *efx_mcdi(struct efx_nic *efx)
{
}
}
+static bool efx_mcdi_poll_once(struct efx_nic *efx)
+{
+ struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
+
+ rmb();
+ if (!efx->type->mcdi_poll_response(efx))
+ return false;
+
+ spin_lock_bh(&mcdi->iface_lock);
+ efx_mcdi_read_response_header(efx);
+ spin_unlock_bh(&mcdi->iface_lock);
+
+ return true;
+}
+
static int efx_mcdi_poll(struct efx_nic *efx)
{
struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
time = jiffies;
- rmb();
- if (efx->type->mcdi_poll_response(efx))
+ if (efx_mcdi_poll_once(efx))
break;
if (time_after(time, finish))
return -ETIMEDOUT;
}
- spin_lock_bh(&mcdi->iface_lock);
- efx_mcdi_read_response_header(efx);
- spin_unlock_bh(&mcdi->iface_lock);
-
/* Return rc=0 like wait_event_timeout() */
return 0;
}
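
Factoring the poll step into efx_mcdi_poll_once() lets the timeout path in the next hunk re-check for a completion whose event was lost: if the MC did finish the request, the command is recovered rather than failed. A sketch of that recovery shape, assuming a hypothetical wait_for_completion_event() wrapper:

	rc = wait_for_completion_event();		/* hypothetical */
	if (rc == -ETIMEDOUT && efx_mcdi_poll_once(efx)) {
		/* The MC completed the request but the event was
		 * lost; take the response instead of timing out.
		 */
		rc = 0;
	}

The rmb() kept inside the helper preserves the read ordering the old poll loop relied on before parsing the response header.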
rc = efx_mcdi_await_completion(efx);
if (rc != 0) {
+ netif_err(efx, hw, efx->net_dev,
+ "MC command 0x%x inlen %d mode %d timed out\n",
+ cmd, (int)inlen, mcdi->mode);
+
+ if (mcdi->mode == MCDI_MODE_EVENTS && efx_mcdi_poll_once(efx)) {
+ netif_err(efx, hw, efx->net_dev,
+ "MCDI request was completed without an event\n");
+ rc = 0;
+ }
+
/* Close the race with efx_mcdi_ev_cpl() executing just too late
* and completing a request we've just cancelled, by ensuring
* that the seqno check therein fails.
++mcdi->seqno;
++mcdi->credits;
spin_unlock_bh(&mcdi->iface_lock);
+ }
- netif_err(efx, hw, efx->net_dev,
- "MC command 0x%x inlen %d mode %d timed out\n",
- cmd, (int)inlen, mcdi->mode);
- } else {
+ if (rc == 0) {
size_t hdr_len, data_len;
/* At the very least we need a memory barrier here to ensure
unsigned long last_update;
struct device *device;
struct efx_mcdi_mon_attribute *attrs;
+ struct attribute_group group;
+ const struct attribute_group *groups[2];
unsigned int n_attrs;
};
return rc;
}
-static ssize_t efx_mcdi_mon_show_name(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- return sprintf(buf, "%s\n", KBUILD_MODNAME);
-}
-
static int efx_mcdi_mon_get_entry(struct device *dev, unsigned int index,
efx_dword_t *entry)
{
- struct efx_nic *efx = dev_get_drvdata(dev);
+ struct efx_nic *efx = dev_get_drvdata(dev->parent);
struct efx_mcdi_mon *hwmon = efx_mcdi_mon(efx);
int rc;
efx_mcdi_sensor_type[mon_attr->type].label);
}
-static int
+static void
efx_mcdi_mon_add_attr(struct efx_nic *efx, const char *name,
ssize_t (*reader)(struct device *,
struct device_attribute *, char *),
{
struct efx_mcdi_mon *hwmon = efx_mcdi_mon(efx);
struct efx_mcdi_mon_attribute *attr = &hwmon->attrs[hwmon->n_attrs];
- int rc;
strlcpy(attr->name, name, sizeof(attr->name));
attr->index = index;
attr->dev_attr.attr.name = attr->name;
attr->dev_attr.attr.mode = S_IRUGO;
attr->dev_attr.show = reader;
- rc = device_create_file(&efx->pci_dev->dev, &attr->dev_attr);
- if (rc == 0)
- ++hwmon->n_attrs;
- return rc;
+ hwmon->group.attrs[hwmon->n_attrs++] = &attr->dev_attr.attr;
}
int efx_mcdi_mon_probe(struct efx_nic *efx)
efx_mcdi_mon_update(efx);
/* Allocate space for the maximum possible number of
- * attributes for this set of sensors: name of the driver plus
+ * attributes for this set of sensors:
* value, min, max, crit, alarm and label for each sensor.
*/
- n_attrs = 1 + 6 * n_sensors;
+ n_attrs = 6 * n_sensors;
hwmon->attrs = kcalloc(n_attrs, sizeof(*hwmon->attrs), GFP_KERNEL);
if (!hwmon->attrs) {
rc = -ENOMEM;
goto fail;
}
-
- hwmon->device = hwmon_device_register(&efx->pci_dev->dev);
- if (IS_ERR(hwmon->device)) {
- rc = PTR_ERR(hwmon->device);
+ hwmon->group.attrs = kcalloc(n_attrs + 1, sizeof(struct attribute *),
+ GFP_KERNEL);
+ if (!hwmon->group.attrs) {
+ rc = -ENOMEM;
goto fail;
}
- rc = efx_mcdi_mon_add_attr(efx, "name", efx_mcdi_mon_show_name, 0, 0, 0);
- if (rc)
- goto fail;
-
for (i = 0, j = -1, type = -1; ; i++) {
enum efx_hwmon_type hwmon_type;
const char *hwmon_prefix;
page = type / 32;
j = -1;
if (page == n_pages)
- return 0;
+ goto hwmon_register;
MCDI_SET_DWORD(inbuf, SENSOR_INFO_EXT_IN_PAGE,
page);
if (min1 != max1) {
snprintf(name, sizeof(name), "%s%u_input",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_value, i, type, 0);
- if (rc)
- goto fail;
if (hwmon_type != EFX_HWMON_POWER) {
snprintf(name, sizeof(name), "%s%u_min",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_limit,
i, type, min1);
- if (rc)
- goto fail;
}
snprintf(name, sizeof(name), "%s%u_max",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_limit,
i, type, max1);
- if (rc)
- goto fail;
if (min2 != max2) {
/* Assume max2 is critical value.
*/
snprintf(name, sizeof(name), "%s%u_crit",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_limit,
i, type, max2);
- if (rc)
- goto fail;
}
}
snprintf(name, sizeof(name), "%s%u_alarm",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_alarm, i, type, 0);
- if (rc)
- goto fail;
if (type < ARRAY_SIZE(efx_mcdi_sensor_type) &&
efx_mcdi_sensor_type[type].label) {
snprintf(name, sizeof(name), "%s%u_label",
hwmon_prefix, hwmon_index);
- rc = efx_mcdi_mon_add_attr(
+ efx_mcdi_mon_add_attr(
efx, name, efx_mcdi_mon_show_label, i, type, 0);
- if (rc)
- goto fail;
}
}
+hwmon_register:
+ hwmon->groups[0] = &hwmon->group;
+ hwmon->device = hwmon_device_register_with_groups(&efx->pci_dev->dev,
+ KBUILD_MODNAME, NULL,
+ hwmon->groups);
+ if (IS_ERR(hwmon->device)) {
+ rc = PTR_ERR(hwmon->device);
+ goto fail;
+ }
+
+ return 0;
+
fail:
efx_mcdi_mon_remove(efx);
return rc;
void efx_mcdi_mon_remove(struct efx_nic *efx)
{
struct efx_mcdi_mon *hwmon = efx_mcdi_mon(efx);
- unsigned int i;
- for (i = 0; i < hwmon->n_attrs; i++)
- device_remove_file(&efx->pci_dev->dev,
- &hwmon->attrs[i].dev_attr);
- kfree(hwmon->attrs);
if (hwmon->device)
hwmon_device_unregister(hwmon->device);
+ kfree(hwmon->attrs);
+ kfree(hwmon->group.attrs);
efx_nic_free_buffer(efx, &hwmon->dma_buf);
}
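
This block migrates from one device_create_file() call per sensor attribute to building a NULL-terminated attribute array and registering it via hwmon_device_register_with_groups(). The hwmon core then creates all the sysfs files itself, including the standard "name" attribute (hence the removal of efx_mcdi_mon_show_name), and the attributes hang off the hwmon device whose parent is the PCI device, which is why the show methods now use dev_get_drvdata(dev->parent). A minimal sketch of the registration pattern, with illustrative "demo" names:

	#include <linux/hwmon.h>

	static struct attribute *demo_attrs[2];	/* filled at probe, NULL-terminated */

	static struct attribute_group demo_group = {
		.attrs = demo_attrs,
	};

	static const struct attribute_group *demo_groups[] = {
		&demo_group,
		NULL,
	};

	static int demo_register(struct device *parent, void *drvdata)
	{
		struct device *hwmon_dev;

		hwmon_dev = hwmon_device_register_with_groups(parent, "demo",
							      drvdata,
							      demo_groups);
		return PTR_ERR_OR_ZERO(hwmon_dev);
	}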
* @n_channels: Number of channels in use
* @n_rx_channels: Number of channels used for RX (= number of RX queues)
* @n_tx_channels: Number of channels used for TX
+ * @rx_ip_align: RX DMA address offset to have IP header aligned in
+ * accordance with NET_IP_ALIGN
* @rx_dma_len: Current maximum RX DMA length
* @rx_buffer_order: Order (log2) of number of pages for each RX buffer
* @rx_buffer_truesize: Amortised allocation size of an RX buffer,
unsigned rss_spread;
unsigned tx_channel_offset;
unsigned n_tx_channels;
+ unsigned int rx_ip_align;
unsigned int rx_dma_len;
unsigned int rx_buffer_order;
unsigned int rx_buffer_truesize;
bool efx_ptp_is_ptp_tx(struct efx_nic *efx, struct sk_buff *skb);
int efx_ptp_tx(struct efx_nic *efx, struct sk_buff *skb);
void efx_ptp_event(struct efx_nic *efx, efx_qword_t *ev);
+void efx_ptp_start_datapath(struct efx_nic *efx);
+void efx_ptp_stop_datapath(struct efx_nic *efx);
extern const struct efx_nic_type falcon_a1_nic_type;
extern const struct efx_nic_type falcon_b0_nic_type;
* @evt_list: List of MC receive events awaiting packets
* @evt_free_list: List of free events
* @evt_lock: Lock for manipulating evt_list and evt_free_list
+ * @evt_overflow: Boolean indicating that event list has overflowed
* @rx_evts: Instantiated events (on evt_list and evt_free_list)
* @workwq: Work queue for processing pending PTP operations
* @work: Work task
struct list_head evt_list;
struct list_head evt_free_list;
spinlock_t evt_lock;
+ bool evt_overflow;
struct efx_ptp_event_rx rx_evts[MAX_RECEIVE_EVENTS];
struct workqueue_struct *workwq;
struct work_struct work;
}
}
}
+ /* If the event overflow flag is set and the event list is now empty
+ * clear the flag to re-enable the overflow warning message.
+ */
+ if (ptp->evt_overflow && list_empty(&ptp->evt_list))
+ ptp->evt_overflow = false;
spin_unlock_bh(&ptp->evt_lock);
}
break;
}
}
+ /* If the event overflow flag is set and the event list is now empty
+ * clear the flag to re-enable the overflow warning message.
+ */
+ if (ptp->evt_overflow && list_empty(&ptp->evt_list))
+ ptp->evt_overflow = false;
spin_unlock_bh(&ptp->evt_lock);
return rc;
__skb_queue_tail(q, skb);
} else if (time_after(jiffies, match->expiry)) {
match->state = PTP_PACKET_STATE_TIMED_OUT;
- netif_warn(efx, rx_err, efx->net_dev,
- "PTP packet - no timestamp seen\n");
+ if (net_ratelimit())
+ netif_warn(efx, rx_err, efx->net_dev,
+ "PTP packet - no timestamp seen\n");
__skb_queue_tail(q, skb);
} else {
/* Replace unprocessed entry and stop */
static int efx_ptp_stop(struct efx_nic *efx)
{
struct efx_ptp_data *ptp = efx->ptp_data;
- int rc = efx_ptp_disable(efx);
struct list_head *cursor;
struct list_head *next;
+ int rc;
+
+ if (ptp == NULL)
+ return 0;
+
+ rc = efx_ptp_disable(efx);
if (ptp->rxfilter_installed) {
efx_filter_remove_id_safe(efx, EFX_FILTER_PRI_REQUIRED,
list_for_each_safe(cursor, next, &efx->ptp_data->evt_list) {
list_move(cursor, &efx->ptp_data->evt_free_list);
}
+ ptp->evt_overflow = false;
spin_unlock_bh(&efx->ptp_data->evt_lock);
return rc;
}
+static int efx_ptp_restart(struct efx_nic *efx)
+{
+ if (efx->ptp_data && efx->ptp_data->enabled)
+ return efx_ptp_start(efx);
+ return 0;
+}
+
static void efx_ptp_pps_worker(struct work_struct *work)
{
struct efx_ptp_data *ptp =
spin_lock_init(&ptp->evt_lock);
for (pos = 0; pos < MAX_RECEIVE_EVENTS; pos++)
list_add(&ptp->rx_evts[pos].link, &ptp->evt_free_list);
+ ptp->evt_overflow = false;
ptp->phc_clock_info.owner = THIS_MODULE;
snprintf(ptp->phc_clock_info.name,
skb->len >= PTP_MIN_LENGTH &&
skb->len <= MC_CMD_PTP_IN_TRANSMIT_PACKET_MAXNUM &&
likely(skb->protocol == htons(ETH_P_IP)) &&
+ skb_transport_header_was_set(skb) &&
+ skb_network_header_len(skb) >= sizeof(struct iphdr) &&
ip_hdr(skb)->protocol == IPPROTO_UDP &&
+ skb_headlen(skb) >=
+ skb_transport_offset(skb) + sizeof(struct udphdr) &&
udp_hdr(skb)->dest == htons(PTP_EVENT_PORT);
}
{
if ((enable_wanted != efx->ptp_data->enabled) ||
(enable_wanted && (efx->ptp_data->mode != new_mode))) {
- int rc;
+ int rc = 0;
if (enable_wanted) {
/* Change of mode requires disable */
* succeed.
*/
efx->ptp_data->mode = new_mode;
- rc = efx_ptp_start(efx);
+ if (netif_running(efx->net_dev))
+ rc = efx_ptp_start(efx);
if (rc == 0) {
rc = efx_ptp_synchronize(efx,
PTP_SYNC_ATTEMPTS * 2);
list_add_tail(&evt->link, &ptp->evt_list);
queue_work(ptp->workwq, &ptp->work);
- } else {
- netif_err(efx, rx_err, efx->net_dev, "No free PTP event");
+ } else if (!ptp->evt_overflow) {
+ /* Log a warning message and set the event overflow flag.
+ * The message won't be logged again until the event queue
+ * becomes empty.
+ */
+ netif_err(efx, rx_err, efx->net_dev, "PTP event queue overflow\n");
+ ptp->evt_overflow = true;
}
spin_unlock_bh(&ptp->evt_lock);
}
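
The evt_overflow flag implements a log-once pattern: the first overflow logs an error, repeats stay silent, and the warning only re-arms once the event list has fully drained. A sketch of the shape, with the list manipulation elided:

	/* producer side, under evt_lock */
	if (list_empty(&ptp->evt_free_list)) {
		if (!ptp->evt_overflow) {
			netif_err(efx, rx_err, efx->net_dev,
				  "PTP event queue overflow\n");
			ptp->evt_overflow = true;	/* silence repeats */
		}
	}

	/* consumer side, under evt_lock, after removing entries */
	if (ptp->evt_overflow && list_empty(&ptp->evt_list))
		ptp->evt_overflow = false;		/* re-arm */

This bounds log traffic under sustained overload while still reporting each distinct overflow episode.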
if (rc != 0)
return rc;
- ptp_data->current_adjfreq = delta;
+ ptp_data->current_adjfreq = adjustment_ns;
return 0;
}
MCDI_SET_DWORD(inbuf, PTP_IN_OP, MC_CMD_PTP_OP_ADJUST);
MCDI_SET_DWORD(inbuf, PTP_IN_PERIPH_ID, 0);
- MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, 0);
+ MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, ptp_data->current_adjfreq);
MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_SECONDS, (u32)delta_ts.tv_sec);
MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_NANOSECONDS, (u32)delta_ts.tv_nsec);
return efx_mcdi_rpc(efx, MC_CMD_PTP, inbuf, sizeof(inbuf),
efx->extra_channel_type[EFX_EXTRA_CHANNEL_PTP] =
&efx_ptp_channel_type;
}
+
+void efx_ptp_start_datapath(struct efx_nic *efx)
+{
+ if (efx_ptp_restart(efx))
+ netif_err(efx, drv, efx->net_dev, "Failed to restart PTP.\n");
+}
+
+void efx_ptp_stop_datapath(struct efx_nic *efx)
+{
+ efx_ptp_stop(efx);
+}
void efx_rx_config_page_split(struct efx_nic *efx)
{
- efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + NET_IP_ALIGN,
+ efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + efx->rx_ip_align,
EFX_RX_BUF_ALIGNMENT);
efx->rx_bufs_per_page = efx->rx_buffer_order ? 1 :
((PAGE_SIZE - sizeof(struct efx_rx_page_state)) /
do {
index = rx_queue->added_count & rx_queue->ptr_mask;
rx_buf = efx_rx_buffer(rx_queue, index);
- rx_buf->dma_addr = dma_addr + NET_IP_ALIGN;
+ rx_buf->dma_addr = dma_addr + efx->rx_ip_align;
rx_buf->page = page;
- rx_buf->page_offset = page_offset + NET_IP_ALIGN;
+ rx_buf->page_offset = page_offset + efx->rx_ip_align;
rx_buf->len = efx->rx_dma_len;
rx_buf->flags = 0;
++rx_queue->added_count;
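
Worked example for the rx_ip_align formula introduced above, assuming NET_IP_ALIGN == 2 and ETH_HLEN == 14:

	/*
	 * rx_ip_align = (rx_prefix_size + NET_IP_ALIGN) % 4
	 *
	 * rx_prefix_size = 0:  rx_ip_align = (0 + 2) % 4 = 2
	 *   frame starts at 2; IP header at 2 + 14 = 16, 4-byte aligned
	 * rx_prefix_size = 14: rx_ip_align = (14 + 2) % 4 = 0
	 *   frame starts at 14; IP header at 14 + 14 = 28, aligned
	 */

The buffer offset now accounts for the hardware RX prefix instead of hard-coding NET_IP_ALIGN, so the IP header stays aligned for the (even) prefix sizes the hardware produces.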
#include <linux/mii.h>
#include <linux/workqueue.h>
#include <linux/of.h>
+#include <linux/of_device.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
}
}
+#if IS_BUILTIN(CONFIG_OF)
+static const struct of_device_id smc91x_match[] = {
+ { .compatible = "smsc,lan91c94", },
+ { .compatible = "smsc,lan91c111", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, smc91x_match);
+#endif
+
/*
* smc_init(void)
* Input parameters:
static int smc_drv_probe(struct platform_device *pdev)
{
struct smc91x_platdata *pd = dev_get_platdata(&pdev->dev);
+ const struct of_device_id *match = NULL;
struct smc_local *lp;
struct net_device *ndev;
struct resource *res, *ires;
*/
lp = netdev_priv(ndev);
+ lp->cfg.flags = 0;
if (pd) {
memcpy(&lp->cfg, pd, sizeof(lp->cfg));
lp->io_shift = SMC91X_IO_SHIFT(lp->cfg.flags);
- } else {
+ }
+
+#if IS_BUILTIN(CONFIG_OF)
+ match = of_match_device(of_match_ptr(smc91x_match), &pdev->dev);
+ if (match) {
+ struct device_node *np = pdev->dev.of_node;
+ u32 val;
+
+ /* Combination of IO widths supported, default to 16-bit */
+ if (!of_property_read_u32(np, "reg-io-width", &val)) {
+ if (val & 1)
+ lp->cfg.flags |= SMC91X_USE_8BIT;
+ if ((val == 0) || (val & 2))
+ lp->cfg.flags |= SMC91X_USE_16BIT;
+ if (val & 4)
+ lp->cfg.flags |= SMC91X_USE_32BIT;
+ } else {
+ lp->cfg.flags |= SMC91X_USE_16BIT;
+ }
+ }
+#endif
+
+ if (!pd && !match) {
lp->cfg.flags |= (SMC_CAN_USE_8BIT) ? SMC91X_USE_8BIT : 0;
lp->cfg.flags |= (SMC_CAN_USE_16BIT) ? SMC91X_USE_16BIT : 0;
lp->cfg.flags |= (SMC_CAN_USE_32BIT) ? SMC91X_USE_32BIT : 0;
return 0;
}
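
The "reg-io-width" property is treated as a bitmask of supported bus widths: bit 0 enables 8-bit, bit 1 enables 16-bit and bit 2 enables 32-bit accessors, with 16-bit as the default when the property is absent or zero. A worked example:

	/*
	 * reg-io-width = <6>;	binary 110
	 *   bit 1 set -> SMC91X_USE_16BIT
	 *   bit 2 set -> SMC91X_USE_32BIT
	 *
	 * reg-io-width = <0>; (or property missing)
	 *   -> SMC91X_USE_16BIT only, the default
	 */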
-#ifdef CONFIG_OF
-static const struct of_device_id smc91x_match[] = {
- { .compatible = "smsc,lan91c94", },
- { .compatible = "smsc,lan91c111", },
- {},
-};
-MODULE_DEVICE_TABLE(of, smc91x_match);
-#endif
-
static struct dev_pm_ops smc_drv_pm_ops = {
.suspend = smc_drv_suspend,
.resume = smc_drv_resume,
defined(CONFIG_MACH_LITTLETON) ||\
defined(CONFIG_MACH_ZYLONITE2) ||\
defined(CONFIG_ARCH_VIPER) ||\
- defined(CONFIG_MACH_STARGATE2)
+ defined(CONFIG_MACH_STARGATE2) ||\
+ defined(CONFIG_ARCH_VERSATILE)
#include <asm/mach-types.h>
#define SMC_outl(v, a, r) writel(v, (a) + (r))
#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)
#define SMC_IRQ_FLAGS (-1) /* from resource */
/* We actually can't write halfwords properly if not word aligned */
#define RPC_LSA_DEFAULT RPC_LED_TX_RX
#define RPC_LSB_DEFAULT RPC_LED_100_10
-#elif defined(CONFIG_ARCH_VERSATILE)
-
-#define SMC_CAN_USE_8BIT 1
-#define SMC_CAN_USE_16BIT 1
-#define SMC_CAN_USE_32BIT 1
-#define SMC_NOWAIT 1
-
-#define SMC_inb(a, r) readb((a) + (r))
-#define SMC_inw(a, r) readw((a) + (r))
-#define SMC_inl(a, r) readl((a) + (r))
-#define SMC_outb(v, a, r) writeb(v, (a) + (r))
-#define SMC_outw(v, a, r) writew(v, (a) + (r))
-#define SMC_outl(v, a, r) writel(v, (a) + (r))
-#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
-#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
-#define SMC_IRQ_FLAGS (-1) /* from resource */
-
#elif defined(CONFIG_MN10300)
/*
if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
return -EOPNOTSUPP;
- if (netif_msg_hw(priv)) {
- if (priv->dma_cap.time_stamp) {
- pr_debug("IEEE 1588-2002 Time Stamp supported\n");
- priv->adv_ts = 0;
- }
- if (priv->dma_cap.atime_stamp && priv->extend_desc) {
- pr_debug
- ("IEEE 1588-2008 Advanced Time Stamp supported\n");
- priv->adv_ts = 1;
- }
- }
+ priv->adv_ts = 0;
+ if (priv->dma_cap.atime_stamp && priv->extend_desc)
+ priv->adv_ts = 1;
+
+ if (netif_msg_hw(priv) && priv->dma_cap.time_stamp)
+ pr_debug("IEEE 1588-2002 Time Stamp supported\n");
+
+ if (netif_msg_hw(priv) && priv->adv_ts)
+ pr_debug("IEEE 1588-2008 Advanced Time Stamp supported\n");
priv->hw->ptp = &stmmac_ptp;
priv->hwts_tx_en = 0;
priv->hw->ptp->config_addend(priv->ioaddr, addend);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&priv->ptp_lock, flags);
return 0;
}
priv->hw->ptp->adjust_systime(priv->ioaddr, sec, nsec, neg_adj);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&priv->ptp_lock, flags);
return 0;
}
ndev->features = NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO
| NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM
- /*| NETIF_F_FRAGLIST */
;
ndev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_TX;
/* set speed_in input in case RMII mode is used in 100Mbps */
if (phy->speed == 100)
mac_control |= BIT(15);
+ else if (phy->speed == 10)
+ mac_control |= BIT(18); /* In Band mode */
*link = true;
} else {
* receive descs
*/
cpsw_info(priv, ifup, "submitted %d rx descriptors\n", i);
+
+ if (cpts_register(&priv->pdev->dev, priv->cpts,
+ priv->data.cpts_clock_mult,
+ priv->data.cpts_clock_shift))
+ dev_err(priv->dev, "error registering cpts device\n");
+
}
/* Enable Interrupt pacing if configured */
netif_carrier_off(priv->ndev);
if (cpsw_common_res_usage_state(priv) <= 1) {
+ cpts_unregister(priv->cpts);
cpsw_intr_disable(priv);
cpdma_ctlr_int_ctrl(priv->dma, false);
cpdma_ctlr_stop(priv->dma);
}
i++;
+ if (i == data->slaves)
+ break;
}
return 0;
goto clean_runtime_disable_ret;
}
priv->regs = ss_regs;
- priv->version = __raw_readl(&priv->regs->id_ver);
priv->host_port = HOST_PORT_NUM;
+ /* Need to enable clocks with runtime PM api to access module
+ * registers
+ */
+ pm_runtime_get_sync(&pdev->dev);
+ priv->version = readl(&priv->regs->id_ver);
+ pm_runtime_put_sync(&pdev->dev);
+
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
priv->wr_regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv->wr_regs)) {
while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) {
for (i = res->start; i <= res->end; i++) {
if (devm_request_irq(&pdev->dev, i, cpsw_interrupt, 0,
- dev_name(priv->dev), priv)) {
+ dev_name(&pdev->dev), priv)) {
dev_err(priv->dev, "error attaching irq\n");
goto clean_ale_ret;
}
unregister_netdev(cpsw_get_slave_ndev(priv, 1));
unregister_netdev(ndev);
- cpts_unregister(priv->cpts);
-
cpsw_ale_destroy(priv->ale);
cpdma_chan_destroy(priv->txch);
cpdma_chan_destroy(priv->rxch);
#include <linux/davinci_emac.h>
#include <linux/of.h>
#include <linux/of_address.h>
+#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/of_net.h>
#endif
};
+static const struct of_device_id davinci_emac_of_match[];
+
static struct emac_platform_data *
davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv)
{
struct device_node *np;
+ const struct of_device_id *match;
+ const struct emac_platform_data *auxdata;
struct emac_platform_data *pdata = NULL;
const u8 *mac_addr;
priv->phy_node = of_parse_phandle(np, "phy-handle", 0);
if (!priv->phy_node)
- pdata->phy_id = "";
+ pdata->phy_id = NULL;
+
+ auxdata = pdev->dev.platform_data;
+ if (auxdata) {
+ pdata->interrupt_enable = auxdata->interrupt_enable;
+ pdata->interrupt_disable = auxdata->interrupt_disable;
+ }
+
+ match = of_match_device(davinci_emac_of_match, &pdev->dev);
+ if (match && match->data) {
+ auxdata = match->data;
+ pdata->version = auxdata->version;
+ pdata->hw_ram_addr = auxdata->hw_ram_addr;
+ }
pdev->dev.platform_data = pdata;
};
#if IS_ENABLED(CONFIG_OF)
+static const struct emac_platform_data am3517_emac_data = {
+ .version = EMAC_VERSION_2,
+ .hw_ram_addr = 0x01e20000,
+};
+
static const struct of_device_id davinci_emac_of_match[] = {
{.compatible = "ti,davinci-dm6467-emac", },
+ {.compatible = "ti,am3517-emac", .data = &am3517_emac_data, },
{},
};
MODULE_DEVICE_TABLE(of, davinci_emac_of_match);
unsigned int rx_done;
unsigned long flags;
- spin_lock_irqsave(&vptr->lock, flags);
/*
* Do rx and tx twice for performance (taken from the VIA
* out-of-tree driver).
*/
- rx_done = velocity_rx_srv(vptr, budget / 2);
- velocity_tx_srv(vptr);
- rx_done += velocity_rx_srv(vptr, budget - rx_done);
+ rx_done = velocity_rx_srv(vptr, budget);
+ spin_lock_irqsave(&vptr->lock, flags);
velocity_tx_srv(vptr);
-
/* If budget not fully consumed, exit the polling mode */
if (rx_done < budget) {
napi_complete(napi);
if (ret < 0)
goto out_free_tmp_vptr_1;
+ napi_disable(&vptr->napi);
+
spin_lock_irqsave(&vptr->lock, flags);
netif_stop_queue(dev);
velocity_give_many_rx_descs(vptr);
+ napi_enable(&vptr->napi);
+
mac_enable_int(vptr->mac_regs);
netif_start_queue(dev);
platform_set_drvdata(op, ndev);
SET_NETDEV_DEV(ndev, &op->dev);
ndev->flags &= ~IFF_MULTICAST; /* clear multicast */
- ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST;
+ ndev->features = NETIF_F_SG;
ndev->netdev_ops = &temac_netdev_ops;
ndev->ethtool_ops = &temac_ethtool_ops;
#if 0
SET_NETDEV_DEV(ndev, &op->dev);
ndev->flags &= ~IFF_MULTICAST; /* clear multicast */
- ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST;
+ ndev->features = NETIF_F_SG;
ndev->netdev_ops = &axienet_netdev_ops;
ndev->ethtool_ops = &axienet_ethtool_ops;
__raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
drvdata->base_addr + XEL_TSR_OFFSET);
- /* Enable the Tx interrupts for the second Buffer if
- * configured in HW */
- if (drvdata->tx_ping_pong != 0) {
- reg_data = __raw_readl(drvdata->base_addr +
- XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
- __raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
- drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
- }
-
/* Enable the Rx interrupts for the first buffer */
__raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + XEL_RSR_OFFSET);
- /* Enable the Rx interrupts for the second Buffer if
- * configured in HW */
- if (drvdata->rx_ping_pong != 0) {
- __raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr +
- XEL_BUFFER_OFFSET + XEL_RSR_OFFSET);
- }
-
/* Enable the Global Interrupt Enable */
__raw_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
}
__raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
drvdata->base_addr + XEL_TSR_OFFSET);
- /* Disable the Tx interrupts for the second Buffer
- * if configured in HW */
- if (drvdata->tx_ping_pong != 0) {
- reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
- __raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
- drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
- }
-
/* Disable the Rx interrupts for the first buffer */
reg_data = __raw_readl(drvdata->base_addr + XEL_RSR_OFFSET);
__raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
drvdata->base_addr + XEL_RSR_OFFSET);
-
- /* Disable the Rx interrupts for the second buffer
- * if configured in HW */
- if (drvdata->rx_ping_pong != 0) {
-
- reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_RSR_OFFSET);
- __raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
- drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_RSR_OFFSET);
- }
}
/**
*to_u16_ptr++ = *from_u16_ptr++;
*to_u16_ptr++ = *from_u16_ptr++;
+ /* This barrier resolves occasional issues seen in
+ * cases where the data is not properly flushed out
+ * from the processor store buffers to the destination
+ * memory locations.
+ */
+ wmb();
+
/* Output a word */
*to_u32_ptr++ = align_buffer;
}
for (; length > 0; length--)
*to_u8_ptr++ = *from_u8_ptr++;
+ /* This barrier resolves occasional issues seen in
+ * cases where the data is not properly flushed out
+ * from the processor store buffers to the destination
+ * memory locations.
+ */
+ wmb();
*to_u32_ptr = align_buffer;
}
}
case HDLCDRVCTL_CALIBRATE:
if(!capable(CAP_SYS_RAWIO))
return -EPERM;
+ if (bi.data.calibrate > INT_MAX / s->par.bitrate)
+ return -EINVAL;
s->hdlctx.calibrate = bi.data.calibrate * s->par.bitrate / 16;
return 0;
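
The added bound check stops bi.data.calibrate * s->par.bitrate from overflowing a signed int before the division by 16. Worked example:

	/*
	 * bitrate = 9600:  INT_MAX / 9600 = 223696
	 *   calibrate = 223696 -> product 2147481600, still fits
	 *   calibrate = 300000 -> product would exceed INT_MAX:
	 *                         signed overflow; now rejected
	 *                         with -EINVAL instead
	 */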
break;
case SIOCYAMGCFG:
+ memset(&yi, 0, sizeof(yi));
yi.cfg.mask = 0xffffffff;
yi.cfg.iobase = yp->iobase;
yi.cfg.irq = yp->irq;
struct sk_buff *skb;
net = ((struct netvsc_device *)hv_get_drvdata(device_obj))->ndev;
- if (!net) {
- netdev_err(net, "got receive callback but net device"
- " not initialized yet\n");
+ if (!net || net->reg_state != NETREG_REGISTERED) {
packet->status = NVSP_STAT_FAIL;
return 0;
}
return -EINVAL;
nvdev->start_remove = true;
- cancel_delayed_work_sync(&ndevctx->dwork);
cancel_work_sync(&ndevctx->work);
netif_tx_disable(ndev);
rndis_filter_device_remove(hdev);
SET_ETHTOOL_OPS(net, ðtool_ops);
SET_NETDEV_DEV(net, &dev->device);
- ret = register_netdev(net);
- if (ret != 0) {
- pr_err("Unable to register netdev.\n");
- free_netdev(net);
- goto out;
- }
-
/* Notify the netvsc driver of the new device */
device_info.ring_size = ring_size;
ret = rndis_filter_device_add(dev, &device_info);
if (ret != 0) {
netdev_err(net, "unable to add netvsc device (ret %d)\n", ret);
- unregister_netdev(net);
free_netdev(net);
hv_set_drvdata(dev, NULL);
return ret;
netif_carrier_on(net);
-out:
+ ret = register_netdev(net);
+ if (ret != 0) {
+ pr_err("Unable to register netdev.\n");
+ rndis_filter_device_remove(dev);
+ free_netdev(net);
+ }
+
return ret;
}
netdev_features_t features)
{
struct macvlan_dev *vlan = netdev_priv(dev);
+ netdev_features_t mask;
- return features & (vlan->set_features | ~MACVLAN_FEATURES);
+ features |= NETIF_F_ALL_FOR_ALL;
+ features &= (vlan->set_features | ~MACVLAN_FEATURES);
+ mask = features;
+
+ features = netdev_increment_features(vlan->lowerdev->features,
+ features,
+ mask);
+ if (!vlan->fwd_priv)
+ features |= NETIF_F_LLTX;
+
+ return features;
}
static const struct ethtool_ops macvlan_ethtool_ops = {
break;
case NETDEV_FEAT_CHANGE:
list_for_each_entry(vlan, &port->vlans, list) {
- vlan->dev->features = dev->features & MACVLAN_FEATURES;
vlan->dev->gso_max_size = dev->gso_max_size;
- netdev_features_change(vlan->dev);
+ netdev_update_features(vlan->dev);
}
break;
case NETDEV_UNREGISTER:
rcu_read_lock();
vlan = rcu_dereference(q->vlan);
if (vlan)
- vlan->dev->stats.tx_dropped++;
+ this_cpu_inc(vlan->pcpu_stats->tx_dropped);
rcu_read_unlock();
return err;
const struct sk_buff *skb,
const struct iovec *iv, int len)
{
- struct macvlan_dev *vlan;
int ret;
int vnet_hdr_len = 0;
int vlan_offset = 0;
- int copied;
+ int copied, total;
if (q->flags & IFF_VNET_HDR) {
struct virtio_net_hdr vnet_hdr;
if (memcpy_toiovecend(iv, (void *)&vnet_hdr, 0, sizeof(vnet_hdr)))
return -EFAULT;
}
- copied = vnet_hdr_len;
+ total = copied = vnet_hdr_len;
+ total += skb->len;
if (!vlan_tx_tag_present(skb))
len = min_t(int, skb->len, len);
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
len = min_t(int, skb->len + VLAN_HLEN, len);
+ total += VLAN_HLEN;
copy = min_t(int, vlan_offset, len);
ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);
}
ret = skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len);
- copied += len;
done:
- rcu_read_lock();
- vlan = rcu_dereference(q->vlan);
- if (vlan) {
- preempt_disable();
- macvlan_count_rx(vlan, copied - vnet_hdr_len, ret == 0, 0);
- preempt_enable();
- }
- rcu_read_unlock();
-
- return ret ? ret : copied;
+ return ret ? ret : total;
}
static ssize_t macvtap_do_read(struct macvtap_queue *q, struct kiocb *iocb,
}
ret = macvtap_do_read(q, iocb, iv, len, file->f_flags & O_NONBLOCK);
- ret = min_t(ssize_t, ret, len); /* XXX copied from tun.c. Why? */
+ ret = min_t(ssize_t, ret, len);
+ if (ret > 0)
+ iocb->ki_pos = ret;
out:
return ret;
}
.suspend = genphy_suspend,
.resume = genphy_resume,
.driver = { .owner = THIS_MODULE,},
+}, {
+ .phy_id = PHY_ID_KSZ8041RNLI,
+ .phy_id_mask = 0x00fffff0,
+ .name = "Micrel KSZ8041RNLI",
+ .features = PHY_BASIC_FEATURES |
+ SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+ .flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
+ .config_init = kszphy_config_init,
+ .config_aneg = genphy_config_aneg,
+ .read_status = genphy_read_status,
+ .ack_interrupt = kszphy_ack_interrupt,
+ .config_intr = kszphy_config_intr,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ .driver = { .owner = THIS_MODULE,},
}, {
.phy_id = PHY_ID_KSZ8051,
.phy_id_mask = 0x00fffff0,
int err = 0;
atomic_set(&phydev->irq_disable, 0);
- if (request_irq(phydev->irq, phy_interrupt,
- IRQF_SHARED,
- "phy_interrupt",
- phydev) < 0) {
+ if (request_irq(phydev->irq, phy_interrupt, 0, "phy_interrupt",
+ phydev) < 0) {
pr_warn("%s: Can't get IRQ %d (PHY)\n",
phydev->bus->name, phydev->irq);
phydev->irq = PHY_POLL;
#define PHY_ID_VSC8234 0x000fc620
#define PHY_ID_VSC8244 0x000fc6c0
+#define PHY_ID_VSC8514 0x00070670
#define PHY_ID_VSC8574 0x000704a0
#define PHY_ID_VSC8662 0x00070660
#define PHY_ID_VSC8221 0x000fc550
err = phy_write(phydev, MII_VSC8244_IMASK,
(phydev->drv->phy_id == PHY_ID_VSC8234 ||
phydev->drv->phy_id == PHY_ID_VSC8244 ||
+ phydev->drv->phy_id == PHY_ID_VSC8514 ||
phydev->drv->phy_id == PHY_ID_VSC8574) ?
MII_VSC8244_IMASK_MASK :
MII_VSC8221_IMASK_MASK);
.ack_interrupt = &vsc824x_ack_interrupt,
.config_intr = &vsc82xx_config_intr,
.driver = { .owner = THIS_MODULE,},
+}, {
+ .phy_id = PHY_ID_VSC8514,
+ .name = "Vitesse VSC8514",
+ .phy_id_mask = 0x000ffff0,
+ .features = PHY_GBIT_FEATURES,
+ .flags = PHY_HAS_INTERRUPT,
+ .config_init = &vsc824x_config_init,
+ .config_aneg = &vsc82x4_config_aneg,
+ .read_status = &genphy_read_status,
+ .ack_interrupt = &vsc824x_ack_interrupt,
+ .config_intr = &vsc82xx_config_intr,
+ .driver = { .owner = THIS_MODULE,},
}, {
.phy_id = PHY_ID_VSC8574,
.name = "Vitesse VSC8574",
static struct mdio_device_id __maybe_unused vitesse_tbl[] = {
{ PHY_ID_VSC8234, 0x000ffff0 },
{ PHY_ID_VSC8244, 0x000fffc0 },
+ { PHY_ID_VSC8514, 0x000ffff0 },
{ PHY_ID_VSC8574, 0x000ffff0 },
{ PHY_ID_VSC8662, 0x000ffff0 },
{ PHY_ID_VSC8221, 0x000ffff0 },
return 0;
}
+static void __team_carrier_check(struct team *team);
+
static int team_user_linkup_option_set(struct team *team,
struct team_gsetter_ctx *ctx)
{
port->user.linkup = ctx->data.bool_val;
team_refresh_port_linkup(port);
+ __team_carrier_check(port->team);
return 0;
}
port->user.linkup_enabled = ctx->data.bool_val;
team_refresh_port_linkup(port);
+ __team_carrier_check(port->team);
return 0;
}
{
struct tun_pi pi = { 0, skb->protocol };
ssize_t total = 0;
- int vlan_offset = 0;
+ int vlan_offset = 0, copied;
if (!(tun->flags & TUN_NO_PI)) {
if ((len -= sizeof(pi)) < 0)
total += tun->vnet_hdr_sz;
}
+ copied = total;
+ total += skb->len;
if (!vlan_tx_tag_present(skb)) {
len = min_t(int, skb->len, len);
} else {
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
len = min_t(int, skb->len + VLAN_HLEN, len);
+ total += VLAN_HLEN;
copy = min_t(int, vlan_offset, len);
- ret = skb_copy_datagram_const_iovec(skb, 0, iv, total, copy);
+ ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);
len -= copy;
- total += copy;
+ copied += copy;
if (ret || !len)
goto done;
copy = min_t(int, sizeof(veth), len);
- ret = memcpy_toiovecend(iv, (void *)&veth, total, copy);
+ ret = memcpy_toiovecend(iv, (void *)&veth, copied, copy);
len -= copy;
- total += copy;
+ copied += copy;
if (ret || !len)
goto done;
}
- skb_copy_datagram_const_iovec(skb, vlan_offset, iv, total, len);
- total += len;
+ skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len);
done:
tun->dev->stats.tx_packets++;
ret = tun_do_read(tun, tfile, iocb, iv, len,
file->f_flags & O_NONBLOCK);
ret = min_t(ssize_t, ret, len);
+ if (ret > 0)
+ iocb->ki_pos = ret;
out:
tun_put(tun);
return ret;
module will be called cdc_mbim.
config USB_NET_DM9601
- tristate "Davicom DM9601 based USB 1.1 10/100 ethernet devices"
+ tristate "Davicom DM96xx based USB 10/100 ethernet devices"
depends on USB_USBNET
select CRC32
help
- This option adds support for Davicom DM9601 based USB 1.1
- 10/100 Ethernet adapters.
+ This option adds support for Davicom DM9601/DM9620/DM9621A
+ based USB 10/100 Ethernet adapters.
config USB_NET_SR9700
tristate "CoreChip-sz SR9700 based USB 1.1 10/100 ethernet devices"
/*
- * Davicom DM9601 USB 1.1 10/100Mbps ethernet devices
+ * Davicom DM96xx USB 10/100Mbps ethernet devices
*
* Peter Korsgaard <jacmet@sunsite.dk>
*
dev->net->ethtool_ops = &dm9601_ethtool_ops;
dev->net->hard_header_len += DM_TX_OVERHEAD;
dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
- dev->rx_urb_size = dev->net->mtu + ETH_HLEN + DM_RX_OVERHEAD;
+
+ /* dm9620/21a require room for 4 byte padding, even in dm9601
+ * mode, so we need +1 to be able to receive full size
+ * ethernet frames.
+ */
+ dev->rx_urb_size = dev->net->mtu + ETH_HLEN + DM_RX_OVERHEAD + 1;
dev->mii.dev = dev->net;
dev->mii.mdio_read = dm9601_mdio_read;
static struct sk_buff *dm9601_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
gfp_t flags)
{
- int len;
+ int len, pad;
/* format:
b1: packet length low
b3..n: packet data
*/
- len = skb->len;
+ len = skb->len + DM_TX_OVERHEAD;
+
+ /* workaround for dm962x errata with tx fifo getting out of
+ * sync if a USB bulk transfer retry happens right after a
+ * packet with odd / maxpacket length by adding up to 3 bytes
+ * padding.
+ */
+ while ((len & 1) || !(len % dev->maxpacket))
+ len++;
- if (skb_headroom(skb) < DM_TX_OVERHEAD) {
+ len -= DM_TX_OVERHEAD; /* hw header doesn't count as part of length */
+ pad = len - skb->len;
+
+ if (skb_headroom(skb) < DM_TX_OVERHEAD || skb_tailroom(skb) < pad) {
struct sk_buff *skb2;
- skb2 = skb_copy_expand(skb, DM_TX_OVERHEAD, 0, flags);
+ skb2 = skb_copy_expand(skb, DM_TX_OVERHEAD, pad, flags);
dev_kfree_skb_any(skb);
skb = skb2;
if (!skb)
__skb_push(skb, DM_TX_OVERHEAD);
- /* usbnet adds padding if length is a multiple of packet size
- if so, adjust length value in header */
- if ((skb->len % dev->maxpacket) == 0)
- len++;
+ if (pad) {
+ memset(skb->data + skb->len, 0, pad);
+ __skb_put(skb, pad);
+ }
skb->data[0] = len;
skb->data[1] = len >> 8;
}
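
Worked example of the padding arithmetic above, assuming dev->maxpacket == 64 and DM_TX_OVERHEAD == 2:

	/*
	 * skb->len = 62  ->  len = 62 + 2 = 64
	 *   64 % 64 == 0 (maxpacket multiple) -> len = 65
	 *   65 is odd                         -> len = 66
	 * len -= DM_TX_OVERHEAD -> 64;  pad = 64 - 62 = 2
	 */

Two zero bytes are appended so the resulting bulk transfer is neither odd-length nor an exact multiple of the endpoint packet size, the two cases that trigger the dm962x Tx FIFO desync on retried transfers.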
static const struct driver_info dm9601_info = {
- .description = "Davicom DM9601 USB Ethernet",
+ .description = "Davicom DM96xx USB 10/100 Ethernet",
.flags = FLAG_ETHER | FLAG_LINK_INTR,
.bind = dm9601_bind,
.rx_fixup = dm9601_rx_fixup,
USB_DEVICE(0x0a46, 0x9620), /* DM9620 USB to Fast Ethernet Adapter */
.driver_info = (unsigned long)&dm9601_info,
},
+ {
+ USB_DEVICE(0x0a46, 0x9621), /* DM9621A USB to Fast Ethernet Adapter */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
{}, // END
};
module_usb_driver(dm9601_driver);
MODULE_AUTHOR("Peter Korsgaard <jacmet@sunsite.dk>");
-MODULE_DESCRIPTION("Davicom DM9601 USB 1.1 ethernet devices");
+MODULE_DESCRIPTION("Davicom DM96xx USB 10/100 ethernet devices");
MODULE_LICENSE("GPL");
#define BM_REQUEST_TYPE (0xa1)
#define B_NOTIFICATION (0x20)
#define W_VALUE (0x0)
-#define W_INDEX (0x2)
#define W_LENGTH (0x2)
#define B_OVERRUN (0x1<<6)
struct uart_icount *icount;
struct hso_serial_state_notification *serial_state_notification;
struct usb_device *usb;
+ int if_num;
/* Sanity checks */
if (!serial)
handle_usb_error(status, __func__, serial->parent);
return;
}
+
+ /* tiocmget is only supported on HSO_PORT_MODEM */
tiocmget = serial->tiocmget;
if (!tiocmget)
return;
+ BUG_ON((serial->parent->port_spec & HSO_PORT_MASK) != HSO_PORT_MODEM);
+
usb = serial->parent->usb;
+ if_num = serial->parent->interface->altsetting->desc.bInterfaceNumber;
+
+ /* wIndex should be the USB interface number of the port to which the
+ * notification applies, which should always be the Modem port.
+ */
serial_state_notification = &tiocmget->serial_state_notification;
if (serial_state_notification->bmRequestType != BM_REQUEST_TYPE ||
serial_state_notification->bNotification != B_NOTIFICATION ||
le16_to_cpu(serial_state_notification->wValue) != W_VALUE ||
- le16_to_cpu(serial_state_notification->wIndex) != W_INDEX ||
+ le16_to_cpu(serial_state_notification->wIndex) != if_num ||
le16_to_cpu(serial_state_notification->wLength) != W_LENGTH) {
dev_warn(&usb->dev,
"hso received invalid serial state notification\n");
struct mcs7830_data {
u8 multi_filter[8];
u8 config;
- u8 link_counter;
};
static const char driver_name[] = "MOSCHIP usb-ethernet driver";
{
u8 *buf = urb->transfer_buffer;
bool link, link_changed;
- struct mcs7830_data *data = mcs7830_get_data(dev);
if (urb->actual_length < 16)
return;
- link = !(buf[1] & 0x20);
+ link = !(buf[1] == 0x20);
link_changed = netif_carrier_ok(dev->net) != link;
if (link_changed) {
- data->link_counter++;
- /*
- track link state 20 times to guard against erroneous
- link state changes reported sometimes by the chip
- */
- if (data->link_counter > 20) {
- data->link_counter = 0;
- usbnet_link_change(dev, link, 0);
- netdev_dbg(dev->net, "Link Status is: %d\n", link);
- }
- } else
- data->link_counter = 0;
+ usbnet_link_change(dev, link, 0);
+ netdev_dbg(dev->net, "Link Status is: %d\n", link);
+ }
}
static const struct driver_info moschip_info = {
return skb;
}
-static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
+static struct sk_buff *receive_small(void *buf, unsigned int len)
{
- struct skb_vnet_hdr *hdr = skb_vnet_hdr(head_skb);
+ struct sk_buff * skb = buf;
+
+ len -= sizeof(struct virtio_net_hdr);
+ skb_trim(skb, len);
+
+ return skb;
+}
+
+static struct sk_buff *receive_big(struct net_device *dev,
+ struct receive_queue *rq,
+ void *buf,
+ unsigned int len)
+{
+ struct page *page = buf;
+ struct sk_buff *skb = page_to_skb(rq, page, 0, len, PAGE_SIZE);
+
+ if (unlikely(!skb))
+ goto err;
+
+ return skb;
+
+err:
+ dev->stats.rx_dropped++;
+ give_pages(rq, page);
+ return NULL;
+}
+
+static struct sk_buff *receive_mergeable(struct net_device *dev,
+ struct receive_queue *rq,
+ void *buf,
+ unsigned int len)
+{
+ struct skb_vnet_hdr *hdr = buf;
+ int num_buf = hdr->mhdr.num_buffers;
+ struct page *page = virt_to_head_page(buf);
+ int offset = buf - page_address(page);
+ struct sk_buff *head_skb = page_to_skb(rq, page, offset, len,
+ MERGE_BUFFER_LEN);
struct sk_buff *curr_skb = head_skb;
- char *buf;
- struct page *page;
- int num_buf, len, offset;
- num_buf = hdr->mhdr.num_buffers;
+ if (unlikely(!curr_skb))
+ goto err_skb;
+
while (--num_buf) {
- int num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
+ int num_skb_frags;
+
buf = virtqueue_get_buf(rq->vq, &len);
if (unlikely(!buf)) {
- pr_debug("%s: rx error: %d buffers missing\n",
- head_skb->dev->name, hdr->mhdr.num_buffers);
- head_skb->dev->stats.rx_length_errors++;
- return -EINVAL;
+ pr_debug("%s: rx error: %d buffers out of %d missing\n",
+ dev->name, num_buf, hdr->mhdr.num_buffers);
+ dev->stats.rx_length_errors++;
+ goto err_buf;
}
if (unlikely(len > MERGE_BUFFER_LEN)) {
pr_debug("%s: rx error: merge buffer too long\n",
- head_skb->dev->name);
+ dev->name);
len = MERGE_BUFFER_LEN;
}
+
+ page = virt_to_head_page(buf);
+ --rq->num;
+
+ num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) {
struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC);
- if (unlikely(!nskb)) {
- head_skb->dev->stats.rx_dropped++;
- return -ENOMEM;
- }
+
+ if (unlikely(!nskb))
+ goto err_skb;
if (curr_skb == head_skb)
skb_shinfo(curr_skb)->frag_list = nskb;
else
head_skb->len += len;
head_skb->truesize += MERGE_BUFFER_LEN;
}
- page = virt_to_head_page(buf);
- offset = buf - (char *)page_address(page);
+ offset = buf - page_address(page);
if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) {
put_page(page);
skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1,
skb_add_rx_frag(curr_skb, num_skb_frags, page,
offset, len, MERGE_BUFFER_LEN);
}
+ }
+
+ return head_skb;
+
+err_skb:
+ put_page(page);
+ while (--num_buf) {
+ buf = virtqueue_get_buf(rq->vq, &len);
+ if (unlikely(!buf)) {
+ pr_debug("%s: rx error: %d buffers missing\n",
+ dev->name, num_buf);
+ dev->stats.rx_length_errors++;
+ break;
+ }
+ page = virt_to_head_page(buf);
+ put_page(page);
--rq->num;
}
- return 0;
+err_buf:
+ dev->stats.rx_dropped++;
+ dev_kfree_skb(head_skb);
+ return NULL;
}
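
The receive path is split into receive_small(), receive_big() and receive_mergeable(), and the mergeable error path gains an explicit drain: when a multi-buffer packet cannot be assembled, the remaining num_buf buffers are still popped from the virtqueue and their page references dropped, keeping rq->num and the page refcounts balanced. Without this, the leftover buffers would be misattributed to the next packet. The drain, condensed from the hunk above:

	while (--num_buf) {
		buf = virtqueue_get_buf(rq->vq, &len);
		if (unlikely(!buf))
			break;		/* ring empty: count was bogus */
		put_page(virt_to_head_page(buf));
		--rq->num;
	}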
static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
struct net_device *dev = vi->dev;
struct virtnet_stats *stats = this_cpu_ptr(vi->stats);
struct sk_buff *skb;
- struct page *page;
struct skb_vnet_hdr *hdr;
if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) {
pr_debug("%s: short packet %i\n", dev->name, len);
dev->stats.rx_length_errors++;
- if (vi->big_packets)
- give_pages(rq, buf);
- else if (vi->mergeable_rx_bufs)
+ if (vi->mergeable_rx_bufs)
put_page(virt_to_head_page(buf));
+ else if (vi->big_packets)
+ give_pages(rq, buf);
else
dev_kfree_skb(buf);
return;
}
- if (!vi->mergeable_rx_bufs && !vi->big_packets) {
- skb = buf;
- len -= sizeof(struct virtio_net_hdr);
- skb_trim(skb, len);
- } else if (vi->mergeable_rx_bufs) {
- struct page *page = virt_to_head_page(buf);
- skb = page_to_skb(rq, page,
- (char *)buf - (char *)page_address(page),
- len, MERGE_BUFFER_LEN);
- if (unlikely(!skb)) {
- dev->stats.rx_dropped++;
- put_page(page);
- return;
- }
- if (receive_mergeable(rq, skb)) {
- dev_kfree_skb(skb);
- return;
- }
- } else {
- page = buf;
- skb = page_to_skb(rq, page, 0, len, PAGE_SIZE);
- if (unlikely(!skb)) {
- dev->stats.rx_dropped++;
- give_pages(rq, page);
- return;
- }
- }
+ if (vi->mergeable_rx_bufs)
+ skb = receive_mergeable(dev, rq, buf, len);
+ else if (vi->big_packets)
+ skb = receive_big(dev, rq, buf, len);
+ else
+ skb = receive_small(buf, len);
+
+ if (unlikely(!skb))
+ return;
hdr = skb_vnet_hdr(skb);
if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC,
VIRTIO_NET_CTRL_MAC_TABLE_SET,
sg, NULL))
- dev_warn(&dev->dev, "Failed to set MAC fitler table.\n");
+ dev_warn(&dev->dev, "Failed to set MAC filter table.\n");
kfree(buf);
}
static void virtnet_free_queues(struct virtnet_info *vi)
{
+ int i;
+
+ for (i = 0; i < vi->max_queue_pairs; i++)
+ netif_napi_del(&vi->rq[i].napi);
+
kfree(vi->rq);
kfree(vi->sq);
}
struct virtqueue *vq = vi->rq[i].vq;
while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
- if (vi->big_packets)
- give_pages(&vi->rq[i], buf);
- else if (vi->mergeable_rx_bufs)
+ if (vi->mergeable_rx_bufs)
put_page(virt_to_head_page(buf));
+ else if (vi->big_packets)
+ give_pages(&vi->rq[i], buf);
else
dev_kfree_skb(buf);
--vi->rq[i].num;
if (err)
return err;
- if (netif_running(vi->dev))
+ if (netif_running(vi->dev)) {
+ for (i = 0; i < vi->curr_queue_pairs; i++)
+ if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
+ schedule_delayed_work(&vi->refill, 0);
+
for (i = 0; i < vi->max_queue_pairs; i++)
virtnet_napi_enable(&vi->rq[i]);
+ }
netif_device_attach(vi->dev);
- for (i = 0; i < vi->curr_queue_pairs; i++)
- if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
- schedule_delayed_work(&vi->refill, 0);
-
mutex_lock(&vi->config_lock);
vi->config_enable = true;
mutex_unlock(&vi->config_lock);
netdev_dbg(dev, "circular route to %pI4\n",
&dst->sin.sin_addr.s_addr);
dev->stats.collisions++;
- goto tx_error;
+ goto rt_tx_error;
}
/* Bypass encapsulation if the destination is local */
/* update header length based on lower device */
dev->hard_header_len = lowerdev->hard_header_len +
(use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
- }
+ } else if (use_ipv6)
+ vxlan->flags |= VXLAN_F_IPV6;
if (data[IFLA_VXLAN_TOS])
vxlan->tos = nla_get_u8(data[IFLA_VXLAN_TOS]);
#define MAX_PENDING_REQS 256
+/* It's possible for an skb to have a maximal number of frags
+ * but still be less than MAX_BUFFER_OFFSET in size. Thus the
+ * worst-case number of copy operations is MAX_SKB_FRAGS per
+ * ring slot.
+ */
+#define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
+
struct xenvif {
/* Unique identifier for this interface. */
domid_t domid;
*/
RING_IDX rx_req_cons_peek;
- /* Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
- * head/fragment page uses 2 copy operations because it
- * straddles two buffers in the frontend.
- */
- struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
- struct xenvif_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
+ /* This array is allocated separately as it is large */
+ struct gnttab_copy *grant_copy_op;
+ /* We create one meta structure per ring request we consume, so
+ * the maximum number is the same as the ring size.
+ */
+ struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
u8 fe_dev_addr[6];
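
Sizing arithmetic behind MAX_GRANT_COPY_OPS, assuming 4 KiB pages (so MAX_SKB_FRAGS == 18) and the standard 256-entry RX ring:

	/*
	 * MAX_GRANT_COPY_OPS = MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE
	 *                    = 18 * 256 = 4608 gnttab_copy entries
	 */

At a few tens of bytes per gnttab_copy the array runs to well over 100 KiB, which is why the following hunks allocate it with vmalloc() at interface creation rather than embedding it in struct xenvif.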
#include <linux/ethtool.h>
#include <linux/rtnetlink.h>
#include <linux/if_vlan.h>
+#include <linux/vmalloc.h>
#include <xen/events.h>
#include <asm/xen/hypercall.h>
SET_NETDEV_DEV(dev, parent);
vif = netdev_priv(dev);
+
+ vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
+ MAX_GRANT_COPY_OPS);
+ if (vif->grant_copy_op == NULL) {
+ pr_warn("Could not allocate grant copy space for %s\n", name);
+ free_netdev(dev);
+ return ERR_PTR(-ENOMEM);
+ }
+
vif->domid = domid;
vif->handle = handle;
vif->can_sg = 1;
unsigned long rx_ring_ref, unsigned int tx_evtchn,
unsigned int rx_evtchn)
{
+ struct task_struct *task;
int err = -ENOMEM;
- /* Already connected through? */
- if (vif->tx_irq)
- return 0;
+ BUG_ON(vif->tx_irq);
+ BUG_ON(vif->task);
err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
if (err < 0)
}
init_waitqueue_head(&vif->wq);
- vif->task = kthread_create(xenvif_kthread,
- (void *)vif, "%s", vif->dev->name);
- if (IS_ERR(vif->task)) {
+ task = kthread_create(xenvif_kthread,
+ (void *)vif, "%s", vif->dev->name);
+ if (IS_ERR(task)) {
pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
- err = PTR_ERR(vif->task);
+ err = PTR_ERR(task);
goto err_rx_unbind;
}
+ vif->task = task;
+
rtnl_lock();
if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
dev_set_mtu(vif->dev, ETH_DATA_LEN);
if (netif_carrier_ok(vif->dev))
xenvif_carrier_off(vif);
- if (vif->task)
+ if (vif->task) {
kthread_stop(vif->task);
+ vif->task = NULL;
+ }
if (vif->tx_irq) {
if (vif->tx_irq == vif->rx_irq)
unregister_netdev(vif->dev);
+ vfree(vif->grant_copy_op);
free_netdev(vif->dev);
module_put(THIS_MODULE);
#include <linux/udp.h>
#include <net/tcp.h>
+#include <net/ip6_checksum.h>
#include <xen/xen.h>
#include <xen/events.h>
}
/* Set up a GSO prefix descriptor, if necessary */
- if ((1 << skb_shinfo(skb)->gso_type) & vif->gso_prefix_mask) {
+ if ((1 << gso_type) & vif->gso_prefix_mask) {
req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
meta = npo->meta + npo->meta_prod++;
meta->gso_type = gso_type;
if (!npo.copy_prod)
return;
- BUG_ON(npo.copy_prod > ARRAY_SIZE(vif->grant_copy_op));
+ BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
while ((skb = __skb_dequeue(&rxq)) != NULL) {
return 0;
}
-static inline void maybe_pull_tail(struct sk_buff *skb, unsigned int len)
+static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len,
+ unsigned int max)
{
- if (skb_is_nonlinear(skb) && skb_headlen(skb) < len) {
- /* If we need to pullup then pullup to the max, so we
- * won't need to do it again.
- */
- int target = min_t(int, skb->len, MAX_TCP_HEADER);
- __pskb_pull_tail(skb, target - skb_headlen(skb));
- }
+ if (skb_headlen(skb) >= len)
+ return 0;
+
+ /* If we need to pullup then pullup to the max, so we
+ * won't need to do it again.
+ */
+ if (max > skb->len)
+ max = skb->len;
+
+ if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
+ return -ENOMEM;
+
+ if (skb_headlen(skb) < len)
+ return -EPROTO;
+
+ return 0;
}
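
maybe_pull_tail() changes from a best-effort void helper into one with a three-way contract: 0 when at least len linear bytes are available, -ENOMEM when the pull itself fails, and -EPROTO when even pulling up to max cannot expose len bytes (a truncated or malformed packet). Callers must now check the result before dereferencing any header, as the rewritten checksum paths below do:

	err = maybe_pull_tail(skb, off + sizeof(struct tcphdr),
			      MAX_IP_HDR_LEN);
	if (err < 0)
		goto out;	/* drop: truncated packet or no memory */
	/* safe: the TCP header is linear from here on */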
+/* This value should be large enough to cover a tagged ethernet header plus
+ * maximally sized IP and TCP or UDP headers.
+ */
+#define MAX_IP_HDR_LEN 128
+
static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb,
int recalculate_partial_csum)
{
- struct iphdr *iph = (void *)skb->data;
- unsigned int header_size;
unsigned int off;
- int err = -EPROTO;
+ bool fragment;
+ int err;
- off = sizeof(struct iphdr);
+ fragment = false;
- header_size = skb->network_header + off + MAX_IPOPTLEN;
- maybe_pull_tail(skb, header_size);
+ err = maybe_pull_tail(skb,
+ sizeof(struct iphdr),
+ MAX_IP_HDR_LEN);
+ if (err < 0)
+ goto out;
- off = iph->ihl * 4;
+ if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
+ fragment = true;
- switch (iph->protocol) {
- case IPPROTO_TCP:
- if (!skb_partial_csum_set(skb, off,
- offsetof(struct tcphdr, check)))
- goto out;
+ off = ip_hdrlen(skb);
- if (recalculate_partial_csum) {
- struct tcphdr *tcph = tcp_hdr(skb);
+ err = -EPROTO;
+
+ if (fragment)
+ goto out;
- header_size = skb->network_header +
- off +
- sizeof(struct tcphdr);
- maybe_pull_tail(skb, header_size);
+ switch (ip_hdr(skb)->protocol) {
+ case IPPROTO_TCP:
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct tcphdr),
+ MAX_IP_HDR_LEN);
+ if (err < 0)
+ goto out;
- tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
- skb->len - off,
- IPPROTO_TCP, 0);
+ if (!skb_partial_csum_set(skb, off,
+ offsetof(struct tcphdr, check))) {
+ err = -EPROTO;
+ goto out;
}
+
+ if (recalculate_partial_csum)
+ tcp_hdr(skb)->check =
+ ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+ ip_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_TCP, 0);
break;
case IPPROTO_UDP:
- if (!skb_partial_csum_set(skb, off,
- offsetof(struct udphdr, check)))
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct udphdr),
+ MAX_IP_HDR_LEN);
+ if (err < 0)
goto out;
- if (recalculate_partial_csum) {
- struct udphdr *udph = udp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct udphdr);
- maybe_pull_tail(skb, header_size);
-
- udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
- skb->len - off,
- IPPROTO_UDP, 0);
+ if (!skb_partial_csum_set(skb, off,
+ offsetof(struct udphdr, check))) {
+ err = -EPROTO;
+ goto out;
}
+
+ if (recalculate_partial_csum)
+ udp_hdr(skb)->check =
+ ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+ ip_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_UDP, 0);
break;
default:
- if (net_ratelimit())
- netdev_err(vif->dev,
- "Attempting to checksum a non-TCP/UDP packet, "
- "dropping a protocol %d packet\n",
- iph->protocol);
goto out;
}
return err;
}
+/* This value should be large enough to cover a tagged ethernet header plus
+ * an IPv6 header, all options, and a maximal TCP or UDP header.
+ */
+#define MAX_IPV6_HDR_LEN 256
+
+#define OPT_HDR(type, skb, off) \
+ (type *)(skb_network_header(skb) + (off))
+
static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb,
int recalculate_partial_csum)
{
- int err = -EPROTO;
- struct ipv6hdr *ipv6h = (void *)skb->data;
+ int err;
u8 nexthdr;
- unsigned int header_size;
unsigned int off;
+ unsigned int len;
bool fragment;
bool done;
+ fragment = false;
done = false;
off = sizeof(struct ipv6hdr);
- header_size = skb->network_header + off;
- maybe_pull_tail(skb, header_size);
+ err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
- nexthdr = ipv6h->nexthdr;
+ nexthdr = ipv6_hdr(skb)->nexthdr;
- while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len)) &&
- !done) {
+ len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
+ while (off <= len && !done) {
switch (nexthdr) {
case IPPROTO_DSTOPTS:
case IPPROTO_HOPOPTS:
case IPPROTO_ROUTING: {
- struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
+ struct ipv6_opt_hdr *hp;
- header_size = skb->network_header +
- off +
- sizeof(struct ipv6_opt_hdr);
- maybe_pull_tail(skb, header_size);
+ err = maybe_pull_tail(skb,
+ off +
+ sizeof(struct ipv6_opt_hdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+ hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
nexthdr = hp->nexthdr;
off += ipv6_optlen(hp);
break;
}
case IPPROTO_AH: {
- struct ip_auth_hdr *hp = (void *)(skb->data + off);
+ struct ip_auth_hdr *hp;
- header_size = skb->network_header +
- off +
- sizeof(struct ip_auth_hdr);
- maybe_pull_tail(skb, header_size);
+ err = maybe_pull_tail(skb,
+ off +
+ sizeof(struct ip_auth_hdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+ hp = OPT_HDR(struct ip_auth_hdr, skb, off);
nexthdr = hp->nexthdr;
- off += (hp->hdrlen+2)<<2;
+ off += ipv6_authlen(hp);
+ break;
+ }
+ case IPPROTO_FRAGMENT: {
+ struct frag_hdr *hp;
+
+ err = maybe_pull_tail(skb,
+ off +
+ sizeof(struct frag_hdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+
+ hp = OPT_HDR(struct frag_hdr, skb, off);
+
+ if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
+ fragment = true;
+
+ nexthdr = hp->nexthdr;
+ off += sizeof(struct frag_hdr);
break;
}
- case IPPROTO_FRAGMENT:
- fragment = true;
- /* fall through */
default:
done = true;
break;
}
}
- if (!done) {
- if (net_ratelimit())
- netdev_err(vif->dev, "Failed to parse packet header\n");
- goto out;
- }
+ err = -EPROTO;
- if (fragment) {
- if (net_ratelimit())
- netdev_err(vif->dev, "Packet is a fragment!\n");
+ if (!done || fragment)
goto out;
- }
switch (nexthdr) {
case IPPROTO_TCP:
- if (!skb_partial_csum_set(skb, off,
- offsetof(struct tcphdr, check)))
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct tcphdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
goto out;
- if (recalculate_partial_csum) {
- struct tcphdr *tcph = tcp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct tcphdr);
- maybe_pull_tail(skb, header_size);
-
- tcph->check = ~csum_ipv6_magic(&ipv6h->saddr,
- &ipv6h->daddr,
- skb->len - off,
- IPPROTO_TCP, 0);
+ if (!skb_partial_csum_set(skb, off,
+ offsetof(struct tcphdr, check))) {
+ err = -EPROTO;
+ goto out;
}
+
+ if (recalculate_partial_csum)
+ tcp_hdr(skb)->check =
+ ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+ &ipv6_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_TCP, 0);
break;
case IPPROTO_UDP:
- if (!skb_partial_csum_set(skb, off,
- offsetof(struct udphdr, check)))
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct udphdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
goto out;
- if (recalculate_partial_csum) {
- struct udphdr *udph = udp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct udphdr);
- maybe_pull_tail(skb, header_size);
-
- udph->check = ~csum_ipv6_magic(&ipv6h->saddr,
- &ipv6h->daddr,
- skb->len - off,
- IPPROTO_UDP, 0);
+ if (!skb_partial_csum_set(skb, off,
+ offsetof(struct udphdr, check))) {
+ err = -EPROTO;
+ goto out;
}
+
+ if (recalculate_partial_csum)
+ udp_hdr(skb)->check =
+ ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+ &ipv6_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_UDP, 0);
break;
default:
- if (net_ratelimit())
- netdev_err(vif->dev,
- "Attempting to checksum a non-TCP/UDP packet, "
- "dropping a protocol %d packet\n",
- nexthdr);
goto out;
}
return false;
}
-static unsigned xenvif_tx_build_gops(struct xenvif *vif)
+static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
{
struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
struct sk_buff *skb;
int ret;
while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
- < MAX_PENDING_REQS)) {
+ < MAX_PENDING_REQS) &&
+ (skb_queue_len(&vif->tx_queue) < budget)) {
struct xen_netif_tx_request txreq;
struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
struct page *page;
continue;
}
- RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
+ work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
if (!work_to_do)
break;
}
-static int xenvif_tx_submit(struct xenvif *vif, int budget)
+static int xenvif_tx_submit(struct xenvif *vif)
{
struct gnttab_copy *gop = vif->tx_copy_ops;
struct sk_buff *skb;
int work_done = 0;
- while (work_done < budget &&
- (skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+ while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
struct xen_netif_tx_request *txp;
u16 pending_idx;
unsigned data_len;
if (unlikely(!tx_work_todo(vif)))
return 0;
- nr_gops = xenvif_tx_build_gops(vif);
+ nr_gops = xenvif_tx_build_gops(vif, budget);
if (nr_gops == 0)
return 0;
gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
- work_done = xenvif_tx_submit(vif, nr_gops);
+ work_done = xenvif_tx_submit(vif);
return work_done;
}
ndev->event_cb = NULL;
}
+static void ntb_irq_work(unsigned long data)
+{
+ struct ntb_db_cb *db_cb = (struct ntb_db_cb *)data;
+ int rc;
+
+ rc = db_cb->callback(db_cb->data, db_cb->db_num);
+ if (rc)
+ tasklet_schedule(&db_cb->irq_work);
+ else {
+ struct ntb_device *ndev = db_cb->ndev;
+ unsigned long mask;
+
+ mask = readw(ndev->reg_ofs.ldb_mask);
+ clear_bit(db_cb->db_num * ndev->bits_per_vector, &mask);
+ writew(mask, ndev->reg_ofs.ldb_mask);
+ }
+}
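The interrupt path above implements a mask-and-defer pattern: the hard IRQ
handler masks the doorbell and schedules the tasklet, ntb_irq_work() runs
the client callback, and the doorbell is only unmasked once the callback
reports no remaining work. A minimal sketch of a client callback under the
new int-returning contract (my_dev and my_process_one are hypothetical
names, not code from the patch):

	/* Return non-zero to have ntb_irq_work() reschedule the tasklet,
	 * zero to let it unmask the doorbell again.
	 */
	static int my_db_callback(void *data, int db_num)
	{
		struct my_dev *dev = data;		/* hypothetical client state */

		if (my_process_one(dev, db_num))	/* more work pending? */
			return 1;			/* poll again via the tasklet */

		return 0;				/* done: doorbell gets unmasked */
	}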
+
/**
* ntb_register_db_callback() - register a callback for doorbell interrupt
* @ndev: pointer to ntb_device instance
* RETURNS: An appropriate -ERRNO error value on error, or zero for success.
*/
int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx,
- void *data, void (*func)(void *data, int db_num))
+ void *data, int (*func)(void *data, int db_num))
{
unsigned long mask;
ndev->db_cb[idx].callback = func;
ndev->db_cb[idx].data = data;
+ ndev->db_cb[idx].ndev = ndev;
+
+ tasklet_init(&ndev->db_cb[idx].irq_work, ntb_irq_work,
+ (unsigned long) &ndev->db_cb[idx]);
/* unmask interrupt */
mask = readw(ndev->reg_ofs.ldb_mask);
set_bit(idx * ndev->bits_per_vector, &mask);
writew(mask, ndev->reg_ofs.ldb_mask);
+ tasklet_disable(&ndev->db_cb[idx].irq_work);
+
ndev->db_cb[idx].callback = NULL;
}
return -EINVAL;
ndev->limits.max_mw = SNB_ERRATA_MAX_MW;
+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS;
ndev->reg_ofs.spad_write = ndev->mw[1].vbase +
SNB_SPAD_OFFSET;
ndev->reg_ofs.rdb = ndev->mw[1].vbase +
*/
writeq(ndev->mw[1].bar_sz + 0x1000, ndev->reg_base +
SNB_PBAR4LMT_OFFSET);
+ /* HW errata on the Limit registers. They can only be
+ * written when the base register is 4GB aligned and
+ * < 32bit. This should already be the case based on the
+ * driver defaults, but write the Limit registers first
+ * just in case.
+ */
} else {
ndev->limits.max_mw = SNB_MAX_MW;
+
+ /* HW Errata on bit 14 of b2bdoorbell register. Writes
+ * will not be mirrored to the remote system. Shrink
+ * the number of bits by one, since bit 14 is the last
+ * bit.
+ */
+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS - 1;
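			/* Editor's illustration (not in the patch): with
			 * SNB_MAX_DB_BITS = 15 and bit 15 reserved for the
			 * link interrupt, dropping errata bit 14 leaves
			 * doorbell bits 0-13 usable in this configuration.
			 */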
ndev->reg_ofs.spad_write = ndev->reg_base +
SNB_B2B_SPAD_OFFSET;
ndev->reg_ofs.rdb = ndev->reg_base +
* something silly
*/
writeq(0, ndev->reg_base + SNB_PBAR4LMT_OFFSET);
+ /* HW errata on the Limit registers. They can only be
+ * written when the base register is 4GB aligned and
+ * < 32bit. This should already be the case based on the
+ * driver defaults, but write the Limit registers first
+ * just in case.
+ */
}
/* The Xeon errata workaround requires setting SBAR Base
* have an equal amount.
*/
ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS / 2;
+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS;
/* Note: The SDOORBELL is the cause of the errata. You REALLY
* don't want to touch it.
*/
* have an equal amount.
*/
ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS / 2;
+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS;
ndev->reg_ofs.rdb = ndev->reg_base + SNB_PDOORBELL_OFFSET;
ndev->reg_ofs.ldb = ndev->reg_base + SNB_SDOORBELL_OFFSET;
ndev->reg_ofs.ldb_mask = ndev->reg_base + SNB_SDBMSK_OFFSET;
ndev->reg_ofs.lnk_stat = ndev->reg_base + SNB_SLINK_STATUS_OFFSET;
ndev->reg_ofs.spci_cmd = ndev->reg_base + SNB_PCICMD_OFFSET;
- ndev->limits.max_db_bits = SNB_MAX_DB_BITS;
ndev->limits.msix_cnt = SNB_MSIX_CNT;
ndev->bits_per_vector = SNB_DB_BITS_PER_VEC;
{
struct ntb_db_cb *db_cb = data;
struct ntb_device *ndev = db_cb->ndev;
+ unsigned long mask;
dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq,
db_cb->db_num);
- if (db_cb->callback)
- db_cb->callback(db_cb->data, db_cb->db_num);
+ mask = readw(ndev->reg_ofs.ldb_mask);
+ set_bit(db_cb->db_num * ndev->bits_per_vector, &mask);
+ writew(mask, ndev->reg_ofs.ldb_mask);
+
+ tasklet_schedule(&db_cb->irq_work);
/* No need to check for the specific HB irq, any interrupt means
* we're connected.
{
struct ntb_db_cb *db_cb = data;
struct ntb_device *ndev = db_cb->ndev;
+ unsigned long mask;
dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq,
db_cb->db_num);
- if (db_cb->callback)
- db_cb->callback(db_cb->data, db_cb->db_num);
+ mask = readw(ndev->reg_ofs.ldb_mask);
+ set_bit(db_cb->db_num * ndev->bits_per_vector, &mask);
+ writew(mask, ndev->reg_ofs.ldb_mask);
+
+ tasklet_schedule(&db_cb->irq_work);
/* On Sandybridge, there are 16 bits in the interrupt register
* but only 4 vectors. So, 5 bits are assigned to the first 3
dev_err(&ndev->pdev->dev, "Error determining link status\n");
/* bit 15 is always the link bit */
- writew(1 << ndev->limits.max_db_bits, ndev->reg_ofs.ldb);
+ writew(1 << SNB_LINK_DB, ndev->reg_ofs.ldb);
return IRQ_HANDLED;
}
"Only %d MSI-X vectors. Limiting the number of queues to that number.\n",
rc);
msix_entries = rc;
+
+ rc = pci_enable_msix(pdev, ndev->msix_entries, msix_entries);
+ if (rc)
+ goto err1;
}
for (i = 0; i < msix_entries; i++) {
*/
if (ndev->hw_type == BWD_HW)
writeq(~0, ndev->reg_ofs.ldb_mask);
- else
- writew(~(1 << ndev->limits.max_db_bits),
- ndev->reg_ofs.ldb_mask);
+ else {
+ u16 var = 1 << SNB_LINK_DB;
+ writew(~var, ndev->reg_ofs.ldb_mask);
+ }
rc = ntb_setup_msix(ndev);
if (!rc)
}
}
+static void ntb_hw_link_up(struct ntb_device *ndev)
+{
+ if (ndev->conn_type == NTB_CONN_TRANSPARENT)
+ ntb_link_event(ndev, NTB_LINK_UP);
+ else {
+ u32 ntb_cntl;
+
+ /* Let's bring the NTB link up */
+ ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);
+ ntb_cntl &= ~(NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK);
+ ntb_cntl |= NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP;
+ ntb_cntl |= NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP;
+ writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);
+ }
+}
+
+static void ntb_hw_link_down(struct ntb_device *ndev)
+{
+ u32 ntb_cntl;
+
+ if (ndev->conn_type == NTB_CONN_TRANSPARENT) {
+ ntb_link_event(ndev, NTB_LINK_DOWN);
+ return;
+ }
+
+ /* Bring NTB link down */
+ ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);
+ ntb_cntl &= ~(NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP);
+ ntb_cntl &= ~(NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP);
+ ntb_cntl |= NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK;
+ writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);
+}
+
static int ntb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct ntb_device *ndev;
if (rc)
goto err6;
- /* Let's bring the NTB link up */
- writel(NTB_CNTL_BAR23_SNOOP | NTB_CNTL_BAR45_SNOOP,
- ndev->reg_ofs.lnk_cntl);
+ ntb_hw_link_up(ndev);
return 0;
{
struct ntb_device *ndev = pci_get_drvdata(pdev);
int i;
- u32 ntb_cntl;
- /* Bring NTB link down */
- ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);
- ntb_cntl |= NTB_CNTL_LINK_DISABLE;
- writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);
+ ntb_hw_link_down(ndev);
ntb_transport_free(ndev->ntb_transport);
};
struct ntb_db_cb {
- void (*callback) (void *data, int db_num);
+ int (*callback)(void *data, int db_num);
unsigned int db_num;
void *data;
struct ntb_device *ndev;
+ struct tasklet_struct irq_work;
};
struct ntb_device {
void ntb_unregister_transport(struct ntb_device *ndev);
void ntb_set_mw_addr(struct ntb_device *ndev, unsigned int mw, u64 addr);
int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx,
- void *data, void (*db_cb_func) (void *data,
- int db_num));
+ void *data, int (*db_cb_func)(void *data,
+ int db_num));
void ntb_unregister_db_callback(struct ntb_device *ndev, unsigned int idx);
int ntb_register_event_callback(struct ntb_device *ndev,
void (*event_cb_func) (void *handle,
#define SNB_MAX_COMPAT_SPADS 16
/* Reserve the uppermost bit for link interrupt */
#define SNB_MAX_DB_BITS 15
+#define SNB_LINK_DB 15
#define SNB_DB_BITS_PER_VEC 5
#define SNB_MAX_MW 2
#define SNB_ERRATA_MAX_MW 1
#define SNB_SBAR2XLAT_OFFSET 0x0030
#define SNB_SBAR4XLAT_OFFSET 0x0038
#define SNB_SBAR0BASE_OFFSET 0x0040
-#define SNB_SBAR0BASE_OFFSET 0x0040
-#define SNB_SBAR2BASE_OFFSET 0x0048
-#define SNB_SBAR4BASE_OFFSET 0x0050
#define SNB_SBAR2BASE_OFFSET 0x0048
#define SNB_SBAR4BASE_OFFSET 0x0050
#define SNB_NTBCNTL_OFFSET 0x0058
#define BWD_LTSSMSTATEJMP_FORCEDETECT (1 << 2)
#define BWD_IBIST_ERR_OFLOW 0x7FFF7FFF
-#define NTB_CNTL_CFG_LOCK (1 << 0)
-#define NTB_CNTL_LINK_DISABLE (1 << 1)
-#define NTB_CNTL_BAR23_SNOOP (1 << 2)
-#define NTB_CNTL_BAR45_SNOOP (1 << 6)
-#define BWD_CNTL_LINK_DOWN (1 << 16)
+#define NTB_CNTL_CFG_LOCK (1 << 0)
+#define NTB_CNTL_LINK_DISABLE (1 << 1)
+#define NTB_CNTL_S2P_BAR23_SNOOP (1 << 2)
+#define NTB_CNTL_P2S_BAR23_SNOOP (1 << 4)
+#define NTB_CNTL_S2P_BAR45_SNOOP (1 << 6)
+#define NTB_CNTL_P2S_BAR45_SNOOP (1 << 8)
+#define BWD_CNTL_LINK_DOWN (1 << 16)
#define NTB_PPD_OFFSET 0x00D4
#define SNB_PPD_CONN_TYPE 0x0003
void (*rx_handler) (struct ntb_transport_qp *qp, void *qp_data,
void *data, int len);
- struct tasklet_struct rx_work;
struct list_head rx_pend_q;
struct list_head rx_free_q;
spinlock_t ntb_rx_pend_q_lock;
return 0;
}
-static void ntb_qp_link_cleanup(struct work_struct *work)
+static void ntb_qp_link_cleanup(struct ntb_transport_qp *qp)
{
- struct ntb_transport_qp *qp = container_of(work,
- struct ntb_transport_qp,
- link_cleanup);
struct ntb_transport *nt = qp->transport;
struct pci_dev *pdev = ntb_query_pdev(nt->ndev);
dev_info(&pdev->dev, "qp %d: Link Down\n", qp->qp_num);
qp->qp_link = NTB_LINK_DOWN;
+}
+
+static void ntb_qp_link_cleanup_work(struct work_struct *work)
+{
+ struct ntb_transport_qp *qp = container_of(work,
+ struct ntb_transport_qp,
+ link_cleanup);
+ struct ntb_transport *nt = qp->transport;
+
+ ntb_qp_link_cleanup(qp);
if (nt->transport_link == NTB_LINK_UP)
schedule_delayed_work(&qp->link_work,
schedule_work(&qp->link_cleanup);
}
-static void ntb_transport_link_cleanup(struct work_struct *work)
+static void ntb_transport_link_cleanup(struct ntb_transport *nt)
{
- struct ntb_transport *nt = container_of(work, struct ntb_transport,
- link_cleanup);
int i;
+ /* Pass along the info to any clients */
+ for (i = 0; i < nt->max_qps; i++)
+ if (!test_bit(i, &nt->qp_bitmap))
+ ntb_qp_link_cleanup(&nt->qps[i]);
+
if (nt->transport_link == NTB_LINK_DOWN)
cancel_delayed_work_sync(&nt->link_work);
else
nt->transport_link = NTB_LINK_DOWN;
- /* Pass along the info to any clients */
- for (i = 0; i < nt->max_qps; i++)
- if (!test_bit(i, &nt->qp_bitmap))
- ntb_qp_link_down(&nt->qps[i]);
-
/* The scratchpad registers keep the values if the remote side
* goes down, blast them now to give them a sane value the next
* time they are accessed
ntb_write_local_spad(nt->ndev, i, 0);
}
+static void ntb_transport_link_cleanup_work(struct work_struct *work)
+{
+ struct ntb_transport *nt = container_of(work, struct ntb_transport,
+ link_cleanup);
+
+ ntb_transport_link_cleanup(nt);
+}
+
static void ntb_transport_event_callback(void *data, enum ntb_hw_event event)
{
struct ntb_transport *nt = data;
}
INIT_DELAYED_WORK(&qp->link_work, ntb_qp_link_work);
- INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup);
+ INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup_work);
spin_lock_init(&qp->ntb_rx_pend_q_lock);
spin_lock_init(&qp->ntb_rx_free_q_lock);
}
INIT_DELAYED_WORK(&nt->link_work, ntb_transport_link_work);
- INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup);
+ INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup_work);
rc = ntb_register_event_callback(nt->ndev,
ntb_transport_event_callback);
struct ntb_device *ndev = nt->ndev;
int i;
- nt->transport_link = NTB_LINK_DOWN;
+ ntb_transport_link_cleanup(nt);
/* verify that all the qp's are freed */
for (i = 0; i < nt->max_qps; i++) {
struct dma_chan *chan = qp->dma_chan;
struct dma_device *device;
size_t pay_off, buff_off;
- dma_addr_t src, dest;
+ struct dmaengine_unmap_data *unmap;
dma_cookie_t cookie;
void *buf = entry->buf;
- unsigned long flags;
entry->len = len;
goto err;
if (len < copy_bytes)
- goto err1;
+ goto err_wait;
device = chan->device;
pay_off = (size_t) offset & ~PAGE_MASK;
buff_off = (size_t) buf & ~PAGE_MASK;
if (!is_dma_copy_aligned(device, pay_off, buff_off, len))
- goto err1;
+ goto err_wait;
- dest = dma_map_single(device->dev, buf, len, DMA_FROM_DEVICE);
- if (dma_mapping_error(device->dev, dest))
- goto err1;
+ unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);
+ if (!unmap)
+ goto err_wait;
- src = dma_map_single(device->dev, offset, len, DMA_TO_DEVICE);
- if (dma_mapping_error(device->dev, src))
- goto err2;
+ unmap->len = len;
+ unmap->addr[0] = dma_map_page(device->dev, virt_to_page(offset),
+ pay_off, len, DMA_TO_DEVICE);
+ if (dma_mapping_error(device->dev, unmap->addr[0]))
+ goto err_get_unmap;
+
+ unmap->to_cnt = 1;
+
+ unmap->addr[1] = dma_map_page(device->dev, virt_to_page(buf),
+ buff_off, len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(device->dev, unmap->addr[1]))
+ goto err_get_unmap;
+
+ unmap->from_cnt = 1;
- flags = DMA_COMPL_DEST_UNMAP_SINGLE | DMA_COMPL_SRC_UNMAP_SINGLE |
- DMA_PREP_INTERRUPT;
- txd = device->device_prep_dma_memcpy(chan, dest, src, len, flags);
+ txd = device->device_prep_dma_memcpy(chan, unmap->addr[1],
+ unmap->addr[0], len,
+ DMA_PREP_INTERRUPT);
if (!txd)
- goto err3;
+ goto err_get_unmap;
txd->callback = ntb_rx_copy_callback;
txd->callback_param = entry;
+ dma_set_unmap(txd, unmap);
cookie = dmaengine_submit(txd);
if (dma_submit_error(cookie))
- goto err3;
+ goto err_set_unmap;
+
+ dmaengine_unmap_put(unmap);
qp->last_cookie = cookie;
return;
-err3:
- dma_unmap_single(device->dev, src, len, DMA_TO_DEVICE);
-err2:
- dma_unmap_single(device->dev, dest, len, DMA_FROM_DEVICE);
-err1:
+err_set_unmap:
+ dmaengine_unmap_put(unmap);
+err_get_unmap:
+ dmaengine_unmap_put(unmap);
+err_wait:
/* If the callbacks come out of order, the writing of the index to the
* last completed will be out of order. This may result in the
* receive stalling forever.
goto out;
}
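For reference, the dmaengine unmap-data pattern these conversions follow
(here on the receive side; the transmit side below does the same with a
single source mapping) condenses to the sketch below. This is an outline
under the same dmaengine API, with error handling elided; device, chan,
len, and the src_*/dst_* names are stand-ins for the driver's state:

	struct dmaengine_unmap_data *unmap;
	struct dma_async_tx_descriptor *txd;
	dma_cookie_t cookie;

	/* Reserve slots for one source and one destination mapping. */
	unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);
	unmap->len = len;
	unmap->addr[0] = dma_map_page(device->dev, src_page, src_off,
				      len, DMA_TO_DEVICE);
	unmap->to_cnt = 1;
	unmap->addr[1] = dma_map_page(device->dev, dst_page, dst_off,
				      len, DMA_FROM_DEVICE);
	unmap->from_cnt = 1;

	txd = device->device_prep_dma_memcpy(chan, unmap->addr[1],
					     unmap->addr[0], len,
					     DMA_PREP_INTERRUPT);
	dma_set_unmap(txd, unmap);	/* descriptor takes its own reference */
	cookie = dmaengine_submit(txd);
	dmaengine_unmap_put(unmap);	/* drop the local reference */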
-static void ntb_transport_rx(unsigned long data)
+static int ntb_transport_rxc_db(void *data, int db_num)
{
- struct ntb_transport_qp *qp = (struct ntb_transport_qp *)data;
+ struct ntb_transport_qp *qp = data;
int rc, i;
+ dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n",
+ __func__, db_num);
+
/* Limit the number of packets processed in a single interrupt to
* provide fairness to others
*/
if (qp->dma_chan)
dma_async_issue_pending(qp->dma_chan);
-}
-
-static void ntb_transport_rxc_db(void *data, int db_num)
-{
- struct ntb_transport_qp *qp = data;
-
- dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n",
- __func__, db_num);
- tasklet_schedule(&qp->rx_work);
+ return i;
}
static void ntb_tx_copy_callback(void *data)
struct dma_chan *chan = qp->dma_chan;
struct dma_device *device;
size_t dest_off, buff_off;
- dma_addr_t src, dest;
+ struct dmaengine_unmap_data *unmap;
+ dma_addr_t dest;
dma_cookie_t cookie;
void __iomem *offset;
size_t len = entry->len;
void *buf = entry->buf;
- unsigned long flags;
offset = qp->tx_mw + qp->tx_max_frame * qp->tx_index;
hdr = offset + qp->tx_max_frame - sizeof(struct ntb_payload_header);
if (!is_dma_copy_aligned(device, buff_off, dest_off, len))
goto err;
- src = dma_map_single(device->dev, buf, len, DMA_TO_DEVICE);
- if (dma_mapping_error(device->dev, src))
+ unmap = dmaengine_get_unmap_data(device->dev, 1, GFP_NOWAIT);
+ if (!unmap)
goto err;
- flags = DMA_COMPL_SRC_UNMAP_SINGLE | DMA_PREP_INTERRUPT;
- txd = device->device_prep_dma_memcpy(chan, dest, src, len, flags);
+ unmap->len = len;
+ unmap->addr[0] = dma_map_page(device->dev, virt_to_page(buf),
+ buff_off, len, DMA_TO_DEVICE);
+ if (dma_mapping_error(device->dev, unmap->addr[0]))
+ goto err_get_unmap;
+
+ unmap->to_cnt = 1;
+
+ txd = device->device_prep_dma_memcpy(chan, dest, unmap->addr[0], len,
+ DMA_PREP_INTERRUPT);
if (!txd)
- goto err1;
+ goto err_get_unmap;
txd->callback = ntb_tx_copy_callback;
txd->callback_param = entry;
+ dma_set_unmap(txd, unmap);
cookie = dmaengine_submit(txd);
if (dma_submit_error(cookie))
- goto err1;
+ goto err_set_unmap;
+
+ dmaengine_unmap_put(unmap);
dma_async_issue_pending(chan);
qp->tx_async++;
return;
-err1:
- dma_unmap_single(device->dev, src, len, DMA_TO_DEVICE);
+err_set_unmap:
+ dmaengine_unmap_put(unmap);
+err_get_unmap:
+ dmaengine_unmap_put(unmap);
err:
ntb_memcpy_tx(entry, offset);
qp->tx_memcpy++;
qp->tx_handler = handlers->tx_handler;
qp->event_handler = handlers->event_handler;
+ dmaengine_get();
qp->dma_chan = dma_find_channel(DMA_MEMCPY);
- if (!qp->dma_chan)
+ if (!qp->dma_chan) {
+ dmaengine_put();
dev_info(&pdev->dev, "Unable to allocate DMA channel, using CPU instead\n");
- else
- dmaengine_get();
+ }
for (i = 0; i < NTB_QP_DEF_NUM_ENTRIES; i++) {
entry = kzalloc(sizeof(struct ntb_queue_entry), GFP_ATOMIC);
&qp->tx_free_q);
}
- tasklet_init(&qp->rx_work, ntb_transport_rx, (unsigned long) qp);
-
rc = ntb_register_db_callback(qp->ndev, free_queue, qp,
ntb_transport_rxc_db);
if (rc)
- goto err3;
+ goto err2;
dev_info(&pdev->dev, "NTB Transport QP %d created\n", qp->qp_num);
return qp;
-err3:
- tasklet_disable(&qp->rx_work);
err2:
while ((entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q)))
kfree(entry);
err1:
while ((entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q)))
kfree(entry);
+ if (qp->dma_chan)
+ dmaengine_put();
set_bit(free_queue, &nt->qp_bitmap);
err:
return NULL;
}
ntb_unregister_db_callback(qp->ndev, qp->qp_num);
- tasklet_disable(&qp->rx_work);
cancel_delayed_work_sync(&qp->link_work);
depends on OF_IRQ
help
This option builds in test cases for the device tree infrastructure
- that are executed one at boot time, and the results dumped to the
+ that are executed once at boot time, and the results dumped to the
console.
If unsure, say N here, but this option is safe to enable.
(unsigned long long)cp, (unsigned long long)s,
(unsigned long long)da);
- /*
- * If the number of address cells is larger than 2 we assume the
- * mapping doesn't specify a physical address. Rather, the address
- * specifies an identifier that must match exactly.
- */
- if (na > 2 && memcmp(range, addr, na * 4) != 0)
- return OF_BAD_ADDR;
-
if (da < cp || da >= (cp + s))
return OF_BAD_ADDR;
return da - cp;
*/
void __init unflatten_and_copy_device_tree(void)
{
- int size = __be32_to_cpu(initial_boot_params->totalsize);
- void *dt = early_init_dt_alloc_memory_arch(size,
+ int size;
+ void *dt;
+
+ if (!initial_boot_params) {
+ pr_warn("No valid device tree found, continuing without\n");
+ return;
+ }
+
+ size = __be32_to_cpu(initial_boot_params->totalsize);
+ dt = early_init_dt_alloc_memory_arch(size,
__alignof__(struct boot_param_header));
if (dt) {
if (of_get_property(ipar, "interrupt-controller", NULL) !=
NULL) {
pr_debug(" -> got it !\n");
- of_node_put(old);
return 0;
}
* Successfully parsed an interrrupt-map translation; copy new
* interrupt specifier into the out_irq structure
*/
- of_node_put(out_irq->np);
- out_irq->np = of_node_get(newpar);
+ out_irq->np = newpar;
match_array = imap - newaddrsize - newintsize;
for (i = 0; i < newintsize; i++)
}
fail:
of_node_put(ipar);
- of_node_put(out_irq->np);
of_node_put(newpar);
return -EINVAL;
* Otherwise is returns a bitmask with supported features. Current
* features reported are:
* PCI_PASID_CAP_EXEC - Execute permission supported
- * PCI_PASID_CAP_PRIV - Priviledged mode supported
+ * PCI_PASID_CAP_PRIV - Privileged mode supported
*/
int pci_pasid_features(struct pci_dev *pdev)
{
*value = 0;
break;
+ case PCI_INTERRUPT_LINE:
+ /* LINE PIN MIN_GNT MAX_LAT */
+ *value = 0;
+ break;
+
default:
*value = 0xffffffff;
return PCIBIOS_BAD_REGISTER_NUMBER;
void __iomem *afi;
int irq;
- struct list_head busses;
+ struct list_head buses;
struct resource *cs;
struct resource io;
/*
* Look up a virtual address mapping for the specified bus number. If no such
- * mapping existis, try to create one.
+ * mapping exists, try to create one.
*/
static void __iomem *tegra_pcie_bus_map(struct tegra_pcie *pcie,
unsigned int busnr)
{
struct tegra_pcie_bus *bus;
- list_for_each_entry(bus, &pcie->busses, list)
+ list_for_each_entry(bus, &pcie->buses, list)
if (bus->nr == busnr)
return (void __iomem *)bus->area->addr;
if (IS_ERR(bus))
return NULL;
- list_add_tail(&bus->list, &pcie->busses);
+ list_add_tail(&bus->list, &pcie->buses);
return (void __iomem *)bus->area->addr;
}
value &= ~AFI_FUSE_PCIE_T0_GEN2_DIS;
afi_writel(pcie, value, AFI_FUSE);
- /* initialze internal PHY, enable up to 16 PCIE lanes */
+ /* initialize internal PHY, enable up to 16 PCIE lanes */
pads_writel(pcie, 0x0, PADS_CTL_SEL);
/* override IDDQ to 1 on all 4 lanes */
if (!pcie)
return -ENOMEM;
- INIT_LIST_HEAD(&pcie->busses);
+ INIT_LIST_HEAD(&pcie->buses);
INIT_LIST_HEAD(&pcie->ports);
pcie->soc_data = match->data;
pcie->dev = &pdev->dev;
return -ENOSPC;
/*
* Check if this position is at correct offset.nvec is always a
- * power of two. pos0 must be nvec bit alligned.
+ * power of two. pos0 must be nvec bit aligned.
*/
if (pos % msgvec)
pos += msgvec - (pos % msgvec);
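	/* Worked example (editor's illustration): with msgvec = 4 and a
	 * candidate pos = 6, 6 % 4 = 2, so pos is bumped by 4 - 2 = 2 to
	 * the next aligned position, 8.
	 */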
To compile this driver as a module, choose M here: the
module will be called rpadlpar_io.
-
- When in doubt, say N.
+
+ When in doubt, say N.
config HOTPLUG_PCI_SGI
tristate "SGI PCI Hotplug Support"
cpci_hotplug_pci.o
endif
ifdef CONFIG_ACPI
-pci_hotplug-objs += acpi_pcihp.o
+pci_hotplug-objs += acpi_pcihp.o
endif
cpqphp-objs := cpqphp_core.o \
string = (struct acpi_buffer){ ACPI_ALLOCATE_BUFFER, NULL };
}
- handle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ handle = ACPI_HANDLE(&pdev->dev);
if (!handle) {
/*
* This hotplug controller was not listed in the ACPI name
u8 acpiphp_get_adapter_status(struct acpiphp_slot *slot);
/* variables */
-extern bool acpiphp_debug;
extern bool acpiphp_disabled;
#endif /* _ACPIPHP_H */
* @info: must match the pointer used to register
*
* Description: This is used to un-register a hardware specific acpi
- * driver that manipulates the attention LED. The pointer to the
+ * driver that manipulates the attention LED. The pointer to the
* info struct must be the same as the one used to set it.
*/
int acpiphp_unregister_attention(struct acpiphp_attention_info *info)
* was registered with us. This allows hardware specific
* ACPI implementations to blink the light for us.
*/
- static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
- {
+static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
+{
int retval = -ENODEV;
pr_debug("%s - physical_slot = %s\n", __func__,
} else
attention_info = NULL;
return retval;
- }
-
+}
+
/**
* get_power_status - get power status of a slot
if (retval) {
pr_err("pci_hp_register failed with error %d\n", retval);
goto error_hpslot;
- }
+ }
pr_info("Slot [%s] registered\n", slot_name(slot));
status = acpi_evaluate_integer(handle, "_ADR", NULL, &adr);
if (ACPI_FAILURE(status)) {
- acpi_handle_warn(handle, "can't evaluate _ADR (%#x)\n", status);
+ if (status != AE_NOT_FOUND)
+ acpi_handle_warn(handle,
+ "can't evaluate _ADR (%#x)\n", status);
return AE_OK;
}
list_add_tail(&slot->node, &bridge->slots);
- /* Register slots for ejectable funtions only. */
+ /* Register slots for ejectable functions only. */
if (acpi_pci_check_ejectable(pbus, handle) || is_dock_device(handle)) {
unsigned long long sun;
int retval;
slot->flags &= (~SLOT_ENABLED);
}
+static bool acpiphp_no_hotplug(acpi_handle handle)
+{
+ struct acpi_device *adev = NULL;
+
+ acpi_bus_get_device(handle, &adev);
+ return adev && adev->flags.no_hotplug;
+}
+
+static bool slot_no_hotplug(struct acpiphp_slot *slot)
+{
+ struct acpiphp_func *func;
+
+ list_for_each_entry(func, &slot->funcs, sibling)
+ if (acpiphp_no_hotplug(func_to_handle(func)))
+ return true;
+
+ return false;
+}
/**
* get_slot_status - get ACPI slot status
unsigned long long sta;
status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
- alive = ACPI_SUCCESS(status) && sta == ACPI_STA_ALL;
+ alive = (ACPI_SUCCESS(status) && sta == ACPI_STA_ALL)
+ || acpiphp_no_hotplug(handle);
}
if (!alive) {
u32 v;
struct pci_dev *dev, *tmp;
mutex_lock(&slot->crit_sect);
- /* wake up all functions */
- if (get_slot_status(slot) == ACPI_STA_ALL) {
+ if (slot_no_hotplug(slot)) {
+ ; /* do nothing */
+ } else if (get_slot_status(slot) == ACPI_STA_ALL) {
/* remove stale devices if any */
list_for_each_entry_safe(dev, tmp, &bus->devices,
bus_list)
.read = ibm_read_apci_table,
.write = NULL,
};
-static struct acpiphp_attention_info ibm_attention_info =
+static struct acpiphp_attention_info ibm_attention_info =
{
.set_attn = ibm_set_attention_status,
.get_attn = ibm_get_attention_status,
*/
static int ibm_set_attention_status(struct hotplug_slot *slot, u8 status)
{
- union acpi_object args[2];
+ union acpi_object args[2];
struct acpi_object_list params = { .pointer = args, .count = 2 };
- acpi_status stat;
+ acpi_status stat;
unsigned long long rc;
union apci_descriptor *ibm_slot;
*
* Description: This method is registered with the acpiphp module as a
* callback to do the device specific task of getting the LED status.
- *
+ *
* Because there is no direct method of getting the LED status directly
* from an ACPI call, we read the aPCI table and parse out our
* slot descriptor to read the status from that.
pr_debug("%s: Received notification %02x\n", __func__, event);
if (subevent == 0x80) {
- pr_debug("%s: generationg bus event\n", __func__);
+ pr_debug("%s: generating bus event\n", __func__);
acpi_bus_generate_netlink_event(note->device->pnp.device_class,
dev_name(¬e->device->dev),
note->event, detail);
u32 lvl, void *context, void **rv)
{
acpi_handle *phandle = (acpi_handle *)context;
- acpi_status status;
+ acpi_status status;
struct acpi_device_info *info;
int retval = 0;
info->hardware_id.string, handle);
*phandle = handle;
/* returning non-zero causes the search to stop
- * and returns this value to the caller of
+ * and returns this value to the caller of
* acpi_walk_namespace, but it also causes some warnings
* in the acpi debug code to print...
*/
do { \
if (cpci_debug) \
printk (KERN_DEBUG "%s: " format "\n", \
- MY_NAME , ## arg); \
+ MY_NAME , ## arg); \
} while (0)
#define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
do { \
if (cpci_debug) \
printk (KERN_DEBUG "%s: " format "\n", \
- MY_NAME , ## arg); \
+ MY_NAME , ## arg); \
} while (0)
#define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
* option) any later version.
*
* THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
- * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
- * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
- * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+ * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
#define dbg(format, arg...) \
do { \
- if(debug) \
+ if (debug) \
printk (KERN_DEBUG "%s: " format "\n", \
- MY_NAME , ## arg); \
+ MY_NAME , ## arg); \
} while(0)
#define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
* option) any later version.
*
* THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
- * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
- * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
- * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+ * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
#define dbg(format, arg...) \
do { \
- if(debug) \
+ if (debug) \
printk (KERN_DEBUG "%s: " format "\n", \
- MY_NAME , ## arg); \
+ MY_NAME , ## arg); \
} while(0)
#define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
{ 0, }
};
MODULE_DEVICE_TABLE(pci, zt5550_hc_pci_tbl);
-
+
static struct pci_driver zt5550_hc_driver = {
.name = "zt5550_hc",
.id_table = zt5550_hc_pci_tbl,
* option) any later version.
*
* THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
- * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
- * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
- * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+ * AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You should have received a copy of the GNU General Public License along
#define HC_CMD_REG 0x0C
#define ARB_CONFIG_GNT_REG 0x10
#define ARB_CONFIG_CFG_REG 0x12
-#define ARB_CONFIG_REG 0x10
+#define ARB_CONFIG_REG 0x10
#define ISOL_CONFIG_REG 0x18
#define FAULT_STATUS_REG 0x20
#define FAULT_CONFIG_REG 0x24
goto err_disable_device;
}
- /* Check for the proper subsystem ID's
+ /* Check for the proper subsystem IDs
* Intel uses a different SSID programming model than Compaq.
* For Intel, each SSID bit identifies a PHP capability.
- * Also Intel HPC's may have RID=0.
+ * Also Intel HPCs may have RID=0.
*/
if ((pdev->revision <= 2) && (vendor_id != PCI_VENDOR_ID_INTEL)) {
err(msg_HPC_not_supported);
/* Only if mode change...*/
if (((bus->cur_bus_speed == PCI_SPEED_66MHz) && (adapter_speed == PCI_SPEED_66MHz_PCIX)) ||
- ((bus->cur_bus_speed == PCI_SPEED_66MHz_PCIX) && (adapter_speed == PCI_SPEED_66MHz)))
+ ((bus->cur_bus_speed == PCI_SPEED_66MHz_PCIX) && (adapter_speed == PCI_SPEED_66MHz)))
set_SOGO(ctrl);
wait_for_ctrl_irq(ctrl);
if (ctrl->event_queue[loop].event_type == INT_BUTTON_PRESS) {
dbg("button pressed\n");
- } else if (ctrl->event_queue[loop].event_type ==
+ } else if (ctrl->event_queue[loop].event_type ==
INT_BUTTON_CANCEL) {
dbg("button cancel\n");
del_timer(&p_slot->task_event);
if (rc)
return rc;
- /* find range of busses to use */
+ /* find range of buses to use */
dbg("find ranges of buses to use\n");
bus_node = get_max_resource(&(resources->bus_head), 1);
- /* If we don't have any busses to allocate, we can't continue */
+ /* If we don't have any buses to allocate, we can't continue */
if (!bus_node)
return -ENOMEM;
/* If this function needs an interrupt and we are behind
* a bridge and the pin is tied to something that's
- * alread mapped, set this one the same */
+ * already mapped, set this one the same */
if (temp_byte && resources->irqs &&
(resources->irqs->valid_INT &
(0x01 << ((temp_byte + resources->irqs->barber_pole - 1) & 0x03)))) {
*
* Reads configuration for all slots in a PCI bus and saves info.
*
- * Note: For non-hot plug busses, the slot # saved is the device #
+ * Note: For non-hot plug buses, the slot # saved is the device #
*
* returns 0 if success
*/
* cpqhp_save_slot_config
*
* Saves configuration info for all PCI devices in a given slot
- * including subordinate busses.
+ * including subordinate buses.
*
* returns 0 if success
*/
kfree(tres);
}
}
-
/************************************************************
-* RESOURE TYPE *
+* RESOURCE TYPE *
************************************************************/
#define EBDA_RSRC_TYPE_MASK 0x03
//--------------------------------------------------------------
struct rio_table_hdr {
- u8 ver_num;
+ u8 ver_num;
u8 scal_count;
u8 riodev_count;
u16 offset;
};
//--------------------------------------------------------------
-// RIO DETAIL
+// RIO DETAIL
//--------------------------------------------------------------
struct rio_detail {
u8 first_slot_num;
u8 middle_num;
struct list_head opt_rio_list;
-};
+};
struct opt_rio_lo {
u8 rio_type;
u8 middle_num;
u8 pack_count;
struct list_head opt_rio_lo_list;
-};
+};
/****************************************************************
* HPC DESCRIPTOR NODE *
#define HPC_CTLR_IRQ_PENDG 0x80
//----------------------------------------------------------------------------
-// HPC_CTLR_WROKING status return codes
+// HPC_CTLR_WORKING status return codes
//----------------------------------------------------------------------------
#define HPC_CTLR_WORKING_NO 0x00
#define HPC_CTLR_WORKING_YES 0x01
struct pci_bus *ibmphp_pci_bus;
static int max_slots;
-static int irqs[16]; /* PIC mode IRQ's we're using so far (in case MPS
+static int irqs[16]; /* PIC mode IRQs we're using so far (in case MPS
* tables don't provide default info for empty slots */
static int init_flag;
return get_max_adapter_speed_1 (hs, value, 1);
}
*/
-static inline int get_cur_bus_info(struct slot **sl)
+static inline int get_cur_bus_info(struct slot **sl)
{
int rc = 1;
struct slot * slot_cur = *sl;
debug("options = %x\n", slot_cur->ctrl->options);
- debug("revision = %x\n", slot_cur->ctrl->revision);
+ debug("revision = %x\n", slot_cur->ctrl->revision);
- if (READ_BUS_STATUS(slot_cur->ctrl))
+ if (READ_BUS_STATUS(slot_cur->ctrl))
rc = ibmphp_hpc_readslot(slot_cur, READ_BUSSTATUS, NULL);
-
- if (rc)
+
+ if (rc)
return rc;
-
+
slot_cur->bus_on->current_speed = CURRENT_BUS_SPEED(slot_cur->busstatus);
if (READ_BUS_MODE(slot_cur->ctrl))
slot_cur->bus_on->current_bus_mode =
slot_cur->busstatus,
slot_cur->bus_on->current_speed,
slot_cur->bus_on->current_bus_mode);
-
+
*sl = slot_cur;
return 0;
}
static inline int slot_update(struct slot **sl)
{
int rc;
- rc = ibmphp_hpc_readslot(*sl, READ_ALLSTAT, NULL);
- if (rc)
+ rc = ibmphp_hpc_readslot(*sl, READ_ALLSTAT, NULL);
+ if (rc)
return rc;
if (!init_flag)
rc = get_cur_bus_info(sl);
debug("(*cur_slot)->irq[3] = %x\n",
(*cur_slot)->irq[3]);
- debug("rtable->exlusive_irqs = %x\n",
+ debug("rtable->exclusive_irqs = %x\n",
rtable->exclusive_irqs);
debug("rtable->slots[loop].irq[0].bitmap = %x\n",
rtable->slots[loop].irq[0].bitmap);
else
rc = -ENODEV;
}
- } else
+ } else
rc = -ENODEV;
ibmphp_unlock_operations();
debug("get_attention_status - Entry hotplug_slot[%lx] pvalue[%lx]\n",
(ulong) hotplug_slot, (ulong) value);
-
+
ibmphp_lock_operations();
if (hotplug_slot) {
pslot = hotplug_slot->private;
ibmphp_lock_operations();
mode = slot->supported_bus_mode;
- speed = slot->supported_speed;
+ speed = slot->supported_speed;
ibmphp_unlock_operations();
switch (speed) {
case BUS_SPEED_33:
break;
case BUS_SPEED_66:
- if (mode == BUS_MODE_PCIX)
+ if (mode == BUS_MODE_PCIX)
speed += 0x01;
break;
case BUS_SPEED_100:
debug("BEFORE GETTING SLOT STATUS, slot # %x\n",
slot_cur->number);
- if (slot_cur->ctrl->revision == 0xFF)
+ if (slot_cur->ctrl->revision == 0xFF)
if (get_ctrl_revision(slot_cur,
&slot_cur->ctrl->revision))
return -1;
- if (slot_cur->bus_on->current_speed == 0xFF)
- if (get_cur_bus_info(&slot_cur))
+ if (slot_cur->bus_on->current_speed == 0xFF)
+ if (get_cur_bus_info(&slot_cur))
return -1;
get_max_bus_speed(slot_cur);
debug("SLOT_PRESENT = %x\n", SLOT_PRESENT(slot_cur->status));
debug("SLOT_LATCH = %x\n", SLOT_LATCH(slot_cur->status));
- if ((SLOT_PWRGD(slot_cur->status)) &&
- !(SLOT_PRESENT(slot_cur->status)) &&
+ if ((SLOT_PWRGD(slot_cur->status)) &&
+ !(SLOT_PRESENT(slot_cur->status)) &&
!(SLOT_LATCH(slot_cur->status))) {
debug("BEFORE POWER OFF COMMAND\n");
rc = power_off(slot_cur);
switch (opn) {
case ENABLE:
- if (!(SLOT_PWRGD(slot_cur->status)) &&
- (SLOT_PRESENT(slot_cur->status)) &&
+ if (!(SLOT_PWRGD(slot_cur->status)) &&
+ (SLOT_PRESENT(slot_cur->status)) &&
!(SLOT_LATCH(slot_cur->status)))
return 0;
break;
case DISABLE:
- if ((SLOT_PWRGD(slot_cur->status)) &&
+ if ((SLOT_PWRGD(slot_cur->status)) &&
(SLOT_PRESENT(slot_cur->status)) &&
!(SLOT_LATCH(slot_cur->status)))
return 0;
err("out of system memory\n");
return -ENOMEM;
}
-
+
info->power_status = SLOT_PWRGD(slot_cur->status);
info->attention_status = SLOT_ATTN(slot_cur->status,
slot_cur->ext_status);
case BUS_SPEED_33:
break;
case BUS_SPEED_66:
- if (mode == BUS_MODE_PCIX)
+ if (mode == BUS_MODE_PCIX)
bus_speed += 0x01;
else if (mode == BUS_MODE_PCI)
;
}
bus->cur_bus_speed = bus_speed;
- // To do: bus_names
-
+ // To do: bus_names
+
rc = pci_hp_change_slot_info(slot_cur->hotplug_slot, info);
kfree(info);
return rc;
}
/*
- * The following function is to fix kernel bug regarding
- * getting bus entries, here we manually add those primary
+ * The following function is to fix kernel bug regarding
+ * getting bus entries, here we manually add those primary
* bus entries to kernel bus structure whenever apply
*/
static u8 bus_structure_fixup(u8 busno)
}
/*******************************************************
- * Returns whether the bus is empty or not
+ * Returns whether the bus is empty or not
*******************************************************/
static int is_bus_empty(struct slot * slot_cur)
{
}
/***********************************************************
- * If the HPC permits and the bus currently empty, tries to set the
+ * If the HPC permits and the bus currently empty, tries to set the
* bus speed and mode at the maximum card and bus capability
* Parameters: slot
* Returns: bus is set (0) or error code
static struct pci_device_id ciobx[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS, 0x0101) },
{ },
- };
+ };
debug("%s - entry slot # %d\n", __func__, slot_cur->number);
if (SET_BUS_STATUS(slot_cur->ctrl) && is_bus_empty(slot_cur)) {
else if (!SLOT_BUS_MODE(slot_cur->ext_status))
/* if max slot/bus capability is 66 pci
and there's no bus mode mismatch, then
- the adapter supports 66 pci */
+ the adapter supports 66 pci */
cmd = HPC_BUS_66CONVMODE;
else
cmd = HPC_BUS_33CONVMODE;
return -EIO;
}
}
- /* This is for x440, once Brandon fixes the firmware,
+ /* This is for x440, once Brandon fixes the firmware,
will not need this delay */
msleep(1000);
debug("%s -Exit\n", __func__);
}
/* This routine checks the bus limitations that the slot is on from the BIOS.
- * This is used in deciding whether or not to power up the slot.
+ * This is used in deciding whether or not to power up the slot.
* (electrical/spec limitations. For example, >1 133 MHz or >2 66 PCI cards on
- * same bus)
+ * same bus)
* Parameters: slot
* Returns: 0 = no limitations, -EINVAL = exceeded limitations on the bus
*/
static inline void print_card_capability(struct slot *slot_cur)
{
info("capability of the card is ");
- if ((slot_cur->ext_status & CARD_INFO) == PCIX133)
+ if ((slot_cur->ext_status & CARD_INFO) == PCIX133)
info(" 133 MHz PCI-X\n");
else if ((slot_cur->ext_status & CARD_INFO) == PCIX66)
info(" 66 MHz PCI-X\n");
}
attn_LED_blink(slot_cur);
-
+
rc = set_bus(slot_cur);
if (rc) {
err("was not able to set the bus\n");
rc = slot_update(&slot_cur);
if (rc)
goto error_power;
-
+
rc = -EINVAL;
if (SLOT_POWER(slot_cur->status) && !(SLOT_PWRGD(slot_cur->status))) {
err("power fault occurred trying to power up...\n");
"speed and card capability\n");
print_card_capability(slot_cur);
goto error_power;
- }
+ }
/* Don't think this case will happen after above checks...
* but just in case, for paranoia sake */
if (!(SLOT_POWER(slot_cur->status))) {
ibmphp_print_test();
rc = ibmphp_update_slot_info(slot_cur);
exit:
- ibmphp_unlock_operations();
+ ibmphp_unlock_operations();
return rc;
error_nopower:
{
struct slot *slot = hotplug_slot->private;
int rc;
-
+
ibmphp_lock_operations();
rc = ibmphp_do_disable_slot(slot);
ibmphp_unlock_operations();
int rc;
u8 flag;
- debug("DISABLING SLOT...\n");
-
+ debug("DISABLING SLOT...\n");
+
if ((slot_cur == NULL) || (slot_cur->ctrl == NULL)) {
return -ENODEV;
}
-
+
flag = slot_cur->flag;
slot_cur->flag = 1;
attn_LED_blink(slot_cur);
if (slot_cur->func == NULL) {
- /* We need this for fncs's that were there on bootup */
+ /* We need this for functions that were there on bootup */
slot_cur->func = kzalloc(sizeof(struct pci_func), GFP_KERNEL);
if (!slot_cur->func) {
err("out of system memory\n");
}
ibm_unconfigure_device(slot_cur->func);
-
- /* If we got here from latch suddenly opening on operating card or
- a power fault, there's no power to the card, so cannot
- read from it to determine what resources it occupied. This operation
- is forbidden anyhow. The best we can do is remove it from kernel
- lists at least */
+
+ /*
+ * If we got here from latch suddenly opening on operating card or
+ * a power fault, there's no power to the card, so cannot
+ * read from it to determine what resources it occupied. This operation
+ * is forbidden anyhow. The best we can do is remove it from kernel
+ * lists at least */
if (!flag) {
attn_off(slot_cur);
rc = -EFAULT;
goto exit;
}
- if (flag)
+ if (flag)
ibmphp_update_slot_info(slot_cur);
goto exit;
}
debug("AFTER Resource & EBDA INITIALIZATIONS\n");
max_slots = get_max_slots();
-
+
if ((rc = ibmphp_register_pci()))
goto error;
static void __init print_bus_info (void)
{
struct bus_info *ptr;
-
+
list_for_each_entry(ptr, &bus_info_head, bus_info_list) {
debug ("%s - slot_min = %x\n", __func__, ptr->slot_min);
debug ("%s - slot_max = %x\n", __func__, ptr->slot_max);
debug ("%s - bus# = %x\n", __func__, ptr->busno);
debug ("%s - current_speed = %x\n", __func__, ptr->current_speed);
debug ("%s - controller_id = %x\n", __func__, ptr->controller_id);
-
+
debug ("%s - slots_at_33_conv = %x\n", __func__, ptr->slots_at_33_conv);
debug ("%s - slots_at_66_conv = %x\n", __func__, ptr->slots_at_66_conv);
debug ("%s - slots_at_66_pcix = %x\n", __func__, ptr->slots_at_66_pcix);
static void print_lo_info (void)
{
struct rio_detail *ptr;
- debug ("print_lo_info ----\n");
+ debug ("print_lo_info ----\n");
list_for_each_entry(ptr, &rio_lo_head, rio_detail_list) {
debug ("%s - rio_node_id = %x\n", __func__, ptr->rio_node_id);
debug ("%s - rio_type = %x\n", __func__, ptr->rio_type);
struct ebda_pci_rsrc *ptr;
list_for_each_entry(ptr, &ibmphp_ebda_pci_rsrc_head, ebda_pci_rsrc_list) {
- debug ("%s - rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
+ debug ("%s - rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
__func__, ptr->rsrc_type ,ptr->bus_num, ptr->dev_fun,ptr->start_addr, ptr->end_addr);
}
}
ebda_seg = readw (io_mem);
iounmap (io_mem);
debug ("returned ebda segment: %x\n", ebda_seg);
-
+
io_mem = ioremap(ebda_seg<<4, 1);
if (!io_mem)
return -ENOMEM;
re = readw (io_mem + sub_addr); /* next sub blk */
sub_addr += 2;
- rc_id = readw (io_mem + sub_addr); /* sub blk id */
+ rc_id = readw (io_mem + sub_addr); /* sub blk id */
sub_addr += 2;
if (rc_id != 0x5243)
debug ("info about hpc descriptor---\n");
debug ("hot blk format: %x\n", format);
debug ("num of controller: %x\n", num_ctlrs);
- debug ("offset of hpc data structure enteries: %x\n ", sub_addr);
+ debug ("offset of hpc data structure entries: %x\n ", sub_addr);
sub_addr = base + re; /* re sub blk */
/* FIXME: rc is never used/checked */
debug ("info about rsrc descriptor---\n");
debug ("format: %x\n", format);
debug ("num of rsrc: %x\n", num_entries);
- debug ("offset of rsrc data structure enteries: %x\n ", sub_addr);
+ debug ("offset of rsrc data structure entries: %x\n ", sub_addr);
hs_complete = 1;
} else {
rio_table_ptr->scal_count = readb (io_mem + offset + 1);
rio_table_ptr->riodev_count = readb (io_mem + offset + 2);
rio_table_ptr->offset = offset +3 ;
-
+
debug("info about rio table hdr ---\n");
debug("ver_num: %x\nscal_count: %x\nriodev_count: %x\noffset of rio table: %x\n ",
rio_table_ptr->ver_num, rio_table_ptr->scal_count,
rio_detail_ptr->chassis_num = readb (io_mem + offset + 14);
// debug ("rio_node_id: %x\nbbar: %x\nrio_type: %x\nowner_id: %x\nport0_node: %x\nport0_port: %x\nport1_node: %x\nport1_port: %x\nfirst_slot_num: %x\nstatus: %x\n", rio_detail_ptr->rio_node_id, rio_detail_ptr->bbar, rio_detail_ptr->rio_type, rio_detail_ptr->owner_id, rio_detail_ptr->port0_node_connect, rio_detail_ptr->port0_port_connect, rio_detail_ptr->port1_node_connect, rio_detail_ptr->port1_port_connect, rio_detail_ptr->first_slot_num, rio_detail_ptr->status);
//create linked list of chassis
- if (rio_detail_ptr->rio_type == 4 || rio_detail_ptr->rio_type == 5)
+ if (rio_detail_ptr->rio_type == 4 || rio_detail_ptr->rio_type == 5)
list_add (&rio_detail_ptr->rio_detail_list, &rio_vg_head);
- //create linked list of expansion box
- else if (rio_detail_ptr->rio_type == 6 || rio_detail_ptr->rio_type == 7)
+ //create linked list of expansion box
+ else if (rio_detail_ptr->rio_type == 6 || rio_detail_ptr->rio_type == 7)
list_add (&rio_detail_ptr->rio_detail_list, &rio_lo_head);
- else
+ else
// not in my concern
kfree (rio_detail_ptr);
offset += 15;
}
/*
- * reorganizing linked list of chassis
+ * reorganizing linked list of chassis
*/
static struct opt_rio *search_opt_vg (u8 chassis_num)
{
list_for_each_entry(ptr, &opt_vg_head, opt_rio_list) {
if (ptr->chassis_num == chassis_num)
return ptr;
- }
+ }
return NULL;
}
{
struct opt_rio *opt_rio_ptr = NULL;
struct rio_detail *rio_detail_ptr = NULL;
-
+
list_for_each_entry(rio_detail_ptr, &rio_vg_head, rio_detail_list) {
opt_rio_ptr = search_opt_vg (rio_detail_ptr->chassis_num);
if (!opt_rio_ptr) {
opt_rio_ptr->first_slot_num = rio_detail_ptr->first_slot_num;
opt_rio_ptr->middle_num = rio_detail_ptr->first_slot_num;
list_add (&opt_rio_ptr->opt_rio_list, &opt_vg_head);
- } else {
+ } else {
opt_rio_ptr->first_slot_num = min (opt_rio_ptr->first_slot_num, rio_detail_ptr->first_slot_num);
opt_rio_ptr->middle_num = max (opt_rio_ptr->middle_num, rio_detail_ptr->first_slot_num);
- }
+ }
}
print_opt_vg ();
- return 0;
-}
+ return 0;
+}
/*
* reorganizing linked list of expansion box
list_for_each_entry(ptr, &opt_lo_head, opt_rio_lo_list) {
if (ptr->chassis_num == chassis_num)
return ptr;
- }
+ }
return NULL;
}
{
struct opt_rio_lo *opt_rio_lo_ptr = NULL;
struct rio_detail *rio_detail_ptr = NULL;
-
+
list_for_each_entry(rio_detail_ptr, &rio_lo_head, rio_detail_list) {
opt_rio_lo_ptr = search_opt_lo (rio_detail_ptr->chassis_num);
if (!opt_rio_lo_ptr) {
opt_rio_lo_ptr->first_slot_num = rio_detail_ptr->first_slot_num;
opt_rio_lo_ptr->middle_num = rio_detail_ptr->first_slot_num;
opt_rio_lo_ptr->pack_count = 1;
-
+
list_add (&opt_rio_lo_ptr->opt_rio_lo_list, &opt_lo_head);
- } else {
+ } else {
opt_rio_lo_ptr->first_slot_num = min (opt_rio_lo_ptr->first_slot_num, rio_detail_ptr->first_slot_num);
opt_rio_lo_ptr->middle_num = max (opt_rio_lo_ptr->middle_num, rio_detail_ptr->first_slot_num);
opt_rio_lo_ptr->pack_count = 2;
- }
+ }
}
- return 0;
+ return 0;
}
-
+
/* Since we don't know the max slot number per each chassis, hence go
* through the list of all chassis to find out the range
- * Arguments: slot_num, 1st slot number of the chassis we think we are on,
- * var (0 = chassis, 1 = expansion box)
+ * Arguments: slot_num, 1st slot number of the chassis we think we are on,
+ * var (0 = chassis, 1 = expansion box)
*/
static int first_slot_num (u8 slot_num, u8 first_slot, u8 var)
{
if (!var) {
list_for_each_entry(opt_vg_ptr, &opt_vg_head, opt_rio_list) {
- if ((first_slot < opt_vg_ptr->first_slot_num) && (slot_num >= opt_vg_ptr->first_slot_num)) {
+ if ((first_slot < opt_vg_ptr->first_slot_num) && (slot_num >= opt_vg_ptr->first_slot_num)) {
rc = -ENODEV;
break;
}
list_for_each_entry(opt_lo_ptr, &opt_lo_head, opt_rio_lo_list) {
//check to see if this slot_num belongs to expansion box
- if ((slot_num >= opt_lo_ptr->first_slot_num) && (!first_slot_num (slot_num, opt_lo_ptr->first_slot_num, 1)))
+ if ((slot_num >= opt_lo_ptr->first_slot_num) && (!first_slot_num (slot_num, opt_lo_ptr->first_slot_num, 1)))
return opt_lo_ptr;
}
return NULL;
struct opt_rio *opt_vg_ptr;
list_for_each_entry(opt_vg_ptr, &opt_vg_head, opt_rio_list) {
- //check to see if this slot_num belongs to chassis
- if ((slot_num >= opt_vg_ptr->first_slot_num) && (!first_slot_num (slot_num, opt_vg_ptr->first_slot_num, 0)))
+ //check to see if this slot_num belongs to chassis
+ if ((slot_num >= opt_vg_ptr->first_slot_num) && (!first_slot_num (slot_num, opt_vg_ptr->first_slot_num, 0)))
return opt_vg_ptr;
}
return NULL;
{
u8 first_slot = 1;
struct slot * slot_cur;
-
+
list_for_each_entry(slot_cur, &ibmphp_slot_head, ibm_slot_list) {
if (slot_cur->ctrl) {
- if ((slot_cur->ctrl->ctlr_type != 4) && (slot_cur->ctrl->ending_slot_num > first_slot) && (slot_num > slot_cur->ctrl->ending_slot_num))
+ if ((slot_cur->ctrl->ctlr_type != 4) && (slot_cur->ctrl->ending_slot_num > first_slot) && (slot_num > slot_cur->ctrl->ending_slot_num))
first_slot = slot_cur->ctrl->ending_slot_num;
}
- }
+ }
return first_slot + 1;
}
err ("Structure passed is empty\n");
return NULL;
}
-
+
slot_num = slot_cur->number;
memset (str, 0, sizeof(str));
-
+
if (rio_table_ptr) {
if (rio_table_ptr->ver_num == 3) {
opt_vg_ptr = find_chassis_num (slot_num);
/* if both NULL and we DO have correct RIO table in BIOS */
return NULL;
}
- }
+ }
if (!flag) {
if (slot_cur->ctrl->ctlr_type == 4) {
first_slot = calculate_first_slot (slot_num);
slot_ptr->ctl_index = readb (io_mem + addr_slot + 2*slot_num);
slot_ptr->slot_cap = readb (io_mem + addr_slot + 3*slot_num);
- // create bus_info lined list --- if only one slot per bus: slot_min = slot_max
+	// create bus_info linked list --- if only one slot per bus: slot_min = slot_max
bus_info_ptr2 = ibmphp_find_same_bus_num (slot_ptr->slot_bus_num);
if (!bus_info_ptr2) {
bus_info_ptr1->index = bus_index++;
bus_info_ptr1->current_speed = 0xff;
bus_info_ptr1->current_bus_mode = 0xff;
-
+
bus_info_ptr1->controller_id = hpc_ptr->ctlr_id;
-
+
list_add_tail (&bus_info_ptr1->bus_info_list, &bus_info_head);
} else {
bus_info_ptr2->slots_at_66_conv = bus_ptr->slots_at_66_conv;
bus_info_ptr2->slots_at_66_pcix = bus_ptr->slots_at_66_pcix;
bus_info_ptr2->slots_at_100_pcix = bus_ptr->slots_at_100_pcix;
- bus_info_ptr2->slots_at_133_pcix = bus_ptr->slots_at_133_pcix;
+ bus_info_ptr2->slots_at_133_pcix = bus_ptr->slots_at_133_pcix;
}
bus_ptr++;
}
hpc_ptr->u.pci_ctlr.dev_fun = readb (io_mem + addr + 1);
hpc_ptr->irq = readb (io_mem + addr + 2);
addr += 3;
- debug ("ctrl bus = %x, ctlr devfun = %x, irq = %x\n",
+ debug ("ctrl bus = %x, ctlr devfun = %x, irq = %x\n",
hpc_ptr->u.pci_ctlr.bus,
hpc_ptr->u.pci_ctlr.dev_fun, hpc_ptr->irq);
break;
tmp_slot->supported_speed = 2;
else if ((hpc_ptr->slots[index].slot_cap & EBDA_SLOT_66_MAX) == EBDA_SLOT_66_MAX)
tmp_slot->supported_speed = 1;
-
+
if ((hpc_ptr->slots[index].slot_cap & EBDA_SLOT_PCIX_CAP) == EBDA_SLOT_PCIX_CAP)
tmp_slot->supported_bus_mode = 1;
else
return rc;
}
-/*
+/*
* map info (bus, devfun, start addr, end addr..) of i/o, memory,
* pfm from the physical addr to a list of resource.
*/
addr += 10;
debug ("rsrc from mem or pfm ---\n");
- debug ("rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
+ debug ("rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
rsrc_ptr->rsrc_type, rsrc_ptr->bus_num, rsrc_ptr->dev_fun, rsrc_ptr->start_addr, rsrc_ptr->end_addr);
list_add (&rsrc_ptr->ebda_pci_rsrc_list, &ibmphp_ebda_pci_rsrc_head);
struct bus_info *ptr;
list_for_each_entry(ptr, &bus_info_head, bus_info_list) {
- if (ptr->busno == num)
+ if (ptr->busno == num)
return ptr;
}
return NULL;
struct bus_info *ptr;
list_for_each_entry(ptr, &bus_info_head, bus_info_list) {
- if (ptr->busno == num)
+ if (ptr->busno == num)
return ptr->index;
}
return -ENODEV;
.subdevice = HPC_SUBSYSTEM_ID,
.class = ((PCI_CLASS_SYSTEM_PCI_HOTPLUG << 8) | 0x00),
}, {}
-};
+};
MODULE_DEVICE_TABLE(pci, id_table);
struct controller *ctrl;
debug ("inside ibmphp_probe\n");
-
+
list_for_each_entry(ctrl, &ebda_hpc_head, ebda_hpc_list) {
if (ctrl->ctlr_type == 1) {
if ((dev->devfn == ctrl->u.pci_ctlr.dev_fun) && (dev->bus->number == ctrl->u.pci_ctlr.bus)) {
}
return -ENODEV;
}
-
{
u8 rc;
void __iomem *wpg_addr; // base addr + offset
- unsigned long wpg_data; // data to/from WPG LOHI format
+ unsigned long wpg_data; // data to/from WPG LOHI format
unsigned long ultemp;
unsigned long data; // actual data HILO format
int i;
}
//------------------------------------------------------------
-// Read from ISA type HPC
+// Read from ISA type HPC
//------------------------------------------------------------
static u8 isa_ctrl_read (struct controller *ctlr_ptr, u8 offset)
{
{
u16 start_address;
u16 port_address;
-
+
start_address = ctlr_ptr->u.isa_ctlr.io_start;
port_address = start_address + (u16) offset;
outb (data, port_address);
//--------------------------------------------------------------------
// cleanup
//--------------------------------------------------------------------
-
+
// remove physical to logical address mapping
if ((ctlr_ptr->ctlr_type == 2) || (ctlr_ptr->ctlr_type == 4))
iounmap (wpg_bbar);
-
+
free_hpc_access ();
debug_polling ("%s - Exit rc[%d]\n", __func__, rc);
down (&semOperations);
switch (poll_state) {
- case POLL_LATCH_REGISTER:
+ case POLL_LATCH_REGISTER:
oldlatchlow = curlatchlow;
ctrl_count = 0x00;
list_for_each (pslotlist, &ibmphp_slot_head) {
if (kthread_should_stop())
goto out_sleep;
-
+
down (&semOperations);
-
+
if (poll_count >= POLL_LATCH_CNT) {
poll_count = 0;
poll_state = POLL_SLOTS;
} else
poll_state = POLL_LATCH_REGISTER;
break;
- }
+ }
/* give up the hardware semaphore */
up (&semOperations);
/* sleep for a short time just for good measure */
// bit 5 - HPC_SLOT_PWRGD
if ((pslot->status & 0x20) != (poldslot->status & 0x20))
// OFF -> ON: ignore, ON -> OFF: disable slot
- if ((poldslot->status & 0x20) && (SLOT_CONNECT (poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT (poldslot->status)))
+ if ((poldslot->status & 0x20) && (SLOT_CONNECT (poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT (poldslot->status)))
disable = 1;
// bit 6 - HPC_SLOT_BUS_SPEED
pslot->status &= ~HPC_SLOT_POWER;
}
}
- // CLOSE -> OPEN
+ // CLOSE -> OPEN
else if ((SLOT_PWRGD (poldslot->status) == HPC_SLOT_PWRGD_GOOD)
&& (SLOT_CONNECT (poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT (poldslot->status))) {
disable = 1;
debug ("before locking operations \n");
ibmphp_lock_operations ();
debug ("after locking operations \n");
-
+
// wait for poll thread to exit
debug ("before sem_exit down \n");
down (&sem_exit);
/*
* IBM Hot Plug Controller Driver
- *
+ *
* Written By: Irene Zubarev, IBM Corporation
- *
+ *
* Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com)
* Copyright (C) 2001,2002 IBM Corp.
*
/*
* NOTE..... If BIOS doesn't provide default routing, we assign:
- * 9 for SCSI, 10 for LAN adapters, and 11 for everything else.
+ * 9 for SCSI, 10 for LAN adapters, and 11 for everything else.
* If adapter is bridged, then we assign 11 to it and devices behind it.
* We also assign the same irq numbers for multi function devices.
* These are PIC mode, so shouldn't matter n.e.ways (hopefully)
* Configures the device to be added (will allocate needed resources if it
* can), the device can be a bridge or a regular pci device, can also be
* multi-functional
- *
+ *
* Input: function to be added
- *
+ *
* TO DO: The error case with Multifunction device or multi function bridge,
- * if there is an error, will need to go through all previous functions and
+ * if there is an error, will need to go through all previous functions and
* unconfigure....or can add some code into unconfigure_card....
*/
int ibmphp_configure_card (struct pci_func *func, u8 slotno)
cur_func = func;
/* We only get bus and device from IRQ routing table. So at this point,
- * func->busno is correct, and func->device contains only device (at the 5
+ * func->busno is correct, and func->device contains only device (at the 5
* highest bits)
*/
cur_func->device, cur_func->busno);
cleanup_count = 6;
goto error;
- }
+ }
cur_func->next = NULL;
function = 0x8;
break;
}
/*
- * This function configures the pci BARs of a single device.
+ * This function configures the pci BARs of a single device.
* Input: pointer to the pci_func
* Output: configured PCI, 0, or error
*/
for (count = 0; address[count]; count++) { /* for 6 BARs */
- /* not sure if i need this. per scott, said maybe need smth like this
+ /* not sure if i need this. per scott, said maybe need something like this
if devices don't adhere 100% to the spec, so don't want to write
to the reserved bits
- pcibios_read_config_byte(cur_func->busno, cur_func->device,
+ pcibios_read_config_byte(cur_func->busno, cur_func->device,
PCI_BASE_ADDRESS_0 + 4 * count, &tmp);
if (tmp & 0x01) // IO
- pcibios_write_config_dword(cur_func->busno, cur_func->device,
+ pcibios_write_config_dword(cur_func->busno, cur_func->device,
PCI_BASE_ADDRESS_0 + 4 * count, 0xFFFFFFFD);
else // Memory
- pcibios_write_config_dword(cur_func->busno, cur_func->device,
+ pcibios_write_config_dword(cur_func->busno, cur_func->device,
PCI_BASE_ADDRESS_0 + 4 * count, 0xFFFFFFFF);
*/
pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF);
return -EIO;
}
pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->io[count]->start);
-
- /* _______________This is for debugging purposes only_____________________ */
+
+ /* _______________This is for debugging purposes only_____________________ */
debug ("b4 writing, the IO address is %x\n", func->io[count]->start);
pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]);
debug ("after writing.... the start address is %x\n", bar[count]);
pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->pfmem[count]->start);
- /*_______________This is for debugging purposes only______________________________*/
+ /*_______________This is for debugging purposes only______________________________*/
debug ("b4 writing, start address is %x\n", func->pfmem[count]->start);
pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]);
debug ("after writing, start address is %x\n", bar[count]);
/******************************************************************************
* This routine configures a PCI-2-PCI bridge and the functions behind it
* Parameters: pci_func
- * Returns:
+ * Returns:
******************************************************************************/
static int configure_bridge (struct pci_func **func_passed, u8 slotno)
{
debug ("AFTER FIND_SEC_NUMBER, func->busno IS %x\n", func->busno);
pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, sec_number);
-
+
/* __________________For debugging purposes only __________________________________
pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number);
debug ("sec_number after write/read is %x\n", sec_number);
/* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
- !!!!!!!!!!!!!!!NEED TO ADD!!! FAST BACK-TO-BACK ENABLE!!!!!!!!!!!!!!!!!!!!
+ !!!!!!!!!!!!!!!NEED TO ADD!!! FAST BACK-TO-BACK ENABLE!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!*/
debug ("len[count] in IO = %x\n", len[count]);
bus_io[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL);
-
+
if (!bus_io[count]) {
err ("out of system memory\n");
retval = -ENOMEM;
ibmphp_add_pfmem_from_mem (bus_pfmem[count]);
func->pfmem[count] = bus_pfmem[count];
} else {
- err ("cannot allocate requested pfmem for bus %x, device %x, len %x\n",
+ err ("cannot allocate requested pfmem for bus %x, device %x, len %x\n",
func->busno, func->device, len[count]);
kfree (mem_tmp);
kfree (bus_pfmem[count]);
debug ("amount_needed->mem = %x\n", amount_needed->mem);
debug ("amount_needed->pfmem = %x\n", amount_needed->pfmem);
- if (amount_needed->not_correct) {
+ if (amount_needed->not_correct) {
debug ("amount_needed is not correct\n");
for (count = 0; address[count]; count++) {
/* for 2 BARs */
} else {
debug ("it wants %x IO behind the bridge\n", amount_needed->io);
io = kzalloc(sizeof(*io), GFP_KERNEL);
-
+
if (!io) {
err ("out of system memory\n");
retval = -ENOMEM;
if (bus->noIORanges) {
pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, 0x00 | bus->rangeIO->start >> 8);
- pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 0x00 | bus->rangeIO->end >> 8);
+ pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 0x00 | bus->rangeIO->end >> 8);
/* _______________This is for debugging purposes only ____________________
pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, &temp);
if (bus->noMemRanges) {
pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, 0x0000 | bus->rangeMem->start >> 16);
pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, 0x0000 | bus->rangeMem->end >> 16);
-
+
/* ____________________This is for debugging purposes only ________________________
pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, &temp);
debug ("mem_base = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16);
pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_INTERRUPT_PIN, &irq);
if ((irq > 0x00) && (irq < 0x05))
pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_INTERRUPT_LINE, func->irq[irq - 1]);
- /*
+ /*
pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, ctrl);
pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, PCI_BRIDGE_CTL_PARITY);
pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, PCI_BRIDGE_CTL_SERR);
* This function adds up the amount of resources needed behind the PPB bridge
* and passes it to the configure_bridge function
* Input: bridge function
- * Ouput: amount of resources needed
+ * Output: amount of resources needed
*****************************************************************************/
static struct res_needed *scan_behind_bridge (struct pci_func * func, u8 busno)
{
return amount;
}
-/* The following 3 unconfigure_boot_ routines deal with the case when we had the card
- * upon bootup in the system, since we don't allocate func to such case, we need to read
- * the start addresses from pci config space and then find the corresponding entries in
+/* The following 3 unconfigure_boot_ routines deal with the case when we had the card
+ * upon bootup in the system, since we don't allocate func to such case, we need to read
+ * the start addresses from pci config space and then find the corresponding entries in
* our resource lists. The functions return either 0, -ENODEV, or -1 (general failure)
* Change: we also call these functions even if we configured the card ourselves (i.e., not
* the bootup case), since it should work same way
* unconfiguring the device
* TO DO: will probably need to add some code in case there was some resource,
* to remove it... this is from when we have errors in the configure_card...
- * !!!!!!!!!!!!!!!!!!!!!!!!!FOR BUSES!!!!!!!!!!!!
- * Returns: 0, -1, -ENODEV
+ * !!!!!!!!!!!!!!!!!!!!!!!!!FOR BUSES!!!!!!!!!!!!
+ * Returns: 0, -1, -ENODEV
*/
int ibmphp_unconfigure_card (struct slot **slot_cur, int the_end)
{
* Input: bus and the amount of resources needed (we know we can assign those,
* since they've been checked already
* Output: bus added to the correct spot
- * 0, -1, error
+ * 0, -1, error
*/
static int add_new_bus (struct bus_node *bus, struct resource_node *io, struct resource_node *mem, struct resource_node *pfmem, u8 parent_busno)
{
err ("strange, cannot find bus which is supposed to be at the system... something is terribly wrong...\n");
return -ENODEV;
}
-
+
list_add (&bus->bus_list, &cur_bus->bus_list);
}
if (io) {
}
if (pfmem) {
pfmem_range = kzalloc(sizeof(*pfmem_range), GFP_KERNEL);
- if (!pfmem_range) {
+ if (!pfmem_range) {
err ("out of system memory\n");
return -ENOMEM;
}
return busno;
return 0xff;
}
-
static struct resource_node * __init alloc_resources (struct ebda_pci_rsrc * curr)
{
struct resource_node *rs;
-
+
if (!curr) {
err ("NULL passed to allocate\n");
return NULL;
}
newrange->start = curr->start_addr;
newrange->end = curr->end_addr;
-
+
if (first_bus || (!num_ranges))
newrange->rangeno = 1;
else {
newbus->rangePFMem = newrange;
if (first_bus)
newbus->noPFMemRanges = 1;
- else {
+ else {
debug ("1st PFMemory Primary on Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
++newbus->noPFMemRanges;
fix_resources (newbus);
* This is the Resource Management initialization function. It will go through
* the Resource list taken from EBDA and fill in this module's data structures
*
- * THIS IS NOT TAKING INTO CONSIDERATION IO RESTRICTIONS OF PRIMARY BUSES,
+ * THIS IS NOT TAKING INTO CONSIDERATION IO RESTRICTIONS OF PRIMARY BUSES,
* SINCE WE'RE GOING TO ASSUME FOR NOW WE DON'T HAVE THOSE ON OUR BUSES FOR NOW
*
* Input: ptr to the head of the resource list from EBDA
* pci devices' resources for the appropriate resource
*
* Input: type of the resource, range to add, current bus
- * Output: 0 or -1, bus and range ptrs
+ * Output: 0 or -1, bus and range ptrs
********************************************************************************/
static int add_bus_range (int type, struct range_node *range, struct bus_node *bus_cur)
{
switch (type) {
case MEM:
- if (bus_cur->firstMem)
+ if (bus_cur->firstMem)
res = bus_cur->firstMem;
break;
case PFMEM:
}
/*******************************************************************************
- * This routine adds a resource to the list of resources to the appropriate bus
+ * This routine adds a resource to the list of resources to the appropriate bus
* based on their resource type and sorted by their starting addresses. It assigns
* the ptrs to next and nextRange if needed.
*
err ("NULL passed to add\n");
return -ENODEV;
}
-
+
bus_cur = find_bus_wprev (res->busno, NULL, 0);
-
+
if (!bus_cur) {
- /* didn't find a bus, smth's wrong!!! */
+ /* didn't find a bus, something's wrong!!! */
debug ("no bus in the system, either pci_dev's wrong or allocation failed\n");
return -ENODEV;
}
if (!range_cur) {
switch (res->type) {
case IO:
- ++bus_cur->needIOUpdate;
+ ++bus_cur->needIOUpdate;
break;
case MEM:
++bus_cur->needMemUpdate;
}
res->rangeno = -1;
}
-
+
debug ("The range is %d\n", res->rangeno);
if (!res_start) {
/* no first{IO,Mem,Pfmem} on the bus, 1st IO/Mem/Pfmem resource ever */
switch (res->type) {
case IO:
- bus_cur->firstIO = res;
+ bus_cur->firstIO = res;
break;
case MEM:
bus_cur->firstMem = res;
case PFMEM:
bus_cur->firstPFMem = res;
break;
- }
+ }
res->next = NULL;
res->nextRange = NULL;
} else {
* This routine will remove the resource from the list of resources
*
* Input: io, mem, and/or pfmem resource to be deleted
- * Ouput: modified resource list
+ * Output: modified resource list
* 0 or error code
****************************************************************************/
int ibmphp_remove_resource (struct resource_node *res)
if (!res_cur) {
if (res->type == PFMEM) {
- /*
+ /*
* case where pfmem might be in the PFMemFromMem list
* so will also need to remove the corresponding mem
* entry
}
/*****************************************************************************
- * This routine will check to make sure the io/mem/pfmem->len that the device asked for
+ * This routine will check to make sure the io/mem/pfmem->len that the device asked for
* can fit w/i our list of available IO/MEM/PFMEM resources. If cannot, returns -EINVAL,
* otherwise, returns 0
*
* Input: resource
- * Ouput: the correct start and end address are inputted into the resource node,
+ * Output: the correct start and end address are inputted into the resource node,
* 0 or -EINVAL
*****************************************************************************/
int ibmphp_check_resource (struct resource_node *res, u8 bridge)
bus_cur = find_bus_wprev (res->busno, NULL, 0);
if (!bus_cur) {
- /* didn't find a bus, smth's wrong!!! */
+ /* didn't find a bus, something's wrong!!! */
debug ("no bus in the system, either pci_dev's wrong or allocation failed\n");
return -EINVAL;
}
break;
}
}
-
+
if (flag && len_cur == res->len) {
debug ("but we are not here, right?\n");
res->start = start_cur;
if (res_prev) {
if (res_prev->rangeno != res_cur->rangeno) {
/* 1st device on this range */
- if ((res_cur->start != range->start) &&
+ if ((res_cur->start != range->start) &&
((len_tmp = res_cur->start - 1 - range->start) >= res->len)) {
if ((len_tmp < len_cur) || (len_cur == 0)) {
- if ((range->start % tmp_divide) == 0) {
+ if ((range->start % tmp_divide) == 0) {
/* just perfect, starting address is divisible by length */
flag = 1;
len_cur = len_tmp;
* This routine is called from remove_card if the card contained PPB.
* It will remove all the resources on the bus as well as the bus itself
* Input: Bus
- * Ouput: 0, -ENODEV
+ * Output: 0, -ENODEV
********************************************************************************/
int ibmphp_remove_bus (struct bus_node *bus, u8 parent_busno)
{
struct bus_node *prev_bus;
int rc;
- prev_bus = find_bus_wprev (parent_busno, NULL, 0);
+ prev_bus = find_bus_wprev (parent_busno, NULL, 0);
if (!prev_bus) {
debug ("something terribly wrong. Cannot find parent bus to the one to remove\n");
}
/******************************************************************************
- * This routine deletes the ranges from a given bus, and the entries from the
+ * This routine deletes the ranges from a given bus, and the entries from the
* parent's bus in the resources
* Input: current bus, previous bus
* Output: 0, -EINVAL
if (bus_cur->noMemRanges) {
range_cur = bus_cur->rangeMem;
for (i = 0; i < bus_cur->noMemRanges; i++) {
- if (ibmphp_find_resource (bus_prev, range_cur->start, &res, MEM) < 0)
+ if (ibmphp_find_resource (bus_prev, range_cur->start, &res, MEM) < 0)
return -EINVAL;
ibmphp_remove_resource (res);
if (bus_cur->noPFMemRanges) {
range_cur = bus_cur->rangePFMem;
for (i = 0; i < bus_cur->noPFMemRanges; i++) {
- if (ibmphp_find_resource (bus_prev, range_cur->start, &res, PFMEM) < 0)
+ if (ibmphp_find_resource (bus_prev, range_cur->start, &res, PFMEM) < 0)
return -EINVAL;
ibmphp_remove_resource (res);
}
/*
- * find the resource node in the bus
+ * find the resource node in the bus
* Input: Resource needed, start address of the resource, type of resource
*/
int ibmphp_find_resource (struct bus_node *bus, u32 start_address, struct resource_node **res, int flag)
err ("wrong type of flag\n");
return -EINVAL;
}
-
+
while (res_cur) {
if (res_cur->start == start_address) {
*res = res_cur;
} /* end for pfmem */
} /* end if */
} /* end list_for_each bus */
- return 0;
+ return 0;
}
int ibmphp_add_pfmem_from_mem (struct resource_node *pfmem)
list_for_each (tmp, &gbuses) {
tmp_prev = tmp->prev;
bus_cur = list_entry (tmp, struct bus_node, bus_list);
- if (flag)
+ if (flag)
*prev = list_entry (tmp_prev, struct bus_node, bus_list);
- if (bus_cur->busno == bus_number)
+ if (bus_cur->busno == bus_number)
return bus_cur;
}
struct range_node *range;
struct resource_node *res;
struct list_head *tmp;
-
+
debug_pci ("*****************START**********************\n");
if ((!list_empty(&gbuses)) && flags) {
return 1;
range_cur = range_cur->next;
}
-
+
return 0;
}
* Returns: none
* Note: this function doesn't take into account IO restrictions etc,
* so will only work for bridges with no video/ISA devices behind them It
- * also will not work for onboard PPB's that can have more than 1 *bus
+ * also will not work for onboard PPBs that can have more than 1 bus
* behind them All these are TO DO.
* Also need to add more error checkings... (from fnc returns etc)
*/
case PCI_HEADER_TYPE_BRIDGE:
function = 0x8;
case PCI_HEADER_TYPE_MULTIBRIDGE:
- /* We assume here that only 1 bus behind the bridge
+ /* We assume here that only 1 bus behind the bridge
TO DO: add functionality for several:
temp = secondary;
while (temp < subordinate) {
}
*/
pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_busno);
- bus_sec = find_bus_wprev (sec_busno, NULL, 0);
+ bus_sec = find_bus_wprev (sec_busno, NULL, 0);
/* this bus structure doesn't exist yet, PPB was configured during previous loading of ibmphp */
if (!bus_sec) {
bus_sec = alloc_error_bus (NULL, sec_busno, 1);
io->len = io->end - io->start + 1;
ibmphp_add_resource (io);
}
- }
+ }
pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, &start_mem_address);
pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, &end_mem_address);
}
module_put(slot->ops->owner);
-exit:
+exit:
if (retval)
return retval;
return count;
retval = ops->set_attention_status(slot->hotplug, attention);
module_put(ops->owner);
-exit:
+exit:
if (retval)
return retval;
return count;
retval = slot->ops->hardware_test(slot, test);
module_put(slot->ops->owner);
-exit:
+exit:
if (retval)
return retval;
return count;
* @hotplug: pointer to the slot whose info has changed
* @info: pointer to the info copy into the slot's info structure
*
- * @slot must have been registered with the pci
+ * @slot must have been registered with the pci
* hotplug subsystem previously with a call to pci_hp_register().
*
* Returns 0 if successful, anything else for an error.
{
return 0;
}
-#endif /* CONFIG_ACPI */
+#endif /* CONFIG_ACPI */
#endif /* _PCIEHP_H */
{
if (slot_detection_mode != PCIEHP_DETECT_ACPI)
return 0;
- if (acpi_pci_detect_ejectable(DEVICE_ACPI_HANDLE(&dev->dev)))
+ if (acpi_pci_detect_ejectable(ACPI_HANDLE(&dev->dev)))
return 0;
return -ENODEV;
}
static int __initdata acpi_slot_detected;
static struct list_head __initdata dummy_slots = LIST_HEAD_INIT(dummy_slots);
-/* Dummy driver for dumplicate name detection */
+/* Dummy driver for duplicate name detection */
static int __init dummy_probe(struct pcie_device *dev)
{
u32 slot_cap;
dup_slot_id++;
}
list_add_tail(&slot->list, &dummy_slots);
- handle = DEVICE_ACPI_HANDLE(&pdev->dev);
+ handle = ACPI_HANDLE(&pdev->dev);
if (!acpi_slot_detected && acpi_pci_detect_ejectable(handle))
acpi_slot_detected = 1;
return -ENODEV; /* dummy driver always returns error */
pciehp_firmware_init();
retval = pcie_port_service_register(&hpdriver_portdrv);
- dbg("pcie_port_service_register = %d\n", retval);
- info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
+ dbg("pcie_port_service_register = %d\n", retval);
+ info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
if (retval)
dbg("Failure to register service\n");
{
/* Clamp to sane value */
if ((sec <= 0) || (sec > 60))
- sec = 2;
+ sec = 2;
ctrl->poll_timer.function = &int_poll_timeout;
ctrl->poll_timer.data = (unsigned long)ctrl;
ctrl_dbg(ctrl, "CMD_COMPLETED not clear after 1 sec\n");
} else if (!NO_CMD_CMPL(ctrl)) {
/*
- * This controller semms to notify of command completed
+ * This controller seems to notify of command completed
* event even though it supports none of power
* controller, attention led, power led and EMI.
*/
if (pciehp_writew(ctrl, PCI_EXP_SLTSTA, 0x1f))
goto abort_ctrl;
- /* Disable sotfware notification */
+ /* Disable software notification */
pcie_disable_notification(ctrl);
ctrl_info(ctrl, "HPC vendor_id %x device_id %x ss_vid %x ss_did %x\n",
do { \
if (debug) \
printk (KERN_DEBUG "%s: " format "\n", \
- MY_NAME , ## arg); \
+ MY_NAME , ## arg); \
} while (0)
#define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
hotplug_slot->release = &release_slot;
make_slot_name(slot);
hotplug_slot->ops = &skel_hotplug_slot_ops;
-
+
/*
* Initialize the slot info structure with some known
* good values.
get_attention_status(hotplug_slot, &info->attention_status);
get_latch_status(hotplug_slot, &info->latch_status);
get_adapter_status(hotplug_slot, &info->adapter_status);
-
+
dbg("registering slot %d\n", i);
retval = pci_hp_register(slot->hotplug_slot);
if (retval) {
pci_hp_deregister(slot->hotplug_slot);
}
}
-
+
static int __init pcihp_skel_init(void)
{
int retval;
if (!pcibios_find_pci_bus(dn))
return -EINVAL;
- /* If pci slot is hotplugable, use hotplug to remove it */
+ /* If pci slot is hotpluggable, use hotplug to remove it */
slot = find_php_slot(dn);
if (slot && rpaphp_deregister_slot(slot)) {
printk(KERN_ERR "%s: unable to remove hotplug slot %s\n",
extern bool rpaphp_debug;
#define dbg(format, arg...) \
do { \
- if (rpaphp_debug) \
+ if (rpaphp_debug) \
printk(KERN_DEBUG "%s: " format, \
- MY_NAME , ## arg); \
+ MY_NAME , ## arg); \
} while (0)
#define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME , ## arg)
struct slot *alloc_slot_struct(struct device_node *dn, int drc_index, char *drc_name, int power_domain);
int rpaphp_register_slot(struct slot *slot);
int rpaphp_deregister_slot(struct slot *slot);
-
+
#endif /* _PPC64PHP_H */
for (i = 0; i < indexes[0]; i++) {
if ((unsigned int) indexes[i + 1] == *my_index) {
if (drc_name)
- *drc_name = name_tmp;
+ *drc_name = name_tmp;
if (drc_type)
*drc_type = type_tmp;
if (drc_index)
* rpaphp_add_slot -- declare a hotplug slot to the hotplug subsystem.
* @dn: device node of slot
*
- * This subroutine will register a hotplugable slot with the
+ * This subroutine will register a hotpluggable slot with the
* PCI hotplug infrastructure. This routine is typically called
* during boot time, if the hotplug slots are present at boot time,
* or is called later, by the dlpar add code, if the slot is
return -ENOMEM;
slot->type = simple_strtoul(type, NULL, 10);
-
+
dbg("Found drc-index:0x%x drc-name:%s drc-type:%s\n",
indexes[i + 1], name, type);
/*
* Unregister all of our slots with the pci_hotplug subsystem,
* and free up all memory that we had allocated.
- * memory will be freed in release_slot callback.
+ * memory will be freed in release_slot callback.
*/
list_for_each_safe(tmp, n, &rpaphp_slot_head) {
dbg("%s: slot must be power up to get sensor-state\n",
__func__);
- /* some slots have to be powered up
+ /* some slots have to be powered up
* before get-sensor will succeed.
*/
rc = rtas_set_power_level(slot->power_domain, POWER_ON,
return 0;
}
-
/*
- * RPA Virtual I/O device functions
+ * RPA Virtual I/O device functions
* Copyright (C) 2004 Linda Xie <lxie@us.ibm.com>
*
* All rights reserved.
int drc_index, char *drc_name, int power_domain)
{
struct slot *slot;
-
+
slot = kzalloc(sizeof(struct slot), GFP_KERNEL);
if (!slot)
goto error_nomem;
slot->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
if (!slot->hotplug_slot)
- goto error_slot;
+ goto error_slot;
slot->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info),
GFP_KERNEL);
if (!slot->hotplug_slot->info)
goto error_hpslot;
slot->name = kstrdup(drc_name, GFP_KERNEL);
if (!slot->name)
- goto error_info;
+ goto error_info;
slot->dn = dn;
slot->index = drc_index;
slot->power_domain = power_domain;
slot->hotplug_slot->private = slot;
slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops;
slot->hotplug_slot->release = &rpaphp_release_slot;
-
+
return (slot);
error_info:
list_for_each_entry(tmp_slot, &rpaphp_slot_head, rpaphp_slot_list) {
if (!strcmp(tmp_slot->name, slot->name))
return 1;
- }
+ }
return 0;
}
__func__, slot->name);
list_del(&slot->rpaphp_slot_list);
-
+
retval = pci_hp_deregister(php_slot);
if (retval)
err("Problem unregistering a slot %s\n", slot->name);
int retval;
int slotno;
- dbg("%s registering slot:path[%s] index[%x], name[%s] pdomain[%x] type[%d]\n",
+ dbg("%s registering slot:path[%s] index[%x], name[%s] pdomain[%x] type[%d]\n",
__func__, slot->dn->full_name, slot->index, slot->name,
slot->power_domain, slot->type);
if (is_registered(slot)) {
err("rpaphp_register_slot: slot[%s] is already registered\n", slot->name);
return -EAGAIN;
- }
+ }
if (slot->dn->child)
slotno = PCI_SLOT(PCI_DN(slot->dn->child)->devfn);
info("Slot [%s] registered\n", slot->name);
return 0;
}
-
* Work to add BIOS PROM support was completed by Mike Habeck.
*/
+#include <linux/acpi.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/sn/sn_feature_sets.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/types.h>
-#include <linux/acpi.h>
#include <asm/sn/acpi.h>
#include "../pci.h"
acpi_handle rethandle;
acpi_status ret;
- phandle = PCI_CONTROLLER(slot->pci_bus)->acpi_handle;
+ phandle = acpi_device_handle(PCI_CONTROLLER(slot->pci_bus)->companion);
if (acpi_bus_get_device(phandle, &pdevice)) {
dev_dbg(&slot->pci_bus->self->dev,
/* free the ACPI resources for the slot */
if (SN_ACPI_BASE_SUPPORT() &&
- PCI_CONTROLLER(slot->pci_bus)->acpi_handle) {
+ PCI_CONTROLLER(slot->pci_bus)->companion) {
unsigned long long adr;
struct acpi_device *device;
acpi_handle phandle;
acpi_status ret;
/* Get the rootbus node pointer */
- phandle = PCI_CONTROLLER(slot->pci_bus)->acpi_handle;
+ phandle = acpi_device_handle(PCI_CONTROLLER(slot->pci_bus)->companion);
acpi_scan_lock_acquire();
/*
/* offsets to the controller registers based on the above structure layout */
enum ctrl_offsets {
- BASE_OFFSET = offsetof(struct ctrl_reg, base_offset),
- SLOT_AVAIL1 = offsetof(struct ctrl_reg, slot_avail1),
+ BASE_OFFSET = offsetof(struct ctrl_reg, base_offset),
+ SLOT_AVAIL1 = offsetof(struct ctrl_reg, slot_avail1),
SLOT_AVAIL2 = offsetof(struct ctrl_reg, slot_avail2),
- SLOT_CONFIG = offsetof(struct ctrl_reg, slot_config),
+ SLOT_CONFIG = offsetof(struct ctrl_reg, slot_config),
SEC_BUS_CONFIG = offsetof(struct ctrl_reg, sec_bus_config),
MSI_CTRL = offsetof(struct ctrl_reg, msi_ctrl),
- PROG_INTERFACE = offsetof(struct ctrl_reg, prog_interface),
+ PROG_INTERFACE = offsetof(struct ctrl_reg, prog_interface),
CMD = offsetof(struct ctrl_reg, cmd),
CMD_STATUS = offsetof(struct ctrl_reg, cmd_status),
INTR_LOC = offsetof(struct ctrl_reg, intr_loc),
snprintf(name, SLOT_NAME_SIZE, "%d", slot->number);
hotplug_slot->ops = &shpchp_hotplug_slot_ops;
- ctrl_dbg(ctrl, "Registering domain:bus:dev=%04x:%02x:%02x "
- "hp_slot=%x sun=%x slot_device_offset=%x\n",
- pci_domain_nr(ctrl->pci_dev->subordinate),
- slot->bus, slot->device, slot->hp_slot, slot->number,
- ctrl->slot_device_offset);
+ ctrl_dbg(ctrl, "Registering domain:bus:dev=%04x:%02x:%02x "
+ "hp_slot=%x sun=%x slot_device_offset=%x\n",
+ pci_domain_nr(ctrl->pci_dev->subordinate),
+ slot->bus, slot->device, slot->hp_slot, slot->number,
+ ctrl->slot_device_offset);
retval = pci_hp_register(slot->hotplug_slot,
ctrl->pci_dev->subordinate, slot->device, name);
if (retval) {
#define SLOT_REG_RSVDZ_MASK ((1 << 15) | (7 << 21))
/*
- * SHPC Command Code definitnions
+ * SHPC Command Code definitions
*
* Slot Operation 00h - 3Fh
* Set Bus Segment Speed/Mode A 40h - 47h
char *type;
struct resource *res;
- handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ handle = ACPI_HANDLE(&dev->dev);
if (!handle)
return -EINVAL;
struct resource tmp;
enum pci_bar_type type;
int reg = pci_iov_resource_bar(dev, resno, &type);
-
+
if (!reg)
return 0;
/**
* pci_lost_interrupt - reports a lost PCI interrupt
* @pdev: device whose interrupt is lost
- *
+ *
* The primary function of this routine is to report a lost interrupt
* in a standard way which users can recognise (instead of blaming the
* driver).
* @nvec: how many MSIs have been requested ?
* @type: are we checking for MSI or MSI-X ?
*
- * Look at global flags, the device itself, and its parent busses
+ * Look at global flags, the device itself, and its parent buses
* to determine if MSI/-X are supported for the device. If MSI/-X is
* supported return 0, else return an error code.
**/
* if (_PRW at S-state x)
* choose from highest power _SxD to lowest power _SxW
* else // no _PRW at S-state x
- * choose highest power _SxD or any lower power
+ * choose highest power _SxD or any lower power
*/
static pci_power_t acpi_pci_choose_state(struct pci_dev *pdev)
static bool acpi_pci_power_manageable(struct pci_dev *dev)
{
- acpi_handle handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ acpi_handle handle = ACPI_HANDLE(&dev->dev);
return handle ? acpi_bus_power_manageable(handle) : false;
}
static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
{
- acpi_handle handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ acpi_handle handle = ACPI_HANDLE(&dev->dev);
static const u8 state_conv[] = {
[PCI_D0] = ACPI_STATE_D0,
[PCI_D1] = ACPI_STATE_D1,
static bool acpi_pci_can_wakeup(struct pci_dev *dev)
{
- acpi_handle handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ acpi_handle handle = ACPI_HANDLE(&dev->dev);
return handle ? acpi_bus_can_wakeup(handle) : false;
}
static void pci_acpi_setup(struct device *dev)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
- acpi_handle handle = ACPI_HANDLE(dev);
- struct acpi_device *adev;
+ struct acpi_device *adev = ACPI_COMPANION(dev);
- if (acpi_bus_get_device(handle, &adev) || !adev->wakeup.flags.valid)
+ if (!adev)
+ return;
+
+ pci_acpi_add_pm_notifier(adev, pci_dev);
+ if (!adev->wakeup.flags.valid)
return;
device_set_wakeup_capable(dev, true);
acpi_pci_sleep_wake(pci_dev, false);
-
- pci_acpi_add_pm_notifier(adev, pci_dev);
if (adev->wakeup.flags.run_wake)
device_set_run_wake(dev, true);
}
static void pci_acpi_cleanup(struct device *dev)
{
- acpi_handle handle = ACPI_HANDLE(dev);
- struct acpi_device *adev;
+ struct acpi_device *adev = ACPI_COMPANION(dev);
+
+ if (!adev)
+ return;
- if (!acpi_bus_get_device(handle, &adev) && adev->wakeup.flags.valid) {
+ pci_acpi_remove_pm_notifier(adev);
+ if (adev->wakeup.flags.valid) {
device_set_wakeup_capable(dev, false);
device_set_run_wake(dev, false);
- pci_acpi_remove_pm_notifier(adev);
}
}
#include <linux/cpu.h>
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
+#include <linux/kexec.h>
#include "pci.h"
struct pci_dynid {
int error, node;
struct drv_dev_and_id ddi = { drv, dev, id };
- /* Execute driver initialization on node where the device's
- bus is attached to. This way the driver likely allocates
- its local memory on the right node without any need to
- change it. */
+ /*
+ * Execute driver initialization on node where the device is
+ * attached. This way the driver likely allocates its local memory
+ * on the right node.
+ */
node = dev_to_node(&dev->dev);
- if (node >= 0) {
+
+ /*
+ * On NUMA systems, we are likely to call a PF probe function using
+ * work_on_cpu(). If that probe calls pci_enable_sriov() (which
+ * adds the VF devices via pci_bus_add_device()), we may re-enter
+ * this function to call the VF probe function. Calling
+ * work_on_cpu() again will cause a lockdep warning. Since VFs are
+ * always on the same node as the PF, we can work around this by
+ * avoiding work_on_cpu() when we're already on the correct node.
+ *
+ * Preemption is enabled, so it's theoretically unsafe to use
+ * numa_node_id(), but even if we run the probe function on the
+ * wrong node, it should be functionally correct.
+ */
+ if (node >= 0 && node != numa_node_id()) {
int cpu;
get_online_cpus();
put_online_cpus();
} else
error = local_pci_probe(&ddi);
+
return error;
}
* __pci_device_probe - check if a driver wants to claim a specific PCI device
* @drv: driver to call to check if it wants the PCI device
* @pci_dev: PCI device being probed
- *
+ *
* returns 0 on success, else error.
* side-effect: pci_dev->driver is set to drv when drv claims pci_dev.
*/
* We would love to complain here if pci_dev->is_enabled is set, that
* the driver should have called pci_disable_device(), but the
* unfortunate fact is there are too many odd BIOS and bridge setups
- * that don't like drivers doing that all of the time.
+ * that don't like drivers doing that all of the time.
* Oh well, we can dream of sane hardware when we sleep, no matter how
* horrible the crap we have to deal with is when we are awake...
*/
pci_msi_shutdown(pci_dev);
pci_msix_shutdown(pci_dev);
+#ifdef CONFIG_KEXEC
/*
- * Turn off Bus Master bit on the device to tell it to not
- * continue to do DMA. Don't touch devices in D3cold or unknown states.
+ * If this is a kexec reboot, turn off Bus Master bit on the
+ * device to tell it to not continue to do DMA. Don't touch
+ * devices in D3cold or unknown states.
+ * If it is not a kexec reboot, firmware will hit the PCI
+ * devices with a big hammer and stop their DMA anyway.
*/
- if (pci_dev->current_state <= PCI_D3hot)
+ if (kexec_in_progress && (pci_dev->current_state <= PCI_D3hot))
pci_clear_master(pci_dev);
+#endif
}
#ifdef CONFIG_PM
* @drv: the driver structure to register
* @owner: owner module of drv
* @mod_name: module name string
- *
+ *
* Adds the driver structure to the list of registered drivers.
- * Returns a negative value on error, otherwise 0.
- * If no error occurred, the driver remains registered even if
+ * Returns a negative value on error, otherwise 0.
+ * If no error occurred, the driver remains registered even if
* no device was claimed during registration.
*/
int __pci_register_driver(struct pci_driver *drv, struct module *owner,
/**
* pci_unregister_driver - unregister a pci driver
* @drv: the driver structure to unregister
- *
+ *
* Deletes the driver structure from the list of registered PCI drivers,
* gives it a chance to clean up by calling its remove() function for
* each device it was responsible for, and marks those devices as
* pci_dev_driver - get the pci_driver of a device
* @dev: the device to query
*
- * Returns the appropriate pci_driver structure or %NULL if there is no
+ * Returns the appropriate pci_driver structure or %NULL if there is no
* registered driver for the device.
*/
struct pci_driver *
* pci_bus_match - Tell if a PCI device structure has a matching PCI device id structure
* @dev: the PCI device structure to match against
* @drv: the device driver to search for matching PCI device id structures
- *
+ *
* Used by a driver to check whether a PCI device present in the
* system is in its list of supported devices. Returns the matching
* pci_device_id structure or %NULL if there is no match.
acpi_handle handle;
struct acpi_buffer output = {ACPI_ALLOCATE_BUFFER, NULL};
- handle = DEVICE_ACPI_HANDLE(dev);
+ handle = ACPI_HANDLE(dev);
if (!handle)
return FALSE;
acpi_handle handle;
int length;
- handle = DEVICE_ACPI_HANDLE(dev);
+ handle = ACPI_HANDLE(dev);
if (!handle)
return -1;
acpi_handle handle;
int length;
- handle = DEVICE_ACPI_HANDLE(dev);
+ handle = ACPI_HANDLE(dev);
if (!handle)
return -1;
*
* Copyright (C) 2008 Red Hat, Inc.
* Author:
- * Chris Wright
+ * Chris Wright
*
* This work is licensed under the terms of the GNU GPL, version 2.
*
* Usage is simple, allocate a new id to the stub driver and bind the
* device to it. For example:
- *
+ *
* # echo "8086 10f5" > /sys/bus/pci/drivers/pci-stub/new_id
* # echo -n 0000:00:19.0 > /sys/bus/pci/drivers/e1000e/unbind
* # echo -n 0000:00:19.0 > /sys/bus/pci/drivers/pci-stub/bind
*
* File attributes for PCI devices
*
- * Modeled after usb's driverfs.c
+ * Modeled after usb's driverfs.c
*
*/
if (kstrtoul(buf, 0, &val) < 0)
return -EINVAL;
- /* bad things may happen if the no_msi flag is changed
- * while some drivers are loaded */
+ /*
+ * Bad things may happen if the no_msi flag is changed
+ * while drivers are loaded.
+ */
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- /* Maybe pci devices without subordinate busses shouldn't even have this
- * attribute in the first place? */
+ /*
+ * Maybe devices without subordinate buses shouldn't have this
+ * attribute in the first place?
+ */
if (!pdev->subordinate)
return count;
size = dev->cfg_size - off;
count = size;
}
-
+
pci_config_pm_runtime_get(dev);
if ((off & 1) && size) {
off++;
size--;
}
-
+
if ((off & 3) && size > 2) {
u16 val = data[off - init_off];
val |= (u16) data[off - init_off + 1] << 8;
off += 4;
size -= 4;
}
-
+
if (size >= 2) {
u16 val = data[off - init_off];
val |= (u16) data[off - init_off + 1] << 8;
if (!pdev->rom_attr_enabled)
return -EINVAL;
-
+
rom = pci_map_rom(pdev, &size); /* size starts out as PCI window size */
if (!rom || !size)
return -EIO;
-
+
if (off >= size)
count = 0;
else {
if (off + count > size)
count = size - off;
-
+
memcpy_fromio(buf, rom + off, count);
}
pci_unmap_rom(pdev, rom);
-
+
return count;
}
}
/**
- * pci_find_capability - query for devices' capabilities
+ * pci_find_capability - query for devices' capabilities
* @dev: PCI device to query
* @cap: capability code
*
* device's PCI configuration space or 0 in case the device does not
* support it. Possible values for @cap:
*
- * %PCI_CAP_ID_PM Power Management
- * %PCI_CAP_ID_AGP Accelerated Graphics Port
- * %PCI_CAP_ID_VPD Vital Product Data
- * %PCI_CAP_ID_SLOTID Slot Identification
+ * %PCI_CAP_ID_PM Power Management
+ * %PCI_CAP_ID_AGP Accelerated Graphics Port
+ * %PCI_CAP_ID_VPD Vital Product Data
+ * %PCI_CAP_ID_SLOTID Slot Identification
* %PCI_CAP_ID_MSI Message Signalled Interrupts
- * %PCI_CAP_ID_CHSWP CompactPCI HotSwap
+ * %PCI_CAP_ID_CHSWP CompactPCI HotSwap
* %PCI_CAP_ID_PCIX PCI-X
* %PCI_CAP_ID_EXP PCI Express
*/
}
/**
- * pci_bus_find_capability - query for devices' capabilities
+ * pci_bus_find_capability - query for devices' capabilities
* @bus: the PCI bus to query
* @devfn: PCI device to query
* @cap: capability code
*
* Like pci_find_capability() but works for pci devices that do not have a
- * pci_dev structure set up yet.
+ * pci_dev structure set up yet.
*
* Returns the address of the requested capability structure within the
* device's PCI configuration space or 0 in case the device does not
return -EINVAL;
/* Validate current state:
- * Can enter D0 from any state, but if we can only go deeper
+ * Can enter D0 from any state, but if we can only go deeper
* to sleep if we're already in a low power state
*/
if (state != PCI_D0 && dev->current_state <= PCI_D3cold
}
}
-/**
+/**
* pci_restore_state - Restore the saved state of a PCI device
* @dev: - PCI device that we're dealing with
*/
* the device saved state.
* @dev: PCI device that we're dealing with
*
- * Rerturn NULL if no state or error.
+ * Return NULL if no state or error.
*/
struct pci_saved_state *pci_store_saved_state(struct pci_dev *dev)
{
* pci_dev_run_wake - Check if device can generate run-time wake-up events.
* @dev: Device to check.
*
- * Return true if the device itself is cabable of generating wake-up events
+ * Return true if the device itself is capable of generating wake-up events
* (through the platform or using the native PCIe PME) or if the device supports
* PME and one of its upstream bridges can generate wake-up events.
*/
switch (pci_pcie_type(pdev)) {
/*
* PCI/X-to-PCIe bridges are not specifically mentioned by the spec,
- * but since their primary inteface is PCI/X, we conservatively
+ * but since their primary interface is PCI/X, we conservatively
* handle them as we would a non-PCIe device.
*/
case PCI_EXP_TYPE_PCIE_BRIDGE:
/*
* PCIe 3.0, 6.12.1.2 specifies ACS capabilities that should be
* implemented by the remaining PCIe types to indicate peer-to-peer
- * capabilities, but only when they are part of a multifunciton
+ * capabilities, but only when they are part of a multifunction
* device. The footnote for section 6.12 indicates the specific
* PCIe types included here.
*/
}
/*
- * PCIe 3.0, 6.12.1.3 specifies no ACS capabilties are applicable
+ * PCIe 3.0, 6.12.1.3 specifies no ACS capabilities are applicable
* to single function devices with the exception of downstream ports.
*/
return true;
*
* If @exclusive is set, then the region is marked so that userspace
* is explicitly not allowed to map the resource via /dev/mem or
- * sysfs MMIO access.
+ * sysfs MMIO access.
*
* Returns 0 on success, or %EBUSY on error. A warning
* message is also printed on failure.
if (pci_resource_len(pdev, bar) == 0)
return 0;
-
+
if (pci_resource_flags(pdev, bar) & IORESOURCE_IO) {
if (!request_region(pci_resource_start(pdev, bar),
pci_resource_len(pdev, bar), res_name))
*
* The key difference that _exclusive makes it that userspace is
* explicitly not allowed to map the resource via /dev/mem or
- * sysfs.
+ * sysfs.
*/
int pci_request_region_exclusive(struct pci_dev *pdev, int bar, const char *res_name)
{
* successfully.
*
* pci_request_regions_exclusive() will mark the region so that
- * /dev/mem and the sysfs MMIO access will not be allowed.
+ * /dev/mem and the sysfs MMIO access will not be allowed.
*
* Returns 0 on success, or %EBUSY on error. A warning
* message is also printed on failure.
cmd |= PCI_COMMAND_INVALIDATE;
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
-
+
return 0;
}
*
* NOTE: This causes the caller to sleep for twice the device power transition
* cooldown period, which for the D0->D3hot and D3hot->D0 transitions is 10 ms
- * by devault (i.e. unless the @dev's d3_delay field has a different value).
+ * by default (i.e. unless the @dev's d3_delay field has a different value).
* Moreover, only devices in D0 can be reset by this function.
*/
static int pci_pm_reset(struct pci_dev *dev, int probe)
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
/*
* PCI spec v3.0 7.6.4.2 requires minimum Trst of 1ms. Double
- * this to 2ms to ensure that we meet the minium requirement.
+ * this to 2ms to ensure that we meet the minimum requirement.
*/
msleep(2);
return -EINVAL;
v = ffs(mps) - 8;
- if (v > dev->pcie_mpss)
+ if (v > dev->pcie_mpss)
return -EINVAL;
v <<= 5;
return 0;
}
+bool pci_device_is_present(struct pci_dev *pdev)
+{
+ u32 v;
+
+ return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0);
+}
+EXPORT_SYMBOL_GPL(pci_device_is_present);
+
#define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
static DEFINE_SPINLOCK(resource_alignment_lock);
if (info->severity == AER_CORRECTABLE) {
/*
- * Correctable error does not need software intevention.
+ * Correctable error does not need software intervention.
* No need to go through error recovery process.
*/
pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
/*
* pcie_aspm_init_link_state: Initiate PCI express link state.
- * It is called after the pcie and its children devices are scaned.
+ * It is called after the pcie and its children devices are scanned.
* @pdev: the root port or switch downstream port
*/
void pcie_aspm_init_link_state(struct pci_dev *pdev)
static struct pcie_port_service_driver pcie_pme_driver = {
.name = "pcie_pme",
- .port_type = PCI_EXP_TYPE_ROOT_PORT,
- .service = PCIE_PORT_SERVICE_PME,
+ .port_type = PCI_EXP_TYPE_ROOT_PORT,
+ .service = PCIE_PORT_SERVICE_PME,
.probe = pcie_pme_probe,
.suspend = pcie_pme_suspend,
#define PCIE_PORT_DEVICE_MAXSERVICES 4
/*
* According to the PCI Express Base Specification 2.0, the indices of
- * the MSI-X table entires used by port services must not exceed 31
+ * the MSI-X table entries used by port services must not exceed 31
*/
#define PCIE_PORT_MAX_MSIX_ENTRIES 32
static int pcie_port_bus_match(struct device *dev, struct device_driver *drv);
struct bus_type pcie_port_bus_type = {
- .name = "pci_express",
- .match = pcie_port_bus_match,
+ .name = "pci_express",
+ .match = pcie_port_bus_match,
};
EXPORT_SYMBOL_GPL(pcie_port_bus_type);
* pcie_port_msix_add_entry - add entry to given array of MSI-X entries
* @entries: Array of MSI-X entries
* @new_entry: Index of the entry to add to the array
- * @nr_entries: Number of entries aleady in the array
+ * @nr_entries: Number of entries already in the array
*
* Return value: Position of the added entry in the array
*/
static void pcie_portdrv_remove(struct pci_dev *dev)
{
pcie_port_device_remove(dev);
- pci_disable_device(dev);
}
static int error_detected_iter(struct device *device, void *data)
.probe = pcie_portdrv_probe,
.remove = pcie_portdrv_remove,
- .err_handler = &pcie_portdrv_err_handler,
+ .err_handler = &pcie_portdrv_err_handler,
- .driver.pm = PCIE_PORTDRV_PM_OPS,
+ .driver.pm = PCIE_PORTDRV_PM_OPS,
};
static int __init dmi_pcie_pme_disable_msi(const struct dmi_system_id *d)
.ident = "MSI Wind U-100",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR,
- "MICRO-STAR INTERNATIONAL CO., LTD"),
+ "MICRO-STAR INTERNATIONAL CO., LTD"),
DMI_MATCH(DMI_PRODUCT_NAME, "U-100"),
},
},
index = 1;
else
goto out;
-
+
if (agp3) {
index += 2;
if (index == 5)
}
/* Disable MasterAbortMode during probing to avoid reporting
- of bus errors (in some architectures) */
+ of bus errors (in some architectures) */
pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &bctl);
pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
* pci_setup_device - fill in class and map information of a device
* @dev: the device structure to fill
*
- * Initialize the device structure with information about the device's
+ * Initialize the device structure with information about the device's
* vendor,class,memory and IO-space addresses,IRQ lines etc.
* Called at initialisation of the PCI subsystem and by CardBus services.
* Returns 0 on success and negative if unknown type of device (not normal,
goto bad;
/* The PCI-to-PCI bridge spec requires that subtractive
decoding (i.e. transparent) bridge must have programming
- interface code of 0x01. */
+ interface code of 0x01. */
pci_read_irq(dev);
dev->transparent = ((dev->class & 0xff) == 1);
pci_read_bases(dev, 2, PCI_ROM_ADDRESS1);
* subsequent read will verify if the value is acceptable or not.
* If the MRRS value provided is not acceptable (e.g., too large),
* shrink the value until it is acceptable to the HW.
- */
+ */
while (mrrs != pcie_get_readrq(dev) && mrrs >= 128) {
rc = pcie_set_readrq(dev, mrrs);
if (!rc)
default:
ret = -EINVAL;
break;
- };
+ }
return ret;
}
*
* Init/reset quirks for USB host controllers should be in the
* USB quirks file, where their drivers can access reuse it.
- *
- * The bridge optimization stuff has been removed. If you really
- * have a silly BIOS which is unable to set your host bridge right,
- * use the PowerTweak utility (see http://powertweak.sourceforge.net).
*/
#include <linux/types.h>
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX,PCI_DEVICE_ID_MELLANOX_TAVOR,quirk_mellanox_tavor);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX,PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE,quirk_mellanox_tavor);
-/* Deal with broken BIOS'es that neglect to enable passive release,
+/* Deal with broken BIOSes that neglect to enable passive release,
which can cause problems in combination with the 82441FX/PPro MTRRs */
static void quirk_passive_release(struct pci_dev *dev)
{
/* The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a workaround
but VIA don't answer queries. If you happen to have good contacts at VIA
- ask them for me please -- Alan
-
- This appears to be BIOS not version dependent. So presumably there is a
+ ask them for me please -- Alan
+
+ This appears to be BIOS not version dependent. So presumably there is a
chipset level fix */
-
+
static void quirk_isa_dma_hangs(struct pci_dev *dev)
{
if (!isa_dma_bridge_buggy) {
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_0, quirk_isa_dma_hangs);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C596, quirk_isa_dma_hangs);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371SB_0, quirk_isa_dma_hangs);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, quirk_isa_dma_hangs);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, quirk_isa_dma_hangs);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_1, quirk_isa_dma_hangs);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs);
pci_pci_problems |= PCIPCI_TRITON;
}
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437, quirk_triton);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437VX, quirk_triton);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439, quirk_triton);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82437VX, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439, quirk_triton);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82439TX, quirk_triton);
/*
* VIA Apollo KT133 needs PCI latency patch
* the info on which Mr Breese based his work.
*
* Updated based on further information from the site and also on
- * information provided by VIA
+ * information provided by VIA
*/
static void quirk_vialatency(struct pci_dev *dev)
{
u8 busarb;
/* Ok we have a potential problem chipset here. Now see if we have
a buggy southbridge */
-
+
p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, NULL);
if (p!=NULL) {
/* 0x40 - 0x4f == 686B, 0x10 - 0x2f == 686A; thanks Dan Hollis */
if (p->revision < 0x10 || p->revision > 0x12)
goto exit;
}
-
+
/*
- * Ok we have the problem. Now set the PCI master grant to
+ * Ok we have the problem. Now set the PCI master grant to
* occur every master grant. The apparent bug is that under high
* PCI load (quite common in Linux of course) you can get data
* loss when the CPU is held off the bus for 3 bus master requests
*/
pci_read_config_byte(dev, 0x76, &busarb);
- /* Set bit 4 and bi 5 of byte 76 to 0x01
+ /* Set bit 4 and bit 5 of byte 76 to 0x01
"Master priority rotation on every PCI master grant */
busarb &= ~(1<<5);
busarb |= (1<<4);
* that DMA to AGP space. Latency must be set to 0xA and triton
* workaround applied too
* [Info kindly provided by ALi]
- */
+ */
static void quirk_alimagik(struct pci_dev *dev)
{
if ((pci_pci_problems&PCIPCI_ALIMAGIK)==0) {
pci_pci_problems |= PCIPCI_ALIMAGIK|PCIPCI_TRITON;
}
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1647, quirk_alimagik);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1651, quirk_alimagik);
/*
* Natoma has some interesting boundary conditions with Zoran stuff
pci_pci_problems |= PCIPCI_NATOMA;
}
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_natoma);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_0, quirk_natoma);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_1, quirk_natoma);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_0, quirk_natoma);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_1, quirk_natoma);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_0, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443LX_1, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_0, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_1, quirk_natoma);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82443BX_2, quirk_natoma);
/*
* This chip can cause PCI parity errors if config register 0xA0 is read
/*
* For now we only print it out. Eventually we'll want to
* reserve it (at least if it's in the 0x1000+ range), but
- * let's get enough confirmation reports first.
+ * let's get enough confirmation reports first.
*/
base &= -size;
dev_info(&dev->dev, "%s PIO at %04x-%04x\n", name, base, base + size - 1);
}
/*
* For now we only print it out. Eventually we'll want to
- * reserve it, but let's get enough confirmation reports first.
+ * reserve it, but let's get enough confirmation reports first.
*/
base &= -size;
dev_info(&dev->dev, "%s MMIO at %04x-%04x\n", name, base, base + size - 1);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_XIO2000A,
quirk_xio2000a);
-#ifdef CONFIG_X86_IO_APIC
+#ifdef CONFIG_X86_IO_APIC
#include <asm/io_apic.h>
static void quirk_via_ioapic(struct pci_dev *dev)
{
u8 tmp;
-
+
if (nr_ioapics < 1)
tmp = 0; /* nothing routed to external APIC */
else
tmp = 0x1f; /* all known bits (4-0) routed to external APIC */
-
+
dev_info(&dev->dev, "%sbling VIA external APIC routing\n",
tmp == 0 ? "Disa" : "Ena");
DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_ioapic);
/*
- * VIA 8237: Some BIOSs don't set the 'Bypass APIC De-Assert Message' Bit.
+ * VIA 8237: Some BIOSes don't set the 'Bypass APIC De-Assert Message' Bit.
* This leads to doubled level interrupt rates.
* Set this bit to get rid of cycle wastage.
* Otherwise uncritical.
static void quirk_disable_pxb(struct pci_dev *pdev)
{
u16 config;
-
+
if (pdev->revision != 0x04) /* Only C0 requires this */
return;
pci_read_config_word(pdev, 0x40, &config);
* On ASUS P4B boards, the SMBus PCI Device within the ICH2/4 southbridge
* is not activated. The myth is that Asus said that they do not want the
* users to be irritated by just another PCI Device in the Win98 device
- * manager. (see the file prog/hotplug/README.p4b in the lm_sensors
+ * manager. (see the file prog/hotplug/README.p4b in the lm_sensors
* package 2.7.0 for details)
*
- * The SMBus PCI Device can be activated by setting a bit in the ICH LPC
- * bridge. Unfortunately, this device has no subvendor/subdevice ID. So it
+ * The SMBus PCI Device can be activated by setting a bit in the ICH LPC
+ * bridge. Unfortunately, this device has no subvendor/subdevice ID. So it
* becomes necessary to do this tweak in two steps -- the chosen trigger
* is either the Host bridge (preferred) or on-board VGA controller.
*
static void asus_hides_smbus_lpc(struct pci_dev *dev)
{
u16 val;
-
+
if (likely(!asus_hides_smbus))
return;
dev_info(&dev->dev, "disabled boot interrupts on device [%04x:%04x]\n",
dev->vendor, dev->device);
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt);
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt);
/*
* disable boot interrupts on HT-1000
dev_info(&dev->dev, "disabled boot interrupts on device [%04x:%04x]\n",
dev->vendor, dev->device);
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT1000SB, quirk_disable_broadcom_boot_interrupt);
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT1000SB, quirk_disable_broadcom_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT1000SB, quirk_disable_broadcom_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT1000SB, quirk_disable_broadcom_boot_interrupt);
/*
* disable boot interrupts on AMD and ATI chipsets
dev_info(&dev->dev, "disabled boot interrupts on device [%04x:%04x]\n",
dev->vendor, dev->device);
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8111_SMBUS, quirk_disable_amd_8111_boot_interrupt);
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8111_SMBUS, quirk_disable_amd_8111_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8111_SMBUS, quirk_disable_amd_8111_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8111_SMBUS, quirk_disable_amd_8111_boot_interrupt);
#endif /* CONFIG_X86_IO_APIC */
/*
#ifdef CONFIG_PCI_MSI
/* Some chipsets do not support MSI. We cannot easily rely on setting
* PCI_BUS_FLAGS_NO_MSI in its bus flags because there are actually
- * some other busses controlled by the chipset even if Linux is not
- * aware of it. Instead of setting the flag on all busses in the
+ * some other buses controlled by the chipset even if Linux is not
+ * aware of it. Instead of setting the flag on all buses in the
* machine, simply disable MSI globally.
*/
static void quirk_disable_all_msi(struct pci_dev *dev)
nvenet_msi_disable);
/*
- * Some versions of the MCP55 bridge from nvidia have a legacy irq routing
- * config register. This register controls the routing of legacy interrupts
- * from devices that route through the MCP55. If this register is misprogramed
- * interrupts are only sent to the bsp, unlike conventional systems where the
- * irq is broadxast to all online cpus. Not having this register set
- * properly prevents kdump from booting up properly, so lets make sure that
- * we have it set correctly.
- * Note this is an undocumented register.
+ * Some versions of the MCP55 bridge from Nvidia have a legacy IRQ routing
+ * config register. This register controls the routing of legacy
+ * interrupts from devices that route through the MCP55. If this register
+ * is misprogrammed, interrupts are only sent to the BSP, unlike
+ * conventional systems where the IRQ is broadcast to all online CPUs. Not
+ * having this register set properly prevents kdump from booting up
+ * properly, so let's make sure that we have it set correctly.
+ * Note that this is an undocumented register.
*/
static void nvbridge_check_legacy_irq_routing(struct pci_dev *dev)
{
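The hunk ends before the function body. As a hedged editorial sketch only -- this register is undocumented, and the config offset and bit positions below are assumptions, not taken from this diff -- a fixup along these lines would read the routing register and clear the offending bits:

    /* Hypothetical sketch; offset 0x74 and the bit masks are assumed. */
    static void nvbridge_check_legacy_irq_routing_sketch(struct pci_dev *dev)
    {
        u32 cfg;

        if (!pci_find_capability(dev, PCI_CAP_ID_HT))
            return;

        pci_read_config_dword(dev, 0x74, &cfg);

        if (cfg & ((1 << 2) | (1 << 15))) {
            dev_info(&dev->dev, "rewriting legacy IRQ routing register\n");
            cfg &= ~((1 << 2) | (1 << 15));
            pci_write_config_dword(dev, 0x74, cfg);
        }
    }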
/* Allow manual resource allocation for PCI hotplug bridges
* via pci=hpmemsize=nnM and pci=hpiosize=nnM parameters. For
* some PCI-PCI hotplug bridges, like PLX 6254 (former HINT HB6),
- * kernel fails to allocate resources when hotplug device is
+ * kernel fails to allocate resources when hotplug device is
* inserted and PCI bus is rescanned.
*/
static void quirk_hotplug_bridge(struct pci_dev *dev)
{
int i;
- msi_remove_pci_irq_vectors(dev);
+ msi_remove_pci_irq_vectors(dev);
pci_cleanup_rom(dev);
for (i = 0; i < PCI_NUM_RESOURCES; i++) {
if (dev->is_added) {
pci_proc_detach_device(dev);
pci_remove_sysfs_dev_files(dev);
- device_del(&dev->dev);
+ device_release_driver(&dev->dev);
dev->is_added = 0;
}
static void pci_destroy_dev(struct pci_dev *dev)
{
+ device_del(&dev->dev);
+
down_write(&pci_bus_sem);
list_del(&dev->bus_list);
up_write(&pci_bus_sem);
/*
- * PCI searching functions.
+ * PCI searching functions.
*
* Copyright (C) 1993 -- 1997 Drew Eckhardt, Frederic Potter,
* David Mosberger-Tang
* pci_find_next_bus - begin or continue searching for a PCI bus
* @from: Previous PCI bus found, or %NULL for new search.
*
- * Iterates through the list of known PCI busses. A new search is
+ * Iterates through the list of known PCI buses. A new search is
* initiated by passing %NULL as the @from argument. Otherwise if
* @from is not %NULL, searches continue from next device on the
* global list.
*/
-struct pci_bus *
+struct pci_bus *
pci_find_next_bus(const struct pci_bus *from)
{
struct list_head *n;
/**
* pci_get_slot - locate PCI device for a given PCI slot
* @bus: PCI bus on which desired PCI device resides
- * @devfn: encodes number of PCI slot in which the desired PCI
- * device resides and the logical device number within that slot
+ * @devfn: encodes the PCI slot number in which the desired PCI
+ * device resides, and the logical device number within that slot
* in case of multi-function devices.
*
- * Given a PCI bus and slot/function number, the desired PCI device
+ * Given a PCI bus and slot/function number, the desired PCI device
* is located in the list of PCI devices.
* If the device is found, its reference count is increased and this
* function returns a pointer to its data structure. The caller must
(!(res->flags & IORESOURCE_ROM_ENABLE))))
add_to_list(fail_head,
dev_res->dev, res,
- 0 /* dont care */,
- 0 /* dont care */);
+ 0 /* don't care */,
+ 0 /* don't care */);
}
reset_resource(res);
}
if (!io) {
pci_write_config_word(bridge, PCI_IO_BASE, 0xf0f0);
pci_read_config_word(bridge, PCI_IO_BASE, &io);
- pci_write_config_word(bridge, PCI_IO_BASE, 0x0);
- }
- if (io)
+ pci_write_config_word(bridge, PCI_IO_BASE, 0x0);
+ }
+ if (io)
b_res[0].flags |= IORESOURCE_IO;
/* DECchip 21050 pass 2 errata: the bridge may miss an address
disconnect boundary by one PCI data phase.
resource_size_t min_align, align;
if (!b_res)
- return;
+ return;
min_align = window_alignment(bus, IORESOURCE_IO);
list_for_each_entry(dev, &bus->devices, bus_list) {
if (realloc_head && i >= PCI_IOV_RESOURCES &&
i <= PCI_IOV_RESOURCE_END) {
r->end = r->start - 1;
- add_to_list(realloc_head, dev, r, r_size, 0/* dont' care */);
+ add_to_list(realloc_head, dev, r, r_size, 0/* don't care */);
children_add_size += r_size;
continue;
}
/*
* first try will not touch pci bridge res
- * second and later try will clear small leaf bridge res
- * will stop till to the max deepth if can not find good one
+ * second and later tries will clear small leaf bridge res
+ * and will stop at the max depth if a good one cannot be found
*/
void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus)
{
return 0;
}
-static int pci_revert_fw_address(struct resource *res, struct pci_dev *dev,
+static int pci_revert_fw_address(struct resource *res, struct pci_dev *dev,
int resno, resource_size_t size)
{
struct resource *root, *conflict;
static const char *pci_bus_speed_strings[] = {
"33 MHz PCI", /* 0x00 */
"66 MHz PCI", /* 0x01 */
- "66 MHz PCI-X", /* 0x02 */
+ "66 MHz PCI-X", /* 0x02 */
"100 MHz PCI-X", /* 0x03 */
"133 MHz PCI-X", /* 0x04 */
NULL, /* 0x05 */
default:
err = -EINVAL;
goto error;
- };
+ }
err = -EIO;
if (cfg_ret != PCIBIOS_SUCCESSFUL)
config OMAP_USB2
tristate "OMAP USB2 PHY Driver"
depends on ARCH_OMAP2PLUS
+ depends on USB_PHY
select GENERIC_PHY
- select USB_PHY
select OMAP_CONTROL_USB
help
Enable this to support the transceiver that is part of SOC. This
config TWL4030_USB
tristate "TWL4030 USB Transceiver Driver"
depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS
+ depends on USB_PHY
select GENERIC_PHY
- select USB_PHY
help
Enable this to support the USB OTG transceiver on TWL4030
family chips (including the TWL5030 and TPS659x0 devices).
int id;
struct phy *phy;
- if (!dev) {
- dev_WARN(dev, "no device provided for PHY\n");
- ret = -EINVAL;
- goto err0;
- }
+ if (WARN_ON(!dev))
+ return ERR_PTR(-EINVAL);
phy = kzalloc(sizeof(*phy), GFP_KERNEL);
- if (!phy) {
- ret = -ENOMEM;
- goto err0;
- }
+ if (!phy)
+ return ERR_PTR(-ENOMEM);
id = ida_simple_get(&phy_ida, 0, 0, GFP_KERNEL);
if (id < 0) {
dev_err(dev, "unable to get id\n");
ret = id;
- goto err0;
+ goto free_phy;
}
device_initialize(&phy->dev);
ret = dev_set_name(&phy->dev, "phy-%s.%d", dev_name(dev), id);
if (ret)
- goto err1;
+ goto put_dev;
ret = device_add(&phy->dev);
if (ret)
- goto err1;
+ goto put_dev;
if (pm_runtime_enabled(dev)) {
pm_runtime_enable(&phy->dev);
return phy;
-err1:
- ida_remove(&phy_ida, phy->id);
+put_dev:
put_device(&phy->dev);
+ ida_remove(&phy_ida, phy->id);
+free_phy:
kfree(phy);
-
-err0:
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(phy_create);
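The reworked error path above is an instance of the kernel's stacked-cleanup-label idiom: each failure jumps to the label that unwinds exactly the steps completed so far, in reverse order, so nothing is freed twice and nothing leaks. A minimal standalone sketch of the pattern, with hypothetical resource names:

    #include <stdlib.h>

    struct ctx { void *a, *b; };

    static struct ctx *ctx_create(void)
    {
        struct ctx *c = malloc(sizeof(*c));
        if (!c)
            return NULL;        /* nothing acquired yet */

        c->a = malloc(16);
        if (!c->a)
            goto free_ctx;      /* undo step 1 only */

        c->b = malloc(16);
        if (!c->b)
            goto free_a;        /* undo steps 2, then 1 */

        return c;

    free_a:
        free(c->a);
    free_ctx:
        free(c);
        return NULL;
    }

    int main(void)
    {
        struct ctx *c = ctx_create();

        if (c) {
            free(c->b);
            free(c->a);
            free(c);
        }
        return 0;
    }

Note also that the diff renames the labels after the action they perform (put_dev, free_phy) rather than err0/err1, which keeps the unwind readable as steps are added.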
ret = abx500_gpio_set_bits(chip,
AB8500_GPIO_ALTFUN_REG,
af.alt_bit1,
- !!(af.alta_val && BIT(0)));
+ !!(af.alta_val & BIT(0)));
if (ret < 0)
goto out;
goto out;
ret = abx500_gpio_set_bits(chip, AB8500_GPIO_ALTFUN_REG,
- af.alt_bit1, !!(af.altb_val && BIT(0)));
+ af.alt_bit1, !!(af.altb_val & BIT(0)));
if (ret < 0)
goto out;
goto out;
ret = abx500_gpio_set_bits(chip, AB8500_GPIO_ALTFUN_REG,
- af.alt_bit2, !!(af.altc_val && BIT(1)));
+ af.alt_bit2, !!(af.altc_val & BIT(1)));
break;
default:
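The three one-character fixes above replace logical AND (&&) with bitwise AND (&). With &&, any non-zero alta_val/altb_val/altc_val makes the expression true regardless of which bits are set, so the wrong alternate-function value could be programmed; & genuinely tests the intended bit. A tiny standalone demonstration:

    #include <stdio.h>

    #define BIT(n) (1U << (n))

    int main(void)
    {
        unsigned int val = 0x2;    /* bit 1 set, bit 0 clear */

        /* Logical AND: non-zero && non-zero is 1; bit 0 is never inspected */
        printf("&&: %d\n", !!(val && BIT(0)));    /* prints 1 -- wrong */

        /* Bitwise AND: actually tests bit 0 */
        printf("&:  %d\n", !!(val & BIT(0)));     /* prints 0 -- right */

        return 0;
    }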
-#ifndef PINCTRL_PINCTRL_ABx5O0_H
+#ifndef PINCTRL_PINCTRL_ABx500_H
#define PINCTRL_PINCTRL_ABx500_H
/* Package definitions */
static const struct acpi_device_id byt_gpio_acpi_match[] = {
{ "INT33B2", 0 },
+ { "INT33FC", 0 },
{ }
};
MODULE_DEVICE_TABLE(acpi, byt_gpio_acpi_match);
data |= (3 << bit);
break;
default:
+ spin_unlock_irqrestore(&bank->slock, flags);
dev_err(info->dev, "unsupported pull setting %d\n",
pull);
return -EINVAL;
if (ctrl->type == RK3188) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
info->reg_pull = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(info->reg_base))
- return PTR_ERR(info->reg_base);
+ if (IS_ERR(info->reg_pull))
+ return PTR_ERR(info->reg_pull);
}
ret = rockchip_gpiolib_register(pdev, info);
const struct r8a7740_portcr_group *group =
&r8a7740_portcr_offsets[i];
- if (i <= group->end_pin)
+ if (pin <= group->end_pin)
return pfc->window->virt + group->offset + pin;
}
const struct sh7372_portcr_group *group =
&sh7372_portcr_offsets[i];
- if (i <= group->end_pin)
+ if (pin <= group->end_pin)
return pfc->window->virt + group->offset + pin;
}
#define PINMUX_GPIO(_pin) \
[GPIO_##_pin] = { \
.pin = (u16)-1, \
- .name = __stringify(name), \
+ .name = __stringify(GPIO_##_pin), \
.enum_id = _pin##_DATA, \
}
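The macro fix above matters because __stringify() stringifies its expanded argument: the old __stringify(name) produced the literal string "name" for every pin (there is no macro parameter called name), while __stringify(GPIO_##_pin) first pastes the pin token and then stringifies the result. A standalone illustration:

    #include <stdio.h>

    #define __stringify_1(x)    #x
    #define __stringify(x)      __stringify_1(x)

    /* Same shape as the fixed PINMUX_GPIO() initializer */
    #define PINMUX_NAME(_pin)   __stringify(GPIO_##_pin)

    int main(void)
    {
        /* GPIO_##PORT0 pastes to GPIO_PORT0, which becomes "GPIO_PORT0" */
        printf("%s\n", PINMUX_NAME(PORT0));
        return 0;
    }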
source "drivers/platform/goldfish/Kconfig"
endif
+source "drivers/platform/chrome/Kconfig"
obj-$(CONFIG_X86) += x86/
obj-$(CONFIG_OLPC) += olpc/
obj-$(CONFIG_GOLDFISH) += goldfish/
+obj-$(CONFIG_CHROME_PLATFORMS) += chrome/
--- /dev/null
+#
+# Platform support for Chrome OS hardware (Chromebooks and Chromeboxes)
+#
+
+menuconfig CHROME_PLATFORMS
+ bool "Platform support for Chrome hardware"
+ depends on X86
+ ---help---
+ Say Y here to get to see options for platform support for
+ various Chromebooks and Chromeboxes. This option alone does
+ not add any kernel code.
+
+ If you say N, all options in this submenu will be skipped and disabled.
+
+if CHROME_PLATFORMS
+
+config CHROMEOS_LAPTOP
+ tristate "Chrome OS Laptop"
+ depends on I2C
+ depends on DMI
+ ---help---
+ This driver instantiates i2c and smbus devices such as
+ light sensors and touchpads.
+
+ If you have a supported Chromebook, choose Y or M here.
+ The module will be called chromeos_laptop.
+
+endif # CHROME_PLATFORMS
--- /dev/null
+
+obj-$(CONFIG_CHROMEOS_LAPTOP) += chromeos_laptop.o
--- /dev/null
+/*
+ * chromeos_laptop.c - Driver to instantiate Chromebook i2c/smbus devices.
+ *
+ * Author : Benson Leung <bleung@chromium.org>
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/dmi.h>
+#include <linux/i2c.h>
+#include <linux/i2c/atmel_mxt_ts.h>
+#include <linux/input.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+
+#define ATMEL_TP_I2C_ADDR 0x4b
+#define ATMEL_TP_I2C_BL_ADDR 0x25
+#define ATMEL_TS_I2C_ADDR 0x4a
+#define ATMEL_TS_I2C_BL_ADDR 0x26
+#define CYAPA_TP_I2C_ADDR 0x67
+#define ISL_ALS_I2C_ADDR 0x44
+#define TAOS_ALS_I2C_ADDR 0x29
+
+static struct i2c_client *als;
+static struct i2c_client *tp;
+static struct i2c_client *ts;
+
+const char *i2c_adapter_names[] = {
+ "SMBus I801 adapter",
+ "i915 gmbus vga",
+ "i915 gmbus panel",
+};
+
+/* Keep this enum consistent with i2c_adapter_names */
+enum i2c_adapter_type {
+ I2C_ADAPTER_SMBUS = 0,
+ I2C_ADAPTER_VGADDC,
+ I2C_ADAPTER_PANEL,
+};
+
+static struct i2c_board_info __initdata cyapa_device = {
+ I2C_BOARD_INFO("cyapa", CYAPA_TP_I2C_ADDR),
+ .flags = I2C_CLIENT_WAKE,
+};
+
+static struct i2c_board_info __initdata isl_als_device = {
+ I2C_BOARD_INFO("isl29018", ISL_ALS_I2C_ADDR),
+};
+
+static struct i2c_board_info __initdata tsl2583_als_device = {
+ I2C_BOARD_INFO("tsl2583", TAOS_ALS_I2C_ADDR),
+};
+
+static struct i2c_board_info __initdata tsl2563_als_device = {
+ I2C_BOARD_INFO("tsl2563", TAOS_ALS_I2C_ADDR),
+};
+
+static struct mxt_platform_data atmel_224s_tp_platform_data = {
+ .x_line = 18,
+ .y_line = 12,
+ .x_size = 102*20,
+ .y_size = 68*20,
+ .blen = 0x80, /* Gain setting is in upper 4 bits */
+ .threshold = 0x32,
+ .voltage = 0, /* 3.3V */
+ .orient = MXT_VERTICAL_FLIP,
+ .irqflags = IRQF_TRIGGER_FALLING,
+ .is_tp = true,
+ .key_map = { KEY_RESERVED,
+ KEY_RESERVED,
+ KEY_RESERVED,
+ BTN_LEFT },
+ .config = NULL,
+ .config_length = 0,
+};
+
+static struct i2c_board_info __initdata atmel_224s_tp_device = {
+ I2C_BOARD_INFO("atmel_mxt_tp", ATMEL_TP_I2C_ADDR),
+ .platform_data = &atmel_224s_tp_platform_data,
+ .flags = I2C_CLIENT_WAKE,
+};
+
+static struct mxt_platform_data atmel_1664s_platform_data = {
+ .x_line = 32,
+ .y_line = 50,
+ .x_size = 1700,
+ .y_size = 2560,
+ .blen = 0x89, /* Gain setting is in upper 4 bits */
+ .threshold = 0x28,
+ .voltage = 0, /* 3.3V */
+ .orient = MXT_ROTATED_90_COUNTER,
+ .irqflags = IRQF_TRIGGER_FALLING,
+ .is_tp = false,
+ .config = NULL,
+ .config_length = 0,
+};
+
+static struct i2c_board_info __initdata atmel_1664s_device = {
+ I2C_BOARD_INFO("atmel_mxt_ts", ATMEL_TS_I2C_ADDR),
+ .platform_data = &atmel_1664s_platform_data,
+ .flags = I2C_CLIENT_WAKE,
+};
+
+static struct i2c_client __init *__add_probed_i2c_device(
+ const char *name,
+ int bus,
+ struct i2c_board_info *info,
+ const unsigned short *addrs)
+{
+ const struct dmi_device *dmi_dev;
+ const struct dmi_dev_onboard *dev_data;
+ struct i2c_adapter *adapter;
+ struct i2c_client *client;
+
+ if (bus < 0)
+ return NULL;
+ /*
+ * If a name is specified, look for irq platform information stashed
+ * in DMI_DEV_TYPE_DEV_ONBOARD by the Chrome OS custom system firmware.
+ */
+ if (name) {
+ dmi_dev = dmi_find_device(DMI_DEV_TYPE_DEV_ONBOARD, name, NULL);
+ if (!dmi_dev) {
+ pr_err("%s failed to dmi find device %s.\n",
+ __func__,
+ name);
+ return NULL;
+ }
+ dev_data = (struct dmi_dev_onboard *)dmi_dev->device_data;
+ if (!dev_data) {
+ pr_err("%s failed to get data from dmi for %s.\n",
+ __func__, name);
+ return NULL;
+ }
+ info->irq = dev_data->instance;
+ }
+
+ adapter = i2c_get_adapter(bus);
+ if (!adapter) {
+ pr_err("%s failed to get i2c adapter %d.\n", __func__, bus);
+ return NULL;
+ }
+
+ /* add the i2c device */
+ client = i2c_new_probed_device(adapter, info, addrs, NULL);
+ if (!client)
+ pr_err("%s failed to register device %d-%02x\n",
+ __func__, bus, info->addr);
+ else
+ pr_debug("%s added i2c device %d-%02x\n",
+ __func__, bus, info->addr);
+
+ i2c_put_adapter(adapter);
+ return client;
+}
+
+static int __init __find_i2c_adap(struct device *dev, void *data)
+{
+ const char *name = data;
+ static const char *prefix = "i2c-";
+ struct i2c_adapter *adapter;
+ if (strncmp(dev_name(dev), prefix, strlen(prefix)) != 0)
+ return 0;
+ adapter = to_i2c_adapter(dev);
+ return (strncmp(adapter->name, name, strlen(name)) == 0);
+}
+
+static int __init find_i2c_adapter_num(enum i2c_adapter_type type)
+{
+ struct device *dev = NULL;
+ struct i2c_adapter *adapter;
+ const char *name = i2c_adapter_names[type];
+ /* find the adapter by name */
+ dev = bus_find_device(&i2c_bus_type, NULL, (void *)name,
+ __find_i2c_adap);
+ if (!dev) {
+ pr_err("%s: i2c adapter %s not found on system.\n", __func__,
+ name);
+ return -ENODEV;
+ }
+ adapter = to_i2c_adapter(dev);
+ return adapter->nr;
+}
+
+/*
+ * Takes a list of addresses in addrs as such :
+ * { addr1, ... , addrn, I2C_CLIENT_END };
+ * add_probed_i2c_device will use i2c_new_probed_device
+ * and probe for devices at all of the addresses listed.
+ * Returns NULL if no devices found.
+ * See Documentation/i2c/instantiating-devices for more information.
+ */
+static __init struct i2c_client *add_probed_i2c_device(
+ const char *name,
+ enum i2c_adapter_type type,
+ struct i2c_board_info *info,
+ const unsigned short *addrs)
+{
+ return __add_probed_i2c_device(name,
+ find_i2c_adapter_num(type),
+ info,
+ addrs);
+}
+
+/*
+ * Probes for a device at a single address, the one provided by
+ * info->addr.
+ * Returns NULL if no device found.
+ */
+static __init struct i2c_client *add_i2c_device(const char *name,
+ enum i2c_adapter_type type,
+ struct i2c_board_info *info)
+{
+ const unsigned short addr_list[] = { info->addr, I2C_CLIENT_END };
+ return __add_probed_i2c_device(name,
+ find_i2c_adapter_num(type),
+ info,
+ addr_list);
+}
+
+
+static struct i2c_client __init *add_smbus_device(const char *name,
+ struct i2c_board_info *info)
+{
+ return add_i2c_device(name, I2C_ADAPTER_SMBUS, info);
+}
+
+static int __init setup_cyapa_smbus_tp(const struct dmi_system_id *id)
+{
+ /* add cyapa touchpad on smbus */
+ tp = add_smbus_device("trackpad", &cyapa_device);
+ return 0;
+}
+
+static int __init setup_atmel_224s_tp(const struct dmi_system_id *id)
+{
+ const unsigned short addr_list[] = { ATMEL_TP_I2C_BL_ADDR,
+ ATMEL_TP_I2C_ADDR,
+ I2C_CLIENT_END };
+
+ /* add atmel mxt touchpad on VGA DDC GMBus */
+ tp = add_probed_i2c_device("trackpad", I2C_ADAPTER_VGADDC,
+ &atmel_224s_tp_device, addr_list);
+ return 0;
+}
+
+static int __init setup_atmel_1664s_ts(const struct dmi_system_id *id)
+{
+ const unsigned short addr_list[] = { ATMEL_TS_I2C_BL_ADDR,
+ ATMEL_TS_I2C_ADDR,
+ I2C_CLIENT_END };
+
+ /* add atmel mxt touch device on PANEL GMBus */
+ ts = add_probed_i2c_device("touchscreen", I2C_ADAPTER_PANEL,
+ &atmel_1664s_device, addr_list);
+ return 0;
+}
+
+
+static int __init setup_isl29018_als(const struct dmi_system_id *id)
+{
+ /* add isl29018 light sensor */
+ als = add_smbus_device("lightsensor", &isl_als_device);
+ return 0;
+}
+
+static int __init setup_isl29023_als(const struct dmi_system_id *id)
+{
+ /* add isl29023 light sensor on Panel GMBus */
+ als = add_i2c_device("lightsensor", I2C_ADAPTER_PANEL,
+ &isl_als_device);
+ return 0;
+}
+
+static int __init setup_tsl2583_als(const struct dmi_system_id *id)
+{
+ /* add tsl2583 light sensor on smbus */
+ als = add_smbus_device(NULL, &tsl2583_als_device);
+ return 0;
+}
+
+static int __init setup_tsl2563_als(const struct dmi_system_id *id)
+{
+ /* add tsl2563 light sensor on smbus */
+ als = add_smbus_device(NULL, &tsl2563_als_device);
+ return 0;
+}
+
+static struct dmi_system_id __initdata chromeos_laptop_dmi_table[] = {
+ {
+ .ident = "Samsung Series 5 550 - Touchpad",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Lumpy"),
+ },
+ .callback = setup_cyapa_smbus_tp,
+ },
+ {
+ .ident = "Chromebook Pixel - Touchscreen",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Link"),
+ },
+ .callback = setup_atmel_1664s_ts,
+ },
+ {
+ .ident = "Chromebook Pixel - Touchpad",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Link"),
+ },
+ .callback = setup_atmel_224s_tp,
+ },
+ {
+ .ident = "Samsung Series 5 550 - Light Sensor",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Lumpy"),
+ },
+ .callback = setup_isl29018_als,
+ },
+ {
+ .ident = "Chromebook Pixel - Light Sensor",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Link"),
+ },
+ .callback = setup_isl29023_als,
+ },
+ {
+ .ident = "Acer C7 Chromebook - Touchpad",
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Parrot"),
+ },
+ .callback = setup_cyapa_smbus_tp,
+ },
+ {
+ .ident = "HP Pavilion 14 Chromebook - Touchpad",
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Butterfly"),
+ },
+ .callback = setup_cyapa_smbus_tp,
+ },
+ {
+ .ident = "Samsung Series 5 - Light Sensor",
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Alex"),
+ },
+ .callback = setup_tsl2583_als,
+ },
+ {
+ .ident = "Cr-48 - Light Sensor",
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Mario"),
+ },
+ .callback = setup_tsl2563_als,
+ },
+ {
+ .ident = "Acer AC700 - Light Sensor",
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "ZGB"),
+ },
+ .callback = setup_tsl2563_als,
+ },
+ { }
+};
+MODULE_DEVICE_TABLE(dmi, chromeos_laptop_dmi_table);
+
+static int __init chromeos_laptop_init(void)
+{
+ if (!dmi_check_system(chromeos_laptop_dmi_table)) {
+ pr_debug("%s unsupported system.\n", __func__);
+ return -ENODEV;
+ }
+ return 0;
+}
+
+static void __exit chromeos_laptop_exit(void)
+{
+ if (als)
+ i2c_unregister_device(als);
+ if (tp)
+ i2c_unregister_device(tp);
+ if (ts)
+ i2c_unregister_device(ts);
+}
+
+module_init(chromeos_laptop_init);
+module_exit(chromeos_laptop_exit);
+
+MODULE_DESCRIPTION("Chrome OS Laptop driver");
+MODULE_AUTHOR("Benson Leung <bleung@chromium.org>");
+MODULE_LICENSE("GPL");
If you have an ACPI-compatible ASUS laptop, say Y or M here.
-config CHROMEOS_LAPTOP
- tristate "Chrome OS Laptop"
- depends on I2C
- depends on DMI
- ---help---
- This driver instantiates i2c and smbus devices such as
- light sensors and touchpads.
-
- If you have a supported Chromebook, choose Y or M here.
- The module will be called chromeos_laptop.
-
config DELL_LAPTOP
tristate "Dell Laptop Extras"
depends on X86
obj-$(CONFIG_INTEL_OAKTRAIL) += intel_oaktrail.o
obj-$(CONFIG_SAMSUNG_Q10) += samsung-q10.o
obj-$(CONFIG_APPLE_GMUX) += apple-gmux.o
-obj-$(CONFIG_CHROMEOS_LAPTOP) += chromeos_laptop.o
obj-$(CONFIG_INTEL_RST) += intel-rst.o
obj-$(CONFIG_INTEL_SMARTCONNECT) += intel-smartconnect.o
gmux_data->power_state = VGA_SWITCHEROO_ON;
- gmux_data->dhandle = DEVICE_ACPI_HANDLE(&pnp->dev);
+ gmux_data->dhandle = ACPI_HANDLE(&pnp->dev);
if (!gmux_data->dhandle) {
pr_err("Cannot find acpi handle for pnp device %s\n",
dev_name(&pnp->dev));
int error;
input = input_allocate_device();
- if (!input) {
- pr_warn("Unable to allocate input device\n");
+ if (!input)
return -ENOMEM;
- }
+
input->name = "Asus Laptop extra buttons";
input->phys = ASUS_LAPTOP_FILE "/input0";
input->id.bustype = BUS_HOST;
+++ /dev/null
-/*
- * chromeos_laptop.c - Driver to instantiate Chromebook i2c/smbus devices.
- *
- * Author : Benson Leung <bleung@chromium.org>
- *
- * Copyright (C) 2012 Google, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- */
-
-#include <linux/dmi.h>
-#include <linux/i2c.h>
-#include <linux/i2c/atmel_mxt_ts.h>
-#include <linux/input.h>
-#include <linux/interrupt.h>
-#include <linux/module.h>
-
-#define ATMEL_TP_I2C_ADDR 0x4b
-#define ATMEL_TP_I2C_BL_ADDR 0x25
-#define ATMEL_TS_I2C_ADDR 0x4a
-#define ATMEL_TS_I2C_BL_ADDR 0x26
-#define CYAPA_TP_I2C_ADDR 0x67
-#define ISL_ALS_I2C_ADDR 0x44
-#define TAOS_ALS_I2C_ADDR 0x29
-
-static struct i2c_client *als;
-static struct i2c_client *tp;
-static struct i2c_client *ts;
-
-const char *i2c_adapter_names[] = {
- "SMBus I801 adapter",
- "i915 gmbus vga",
- "i915 gmbus panel",
-};
-
-/* Keep this enum consistent with i2c_adapter_names */
-enum i2c_adapter_type {
- I2C_ADAPTER_SMBUS = 0,
- I2C_ADAPTER_VGADDC,
- I2C_ADAPTER_PANEL,
-};
-
-static struct i2c_board_info __initdata cyapa_device = {
- I2C_BOARD_INFO("cyapa", CYAPA_TP_I2C_ADDR),
- .flags = I2C_CLIENT_WAKE,
-};
-
-static struct i2c_board_info __initdata isl_als_device = {
- I2C_BOARD_INFO("isl29018", ISL_ALS_I2C_ADDR),
-};
-
-static struct i2c_board_info __initdata tsl2583_als_device = {
- I2C_BOARD_INFO("tsl2583", TAOS_ALS_I2C_ADDR),
-};
-
-static struct i2c_board_info __initdata tsl2563_als_device = {
- I2C_BOARD_INFO("tsl2563", TAOS_ALS_I2C_ADDR),
-};
-
-static struct mxt_platform_data atmel_224s_tp_platform_data = {
- .x_line = 18,
- .y_line = 12,
- .x_size = 102*20,
- .y_size = 68*20,
- .blen = 0x80, /* Gain setting is in upper 4 bits */
- .threshold = 0x32,
- .voltage = 0, /* 3.3V */
- .orient = MXT_VERTICAL_FLIP,
- .irqflags = IRQF_TRIGGER_FALLING,
- .is_tp = true,
- .key_map = { KEY_RESERVED,
- KEY_RESERVED,
- KEY_RESERVED,
- BTN_LEFT },
- .config = NULL,
- .config_length = 0,
-};
-
-static struct i2c_board_info __initdata atmel_224s_tp_device = {
- I2C_BOARD_INFO("atmel_mxt_tp", ATMEL_TP_I2C_ADDR),
- .platform_data = &atmel_224s_tp_platform_data,
- .flags = I2C_CLIENT_WAKE,
-};
-
-static struct mxt_platform_data atmel_1664s_platform_data = {
- .x_line = 32,
- .y_line = 50,
- .x_size = 1700,
- .y_size = 2560,
- .blen = 0x89, /* Gain setting is in upper 4 bits */
- .threshold = 0x28,
- .voltage = 0, /* 3.3V */
- .orient = MXT_ROTATED_90_COUNTER,
- .irqflags = IRQF_TRIGGER_FALLING,
- .is_tp = false,
- .config = NULL,
- .config_length = 0,
-};
-
-static struct i2c_board_info __initdata atmel_1664s_device = {
- I2C_BOARD_INFO("atmel_mxt_ts", ATMEL_TS_I2C_ADDR),
- .platform_data = &atmel_1664s_platform_data,
- .flags = I2C_CLIENT_WAKE,
-};
-
-static struct i2c_client __init *__add_probed_i2c_device(
- const char *name,
- int bus,
- struct i2c_board_info *info,
- const unsigned short *addrs)
-{
- const struct dmi_device *dmi_dev;
- const struct dmi_dev_onboard *dev_data;
- struct i2c_adapter *adapter;
- struct i2c_client *client;
-
- if (bus < 0)
- return NULL;
- /*
- * If a name is specified, look for irq platform information stashed
- * in DMI_DEV_TYPE_DEV_ONBOARD by the Chrome OS custom system firmware.
- */
- if (name) {
- dmi_dev = dmi_find_device(DMI_DEV_TYPE_DEV_ONBOARD, name, NULL);
- if (!dmi_dev) {
- pr_err("%s failed to dmi find device %s.\n",
- __func__,
- name);
- return NULL;
- }
- dev_data = (struct dmi_dev_onboard *)dmi_dev->device_data;
- if (!dev_data) {
- pr_err("%s failed to get data from dmi for %s.\n",
- __func__, name);
- return NULL;
- }
- info->irq = dev_data->instance;
- }
-
- adapter = i2c_get_adapter(bus);
- if (!adapter) {
- pr_err("%s failed to get i2c adapter %d.\n", __func__, bus);
- return NULL;
- }
-
- /* add the i2c device */
- client = i2c_new_probed_device(adapter, info, addrs, NULL);
- if (!client)
- pr_err("%s failed to register device %d-%02x\n",
- __func__, bus, info->addr);
- else
- pr_debug("%s added i2c device %d-%02x\n",
- __func__, bus, info->addr);
-
- i2c_put_adapter(adapter);
- return client;
-}
-
-static int __init __find_i2c_adap(struct device *dev, void *data)
-{
- const char *name = data;
- static const char *prefix = "i2c-";
- struct i2c_adapter *adapter;
- if (strncmp(dev_name(dev), prefix, strlen(prefix)) != 0)
- return 0;
- adapter = to_i2c_adapter(dev);
- return (strncmp(adapter->name, name, strlen(name)) == 0);
-}
-
-static int __init find_i2c_adapter_num(enum i2c_adapter_type type)
-{
- struct device *dev = NULL;
- struct i2c_adapter *adapter;
- const char *name = i2c_adapter_names[type];
- /* find the adapter by name */
- dev = bus_find_device(&i2c_bus_type, NULL, (void *)name,
- __find_i2c_adap);
- if (!dev) {
- pr_err("%s: i2c adapter %s not found on system.\n", __func__,
- name);
- return -ENODEV;
- }
- adapter = to_i2c_adapter(dev);
- return adapter->nr;
-}
-
-/*
- * Takes a list of addresses in addrs as such :
- * { addr1, ... , addrn, I2C_CLIENT_END };
- * add_probed_i2c_device will use i2c_new_probed_device
- * and probe for devices at all of the addresses listed.
- * Returns NULL if no devices found.
- * See Documentation/i2c/instantiating-devices for more information.
- */
-static __init struct i2c_client *add_probed_i2c_device(
- const char *name,
- enum i2c_adapter_type type,
- struct i2c_board_info *info,
- const unsigned short *addrs)
-{
- return __add_probed_i2c_device(name,
- find_i2c_adapter_num(type),
- info,
- addrs);
-}
-
-/*
- * Probes for a device at a single address, the one provided by
- * info->addr.
- * Returns NULL if no device found.
- */
-static __init struct i2c_client *add_i2c_device(const char *name,
- enum i2c_adapter_type type,
- struct i2c_board_info *info)
-{
- const unsigned short addr_list[] = { info->addr, I2C_CLIENT_END };
- return __add_probed_i2c_device(name,
- find_i2c_adapter_num(type),
- info,
- addr_list);
-}
-
-
-static struct i2c_client __init *add_smbus_device(const char *name,
- struct i2c_board_info *info)
-{
- return add_i2c_device(name, I2C_ADAPTER_SMBUS, info);
-}
-
-static int __init setup_cyapa_smbus_tp(const struct dmi_system_id *id)
-{
- /* add cyapa touchpad on smbus */
- tp = add_smbus_device("trackpad", &cyapa_device);
- return 0;
-}
-
-static int __init setup_atmel_224s_tp(const struct dmi_system_id *id)
-{
- const unsigned short addr_list[] = { ATMEL_TP_I2C_BL_ADDR,
- ATMEL_TP_I2C_ADDR,
- I2C_CLIENT_END };
-
- /* add atmel mxt touchpad on VGA DDC GMBus */
- tp = add_probed_i2c_device("trackpad", I2C_ADAPTER_VGADDC,
- &atmel_224s_tp_device, addr_list);
- return 0;
-}
-
-static int __init setup_atmel_1664s_ts(const struct dmi_system_id *id)
-{
- const unsigned short addr_list[] = { ATMEL_TS_I2C_BL_ADDR,
- ATMEL_TS_I2C_ADDR,
- I2C_CLIENT_END };
-
- /* add atmel mxt touch device on PANEL GMBus */
- ts = add_probed_i2c_device("touchscreen", I2C_ADAPTER_PANEL,
- &atmel_1664s_device, addr_list);
- return 0;
-}
-
-
-static int __init setup_isl29018_als(const struct dmi_system_id *id)
-{
- /* add isl29018 light sensor */
- als = add_smbus_device("lightsensor", &isl_als_device);
- return 0;
-}
-
-static int __init setup_isl29023_als(const struct dmi_system_id *id)
-{
- /* add isl29023 light sensor on Panel GMBus */
- als = add_i2c_device("lightsensor", I2C_ADAPTER_PANEL,
- &isl_als_device);
- return 0;
-}
-
-static int __init setup_tsl2583_als(const struct dmi_system_id *id)
-{
- /* add tsl2583 light sensor on smbus */
- als = add_smbus_device(NULL, &tsl2583_als_device);
- return 0;
-}
-
-static int __init setup_tsl2563_als(const struct dmi_system_id *id)
-{
- /* add tsl2563 light sensor on smbus */
- als = add_smbus_device(NULL, &tsl2563_als_device);
- return 0;
-}
-
-static struct dmi_system_id __initdata chromeos_laptop_dmi_table[] = {
- {
- .ident = "Samsung Series 5 550 - Touchpad",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Lumpy"),
- },
- .callback = setup_cyapa_smbus_tp,
- },
- {
- .ident = "Chromebook Pixel - Touchscreen",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Link"),
- },
- .callback = setup_atmel_1664s_ts,
- },
- {
- .ident = "Chromebook Pixel - Touchpad",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Link"),
- },
- .callback = setup_atmel_224s_tp,
- },
- {
- .ident = "Samsung Series 5 550 - Light Sensor",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Lumpy"),
- },
- .callback = setup_isl29018_als,
- },
- {
- .ident = "Chromebook Pixel - Light Sensor",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Link"),
- },
- .callback = setup_isl29023_als,
- },
- {
- .ident = "Acer C7 Chromebook - Touchpad",
- .matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "Parrot"),
- },
- .callback = setup_cyapa_smbus_tp,
- },
- {
- .ident = "HP Pavilion 14 Chromebook - Touchpad",
- .matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "Butterfly"),
- },
- .callback = setup_cyapa_smbus_tp,
- },
- {
- .ident = "Samsung Series 5 - Light Sensor",
- .matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "Alex"),
- },
- .callback = setup_tsl2583_als,
- },
- {
- .ident = "Cr-48 - Light Sensor",
- .matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "Mario"),
- },
- .callback = setup_tsl2563_als,
- },
- {
- .ident = "Acer AC700 - Light Sensor",
- .matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "ZGB"),
- },
- .callback = setup_tsl2563_als,
- },
- { }
-};
-MODULE_DEVICE_TABLE(dmi, chromeos_laptop_dmi_table);
-
-static int __init chromeos_laptop_init(void)
-{
- if (!dmi_check_system(chromeos_laptop_dmi_table)) {
- pr_debug("%s unsupported system.\n", __func__);
- return -ENODEV;
- }
- return 0;
-}
-
-static void __exit chromeos_laptop_exit(void)
-{
- if (als)
- i2c_unregister_device(als);
- if (tp)
- i2c_unregister_device(tp);
- if (ts)
- i2c_unregister_device(ts);
-}
-
-module_init(chromeos_laptop_init);
-module_exit(chromeos_laptop_exit);
-
-MODULE_DESCRIPTION("Chrome OS Laptop driver");
-MODULE_AUTHOR("Benson Leung <bleung@chromium.org>");
-MODULE_LICENSE("GPL");
#include <linux/err.h>
#include <linux/dmi.h>
#include <linux/io.h>
+#include <linux/rfkill.h>
#include <linux/power_supply.h>
#include <linux/acpi.h>
#include <linux/mm.h>
static struct platform_device *platform_device;
static struct backlight_device *dell_backlight_device;
+static struct rfkill *wifi_rfkill;
+static struct rfkill *bluetooth_rfkill;
+static struct rfkill *wwan_rfkill;
+static bool force_rfkill;
+
+module_param(force_rfkill, bool, 0444);
+MODULE_PARM_DESC(force_rfkill, "enable rfkill on non-whitelisted models");
static const struct dmi_system_id dell_device_table[] __initconst = {
{
return buffer;
}
+/* Derived from information in DellWirelessCtl.cpp:
+ Class 17, select 11 is radio control. It returns an array of 32-bit values.
+
+ Input byte 0 = 0: Wireless information
+
+ result[0]: return code
+ result[1]:
+ Bit 0: Hardware switch supported
+ Bit 1: Wifi locator supported
+ Bit 2: Wifi is supported
+ Bit 3: Bluetooth is supported
+ Bit 4: WWAN is supported
+ Bit 5: Wireless keyboard supported
+ Bits 6-7: Reserved
+ Bit 8: Wifi is installed
+ Bit 9: Bluetooth is installed
+ Bit 10: WWAN is installed
+ Bits 11-15: Reserved
+ Bit 16: Hardware switch is on
+ Bit 17: Wifi is blocked
+ Bit 18: Bluetooth is blocked
+ Bit 19: WWAN is blocked
+ Bits 20-31: Reserved
+ result[2]: NVRAM size in bytes
+ result[3]: NVRAM format version number
+
+ Input byte 0 = 2: Wireless switch configuration
+ result[0]: return code
+ result[1]:
+ Bit 0: Wifi controlled by switch
+ Bit 1: Bluetooth controlled by switch
+ Bit 2: WWAN controlled by switch
+ Bits 3-6: Reserved
+ Bit 7: Wireless switch config locked
+ Bit 8: Wifi locator enabled
+ Bits 9-14: Reserved
+ Bit 15: Wifi locator setting locked
+ Bits 16-31: Reserved
+*/
+
+static int dell_rfkill_set(void *data, bool blocked)
+{
+ int disable = blocked ? 1 : 0;
+ unsigned long radio = (unsigned long)data;
+ int hwswitch_bit = (unsigned long)data - 1;
+
+ get_buffer();
+ dell_send_request(buffer, 17, 11);
+
+ /* If the hardware switch controls this radio, and the hardware
+ switch is disabled, always disable the radio */
+ if ((hwswitch_state & BIT(hwswitch_bit)) &&
+ !(buffer->output[1] & BIT(16)))
+ disable = 1;
+
+ buffer->input[0] = (1 | (radio<<8) | (disable << 16));
+ dell_send_request(buffer, 17, 11);
+
+ release_buffer();
+ return 0;
+}
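For readers decoding the bit layout documented above, here is a standalone toy that interprets a "Wireless information" result[1] word (pure arithmetic; no SMBIOS access, and the sample value is invented):

    #include <stdbool.h>
    #include <stdio.h>

    #define BIT(n) (1U << (n))

    static void decode_wireless_info(unsigned int info)
    {
        bool hwswitch_on  = info & BIT(16);
        bool wifi_blocked = info & BIT(17);
        bool wifi_present = (info & BIT(2)) && (info & BIT(8));

        printf("hw switch on: %d, wifi present: %d, wifi blocked: %d\n",
               hwswitch_on, wifi_present, wifi_blocked);
    }

    int main(void)
    {
        /* wifi supported + installed, hardware switch on, nothing blocked */
        decode_wireless_info(BIT(2) | BIT(8) | BIT(16));
        return 0;
    }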
+
+/* Must be called with the buffer held */
+static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
+ int status)
+{
+ if (status & BIT(0)) {
+ /* Has hw-switch, sync sw_state to BIOS */
+ int block = rfkill_blocked(rfkill);
+ buffer->input[0] = (1 | (radio << 8) | (block << 16));
+ dell_send_request(buffer, 17, 11);
+ } else {
+ /* No hw-switch, sync BIOS state to sw_state */
+ rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
+ }
+}
+
+static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
+ int status)
+{
+ if (hwswitch_state & (BIT(radio - 1)))
+ rfkill_set_hw_state(rfkill, !(status & BIT(16)));
+}
+
+static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+{
+ int status;
+
+ get_buffer();
+ dell_send_request(buffer, 17, 11);
+ status = buffer->output[1];
+
+ dell_rfkill_update_hw_state(rfkill, (unsigned long)data, status);
+
+ release_buffer();
+}
+
+static const struct rfkill_ops dell_rfkill_ops = {
+ .set_block = dell_rfkill_set,
+ .query = dell_rfkill_query,
+};
+
static struct dentry *dell_laptop_dir;
static int dell_debugfs_show(struct seq_file *s, void *data)
.release = single_release,
};
+static void dell_update_rfkill(struct work_struct *ignored)
+{
+ int status;
+
+ get_buffer();
+ dell_send_request(buffer, 17, 11);
+ status = buffer->output[1];
+
+ if (wifi_rfkill) {
+ dell_rfkill_update_hw_state(wifi_rfkill, 1, status);
+ dell_rfkill_update_sw_state(wifi_rfkill, 1, status);
+ }
+ if (bluetooth_rfkill) {
+ dell_rfkill_update_hw_state(bluetooth_rfkill, 2, status);
+ dell_rfkill_update_sw_state(bluetooth_rfkill, 2, status);
+ }
+ if (wwan_rfkill) {
+ dell_rfkill_update_hw_state(wwan_rfkill, 3, status);
+ dell_rfkill_update_sw_state(wwan_rfkill, 3, status);
+ }
+
+ release_buffer();
+}
+static DECLARE_DELAYED_WORK(dell_rfkill_work, dell_update_rfkill);
+
+
+static int __init dell_setup_rfkill(void)
+{
+ int status;
+ int ret;
+ const char *product;
+
+ /*
+	 * rfkill causes trouble on various non-Latitude models; according to
+	 * Dell, actual testing of the rfkill functionality is only done on Latitudes.
+ */
+ product = dmi_get_system_info(DMI_PRODUCT_NAME);
+ if (!force_rfkill && (!product || strncmp(product, "Latitude", 8)))
+ return 0;
+
+ get_buffer();
+ dell_send_request(buffer, 17, 11);
+ status = buffer->output[1];
+ buffer->input[0] = 0x2;
+ dell_send_request(buffer, 17, 11);
+ hwswitch_state = buffer->output[1];
+ release_buffer();
+
+ if (!(status & BIT(0))) {
+ if (force_rfkill) {
+			/* No hwswitch, clear all hw-controlled bits */
+ hwswitch_state &= ~7;
+ } else {
+ /* rfkill is only tested on laptops with a hwswitch */
+ return 0;
+ }
+ }
+
+ if ((status & (1<<2|1<<8)) == (1<<2|1<<8)) {
+ wifi_rfkill = rfkill_alloc("dell-wifi", &platform_device->dev,
+ RFKILL_TYPE_WLAN,
+ &dell_rfkill_ops, (void *) 1);
+ if (!wifi_rfkill) {
+ ret = -ENOMEM;
+ goto err_wifi;
+ }
+ ret = rfkill_register(wifi_rfkill);
+ if (ret)
+ goto err_wifi;
+ }
+
+ if ((status & (1<<3|1<<9)) == (1<<3|1<<9)) {
+ bluetooth_rfkill = rfkill_alloc("dell-bluetooth",
+ &platform_device->dev,
+ RFKILL_TYPE_BLUETOOTH,
+ &dell_rfkill_ops, (void *) 2);
+ if (!bluetooth_rfkill) {
+ ret = -ENOMEM;
+ goto err_bluetooth;
+ }
+ ret = rfkill_register(bluetooth_rfkill);
+ if (ret)
+ goto err_bluetooth;
+ }
+
+ if ((status & (1<<4|1<<10)) == (1<<4|1<<10)) {
+ wwan_rfkill = rfkill_alloc("dell-wwan",
+ &platform_device->dev,
+ RFKILL_TYPE_WWAN,
+ &dell_rfkill_ops, (void *) 3);
+ if (!wwan_rfkill) {
+ ret = -ENOMEM;
+ goto err_wwan;
+ }
+ ret = rfkill_register(wwan_rfkill);
+ if (ret)
+ goto err_wwan;
+ }
+
+ return 0;
+err_wwan:
+ rfkill_destroy(wwan_rfkill);
+ if (bluetooth_rfkill)
+ rfkill_unregister(bluetooth_rfkill);
+err_bluetooth:
+ rfkill_destroy(bluetooth_rfkill);
+ if (wifi_rfkill)
+ rfkill_unregister(wifi_rfkill);
+err_wifi:
+ rfkill_destroy(wifi_rfkill);
+
+ return ret;
+}
+
+static void dell_cleanup_rfkill(void)
+{
+ if (wifi_rfkill) {
+ rfkill_unregister(wifi_rfkill);
+ rfkill_destroy(wifi_rfkill);
+ }
+ if (bluetooth_rfkill) {
+ rfkill_unregister(bluetooth_rfkill);
+ rfkill_destroy(bluetooth_rfkill);
+ }
+ if (wwan_rfkill) {
+ rfkill_unregister(wwan_rfkill);
+ rfkill_destroy(wwan_rfkill);
+ }
+}
+
static int dell_send_intensity(struct backlight_device *bd)
{
int ret = 0;
led_classdev_unregister(&touchpad_led);
}
+static bool dell_laptop_i8042_filter(unsigned char data, unsigned char str,
+ struct serio *port)
+{
+ static bool extended;
+
+ if (str & 0x20)
+ return false;
+
+ if (unlikely(data == 0xe0)) {
+ extended = true;
+ return false;
+ } else if (unlikely(extended)) {
+ switch (data) {
+ case 0x8:
+ schedule_delayed_work(&dell_rfkill_work,
+ round_jiffies_relative(HZ / 4));
+ break;
+ }
+ extended = false;
+ }
+
+ return false;
+}
+
static int __init dell_init(void)
{
int max_intensity = 0;
}
buffer = page_address(bufferpage);
+ ret = dell_setup_rfkill();
+
+ if (ret) {
+ pr_warn("Unable to setup rfkill\n");
+ goto fail_rfkill;
+ }
+
+ ret = i8042_install_filter(dell_laptop_i8042_filter);
+ if (ret) {
+ pr_warn("Unable to install key filter\n");
+ goto fail_filter;
+ }
+
if (quirks && quirks->touchpad_led)
touchpad_led_init(&platform_device->dev);
dell_laptop_dir = debugfs_create_dir("dell_laptop", NULL);
+ if (dell_laptop_dir != NULL)
+ debugfs_create_file("rfkill", 0444, dell_laptop_dir, NULL,
+ &dell_debugfs_fops);
#ifdef CONFIG_ACPI
/* In the event of an ACPI backlight being available, don't
return 0;
fail_backlight:
+ i8042_remove_filter(dell_laptop_i8042_filter);
+ cancel_delayed_work_sync(&dell_rfkill_work);
+fail_filter:
+ dell_cleanup_rfkill();
+fail_rfkill:
free_page((unsigned long)bufferpage);
fail_buffer:
platform_device_del(platform_device);
debugfs_remove_recursive(dell_laptop_dir);
if (quirks && quirks->touchpad_led)
touchpad_led_exit();
+ i8042_remove_filter(dell_laptop_i8042_filter);
+ cancel_delayed_work_sync(&dell_rfkill_work);
backlight_device_unregister(dell_backlight_device);
+ dell_cleanup_rfkill();
if (platform_device) {
platform_device_unregister(platform_device);
platform_driver_unregister(&platform_driver);
KEY_BRIGHTNESSUP, KEY_UNKNOWN, KEY_KBDILLUMTOGGLE,
KEY_UNKNOWN, KEY_SWITCHVIDEOMODE, KEY_UNKNOWN, KEY_UNKNOWN,
KEY_SWITCHVIDEOMODE, KEY_UNKNOWN, KEY_UNKNOWN, KEY_PROG2,
- KEY_UNKNOWN, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ KEY_UNKNOWN, KEY_UNKNOWN, KEY_UNKNOWN, KEY_UNKNOWN,
+ KEY_UNKNOWN, KEY_UNKNOWN, KEY_UNKNOWN, KEY_MICMUTE,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- KEY_PROG3
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, KEY_PROG3
};
static struct input_dev *dell_wmi_input_dev;
int error;
input = input_allocate_device();
- if (!input) {
- pr_info("Unable to allocate input device\n");
+ if (!input)
return -ENOMEM;
- }
input->name = "Asus EeePC extra buttons";
input->phys = EEEPC_LAPTOP_FILE "/input0";
#define HPWMI_HARDWARE_QUERY 0x4
#define HPWMI_WIRELESS_QUERY 0x5
#define HPWMI_HOTKEY_QUERY 0xc
+#define HPWMI_FEATURE_QUERY 0xd
#define HPWMI_WIRELESS2_QUERY 0x1b
#define HPWMI_POSTCODEERROR_QUERY 0x2a
return (state & 0x4) ? 1 : 0;
}
+static int hp_wmi_bios_2009_later(void)
+{
+ int state = 0;
+ int ret = hp_wmi_perform_query(HPWMI_FEATURE_QUERY, 0, &state,
+ sizeof(state), sizeof(state));
+ if (ret)
+ return ret;
+
+ return (state & 0x10) ? 1 : 0;
+}
+
static int hp_wmi_set_block(void *data, bool blocked)
{
enum hp_wmi_radio r = (enum hp_wmi_radio) data;
gps_rfkill = NULL;
rfkill2_count = 0;
- if (hp_wmi_rfkill_setup(device))
+ if (hp_wmi_bios_2009_later() || hp_wmi_rfkill_setup(device))
hp_wmi_rfkill2_setup(device);
err = device_create_file(&device->dev, &dev_attr_display);
int error;
inputdev = input_allocate_device();
- if (!inputdev) {
- pr_info("Unable to allocate input device\n");
+ if (!inputdev)
return -ENOMEM;
- }
inputdev->name = "Ideapad extra buttons";
inputdev->phys = "ideapad/input0";
return -EINVAL;
input = input_allocate_device();
- if (!input) {
- dev_err(&pdev->dev, "Input device allocation error\n");
+ if (!input)
return -ENOMEM;
- }
input->name = pdev->name;
input->phys = "power-button/input0";
* message handler is called within firmware.
*/
-#define IPC_BASE_ADDR 0xFF11C000 /* IPC1 base register address */
-#define IPC_MAX_ADDR 0x100 /* Maximum IPC regisers */
#define IPC_WWBUF_SIZE 20 /* IPC Write buffer Size */
#define IPC_RWBUF_SIZE 20 /* IPC Read buffer Size */
-#define IPC_I2C_BASE 0xFF12B000 /* I2C control register base address */
-#define IPC_I2C_MAX_ADDR 0x10 /* Maximum I2C regisers */
+#define IPC_IOC 0x100 /* IPC command register IOC bit */
+
+enum {
+ SCU_IPC_LINCROFT,
+ SCU_IPC_PENWELL,
+ SCU_IPC_CLOVERVIEW,
+ SCU_IPC_TANGIER,
+};
+
+/* intel scu ipc driver data */
+struct intel_scu_ipc_pdata_t {
+ u32 ipc_base;
+ u32 i2c_base;
+ u32 ipc_len;
+ u32 i2c_len;
+ u8 irq_mode;
+};
+
+static struct intel_scu_ipc_pdata_t intel_scu_ipc_pdata[] = {
+ [SCU_IPC_LINCROFT] = {
+ .ipc_base = 0xff11c000,
+ .i2c_base = 0xff12b000,
+ .ipc_len = 0x100,
+ .i2c_len = 0x10,
+ .irq_mode = 0,
+ },
+ [SCU_IPC_PENWELL] = {
+ .ipc_base = 0xff11c000,
+ .i2c_base = 0xff12b000,
+ .ipc_len = 0x100,
+ .i2c_len = 0x10,
+ .irq_mode = 1,
+ },
+ [SCU_IPC_CLOVERVIEW] = {
+ .ipc_base = 0xff11c000,
+ .i2c_base = 0xff12b000,
+ .ipc_len = 0x100,
+ .i2c_len = 0x10,
+ .irq_mode = 1,
+ },
+ [SCU_IPC_TANGIER] = {
+ .ipc_base = 0xff009000,
+ .i2c_base = 0xff00d000,
+ .ipc_len = 0x100,
+ .i2c_len = 0x10,
+ .irq_mode = 0,
+ },
+};
static int ipc_probe(struct pci_dev *dev, const struct pci_device_id *id);
static void ipc_remove(struct pci_dev *pdev);
struct pci_dev *pdev;
void __iomem *ipc_base;
void __iomem *i2c_base;
+ struct completion cmd_complete;
+ u8 irq_mode;
};
static struct intel_scu_ipc_dev ipcdev; /* Only one for now */
*/
static inline void ipc_command(u32 cmd) /* Send ipc command */
{
+ if (ipcdev.irq_mode) {
+ reinit_completion(&ipcdev.cmd_complete);
+ writel(cmd | IPC_IOC, ipcdev.ipc_base);
+ }
writel(cmd, ipcdev.ipc_base);
}
return 0;
}
+/* Wait until the IPC IOC interrupt is received or a 3 second timeout expires */
+static inline int ipc_wait_for_interrupt(void)
+{
+ int status;
+
+ if (!wait_for_completion_timeout(&ipcdev.cmd_complete, 3 * HZ)) {
+ struct device *dev = &ipcdev.pdev->dev;
+ dev_err(dev, "IPC timed out\n");
+ return -ETIMEDOUT;
+ }
+
+ status = ipc_read_status();
+
+ if ((status >> 1) & 1)
+ return -EIO;
+
+ return 0;
+}
+
+int intel_scu_ipc_check_status(void)
+{
+ return ipcdev.irq_mode ? ipc_wait_for_interrupt() : busy_loop();
+}
+
/* Read/Write power control(PMIC in Langwell, MSIC in PenWell) registers */
static int pwr_reg_rdwr(u16 *addr, u8 *data, u32 count, u32 op, u32 id)
{
ipc_command(4 << 16 | id << 12 | 0 << 8 | op);
}
- err = busy_loop();
- if (id == IPC_CMD_PCNTRL_R) { /* Read rbuf */
+ err = intel_scu_ipc_check_status();
+ if (!err && id == IPC_CMD_PCNTRL_R) { /* Read rbuf */
/* Workaround: values are read as 0 without memcpy_fromio */
memcpy_fromio(cbuf, ipcdev.ipc_base + 0x90, 16);
for (nc = 0; nc < count; nc++)
return -ENODEV;
}
ipc_command(sub << 12 | cmd);
- err = busy_loop();
+ err = intel_scu_ipc_check_status();
mutex_unlock(&ipclock);
return err;
}
ipc_data_writel(*in++, 4 * i);
ipc_command((inlen << 16) | (sub << 12) | cmd);
- err = busy_loop();
+ err = intel_scu_ipc_check_status();
- for (i = 0; i < outlen; i++)
- *out++ = ipc_data_readl(4 * i);
+ if (!err) {
+ for (i = 0; i < outlen; i++)
+ *out++ = ipc_data_readl(4 * i);
+ }
mutex_unlock(&ipclock);
return err;
*/
static irqreturn_t ioc(int irq, void *dev_id)
{
+ if (ipcdev.irq_mode)
+ complete(&ipcdev.cmd_complete);
+
return IRQ_HANDLED;
}
*/
static int ipc_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
- int err;
+ int err, pid;
+ struct intel_scu_ipc_pdata_t *pdata;
resource_size_t pci_resource;
if (ipcdev.pdev) /* We support only one SCU */
return -EBUSY;
+ pid = id->driver_data;
+ pdata = &intel_scu_ipc_pdata[pid];
+
ipcdev.pdev = pci_dev_get(dev);
+ ipcdev.irq_mode = pdata->irq_mode;
err = pci_enable_device(dev);
if (err)
if (!pci_resource)
return -ENOMEM;
+ init_completion(&ipcdev.cmd_complete);
+
if (request_irq(dev->irq, ioc, 0, "intel_scu_ipc", &ipcdev))
return -EBUSY;
- ipcdev.ipc_base = ioremap_nocache(IPC_BASE_ADDR, IPC_MAX_ADDR);
+ ipcdev.ipc_base = ioremap_nocache(pdata->ipc_base, pdata->ipc_len);
if (!ipcdev.ipc_base)
return -ENOMEM;
- ipcdev.i2c_base = ioremap_nocache(IPC_I2C_BASE, IPC_I2C_MAX_ADDR);
+ ipcdev.i2c_base = ioremap_nocache(pdata->i2c_base, pdata->i2c_len);
if (!ipcdev.i2c_base) {
iounmap(ipcdev.ipc_base);
return -ENOMEM;
}
static DEFINE_PCI_DEVICE_TABLE(pci_ids) = {
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x082a)},
+ {PCI_VDEVICE(INTEL, 0x082a), SCU_IPC_LINCROFT},
+ {PCI_VDEVICE(INTEL, 0x080e), SCU_IPC_PENWELL},
+ {PCI_VDEVICE(INTEL, 0x08ea), SCU_IPC_CLOVERVIEW},
+ {PCI_VDEVICE(INTEL, 0x11a0), SCU_IPC_TANGIER},
{ 0,}
};
MODULE_DEVICE_TABLE(pci, pci_ids);
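The new ID table attaches an index into intel_scu_ipc_pdata[] to each PCI ID via driver_data, the standard way for a PCI driver to hang per-chip constants off its match table. A toy standalone version of the lookup that ipc_probe() now performs (names and values here are illustrative, not the driver's):

    #include <stdio.h>

    struct pdata {
        unsigned int ipc_base;
        int irq_mode;
    };

    static const struct pdata pdatas[] = {
        { 0xff11c000, 1 },    /* e.g. a Penwell-style part */
        { 0xff009000, 0 },    /* e.g. a Tangier-style part */
    };

    /* Mimics ipc_probe(): id->driver_data selects the table entry */
    static void probe(unsigned long driver_data)
    {
        const struct pdata *p = &pdatas[driver_data];

        printf("ipc base 0x%x, irq mode %d\n", p->ipc_base, p->irq_mode);
    }

    int main(void)
    {
        probe(0);
        probe(1);
        return 0;
    }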
int error;
input_dev = input_allocate_device();
- if (!input_dev) {
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
- "Couldn't allocate input device for hotkey"));
+ if (!input_dev)
return -ENOMEM;
- }
input_dev->name = ACPI_PCC_DRIVER_NAME;
input_dev->phys = ACPI_PCC_INPUT_PHYS;
"on the model (default: no change from current value)");
#ifdef CONFIG_PM_SLEEP
-static void sony_nc_kbd_backlight_resume(void);
static void sony_nc_thermal_resume(void);
#endif
static int sony_nc_kbd_backlight_setup(struct platform_device *pd,
unsigned int handle);
-static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd);
+static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd,
+ unsigned int handle);
static int sony_nc_battery_care_setup(struct platform_device *pd,
unsigned int handle);
KEY_FN_F10, /* 14 SONYPI_EVENT_FNKEY_F10 */
KEY_FN_F11, /* 15 SONYPI_EVENT_FNKEY_F11 */
KEY_FN_F12, /* 16 SONYPI_EVENT_FNKEY_F12 */
- KEY_FN_F1, /* 17 SONYPI_EVENT_FNKEY_1 */
- KEY_FN_F2, /* 18 SONYPI_EVENT_FNKEY_2 */
+ KEY_FN_1, /* 17 SONYPI_EVENT_FNKEY_1 */
+ KEY_FN_2, /* 18 SONYPI_EVENT_FNKEY_2 */
KEY_FN_D, /* 19 SONYPI_EVENT_FNKEY_D */
KEY_FN_E, /* 20 SONYPI_EVENT_FNKEY_E */
KEY_FN_F, /* 21 SONYPI_EVENT_FNKEY_F */
case 0x014b:
case 0x014c:
case 0x0163:
- sony_nc_kbd_backlight_cleanup(pd);
+ sony_nc_kbd_backlight_cleanup(pd, handle);
break;
default:
continue;
case 0x0135:
sony_nc_rfkill_update();
break;
- case 0x0137:
- case 0x0143:
- case 0x014b:
- case 0x014c:
- case 0x0163:
- sony_nc_kbd_backlight_resume();
- break;
default:
continue;
}
int result;
int ret = 0;
+ if (kbdbl_ctl) {
+ pr_warn("handle 0x%.4x: keyboard backlight setup already done for 0x%.4x\n",
+ handle, kbdbl_ctl->handle);
+ return -EBUSY;
+ }
+
/* verify the kbd backlight presence, these handles are not used for
* keyboard backlight only
*/
return ret;
}
-static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd)
+static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd,
+ unsigned int handle)
{
- if (kbdbl_ctl) {
+ if (kbdbl_ctl && handle == kbdbl_ctl->handle) {
device_remove_file(&pd->dev, &kbdbl_ctl->mode_attr);
device_remove_file(&pd->dev, &kbdbl_ctl->timeout_attr);
kfree(kbdbl_ctl);
}
}
-#ifdef CONFIG_PM_SLEEP
-static void sony_nc_kbd_backlight_resume(void)
-{
- int ignore = 0;
-
- if (!kbdbl_ctl)
- return;
-
- if (kbdbl_ctl->mode == 0)
- sony_call_snc_handle(kbdbl_ctl->handle, kbdbl_ctl->base,
- &ignore);
-
- if (kbdbl_ctl->timeout != 0)
- sony_call_snc_handle(kbdbl_ctl->handle,
- (kbdbl_ctl->base + 0x200) |
- (kbdbl_ctl->timeout << 0x10), &ignore);
-}
-#endif
-
struct battery_care_control {
struct device_attribute attrs[2];
unsigned int handle;
#define TPACPI_ALSA_SHRTNAME "ThinkPad Console Audio Control"
#define TPACPI_ALSA_MIXERNAME TPACPI_ALSA_SHRTNAME
-static int alsa_index = ~((1 << (SNDRV_CARDS - 3)) - 1); /* last three slots */
+#if SNDRV_CARDS <= 32
+#define DEFAULT_ALSA_IDX ~((1 << (SNDRV_CARDS - 3)) - 1)
+#else
+#define DEFAULT_ALSA_IDX ~((1 << (32 - 3)) - 1)
+#endif
+static int alsa_index = DEFAULT_ALSA_IDX; /* last three slots */
static char *alsa_id = "ThinkPadEC";
static bool alsa_enable = SNDRV_DEFAULT_ENABLE1;
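The DEFAULT_ALSA_IDX arithmetic: ~((1 << (SNDRV_CARDS - 3)) - 1) clears every bit below position SNDRV_CARDS - 3, leaving a mask that permits only the last three card slots; the new #if guard avoids shifting a 32-bit 1 by 32 or more (undefined behaviour) when SNDRV_CARDS exceeds 32. A quick check of the resulting value, assuming SNDRV_CARDS = 32:

    #include <stdio.h>

    int main(void)
    {
        int cards = 32;
        unsigned int mask = ~((1U << (cards - 3)) - 1);

        /* Only bits 29..31 survive: prints 0xe0000000 */
        printf("mask = 0x%08x\n", mask);
        return 0;
    }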
mutex_init(&tpacpi_inputdev_send_mutex);
tpacpi_inputdev = input_allocate_device();
if (!tpacpi_inputdev) {
- pr_err("unable to allocate input device\n");
thinkpad_acpi_module_exit();
return -ENOMEM;
} else {
int error;
input = input_allocate_device();
- if (!input) {
- pr_err("Unable to allocate input device\n");
+ if (!input)
return -ENOMEM;
- }
input->name = "Topstar Laptop extra buttons";
input->phys = "topstar/input0";
u32 hci_result;
dev->hotkey_dev = input_allocate_device();
- if (!dev->hotkey_dev) {
- pr_info("Unable to register input device\n");
+ if (!dev->hotkey_dev)
return -ENOMEM;
- }
dev->hotkey_dev->name = "Toshiba input device";
dev->hotkey_dev->phys = "toshiba_acpi/input0";
struct wmi_block *wblock;
wblock = dev_get_drvdata(dev);
- if (!wblock)
- return -ENOMEM;
+ if (!wblock) {
+ strcat(buf, "\n");
+ return strlen(buf);
+ }
wmi_gtoa(wblock->gblock.guid, guid_string);
return __pnp_bus_suspend(dev, PMSG_FREEZE);
}
+static int pnp_bus_poweroff(struct device *dev)
+{
+ return __pnp_bus_suspend(dev, PMSG_HIBERNATE);
+}
+
static int pnp_bus_resume(struct device *dev)
{
struct pnp_dev *pnp_dev = to_pnp_dev(dev);
}
static const struct dev_pm_ops pnp_bus_dev_pm_ops = {
+ /* Suspend callbacks */
.suspend = pnp_bus_suspend,
- .freeze = pnp_bus_freeze,
.resume = pnp_bus_resume,
+ /* Hibernate callbacks */
+ .freeze = pnp_bus_freeze,
+ .thaw = pnp_bus_resume,
+ .poweroff = pnp_bus_poweroff,
+ .restore = pnp_bus_resume,
};
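For reference, the four hibernation phases these callbacks now cover (a summary of the kernel's device PM documentation; this pairing is why pnp_bus_resume can serve three slots):

/*
 * Hibernation call sequence (simplified):
 *   .freeze   - quiesce before the memory snapshot     -> pnp_bus_freeze
 *   .thaw     - reactivate after the snapshot is taken -> pnp_bus_resume
 *   .poweroff - final quiesce before powering off      -> pnp_bus_poweroff (PMSG_HIBERNATE)
 *   .restore  - bring-up after booting back the image  -> pnp_bus_resume
 */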
struct bus_type pnp_bus_type = {
pnp_dbg(&dev->dev, "set resources\n");
- handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ handle = ACPI_HANDLE(&dev->dev);
if (!handle || acpi_bus_get_device(handle, &acpi_dev)) {
dev_dbg(&dev->dev, "ACPI device not found in %s!\n", __func__);
return -ENODEV;
dev_dbg(&dev->dev, "disable resources\n");
- handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ handle = ACPI_HANDLE(&dev->dev);
if (!handle || acpi_bus_get_device(handle, &acpi_dev)) {
dev_dbg(&dev->dev, "ACPI device not found in %s!\n", __func__);
return 0;
struct acpi_device *acpi_dev;
acpi_handle handle;
- handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ handle = ACPI_HANDLE(&dev->dev);
if (!handle || acpi_bus_get_device(handle, &acpi_dev)) {
dev_dbg(&dev->dev, "ACPI device not found in %s!\n", __func__);
return false;
acpi_handle handle;
int error = 0;
- handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ handle = ACPI_HANDLE(&dev->dev);
if (!handle || acpi_bus_get_device(handle, &acpi_dev)) {
dev_dbg(&dev->dev, "ACPI device not found in %s!\n", __func__);
return 0;
static int pnpacpi_resume(struct pnp_dev *dev)
{
struct acpi_device *acpi_dev;
- acpi_handle handle = DEVICE_ACPI_HANDLE(&dev->dev);
+ acpi_handle handle = ACPI_HANDLE(&dev->dev);
int error = 0;
if (!handle || acpi_bus_get_device(handle, &acpi_dev)) {
config BATTERY_MAX17042
tristate "Maxim MAX17042/17047/17050/8997/8966 Fuel Gauge"
depends on I2C
+ select REGMAP_I2C
help
MAX17042 is fuel-gauge systems for lithium-ion (Li+) batteries
in handheld and portable equipment. The MAX17042 is configured
dev_set_drvdata(dev, psy);
psy->dev = dev;
+ rc = dev_set_name(dev, "%s", psy->name);
+ if (rc)
+ goto dev_set_name_failed;
+
INIT_WORK(&psy->changed_work, power_supply_changed_work);
rc = power_supply_check_supplies(psy);
if (rc)
goto wakeup_init_failed;
- rc = kobject_set_name(&dev->kobj, "%s", psy->name);
- if (rc)
- goto kobject_set_name_failed;
-
rc = device_add(dev);
if (rc)
goto device_add_failed;
register_cooler_failed:
psy_unregister_thermal(psy);
register_thermal_failed:
-wakeup_init_failed:
device_del(dev);
-kobject_set_name_failed:
device_add_failed:
+wakeup_init_failed:
check_supplies_failed:
+dev_set_name_failed:
put_device(dev);
success:
return rc;
return 0;
}
+static const struct x86_cpu_id energy_unit_quirk_ids[] = {
+ { X86_VENDOR_INTEL, 6, 0x37},/* VLV */
+ {}
+};
+
static int rapl_check_unit(struct rapl_package *rp, int cpu)
{
u64 msr_val;
* time unit: 1/time_unit_divisor Seconds
*/
value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET;
- rp->energy_unit_divisor = 1 << value;
-
+	/* some CPUs have a different way of calculating the energy unit */
+ if (x86_match_cpu(energy_unit_quirk_ids))
+ rp->energy_unit_divisor = 1000000 / (1 << value);
+ else
+ rp->energy_unit_divisor = 1 << value;
value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET;
rp->power_unit_divisor = 1 << value;
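A worked example of the quirk, assuming the ENERGY_UNIT field reads back as 5 and that Valleyview counts in units of 2^value microjoules rather than the usual 2^-value joules:

/*
 * sketch: unit arithmetic for ENERGY_UNIT field value == 5
 *
 *   common case: one count = 2^-5 J  -> divisor = 1 << 5       = 32
 *   VLV quirk:   one count = 2^5 uJ  -> divisor = 1000000 >> 5 = 31250
 *
 * with energy_uj = raw * 1000000 / divisor:
 *   common: raw * 1000000 / 32    = raw * 31250 uJ per count
 *   VLV:    raw * 1000000 / 31250 = raw * 32    uJ per count
 */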
static const struct x86_cpu_id rapl_ids[] = {
{ X86_VENDOR_INTEL, 6, 0x2a},/* SNB */
{ X86_VENDOR_INTEL, 6, 0x2d},/* SNB EP */
+ { X86_VENDOR_INTEL, 6, 0x37},/* VLV */
{ X86_VENDOR_INTEL, 6, 0x3a},/* IVB */
{ X86_VENDOR_INTEL, 6, 0x45},/* HSW */
/* TODO: Add more CPU IDs after testing */
if (power_zone->ops->get_max_energy_range_uj)
power_zone->zone_dev_attrs[count++] =
&dev_attr_max_energy_range_uj.attr;
- if (power_zone->ops->get_energy_uj)
+ if (power_zone->ops->get_energy_uj) {
+ if (power_zone->ops->reset_energy_uj)
+ dev_attr_energy_uj.attr.mode = S_IWUSR | S_IRUGO;
+ else
+ dev_attr_energy_uj.attr.mode = S_IRUGO;
power_zone->zone_dev_attrs[count++] =
&dev_attr_energy_uj.attr;
+ }
if (power_zone->ops->get_power_uw)
power_zone->zone_dev_attrs[count++] =
&dev_attr_power_uw.attr;
.owner = THIS_MODULE,
};
+static const struct regulator_linear_range arizona_micsupp_ext_ranges[] = {
+ REGULATOR_LINEAR_RANGE(900000, 0, 0x14, 25000),
+ REGULATOR_LINEAR_RANGE(1500000, 0x15, 0x27, 100000),
+};
+
+static const struct regulator_desc arizona_micsupp_ext = {
+ .name = "MICVDD",
+ .supply_name = "CPVDD",
+ .type = REGULATOR_VOLTAGE,
+ .n_voltages = 40,
+ .ops = &arizona_micsupp_ops,
+
+ .vsel_reg = ARIZONA_LDO2_CONTROL_1,
+ .vsel_mask = ARIZONA_LDO2_VSEL_MASK,
+ .enable_reg = ARIZONA_MIC_CHARGE_PUMP_1,
+ .enable_mask = ARIZONA_CPMIC_ENA,
+ .bypass_reg = ARIZONA_MIC_CHARGE_PUMP_1,
+ .bypass_mask = ARIZONA_CPMIC_BYPASS,
+
+ .linear_ranges = arizona_micsupp_ext_ranges,
+ .n_linear_ranges = ARRAY_SIZE(arizona_micsupp_ext_ranges),
+
+ .enable_time = 3000,
+
+ .owner = THIS_MODULE,
+};
+
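How the two linear ranges above decode a selector, shown as a sketch (the regulator core's linear-range helpers do this generically; the function below is only illustrative):

/* sketch: selector -> microvolts for arizona_micsupp_ext_ranges */
static int micsupp_ext_sel_to_uV(unsigned int sel)
{
	if (sel <= 0x14)			/* 0x00..0x14: 900 mV .. 1.4 V */
		return 900000 + sel * 25000;
	if (sel <= 0x27)			/* 0x15..0x27: 1.5 V .. 3.3 V  */
		return 1500000 + (sel - 0x15) * 100000;
	return -EINVAL;				/* 40 selectors, matching n_voltages */
}

The top selector (0x27) decodes to 3300000, which is why the matching init data below caps max_uV at 3.3 V.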
static const struct regulator_init_data arizona_micsupp_default = {
.constraints = {
.valid_ops_mask = REGULATOR_CHANGE_STATUS |
.num_consumer_supplies = 1,
};
+static const struct regulator_init_data arizona_micsupp_ext_default = {
+ .constraints = {
+ .valid_ops_mask = REGULATOR_CHANGE_STATUS |
+ REGULATOR_CHANGE_VOLTAGE |
+ REGULATOR_CHANGE_BYPASS,
+ .min_uV = 900000,
+ .max_uV = 3300000,
+ },
+
+ .num_consumer_supplies = 1,
+};
+
static int arizona_micsupp_probe(struct platform_device *pdev)
{
struct arizona *arizona = dev_get_drvdata(pdev->dev.parent);
+ const struct regulator_desc *desc;
struct regulator_config config = { };
struct arizona_micsupp *micsupp;
int ret;
* default init_data for it. This will be overridden with
* platform data if provided.
*/
- micsupp->init_data = arizona_micsupp_default;
+ switch (arizona->type) {
+ case WM5110:
+ desc = &arizona_micsupp_ext;
+ micsupp->init_data = arizona_micsupp_ext_default;
+ break;
+ default:
+ desc = &arizona_micsupp;
+ micsupp->init_data = arizona_micsupp_default;
+ break;
+ }
+
micsupp->init_data.consumer_supplies = &micsupp->supply;
micsupp->supply.supply = "MICVDD";
micsupp->supply.dev_name = dev_name(arizona->dev);
ARIZONA_CPMIC_BYPASS, 0);
micsupp->regulator = devm_regulator_register(&pdev->dev,
- &arizona_micsupp,
+ desc,
&config);
if (IS_ERR(micsupp->regulator)) {
ret = PTR_ERR(micsupp->regulator);
default:
return -EINVAL;
}
+ ret <<= ffs(mask) - 1;
val = ret & mask;
- val <<= ffs(mask) - 1;
return as3722_update_bits(as3722, reg, mask, val);
}
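The reordering above is the whole fix: the selector has to be shifted into the field's bit position before it is clipped by the mask, because masking the raw selector first discards it whenever the field does not start at bit 0. Worked example, assuming a field mask of 0x70 (bits 4..6):

/* sketch: field insertion with ffs(), mask = 0x70, selector = 0x3 */
unsigned int mask = 0x70, sel = 0x3;
unsigned int bad  = (sel & mask) << (ffs(mask) - 1);	/* (0x3 & 0x70) == 0 -> 0    */
unsigned int good = (sel << (ffs(mask) - 1)) & mask;	/* 0x3 << 4 == 0x30 -> 0x30  */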
return "";
}
+static bool have_full_constraints(void)
+{
+ return has_full_constraints || of_have_populated_dt();
+}
+
/**
* of_get_regulator - get a regulator device node based on supply name
* @dev: Device pointer for the consumer (of regulator) device
* Assume that a regulator is physically present and enabled
* even if it isn't hooked up and just provide a dummy.
*/
- if (has_full_constraints && allow_dummy) {
+ if (have_full_constraints() && allow_dummy) {
pr_warn("%s supply %s not found, using dummy regulator\n",
devname, id);
struct regulator_ops *ops = rdev->desc->ops;
int ret;
+ if (rdev->desc->fixed_uV && rdev->desc->n_voltages == 1 && !selector)
+ return rdev->desc->fixed_uV;
+
if (!ops->list_voltage || selector >= rdev->desc->n_voltages)
return -EINVAL;
if (error)
ret = error;
} else {
- if (!has_full_constraints)
+ if (!have_full_constraints())
goto unlock;
if (!ops->disable)
goto unlock;
if (!enabled)
goto unlock;
- if (has_full_constraints) {
+ if (have_full_constraints()) {
/* We log since this may kill the system if it
* goes wrong. */
rdev_info(rdev, "disabling\n");
struct property *prop;
const char *regtype;
int proplen, gpio, i;
+ int ret;
config = devm_kzalloc(dev,
sizeof(struct gpio_regulator_config),
}
config->nr_states = i;
-	of_property_read_string(np, "regulator-type", &regtype);
+	ret = of_property_read_string(np, "regulator-type", &regtype);
+ if (ret < 0) {
+ dev_err(dev, "Missing 'regulator-type' property\n");
+ return ERR_PTR(-EINVAL);
+ }
if (!strncmp("voltage", regtype, 7))
config->type = REGULATOR_VOLTAGE;
#define PFUZE100_DEVICEID 0x0
#define PFUZE100_REVID 0x3
-#define PFUZE100_FABID 0x3
+#define PFUZE100_FABID 0x4
#define PFUZE100_SW1ABVOL 0x20
#define PFUZE100_SW1CVOL 0x2e
if (ret)
return ret;
- if (value & 0x0f) {
- dev_warn(pfuze_chip->dev, "Illegal ID: %x\n", value);
- return -ENODEV;
+ switch (value & 0x0f) {
+ /* Freescale misprogrammed 1-3% of parts prior to week 8 of 2013 as ID=8 */
+ case 0x8:
+		dev_info(pfuze_chip->dev, "Assuming misprogrammed ID=0x8\n");
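+		/* fall through */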
+ case 0x0:
+ break;
+ default:
+ dev_warn(pfuze_chip->dev, "Illegal ID: %x\n", value);
+ return -ENODEV;
}
ret = regmap_read(pfuze_chip->regmap, PFUZE100_REVID, &value);
platform_set_drvdata(pdev, s2mps11);
config.dev = &pdev->dev;
- config.regmap = iodev->regmap;
+ config.regmap = iodev->regmap_pmic;
config.driver_data = s2mps11;
for (i = 0; i < S2MPS11_REGULATOR_MAX; i++) {
if (!reg_np) {
config.dev = s5m8767->dev;
config.init_data = pdata->regulators[i].initdata;
config.driver_data = s5m8767;
- config.regmap = iodev->regmap;
+ config.regmap = iodev->regmap_pmic;
config.of_node = pdata->regulators[i].reg_node;
	rdev[i] = devm_regulator_register(&pdev->dev, &regulators[id],
config RTC_DRV_CMOS
tristate "PC-style 'CMOS'"
- depends on X86 || ALPHA || ARM || M32R || ATARI || PPC || MIPS || SPARC64
+ depends on X86 || ARM || M32R || ATARI || PPC || MIPS || SPARC64
default y if X86
help
Say "yes" here to get direct support for the real time clock
This driver can also be built as a module. If so, the module
will be called rtc-cmos.
+config RTC_DRV_ALPHA
+ bool "Alpha PC-style CMOS"
+ depends on ALPHA
+ default y
+ help
+ Direct support for the real-time clock found on every Alpha
+ system, specifically MC146818 compatibles. If in doubt, say Y.
+
config RTC_DRV_VRTC
tristate "Virtual RTC for Intel MID platforms"
depends on X86_INTEL_MID
at91_alarm_year = tm.tm_year;
+ tm.tm_mon = alrm->time.tm_mon;
+ tm.tm_mday = alrm->time.tm_mday;
tm.tm_hour = alrm->time.tm_hour;
tm.tm_min = alrm->time.tm_min;
tm.tm_sec = alrm->time.tm_sec;
return 0;
}
+static void at91_rtc_shutdown(struct platform_device *pdev)
+{
+ /* Disable all interrupts */
+ at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM |
+ AT91_RTC_SECEV | AT91_RTC_TIMEV |
+ AT91_RTC_CALEV);
+}
+
#ifdef CONFIG_PM_SLEEP
/* AT91RM9200 RTC Power management control */
static struct platform_driver at91_rtc_driver = {
.remove = __exit_p(at91_rtc_remove),
+ .shutdown = at91_rtc_shutdown,
.driver = {
.name = "at91_rtc",
.owner = THIS_MODULE,
#include <linux/mfd/samsung/irq.h>
#include <linux/mfd/samsung/rtc.h>
+/*
+ * Maximum number of retries for checking changes in the UDR field
+ * of the SEC_RTC_UDR_CON register (to avoid a possible endless loop).
+ *
+ * After writing to RTC registers (setting the time or alarm), read the UDR
+ * field in the SEC_RTC_UDR_CON register. UDR is auto-cleared once the data
+ * has been transferred.
+ */
+#define UDR_READ_RETRY_CNT 5
+
struct s5m_rtc_info {
struct device *dev;
struct sec_pmic_dev *s5m87xx;
- struct regmap *rtc;
+ struct regmap *regmap;
struct rtc_device *rtc_dev;
int irq;
int device_type;
}
}
+/*
+ * Read the RTC_UDR_CON register and wait until the UDR field is cleared,
+ * which indicates that the time/alarm update has finished.
+ */
+static inline int s5m8767_wait_for_udr_update(struct s5m_rtc_info *info)
+{
+ int ret, retry = UDR_READ_RETRY_CNT;
+ unsigned int data;
+
+ do {
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
+ } while (--retry && (data & RTC_UDR_MASK) && !ret);
+
+ if (!retry)
+ dev_err(info->dev, "waiting for UDR update, reached max number of retries\n");
+
+ return ret;
+}
+
static inline int s5m8767_rtc_set_time_reg(struct s5m_rtc_info *info)
{
int ret;
unsigned int data;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
if (ret < 0) {
dev_err(info->dev, "failed to read update reg(%d)\n", ret);
return ret;
data |= RTC_TIME_EN_MASK;
data |= RTC_UDR_MASK;
- ret = regmap_write(info->rtc, SEC_RTC_UDR_CON, data);
+ ret = regmap_write(info->regmap, SEC_RTC_UDR_CON, data);
if (ret < 0) {
dev_err(info->dev, "failed to write update reg(%d)\n", ret);
return ret;
}
- do {
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
- } while ((data & RTC_UDR_MASK) && !ret);
+ ret = s5m8767_wait_for_udr_update(info);
return ret;
}
int ret;
unsigned int data;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
if (ret < 0) {
dev_err(info->dev, "%s: fail to read update reg(%d)\n",
__func__, ret);
data &= ~RTC_TIME_EN_MASK;
data |= RTC_UDR_MASK;
- ret = regmap_write(info->rtc, SEC_RTC_UDR_CON, data);
+ ret = regmap_write(info->regmap, SEC_RTC_UDR_CON, data);
if (ret < 0) {
dev_err(info->dev, "%s: fail to write update reg(%d)\n",
__func__, ret);
return ret;
}
- do {
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
- } while ((data & RTC_UDR_MASK) && !ret);
+ ret = s5m8767_wait_for_udr_update(info);
return ret;
}
u8 data[8];
int ret;
- ret = regmap_bulk_read(info->rtc, SEC_RTC_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_RTC_SEC, data, 8);
if (ret < 0)
return ret;
1900 + tm->tm_year, 1 + tm->tm_mon, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec, tm->tm_wday);
- ret = regmap_raw_write(info->rtc, SEC_RTC_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_RTC_SEC, data, 8);
if (ret < 0)
return ret;
unsigned int val;
int ret, i;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
s5m8763_data_to_tm(data, &alrm->time);
- ret = regmap_read(info->rtc, SEC_ALARM0_CONF, &val);
+ ret = regmap_read(info->regmap, SEC_ALARM0_CONF, &val);
if (ret < 0)
return ret;
alrm->enabled = !!val;
- ret = regmap_read(info->rtc, SEC_RTC_STATUS, &val);
+ ret = regmap_read(info->regmap, SEC_RTC_STATUS, &val);
if (ret < 0)
return ret;
}
alrm->pending = 0;
- ret = regmap_read(info->rtc, SEC_RTC_STATUS, &val);
+ ret = regmap_read(info->regmap, SEC_RTC_STATUS, &val);
if (ret < 0)
return ret;
break;
int ret, i;
struct rtc_time tm;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
- ret = regmap_write(info->rtc, SEC_ALARM0_CONF, 0);
+ ret = regmap_write(info->regmap, SEC_ALARM0_CONF, 0);
break;
case S5M8767X:
for (i = 0; i < 7; i++)
data[i] &= ~ALARM_ENABLE_MASK;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
u8 alarm0_conf;
struct rtc_time tm;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
alarm0_conf = 0x77;
- ret = regmap_write(info->rtc, SEC_ALARM0_CONF, alarm0_conf);
+ ret = regmap_write(info->regmap, SEC_ALARM0_CONF, alarm0_conf);
break;
case S5M8767X:
if (data[RTC_YEAR1] & 0x7f)
data[RTC_YEAR1] |= ALARM_ENABLE_MASK;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
ret = s5m8767_rtc_set_alarm_reg(info);
if (ret < 0)
return ret;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
static void s5m_rtc_enable_wtsr(struct s5m_rtc_info *info, bool enable)
{
int ret;
- ret = regmap_update_bits(info->rtc, SEC_WTSR_SMPL_CNTL,
+ ret = regmap_update_bits(info->regmap, SEC_WTSR_SMPL_CNTL,
WTSR_ENABLE_MASK,
enable ? WTSR_ENABLE_MASK : 0);
if (ret < 0)
static void s5m_rtc_enable_smpl(struct s5m_rtc_info *info, bool enable)
{
int ret;
- ret = regmap_update_bits(info->rtc, SEC_WTSR_SMPL_CNTL,
+ ret = regmap_update_bits(info->regmap, SEC_WTSR_SMPL_CNTL,
SMPL_ENABLE_MASK,
enable ? SMPL_ENABLE_MASK : 0);
if (ret < 0)
int ret;
struct rtc_time tm;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &tp_read);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &tp_read);
if (ret < 0) {
dev_err(info->dev, "%s: fail to read control reg(%d)\n",
__func__, ret);
data[1] = (0 << BCD_EN_SHIFT) | (1 << MODEL24_SHIFT);
info->rtc_24hr_mode = 1;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_CONF, data, 2);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_CONF, data, 2);
if (ret < 0) {
		dev_err(info->dev, "%s: fail to write control reg(%d)\n",
__func__, ret);
ret = s5m_rtc_set_time(info->dev, &tm);
}
- ret = regmap_update_bits(info->rtc, SEC_RTC_UDR_CON,
+ ret = regmap_update_bits(info->regmap, SEC_RTC_UDR_CON,
RTC_TCON_MASK, tp_read | RTC_TCON_MASK);
if (ret < 0)
dev_err(info->dev, "%s: fail to update TCON reg(%d)\n",
info->dev = &pdev->dev;
info->s5m87xx = s5m87xx;
- info->rtc = s5m87xx->rtc;
+ info->regmap = s5m87xx->regmap_rtc;
info->device_type = s5m87xx->device_type;
info->wtsr_smpl = s5m87xx->wtsr_smpl;
switch (pdata->device_type) {
case S5M8763X:
- info->irq = s5m87xx->irq_base + S5M8763_IRQ_ALARM0;
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data,
+ S5M8763_IRQ_ALARM0);
break;
case S5M8767X:
- info->irq = s5m87xx->irq_base + S5M8767_IRQ_RTCA1;
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data,
+ S5M8767_IRQ_RTCA1);
break;
default:
if (info->wtsr_smpl) {
for (i = 0; i < 3; i++) {
s5m_rtc_enable_wtsr(info, false);
- regmap_read(info->rtc, SEC_WTSR_SMPL_CNTL, &val);
+ regmap_read(info->regmap, SEC_WTSR_SMPL_CNTL, &val);
pr_debug("%s: WTSR_SMPL reg(0x%02x)\n", __func__, val);
if (val & WTSR_ENABLE_MASK)
pr_emerg("%s: fail to disable WTSR\n",
s5m_rtc_enable_smpl(info, false);
}
+static int s5m_rtc_resume(struct device *dev)
+{
+ struct s5m_rtc_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev))
+ ret = disable_irq_wake(info->irq);
+
+ return ret;
+}
+
+static int s5m_rtc_suspend(struct device *dev)
+{
+ struct s5m_rtc_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev))
+ ret = enable_irq_wake(info->irq);
+
+ return ret;
+}
+
+static SIMPLE_DEV_PM_OPS(s5m_rtc_pm_ops, s5m_rtc_suspend, s5m_rtc_resume);
+
static const struct platform_device_id s5m_rtc_id[] = {
{ "s5m-rtc", 0 },
};
.driver = {
.name = "s5m-rtc",
.owner = THIS_MODULE,
+ .pm = &s5m_rtc_pm_ops,
},
.probe = s5m_rtc_probe,
.shutdown = s5m_rtc_shutdown,
fcx_multitrack = private->features.feature[40] & 0x20;
data_size = blk_rq_bytes(req);
+ if (data_size % blksize)
+ return ERR_PTR(-EINVAL);
	/* tpm write requests add CBC data on each track boundary */
if (rq_data_dir(req) == WRITE)
data_size += (last_trk - first_trk) * 4;
{
if (block->gdp) {
del_gendisk(block->gdp);
- block->gdp->queue = NULL;
block->gdp->private_data = NULL;
put_disk(block->gdp);
block->gdp = NULL;
u8 _reserved5[4096 - 112]; /* 112-4095 */
} __packed __aligned(PAGE_SIZE);
-static __initdata struct init_sccb early_event_mask_sccb __aligned(PAGE_SIZE);
static __initdata struct read_info_sccb early_read_info_sccb;
static __initdata char sccb_early[PAGE_SIZE] __aligned(PAGE_SIZE);
static unsigned long sclp_hsa_size;
bool __init sclp_has_linemode(void)
{
- struct init_sccb *sccb = &early_event_mask_sccb;
+ struct init_sccb *sccb = (void *) &sccb_early;
if (sccb->header.response_code != 0x20)
return 0;
bool __init sclp_has_vt220(void)
{
- struct init_sccb *sccb = &early_event_mask_sccb;
+ struct init_sccb *sccb = (void *) &sccb_early;
if (sccb->header.response_code != 0x20)
return 0;
return rc;
}
- tp->screen = tty3270_alloc_screen(tp->view.cols, tp->view.rows);
+ tp->screen = tty3270_alloc_screen(tp->view.rows, tp->view.cols);
if (IS_ERR(tp->screen)) {
rc = PTR_ERR(tp->screen);
raw3270_put_view(&tp->view);
.cmd_per_lun = TW_MAX_CMDS_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = twa_host_attrs,
- .emulated = 1
+ .emulated = 1,
+ .no_write_same = 1,
};
/* This function will probe and initialize a card */
.cmd_per_lun = TW_MAX_CMDS_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = twl_host_attrs,
- .emulated = 1
+ .emulated = 1,
+ .no_write_same = 1,
};
/* This function will probe and initialize a card */
.cmd_per_lun = TW_MAX_CMDS_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = tw_host_attrs,
- .emulated = 1
+ .emulated = 1,
+ .no_write_same = 1,
};
/* This function will probe and initialize a card */
#endif
.use_clustering = ENABLE_CLUSTERING,
.emulated = 1,
+ .no_write_same = 1,
};
static void __aac_shutdown(struct aac_dev * aac)
.cmd_per_lun = ARCMSR_MAX_CMD_PERLUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = arcmsr_host_attrs,
+ .no_write_same = 1,
};
static struct pci_device_id arcmsr_device_id_table[] = {
{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1110)},
struct bfa_fcs_lport_s *bfa_fcs_lookup_port(struct bfa_fcs_s *fcs,
u16 vf_id, wwn_t lpwwn);
+void bfa_fcs_lport_set_symname(struct bfa_fcs_lport_s *port, char *symname);
void bfa_fcs_lport_get_info(struct bfa_fcs_lport_s *port,
struct bfa_lport_info_s *port_info);
void bfa_fcs_lport_get_attr(struct bfa_fcs_lport_s *port,
bfa_sm_send_event(lport, BFA_FCS_PORT_SM_CREATE);
}
+void
+bfa_fcs_lport_set_symname(struct bfa_fcs_lport_s *port,
+ char *symname)
+{
+ strcpy(port->port_cfg.sym_name.symname, symname);
+
+ if (bfa_sm_cmp_state(port, bfa_fcs_lport_sm_online))
+ bfa_fcs_lport_ns_util_send_rspn_id(
+ BFA_FCS_GET_NS_FROM_PORT(port), NULL);
+}
+
/*
* fcs_lport_api
*/
u8 *psymbl = &symbl[0];
int len;
- if (!bfa_sm_cmp_state(port, bfa_fcs_lport_sm_online))
- return;
-
/* Avoid sending RSPN in the following states. */
if (bfa_sm_cmp_state(ns, bfa_fcs_lport_ns_sm_offline) ||
bfa_sm_cmp_state(ns, bfa_fcs_lport_ns_sm_plogi_sending) ||
return;
spin_lock_irqsave(&bfad->bfad_lock, flags);
- if (strlen(sym_name) > 0) {
- strcpy(fcs_vport->lport.port_cfg.sym_name.symname, sym_name);
- bfa_fcs_lport_ns_util_send_rspn_id(
- BFA_FCS_GET_NS_FROM_PORT((&fcs_vport->lport)), NULL);
- }
+ if (strlen(sym_name) > 0)
+ bfa_fcs_lport_set_symname(&fcs_vport->lport, sym_name);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
}
.cmd_per_lun = GDTH_MAXC_P_L,
.unchecked_isa_dma = 1,
.use_clustering = ENABLE_CLUSTERING,
+ .no_write_same = 1,
};
#ifdef CONFIG_ISA
shost->use_clustering = sht->use_clustering;
shost->ordered_tag = sht->ordered_tag;
shost->eh_deadline = shost_eh_deadline * HZ;
+ shost->no_write_same = sht->no_write_same;
if (sht->supported_mode == MODE_UNKNOWN)
/* means we didn't set it ... default to INITIATOR */
.sdev_attrs = hpsa_sdev_attrs,
.shost_attrs = hpsa_shost_attrs,
.max_sectors = 8192,
+ .no_write_same = 1,
};
"has check condition: aborted command: "
"ASC: 0x%x, ASCQ: 0x%x\n",
cp, asc, ascq);
- cmd->result = DID_SOFT_ERROR << 16;
+ cmd->result |= DID_SOFT_ERROR << 16;
break;
}
/* Must be some other type of check condition */
hpsa_hba_inquiry(h);
hpsa_register_scsi(h); /* hook ourselves into SCSI subsystem */
start_controller_lockup_detector(h);
- return 1;
+ return 0;
clean4:
hpsa_free_sg_chain_blocks(h);
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = ipr_ioa_attrs,
.sdev_attrs = ipr_dev_attrs,
- .proc_name = IPR_NAME
+ .proc_name = IPR_NAME,
+ .no_write_same = 1,
};
/**
.sg_tablesize = IPS_MAX_SG,
.cmd_per_lun = 3,
.use_clustering = ENABLE_CLUSTERING,
+ .no_write_same = 1,
};
qc->tf.nsect = 0;
}
- ata_tf_to_fis(&qc->tf, 1, 0, (u8*)&task->ata_task.fis);
+ ata_tf_to_fis(&qc->tf, qc->dev->link->pmp, 1, (u8 *)&task->ata_task.fis);
task->uldd_task = qc;
if (ata_is_atapi(qc->tf.protocol)) {
memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
.eh_device_reset_handler = megaraid_reset,
.eh_bus_reset_handler = megaraid_reset,
.eh_host_reset_handler = megaraid_reset,
+ .no_write_same = 1,
};
static int
.eh_host_reset_handler = megaraid_reset_handler,
.change_queue_depth = megaraid_change_queue_depth,
.use_clustering = ENABLE_CLUSTERING,
+ .no_write_same = 1,
.sdev_attrs = megaraid_sdev_attrs,
.shost_attrs = megaraid_shost_attrs,
};
.bios_param = megasas_bios_param,
.use_clustering = ENABLE_CLUSTERING,
.change_queue_depth = megasas_change_queue_depth,
+ .no_write_same = 1,
};
/**
unsigned long flags;
u8 deviceType = pPayload->sas_identify.dev_type;
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPC;
PM8001_MSG_DBG(pm8001_ha,
pm8001_printk("HW_EVENT_SAS_PHY_UP port id = %d, phy id = %d\n",
port_id, phy_id));
pm8001_printk("HW_EVENT_SATA_PHY_UP port id = %d,"
" phy id = %d\n", port_id, phy_id));
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPC;
port->port_attached = 1;
pm8001_get_lrate_mode(phy, link_rate);
phy->phy_type |= PORT_TYPE_SATA;
#define LINKRATE_30 (0x02 << 8)
#define LINKRATE_60 (0x04 << 8)
+/* for phy state */
+
+#define PHY_STATE_LINK_UP_SPC 0x1
+
/* for new SPC controllers MEMBASE III is shared between BIOS and DATA */
#define GSM_SM_BASE 0x4F0000
struct mpi_msg_hdr{
static void pm8001_tasklet(unsigned long opaque)
{
struct pm8001_hba_info *pm8001_ha;
- u32 vec;
- pm8001_ha = (struct pm8001_hba_info *)opaque;
+ struct isr_param *irq_vector;
+
+ irq_vector = (struct isr_param *)opaque;
+ pm8001_ha = irq_vector->drv_inst;
if (unlikely(!pm8001_ha))
BUG_ON(1);
- vec = pm8001_ha->int_vector;
- PM8001_CHIP_DISP->isr(pm8001_ha, vec);
+ PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id);
}
#endif
-static struct pm8001_hba_info *outq_to_hba(u8 *outq)
-{
- return container_of((outq - *outq), struct pm8001_hba_info, outq[0]);
-}
-
/**
* pm8001_interrupt_handler_msix - main MSIX interrupt handler.
 * It obtains the vector number and calls the equivalent bottom half.
*/
static irqreturn_t pm8001_interrupt_handler_msix(int irq, void *opaque)
{
- struct pm8001_hba_info *pm8001_ha = outq_to_hba(opaque);
- u8 outq = *(u8 *)opaque;
+ struct isr_param *irq_vector;
+ struct pm8001_hba_info *pm8001_ha;
irqreturn_t ret = IRQ_HANDLED;
+ irq_vector = (struct isr_param *)opaque;
+ pm8001_ha = irq_vector->drv_inst;
+
if (unlikely(!pm8001_ha))
return IRQ_NONE;
if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha))
return IRQ_NONE;
- pm8001_ha->int_vector = outq;
#ifdef PM8001_USE_TASKLET
- tasklet_schedule(&pm8001_ha->tasklet);
+ tasklet_schedule(&pm8001_ha->tasklet[irq_vector->irq_id]);
#else
- ret = PM8001_CHIP_DISP->isr(pm8001_ha, outq);
+ ret = PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id);
#endif
return ret;
}
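The isr_param change replaces two fragile tricks: recovering the HBA with container_of() on a byte inside its outq[] array, and stashing the vector number in a single shared int_vector field that every interrupt overwrote. Each vector now carries its own (instance, vector id) cookie. A minimal sketch of the pattern, with hypothetical names:

/* sketch: per-vector interrupt context (all names hypothetical) */
struct my_hba;
irqreturn_t my_chip_isr(struct my_hba *hba, u32 vec);

struct isr_param {
	struct my_hba *drv_inst;
	u32 irq_id;
};

static irqreturn_t my_isr(int irq, void *opaque)
{
	struct isr_param *p = opaque;	/* private to this vector, no shared state */

	return my_chip_isr(p->drv_inst, p->irq_id);
}

/* At setup time, request_irq(vec_irq, my_isr, 0, name, &hba->irq_vector[i])
 * hands each vector its own cookie, so concurrent interrupts cannot race. */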
if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha))
return IRQ_NONE;
- pm8001_ha->int_vector = 0;
#ifdef PM8001_USE_TASKLET
- tasklet_schedule(&pm8001_ha->tasklet);
+ tasklet_schedule(&pm8001_ha->tasklet[0]);
#else
ret = PM8001_CHIP_DISP->isr(pm8001_ha, 0);
#endif
{
struct pm8001_hba_info *pm8001_ha;
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
-
+ int j;
pm8001_ha = sha->lldd_ha;
if (!pm8001_ha)
pm8001_ha->iomb_size = IOMB_SIZE_SPC;
#ifdef PM8001_USE_TASKLET
- /**
- * default tasklet for non msi-x interrupt handler/first msi-x
- * interrupt handler
- **/
- tasklet_init(&pm8001_ha->tasklet, pm8001_tasklet,
- (unsigned long)pm8001_ha);
+	/* Tasklet for the non-MSI-X interrupt handler */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[0]));
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[j]));
#endif
pm8001_ioremap(pm8001_ha);
if (!pm8001_alloc(pm8001_ha, ent))
"pci_enable_msix request ret:%d no of intr %d\n",
rc, pm8001_ha->number_of_intr));
- for (i = 0; i < number_of_intr; i++)
- pm8001_ha->outq[i] = i;
for (i = 0; i < number_of_intr; i++) {
snprintf(intr_drvname[i], sizeof(intr_drvname[0]),
DRV_NAME"%d", i);
+ pm8001_ha->irq_vector[i].irq_id = i;
+ pm8001_ha->irq_vector[i].drv_inst = pm8001_ha;
+
if (request_irq(pm8001_ha->msix_entries[i].vector,
pm8001_interrupt_handler_msix, flag,
- intr_drvname[i], &pm8001_ha->outq[i])) {
+ intr_drvname[i], &(pm8001_ha->irq_vector[i]))) {
for (j = 0; j < i; j++)
free_irq(
pm8001_ha->msix_entries[j].vector,
- &pm8001_ha->outq[j]);
+						&(pm8001_ha->irq_vector[j]));
pci_disable_msix(pm8001_ha->pdev);
break;
}
{
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
- int i;
+ int i, j;
pm8001_ha = sha->lldd_ha;
sas_unregister_ha(sha);
sas_remove_host(pm8001_ha->shost);
synchronize_irq(pm8001_ha->msix_entries[i].vector);
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pm8001_ha->msix_entries[i].vector,
- &pm8001_ha->outq[i]);
+ &(pm8001_ha->irq_vector[i]));
pci_disable_msix(pdev);
#else
free_irq(pm8001_ha->irq, sha);
#endif
#ifdef PM8001_USE_TASKLET
- tasklet_kill(&pm8001_ha->tasklet);
+	/* For both non-MSI-X and MSI-X interrupts */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_kill(&pm8001_ha->tasklet[0]);
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_kill(&pm8001_ha->tasklet[j]);
#endif
pm8001_free(pm8001_ha);
kfree(sha->sas_phy);
{
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
- int i;
+ int i, j;
u32 device_state;
pm8001_ha = sha->lldd_ha;
flush_workqueue(pm8001_wq);
synchronize_irq(pm8001_ha->msix_entries[i].vector);
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pm8001_ha->msix_entries[i].vector,
- &pm8001_ha->outq[i]);
+ &(pm8001_ha->irq_vector[i]));
pci_disable_msix(pdev);
#else
free_irq(pm8001_ha->irq, sha);
#endif
#ifdef PM8001_USE_TASKLET
- tasklet_kill(&pm8001_ha->tasklet);
+	/* For both non-MSI-X and MSI-X interrupts */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_kill(&pm8001_ha->tasklet[0]);
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_kill(&pm8001_ha->tasklet[j]);
#endif
device_state = pci_choose_state(pdev, state);
pm8001_printk("pdev=0x%p, slot=%s, entering "
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
int rc;
- u8 i = 0;
+ u8 i = 0, j;
u32 device_state;
pm8001_ha = sha->lldd_ha;
device_state = pdev->current_state;
if (rc)
goto err_out_disable;
#ifdef PM8001_USE_TASKLET
- /* default tasklet for non msi-x interrupt handler/first msi-x
- * interrupt handler */
- tasklet_init(&pm8001_ha->tasklet, pm8001_tasklet,
- (unsigned long)pm8001_ha);
+	/* Tasklet for the non-MSI-X interrupt handler */
+ if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
+ tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[0]));
+ else
+ for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
+ tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet,
+ (unsigned long)&(pm8001_ha->irq_vector[j]));
#endif
PM8001_CHIP_DISP->interrupt_enable(pm8001_ha, 0);
if (pm8001_ha->chip_id != chip_8001) {
MODULE_AUTHOR("Jack Wang <jack_wang@usish.com>");
MODULE_AUTHOR("Anand Kumar Santhanam <AnandKumar.Santhanam@pmcs.com>");
MODULE_AUTHOR("Sangeetha Gnanasekaran <Sangeetha.Gnanasekaran@pmcs.com>");
+MODULE_AUTHOR("Nikith Ganigarakoppal <Nikith.Ganigarakoppal@pmcs.com>");
MODULE_DESCRIPTION(
"PMC-Sierra PM8001/8081/8088/8089/8074/8076/8077 "
"SAS/SATA controller driver");
struct pm8001_tmf_task tmf_task;
struct pm8001_device *pm8001_dev = dev->lldd_dev;
struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev);
+ DECLARE_COMPLETION_ONSTACK(completion_setstate);
if (dev_is_sata(dev)) {
struct sas_phy *phy = sas_get_local_phy(dev);
rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev ,
dev, 1, 0);
rc = sas_phy_reset(phy, 1);
sas_put_local_phy(phy);
+ pm8001_dev->setds_completion = &completion_setstate;
rc = PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha,
pm8001_dev, 0x01);
- msleep(2000);
+ wait_for_completion(&completion_setstate);
} else {
tmf_task.tmf = TMF_LU_RESET;
rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task);
u64 membase;
u32 memsize;
};
+struct isr_param {
+ struct pm8001_hba_info *drv_inst;
+ u32 irq_id;
+};
struct pm8001_hba_info {
char name[PM8001_NAME_LENGTH];
struct list_head list;
int number_of_intr;/*will be used in remove()*/
#endif
#ifdef PM8001_USE_TASKLET
- struct tasklet_struct tasklet;
+ struct tasklet_struct tasklet[PM8001_MAX_MSIX_VEC];
#endif
u32 logging_level;
u32 fw_status;
u32 smp_exp_mode;
- u32 int_vector;
const struct firmware *fw_image;
- u8 outq[PM8001_MAX_MSIX_VEC];
+ struct isr_param irq_vector[PM8001_MAX_MSIX_VEC];
};
struct pm8001_work {
unsigned long flags;
u8 deviceType = pPayload->sas_identify.dev_type;
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPCV;
PM8001_MSG_DBG(pm8001_ha, pm8001_printk(
"portid:%d; phyid:%d; linkrate:%d; "
"portstate:%x; devicetype:%x\n",
port_id, phy_id, link_rate, portstate));
port->port_state = portstate;
+ phy->phy_state = PHY_STATE_LINK_UP_SPCV;
port->port_attached = 1;
pm8001_get_lrate_mode(phy, link_rate);
phy->phy_type |= PORT_TYPE_SATA;
#define SAS_DOPNRJT_RTRY_TMO 128
#define SAS_COPNRJT_RTRY_TMO 128
+/* for phy state */
+#define PHY_STATE_LINK_UP_SPCV 0x2
/*
Making ORR bigger than IT NEXUS LOSS which is 2000000us = 2 second.
Assuming a bigger value 3 second, 3000000/128 = 23437.5 where 128
};
#define PMCRAID_AEN_CMD_MAX (__PMCRAID_AEN_CMD_MAX - 1)
+static struct genl_multicast_group pmcraid_mcgrps[] = {
+ { .name = "events", /* not really used - see ID discussion below */ },
+};
+
static struct genl_family pmcraid_event_family = {
- .id = GENL_ID_GENERATE,
+ /*
+ * Due to prior multicast group abuse (the code having assumed that
+ * the family ID can be used as a multicast group ID) we need to
+ * statically allocate a family (and thus group) ID.
+ */
+ .id = GENL_ID_PMCRAID,
.name = "pmcraid",
.version = 1,
- .maxattr = PMCRAID_AEN_ATTR_MAX
+ .maxattr = PMCRAID_AEN_ATTR_MAX,
+ .mcgrps = pmcraid_mcgrps,
+ .n_mcgrps = ARRAY_SIZE(pmcraid_mcgrps),
};
/**
return result;
}
- result =
- genlmsg_multicast(&pmcraid_event_family, skb, 0,
- pmcraid_event_family.id, GFP_ATOMIC);
+ result = genlmsg_multicast(&pmcraid_event_family, skb,
+ 0, 0, GFP_ATOMIC);
	/* If there are no listeners, genlmsg_multicast may return a non-zero
* value.
.this_id = -1,
.sg_tablesize = PMCRAID_MAX_IOADLS,
.max_sectors = PMCRAID_IOA_MAX_SECTORS,
+ .no_write_same = 1,
.cmd_per_lun = PMCRAID_MAX_CMD_PER_LUN,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = pmcraid_host_attrs,
schedule_delayed_work(&tgt->sess_del_work, 0);
else
schedule_delayed_work(&tgt->sess_del_work,
- jiffies - sess->expires);
+ sess->expires - jiffies);
}
/* ha->hardware_lock supposed to be held on entry */
struct scsi_qla_host *vha = tgt->vha;
struct qla_hw_data *ha = vha->hw;
struct qla_tgt_sess *sess;
- unsigned long flags;
+ unsigned long flags, elapsed;
spin_lock_irqsave(&ha->hardware_lock, flags);
while (!list_empty(&tgt->del_sess_list)) {
sess = list_entry(tgt->del_sess_list.next, typeof(*sess),
del_list_entry);
- if (time_after_eq(jiffies, sess->expires)) {
+ elapsed = jiffies;
+ if (time_after_eq(elapsed, sess->expires)) {
qlt_undelete_sess(sess);
ql_dbg(ql_dbg_tgt_mgt, vha, 0xf004,
ha->tgt.tgt_ops->put_sess(sess);
} else {
schedule_delayed_work(&tgt->sess_del_work,
- jiffies - sess->expires);
+ sess->expires - elapsed);
break;
}
}
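Both hunks fix the same inverted subtraction: schedule_delayed_work() wants the time remaining, but the old code passed jiffies - sess->expires, which for an unexpired session is a small negative number that wraps to an enormous unsigned delay. A sketch of the corrected arithmetic:

/* sketch: scheduling against an absolute jiffies deadline */
unsigned long now = jiffies;

if (time_after_eq(now, sess->expires)) {
	/* deadline already passed: delete the session immediately */
} else {
	/*
	 * time remaining, e.g. now = 1000, expires = 1250 -> 250 ticks;
	 * the old (now - expires) would wrap to (unsigned long)-250.
	 */
	schedule_delayed_work(&tgt->sess_del_work, sess->expires - now);
}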
if (rc != 0) {
ha->tgt.tgt_ops = NULL;
ha->tgt.target_lport_ptr = NULL;
+ scsi_host_put(host);
}
mutex_unlock(&qla_tgt_mutex);
return rc;
struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
struct tcm_qla2xxx_tpg, se_tpg);
- return QLA_TPG_ATTRIB(tpg)->generate_node_acls;
+ return tpg->tpg_attrib.generate_node_acls;
}
static int tcm_qla2xxx_check_demo_mode_cache(struct se_portal_group *se_tpg)
struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
struct tcm_qla2xxx_tpg, se_tpg);
- return QLA_TPG_ATTRIB(tpg)->cache_dynamic_acls;
+ return tpg->tpg_attrib.cache_dynamic_acls;
}
static int tcm_qla2xxx_check_demo_write_protect(struct se_portal_group *se_tpg)
struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
struct tcm_qla2xxx_tpg, se_tpg);
- return QLA_TPG_ATTRIB(tpg)->demo_mode_write_protect;
+ return tpg->tpg_attrib.demo_mode_write_protect;
}
static int tcm_qla2xxx_check_prod_write_protect(struct se_portal_group *se_tpg)
struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
struct tcm_qla2xxx_tpg, se_tpg);
- return QLA_TPG_ATTRIB(tpg)->prod_mode_write_protect;
+ return tpg->tpg_attrib.prod_mode_write_protect;
}
static int tcm_qla2xxx_check_demo_mode_login_only(struct se_portal_group *se_tpg)
struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg,
struct tcm_qla2xxx_tpg, se_tpg);
- return QLA_TPG_ATTRIB(tpg)->demo_mode_login_only;
+ return tpg->tpg_attrib.demo_mode_login_only;
}
static struct se_node_acl *tcm_qla2xxx_alloc_fabric_acl(
struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg, \
struct tcm_qla2xxx_tpg, se_tpg); \
\
- return sprintf(page, "%u\n", QLA_TPG_ATTRIB(tpg)->name); \
+ return sprintf(page, "%u\n", tpg->tpg_attrib.name); \
} \
\
static ssize_t tcm_qla2xxx_tpg_attrib_store_##name( \
* By default allow READ-ONLY TPG demo-mode access w/ cached dynamic
* NodeACLs
*/
- QLA_TPG_ATTRIB(tpg)->generate_node_acls = 1;
- QLA_TPG_ATTRIB(tpg)->demo_mode_write_protect = 1;
- QLA_TPG_ATTRIB(tpg)->cache_dynamic_acls = 1;
- QLA_TPG_ATTRIB(tpg)->demo_mode_login_only = 1;
+ tpg->tpg_attrib.generate_node_acls = 1;
+ tpg->tpg_attrib.demo_mode_write_protect = 1;
+ tpg->tpg_attrib.cache_dynamic_acls = 1;
+ tpg->tpg_attrib.demo_mode_login_only = 1;
ret = core_tpg_register(&tcm_qla2xxx_fabric_configfs->tf_ops, wwn,
&tpg->se_tpg, tpg, TRANSPORT_TPG_TYPE_NORMAL);
/*
* Setup default attribute lists for various fabric->tf_cit_tmpl
*/
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = tcm_qla2xxx_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = tcm_qla2xxx_tpg_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs =
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = tcm_qla2xxx_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = tcm_qla2xxx_tpg_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs =
tcm_qla2xxx_tpg_attrib_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;
/*
* Register the fabric for use within TCM
*/
/*
* Setup default attribute lists for various npiv_fabric->tf_cit_tmpl
*/
- TF_CIT_TMPL(npiv_fabric)->tfc_wwn_cit.ct_attrs = tcm_qla2xxx_wwn_attrs;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
- TF_CIT_TMPL(npiv_fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = tcm_qla2xxx_wwn_attrs;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+ npiv_fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;
/*
* Register the npiv_fabric for use within TCM
*/
struct se_portal_group se_tpg;
};
-#define QLA_TPG_ATTRIB(tpg) (&(tpg)->tpg_attrib)
-
struct tcm_qla2xxx_fc_loopid {
struct se_node_acl *se_nacl;
};
{
struct scsi_device *sdev = sdkp->device;
+ if (sdev->host->no_write_same) {
+ sdev->no_write_same = 1;
+
+ return;
+ }
+
if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY) < 0) {
/* too large values might cause issues with arcmsr */
int vpd_buf_len = 64;
.use_clustering = DISABLE_CLUSTERING,
	/* Make sure we don't get an sg segment that crosses a page boundary */
.dma_boundary = PAGE_SIZE-1,
+ .no_write_same = 1,
};
enum {
static int bcm2835_spi_remove(struct platform_device *pdev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct spi_master *master = platform_get_drvdata(pdev);
struct bcm2835_spi *bs = spi_master_get_devdata(master);
free_irq(bs->irq, master);
static int bcm63xx_spi_remove(struct platform_device *pdev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct spi_master *master = platform_get_drvdata(pdev);
struct bcm63xx_spi *bs = spi_master_get_devdata(master);
/* reset spi block */
&dws->tx_sgl,
1,
DMA_MEM_TO_DEV,
- DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_DEST_UNMAP);
+ DMA_PREP_INTERRUPT);
txdesc->callback = dw_spi_dma_done;
txdesc->callback_param = dws;
&dws->rx_sgl,
1,
DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_COMPL_SKIP_DEST_UNMAP);
+ DMA_PREP_INTERRUPT);
rxdesc->callback = dw_spi_dma_done;
rxdesc->callback_param = dws;
static int mpc512x_psc_spi_do_remove(struct device *dev)
{
- struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
+ struct spi_master *master = dev_get_drvdata(dev);
struct mpc512x_psc_spi *mps = spi_master_get_devdata(master);
clk_disable_unprepare(mps->clk_mclk);
struct mxs_spi *spi;
struct mxs_ssp *ssp;
- master = spi_master_get(platform_get_drvdata(pdev));
+ master = platform_get_drvdata(pdev);
spi = spi_master_get_devdata(master);
ssp = &spi->ssp;
static struct acpi_device_id pxa2xx_spi_acpi_match[] = {
{ "INT33C0", 0 },
{ "INT33C1", 0 },
+ { "INT3430", 0 },
+ { "INT3431", 0 },
{ "80860F0E", 0 },
{ },
};
/* Enable the SSP clock */
clk_prepare_enable(ssp->clk);
+ /* Restore LPSS private register bits */
+ lpss_ssp_setup(drv_data);
+
/* Start the queue running */
status = spi_master_resume(drv_data->master);
if (status != 0) {
static int rspi_remove(struct platform_device *pdev)
{
- struct rspi_data *rspi = spi_master_get(platform_get_drvdata(pdev));
+ struct rspi_data *rspi = platform_get_drvdata(pdev);
spi_unregister_master(rspi->master);
rspi_release_dma(rspi);
free_irq(platform_get_irq(pdev, 0), rspi);
clk_put(rspi->clk);
iounmap(rspi->addr);
- spi_master_put(rspi->master);
return 0;
}
qspi->spi_max_frequency, clk_div);
ret = pm_runtime_get_sync(qspi->dev);
- if (ret) {
+ if (ret < 0) {
dev_err(qspi->dev, "pm_runtime_get_sync() failed\n");
return ret;
}
if (!of_property_read_u32(np, "num-cs", &num_cs))
master->num_chipselect = num_cs;
- platform_set_drvdata(pdev, master);
-
qspi = spi_master_get_devdata(master);
qspi->master = master;
qspi->dev = &pdev->dev;
+ platform_set_drvdata(pdev, qspi);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
static int ti_qspi_remove(struct platform_device *pdev)
{
- struct ti_qspi *qspi = platform_get_drvdata(pdev);
+ struct spi_master *master;
+ struct ti_qspi *qspi;
+ int ret;
+
+ master = platform_get_drvdata(pdev);
+ qspi = spi_master_get_devdata(master);
+
+ ret = pm_runtime_get_sync(qspi->dev);
+ if (ret < 0) {
+ dev_err(qspi->dev, "pm_runtime_get_sync() failed\n");
+ return ret;
+ }
ti_qspi_write(qspi, QSPI_WC_INT_DISABLE, QSPI_INTR_ENABLE_CLEAR_REG);
+ pm_runtime_put(qspi->dev);
+ pm_runtime_disable(&pdev->dev);
+
+ spi_unregister_master(master);
+
return 0;
}
static int txx9spi_remove(struct platform_device *dev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(dev));
+ struct spi_master *master = platform_get_drvdata(dev);
struct txx9spi *c = spi_master_get_devdata(master);
destroy_workqueue(c->workqueue);
}
EXPORT_SYMBOL_GPL(spi_alloc_device);
+static void spi_dev_set_name(struct spi_device *spi)
+{
+ struct acpi_device *adev = ACPI_COMPANION(&spi->dev);
+
+ if (adev) {
+ dev_set_name(&spi->dev, "spi-%s", acpi_dev_name(adev));
+ return;
+ }
+
+ dev_set_name(&spi->dev, "%s.%u", dev_name(&spi->master->dev),
+ spi->chip_select);
+}
+
/**
* spi_add_device - Add spi_device allocated with spi_alloc_device
* @spi: spi_device to register
}
/* Set the bus ID string */
- dev_set_name(&spi->dev, "%s.%u", dev_name(&spi->master->dev),
- spi->chip_select);
-
+ spi_dev_set_name(spi);
/* We need to make sure there's no other device with this
* chipselect **BEFORE** we call setup(), else we'll trash
return AE_NO_MEMORY;
}
- ACPI_HANDLE_SET(&spi->dev, handle);
+ ACPI_COMPANION_SET(&spi->dev, adev);
spi->irq = -1;
INIT_LIST_HEAD(&resource_list);
return -ENOMEM;
ret = spi_register_master(master);
- if (ret != 0) {
+ if (!ret) {
*ptr = master;
devres_add(dev, ptr);
} else {
kfree_skb(skb);
}
-static int btmtk_usb_send_frame(struct sk_buff *skb)
+static int btmtk_usb_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
{
- struct hci_dev *hdev = (struct hci_dev *)skb->dev;
struct btmtk_usb_data *data = hci_get_drvdata(hdev);
struct usb_ctrlrequest *dr;
struct urb *urb;
release_firmware(fw);
}
- return ret;
+ return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL_GPL(comedi_load_firmware);
BOARD_ADLINK_PCI7296,
BOARD_CB_PCIDIO24,
BOARD_CB_PCIDIO24H,
- BOARD_CB_PCIDIO48H,
+ BOARD_CB_PCIDIO48H_OLD,
+ BOARD_CB_PCIDIO48H_NEW,
BOARD_CB_PCIDIO96H,
BOARD_NI_PCIDIO96,
BOARD_NI_PCIDIO96B,
.dio_badr = 2,
.n_8255 = 1,
},
- [BOARD_CB_PCIDIO48H] = {
+ [BOARD_CB_PCIDIO48H_OLD] = {
.name = "cb_pci-dio48h",
.dio_badr = 1,
.n_8255 = 2,
},
+ [BOARD_CB_PCIDIO48H_NEW] = {
+ .name = "cb_pci-dio48h",
+ .dio_badr = 2,
+ .n_8255 = 2,
+ },
[BOARD_CB_PCIDIO96H] = {
.name = "cb_pci-dio96h",
.dio_badr = 2,
{ PCI_VDEVICE(ADLINK, 0x7296), BOARD_ADLINK_PCI7296 },
{ PCI_VDEVICE(CB, 0x0028), BOARD_CB_PCIDIO24 },
{ PCI_VDEVICE(CB, 0x0014), BOARD_CB_PCIDIO24H },
- { PCI_VDEVICE(CB, 0x000b), BOARD_CB_PCIDIO48H },
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, 0x0000, 0x0000),
+ .driver_data = BOARD_CB_PCIDIO48H_OLD },
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, PCI_VENDOR_ID_CB, 0x000b),
+ .driver_data = BOARD_CB_PCIDIO48H_NEW },
{ PCI_VDEVICE(CB, 0x0017), BOARD_CB_PCIDIO96H },
{ PCI_VDEVICE(NI, 0x0160), BOARD_NI_PCIDIO96 },
{ PCI_VDEVICE(NI, 0x1630), BOARD_NI_PCIDIO96B },
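The two PCI_DEVICE_SUB() entries work because the id table is scanned entry by entry and each field either matches exactly or is PCI_ANY_ID: old revisions report a 0000:0000 subsystem and bind the _OLD layout, while newer ones echo the CB IDs in the subsystem fields and bind the _NEW one. A simplified sketch of the comparison the PCI core applies per entry (class matching omitted):

/* sketch: one id-table comparison, simplified from the PCI core's matching */
static bool id_matches(const struct pci_device_id *id, const struct pci_dev *dev)
{
	return (id->vendor    == PCI_ANY_ID || id->vendor    == dev->vendor) &&
	       (id->device    == PCI_ANY_ID || id->device    == dev->device) &&
	       (id->subvendor == PCI_ANY_ID || id->subvendor == dev->subsystem_vendor) &&
	       (id->subdevice == PCI_ANY_ID || id->subdevice == dev->subsystem_device);
}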
if (mask) {
if (mask & 0x00ff)
outb(s->state & 0xff, dev->iobase + reg);
- if ((mask & 0xff00) & (s->n_chan > 8))
+ if ((mask & 0xff00) && (s->n_chan > 8))
outb((s->state >> 8) & 0xff, dev->iobase + reg + 1);
- if ((mask & 0xff0000) & (s->n_chan > 16))
+ if ((mask & 0xff0000) && (s->n_chan > 16))
outb((s->state >> 16) & 0xff, dev->iobase + reg + 2);
- if ((mask & 0xff000000) & (s->n_chan > 24))
+ if ((mask & 0xff000000) && (s->n_chan > 24))
outb((s->state >> 24) & 0xff, dev->iobase + reg + 3);
}
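The three repaired conditions are the classic bitwise-vs-logical mixup: a comparison such as s->n_chan > 8 evaluates to 0 or 1, and ANDing that 1 bitwise into a mask of upper bytes is always zero, so the high-byte writes could never execute. Worked example:

/* sketch: why '&' silently disabled the branch (n_chan = 24, bit 8 set) */
unsigned int mask = 0x0100;
int n_chan = 24;
int bitwise = (mask & 0xff00) &  (n_chan > 8);	/* 0x0100 & 1 == 0: never taken */
int logical = (mask & 0xff00) && (n_chan > 8);	/* 256 && 1   == 1: taken       */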
* Private helper function: Write setpoint to an application DAC channel.
*/
static void s626_set_dac(struct comedi_device *dev, uint16_t chan,
- unsigned short dacdata)
+ int16_t dacdata)
{
struct s626_private *devpriv = dev->private;
uint16_t signmask;
unsigned char *rx_buf = devpriv->usb_rx_buf;
unsigned char *tx_buf = devpriv->usb_tx_buf;
int reg, cmd;
- int ret;
+ int ret = 0;
if (devpriv->model == VMK8061_MODEL) {
reg = VMK8061_DO_REG;
u8 **c_file, const u8 *endpoint, bool boot_case)
{
long word_length;
- int status;
+ int status = 0;
	/* DEBUG("FT1000:REQUEST_CODE_SEGMENT\n"); */
word_length = get_request_value(ft1000dev);
return status;
}
-
config SENSORS_HMC5843
tristate "Honeywell HMC5843/5883/5883L 3-Axis Magnetometer"
depends on I2C
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
help
Say Y here to add support for the Honeywell HMC5843, HMC5883 and
HMC5883L 3-Axis Magnetometer (digital compass).
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \
BIT(IIO_CHAN_INFO_SAMP_FREQ), \
.scan_index = idx, \
- .scan_type = IIO_ST('s', 16, 16, IIO_BE), \
+ .scan_type = { \
+ .sign = 's', \
+ .realbits = 16, \
+ .storagebits = 16, \
+ .endianness = IIO_BE, \
+ }, \
}
static const struct iio_chan_spec hmc5843_channels[] = {
obj-$(CONFIG_DRM_IMX_LDB) += imx-ldb.o
obj-$(CONFIG_DRM_IMX_FB_HELPER) += imx-fbdev.o
obj-$(CONFIG_DRM_IMX_IPUV3_CORE) += ipu-v3/
-obj-$(CONFIG_DRM_IMX_IPUV3) += ipuv3-crtc.o ipuv3-plane.o
+
+imx-ipuv3-crtc-objs := ipuv3-crtc.o ipuv3-plane.o
+obj-$(CONFIG_DRM_IMX_IPUV3) += imx-ipuv3-crtc.o
{
return crtc->pipe;
}
+EXPORT_SYMBOL_GPL(imx_drm_crtc_id);
static void imx_drm_driver_lastclose(struct drm_device *drm)
{
imx_drm_device_put();
- drm_mode_config_cleanup(imxdrm->drm);
+ drm_vblank_cleanup(imxdrm->drm);
drm_kms_helper_poll_fini(imxdrm->drm);
+ drm_mode_config_cleanup(imxdrm->drm);
return 0;
}
if (!file->is_master)
return;
- for (i = 0; i < 4; i++)
- imx_drm_disable_vblank(drm , i);
+ for (i = 0; i < MAX_CRTC; i++)
+ imx_drm_disable_vblank(drm, i);
}
static const struct file_operations imx_drm_driver_fops = {
struct imx_drm_device *imxdrm = __imx_drm_device();
int ret;
- drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc,
- imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs);
ret = drm_mode_crtc_set_gamma_size(imx_drm_crtc->crtc, 256);
if (ret)
return ret;
drm_crtc_helper_add(imx_drm_crtc->crtc,
imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs);
+ drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc,
+ imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs);
+
drm_mode_group_reinit(imxdrm->drm);
return 0;
ret = drm_mode_group_init_legacy_group(imxdrm->drm,
&imxdrm->drm->primary->mode_group);
if (ret)
- goto err_init;
+ goto err_kms;
ret = drm_vblank_init(imxdrm->drm, MAX_CRTC);
if (ret)
- goto err_init;
+ goto err_kms;
/*
* with vblank_disable_allowed = true, vblank interrupt will be disabled
*/
imxdrm->drm->vblank_disable_allowed = true;
- if (!imx_drm_device_get())
+ if (!imx_drm_device_get()) {
ret = -EINVAL;
+ goto err_vblank;
+ }
- ret = 0;
+ mutex_unlock(&imxdrm->mutex);
+ return 0;
-err_init:
+err_vblank:
+ drm_vblank_cleanup(drm);
+err_kms:
+ drm_kms_helper_poll_fini(drm);
+ drm_mode_config_cleanup(drm);
mutex_unlock(&imxdrm->mutex);
return ret;
mutex_lock(&imxdrm->mutex);
+ /*
+ * The vblank arrays are dimensioned by MAX_CRTC - we can't
+ * pass IDs greater than this to those functions.
+ */
+ if (imxdrm->pipes >= MAX_CRTC) {
+ ret = -EINVAL;
+ goto err_busy;
+ }
+
if (imxdrm->drm->open_count) {
ret = -EBUSY;
goto err_busy;
return 0;
err_register:
+ list_del(&imx_drm_crtc->list);
kfree(imx_drm_crtc);
err_alloc:
err_busy:
struct drm_encoder encoder;
struct imx_drm_encoder *imx_drm_encoder;
struct device *dev;
- spinlock_t enable_lock; /* serializes tve_enable/disable */
spinlock_t lock; /* register lock */
bool enabled;
int mode;
static void tve_enable(struct imx_tve *tve)
{
- unsigned long flags;
int ret;
- spin_lock_irqsave(&tve->enable_lock, flags);
if (!tve->enabled) {
tve->enabled = true;
clk_prepare_enable(tve->clk);
TVE_CD_SM_IEN |
TVE_CD_LM_IEN |
TVE_CD_MON_END_IEN);
-
- spin_unlock_irqrestore(&tve->enable_lock, flags);
}
static void tve_disable(struct imx_tve *tve)
{
- unsigned long flags;
int ret;
- spin_lock_irqsave(&tve->enable_lock, flags);
if (tve->enabled) {
tve->enabled = false;
ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG,
TVE_IPU_CLK_EN | TVE_EN, 0);
clk_disable_unprepare(tve->clk);
}
- spin_unlock_irqrestore(&tve->enable_lock, flags);
}
static int tve_setup_tvout(struct imx_tve *tve)
tve->dev = &pdev->dev;
spin_lock_init(&tve->lock);
- spin_lock_init(&tve->enable_lock);
ddc_node = of_parse_phandle(np, "ddc", 0);
if (ddc_node) {
},
};
+static DEFINE_MUTEX(ipu_client_id_mutex);
static int ipu_client_id;
-static int ipu_add_subdevice_pdata(struct device *dev,
- const struct ipu_platform_reg *reg)
-{
- struct platform_device *pdev;
-
- pdev = platform_device_register_data(dev, reg->name, ipu_client_id++,
-			&reg->pdata, sizeof(struct ipu_platform_reg));
-
- return PTR_ERR_OR_ZERO(pdev);
-}
-
static int ipu_add_client_devices(struct ipu_soc *ipu)
{
- int ret;
- int i;
+ struct device *dev = ipu->dev;
+ unsigned i;
+ int id, ret;
+
+ mutex_lock(&ipu_client_id_mutex);
+ id = ipu_client_id;
+ ipu_client_id += ARRAY_SIZE(client_reg);
+ mutex_unlock(&ipu_client_id_mutex);
for (i = 0; i < ARRAY_SIZE(client_reg); i++) {
const struct ipu_platform_reg *reg = &client_reg[i];
- ret = ipu_add_subdevice_pdata(ipu->dev, reg);
- if (ret)
+ struct platform_device *pdev;
+
+ pdev = platform_device_register_data(dev, reg->name,
+			id++, &reg->pdata, sizeof(reg->pdata));
+
+		if (IS_ERR(pdev)) {
+			ret = PTR_ERR(pdev);
+			goto err_register;
+		}
}
return 0;
err_register:
- platform_device_unregister_children(to_platform_device(ipu->dev));
+ platform_device_unregister_children(to_platform_device(dev));
return ret;
}
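Taking the mutex only to carve out a block of ARRAY_SIZE(client_reg) ids keeps the ids contiguous per IPU instance without holding any lock across platform_device_register_data(). The reservation pattern in isolation, as a sketch with hypothetical names:

/* sketch: reserve a contiguous id range under a short-lived lock */
static DEFINE_MUTEX(id_mutex);
static int next_id;

static int reserve_ids(unsigned int count)
{
	int id;

	mutex_lock(&id_mutex);
	id = next_id;		/* first id of this caller's block     */
	next_id += count;	/* later callers start after the block */
	mutex_unlock(&id_mutex);

	return id;		/* registration then proceeds unlocked */
}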
struct l_wait_info lwi = { 0 };
int rc = 0;
- if (!thread_is_init(&pinger_thread) &&
- !thread_is_stopped(&pinger_thread))
+ if (thread_is_init(&pinger_thread) ||
+ thread_is_stopped(&pinger_thread))
return -EALREADY;
ptlrpc_pinger_remove_timeouts();
* Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
if (usb->board->flags & GO7007_USB_EZUSB) {
/* Reset buffer in EZ-USB */
- dev_dbg(go->dev, "resetting EZ-USB buffers\n");
+ pr_debug("resetting EZ-USB buffers\n");
if (go7007_usb_vendor_request(go, 0x10, 0, 0, NULL, 0, 0) < 0 ||
go7007_usb_vendor_request(go, 0x10, 0, 0, NULL, 0, 0) < 0)
return -1;
u16 status_reg = 0;
int timeout = 500;
- dev_dbg(go->dev, "WriteInterrupt: %04x %04x\n", addr, data);
+ pr_debug("WriteInterrupt: %04x %04x\n", addr, data);
for (i = 0; i < 100; ++i) {
r = usb_control_msg(usb->usbdev,
int r;
int timeout = 500;
- dev_dbg(go->dev, "WriteInterrupt: %04x %04x\n", addr, data);
+ pr_debug("WriteInterrupt: %04x %04x\n", addr, data);
go->usb_buf[0] = data & 0xff;
go->usb_buf[1] = data >> 8;
go->interrupt_available = 1;
go->interrupt_data = __le16_to_cpu(regs[0]);
go->interrupt_value = __le16_to_cpu(regs[1]);
- dev_dbg(go->dev, "ReadInterrupt: %04x %04x\n",
+ pr_debug("ReadInterrupt: %04x %04x\n",
go->interrupt_value, go->interrupt_data);
}
int transferred, pipe;
int timeout = 500;
- dev_dbg(go->dev, "DownloadBuffer sending %d bytes\n", len);
+ pr_debug("DownloadBuffer sending %d bytes\n", len);
if (usb->board->flags & GO7007_USB_EZUSB)
pipe = usb_sndbulkpipe(usb->usbdev, 2);
!(msgs[i].flags & I2C_M_RD) &&
(msgs[i + 1].flags & I2C_M_RD)) {
#ifdef GO7007_I2C_DEBUG
- dev_dbg(go->dev, "i2c write/read %d/%d bytes on %02x\n",
+ pr_debug("i2c write/read %d/%d bytes on %02x\n",
msgs[i].len, msgs[i + 1].len, msgs[i].addr);
#endif
buf[0] = 0x01;
buf[buf_len++] = msgs[++i].len;
} else if (msgs[i].flags & I2C_M_RD) {
#ifdef GO7007_I2C_DEBUG
- dev_dbg(go->dev, "i2c read %d bytes on %02x\n",
+ pr_debug("i2c read %d bytes on %02x\n",
msgs[i].len, msgs[i].addr);
#endif
buf[0] = 0x01;
buf_len = 4;
} else {
#ifdef GO7007_I2C_DEBUG
- dev_dbg(go->dev, "i2c write %d bytes on %02x\n",
+ pr_debug("i2c write %d bytes on %02x\n",
msgs[i].len, msgs[i].addr);
#endif
buf[0] = 0x00;
char *name;
int video_pipe, i, v_urb_len;
- dev_dbg(go->dev, "probing new GO7007 USB board\n");
+ pr_debug("probing new GO7007 USB board\n");
switch (id->driver_info) {
case GO7007_BOARDID_MATRIX_II:
board = &board_px_tv402u;
break;
case GO7007_BOARDID_LIFEVIEW_LR192:
- dev_err(go->dev, "The Lifeview TV Walker Ultra is not supported. Sorry!\n");
+ dev_err(&intf->dev, "The Lifeview TV Walker Ultra is not supported. Sorry!\n");
return -ENODEV;
name = "Lifeview TV Walker Ultra";
board = &board_lifeview_lr192;
break;
case GO7007_BOARDID_SENSORAY_2250:
- dev_info(go->dev, "Sensoray 2250 found\n");
+ dev_info(&intf->dev, "Sensoray 2250 found\n");
name = "Sensoray 2250/2251";
board = &board_sensoray_2250;
break;
board = &board_ads_usbav_709;
break;
default:
- dev_err(go->dev, "unknown board ID %d!\n",
+ dev_err(&intf->dev, "unknown board ID %d!\n",
(unsigned int)id->driver_info);
return -ENODEV;
}
sizeof(go->name));
break;
default:
- dev_dbg(go->dev, "unable to detect tuner type!\n");
+ pr_debug("unable to detect tuner type!\n");
break;
}
/* Configure tuner mode selection inputs connected
dev_err(nvec->dev,
"RX buffer overflow on %p: "
"Trying to write byte %u of %u\n",
- nvec->rx, nvec->rx->pos, NVEC_MSG_SIZE);
+ nvec->rx, nvec->rx ? nvec->rx->pos : 0,
+ NVEC_MSG_SIZE);
break;
default:
nvec->state = 0;
return _FAIL;
}
+	/* fix the flush_cam_entry bug in STOP AP mode */
+ psta->state |= WIFI_AP_STATE;
+ rtw_indicate_connect(padapter);
pmlmepriv->cur_network.join_res = true;/* for check if already set beacon */
return ret;
}
menuconfig TIDSPBRIDGE
tristate "DSP Bridge driver"
- depends on ARCH_OMAP3 && !ARCH_MULTIPLATFORM
+ depends on ARCH_OMAP3 && !ARCH_MULTIPLATFORM && BROKEN
select MAILBOX
select OMAP2PLUS_MBOX
help
/* This function maps kernel space memory to user space memory. */
static int bridge_mmap(struct file *filp, struct vm_area_struct *vma)
{
- u32 status;
+ struct omap_dsp_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
/* VM_IO | VM_DONTEXPAND | VM_DONTDUMP are set by remap_pfn_range() */
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vma->vm_start, vma->vm_end, vma->vm_page_prot,
vma->vm_flags);
- status = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
- vma->vm_end - vma->vm_start,
- vma->vm_page_prot);
- if (status != 0)
- status = -EAGAIN;
-
- return status;
+ return vm_iomap_memory(vma,
+ pdata->phys_mempool_base,
+ pdata->phys_mempool_size);
}
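
The rewrite above drops the open-coded remap_pfn_range() call in favour of
vm_iomap_memory(), which validates the VMA offset and length against the
physical region and derives the pfn itself. A minimal sketch of the resulting
idiom, where my_phys_base and my_phys_size are hypothetical stand-ins for a
driver's reserved memory pool:

	static int my_mmap(struct file *filp, struct vm_area_struct *vma)
	{
		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

		/* vm_iomap_memory() checks that vma->vm_pgoff plus the VMA
		 * size fits inside the physical region before calling
		 * io_remap_pfn_range() with the computed pfn. */
		return vm_iomap_memory(vma, my_phys_base, my_phys_size);
	}

This also closes the class of bugs where a user-supplied vm_pgoff could map
pages outside the intended pool.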
static const struct file_operations bridge_fops = {
DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "%s: Netdevice %s unregistered\n",
pDevice->dev->name, pDevice->apdev->name);
}
- free_netdev(pDevice->apdev);
+ if (pDevice->apdev)
+ free_netdev(pDevice->apdev);
pDevice->apdev = NULL;
pDevice->bEnable8021x = false;
pDevice->bEnableHostWEP = false;
u8 * pbyAgc;
u16 wLengthAgc;
u8 abyArray[256];
+ u8 data;
ntStatus = CONTROLnsRequestIn(pDevice,
MESSAGE_TYPE_READ,
ControlvWriteByte(pDevice,MESSAGE_REQUEST_BBREG,0x0D,0x01);
RFbRFTableDownload(pDevice);
+
+ /* Fix for TX USB resets from vendors driver */
+ CONTROLnsRequestIn(pDevice, MESSAGE_TYPE_READ, USB_REG4,
+ MESSAGE_REQUEST_MEM, sizeof(data), &data);
+
+ data |= 0x2;
+
+ CONTROLnsRequestOut(pDevice, MESSAGE_TYPE_WRITE, USB_REG4,
+ MESSAGE_REQUEST_MEM, sizeof(data), &data);
+
return true;//ntStatus;
}
DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "%s: Netdevice %s unregistered\n",
pDevice->dev->name, pDevice->apdev->name);
}
- free_netdev(pDevice->apdev);
+ if (pDevice->apdev)
+ free_netdev(pDevice->apdev);
pDevice->apdev = NULL;
pDevice->bEnable8021x = false;
pDevice->bEnableHostWEP = false;
#define VIAUSB20_PACKET_HEADER 0x04
+#define USB_REG4 0x604
+
typedef struct _CMD_MESSAGE
{
u8 byData[256];
return -ENOMEM;
/* Do not reset an active device! */
- if (bdev->bd_holders)
- return -EBUSY;
+ if (bdev->bd_holders) {
+ ret = -EBUSY;
+ goto out;
+ }
ret = kstrtou16(buf, 10, &do_reset);
if (ret)
- return ret;
+ goto out;
- if (!do_reset)
- return -EINVAL;
+ if (!do_reset) {
+ ret = -EINVAL;
+ goto out;
+ }
/* Make sure all pending I/O is finished */
fsync_bdev(bdev);
+ bdput(bdev);
zram_reset_device(zram, true);
return len;
+
+out:
+ bdput(bdev);
+ return ret;
}
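
The reset_store() rework above exists because the function took a block
device reference but returned directly from its error paths, leaking it. A
minimal sketch of the single-exit cleanup idiom it adopts, with get_thing()
and put_thing() as hypothetical stand-ins for bdget()/bdput():

	static ssize_t store_sketch(const char *buf, size_t len)
	{
		struct thing *t = get_thing();
		ssize_t ret;

		if (thing_busy(t)) {
			ret = -EBUSY;
			goto out;		/* all failures share one exit */
		}
		ret = do_work(t, buf, len);
	out:
		put_thing(t);			/* reference dropped on every path */
		return ret;
	}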
static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
return next;
}
-/* Encode <page, obj_idx> as a single handle value */
+/*
+ * Encode <page, obj_idx> as a single handle value.
+ * On hardware platforms with physical memory starting at 0x0 the pfn
+ * could be 0 so we ensure that the handle will never be 0 by adjusting the
+ * encoded obj_idx value before encoding.
+ */
static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)
{
unsigned long handle;
}
handle = page_to_pfn(page) << OBJ_INDEX_BITS;
- handle |= (obj_idx & OBJ_INDEX_MASK);
+ handle |= ((obj_idx + 1) & OBJ_INDEX_MASK);
return (void *)handle;
}
-/* Decode <page, obj_idx> pair from the given object handle */
+/*
+ * Decode <page, obj_idx> pair from the given object handle. We adjust the
+ * decoded obj_idx back to its original value since it was adjusted in
+ * obj_location_to_handle().
+ */
static void obj_handle_to_location(unsigned long handle, struct page **page,
unsigned long *obj_idx)
{
*page = pfn_to_page(handle >> OBJ_INDEX_BITS);
- *obj_idx = handle & OBJ_INDEX_MASK;
+ *obj_idx = (handle & OBJ_INDEX_MASK) - 1;
}
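
The comment pair above states the reasoning; a userspace round-trip makes it
concrete. This is a self-contained demo with OBJ_INDEX_BITS assumed to be 12
purely for illustration:

	#include <assert.h>

	#define OBJ_INDEX_BITS 12
	#define OBJ_INDEX_MASK ((1UL << OBJ_INDEX_BITS) - 1)

	static unsigned long encode(unsigned long pfn, unsigned long obj_idx)
	{
		/* +1 keeps the handle non-zero even for pfn 0, obj_idx 0 */
		return (pfn << OBJ_INDEX_BITS) | ((obj_idx + 1) & OBJ_INDEX_MASK);
	}

	static void decode(unsigned long handle,
			   unsigned long *pfn, unsigned long *obj_idx)
	{
		*pfn = handle >> OBJ_INDEX_BITS;
		*obj_idx = (handle & OBJ_INDEX_MASK) - 1;	/* undo the +1 */
	}

	int main(void)
	{
		unsigned long pfn, idx;

		assert(encode(0, 0) != 0);	/* the ambiguity being fixed */
		decode(encode(0, 0), &pfn, &idx);
		assert(pfn == 0 && idx == 0);	/* values round-trip intact */
		return 0;
	}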
static unsigned long obj_idx_to_offset(struct page *page,
*/
send_sig(SIGINT, np->np_thread, 1);
kthread_stop(np->np_thread);
+ np->np_thread = NULL;
}
np->np_transport->iscsit_free_np(np);
int iscsi_task_attr;
int sam_task_attr;
- spin_lock_bh(&conn->sess->session_stats_lock);
- conn->sess->cmd_pdus++;
- if (conn->sess->se_sess->se_node_acl) {
- spin_lock(&conn->sess->se_sess->se_node_acl->stats_lock);
- conn->sess->se_sess->se_node_acl->num_cmds++;
- spin_unlock(&conn->sess->se_sess->se_node_acl->stats_lock);
- }
- spin_unlock_bh(&conn->sess->session_stats_lock);
+ atomic_long_inc(&conn->sess->cmd_pdus);
hdr = (struct iscsi_scsi_req *) buf;
payload_length = ntoh24(hdr->dlength);
if (((hdr->flags & ISCSI_FLAG_CMD_READ) ||
(hdr->flags & ISCSI_FLAG_CMD_WRITE)) && !hdr->data_length) {
/*
- * Vmware ESX v3.0 uses a modified Cisco Initiator (v3.4.2)
- * that adds support for RESERVE/RELEASE. There is a bug
- * add with this new functionality that sets R/W bits when
- * neither CDB carries any READ or WRITE datapayloads.
+ * From RFC-3720 Section 10.3.1:
+ *
+ * "Either or both of R and W MAY be 1 when either the
+ * Expected Data Transfer Length and/or Bidirectional Read
+ * Expected Data Transfer Length are 0"
+ *
+ * For this case, go ahead and clear the unnecessary bits
+ * to avoid any confusion with ->data_direction.
*/
- if ((hdr->cdb[0] == 0x16) || (hdr->cdb[0] == 0x17)) {
- hdr->flags &= ~ISCSI_FLAG_CMD_READ;
- hdr->flags &= ~ISCSI_FLAG_CMD_WRITE;
- goto done;
- }
+ hdr->flags &= ~ISCSI_FLAG_CMD_READ;
+ hdr->flags &= ~ISCSI_FLAG_CMD_WRITE;
- pr_err("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE"
+ pr_warn("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE"
" set when Expected Data Transfer Length is 0 for"
- " CDB: 0x%02x. Bad iSCSI Initiator.\n", hdr->cdb[0]);
- return iscsit_add_reject_cmd(cmd,
- ISCSI_REASON_BOOKMARK_INVALID, buf);
+ " CDB: 0x%02x, Fixing up flags\n", hdr->cdb[0]);
}
-done:
if (!(hdr->flags & ISCSI_FLAG_CMD_READ) &&
!(hdr->flags & ISCSI_FLAG_CMD_WRITE) && (hdr->data_length != 0)) {
int rc;
if (!payload_length) {
- pr_err("DataOUT payload is ZERO, protocol error.\n");
- return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
- buf);
+ pr_warn("DataOUT payload is ZERO, ignoring.\n");
+ return 0;
}
/* iSCSI write */
- spin_lock_bh(&conn->sess->session_stats_lock);
- conn->sess->rx_data_octets += payload_length;
- if (conn->sess->se_sess->se_node_acl) {
- spin_lock(&conn->sess->se_sess->se_node_acl->stats_lock);
- conn->sess->se_sess->se_node_acl->write_bytes += payload_length;
- spin_unlock(&conn->sess->se_sess->se_node_acl->stats_lock);
- }
- spin_unlock_bh(&conn->sess->session_stats_lock);
+ atomic_long_add(payload_length, &conn->sess->rx_data_octets);
if (payload_length > conn->conn_ops->MaxXmitDataSegmentLength) {
pr_err("DataSegmentLength: %u is greater than"
static int iscsit_handle_data_out(struct iscsi_conn *conn, unsigned char *buf)
{
- struct iscsi_cmd *cmd;
+ struct iscsi_cmd *cmd = NULL;
struct iscsi_data *hdr = (struct iscsi_data *)buf;
int rc;
bool data_crc_failed = false;
(unsigned char *)hdr);
}
+ if (!(hdr->flags & ISCSI_FLAG_CMD_FINAL) ||
+ (hdr->flags & ISCSI_FLAG_TEXT_CONTINUE)) {
+ pr_err("Multi sequence text commands currently not supported\n");
+ return iscsit_reject_cmd(cmd, ISCSI_REASON_CMD_NOT_SUPPORTED,
+ (unsigned char *)hdr);
+ }
+
pr_debug("Got Text Request: ITT: 0x%08x, CmdSN: 0x%08x,"
" ExpStatSN: 0x%08x, Length: %u\n", hdr->itt, hdr->cmdsn,
hdr->exp_statsn, payload_length);
return -1;
}
- spin_lock_bh(&conn->sess->session_stats_lock);
- conn->sess->tx_data_octets += datain.length;
- if (conn->sess->se_sess->se_node_acl) {
- spin_lock(&conn->sess->se_sess->se_node_acl->stats_lock);
- conn->sess->se_sess->se_node_acl->read_bytes += datain.length;
- spin_unlock(&conn->sess->se_sess->se_node_acl->stats_lock);
- }
- spin_unlock_bh(&conn->sess->session_stats_lock);
+ atomic_long_add(datain.length, &conn->sess->tx_data_octets);
/*
* Special case for successful execution w/ both DATAIN
* and Sense Data.
if (inc_stat_sn)
cmd->stat_sn = conn->stat_sn++;
- spin_lock_bh(&conn->sess->session_stats_lock);
- conn->sess->rsp_pdus++;
- spin_unlock_bh(&conn->sess->session_stats_lock);
+ atomic_long_inc(&conn->sess->rsp_pdus);
memset(hdr, 0, ISCSI_HDR_LEN);
hdr->opcode = ISCSI_OP_SCSI_CMD_RSP;
struct iscsi_tiqn *tiqn;
struct iscsi_tpg_np *tpg_np;
int buffer_len, end_of_buf = 0, len = 0, payload_len = 0;
+ int target_name_printed;
unsigned char buf[ISCSI_IQN_LEN+12]; /* iqn + "TargetName=" + \0 */
unsigned char *text_in = cmd->text_in_ptr, *text_ptr = NULL;
continue;
}
- len = sprintf(buf, "TargetName=%s", tiqn->tiqn);
- len += 1;
-
- if ((len + payload_len) > buffer_len) {
- end_of_buf = 1;
- goto eob;
- }
- memcpy(payload + payload_len, buf, len);
- payload_len += len;
+ target_name_printed = 0;
spin_lock(&tiqn->tiqn_tpg_lock);
list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
+ /* If demo_mode_discovery=0 and generate_node_acls=0
+ * (demo mode disabled) do not return
+ * TargetName+TargetAddress unless a NodeACL exists.
+ */
+
+ if ((tpg->tpg_attrib.generate_node_acls == 0) &&
+ (tpg->tpg_attrib.demo_mode_discovery == 0) &&
+ (!core_tpg_get_initiator_node_acl(&tpg->tpg_se_tpg,
+ cmd->conn->sess->sess_ops->InitiatorName))) {
+ continue;
+ }
+
spin_lock(&tpg->tpg_state_lock);
if ((tpg->tpg_state == TPG_STATE_FREE) ||
(tpg->tpg_state == TPG_STATE_INACTIVE)) {
struct iscsi_np *np = tpg_np->tpg_np;
bool inaddr_any = iscsit_check_inaddr_any(np);
+ if (!target_name_printed) {
+ len = sprintf(buf, "TargetName=%s",
+ tiqn->tiqn);
+ len += 1;
+
+ if ((len + payload_len) > buffer_len) {
+ spin_unlock(&tpg->tpg_np_lock);
+ spin_unlock(&tiqn->tiqn_tpg_lock);
+ end_of_buf = 1;
+ goto eob;
+ }
+ memcpy(payload + payload_len, buf, len);
+ payload_len += len;
+ target_name_printed = 1;
+ }
+
len = sprintf(buf, "TargetAddress="
"%s:%hu,%hu",
(inaddr_any == false) ?
* hit default in the switch below.
*/
memset(buffer, 0xff, ISCSI_HDR_LEN);
- spin_lock_bh(&conn->sess->session_stats_lock);
- conn->sess->conn_digest_errors++;
- spin_unlock_bh(&conn->sess->session_stats_lock);
+ atomic_long_inc(&conn->sess->conn_digest_errors);
} else {
pr_debug("Got HeaderDigest CRC32C"
" 0x%08x\n", checksum);
int iscsit_close_session(struct iscsi_session *sess)
{
- struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+ struct iscsi_portal_group *tpg = sess->tpg;
struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
if (atomic_read(&sess->nconn)) {
/*
* Set Identifier.
*/
- chap->id = ISCSI_TPG_C(conn)->tpg_chap_id++;
+ chap->id = conn->tpg->tpg_chap_id++;
*aic_len += sprintf(aic_str + *aic_len, "CHAP_I=%d", chap->id);
*aic_len += 1;
pr_debug("[server] Sending CHAP_I=%d\n", chap->id);
unsigned char client_digest[MD5_SIGNATURE_SIZE];
unsigned char server_digest[MD5_SIGNATURE_SIZE];
unsigned char chap_n[MAX_CHAP_N_SIZE], chap_r[MAX_RESPONSE_LENGTH];
+ size_t compare_len;
struct iscsi_chap *chap = conn->auth_protocol;
struct crypto_hash *tfm;
struct hash_desc desc;
goto out;
}
- if (memcmp(chap_n, auth->userid, strlen(auth->userid)) != 0) {
+ /* Include the terminating NULL in the compare */
+ compare_len = strlen(auth->userid) + 1;
+ if (strncmp(chap_n, auth->userid, compare_len) != 0) {
pr_err("CHAP_N values do not match!\n");
goto out;
}
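
The compare_len change above matters because memcmp() bounded by
strlen(auth->userid) is a pure prefix test. A small userspace demonstration
of the difference:

	#include <assert.h>
	#include <string.h>

	int main(void)
	{
		const char *userid = "alice";	/* configured CHAP user */
		const char *chap_n = "alice2";	/* initiator-supplied name */

		/* old test: a longer name sharing the prefix still matches */
		assert(memcmp(chap_n, userid, strlen(userid)) == 0);

		/* new test: comparing through the terminating NUL rejects it */
		assert(strncmp(chap_n, userid, strlen(userid) + 1) != 0);
		return 0;
	}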
struct iscsi_node_acl *nacl = container_of(se_nacl, struct iscsi_node_acl, \
se_node_acl); \
\
- return sprintf(page, "%u\n", ISCSI_NODE_ATTRIB(nacl)->name); \
+ return sprintf(page, "%u\n", nacl->node_attrib.name); \
} \
\
static ssize_t iscsi_nacl_attrib_store_##name( \
\
if (!capable(CAP_SYS_ADMIN)) \
return -EPERM; \
- \
+ if (count >= sizeof(auth->name)) \
+ return -EINVAL; \
snprintf(auth->name, sizeof(auth->name), "%s", page); \
if (!strncmp("NULL", auth->name, 4)) \
auth->naf_flags &= ~flags; \
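
The added length check above guards against silent truncation: snprintf()
would clip an over-long userid or password to sizeof(auth->name) - 1 bytes
and the store would still report success. A userspace illustration:

	#include <assert.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char name[8];
		const char *page = "longer-than-eight";

		snprintf(name, sizeof(name), "%s", page);
		assert(strcmp(name, page) != 0);	/* silently truncated */

		/* the patch rejects such writes before snprintf() runs */
		assert(strlen(page) >= sizeof(name));	/* -> -EINVAL */
		return 0;
	}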
if (!se_nacl_new)
return ERR_PTR(-ENOMEM);
- cmdsn_depth = ISCSI_TPG_ATTRIB(tpg)->default_cmdsn_depth;
+ cmdsn_depth = tpg->tpg_attrib.default_cmdsn_depth;
/*
* se_nacl_new may be released by core_tpg_add_initiator_node_acl()
* when converting a NodeACL from demo mode -> explicit
return ERR_PTR(-ENOMEM);
}
- stats_cg->default_groups[0] = &NODE_STAT_GRPS(acl)->iscsi_sess_stats_group;
+ stats_cg->default_groups[0] = &acl->node_stat_grps.iscsi_sess_stats_group;
stats_cg->default_groups[1] = NULL;
- config_group_init_type_name(&NODE_STAT_GRPS(acl)->iscsi_sess_stats_group,
+ config_group_init_type_name(&acl->node_stat_grps.iscsi_sess_stats_group,
"iscsi_sess_stats", &iscsi_stat_sess_cit);
return se_nacl;
if (iscsit_get_tpg(tpg) < 0) \
return -EINVAL; \
\
- rb = sprintf(page, "%u\n", ISCSI_TPG_ATTRIB(tpg)->name); \
+ rb = sprintf(page, "%u\n", tpg->tpg_attrib.name); \
iscsit_put_tpg(tpg); \
return rb; \
} \
*/
DEF_TPG_ATTRIB(prod_mode_write_protect);
TPG_ATTR(prod_mode_write_protect, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_demo_mode_discovery
+ */
+DEF_TPG_ATTRIB(demo_mode_discovery);
+TPG_ATTR(demo_mode_discovery, S_IRUGO | S_IWUSR);
+/*
+ * Define iscsi_tpg_attrib_s_default_erl
+ */
+DEF_TPG_ATTRIB(default_erl);
+TPG_ATTR(default_erl, S_IRUGO | S_IWUSR);
static struct configfs_attribute *lio_target_tpg_attrib_attrs[] = {
&iscsi_tpg_attrib_authentication.attr,
&iscsi_tpg_attrib_cache_dynamic_acls.attr,
&iscsi_tpg_attrib_demo_mode_write_protect.attr,
&iscsi_tpg_attrib_prod_mode_write_protect.attr,
+ &iscsi_tpg_attrib_demo_mode_discovery.attr,
+ &iscsi_tpg_attrib_default_erl.attr,
NULL,
};
return ERR_PTR(-ENOMEM);
}
- stats_cg->default_groups[0] = &WWN_STAT_GRPS(tiqn)->iscsi_instance_group;
- stats_cg->default_groups[1] = &WWN_STAT_GRPS(tiqn)->iscsi_sess_err_group;
- stats_cg->default_groups[2] = &WWN_STAT_GRPS(tiqn)->iscsi_tgt_attr_group;
- stats_cg->default_groups[3] = &WWN_STAT_GRPS(tiqn)->iscsi_login_stats_group;
- stats_cg->default_groups[4] = &WWN_STAT_GRPS(tiqn)->iscsi_logout_stats_group;
+ stats_cg->default_groups[0] = &tiqn->tiqn_stat_grps.iscsi_instance_group;
+ stats_cg->default_groups[1] = &tiqn->tiqn_stat_grps.iscsi_sess_err_group;
+ stats_cg->default_groups[2] = &tiqn->tiqn_stat_grps.iscsi_tgt_attr_group;
+ stats_cg->default_groups[3] = &tiqn->tiqn_stat_grps.iscsi_login_stats_group;
+ stats_cg->default_groups[4] = &tiqn->tiqn_stat_grps.iscsi_logout_stats_group;
stats_cg->default_groups[5] = NULL;
- config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_instance_group,
+ config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_instance_group,
"iscsi_instance", &iscsi_stat_instance_cit);
- config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_sess_err_group,
+ config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_sess_err_group,
"iscsi_sess_err", &iscsi_stat_sess_err_cit);
- config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_tgt_attr_group,
+ config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_tgt_attr_group,
"iscsi_tgt_attr", &iscsi_stat_tgt_attr_cit);
- config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_login_stats_group,
+ config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_login_stats_group,
"iscsi_login_stats", &iscsi_stat_login_cit);
- config_group_init_type_name(&WWN_STAT_GRPS(tiqn)->iscsi_logout_stats_group,
+ config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_logout_stats_group,
"iscsi_logout_stats", &iscsi_stat_logout_cit);
pr_debug("LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd);
cmd->i_state = ISTATE_SEND_STATUS;
+
+ if (cmd->se_cmd.scsi_status || cmd->sense_reason) {
+ iscsit_add_cmd_to_response_queue(cmd, cmd->conn, cmd->i_state);
+ return 0;
+ }
cmd->conn->conn_transport->iscsit_queue_status(cmd->conn, cmd);
return 0;
{
struct iscsi_portal_group *tpg = se_tpg->se_tpg_fabric_ptr;
- return ISCSI_TPG_ATTRIB(tpg)->default_cmdsn_depth;
+ return tpg->tpg_attrib.default_cmdsn_depth;
}
static int lio_tpg_check_demo_mode(struct se_portal_group *se_tpg)
{
struct iscsi_portal_group *tpg = se_tpg->se_tpg_fabric_ptr;
- return ISCSI_TPG_ATTRIB(tpg)->generate_node_acls;
+ return tpg->tpg_attrib.generate_node_acls;
}
static int lio_tpg_check_demo_mode_cache(struct se_portal_group *se_tpg)
{
struct iscsi_portal_group *tpg = se_tpg->se_tpg_fabric_ptr;
- return ISCSI_TPG_ATTRIB(tpg)->cache_dynamic_acls;
+ return tpg->tpg_attrib.cache_dynamic_acls;
}
static int lio_tpg_check_demo_mode_write_protect(
{
struct iscsi_portal_group *tpg = se_tpg->se_tpg_fabric_ptr;
- return ISCSI_TPG_ATTRIB(tpg)->demo_mode_write_protect;
+ return tpg->tpg_attrib.demo_mode_write_protect;
}
static int lio_tpg_check_prod_mode_write_protect(
{
struct iscsi_portal_group *tpg = se_tpg->se_tpg_fabric_ptr;
- return ISCSI_TPG_ATTRIB(tpg)->prod_mode_write_protect;
+ return tpg->tpg_attrib.prod_mode_write_protect;
}
static void lio_tpg_release_fabric_acl(
{
struct iscsi_node_acl *acl = container_of(se_acl, struct iscsi_node_acl,
se_node_acl);
+ struct se_portal_group *se_tpg = se_acl->se_tpg;
+ struct iscsi_portal_group *tpg = container_of(se_tpg,
+ struct iscsi_portal_group, tpg_se_tpg);
- ISCSI_NODE_ATTRIB(acl)->nacl = acl;
- iscsit_set_default_node_attribues(acl);
+ acl->node_attrib.nacl = acl;
+ iscsit_set_default_node_attribues(acl, tpg);
}
static int lio_check_stop_free(struct se_cmd *se_cmd)
* Setup default attribute lists for various fabric->tf_cit_tmpl
* struct config_item_type's
*/
- TF_CIT_TMPL(fabric)->tfc_discovery_cit.ct_attrs = lio_target_discovery_auth_attrs;
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = lio_target_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = lio_target_tpg_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = lio_target_tpg_attrib_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_auth_cit.ct_attrs = lio_target_tpg_auth_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = lio_target_tpg_param_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = lio_target_portal_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = lio_target_initiator_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = lio_target_nacl_attrib_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = lio_target_nacl_auth_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = lio_target_nacl_param_attrs;
+ fabric->tf_cit_tmpl.tfc_discovery_cit.ct_attrs = lio_target_discovery_auth_attrs;
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = lio_target_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = lio_target_tpg_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = lio_target_tpg_attrib_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_auth_cit.ct_attrs = lio_target_tpg_auth_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = lio_target_tpg_param_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = lio_target_portal_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = lio_target_initiator_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = lio_target_nacl_attrib_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = lio_target_nacl_auth_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = lio_target_nacl_param_attrs;
ret = target_fabric_configfs_register(fabric);
if (ret < 0) {
#define NA_RANDOM_DATAIN_PDU_OFFSETS 0
#define NA_RANDOM_DATAIN_SEQ_OFFSETS 0
#define NA_RANDOM_R2T_OFFSETS 0
-#define NA_DEFAULT_ERL 0
-#define NA_DEFAULT_ERL_MAX 2
-#define NA_DEFAULT_ERL_MIN 0
/* struct iscsi_tpg_attrib sanity values */
#define TA_AUTHENTICATION 1
#define TA_DEMO_MODE_WRITE_PROTECT 1
/* Disabled by default in production mode w/ explicit ACLs */
#define TA_PROD_MODE_WRITE_PROTECT 0
+#define TA_DEMO_MODE_DISCOVERY 1
+#define TA_DEFAULT_ERL 0
#define TA_CACHE_CORE_NPS 0
CMDSN_NORMAL_OPERATION = 0,
CMDSN_LOWER_THAN_EXP = 1,
CMDSN_HIGHER_THAN_EXP = 2,
+ CMDSN_MAXCMDSN_OVERRUN = 3,
};
/* Used for iscsi_handle_immediate_data() return values */
/* Used for session reference counting */
int session_usage_count;
int session_waiting_on_uc;
- u32 cmd_pdus;
- u32 rsp_pdus;
- u64 tx_data_octets;
- u64 rx_data_octets;
- u32 conn_digest_errors;
- u32 conn_timeout_errors;
+ atomic_long_t cmd_pdus;
+ atomic_long_t rsp_pdus;
+ atomic_long_t tx_data_octets;
+ atomic_long_t rx_data_octets;
+ atomic_long_t conn_digest_errors;
+ atomic_long_t conn_timeout_errors;
u64 creation_time;
- spinlock_t session_stats_lock;
/* Number of active connections */
atomic_t nconn;
atomic_t session_continuation;
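
The struct change above removes session_stats_lock entirely: each counter
becomes an independent atomic, so fast-path increments stop serializing
unrelated statistics behind one spinlock. A userspace analogue using C11
atomics (the kernel uses atomic_long_t with atomic_long_inc() and
atomic_long_read()):

	#include <stdatomic.h>
	#include <stdio.h>

	struct session_stats {
		atomic_long cmd_pdus;	/* one independent atomic per counter */
		atomic_long rsp_pdus;
	};

	int main(void)
	{
		struct session_stats s = {0};

		atomic_fetch_add(&s.cmd_pdus, 1);	/* lock-free hot path */
		printf("%ld\n", atomic_load(&s.cmd_pdus));	/* lock-free read */
		return 0;
	}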
struct se_node_acl se_node_acl;
};
-#define NODE_STAT_GRPS(nacl) (&(nacl)->node_stat_grps)
-
-#define ISCSI_NODE_ATTRIB(t) (&(t)->node_attrib)
-#define ISCSI_NODE_AUTH(t) (&(t)->node_auth)
-
struct iscsi_tpg_attrib {
u32 authentication;
u32 login_timeout;
u32 default_cmdsn_depth;
u32 demo_mode_write_protect;
u32 prod_mode_write_protect;
+ u32 demo_mode_discovery;
+ u32 default_erl;
struct iscsi_portal_group *tpg;
};
struct list_head tpg_list;
} ____cacheline_aligned;
-#define ISCSI_TPG_C(c) ((struct iscsi_portal_group *)(c)->tpg)
-#define ISCSI_TPG_LUN(c, l) ((iscsi_tpg_list_t *)(c)->tpg->tpg_lun_list_t[l])
-#define ISCSI_TPG_S(s) ((struct iscsi_portal_group *)(s)->tpg)
-#define ISCSI_TPG_ATTRIB(t) (&(t)->tpg_attrib)
-#define SE_TPG(tpg) (&(tpg)->tpg_se_tpg)
-
struct iscsi_wwn_stat_grps {
struct config_group iscsi_stat_group;
struct config_group iscsi_instance_group;
struct iscsi_logout_stats logout_stats;
} ____cacheline_aligned;
-#define WWN_STAT_GRPS(tiqn) (&(tiqn)->tiqn_stat_grps)
-
struct iscsit_global {
/* In core shutdown */
u32 in_shutdown;
cmd->maxcmdsn_inc = 1;
- if (!mutex_trylock(&sess->cmdsn_mutex)) {
- sess->max_cmd_sn += 1;
- pr_debug("Updated MaxCmdSN to 0x%08x\n", sess->max_cmd_sn);
- return;
- }
+ mutex_lock(&sess->cmdsn_mutex);
sess->max_cmd_sn += 1;
pr_debug("Updated MaxCmdSN to 0x%08x\n", sess->max_cmd_sn);
mutex_unlock(&sess->cmdsn_mutex);
static void iscsit_handle_time2retain_timeout(unsigned long data)
{
struct iscsi_session *sess = (struct iscsi_session *) data;
- struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+ struct iscsi_portal_group *tpg = sess->tpg;
struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
spin_lock_bh(&se_tpg->session_lock);
tiqn->sess_err_stats.last_sess_failure_type =
ISCSI_SESS_ERR_CXN_TIMEOUT;
tiqn->sess_err_stats.cxn_timeout_errors++;
- sess->conn_timeout_errors++;
+ atomic_long_inc(&sess->conn_timeout_errors);
spin_unlock(&tiqn->sess_err_stats.lock);
}
}
* Only start Time2Retain timer when the associated TPG is still in
* an ACTIVE (eg: not disabled or shutdown) state.
*/
- spin_lock(&ISCSI_TPG_S(sess)->tpg_state_lock);
- tpg_active = (ISCSI_TPG_S(sess)->tpg_state == TPG_STATE_ACTIVE);
- spin_unlock(&ISCSI_TPG_S(sess)->tpg_state_lock);
+ spin_lock(&sess->tpg->tpg_state_lock);
+ tpg_active = (sess->tpg->tpg_state == TPG_STATE_ACTIVE);
+ spin_unlock(&sess->tpg->tpg_state_lock);
if (!tpg_active)
return;
*/
int iscsit_stop_time2retain_timer(struct iscsi_session *sess)
{
- struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+ struct iscsi_portal_group *tpg = sess->tpg;
struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
if (sess->time2retain_timer_flags & ISCSI_TF_EXPIRED)
}
sess->creation_time = get_jiffies_64();
- spin_lock_init(&sess->session_stats_lock);
/*
* The FFP CmdSN window values will be allocated from the TPG's
* Initiator Node's ACL once the login has been successfully completed.
* Assign a new TPG Session Handle. Note this is protected with
* struct iscsi_portal_group->np_login_sem from iscsit_access_np().
*/
- sess->tsih = ++ISCSI_TPG_S(sess)->ntsih;
+ sess->tsih = ++sess->tpg->ntsih;
if (!sess->tsih)
- sess->tsih = ++ISCSI_TPG_S(sess)->ntsih;
+ sess->tsih = ++sess->tpg->ntsih;
/*
* Create the default params from user-defined values.
*/
if (iscsi_copy_param_list(&conn->param_list,
- ISCSI_TPG_C(conn)->param_list, 1) < 0) {
+ conn->tpg->param_list, 1) < 0) {
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
ISCSI_LOGIN_STATUS_NO_RESOURCES);
return -1;
* In our case, we have already located the struct iscsi_tiqn at this point.
*/
memset(buf, 0, 32);
- sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt);
+ sprintf(buf, "TargetPortalGroupTag=%hu", sess->tpg->tpgt);
if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) {
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
ISCSI_LOGIN_STATUS_NO_RESOURCES);
iscsi_login_set_conn_values(sess, conn, pdu->cid);
if (iscsi_copy_param_list(&conn->param_list,
- ISCSI_TPG_C(conn)->param_list, 0) < 0) {
+ conn->tpg->param_list, 0) < 0) {
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
ISCSI_LOGIN_STATUS_NO_RESOURCES);
return -1;
* In our case, we have already located the struct iscsi_tiqn at this point.
*/
memset(buf, 0, 32);
- sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt);
+ sprintf(buf, "TargetPortalGroupTag=%hu", sess->tpg->tpgt);
if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) {
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
ISCSI_LOGIN_STATUS_NO_RESOURCES);
int stop_timer = 0;
struct iscsi_session *sess = conn->sess;
struct se_session *se_sess = sess->se_sess;
- struct iscsi_portal_group *tpg = ISCSI_TPG_S(sess);
+ struct iscsi_portal_group *tpg = sess->tpg;
struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
struct iscsi_thread_set *ts;
spin_lock_bh(&conn->sess->conn_lock);
if (conn->sess->session_state == TARG_SESS_STATE_FAILED) {
struct se_portal_group *se_tpg =
- &ISCSI_TPG_C(conn)->tpg_se_tpg;
+ &conn->tpg->tpg_se_tpg;
atomic_set(&conn->sess->session_continuation, 0);
spin_unlock_bh(&conn->sess->conn_lock);
out:
stop = kthread_should_stop();
- if (!stop && signal_pending(current)) {
- spin_lock_bh(&np->np_thread_lock);
- stop = (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN);
- spin_unlock_bh(&np->np_thread_lock);
- }
/* Wait for another socket.. */
if (!stop)
return 1;
iscsi_stop_login_thread_timer(np);
spin_lock_bh(&np->np_thread_lock);
np->np_thread_state = ISCSI_NP_THREAD_EXIT;
- np->np_thread = NULL;
spin_unlock_bh(&np->np_thread_lock);
return 0;
if (len < 0)
return -1;
- if (len > max_length) {
+ if (len >= max_length) {
pr_err("Length of input: %d exceeds max_length:"
" %d\n", len, max_length);
return -1;
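
The comparison fix above is a textbook off-by-one: with a destination buffer
of max_length bytes, an input of exactly max_length characters leaves no room
for the terminator, yet the old '>' test accepted it. In miniature:

	#include <assert.h>

	int main(void)
	{
		const int max_length = 8;	/* destination is char buf[8] */
		int len = 8;			/* input exactly fills it */

		assert(!(len > max_length));	/* old check let this through */
		assert(len >= max_length);	/* new check rejects it */
		return 0;
	}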
iscsi_nacl = container_of(se_nacl, struct iscsi_node_acl,
se_node_acl);
- auth = ISCSI_NODE_AUTH(iscsi_nacl);
+ auth = &iscsi_nacl->node_auth;
}
} else {
/*
return -1;
if (!iscsi_check_negotiated_keys(conn->param_list)) {
- if (ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication &&
+ if (conn->tpg->tpg_attrib.authentication &&
!strncmp(param->value, NONE, 4)) {
pr_err("Initiator sent AuthMethod=None but"
" Target is enforcing iSCSI Authentication,"
return -1;
}
- if (ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication &&
+ if (conn->tpg->tpg_attrib.authentication &&
!login->auth_complete)
return 0;
}
if (!login->auth_complete &&
- ISCSI_TPG_ATTRIB(ISCSI_TPG_C(conn))->authentication) {
+ conn->tpg->tpg_attrib.authentication) {
pr_err("Initiator is requesting CSG: 1, has not been"
" successfully authenticated, and the Target is"
" enforcing iSCSI Authentication, login failed.\n");
}
void iscsit_set_default_node_attribues(
- struct iscsi_node_acl *acl)
+ struct iscsi_node_acl *acl,
+ struct iscsi_portal_group *tpg)
{
struct iscsi_node_attrib *a = &acl->node_attrib;
a->random_datain_pdu_offsets = NA_RANDOM_DATAIN_PDU_OFFSETS;
a->random_datain_seq_offsets = NA_RANDOM_DATAIN_SEQ_OFFSETS;
a->random_r2t_offsets = NA_RANDOM_R2T_OFFSETS;
- a->default_erl = NA_DEFAULT_ERL;
+ a->default_erl = tpg->tpg_attrib.default_erl;
}
int iscsit_na_dataout_timeout(
#ifndef ISCSI_TARGET_NODEATTRIB_H
#define ISCSI_TARGET_NODEATTRIB_H
-extern void iscsit_set_default_node_attribues(struct iscsi_node_acl *);
+extern void iscsit_set_default_node_attribues(struct iscsi_node_acl *,
+ struct iscsi_portal_group *);
extern int iscsit_na_dataout_timeout(struct iscsi_node_acl *, u32);
extern int iscsit_na_dataout_timeout_retries(struct iscsi_node_acl *, u32);
extern int iscsit_na_nopin_timeout(struct iscsi_node_acl *, u32);
if (se_sess) {
sess = se_sess->fabric_sess_ptr;
if (sess)
- ret = snprintf(page, PAGE_SIZE, "%u\n", sess->cmd_pdus);
+ ret = snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&sess->cmd_pdus));
}
spin_unlock_bh(&se_nacl->nacl_sess_lock);
if (se_sess) {
sess = se_sess->fabric_sess_ptr;
if (sess)
- ret = snprintf(page, PAGE_SIZE, "%u\n", sess->rsp_pdus);
+ ret = snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&sess->rsp_pdus));
}
spin_unlock_bh(&se_nacl->nacl_sess_lock);
if (se_sess) {
sess = se_sess->fabric_sess_ptr;
if (sess)
- ret = snprintf(page, PAGE_SIZE, "%llu\n",
- (unsigned long long)sess->tx_data_octets);
+ ret = snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&sess->tx_data_octets));
}
spin_unlock_bh(&se_nacl->nacl_sess_lock);
if (se_sess) {
sess = se_sess->fabric_sess_ptr;
if (sess)
- ret = snprintf(page, PAGE_SIZE, "%llu\n",
- (unsigned long long)sess->rx_data_octets);
+ ret = snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&sess->rx_data_octets));
}
spin_unlock_bh(&se_nacl->nacl_sess_lock);
if (se_sess) {
sess = se_sess->fabric_sess_ptr;
if (sess)
- ret = snprintf(page, PAGE_SIZE, "%u\n",
- sess->conn_digest_errors);
+ ret = snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&sess->conn_digest_errors));
}
spin_unlock_bh(&se_nacl->nacl_sess_lock);
if (se_sess) {
sess = se_sess->fabric_sess_ptr;
if (sess)
- ret = snprintf(page, PAGE_SIZE, "%u\n",
- sess->conn_timeout_errors);
+ ret = snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&sess->conn_timeout_errors));
}
spin_unlock_bh(&se_nacl->nacl_sess_lock);
a->cache_dynamic_acls = TA_CACHE_DYNAMIC_ACLS;
a->demo_mode_write_protect = TA_DEMO_MODE_WRITE_PROTECT;
a->prod_mode_write_protect = TA_PROD_MODE_WRITE_PROTECT;
+ a->demo_mode_discovery = TA_DEMO_MODE_DISCOVERY;
+ a->default_erl = TA_DEFAULT_ERL;
}
int iscsit_tpg_add_portal_group(struct iscsi_tiqn *tiqn, struct iscsi_portal_group *tpg)
if (iscsi_create_default_params(&tpg->param_list) < 0)
goto err_out;
- ISCSI_TPG_ATTRIB(tpg)->tpg = tpg;
+ tpg->tpg_attrib.tpg = tpg;
spin_lock(&tpg->tpg_state_lock);
tpg->tpg_state = TPG_STATE_INACTIVE;
return -EINVAL;
}
- if (ISCSI_TPG_ATTRIB(tpg)->authentication) {
+ if (tpg->tpg_attrib.authentication) {
if (!strcmp(param->value, NONE)) {
ret = iscsi_update_param_value(param, CHAP);
if (ret)
return 0;
}
+
+int iscsit_ta_demo_mode_discovery(
+ struct iscsi_portal_group *tpg,
+ u32 flag)
+{
+ struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+ if ((flag != 0) && (flag != 1)) {
+ pr_err("Illegal value %d\n", flag);
+ return -EINVAL;
+ }
+
+ a->demo_mode_discovery = flag;
+ pr_debug("iSCSI_TPG[%hu] - Demo Mode Discovery bit:"
+ " %s\n", tpg->tpgt, (a->demo_mode_discovery) ?
+ "ON" : "OFF");
+
+ return 0;
+}
+
+int iscsit_ta_default_erl(
+ struct iscsi_portal_group *tpg,
+ u32 default_erl)
+{
+ struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
+
+ if ((default_erl != 0) && (default_erl != 1) && (default_erl != 2)) {
+ pr_err("Illegal value for default_erl: %u\n", default_erl);
+ return -EINVAL;
+ }
+
+ a->default_erl = default_erl;
+ pr_debug("iSCSI_TPG[%hu] - DefaultERL: %u\n", tpg->tpgt, a->default_erl);
+
+ return 0;
+}
extern int iscsit_ta_cache_dynamic_acls(struct iscsi_portal_group *, u32);
extern int iscsit_ta_demo_mode_write_protect(struct iscsi_portal_group *, u32);
extern int iscsit_ta_prod_mode_write_protect(struct iscsi_portal_group *, u32);
+extern int iscsit_ta_demo_mode_discovery(struct iscsi_portal_group *, u32);
+extern int iscsit_ta_default_erl(struct iscsi_portal_group *, u32);
#endif /* ISCSI_TARGET_TPG_H */
*/
if (iscsi_sna_gt(cmdsn, sess->max_cmd_sn)) {
pr_err("Received CmdSN: 0x%08x is greater than"
- " MaxCmdSN: 0x%08x, protocol error.\n", cmdsn,
+ " MaxCmdSN: 0x%08x, ignoring.\n", cmdsn,
sess->max_cmd_sn);
- ret = CMDSN_ERROR_CANNOT_RECOVER;
+ ret = CMDSN_MAXCMDSN_OVERRUN;
} else if (cmdsn == sess->exp_cmd_sn) {
sess->exp_cmd_sn++;
ret = CMDSN_HIGHER_THAN_EXP;
break;
case CMDSN_LOWER_THAN_EXP:
+ case CMDSN_MAXCMDSN_OVERRUN:
+ default:
cmd->i_state = ISTATE_REMOVE;
iscsit_add_cmd_to_immediate_queue(cmd, conn, cmd->i_state);
- ret = cmdsn_ret;
- break;
- default:
- reason = ISCSI_REASON_PROTOCOL_ERROR;
- reject = true;
- ret = cmdsn_ret;
+ /*
+ * Existing callers for iscsit_sequence_cmd() will silently
+ * ignore commands with CMDSN_LOWER_THAN_EXP, so force this
+ * return for CMDSN_MAXCMDSN_OVERRUN as well.
+ */
+ ret = CMDSN_LOWER_THAN_EXP;
break;
}
mutex_unlock(&conn->sess->cmdsn_mutex);
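
The window checks in this hunk rest on iscsi_sna_gt(), i.e. 32-bit serial
number arithmetic in the style of RFC 1982, under which ordering stays well
defined across counter wraparound. A userspace sketch of one common
formulation (not necessarily the kernel helper's exact code):

	#include <assert.h>
	#include <stdint.h>

	static int sna_gt(uint32_t a, uint32_t b)
	{
		return a != b &&
		       ((a > b && a - b < (UINT32_C(1) << 31)) ||
			(a < b && b - a > (UINT32_C(1) << 31)));
	}

	int main(void)
	{
		assert(sna_gt(1, 0));		/* ordinary ordering */
		assert(sna_gt(0, 0xFFFFFFFF));	/* still "greater" after wrap */
		assert(!sna_gt(0xFFFFFFFF, 0));
		return 0;
	}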
tiqn->sess_err_stats.last_sess_failure_type =
ISCSI_SESS_ERR_CXN_TIMEOUT;
tiqn->sess_err_stats.cxn_timeout_errors++;
- conn->sess->conn_timeout_errors++;
+ atomic_long_inc(&conn->sess->conn_timeout_errors);
spin_unlock_bh(&tiqn->sess_err_stats.lock);
}
}
return sdev->queue_depth;
}
+static int tcm_loop_change_queue_type(struct scsi_device *sdev, int tag)
+{
+ if (sdev->tagged_supported) {
+ scsi_set_tag_type(sdev, tag);
+
+ if (tag)
+ scsi_activate_tcq(sdev, sdev->queue_depth);
+ else
+ scsi_deactivate_tcq(sdev, sdev->queue_depth);
+ } else
+ tag = 0;
+
+ return tag;
+}
+
/*
* Locate the SAM Task Attr from struct scsi_cmnd *
*/
set_host_byte(sc, DID_NO_CONNECT);
goto out_done;
}
-
+ if (tl_tpg->tl_transport_status == TCM_TRANSPORT_OFFLINE) {
+ set_host_byte(sc, DID_TRANSPORT_DISRUPTED);
+ goto out_done;
+ }
tl_nexus = tl_hba->tl_nexus;
if (!tl_nexus) {
scmd_printk(KERN_ERR, sc, "TCM_Loop I_T Nexus"
}
tl_cmd->sc = sc;
+ tl_cmd->sc_cmd_tag = sc->tag;
INIT_WORK(&tl_cmd->work, tcm_loop_submission_work);
queue_work(tcm_loop_workqueue, &tl_cmd->work);
return 0;
* Called from SCSI EH process context to issue a LUN_RESET TMR
* to struct scsi_device
*/
-static int tcm_loop_device_reset(struct scsi_cmnd *sc)
+static int tcm_loop_issue_tmr(struct tcm_loop_tpg *tl_tpg,
+ struct tcm_loop_nexus *tl_nexus,
+ int lun, int task, enum tcm_tmreq_table tmr)
{
struct se_cmd *se_cmd = NULL;
- struct se_portal_group *se_tpg;
struct se_session *se_sess;
+ struct se_portal_group *se_tpg;
struct tcm_loop_cmd *tl_cmd = NULL;
- struct tcm_loop_hba *tl_hba;
- struct tcm_loop_nexus *tl_nexus;
struct tcm_loop_tmr *tl_tmr = NULL;
- struct tcm_loop_tpg *tl_tpg;
- int ret = FAILED, rc;
- /*
- * Locate the tcm_loop_hba_t pointer
- */
- tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host);
- /*
- * Locate the tl_nexus and se_sess pointers
- */
- tl_nexus = tl_hba->tl_nexus;
- if (!tl_nexus) {
- pr_err("Unable to perform device reset without"
- " active I_T Nexus\n");
- return FAILED;
- }
- se_sess = tl_nexus->se_sess;
- /*
- * Locate the tl_tpg and se_tpg pointers from TargetID in sc->device->id
- */
- tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id];
- se_tpg = &tl_tpg->tl_se_tpg;
+ int ret = TMR_FUNCTION_FAILED, rc;
tl_cmd = kmem_cache_zalloc(tcm_loop_cmd_cache, GFP_KERNEL);
if (!tl_cmd) {
pr_err("Unable to allocate memory for tl_cmd\n");
- return FAILED;
+ return ret;
}
tl_tmr = kzalloc(sizeof(struct tcm_loop_tmr), GFP_KERNEL);
init_waitqueue_head(&tl_tmr->tl_tmr_wait);
se_cmd = &tl_cmd->tl_se_cmd;
+ se_tpg = &tl_tpg->tl_se_tpg;
+ se_sess = tl_nexus->se_sess;
/*
* Initialize struct se_cmd descriptor from target_core_mod infrastructure
*/
DMA_NONE, MSG_SIMPLE_TAG,
&tl_cmd->tl_sense_buf[0]);
- rc = core_tmr_alloc_req(se_cmd, tl_tmr, TMR_LUN_RESET, GFP_KERNEL);
+ rc = core_tmr_alloc_req(se_cmd, tl_tmr, tmr, GFP_KERNEL);
if (rc < 0)
goto release;
+
+ if (tmr == TMR_ABORT_TASK)
+ se_cmd->se_tmr_req->ref_task_tag = task;
+
/*
- * Locate the underlying TCM struct se_lun from sc->device->lun
+ * Locate the underlying TCM struct se_lun
*/
- if (transport_lookup_tmr_lun(se_cmd, sc->device->lun) < 0)
+ if (transport_lookup_tmr_lun(se_cmd, lun) < 0) {
+ ret = TMR_LUN_DOES_NOT_EXIST;
goto release;
+ }
/*
- * Queue the TMR to TCM Core and sleep waiting for tcm_loop_queue_tm_rsp()
- * to wake us up.
+ * Queue the TMR to TCM Core and sleep waiting for
+ * tcm_loop_queue_tm_rsp() to wake us up.
*/
transport_generic_handle_tmr(se_cmd);
wait_event(tl_tmr->tl_tmr_wait, atomic_read(&tl_tmr->tmr_complete));
* The TMR LUN_RESET has completed, check the response status and
* then release allocations.
*/
- ret = (se_cmd->se_tmr_req->response == TMR_FUNCTION_COMPLETE) ?
- SUCCESS : FAILED;
+ ret = se_cmd->se_tmr_req->response;
release:
if (se_cmd)
transport_generic_free_cmd(se_cmd, 1);
return ret;
}
+static int tcm_loop_abort_task(struct scsi_cmnd *sc)
+{
+ struct tcm_loop_hba *tl_hba;
+ struct tcm_loop_nexus *tl_nexus;
+ struct tcm_loop_tpg *tl_tpg;
+ int ret = FAILED;
+
+ /*
+ * Locate the tcm_loop_hba_t pointer
+ */
+ tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host);
+ /*
+ * Locate the tl_nexus and se_sess pointers
+ */
+ tl_nexus = tl_hba->tl_nexus;
+ if (!tl_nexus) {
+ pr_err("Unable to perform device reset without"
+ " active I_T Nexus\n");
+ return FAILED;
+ }
+
+ /*
+ * Locate the tl_tpg pointer from TargetID in sc->device->id
+ */
+ tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id];
+ ret = tcm_loop_issue_tmr(tl_tpg, tl_nexus, sc->device->lun,
+ sc->tag, TMR_ABORT_TASK);
+ return (ret == TMR_FUNCTION_COMPLETE) ? SUCCESS : FAILED;
+}
+
+/*
+ * Called from SCSI EH process context to issue a LUN_RESET TMR
+ * to struct scsi_device
+ */
+static int tcm_loop_device_reset(struct scsi_cmnd *sc)
+{
+ struct tcm_loop_hba *tl_hba;
+ struct tcm_loop_nexus *tl_nexus;
+ struct tcm_loop_tpg *tl_tpg;
+ int ret = FAILED;
+
+ /*
+ * Locate the tcm_loop_hba_t pointer
+ */
+ tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host);
+ /*
+ * Locate the tl_nexus and se_sess pointers
+ */
+ tl_nexus = tl_hba->tl_nexus;
+ if (!tl_nexus) {
+ pr_err("Unable to perform device reset without"
+ " active I_T Nexus\n");
+ return FAILED;
+ }
+ /*
+ * Locate the tl_tpg pointer from TargetID in sc->device->id
+ */
+ tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id];
+ ret = tcm_loop_issue_tmr(tl_tpg, tl_nexus, sc->device->lun,
+ 0, TMR_LUN_RESET);
+ return (ret == TMR_FUNCTION_COMPLETE) ? SUCCESS : FAILED;
+}
+
+static int tcm_loop_target_reset(struct scsi_cmnd *sc)
+{
+ struct tcm_loop_hba *tl_hba;
+ struct tcm_loop_tpg *tl_tpg;
+
+ /*
+ * Locate the tcm_loop_hba_t pointer
+ */
+ tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host);
+ if (!tl_hba) {
+ pr_err("Unable to perform device reset without"
+ " active I_T Nexus\n");
+ return FAILED;
+ }
+ /*
+ * Locate the tl_tpg pointer from TargetID in sc->device->id
+ */
+ tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id];
+ if (tl_tpg) {
+ tl_tpg->tl_transport_status = TCM_TRANSPORT_ONLINE;
+ return SUCCESS;
+ }
+ return FAILED;
+}
+
static int tcm_loop_slave_alloc(struct scsi_device *sd)
{
set_bit(QUEUE_FLAG_BIDI, &sd->request_queue->queue_flags);
static int tcm_loop_slave_configure(struct scsi_device *sd)
{
+ if (sd->tagged_supported) {
+ scsi_activate_tcq(sd, sd->queue_depth);
+ scsi_adjust_queue_depth(sd, MSG_SIMPLE_TAG,
+ sd->host->cmd_per_lun);
+ } else {
+ scsi_adjust_queue_depth(sd, 0,
+ sd->host->cmd_per_lun);
+ }
+
return 0;
}
.name = "TCM_Loopback",
.queuecommand = tcm_loop_queuecommand,
.change_queue_depth = tcm_loop_change_queue_depth,
+ .change_queue_type = tcm_loop_change_queue_type,
+ .eh_abort_handler = tcm_loop_abort_task,
.eh_device_reset_handler = tcm_loop_device_reset,
+ .eh_target_reset_handler = tcm_loop_target_reset,
.can_queue = 1024,
.this_id = -1,
.sg_tablesize = 256,
static u32 tcm_loop_get_task_tag(struct se_cmd *se_cmd)
{
- return 1;
+ struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+ struct tcm_loop_cmd, tl_se_cmd);
+
+ return tl_cmd->sc_cmd_tag;
}
static int tcm_loop_get_cmd_state(struct se_cmd *se_cmd)
struct tcm_loop_nexus *tl_nexus;
struct tcm_loop_hba *tl_hba = tpg->tl_hba;
- tl_nexus = tpg->tl_hba->tl_nexus;
+ if (!tl_hba)
+ return -ENODEV;
+
+ tl_nexus = tl_hba->tl_nexus;
if (!tl_nexus)
return -ENODEV;
TF_TPG_BASE_ATTR(tcm_loop, nexus, S_IRUGO | S_IWUSR);
+static ssize_t tcm_loop_tpg_show_transport_status(
+ struct se_portal_group *se_tpg,
+ char *page)
+{
+ struct tcm_loop_tpg *tl_tpg = container_of(se_tpg,
+ struct tcm_loop_tpg, tl_se_tpg);
+ const char *status = NULL;
+ ssize_t ret = -EINVAL;
+
+ switch (tl_tpg->tl_transport_status) {
+ case TCM_TRANSPORT_ONLINE:
+ status = "online";
+ break;
+ case TCM_TRANSPORT_OFFLINE:
+ status = "offline";
+ break;
+ default:
+ break;
+ }
+
+ if (status)
+ ret = snprintf(page, PAGE_SIZE, "%s\n", status);
+
+ return ret;
+}
+
+static ssize_t tcm_loop_tpg_store_transport_status(
+ struct se_portal_group *se_tpg,
+ const char *page,
+ size_t count)
+{
+ struct tcm_loop_tpg *tl_tpg = container_of(se_tpg,
+ struct tcm_loop_tpg, tl_se_tpg);
+
+ if (!strncmp(page, "online", 6)) {
+ tl_tpg->tl_transport_status = TCM_TRANSPORT_ONLINE;
+ return count;
+ }
+ if (!strncmp(page, "offline", 7)) {
+ tl_tpg->tl_transport_status = TCM_TRANSPORT_OFFLINE;
+ return count;
+ }
+ return -EINVAL;
+}
+
+TF_TPG_BASE_ATTR(tcm_loop, transport_status, S_IRUGO | S_IWUSR);
+
static struct configfs_attribute *tcm_loop_tpg_attrs[] = {
&tcm_loop_tpg_nexus.attr,
+ &tcm_loop_tpg_transport_status.attr,
NULL,
};
/*
* Setup default attribute lists for various fabric->tf_cit_tmpl
*/
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = tcm_loop_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = tcm_loop_tpg_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = tcm_loop_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = tcm_loop_tpg_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
/*
* Once fabric->tf_ops has been setup, now register the fabric for
* use within TCM
struct tcm_loop_cmd {
/* State of Linux/SCSI CDB+Data descriptor */
u32 sc_cmd_state;
+ /* Tagged command queueing */
+ u32 sc_cmd_tag;
/* Pointer to the CDB+Data descriptor from Linux/SCSI subsystem */
struct scsi_cmnd *sc;
/* The TCM I/O descriptor that is accessed via container_of() */
struct se_node_acl se_node_acl;
};
+#define TCM_TRANSPORT_ONLINE 0
+#define TCM_TRANSPORT_OFFLINE 1
+
struct tcm_loop_tpg {
unsigned short tl_tpgt;
+ unsigned short tl_transport_status;
atomic_t tl_tpg_port_count;
struct se_portal_group tl_se_tpg;
struct tcm_loop_hba *tl_hba;
/*
* Setup default attribute lists for various fabric->tf_cit_tmpl
*/
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = sbp_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = sbp_tpg_base_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = sbp_tpg_attrib_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = sbp_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = sbp_tpg_base_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = sbp_tpg_attrib_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;
ret = target_fabric_configfs_register(fabric);
if (ret < 0) {
static sense_reason_t core_alua_check_transition(int state, int *primary);
static int core_alua_set_tg_pt_secondary_state(
struct t10_alua_tg_pt_gp_member *tg_pt_gp_mem,
- struct se_port *port, int explict, int offline);
+ struct se_port *port, int explicit, int offline);
static u16 alua_lu_gps_counter;
static u32 alua_lu_gps_count;
/*
* Set supported ASYMMETRIC ACCESS State bits
*/
- buf[off] = 0x80; /* T_SUP */
- buf[off] |= 0x40; /* O_SUP */
- buf[off] |= 0x8; /* U_SUP */
- buf[off] |= 0x4; /* S_SUP */
- buf[off] |= 0x2; /* AN_SUP */
- buf[off++] |= 0x1; /* AO_SUP */
+ buf[off++] |= tg_pt_gp->tg_pt_gp_alua_supported_states;
/*
* TARGET PORT GROUP
*/
if (ext_hdr != 0) {
buf[4] = 0x10;
/*
- * Set the implict transition time (in seconds) for the application
+ * Set the implicit transition time (in seconds) for the application
* client to use as a base for its transition timeout value.
*
* Use the current tg_pt_gp_mem -> tg_pt_gp membership from the LUN
spin_lock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
tg_pt_gp = tg_pt_gp_mem->tg_pt_gp;
if (tg_pt_gp)
- buf[5] = tg_pt_gp->tg_pt_gp_implict_trans_secs;
+ buf[5] = tg_pt_gp->tg_pt_gp_implicit_trans_secs;
spin_unlock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
}
}
}
/*
- * SET_TARGET_PORT_GROUPS for explict ALUA operation.
+ * SET_TARGET_PORT_GROUPS for explicit ALUA operation.
*
* See spc4r17 section 6.35
*/
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
/*
- * Determine if explict ALUA via SET_TARGET_PORT_GROUPS is allowed
+ * Determine if explicit ALUA via SET_TARGET_PORT_GROUPS is allowed
* for the local tg_pt_gp.
*/
l_tg_pt_gp_mem = l_port->sep_alua_tg_pt_gp_mem;
}
spin_unlock(&l_tg_pt_gp_mem->tg_pt_gp_mem_lock);
- if (!(l_tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICT_ALUA)) {
+ if (!(l_tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICIT_ALUA)) {
pr_debug("Unable to process SET_TARGET_PORT_GROUPS"
- " while TPGS_EXPLICT_ALUA is disabled\n");
+ " while TPGS_EXPLICIT_ALUA is disabled\n");
rc = TCM_UNSUPPORTED_SCSI_OPCODE;
goto out;
}
spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
} else {
/*
- * Extact the RELATIVE TARGET PORT IDENTIFIER to identify
+ * Extract the RELATIVE TARGET PORT IDENTIFIER to identify
* the Target Port in question for the incoming
* SET_TARGET_PORT_GROUPS op.
*/
u8 *alua_ascq)
{
/*
- * Allowed CDBs for ALUA_ACCESS_STATE_TRANSITIO as defined by
+ * Allowed CDBs for ALUA_ACCESS_STATE_TRANSITION as defined by
* spc4r17 section 5.9.2.5
*/
switch (cdb[0]) {
}
/*
- * return 1: Is used to signal LUN not accecsable, and check condition/not ready
+ * return 1: Is used to signal LUN not accessible, and check condition/not ready
* return 0: Used to signal success
- * reutrn -1: Used to signal failure, and invalid cdb field
+ * return -1: Used to signal failure, and invalid cdb field
*/
sense_reason_t
target_alua_state_check(struct se_cmd *cmd)
nonop_delay_msecs = tg_pt_gp->tg_pt_gp_nonop_delay_msecs;
spin_unlock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
/*
- * Process ALUA_ACCESS_STATE_ACTIVE_OPTMIZED in a separate conditional
+ * Process ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED in a separate conditional
* statement so the compiler knows explicitly to check this case first.
* For the Optimized ALUA access state case, we want to process the
* incoming fabric cmd ASAP..
*/
- if (out_alua_state == ALUA_ACCESS_STATE_ACTIVE_OPTMIZED)
+ if (out_alua_state == ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED)
return 0;
switch (out_alua_state) {
}
/*
- * Check implict and explict ALUA state change request.
+ * Check implicit and explicit ALUA state change request.
*/
static sense_reason_t
core_alua_check_transition(int state, int *primary)
{
switch (state) {
- case ALUA_ACCESS_STATE_ACTIVE_OPTMIZED:
+ case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
case ALUA_ACCESS_STATE_STANDBY:
case ALUA_ACCESS_STATE_UNAVAILABLE:
static char *core_alua_dump_state(int state)
{
switch (state) {
- case ALUA_ACCESS_STATE_ACTIVE_OPTMIZED:
+ case ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED:
return "Active/Optimized";
case ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED:
return "Active/NonOptimized";
switch (status) {
case ALUA_STATUS_NONE:
return "None";
- case ALUA_STATUS_ALTERED_BY_EXPLICT_STPG:
- return "Altered by Explict STPG";
- case ALUA_STATUS_ALTERED_BY_IMPLICT_ALUA:
- return "Altered by Implict ALUA";
+ case ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG:
+ return "Altered by Explicit STPG";
+ case ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA:
+ return "Altered by Implicit ALUA";
default:
return "Unknown";
}
struct se_node_acl *nacl,
unsigned char *md_buf,
int new_state,
- int explict)
+ int explicit)
{
struct se_dev_entry *se_deve;
struct se_lun_acl *lacl;
old_state = atomic_read(&tg_pt_gp->tg_pt_gp_alua_access_state);
atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
ALUA_ACCESS_STATE_TRANSITION);
- tg_pt_gp->tg_pt_gp_alua_access_status = (explict) ?
- ALUA_STATUS_ALTERED_BY_EXPLICT_STPG :
- ALUA_STATUS_ALTERED_BY_IMPLICT_ALUA;
+ tg_pt_gp->tg_pt_gp_alua_access_status = (explicit) ?
+ ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
+ ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
/*
* Check for the optional ALUA primary state transition delay
*/
* change, a device server shall establish a unit attention
* condition for the initiator port associated with every I_T
* nexus with the additional sense code set to ASYMMETRIC
- * ACCESS STATE CHAGED.
+ * ACCESS STATE CHANGED.
*
* After an explicit target port asymmetric access state
* change, a device server shall establish a unit attention
lacl = se_deve->se_lun_acl;
/*
* se_deve->se_lun_acl pointer may be NULL for a
- * entry created without explict Node+MappedLUN ACLs
+ * entry created without explicit Node+MappedLUN ACLs
*/
if (!lacl)
continue;
- if (explict &&
+ if (explicit &&
(nacl != NULL) && (nacl == lacl->se_lun_nacl) &&
(l_port != NULL) && (l_port == port))
continue;
atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state, new_state);
pr_debug("Successful %s ALUA transition TG PT Group: %s ID: %hu"
- " from primary access state %s to %s\n", (explict) ? "explict" :
- "implict", config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item),
+ " from primary access state %s to %s\n", (explicit) ? "explicit" :
+ "implicit", config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item),
tg_pt_gp->tg_pt_gp_id, core_alua_dump_state(old_state),
core_alua_dump_state(new_state));
struct se_port *l_port,
struct se_node_acl *l_nacl,
int new_state,
- int explict)
+ int explicit)
{
struct se_device *dev;
struct se_port *port;
* success.
*/
core_alua_do_transition_tg_pt(l_tg_pt_gp, l_port, l_nacl,
- md_buf, new_state, explict);
+ md_buf, new_state, explicit);
atomic_dec(&lu_gp->lu_gp_ref_cnt);
smp_mb__after_atomic_dec();
kfree(md_buf);
continue;
/*
* If the target behavior port asymmetric access state
- * is changed for any target port group accessiable via
+ * is changed for any target port group accessible via
* a logical unit within a LU group, the target port
* behavior group asymmetric access states for the same
* target port group accessible via other logical units
* success.
*/
core_alua_do_transition_tg_pt(tg_pt_gp, port,
- nacl, md_buf, new_state, explict);
+ nacl, md_buf, new_state, explicit);
spin_lock(&dev->t10_alua.tg_pt_gps_lock);
atomic_dec(&tg_pt_gp->tg_pt_gp_ref_cnt);
pr_debug("Successfully processed LU Group: %s all ALUA TG PT"
" Group IDs: %hu %s transition to primary state: %s\n",
config_item_name(&lu_gp->lu_gp_group.cg_item),
- l_tg_pt_gp->tg_pt_gp_id, (explict) ? "explict" : "implict",
+ l_tg_pt_gp->tg_pt_gp_id, (explicit) ? "explicit" : "implicit",
core_alua_dump_state(new_state));
atomic_dec(&lu_gp->lu_gp_ref_cnt);
static int core_alua_set_tg_pt_secondary_state(
struct t10_alua_tg_pt_gp_member *tg_pt_gp_mem,
struct se_port *port,
- int explict,
+ int explicit,
int offline)
{
struct t10_alua_tg_pt_gp *tg_pt_gp;
atomic_set(&port->sep_tg_pt_secondary_offline, 0);
md_buf_len = tg_pt_gp->tg_pt_gp_md_buf_len;
- port->sep_tg_pt_secondary_stat = (explict) ?
- ALUA_STATUS_ALTERED_BY_EXPLICT_STPG :
- ALUA_STATUS_ALTERED_BY_IMPLICT_ALUA;
+ port->sep_tg_pt_secondary_stat = (explicit) ?
+ ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG :
+ ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA;
pr_debug("Successful %s ALUA transition TG PT Group: %s ID: %hu"
- " to secondary access state: %s\n", (explict) ? "explict" :
- "implict", config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item),
+ " to secondary access state: %s\n", (explicit) ? "explicit" :
+ "implicit", config_item_name(&tg_pt_gp->tg_pt_gp_group.cg_item),
tg_pt_gp->tg_pt_gp_id, (offline) ? "OFFLINE" : "ONLINE");
spin_unlock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
* struct se_device is released via core_alua_free_lu_gp_mem().
*
* If the passed lu_gp does NOT match the default_lu_gp, assume
- * we want to re-assocate a given lu_gp_mem with default_lu_gp.
+ * we want to re-associate a given lu_gp_mem with default_lu_gp.
*/
spin_lock(&lu_gp_mem->lu_gp_mem_lock);
if (lu_gp != default_lu_gp)
tg_pt_gp->tg_pt_gp_dev = dev;
tg_pt_gp->tg_pt_gp_md_buf_len = ALUA_MD_BUF_LEN;
atomic_set(&tg_pt_gp->tg_pt_gp_alua_access_state,
- ALUA_ACCESS_STATE_ACTIVE_OPTMIZED);
+ ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED);
/*
- * Enable both explict and implict ALUA support by default
+ * Enable both explicit and implicit ALUA support by default
*/
tg_pt_gp->tg_pt_gp_alua_access_type =
- TPGS_EXPLICT_ALUA | TPGS_IMPLICT_ALUA;
+ TPGS_EXPLICIT_ALUA | TPGS_IMPLICIT_ALUA;
/*
* Set the default Active/NonOptimized Delay in milliseconds
*/
tg_pt_gp->tg_pt_gp_nonop_delay_msecs = ALUA_DEFAULT_NONOP_DELAY_MSECS;
tg_pt_gp->tg_pt_gp_trans_delay_msecs = ALUA_DEFAULT_TRANS_DELAY_MSECS;
- tg_pt_gp->tg_pt_gp_implict_trans_secs = ALUA_DEFAULT_IMPLICT_TRANS_SECS;
+ tg_pt_gp->tg_pt_gp_implicit_trans_secs = ALUA_DEFAULT_IMPLICIT_TRANS_SECS;
+
+ /*
+ * Enable all supported states
+ */
+ tg_pt_gp->tg_pt_gp_alua_supported_states =
+ ALUA_T_SUP | ALUA_O_SUP |
+ ALUA_U_SUP | ALUA_S_SUP | ALUA_AN_SUP | ALUA_AO_SUP;
if (def_group) {
spin_lock(&dev->t10_alua.tg_pt_gps_lock);
* been called from target_core_alua_drop_tg_pt_gp().
*
* Here we remove *tg_pt_gp from the global list so that
- * no assications *OR* explict ALUA via SET_TARGET_PORT_GROUPS
+ * no associations *OR* explicit ALUA via SET_TARGET_PORT_GROUPS
* can be made while we are releasing struct t10_alua_tg_pt_gp.
*/
spin_lock(&dev->t10_alua.tg_pt_gps_lock);
* core_alua_free_tg_pt_gp_mem().
*
* If the passed tg_pt_gp does NOT match the default_tg_pt_gp,
- * assume we want to re-assocate a given tg_pt_gp_mem with
+ * assume we want to re-associate a given tg_pt_gp_mem with
* default_tg_pt_gp.
*/
spin_lock(&tg_pt_gp_mem->tg_pt_gp_mem_lock);
struct t10_alua_tg_pt_gp *tg_pt_gp,
char *page)
{
- if ((tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICT_ALUA) &&
- (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_IMPLICT_ALUA))
- return sprintf(page, "Implict and Explict\n");
- else if (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_IMPLICT_ALUA)
- return sprintf(page, "Implict\n");
- else if (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICT_ALUA)
- return sprintf(page, "Explict\n");
+ if ((tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICIT_ALUA) &&
+ (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_IMPLICIT_ALUA))
+ return sprintf(page, "Implicit and Explicit\n");
+ else if (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_IMPLICIT_ALUA)
+ return sprintf(page, "Implicit\n");
+ else if (tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICIT_ALUA)
+ return sprintf(page, "Explicit\n");
else
return sprintf(page, "None\n");
}
}
if (tmp == 3)
tg_pt_gp->tg_pt_gp_alua_access_type =
- TPGS_IMPLICT_ALUA | TPGS_EXPLICT_ALUA;
+ TPGS_IMPLICIT_ALUA | TPGS_EXPLICIT_ALUA;
else if (tmp == 2)
- tg_pt_gp->tg_pt_gp_alua_access_type = TPGS_EXPLICT_ALUA;
+ tg_pt_gp->tg_pt_gp_alua_access_type = TPGS_EXPLICIT_ALUA;
else if (tmp == 1)
- tg_pt_gp->tg_pt_gp_alua_access_type = TPGS_IMPLICT_ALUA;
+ tg_pt_gp->tg_pt_gp_alua_access_type = TPGS_IMPLICIT_ALUA;
else
tg_pt_gp->tg_pt_gp_alua_access_type = 0;
return count;
}
-ssize_t core_alua_show_implict_trans_secs(
+ssize_t core_alua_show_implicit_trans_secs(
struct t10_alua_tg_pt_gp *tg_pt_gp,
char *page)
{
- return sprintf(page, "%d\n", tg_pt_gp->tg_pt_gp_implict_trans_secs);
+ return sprintf(page, "%d\n", tg_pt_gp->tg_pt_gp_implicit_trans_secs);
}
-ssize_t core_alua_store_implict_trans_secs(
+ssize_t core_alua_store_implicit_trans_secs(
struct t10_alua_tg_pt_gp *tg_pt_gp,
const char *page,
size_t count)
ret = kstrtoul(page, 0, &tmp);
if (ret < 0) {
- pr_err("Unable to extract implict_trans_secs\n");
+ pr_err("Unable to extract implicit_trans_secs\n");
return ret;
}
- if (tmp > ALUA_MAX_IMPLICT_TRANS_SECS) {
- pr_err("Passed implict_trans_secs: %lu, exceeds"
- " ALUA_MAX_IMPLICT_TRANS_SECS: %d\n", tmp,
- ALUA_MAX_IMPLICT_TRANS_SECS);
+ if (tmp > ALUA_MAX_IMPLICIT_TRANS_SECS) {
+ pr_err("Passed implicit_trans_secs: %lu, exceeds"
+ " ALUA_MAX_IMPLICIT_TRANS_SECS: %d\n", tmp,
+ ALUA_MAX_IMPLICIT_TRANS_SECS);
return -EINVAL;
}
- tg_pt_gp->tg_pt_gp_implict_trans_secs = (int)tmp;
+ tg_pt_gp->tg_pt_gp_implicit_trans_secs = (int)tmp;
return count;
}
return ret;
}
if ((tmp != ALUA_STATUS_NONE) &&
- (tmp != ALUA_STATUS_ALTERED_BY_EXPLICT_STPG) &&
- (tmp != ALUA_STATUS_ALTERED_BY_IMPLICT_ALUA)) {
+ (tmp != ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG) &&
+ (tmp != ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA)) {
pr_err("Illegal value for alua_tg_pt_status: %lu\n",
tmp);
return -EINVAL;
* from spc4r17 section 6.4.2 Table 135
*/
#define TPGS_NO_ALUA 0x00
-#define TPGS_IMPLICT_ALUA 0x10
-#define TPGS_EXPLICT_ALUA 0x20
+#define TPGS_IMPLICIT_ALUA 0x10
+#define TPGS_EXPLICIT_ALUA 0x20
/*
* ASYMMETRIC ACCESS STATE field
*
* from spc4r17 section 6.27 Table 245
*/
-#define ALUA_ACCESS_STATE_ACTIVE_OPTMIZED 0x0
+#define ALUA_ACCESS_STATE_ACTIVE_OPTIMIZED 0x0
#define ALUA_ACCESS_STATE_ACTIVE_NON_OPTIMIZED 0x1
#define ALUA_ACCESS_STATE_STANDBY 0x2
#define ALUA_ACCESS_STATE_UNAVAILABLE 0x3
#define ALUA_ACCESS_STATE_OFFLINE 0xe
#define ALUA_ACCESS_STATE_TRANSITION 0xf
+/*
+ * from spc4r36j section 6.37 Table 306
+ */
+#define ALUA_T_SUP 0x80
+#define ALUA_O_SUP 0x40
+#define ALUA_LBD_SUP 0x10
+#define ALUA_U_SUP 0x08
+#define ALUA_S_SUP 0x04
+#define ALUA_AN_SUP 0x02
+#define ALUA_AO_SUP 0x01
+
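
With the bit definitions above, the supported-states byte in the
REPORT_TARGET_PORT_GROUPS response can be assembled per target port group
instead of being hardcoded. The default mask set during group allocation
enables every state except logical block dependent; a quick check of the
arithmetic:

	#include <assert.h>

	#define ALUA_T_SUP	0x80
	#define ALUA_O_SUP	0x40
	#define ALUA_LBD_SUP	0x10
	#define ALUA_U_SUP	0x08
	#define ALUA_S_SUP	0x04
	#define ALUA_AN_SUP	0x02
	#define ALUA_AO_SUP	0x01

	int main(void)
	{
		int mask = ALUA_T_SUP | ALUA_O_SUP | ALUA_U_SUP |
			   ALUA_S_SUP | ALUA_AN_SUP | ALUA_AO_SUP;

		assert(mask == 0xCF);		/* everything but LBD_SUP */
		assert(!(mask & ALUA_LBD_SUP));
		return 0;
	}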
/*
* REPORT_TARGET_PORT_GROUP STATUS CODE
*
* from spc4r17 section 6.27 Table 246
*/
#define ALUA_STATUS_NONE 0x00
-#define ALUA_STATUS_ALTERED_BY_EXPLICT_STPG 0x01
-#define ALUA_STATUS_ALTERED_BY_IMPLICT_ALUA 0x02
+#define ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG 0x01
+#define ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA 0x02
/*
* From spc4r17, Table D.1: ASC and ASCQ Assignment
#define ALUA_DEFAULT_NONOP_DELAY_MSECS 100
#define ALUA_MAX_NONOP_DELAY_MSECS 10000 /* 10 seconds */
/*
- * Used for implict and explict ALUA transitional delay, that is disabled
+ * Used for implicit and explicit ALUA transitional delay, that is disabled
* by default, and is intended to be used for debugging client side ALUA code.
*/
#define ALUA_DEFAULT_TRANS_DELAY_MSECS 0
#define ALUA_MAX_TRANS_DELAY_MSECS 30000 /* 30 seconds */
/*
- * Used for the recommended application client implict transition timeout
+ * Used for the recommended application client implicit transition timeout
* in seconds, returned by the REPORT_TARGET_PORT_GROUPS w/ extended header.
*/
-#define ALUA_DEFAULT_IMPLICT_TRANS_SECS 0
-#define ALUA_MAX_IMPLICT_TRANS_SECS 255
+#define ALUA_DEFAULT_IMPLICIT_TRANS_SECS 0
+#define ALUA_MAX_IMPLICIT_TRANS_SECS 255
/*
* Used by core_alua_update_tpg_primary_metadata() and
* core_alua_update_tpg_secondary_metadata()
char *);
extern ssize_t core_alua_store_trans_delay_msecs(struct t10_alua_tg_pt_gp *,
const char *, size_t);
-extern ssize_t core_alua_show_implict_trans_secs(struct t10_alua_tg_pt_gp *,
+extern ssize_t core_alua_show_implicit_trans_secs(struct t10_alua_tg_pt_gp *,
char *);
-extern ssize_t core_alua_store_implict_trans_secs(struct t10_alua_tg_pt_gp *,
+extern ssize_t core_alua_store_implicit_trans_secs(struct t10_alua_tg_pt_gp *,
const char *, size_t);
extern ssize_t core_alua_show_preferred_bit(struct t10_alua_tg_pt_gp *,
char *);
* struct target_fabric_configfs *tf will contain a usage reference.
*/
pr_debug("Target_Core_ConfigFS: REGISTER tfc_wwn_cit -> %p\n",
- &TF_CIT_TMPL(tf)->tfc_wwn_cit);
+ &tf->tf_cit_tmpl.tfc_wwn_cit);
tf->tf_group.default_groups = tf->tf_default_groups;
tf->tf_group.default_groups[0] = &tf->tf_disc_group;
tf->tf_group.default_groups[1] = NULL;
config_group_init_type_name(&tf->tf_group, name,
- &TF_CIT_TMPL(tf)->tfc_wwn_cit);
+ &tf->tf_cit_tmpl.tfc_wwn_cit);
config_group_init_type_name(&tf->tf_disc_group, "discovery_auth",
- &TF_CIT_TMPL(tf)->tfc_discovery_cit);
+ &tf->tf_cit_tmpl.tfc_discovery_cit);
pr_debug("Target_Core_ConfigFS: REGISTER -> Allocated Fabric:"
" %s\n", tf->tf_group.cg_item.ci_name);
int new_state, ret;
if (!tg_pt_gp->tg_pt_gp_valid_id) {
- pr_err("Unable to do implict ALUA on non valid"
+ pr_err("Unable to do implicit ALUA on non valid"
" tg_pt_gp ID: %hu\n", tg_pt_gp->tg_pt_gp_valid_id);
return -EINVAL;
}
}
new_state = (int)tmp;
- if (!(tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_IMPLICT_ALUA)) {
- pr_err("Unable to process implict configfs ALUA"
- " transition while TPGS_IMPLICT_ALUA is disabled\n");
+ if (!(tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_IMPLICIT_ALUA)) {
+ pr_err("Unable to process implicit configfs ALUA"
+ " transition while TPGS_IMPLICIT_ALUA is disabled\n");
return -EINVAL;
}
new_status = (int)tmp;
if ((new_status != ALUA_STATUS_NONE) &&
- (new_status != ALUA_STATUS_ALTERED_BY_EXPLICT_STPG) &&
- (new_status != ALUA_STATUS_ALTERED_BY_IMPLICT_ALUA)) {
+ (new_status != ALUA_STATUS_ALTERED_BY_EXPLICIT_STPG) &&
+ (new_status != ALUA_STATUS_ALTERED_BY_IMPLICIT_ALUA)) {
pr_err("Illegal ALUA access status: 0x%02x\n",
new_status);
return -EINVAL;
SE_DEV_ALUA_TG_PT_ATTR(alua_access_type, S_IRUGO | S_IWUSR);
+/*
+ * alua_supported_states
+ */
+
+#define SE_DEV_ALUA_SUPPORT_STATE_SHOW(_name, _var, _bit) \
+static ssize_t target_core_alua_tg_pt_gp_show_attr_alua_support_##_name( \
+ struct t10_alua_tg_pt_gp *t, char *p) \
+{ \
+ return sprintf(p, "%d\n", !!(t->_var & _bit)); \
+}
+
+#define SE_DEV_ALUA_SUPPORT_STATE_STORE(_name, _var, _bit) \
+static ssize_t target_core_alua_tg_pt_gp_store_attr_alua_support_##_name(\
+ struct t10_alua_tg_pt_gp *t, const char *p, size_t c) \
+{ \
+ unsigned long tmp; \
+ int ret; \
+ \
+ if (!t->tg_pt_gp_valid_id) { \
+ pr_err("Unable to set " #_name " ALUA state on non" \
+ " valid tg_pt_gp ID: %hu\n", \
+ t->tg_pt_gp_valid_id); \
+ return -EINVAL; \
+ } \
+ \
+ ret = kstrtoul(p, 0, &tmp); \
+ if (ret < 0) { \
+ pr_err("Invalid value '%s', must be '0' or '1'\n", p); \
+ return -EINVAL; \
+ } \
+ if (tmp > 1) { \
+ pr_err("Invalid value '%ld', must be '0' or '1'\n", tmp); \
+ return -EINVAL; \
+ } \
+ if (tmp) \
+ t->_var |= _bit; \
+ else \
+ t->_var &= ~_bit; \
+ \
+ return c; \
+}
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(transitioning,
+ tg_pt_gp_alua_supported_states, ALUA_T_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(transitioning,
+ tg_pt_gp_alua_supported_states, ALUA_T_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_transitioning, S_IRUGO | S_IWUSR);
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(offline,
+ tg_pt_gp_alua_supported_states, ALUA_O_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(offline,
+ tg_pt_gp_alua_supported_states, ALUA_O_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_offline, S_IRUGO | S_IWUSR);
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(lba_dependent,
+ tg_pt_gp_alua_supported_states, ALUA_LBD_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(lba_dependent,
+ tg_pt_gp_alua_supported_states, ALUA_LBD_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_lba_dependent, S_IRUGO | S_IWUSR);
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(unavailable,
+ tg_pt_gp_alua_supported_states, ALUA_U_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(unavailable,
+ tg_pt_gp_alua_supported_states, ALUA_U_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_unavailable, S_IRUGO | S_IWUSR);
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(standby,
+ tg_pt_gp_alua_supported_states, ALUA_S_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(standby,
+ tg_pt_gp_alua_supported_states, ALUA_S_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_standby, S_IRUGO | S_IWUSR);
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(active_optimized,
+ tg_pt_gp_alua_supported_states, ALUA_AO_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(active_optimized,
+ tg_pt_gp_alua_supported_states, ALUA_AO_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_active_optimized, S_IRUGO | S_IWUSR);
+
+SE_DEV_ALUA_SUPPORT_STATE_SHOW(active_nonoptimized,
+ tg_pt_gp_alua_supported_states, ALUA_AN_SUP);
+SE_DEV_ALUA_SUPPORT_STATE_STORE(active_nonoptimized,
+ tg_pt_gp_alua_supported_states, ALUA_AN_SUP);
+SE_DEV_ALUA_TG_PT_ATTR(alua_support_active_nonoptimized, S_IRUGO | S_IWUSR);
+
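
To make the macro pair above concrete, this is approximately what the SHOW half expands to for the transitioning attribute; it is an illustrative expansion of the macro arguments, not additional code in the patch:

    static ssize_t target_core_alua_tg_pt_gp_show_attr_alua_support_transitioning(
            struct t10_alua_tg_pt_gp *t, char *p)
    {
            /* report whether ALUA_T_SUP (0x80) is set in the supported-states mask */
            return sprintf(p, "%d\n", !!(t->tg_pt_gp_alua_supported_states & ALUA_T_SUP));
    }

The STORE half expands the same way, parsing a 0/1 value and setting or clearing ALUA_T_SUP.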
/*
* alua_write_metadata
*/
SE_DEV_ALUA_TG_PT_ATTR(trans_delay_msecs, S_IRUGO | S_IWUSR);
/*
- * implict_trans_secs
+ * implicit_trans_secs
*/
-static ssize_t target_core_alua_tg_pt_gp_show_attr_implict_trans_secs(
+static ssize_t target_core_alua_tg_pt_gp_show_attr_implicit_trans_secs(
struct t10_alua_tg_pt_gp *tg_pt_gp,
char *page)
{
- return core_alua_show_implict_trans_secs(tg_pt_gp, page);
+ return core_alua_show_implicit_trans_secs(tg_pt_gp, page);
}
-static ssize_t target_core_alua_tg_pt_gp_store_attr_implict_trans_secs(
+static ssize_t target_core_alua_tg_pt_gp_store_attr_implicit_trans_secs(
struct t10_alua_tg_pt_gp *tg_pt_gp,
const char *page,
size_t count)
{
- return core_alua_store_implict_trans_secs(tg_pt_gp, page, count);
+ return core_alua_store_implicit_trans_secs(tg_pt_gp, page, count);
}
-SE_DEV_ALUA_TG_PT_ATTR(implict_trans_secs, S_IRUGO | S_IWUSR);
+SE_DEV_ALUA_TG_PT_ATTR(implicit_trans_secs, S_IRUGO | S_IWUSR);
/*
* preferred
&target_core_alua_tg_pt_gp_alua_access_state.attr,
&target_core_alua_tg_pt_gp_alua_access_status.attr,
&target_core_alua_tg_pt_gp_alua_access_type.attr,
+ &target_core_alua_tg_pt_gp_alua_support_transitioning.attr,
+ &target_core_alua_tg_pt_gp_alua_support_offline.attr,
+ &target_core_alua_tg_pt_gp_alua_support_lba_dependent.attr,
+ &target_core_alua_tg_pt_gp_alua_support_unavailable.attr,
+ &target_core_alua_tg_pt_gp_alua_support_standby.attr,
+ &target_core_alua_tg_pt_gp_alua_support_active_nonoptimized.attr,
+ &target_core_alua_tg_pt_gp_alua_support_active_optimized.attr,
&target_core_alua_tg_pt_gp_alua_write_metadata.attr,
&target_core_alua_tg_pt_gp_nonop_delay_msecs.attr,
&target_core_alua_tg_pt_gp_trans_delay_msecs.attr,
- &target_core_alua_tg_pt_gp_implict_trans_secs.attr,
+ &target_core_alua_tg_pt_gp_implicit_trans_secs.attr,
&target_core_alua_tg_pt_gp_preferred.attr,
&target_core_alua_tg_pt_gp_tg_pt_gp_id.attr,
&target_core_alua_tg_pt_gp_members.attr,
se_cmd->pr_res_key = deve->pr_res_key;
se_cmd->orig_fe_lun = unpacked_lun;
se_cmd->se_cmd_flags |= SCF_SE_LUN_CMD;
+
+ percpu_ref_get(&se_lun->lun_ref);
+ se_cmd->lun_ref_active = true;
}
spin_unlock_irqrestore(&se_sess->se_node_acl->device_list_lock, flags);
se_cmd->se_lun = &se_sess->se_tpg->tpg_virt_lun0;
se_cmd->orig_fe_lun = 0;
se_cmd->se_cmd_flags |= SCF_SE_LUN_CMD;
+
+ percpu_ref_get(&se_lun->lun_ref);
+ se_cmd->lun_ref_active = true;
}
/* Directly associate cmd with se_dev */
se_cmd->se_dev = se_lun->lun_se_dev;
- /* TODO: get rid of this and use atomics for stats */
dev = se_lun->lun_se_dev;
- spin_lock_irqsave(&dev->stats_lock, flags);
- dev->num_cmds++;
+ atomic_long_inc(&dev->num_cmds);
if (se_cmd->data_direction == DMA_TO_DEVICE)
- dev->write_bytes += se_cmd->data_length;
+ atomic_long_add(se_cmd->data_length, &dev->write_bytes);
else if (se_cmd->data_direction == DMA_FROM_DEVICE)
- dev->read_bytes += se_cmd->data_length;
- spin_unlock_irqrestore(&dev->stats_lock, flags);
-
- spin_lock_irqsave(&se_lun->lun_cmd_lock, flags);
- list_add_tail(&se_cmd->se_lun_node, &se_lun->lun_cmd_list);
- spin_unlock_irqrestore(&se_lun->lun_cmd_lock, flags);
+ atomic_long_add(se_cmd->data_length, &dev->read_bytes);
return 0;
}
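The hunk above replaces spinlock-protected counter updates with lock-free atomic_long operations. A minimal userspace analogue of the same pattern, using C11 stdatomic rather than the kernel API (names are illustrative):

    #include <stdatomic.h>

    static atomic_long num_cmds;
    static atomic_long write_bytes;

    static void account_cmd(long data_length, int is_write)
    {
            /* was: lock; counter++; unlock -- now a single atomic RMW */
            atomic_fetch_add(&num_cmds, 1);
            if (is_write)
                    atomic_fetch_add(&write_bytes, data_length);
    }

Readers, such as the configfs stat handlers converted later in this series, then need only an atomic load instead of taking the lock.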
deve = nacl->device_list[mapped_lun];
/*
- * Check if the call is handling demo mode -> explict LUN ACL
+ * Check if the call is handling demo mode -> explicit LUN ACL
* transition. This transition must be for the same struct se_lun
 * + mapped_lun that was set up in demo mode.
*/
if (deve->lun_flags & TRANSPORT_LUNFLAGS_INITIATOR_ACCESS) {
if (deve->se_lun_acl != NULL) {
pr_err("struct se_dev_entry->se_lun_acl"
- " already set for demo mode -> explict"
+ " already set for demo mode -> explicit"
" LUN ACL transition\n");
spin_unlock_irq(&nacl->device_list_lock);
return -EINVAL;
if (deve->se_lun != lun) {
pr_err("struct se_dev_entry->se_lun does"
" match passed struct se_lun for demo mode"
- " -> explict LUN ACL transition\n");
+ " -> explicit LUN ACL transition\n");
spin_unlock_irq(&nacl->device_list_lock);
return -EINVAL;
}
dev->dev_attrib.block_size = block_size;
pr_debug("dev[%p]: SE Device block_size changed to %u\n",
dev, block_size);
+
+ if (dev->dev_attrib.max_bytes_per_io)
+ dev->dev_attrib.hw_max_sectors =
+ dev->dev_attrib.max_bytes_per_io / block_size;
+
return 0;
}
struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
{
struct se_device *dev;
+ struct se_lun *xcopy_lun;
dev = hba->transport->alloc_device(hba, name);
if (!dev)
INIT_LIST_HEAD(&dev->state_list);
INIT_LIST_HEAD(&dev->qf_cmd_list);
INIT_LIST_HEAD(&dev->g_dev_node);
- spin_lock_init(&dev->stats_lock);
spin_lock_init(&dev->execute_task_lock);
spin_lock_init(&dev->delayed_cmd_lock);
spin_lock_init(&dev->dev_reservation_lock);
dev->dev_attrib.fabric_max_sectors = DA_FABRIC_MAX_SECTORS;
dev->dev_attrib.optimal_sectors = DA_FABRIC_MAX_SECTORS;
+ xcopy_lun = &dev->xcopy_lun;
+ xcopy_lun->lun_se_dev = dev;
+ init_completion(&xcopy_lun->lun_shutdown_comp);
+ INIT_LIST_HEAD(&xcopy_lun->lun_acl_list);
+ spin_lock_init(&xcopy_lun->lun_acl_lock);
+ spin_lock_init(&xcopy_lun->lun_sep_lock);
+ init_completion(&xcopy_lun->lun_ref_comp);
+
return dev;
}
}
config_group_init_type_name(&lacl->se_lun_group, name,
- &TF_CIT_TMPL(tf)->tfc_tpg_mappedlun_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_mappedlun_cit);
config_group_init_type_name(&lacl->ml_stat_grps.stat_group,
- "statistics", &TF_CIT_TMPL(tf)->tfc_tpg_mappedlun_stat_cit);
+ "statistics", &tf->tf_cit_tmpl.tfc_tpg_mappedlun_stat_cit);
lacl_cg->default_groups[0] = &lacl->ml_stat_grps.stat_group;
lacl_cg->default_groups[1] = NULL;
nacl_cg->default_groups[4] = NULL;
config_group_init_type_name(&se_nacl->acl_group, name,
- &TF_CIT_TMPL(tf)->tfc_tpg_nacl_base_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_nacl_base_cit);
config_group_init_type_name(&se_nacl->acl_attrib_group, "attrib",
- &TF_CIT_TMPL(tf)->tfc_tpg_nacl_attrib_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit);
config_group_init_type_name(&se_nacl->acl_auth_group, "auth",
- &TF_CIT_TMPL(tf)->tfc_tpg_nacl_auth_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_nacl_auth_cit);
config_group_init_type_name(&se_nacl->acl_param_group, "param",
- &TF_CIT_TMPL(tf)->tfc_tpg_nacl_param_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_nacl_param_cit);
config_group_init_type_name(&se_nacl->acl_fabric_stat_group,
"fabric_statistics",
- &TF_CIT_TMPL(tf)->tfc_tpg_nacl_stat_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_nacl_stat_cit);
return &se_nacl->acl_group;
}
se_tpg_np->tpg_np_parent = se_tpg;
config_group_init_type_name(&se_tpg_np->tpg_np_group, name,
- &TF_CIT_TMPL(tf)->tfc_tpg_np_base_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_np_base_cit);
return &se_tpg_np->tpg_np_group;
}
}
config_group_init_type_name(&lun->lun_group, name,
- &TF_CIT_TMPL(tf)->tfc_tpg_port_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_port_cit);
config_group_init_type_name(&lun->port_stat_grps.stat_group,
- "statistics", &TF_CIT_TMPL(tf)->tfc_tpg_port_stat_cit);
+ "statistics", &tf->tf_cit_tmpl.tfc_tpg_port_stat_cit);
lun_cg->default_groups[0] = &lun->port_stat_grps.stat_group;
lun_cg->default_groups[1] = NULL;
se_tpg->tpg_group.default_groups[6] = NULL;
config_group_init_type_name(&se_tpg->tpg_group, name,
- &TF_CIT_TMPL(tf)->tfc_tpg_base_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_base_cit);
config_group_init_type_name(&se_tpg->tpg_lun_group, "lun",
- &TF_CIT_TMPL(tf)->tfc_tpg_lun_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_lun_cit);
config_group_init_type_name(&se_tpg->tpg_np_group, "np",
- &TF_CIT_TMPL(tf)->tfc_tpg_np_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_np_cit);
config_group_init_type_name(&se_tpg->tpg_acl_group, "acls",
- &TF_CIT_TMPL(tf)->tfc_tpg_nacl_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_nacl_cit);
config_group_init_type_name(&se_tpg->tpg_attrib_group, "attrib",
- &TF_CIT_TMPL(tf)->tfc_tpg_attrib_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_attrib_cit);
config_group_init_type_name(&se_tpg->tpg_auth_group, "auth",
- &TF_CIT_TMPL(tf)->tfc_tpg_auth_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_auth_cit);
config_group_init_type_name(&se_tpg->tpg_param_group, "param",
- &TF_CIT_TMPL(tf)->tfc_tpg_param_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_param_cit);
return &se_tpg->tpg_group;
}
wwn->wwn_group.default_groups[1] = NULL;
config_group_init_type_name(&wwn->wwn_group, name,
- &TF_CIT_TMPL(tf)->tfc_tpg_cit);
+ &tf->tf_cit_tmpl.tfc_tpg_cit);
config_group_init_type_name(&wwn->fabric_stat_group, "fabric_statistics",
- &TF_CIT_TMPL(tf)->tfc_wwn_fabric_stats_cit);
+ &tf->tf_cit_tmpl.tfc_wwn_fabric_stats_cit);
return &wwn->wwn_group;
}
pr_debug("CORE_HBA[%d] - TCM FILEIO HBA Driver %s on Generic"
" Target Core Stack %s\n", hba->hba_id, FD_VERSION,
TARGET_CORE_MOD_VERSION);
- pr_debug("CORE_HBA[%d] - Attached FILEIO HBA: %u to Generic"
- " MaxSectors: %u\n",
- hba->hba_id, fd_host->fd_host_id, FD_MAX_SECTORS);
+ pr_debug("CORE_HBA[%d] - Attached FILEIO HBA: %u to Generic\n",
+ hba->hba_id, fd_host->fd_host_id);
return 0;
}
}
dev->dev_attrib.hw_block_size = fd_dev->fd_block_size;
- dev->dev_attrib.hw_max_sectors = FD_MAX_SECTORS;
+ dev->dev_attrib.max_bytes_per_io = FD_MAX_BYTES;
+ dev->dev_attrib.hw_max_sectors = FD_MAX_BYTES / fd_dev->fd_block_size;
dev->dev_attrib.hw_queue_depth = FD_MAX_DEVICE_QUEUE_DEPTH;
if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
} else {
ret = fd_do_rw(cmd, sgl, sgl_nents, 1);
/*
- * Perform implict vfs_fsync_range() for fd_do_writev() ops
+ * Perform implicit vfs_fsync_range() for fd_do_writev() ops
* for SCSI WRITEs with Forced Unit Access (FUA) set.
* Allow this to happen independent of WCE=0 setting.
*/
#define FD_DEVICE_QUEUE_DEPTH 32
#define FD_MAX_DEVICE_QUEUE_DEPTH 128
#define FD_BLOCKSIZE 512
-#define FD_MAX_SECTORS 2048
+/*
+ * Limited by the number of iovecs (2048) per vfs_[writev,readv] call
+ */
+#define FD_MAX_BYTES 8388608
#define RRF_EMULATE_CDB 0x01
#define RRF_GOT_LBA 0x02
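The 8388608 figure follows from the iovec limit in the comment, assuming one 4 KiB page per iovec (an assumption; page size is architecture dependent):

    FD_MAX_BYTES = 2048 iovecs * 4096 bytes = 8388608 bytes (8 MiB)
    hw_max_sectors = 8388608 / 512  = 16384 (512-byte logical blocks)
    hw_max_sectors = 8388608 / 4096 = 2048  (4 KiB logical blocks)

which matches the FD_MAX_BYTES / fd_block_size computation in the earlier hunk.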
return iblock_emulate_read_cap_with_block_size(dev, bd, q);
}
+static sector_t iblock_get_alignment_offset_lbas(struct se_device *dev)
+{
+ struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+ struct block_device *bd = ib_dev->ibd_bd;
+ int ret;
+
+ ret = bdev_alignment_offset(bd);
+ if (ret == -1)
+ return 0;
+
+ /* convert offset-bytes to offset-lbas */
+ return ret / bdev_logical_block_size(bd);
+}
+
+static unsigned int iblock_get_lbppbe(struct se_device *dev)
+{
+ struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+ struct block_device *bd = ib_dev->ibd_bd;
+ int logs_per_phys = bdev_physical_block_size(bd) / bdev_logical_block_size(bd);
+
+ return ilog2(logs_per_phys);
+}
+
+static unsigned int iblock_get_io_min(struct se_device *dev)
+{
+ struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+ struct block_device *bd = ib_dev->ibd_bd;
+
+ return bdev_io_min(bd);
+}
+
+static unsigned int iblock_get_io_opt(struct se_device *dev)
+{
+ struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+ struct block_device *bd = ib_dev->ibd_bd;
+
+ return bdev_io_opt(bd);
+}
+
static struct sbc_ops iblock_sbc_ops = {
.execute_rw = iblock_execute_rw,
.execute_sync_cache = iblock_execute_sync_cache,
.show_configfs_dev_params = iblock_show_configfs_dev_params,
.get_device_type = sbc_get_device_type,
.get_blocks = iblock_get_blocks,
+ .get_alignment_offset_lbas = iblock_get_alignment_offset_lbas,
+ .get_lbppbe = iblock_get_lbppbe,
+ .get_io_min = iblock_get_io_min,
+ .get_io_opt = iblock_get_io_opt,
.get_write_cache = iblock_get_write_cache,
};
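As a worked example of iblock_get_lbppbe() above: on a 512e disk (512-byte logical, 4096-byte physical sectors), logs_per_phys = 4096 / 512 = 8 and ilog2(8) = 3, so READ CAPACITY (16) reports an exponent of 3, i.e. 2^3 logical blocks per physical block. A standalone sketch of the same arithmetic, with __builtin_ctz standing in for the kernel's ilog2 (valid here because the ratio is a power of two):

    #include <stdio.h>

    int main(void)
    {
            unsigned logical = 512, physical = 4096;        /* typical 512e drive */
            unsigned logs_per_phys = physical / logical;    /* 8 */
            unsigned lbppbe = __builtin_ctz(logs_per_phys); /* ilog2(8) == 3 */

            printf("lbppbe = %u\n", lbppbe);
            return 0;
    }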
struct se_node_acl *__core_tpg_get_initiator_node_acl(struct se_portal_group *tpg,
const char *);
-struct se_node_acl *core_tpg_get_initiator_node_acl(struct se_portal_group *tpg,
- unsigned char *);
void core_tpg_add_node_to_devs(struct se_node_acl *, struct se_portal_group *);
void core_tpg_wait_for_nacl_pr_ref(struct se_node_acl *);
struct se_lun *core_tpg_pre_addlun(struct se_portal_group *, u32);
int transport_dump_vpd_ident_type(struct t10_vpd *, unsigned char *, int);
int transport_dump_vpd_ident(struct t10_vpd *, unsigned char *, int);
bool target_stop_cmd(struct se_cmd *cmd, unsigned long *flags);
-int transport_clear_lun_from_sessions(struct se_lun *);
+int transport_clear_lun_ref(struct se_lun *);
void transport_send_task_abort(struct se_cmd *);
sense_reason_t target_cmd_size_check(struct se_cmd *cmd, unsigned int size);
void target_qf_do_work(struct work_struct *work);
* statement.
*/
if (!ret && !other_cdb) {
- pr_debug("Allowing explict CDB: 0x%02x for %s"
+ pr_debug("Allowing explicit CDB: 0x%02x for %s"
" reservation holder\n", cdb[0],
core_scsi3_pr_dump_type(pr_reg_type));
*/
if (!registered_nexus) {
- pr_debug("Allowing implict CDB: 0x%02x"
+ pr_debug("Allowing implicit CDB: 0x%02x"
" for %s reservation on unregistered"
" nexus\n", cdb[0],
core_scsi3_pr_dump_type(pr_reg_type));
* allow commands from registered nexuses.
*/
- pr_debug("Allowing implict CDB: 0x%02x for %s"
+ pr_debug("Allowing implicit CDB: 0x%02x for %s"
" reservation\n", cdb[0],
core_scsi3_pr_dump_type(pr_reg_type));
alua_port_list) {
/*
* This pointer will be NULL for demo mode MappedLUNs
- * that have not been make explict via a ConfigFS
+ * that have not been made explicit via a ConfigFS
* MappedLUN group for the SCSI Initiator Node ACL.
*/
if (!deve_tmp->se_lun_acl)
smp_mb__after_atomic_dec();
}
-static int core_scsi3_check_implict_release(
+static int core_scsi3_check_implicit_release(
struct se_device *dev,
struct t10_pr_registration *pr_reg)
{
}
if (pr_res_holder == pr_reg) {
/*
- * Perform an implict RELEASE if the registration that
+ * Perform an implicit RELEASE if the registration that
* is being released is holding the reservation.
*
* From spc4r17, section 5.7.11.1:
* For 'All Registrants' reservation types, all existing
* registrations are still processed as reservation holders
* in core_scsi3_pr_seq_non_holder() after the initial
- * reservation holder is implictly released here.
+ * reservation holder is implicitly released here.
*/
} else if (pr_reg->pr_reg_all_tg_pt &&
(!strcmp(pr_res_holder->pr_reg_nacl->initiatorname,
/*
* sa_res_key=0 Unregister Reservation Key for registered I_T Nexus.
*/
- pr_holder = core_scsi3_check_implict_release(
+ pr_holder = core_scsi3_check_implicit_release(
cmd->se_dev, pr_reg);
if (pr_holder < 0) {
ret = TCM_RESERVATION_CONFLICT;
struct se_device *dev,
struct se_node_acl *se_nacl,
struct t10_pr_registration *pr_reg,
- int explict)
+ int explicit)
{
struct target_core_fabric_ops *tfo = se_nacl->se_tpg->se_tpg_tfo;
char i_buf[PR_REG_ISID_ID_LEN];
pr_debug("SPC-3 PR [%s] Service Action: %s RELEASE cleared"
" reservation holder TYPE: %s ALL_TG_PT: %d\n",
- tfo->get_fabric_name(), (explict) ? "explict" : "implict",
+ tfo->get_fabric_name(), (explicit) ? "explicit" : "implicit",
core_scsi3_pr_dump_type(pr_reg->pr_res_type),
(pr_reg->pr_reg_all_tg_pt) ? 1 : 0);
pr_debug("SPC-3 PR [%s] RELEASE Node: %s%s\n",
memset(i_buf, 0, PR_REG_ISID_ID_LEN);
core_pr_dump_initiator_port(pr_reg, i_buf, PR_REG_ISID_ID_LEN);
/*
- * Do an implict RELEASE of the existing reservation.
+ * Do an implicit RELEASE of the existing reservation.
*/
if (dev->dev_pr_res_holder)
__core_scsi3_complete_pro_release(dev, nacl,
* 5.7.11.4 Preempting, Table 52 and Figure 7.
*
* For a ZERO SA Reservation key, release
- * all other registrations and do an implict
+ * all other registrations and do an implicit
* release of active persistent reservation.
*
* For a non-ZERO SA Reservation key, only
#include <linux/string.h>
#include <linux/parser.h>
#include <linux/timer.h>
-#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <scsi/scsi.h>
buf[9] = (dev->dev_attrib.block_size >> 16) & 0xff;
buf[10] = (dev->dev_attrib.block_size >> 8) & 0xff;
buf[11] = dev->dev_attrib.block_size & 0xff;
+
+ if (dev->transport->get_lbppbe)
+ buf[13] = dev->transport->get_lbppbe(dev) & 0x0f;
+
+ if (dev->transport->get_alignment_offset_lbas) {
+ u16 lalba = dev->transport->get_alignment_offset_lbas(dev);
+ buf[14] = (lalba >> 8) & 0x3f;
+ buf[15] = lalba & 0xff;
+ }
+
/*
* Set Thin Provisioning Enable bit following sbc3r22 in section
* READ CAPACITY (16) byte 14 if emulate_tpu or emulate_tpws is enabled.
*/
if (dev->dev_attrib.emulate_tpu || dev->dev_attrib.emulate_tpws)
- buf[14] = 0x80;
+ buf[14] |= 0x80;
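Putting the hunks above together, bytes 13-15 of the READ CAPACITY (16) payload end up packed as below (a sketch with hypothetical parameter names; the masks and the TPE bit are as in the patch):

    static void pack_rc16_tail(unsigned char *buf, unsigned lbppbe,
                               unsigned lalba, int tpe)
    {
            buf[13] = lbppbe & 0x0f;        /* LOGICAL BLOCKS PER PHYSICAL BLOCK EXPONENT */
            buf[14] = (lalba >> 8) & 0x3f;  /* LOWEST ALIGNED LBA, high bits */
            buf[15] = lalba & 0xff;         /* LOWEST ALIGNED LBA, low bits */
            if (tpe)
                    buf[14] |= 0x80;        /* TPE bit per sbc3r22 */
    }

Note the change from "buf[14] =" to "buf[14] |=" for the TPE bit, which keeps the newly reported LOWEST ALIGNED LBA bits from being clobbered.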
rbuf = transport_kmap_data_sg(cmd);
if (rbuf) {
buf[5] = 0x80;
/*
- * Set TPGS field for explict and/or implict ALUA access type
+ * Set TPGS field for explicit and/or implicit ALUA access type
 * and operation.
*
* See spc4r17 section 6.4.2 Table 135
struct se_device *dev = cmd->se_dev;
u32 max_sectors;
int have_tp = 0;
+ int opt, min;
/*
* Following spc3r22 section 6.5.3 Block Limits VPD page, when
/*
* Set OPTIMAL TRANSFER LENGTH GRANULARITY
*/
- put_unaligned_be16(1, &buf[6]);
+ if (dev->transport->get_io_min && (min = dev->transport->get_io_min(dev)))
+ put_unaligned_be16(min / dev->dev_attrib.block_size, &buf[6]);
+ else
+ put_unaligned_be16(1, &buf[6]);
/*
* Set MAXIMUM TRANSFER LENGTH
/*
* Set OPTIMAL TRANSFER LENGTH
*/
- put_unaligned_be32(dev->dev_attrib.optimal_sectors, &buf[12]);
+ if (dev->transport->get_io_opt && (opt = dev->transport->get_io_opt(dev)))
+ put_unaligned_be32(opt / dev->dev_attrib.block_size, &buf[12]);
+ else
+ put_unaligned_be32(dev->dev_attrib.optimal_sectors, &buf[12]);
/*
* Exit now if we don't support TP.
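For the Block Limits VPD hunks above, the byte-valued queue limits are converted to logical-block units. For example, with a hypothetical RAID volume exporting io_min = 65536 and io_opt = 524288 over 512-byte blocks:

    OPTIMAL TRANSFER LENGTH GRANULARITY = 65536 / 512  = 128 blocks
    OPTIMAL TRANSFER LENGTH             = 524288 / 512 = 1024 blocks

with the old hard-coded granularity of 1 and optimal_sectors values kept as the fallback when the backend does not implement the new getters.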
*size = (cdb[3] << 8) + cdb[4];
/*
- * Do implict HEAD_OF_QUEUE processing for INQUIRY.
+ * Do implicit HEAD_OF_QUEUE processing for INQUIRY.
* See spc4r17 section 5.3
*/
cmd->sam_task_attr = MSG_HEAD_TAG;
cmd->execute_cmd = spc_emulate_report_luns;
*size = (cdb[6] << 24) | (cdb[7] << 16) | (cdb[8] << 8) | cdb[9];
/*
- * Do implict HEAD_OF_QUEUE processing for REPORT_LUNS
+ * Do implicit HEAD_OF_QUEUE processing for REPORT_LUNS
* See spc4r17 section 5.3
*/
cmd->sam_task_attr = MSG_HEAD_TAG;
#include <linux/utsname.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
-#include <linux/blkdev.h>
#include <linux/configfs.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
struct se_device *dev =
container_of(sgrps, struct se_device, dev_stat_grps);
- return snprintf(page, PAGE_SIZE, "%u\n", dev->num_resets);
+ return snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&dev->num_resets));
}
DEV_STAT_SCSI_TGT_DEV_ATTR_RO(resets);
container_of(sgrps, struct se_device, dev_stat_grps);
/* scsiLuNumCommands */
- return snprintf(page, PAGE_SIZE, "%llu\n",
- (unsigned long long)dev->num_cmds);
+ return snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&dev->num_cmds));
}
DEV_STAT_SCSI_LU_ATTR_RO(num_cmds);
container_of(sgrps, struct se_device, dev_stat_grps);
/* scsiLuReadMegaBytes */
- return snprintf(page, PAGE_SIZE, "%u\n", (u32)(dev->read_bytes >> 20));
+ return snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&dev->read_bytes) >> 20);
}
DEV_STAT_SCSI_LU_ATTR_RO(read_mbytes);
container_of(sgrps, struct se_device, dev_stat_grps);
/* scsiLuWrittenMegaBytes */
- return snprintf(page, PAGE_SIZE, "%u\n", (u32)(dev->write_bytes >> 20));
+ return snprintf(page, PAGE_SIZE, "%lu\n",
+ atomic_long_read(&dev->write_bytes) >> 20);
}
DEV_STAT_SCSI_LU_ATTR_RO(write_mbytes);
container_of(sgrps, struct se_device, dev_stat_grps);
/* scsiLuInResets */
- return snprintf(page, PAGE_SIZE, "%u\n", dev->num_resets);
+ return snprintf(page, PAGE_SIZE, "%lu\n", atomic_long_read(&dev->num_resets));
}
DEV_STAT_SCSI_LU_ATTR_RO(resets);
pr_debug("LUN_RESET: SCSI-2 Released reservation\n");
}
- spin_lock_irq(&dev->stats_lock);
- dev->num_resets++;
- spin_unlock_irq(&dev->stats_lock);
+ atomic_long_inc(&dev->num_resets);
pr_debug("LUN_RESET: %s for [%s] Complete\n",
(preempt_and_abort_list) ? "Preempt" : "TMR",
return acl;
}
+EXPORT_SYMBOL(core_tpg_get_initiator_node_acl);
/* core_tpg_add_node_to_devs():
*
snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname);
acl->se_tpg = tpg;
acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX);
- spin_lock_init(&acl->stats_lock);
acl->dynamic_node_acl = 1;
tpg->se_tpg_tfo->set_default_node_attributes(acl);
snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname);
acl->se_tpg = tpg;
acl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX);
- spin_lock_init(&acl->stats_lock);
tpg->se_tpg_tfo->set_default_node_attributes(acl);
}
EXPORT_SYMBOL(core_tpg_set_initiator_node_tag);
+static void core_tpg_lun_ref_release(struct percpu_ref *ref)
+{
+ struct se_lun *lun = container_of(ref, struct se_lun, lun_ref);
+
+ complete(&lun->lun_ref_comp);
+}
+
static int core_tpg_setup_virtual_lun0(struct se_portal_group *se_tpg)
{
/* Set in core_dev_setup_virtual_lun0() */
atomic_set(&lun->lun_acl_count, 0);
init_completion(&lun->lun_shutdown_comp);
INIT_LIST_HEAD(&lun->lun_acl_list);
- INIT_LIST_HEAD(&lun->lun_cmd_list);
spin_lock_init(&lun->lun_acl_lock);
- spin_lock_init(&lun->lun_cmd_lock);
spin_lock_init(&lun->lun_sep_lock);
+ init_completion(&lun->lun_ref_comp);
ret = core_tpg_post_addlun(se_tpg, lun, lun_access, dev);
if (ret < 0)
atomic_set(&lun->lun_acl_count, 0);
init_completion(&lun->lun_shutdown_comp);
INIT_LIST_HEAD(&lun->lun_acl_list);
- INIT_LIST_HEAD(&lun->lun_cmd_list);
spin_lock_init(&lun->lun_acl_lock);
- spin_lock_init(&lun->lun_cmd_lock);
spin_lock_init(&lun->lun_sep_lock);
+ init_completion(&lun->lun_ref_comp);
}
se_tpg->se_tpg_type = se_tpg_type;
{
int ret;
- ret = core_dev_export(lun_ptr, tpg, lun);
+ ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release);
if (ret < 0)
return ret;
+ ret = core_dev_export(lun_ptr, tpg, lun);
+ if (ret < 0) {
+ percpu_ref_cancel_init(&lun->lun_ref);
+ return ret;
+ }
+
spin_lock(&tpg->tpg_lun_lock);
lun->lun_access = lun_access;
lun->lun_status = TRANSPORT_LUN_STATUS_ACTIVE;
return 0;
}
-static void core_tpg_shutdown_lun(
- struct se_portal_group *tpg,
- struct se_lun *lun)
-{
- core_clear_lun_from_tpg(lun, tpg);
- transport_clear_lun_from_sessions(lun);
-}
-
struct se_lun *core_tpg_pre_dellun(
struct se_portal_group *tpg,
u32 unpacked_lun)
struct se_portal_group *tpg,
struct se_lun *lun)
{
- core_tpg_shutdown_lun(tpg, lun);
+ core_clear_lun_from_tpg(lun, tpg);
+ transport_clear_lun_ref(lun);
core_dev_unexport(lun->lun_se_dev, tpg, lun);
#include <linux/string.h>
#include <linux/timer.h>
#include <linux/slab.h>
-#include <linux/blkdev.h>
#include <linux/spinlock.h>
#include <linux/kthread.h>
#include <linux/in.h>
pr_debug("TARGET_CORE[%s]: Deregistered fabric_sess\n",
se_tpg->se_tpg_tfo->get_fabric_name());
/*
- * If last kref is dropping now for an explict NodeACL, awake sleeping
+ * If last kref is dropping now for an explicit NodeACL, awake sleeping
* ->acl_free_comp caller to wakeup configfs se_node_acl->acl_group
* removal context.
*/
if (write_pending)
cmd->t_state = TRANSPORT_WRITE_PENDING;
- /*
- * Determine if IOCTL context caller in requesting the stopping of this
- * command for LUN shutdown purposes.
- */
- if (cmd->transport_state & CMD_T_LUN_STOP) {
- pr_debug("%s:%d CMD_T_LUN_STOP for ITT: 0x%08x\n",
- __func__, __LINE__, cmd->se_tfo->get_task_tag(cmd));
-
- cmd->transport_state &= ~CMD_T_ACTIVE;
- if (remove_from_lists)
- target_remove_from_state_list(cmd);
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
-
- complete(&cmd->transport_lun_stop_comp);
- return 1;
- }
-
if (remove_from_lists) {
target_remove_from_state_list(cmd);
static void transport_lun_remove_cmd(struct se_cmd *cmd)
{
struct se_lun *lun = cmd->se_lun;
- unsigned long flags;
- if (!lun)
+ if (!lun || !cmd->lun_ref_active)
return;
- spin_lock_irqsave(&lun->lun_cmd_lock, flags);
- if (!list_empty(&cmd->se_lun_node))
- list_del_init(&cmd->se_lun_node);
- spin_unlock_irqrestore(&lun->lun_cmd_lock, flags);
+ percpu_ref_put(&lun->lun_ref);
}
void transport_cmd_finish_abort(struct se_cmd *cmd, int remove)
cmd->transport_state |= CMD_T_FAILED;
/*
- * Check for case where an explict ABORT_TASK has been received
+ * Check for case where an explicit ABORT_TASK has been received
* and transport_wait_for_tasks() will be waiting for completion..
*/
if (cmd->transport_state & CMD_T_ABORTED &&
int task_attr,
unsigned char *sense_buffer)
{
- INIT_LIST_HEAD(&cmd->se_lun_node);
INIT_LIST_HEAD(&cmd->se_delayed_node);
INIT_LIST_HEAD(&cmd->se_qf_node);
INIT_LIST_HEAD(&cmd->se_cmd_list);
INIT_LIST_HEAD(&cmd->state_list);
- init_completion(&cmd->transport_lun_fe_stop_comp);
- init_completion(&cmd->transport_lun_stop_comp);
init_completion(&cmd->t_transport_stop_comp);
init_completion(&cmd->cmd_wait_comp);
init_completion(&cmd->task_stop_comp);
/*
 * If the received CDB has already been aborted, stop processing it here.
*/
- if (transport_check_aborted_status(cmd, 1)) {
- complete(&cmd->transport_lun_stop_comp);
+ if (transport_check_aborted_status(cmd, 1))
return;
- }
- /*
- * Determine if IOCTL context caller in requesting the stopping of this
- * command for LUN shutdown purposes.
- */
- spin_lock_irq(&cmd->t_state_lock);
- if (cmd->transport_state & CMD_T_LUN_STOP) {
- pr_debug("%s:%d CMD_T_LUN_STOP for ITT: 0x%08x\n",
- __func__, __LINE__, cmd->se_tfo->get_task_tag(cmd));
-
- cmd->transport_state &= ~CMD_T_ACTIVE;
- spin_unlock_irq(&cmd->t_state_lock);
- complete(&cmd->transport_lun_stop_comp);
- return;
- }
/*
* Determine if frontend context caller is requesting the stopping of
* this command for frontend exceptions.
*/
+ spin_lock_irq(&cmd->t_state_lock);
if (cmd->transport_state & CMD_T_STOP) {
pr_debug("%s:%d CMD_T_STOP for ITT: 0x%08x\n",
__func__, __LINE__,
}
EXPORT_SYMBOL(target_wait_for_sess_cmds);
-/* transport_lun_wait_for_tasks():
- *
- * Called from ConfigFS context to stop the passed struct se_cmd to allow
- * an struct se_lun to be successfully shutdown.
- */
-static int transport_lun_wait_for_tasks(struct se_cmd *cmd, struct se_lun *lun)
-{
- unsigned long flags;
- int ret = 0;
-
- /*
- * If the frontend has already requested this struct se_cmd to
- * be stopped, we can safely ignore this struct se_cmd.
- */
- spin_lock_irqsave(&cmd->t_state_lock, flags);
- if (cmd->transport_state & CMD_T_STOP) {
- cmd->transport_state &= ~CMD_T_LUN_STOP;
-
- pr_debug("ConfigFS ITT[0x%08x] - CMD_T_STOP, skipping\n",
- cmd->se_tfo->get_task_tag(cmd));
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- transport_cmd_check_stop(cmd, false, false);
- return -EPERM;
- }
- cmd->transport_state |= CMD_T_LUN_FE_STOP;
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
-
- // XXX: audit task_flags checks.
- spin_lock_irqsave(&cmd->t_state_lock, flags);
- if ((cmd->transport_state & CMD_T_BUSY) &&
- (cmd->transport_state & CMD_T_SENT)) {
- if (!target_stop_cmd(cmd, &flags))
- ret++;
- }
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
-
- pr_debug("ConfigFS: cmd: %p stop tasks ret:"
- " %d\n", cmd, ret);
- if (!ret) {
- pr_debug("ConfigFS: ITT[0x%08x] - stopping cmd....\n",
- cmd->se_tfo->get_task_tag(cmd));
- wait_for_completion(&cmd->transport_lun_stop_comp);
- pr_debug("ConfigFS: ITT[0x%08x] - stopped cmd....\n",
- cmd->se_tfo->get_task_tag(cmd));
- }
-
- return 0;
-}
-
-static void __transport_clear_lun_from_sessions(struct se_lun *lun)
-{
- struct se_cmd *cmd = NULL;
- unsigned long lun_flags, cmd_flags;
- /*
- * Do exception processing and return CHECK_CONDITION status to the
- * Initiator Port.
- */
- spin_lock_irqsave(&lun->lun_cmd_lock, lun_flags);
- while (!list_empty(&lun->lun_cmd_list)) {
- cmd = list_first_entry(&lun->lun_cmd_list,
- struct se_cmd, se_lun_node);
- list_del_init(&cmd->se_lun_node);
-
- spin_lock(&cmd->t_state_lock);
- pr_debug("SE_LUN[%d] - Setting cmd->transport"
- "_lun_stop for ITT: 0x%08x\n",
- cmd->se_lun->unpacked_lun,
- cmd->se_tfo->get_task_tag(cmd));
- cmd->transport_state |= CMD_T_LUN_STOP;
- spin_unlock(&cmd->t_state_lock);
-
- spin_unlock_irqrestore(&lun->lun_cmd_lock, lun_flags);
-
- if (!cmd->se_lun) {
- pr_err("ITT: 0x%08x, [i,t]_state: %u/%u\n",
- cmd->se_tfo->get_task_tag(cmd),
- cmd->se_tfo->get_cmd_state(cmd), cmd->t_state);
- BUG();
- }
- /*
- * If the Storage engine still owns the iscsi_cmd_t, determine
- * and/or stop its context.
- */
- pr_debug("SE_LUN[%d] - ITT: 0x%08x before transport"
- "_lun_wait_for_tasks()\n", cmd->se_lun->unpacked_lun,
- cmd->se_tfo->get_task_tag(cmd));
-
- if (transport_lun_wait_for_tasks(cmd, cmd->se_lun) < 0) {
- spin_lock_irqsave(&lun->lun_cmd_lock, lun_flags);
- continue;
- }
-
- pr_debug("SE_LUN[%d] - ITT: 0x%08x after transport_lun"
- "_wait_for_tasks(): SUCCESS\n",
- cmd->se_lun->unpacked_lun,
- cmd->se_tfo->get_task_tag(cmd));
-
- spin_lock_irqsave(&cmd->t_state_lock, cmd_flags);
- if (!(cmd->transport_state & CMD_T_DEV_ACTIVE)) {
- spin_unlock_irqrestore(&cmd->t_state_lock, cmd_flags);
- goto check_cond;
- }
- cmd->transport_state &= ~CMD_T_DEV_ACTIVE;
- target_remove_from_state_list(cmd);
- spin_unlock_irqrestore(&cmd->t_state_lock, cmd_flags);
-
- /*
- * The Storage engine stopped this struct se_cmd before it was
- * send to the fabric frontend for delivery back to the
- * Initiator Node. Return this SCSI CDB back with an
- * CHECK_CONDITION status.
- */
-check_cond:
- transport_send_check_condition_and_sense(cmd,
- TCM_NON_EXISTENT_LUN, 0);
- /*
- * If the fabric frontend is waiting for this iscsi_cmd_t to
- * be released, notify the waiting thread now that LU has
- * finished accessing it.
- */
- spin_lock_irqsave(&cmd->t_state_lock, cmd_flags);
- if (cmd->transport_state & CMD_T_LUN_FE_STOP) {
- pr_debug("SE_LUN[%d] - Detected FE stop for"
- " struct se_cmd: %p ITT: 0x%08x\n",
- lun->unpacked_lun,
- cmd, cmd->se_tfo->get_task_tag(cmd));
-
- spin_unlock_irqrestore(&cmd->t_state_lock,
- cmd_flags);
- transport_cmd_check_stop(cmd, false, false);
- complete(&cmd->transport_lun_fe_stop_comp);
- spin_lock_irqsave(&lun->lun_cmd_lock, lun_flags);
- continue;
- }
- pr_debug("SE_LUN[%d] - ITT: 0x%08x finished processing\n",
- lun->unpacked_lun, cmd->se_tfo->get_task_tag(cmd));
-
- spin_unlock_irqrestore(&cmd->t_state_lock, cmd_flags);
- spin_lock_irqsave(&lun->lun_cmd_lock, lun_flags);
- }
- spin_unlock_irqrestore(&lun->lun_cmd_lock, lun_flags);
-}
-
-static int transport_clear_lun_thread(void *p)
+static int transport_clear_lun_ref_thread(void *p)
{
struct se_lun *lun = p;
- __transport_clear_lun_from_sessions(lun);
+ percpu_ref_kill(&lun->lun_ref);
+
+ wait_for_completion(&lun->lun_ref_comp);
complete(&lun->lun_shutdown_comp);
return 0;
}
-int transport_clear_lun_from_sessions(struct se_lun *lun)
+int transport_clear_lun_ref(struct se_lun *lun)
{
struct task_struct *kt;
- kt = kthread_run(transport_clear_lun_thread, lun,
+ kt = kthread_run(transport_clear_lun_ref_thread, lun,
"tcm_cl_%u", lun->unpacked_lun);
if (IS_ERR(kt)) {
pr_err("Unable to start clear_lun thread\n");
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
return false;
}
- /*
- * If we are already stopped due to an external event (ie: LUN shutdown)
- * sleep until the connection can have the passed struct se_cmd back.
- * The cmd->transport_lun_stopped_sem will be upped by
- * transport_clear_lun_from_sessions() once the ConfigFS context caller
- * has completed its operation on the struct se_cmd.
- */
- if (cmd->transport_state & CMD_T_LUN_STOP) {
- pr_debug("wait_for_tasks: Stopping"
- " wait_for_completion(&cmd->t_tasktransport_lun_fe"
- "_stop_comp); for ITT: 0x%08x\n",
- cmd->se_tfo->get_task_tag(cmd));
- /*
- * There is a special case for WRITES where a FE exception +
- * LUN shutdown means ConfigFS context is still sleeping on
- * transport_lun_stop_comp in transport_lun_wait_for_tasks().
- * We go ahead and up transport_lun_stop_comp just to be sure
- * here.
- */
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- complete(&cmd->transport_lun_stop_comp);
- wait_for_completion(&cmd->transport_lun_fe_stop_comp);
- spin_lock_irqsave(&cmd->t_state_lock, flags);
-
- target_remove_from_state_list(cmd);
- /*
- * At this point, the frontend who was the originator of this
- * struct se_cmd, now owns the structure and can be released through
- * normal means below.
- */
- pr_debug("wait_for_tasks: Stopped"
- " wait_for_completion(&cmd->t_tasktransport_lun_fe_"
- "stop_comp); for ITT: 0x%08x\n",
- cmd->se_tfo->get_task_tag(cmd));
-
- cmd->transport_state &= ~CMD_T_LUN_STOP;
- }
if (!(cmd->transport_state & CMD_T_ACTIVE)) {
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
cmd->t_task_cdb[0], cmd->se_tfo->get_task_tag(cmd));
cmd->se_cmd_flags |= SCF_SENT_DELAYED_TAS;
+ cmd->scsi_status = SAM_STAT_TASK_ABORTED;
trace_target_cmd_complete(cmd);
cmd->se_tfo->queue_status(cmd);
if (cmd->se_tfo->write_pending_status(cmd) != 0) {
cmd->transport_state |= CMD_T_ABORTED;
smp_mb__after_atomic_inc();
+ return;
}
}
cmd->scsi_status = SAM_STAT_TASK_ABORTED;
#define ASCQ_2AH_RESERVATIONS_RELEASED 0x04
#define ASCQ_2AH_REGISTRATIONS_PREEMPTED 0x05
#define ASCQ_2AH_ASYMMETRIC_ACCESS_STATE_CHANGED 0x06
-#define ASCQ_2AH_IMPLICT_ASYMMETRIC_ACCESS_STATE_TRANSITION_FAILED 0x07
+#define ASCQ_2AH_IMPLICIT_ASYMMETRIC_ACCESS_STATE_TRANSITION_FAILED 0x07
#define ASCQ_2AH_PRIORITY_CHANGED 0x08
#define ASCQ_2CH_PREVIOUS_RESERVATION_CONFLICT_STATUS 0x09
struct xcopy_pt_cmd *xpt_cmd = container_of(se_cmd,
struct xcopy_pt_cmd, se_cmd);
- if (xpt_cmd->remote_port)
- kfree(se_cmd->se_lun);
-
kfree(xpt_cmd);
}
return 0;
}
- pt_cmd->se_lun = kzalloc(sizeof(struct se_lun), GFP_KERNEL);
- if (!pt_cmd->se_lun) {
- pr_err("Unable to allocate pt_cmd->se_lun\n");
- return -ENOMEM;
- }
- init_completion(&pt_cmd->se_lun->lun_shutdown_comp);
- INIT_LIST_HEAD(&pt_cmd->se_lun->lun_cmd_list);
- INIT_LIST_HEAD(&pt_cmd->se_lun->lun_acl_list);
- spin_lock_init(&pt_cmd->se_lun->lun_acl_lock);
- spin_lock_init(&pt_cmd->se_lun->lun_cmd_lock);
- spin_lock_init(&pt_cmd->se_lun->lun_sep_lock);
-
+ pt_cmd->se_lun = &se_dev->xcopy_lun;
pt_cmd->se_dev = se_dev;
pr_debug("Setup emulated se_dev: %p from se_dev\n", pt_cmd->se_dev);
- pt_cmd->se_lun->lun_se_dev = se_dev;
pt_cmd->se_cmd_flags |= SCF_SE_LUN_CMD | SCF_CMD_XCOPY_PASSTHROUGH;
pr_debug("Setup emulated se_dev: %p to pt_cmd->se_lun->lun_se_dev\n",
return 0;
out:
- if (remote_port == true)
- kfree(cmd->se_lun);
return ret;
}
#define FT_NAMELEN 32 /* length of ASCII WWPNs including pad */
#define FT_TPG_NAMELEN 32 /* max length of TPG name */
#define FT_LUN_NAMELEN 32 /* max length of LUN name */
+#define TCM_FC_DEFAULT_TAGS 512 /* tags used for per-session preallocation */
struct ft_transport_id {
__u8 format;
#include <linux/configfs.h>
#include <linux/ctype.h>
#include <linux/hash.h>
+#include <linux/percpu_ida.h>
#include <asm/unaligned.h>
#include <scsi/scsi.h>
#include <scsi/scsi_host.h>
{
struct fc_frame *fp;
struct fc_lport *lport;
+ struct se_session *se_sess;
if (!cmd)
return;
+ se_sess = cmd->sess->se_sess;
fp = cmd->req_frame;
lport = fr_dev(fp);
if (fr_seq(fp))
lport->tt.seq_release(fr_seq(fp));
fc_frame_free(fp);
+ percpu_ida_free(&se_sess->sess_tag_pool, cmd->se_cmd.map_tag);
ft_sess_put(cmd->sess); /* undo get from lookup at recv */
- kfree(cmd);
}
void ft_release_cmd(struct se_cmd *se_cmd)
{
struct ft_cmd *cmd;
struct fc_lport *lport = sess->tport->lport;
+ struct se_session *se_sess = sess->se_sess;
+ int tag;
- cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
- if (!cmd)
+ tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_ATOMIC);
+ if (tag < 0)
goto busy;
+
+ cmd = &((struct ft_cmd *)se_sess->sess_cmd_map)[tag];
+ memset(cmd, 0, sizeof(struct ft_cmd));
+
+ cmd->se_cmd.map_tag = tag;
cmd->sess = sess;
cmd->seq = lport->tt.seq_assign(lport, fp);
if (!cmd->seq) {
- kfree(cmd);
+ percpu_ida_free(&se_sess->sess_tag_pool, tag);
goto busy;
}
cmd->req_frame = fp; /* hold frame during cmd */
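The tcm_fc conversion above swaps per-command kzalloc/kfree for a preallocated, tag-indexed pool. Condensed from the hunks, with error paths trimmed:

    /* allocate: pick a free tag, index into the preallocated map */
    tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_ATOMIC);
    if (tag < 0)
            goto busy;                      /* pool exhausted: back-pressure */
    cmd = &((struct ft_cmd *)se_sess->sess_cmd_map)[tag];
    memset(cmd, 0, sizeof(*cmd));
    cmd->se_cmd.map_tag = tag;

    /* free: return the tag instead of kfree(cmd) */
    percpu_ida_free(&se_sess->sess_tag_pool, cmd->se_cmd.map_tag);

The pool itself is created by the transport_init_session_tags(TCM_FC_DEFAULT_TAGS, sizeof(struct ft_cmd)) call a few hunks below.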
/*
* Setup default attribute lists for various fabric->tf_cit_tmpl
*/
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = ft_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs =
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = ft_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs =
ft_nacl_base_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;
/*
* register the fabric for use within TCM
*/
if (!sess)
return NULL;
- sess->se_sess = transport_init_session();
+ sess->se_sess = transport_init_session_tags(TCM_FC_DEFAULT_TAGS,
+ sizeof(struct ft_cmd));
if (IS_ERR(sess->se_sess)) {
kfree(sess);
return NULL;
*/
static int __init amiserial_console_init(void)
{
+ if (!MACH_IS_AMIGA)
+ return -ENODEV;
+
register_console(&sercons);
return 0;
}
size_t canon_head;
size_t echo_head;
size_t echo_commit;
+ size_t echo_mark;
DECLARE_BITMAP(char_map, 256);
/* private to n_tty_receive_overrun (single-threaded) */
{
ldata->read_head = ldata->canon_head = ldata->read_tail = 0;
ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0;
+ ldata->echo_mark = 0;
ldata->line_start = 0;
ldata->erasing = 0;
* data at the tail to prevent a subsequent overrun */
while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) {
if (echo_buf(ldata, tail) == ECHO_OP_START) {
- if (echo_buf(ldata, tail) == ECHO_OP_ERASE_TAB)
+ if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB)
tail += 3;
else
tail += 2;
size_t head;
head = ldata->echo_head;
+ ldata->echo_mark = head;
old = ldata->echo_commit - ldata->echo_tail;
/* Process committed echoes if the accumulated # of bytes
struct n_tty_data *ldata = tty->disc_data;
size_t echoed;
- if (!L_ECHO(tty) || ldata->echo_commit == ldata->echo_tail)
+ if ((!L_ECHO(tty) && !L_ECHONL(tty)) ||
+ ldata->echo_mark == ldata->echo_tail)
return;
mutex_lock(&ldata->output_lock);
+ ldata->echo_commit = ldata->echo_mark;
echoed = __process_echoes(tty);
mutex_unlock(&ldata->output_lock);
tty->ops->flush_chars(tty);
}
+/* NB: echo_mark and echo_head should be equivalent here */
static void flush_echoes(struct tty_struct *tty)
{
struct n_tty_data *ldata = tty->disc_data;
- if (!L_ECHO(tty) || ldata->echo_commit == ldata->echo_head)
+ if ((!L_ECHO(tty) && !L_ECHONL(tty)) ||
+ ldata->echo_commit == ldata->echo_head)
return;
mutex_lock(&ldata->output_lock);
found = 1;
size = N_TTY_BUF_SIZE - tail;
- n = (found + eol + size) & (N_TTY_BUF_SIZE - 1);
+ n = eol - tail;
+ if (n > 4096)
+ n += 4096;
+ n += found;
c = n;
if (found && read_buf(ldata, eol) == __DISABLED_CHAR) {
if (time)
timeout = time;
}
- mutex_unlock(&ldata->atomic_read_lock);
- remove_wait_queue(&tty->read_wait, &wait);
+ n_tty_set_room(tty);
+ up_read(&tty->termios_rwsem);
+ remove_wait_queue(&tty->read_wait, &wait);
if (!waitqueue_active(&tty->read_wait))
ldata->minimum_to_wake = minimum;
+ mutex_unlock(&ldata->atomic_read_lock);
+
__set_current_state(TASK_RUNNING);
if (b - buf)
retval = b - buf;
- n_tty_set_room(tty);
- up_read(&tty->termios_rwsem);
return retval;
}
if (offset == UART_LCR) {
int tries = 1000;
while (tries--) {
- if (value == p->serial_in(p, UART_LCR))
+ unsigned int lcr = p->serial_in(p, UART_LCR);
+ if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))
return;
dw8250_force_idle(p);
writeb(value, p->membase + (UART_LCR << p->regshift));
if (offset == UART_LCR) {
int tries = 1000;
while (tries--) {
- if (value == p->serial_in(p, UART_LCR))
+ unsigned int lcr = p->serial_in(p, UART_LCR);
+ if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))
return;
dw8250_force_idle(p);
writel(value, p->membase + (UART_LCR << p->regshift));
static const struct acpi_device_id dw8250_acpi_match[] = {
{ "INT33C4", 0 },
{ "INT33C5", 0 },
+ { "INT3434", 0 },
+ { "INT3435", 0 },
{ "80860F0A", 0 },
{ },
};
accept kernel parameters in both forms like 8250_core.nr_uarts=4 and
 8250.nr_uarts=4. We have now renamed the module back to 8250, but if
 anybody noticed in 3.7 and changed their userspace we still have to
- keep the 8350_core.* options around until they revert the changes
+ keep the 8250_core.* options around until they revert the changes
they already did.
If 8250 is built as a module, this adds 8250_core alias instead.
/* Probe ports */
pmz_probe();
+ if (pmz_ports_count == 0)
+ return -ENODEV;
+
/* TODO: Autoprobe console based on OF */
/* pmz_console.index = i; */
register_console(&pmz_console);
desc = s->desc_rx[new];
if (dma_async_is_tx_complete(s->chan_rx, s->active_rx, NULL, NULL) !=
- DMA_SUCCESS) {
+ DMA_COMPLETE) {
/* Handle incomplete DMA receive */
struct dma_chan *chan = s->chan_rx;
struct shdma_desc *sh_desc = container_of(desc,
continue;
}
+#ifdef SUPPORT_SYSRQ
/*
* uart_handle_sysrq_char() doesn't work if
* spinlocked, for some reason
}
spin_lock(&port->lock);
}
+#endif
port->icount.rx++;
filp->f_op = &tty_fops;
goto retry_open;
}
+ clear_bit(TTY_HUPPED, &tty->flags);
tty_unlock(tty);
return atomic_long_add_return(delta, (atomic_long_t *)&sem->count);
}
+/*
+ * ldsem_cmpxchg() updates @*old with the last-known sem->count value.
+ * Returns 1 if count was successfully changed; @*old will have @new value.
+ * Returns 0 if count was not changed; @*old will have most recent sem->count
+ */
static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem)
{
- long tmp = *old;
- *old = atomic_long_cmpxchg(&sem->count, *old, new);
- return *old == tmp;
+ long tmp = atomic_long_cmpxchg(&sem->count, *old, new);
+ if (tmp == *old) {
+ *old = new;
+ return 1;
+ } else {
+ *old = tmp;
+ return 0;
+ }
}
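The corrected helper lets callers retry a compare-and-swap without re-reading sem->count, because a failed attempt leaves the latest value in *old. A hypothetical caller loop (adjust is a placeholder for whatever delta the caller wants applied):

    long old = sem->count;          /* racy snapshot is fine: the CAS validates it */

    /* retry until the swap lands; old is refreshed on each failure */
    while (!ldsem_cmpxchg(&old, old + adjust, sem))
            ;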
/*
return -EINVAL;
mem = idev->info->mem + mi;
+ if (mem->addr & ~PAGE_MASK)
+ return -ENODEV;
if (vma->vm_end - vma->vm_start > mem->size)
return -EINVAL;
: CI_ROLE_GADGET;
}
+ /* only update vbus status for peripheral */
+ if (ci->role == CI_ROLE_GADGET)
+ ci_handle_vbus_change(ci);
+
ret = ci_role_start(ci, ci->role);
if (ret) {
dev_err(dev, "can't start %s role\n", ci_role(ci)->name);
return ret;
disable_reg:
- regulator_disable(ci->platdata->reg_vbus);
+ if (ci->platdata->reg_vbus)
+ regulator_disable(ci->platdata->reg_vbus);
put_hcd:
usb_put_hcd(hcd);
pm_runtime_no_callbacks(&ci->gadget.dev);
pm_runtime_enable(&ci->gadget.dev);
- /* Update ci->vbus_active */
- ci_handle_vbus_change(ci);
-
return retval;
destroy_eps:
static const struct usb_device_id acm_ids[] = {
/* quirky and broken devices */
+ { USB_DEVICE(0x17ef, 0x7000), /* Lenovo USB modem */
+ .driver_info = NO_UNION_NORMAL, },/* has no union descriptor */
{ USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
},
{
/* need autopm_get/put here to ensure the usbcore sees the new value */
int rv = usb_autopm_get_interface(intf);
- if (rv < 0)
- goto err;
intf->needs_remote_wakeup = on;
- usb_autopm_put_interface(intf);
-err:
- return rv;
+ if (!rv)
+ usb_autopm_put_interface(intf);
+ return 0;
}
static int wdm_probe(struct usb_interface *intf, const struct usb_device_id *id)
hub->ports[i - 1]->child;
dev_dbg(hub_dev, "warm reset port %d\n", i);
- if (!udev || !(portstatus &
- USB_PORT_STAT_CONNECTION)) {
+ if (!udev ||
+ !(portstatus & USB_PORT_STAT_CONNECTION) ||
+ udev->state == USB_STATE_NOTATTACHED) {
status = hub_port_reset(hub, i,
NULL, HUB_BH_RESET_TIME,
true);
if (!hub)
return NULL;
- return DEVICE_ACPI_HANDLE(&hub->ports[port1 - 1]->dev);
+ return ACPI_HANDLE(&hub->ports[port1 - 1]->dev);
}
#endif
}
/* root hub's parent is the usb hcd. */
- parent_handle = DEVICE_ACPI_HANDLE(dev->parent);
+ parent_handle = ACPI_HANDLE(dev->parent);
*handle = acpi_get_child(parent_handle, udev->portnum);
if (!*handle)
return -ENODEV;
raw_port_num = usb_hcd_find_raw_port_number(hcd,
port_num);
- *handle = acpi_get_child(DEVICE_ACPI_HANDLE(&udev->dev),
+ *handle = acpi_get_child(ACPI_HANDLE(&udev->dev),
raw_port_num);
if (!*handle)
return -ENODEV;
if (IS_ERR(regs))
return PTR_ERR(regs);
- usb_phy_set_suspend(dwc->usb2_phy, 0);
- usb_phy_set_suspend(dwc->usb3_phy, 0);
-
spin_lock_init(&dwc->lock);
platform_set_drvdata(pdev, dwc);
goto err0;
}
+ usb_phy_set_suspend(dwc->usb2_phy, 0);
+ usb_phy_set_suspend(dwc->usb3_phy, 0);
+
ret = dwc3_event_buffers_setup(dwc);
if (ret) {
dev_err(dwc->dev, "failed to setup event buffers\n");
dwc3_event_buffers_cleanup(dwc);
err1:
+ usb_phy_set_suspend(dwc->usb2_phy, 1);
+ usb_phy_set_suspend(dwc->usb3_phy, 1);
dwc3_core_exit(dwc);
err0:
dep = dwc3_wIndex_to_dep(dwc, wIndex);
if (!dep)
return -EINVAL;
+ if (set == 0 && (dep->flags & DWC3_EP_WEDGE))
+ break;
ret = __dwc3_gadget_ep_set_halt(dep, set);
if (ret)
return -EINVAL;
else
dep->flags |= DWC3_EP_STALL;
} else {
- if (dep->flags & DWC3_EP_WEDGE)
- return 0;
-
ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
DWC3_DEPCMD_CLEARSTALL, ¶ms);
if (ret)
value ? "set" : "clear",
dep->name);
else
- dep->flags &= ~DWC3_EP_STALL;
+ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
}
return ret;
config USB_CONFIGFS_MASS_STORAGE
boolean "Mass storage"
depends on USB_CONFIGFS
+ depends on BLOCK
select USB_F_MASS_STORAGE
help
The Mass Storage Gadget acts as a USB Mass Storage disk drive.
bitmap_zero(f->endpoints, 32);
}
cdev->config = NULL;
+ cdev->delayed_status = 0;
}
static int set_config(struct usb_composite_dev *cdev,
{
struct ffs_data *ffs = kzalloc(sizeof *ffs, GFP_KERNEL);
if (unlikely(!ffs))
- return 0;
+ return NULL;
ENTER();
*/
DBG(fsg, "bulk reset request\n");
raise_exception(fsg->common, FSG_STATE_RESET);
- return DELAYED_STATUS;
+ return USB_GADGET_DELAYED_STATUS;
case US_BULK_GET_MAX_LUN:
if (ctrl->bRequestType !=
return true;
}
-static int sleep_thread(struct fsg_common *common)
+static int sleep_thread(struct fsg_common *common, bool can_freeze)
{
int rc = 0;
/* Wait until a signal arrives or we are woken up */
for (;;) {
- try_to_freeze();
+ if (can_freeze)
+ try_to_freeze();
set_current_state(TASK_INTERRUPTIBLE);
if (signal_pending(current)) {
rc = -EINTR;
/* Wait for the next buffer to become available */
bh = common->next_buffhd_to_fill;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, false);
if (rc)
return rc;
}
}
/* Wait for something to happen */
- rc = sleep_thread(common);
+ rc = sleep_thread(common, false);
if (rc)
return rc;
}
}
/* Otherwise wait for something to happen */
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
/* Wait for the next buffer to become available */
bh = common->next_buffhd_to_fill;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
bh = common->next_buffhd_to_fill;
common->next_buffhd_to_drain = bh;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
/* Wait for the next buffer to become available */
bh = common->next_buffhd_to_fill;
while (bh->state != BUF_STATE_EMPTY) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
/* Wait for the CBW to arrive */
while (bh->state != BUF_STATE_FULL) {
- rc = sleep_thread(common);
+ rc = sleep_thread(common, true);
if (rc)
return rc;
}
}
if (num_active == 0)
break;
- if (sleep_thread(common))
+ if (sleep_thread(common, true))
return;
}
}
if (!common->running) {
- sleep_thread(common);
+ sleep_thread(common, true);
continue;
}
fsg->common->can_stall);
if (ret)
return ret;
- fsg_common_set_inquiry_string(fsg->common, 0, 0);
+ fsg_common_set_inquiry_string(fsg->common, NULL, NULL);
ret = fsg_common_run_thread(fsg->common);
if (ret)
return ret;
*/
#ifdef CONFIG_ARCH_PXA
#include <mach/pxa25x-udc.h>
+#include <mach/hardware.h>
#endif
#ifdef CONFIG_ARCH_LUBBOCK
}
static void s3c_hsotg_enqueue_setup(struct s3c_hsotg *hsotg);
+static void s3c_hsotg_disconnect(struct s3c_hsotg *hsotg);
/**
* s3c_hsotg_process_control - process a control request
if ((ctrl->bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD) {
switch (ctrl->bRequest) {
case USB_REQ_SET_ADDRESS:
+ s3c_hsotg_disconnect(hsotg);
dcfg = readl(hsotg->regs + DCFG);
dcfg &= ~DCFG_DevAddr_MASK;
dcfg |= ctrl->wValue << DCFG_DevAddr_SHIFT;
/* as a fallback, try delivering it to the driver to deal with */
if (ret == 0 && hsotg->driver) {
+ spin_unlock(&hsotg->lock);
ret = hsotg->driver->setup(&hsotg->gadget, ctrl);
+ spin_lock(&hsotg->lock);
if (ret < 0)
dev_dbg(hsotg->dev, "driver->setup() ret %d\n", ret);
}
return;
}
+ spin_lock(&hsotg->lock);
if (req->actual == 0)
s3c_hsotg_enqueue_setup(hsotg);
else
s3c_hsotg_process_control(hsotg, req->buf);
+ spin_unlock(&hsotg->lock);
}
/**
writel(GINTSTS_USBSusp, hsotg->regs + GINTSTS);
call_gadget(hsotg, suspend);
- s3c_hsotg_disconnect(hsotg);
}
if (gintsts & GINTSTS_WkUpInt) {
return curlun->filp != NULL;
}
-/* Big enough to hold our biggest descriptor */
-#define EP0_BUFSIZE 256
-#define DELAYED_STATUS (EP0_BUFSIZE + 999) /* An impossibly large value */
-
/* Default size of buffer length. */
#define FSG_BUFLEN ((u32)16384)
return -ENOMEM;
}
-void bot_cleanup_old_alt(struct f_uas *fu)
+static void bot_cleanup_old_alt(struct f_uas *fu)
{
if (!(fu->flags & USBG_ENABLED))
return;
}
fabric->tf_ops = usbg_ops;
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = usbg_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = usbg_base_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = usbg_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = usbg_base_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;
ret = target_fabric_configfs_register(fabric);
if (ret < 0) {
printk(KERN_ERR "target_fabric_configfs_register() failed"
* functional coverage for the "USBCV" test harness from USB-IF.
* It's always set if OTG mode is enabled.
*/
-unsigned autoresume = DEFAULT_AUTORESUME;
+static unsigned autoresume = DEFAULT_AUTORESUME;
module_param(autoresume, uint, S_IRUGO);
MODULE_PARM_DESC(autoresume, "zero, or seconds before remote wakeup");
/* Maximum Autoresume time */
-unsigned max_autoresume;
+static unsigned max_autoresume;
module_param(max_autoresume, uint, S_IRUGO);
MODULE_PARM_DESC(max_autoresume, "maximum seconds before remote wakeup");
/* Interval between two remote wakeups */
-unsigned autoresume_interval_ms;
+static unsigned autoresume_interval_ms;
module_param(autoresume_interval_ms, uint, S_IRUGO);
MODULE_PARM_DESC(autoresume_interval_ms,
"milliseconds to increase successive wakeup delays");
struct ohci_hcd *ohci;
int retval;
struct usb_hcd *hcd = NULL;
-
- if (pdev->num_resources != 2) {
- pr_debug("hcd probe: invalid num_resources");
- return -ENODEV;
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+ int irq;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_dbg(dev, "hcd probe: missing memory resource\n");
+ return -ENXIO;
}
- if ((pdev->resource[0].flags != IORESOURCE_MEM)
- || (pdev->resource[1].flags != IORESOURCE_IRQ)) {
- pr_debug("hcd probe: invalid resource type\n");
- return -ENODEV;
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_dbg(dev, "hcd probe: missing irq resource\n");
+ return irq;
}
hcd = usb_create_hcd(driver, &pdev->dev, "at91");
if (!hcd)
return -ENOMEM;
- hcd->rsrc_start = pdev->resource[0].start;
- hcd->rsrc_len = resource_size(&pdev->resource[0]);
+ hcd->rsrc_start = res->start;
+ hcd->rsrc_len = resource_size(res);
if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) {
pr_debug("request_mem_region failed\n");
ohci->num_ports = board->ports;
at91_start_hc(pdev);
- retval = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_SHARED);
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (retval == 0)
return retval;
#include <linux/clk.h>
#include <linux/device.h>
+#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
* any other sleep) on Haswell machines with LPT and LPT-LP
* with the new Intel BIOS
*/
- xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
+ /* Limit the quirk to only known vendors, as this triggers
+ * yet another BIOS bug on some other machines
+ * https://bugzilla.kernel.org/show_bug.cgi?id=66171
+ */
+ if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP)
+ xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
}
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
}
while (1) {
- if (room_on_ring(xhci, ep_ring, num_trbs))
- break;
+ if (room_on_ring(xhci, ep_ring, num_trbs)) {
+ union xhci_trb *trb = ep_ring->enqueue;
+ unsigned int usable = ep_ring->enq_seg->trbs +
+ TRBS_PER_SEGMENT - 1 - trb;
+ u32 nop_cmd;
+
+			/*
+			 * Section 4.11.7.1 TD Fragments states that a link
+			 * TRB must only occur at the boundary between
+			 * data bursts (e.g. 512 bytes for 480Mb/s).
+			 * While it is possible to split a large fragment
+			 * we don't know the size yet.
+			 * The simplest solution is to fill the TRBs before
+			 * the LINK with NOP commands.
+			 */
+ if (num_trbs == 1 || num_trbs <= usable || usable == 0)
+ break;
+
+ if (ep_ring->type != TYPE_BULK)
+				/*
+				 * While isoc transfers might have a buffer
+				 * that crosses a 64k boundary, it is unlikely.
+				 * Since we can't add NOPs without generating
+				 * gaps in the traffic, just hope it never
+				 * happens at the end of the ring.
+				 * This could be fixed by writing a LINK TRB
+				 * instead of the first NOP - however the
+				 * TRB_TYPE_LINK_LE32() calls would all need
+				 * changing to check the ring length.
+				 */
+ break;
+
+ if (num_trbs >= TRBS_PER_SEGMENT) {
+ xhci_err(xhci, "Too many fragments %d, max %d\n",
+ num_trbs, TRBS_PER_SEGMENT - 1);
+ return -ENOMEM;
+ }
+
+ nop_cmd = cpu_to_le32(TRB_TYPE(TRB_TR_NOOP) |
+ ep_ring->cycle_state);
+ ep_ring->num_trbs_free -= usable;
+ do {
+ trb->generic.field[0] = 0;
+ trb->generic.field[1] = 0;
+ trb->generic.field[2] = 0;
+ trb->generic.field[3] = nop_cmd;
+ trb++;
+ } while (--usable);
+ ep_ring->enqueue = trb;
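+			/*
+			 * enqueue now rests on the link TRB; the generic
+			 * link-TRB handling later in this function should
+			 * advance past it into the next segment.
+			 */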
+ if (room_on_ring(xhci, ep_ring, num_trbs))
+ break;
+ }
if (ep_ring == xhci->cmd_ring) {
xhci_err(xhci, "Do not support expand command ring\n");
disable_irq_wake(musb->nIrq);
free_irq(musb->nIrq, musb);
}
- cancel_work_sync(&musb->irq_work);
musb_host_free(musb);
}
musb_platform_disable(musb);
musb_generic_disable(musb);
+ /* Init IRQ workqueue before request_irq */
+ INIT_WORK(&musb->irq_work, musb_irq_work);
+
/* setup musb parts of the core (especially endpoints) */
status = musb_core_init(plat->config->multipoint
? MUSB_CONTROLLER_MHDRC
setup_timer(&musb->otg_timer, musb_otg_timer_func, (unsigned long) musb);
- /* Init IRQ workqueue before request_irq */
- INIT_WORK(&musb->irq_work, musb_irq_work);
-
/* attach to the IRQ */
if (request_irq(nIrq, musb->isr, 0, dev_name(dev), musb)) {
dev_err(dev, "request_irq %d failed!\n", nIrq);
musb_host_cleanup(musb);
fail3:
+ cancel_work_sync(&musb->irq_work);
if (musb->dma_controller)
dma_controller_destroy(musb->dma_controller);
fail2_5:
if (musb->dma_controller)
dma_controller_destroy(musb->dma_controller);
+ cancel_work_sync(&musb->irq_work);
musb_free(musb);
device_init_wakeup(dev, 0);
return 0;
u32 prog_len;
u32 transferred;
u32 packet_sz;
+ struct list_head tx_check;
};
#define MUSB_DMA_NUM_CHANNELS 15
struct cppi41_dma_channel rx_channel[MUSB_DMA_NUM_CHANNELS];
struct cppi41_dma_channel tx_channel[MUSB_DMA_NUM_CHANNELS];
struct musb *musb;
+ struct hrtimer early_tx;
+ struct list_head early_tx_list;
u32 rx_mode;
u32 tx_mode;
u32 auto_req;
cppi41_channel->usb_toggle = toggle;
}
-static void cppi41_dma_callback(void *private_data)
+static bool musb_is_tx_fifo_empty(struct musb_hw_ep *hw_ep)
{
- struct dma_channel *channel = private_data;
- struct cppi41_dma_channel *cppi41_channel = channel->private_data;
- struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
- struct musb *musb = hw_ep->musb;
- unsigned long flags;
- struct dma_tx_state txstate;
- u32 transferred;
+ u8 epnum = hw_ep->epnum;
+ struct musb *musb = hw_ep->musb;
+ void __iomem *epio = musb->endpoints[epnum].regs;
+ u16 csr;
- spin_lock_irqsave(&musb->lock, flags);
+ csr = musb_readw(epio, MUSB_TXCSR);
+ if (csr & MUSB_TXCSR_TXPKTRDY)
+ return false;
+ return true;
+}
- dmaengine_tx_status(cppi41_channel->dc, cppi41_channel->cookie,
- &txstate);
- transferred = cppi41_channel->prog_len - txstate.residue;
- cppi41_channel->transferred += transferred;
+static void cppi41_dma_callback(void *private_data);
- dev_dbg(musb->controller, "DMA transfer done on hw_ep=%d bytes=%d/%d\n",
- hw_ep->epnum, cppi41_channel->transferred,
- cppi41_channel->total_len);
+static void cppi41_trans_done(struct cppi41_dma_channel *cppi41_channel)
+{
+ struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+ struct musb *musb = hw_ep->musb;
- update_rx_toggle(cppi41_channel);
-
- if (cppi41_channel->transferred == cppi41_channel->total_len ||
- transferred < cppi41_channel->packet_sz) {
+ if (!cppi41_channel->prog_len) {
/* done, complete */
cppi41_channel->channel.actual_len =
remain_bytes,
direction,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (WARN_ON(!dma_desc)) {
- spin_unlock_irqrestore(&musb->lock, flags);
+ if (WARN_ON(!dma_desc))
return;
- }
dma_desc->callback = cppi41_dma_callback;
- dma_desc->callback_param = channel;
+ dma_desc->callback_param = &cppi41_channel->channel;
cppi41_channel->cookie = dma_desc->tx_submit(dma_desc);
dma_async_issue_pending(dc);
musb_writew(epio, MUSB_RXCSR, csr);
}
}
+}
+
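+/*
+ * hrtimer callback: walk the early_tx list, complete any channel whose
+ * TX FIFO has drained, and re-arm every 150us while work remains.
+ */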
+static enum hrtimer_restart cppi41_recheck_tx_req(struct hrtimer *timer)
+{
+ struct cppi41_dma_controller *controller;
+ struct cppi41_dma_channel *cppi41_channel, *n;
+ struct musb *musb;
+ unsigned long flags;
+ enum hrtimer_restart ret = HRTIMER_NORESTART;
+
+ controller = container_of(timer, struct cppi41_dma_controller,
+ early_tx);
+ musb = controller->musb;
+
+ spin_lock_irqsave(&musb->lock, flags);
+ list_for_each_entry_safe(cppi41_channel, n, &controller->early_tx_list,
+ tx_check) {
+ bool empty;
+ struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty) {
+ list_del_init(&cppi41_channel->tx_check);
+ cppi41_trans_done(cppi41_channel);
+ }
+ }
+
+ if (!list_empty(&controller->early_tx_list)) {
+ ret = HRTIMER_RESTART;
+ hrtimer_forward_now(&controller->early_tx,
+ ktime_set(0, 150 * NSEC_PER_USEC));
+ }
+
+ spin_unlock_irqrestore(&musb->lock, flags);
+ return ret;
+}
+
+static void cppi41_dma_callback(void *private_data)
+{
+ struct dma_channel *channel = private_data;
+ struct cppi41_dma_channel *cppi41_channel = channel->private_data;
+ struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+ struct musb *musb = hw_ep->musb;
+ unsigned long flags;
+ struct dma_tx_state txstate;
+ u32 transferred;
+ bool empty;
+
+ spin_lock_irqsave(&musb->lock, flags);
+
+ dmaengine_tx_status(cppi41_channel->dc, cppi41_channel->cookie,
+ &txstate);
+ transferred = cppi41_channel->prog_len - txstate.residue;
+ cppi41_channel->transferred += transferred;
+
+ dev_dbg(musb->controller, "DMA transfer done on hw_ep=%d bytes=%d/%d\n",
+ hw_ep->epnum, cppi41_channel->transferred,
+ cppi41_channel->total_len);
+
+ update_rx_toggle(cppi41_channel);
+
+ if (cppi41_channel->transferred == cppi41_channel->total_len ||
+ transferred < cppi41_channel->packet_sz)
+ cppi41_channel->prog_len = 0;
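+	/* prog_len == 0 is what cppi41_trans_done() treats as "complete" */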
+
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty) {
+ cppi41_trans_done(cppi41_channel);
+ } else {
+ struct cppi41_dma_controller *controller;
+ /*
+	 * On AM335x it has been observed that the TX interrupt fires
+	 * too early, i.e. the TXFIFO is not yet empty even though the DMA
+	 * engine says that it is done with the transfer. We don't
+	 * receive a FIFO-empty interrupt, so the only thing we can do is
+	 * to poll for the bit. On HS it usually takes 2us, on FS around
+	 * 110us - 150us depending on the transfer size.
+	 * We spin on HS (no longer than 25us) and set up a timer on
+	 * FS to check for the bit and complete the transfer.
+ */
+ controller = cppi41_channel->controller;
+
+ if (musb->g.speed == USB_SPEED_HIGH) {
+ unsigned wait = 25;
+
+ do {
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty)
+ break;
+ wait--;
+ if (!wait)
+ break;
+ udelay(1);
+ } while (1);
+
+ empty = musb_is_tx_fifo_empty(hw_ep);
+ if (empty) {
+ cppi41_trans_done(cppi41_channel);
+ goto out;
+ }
+ }
+ list_add_tail(&cppi41_channel->tx_check,
+ &controller->early_tx_list);
+ if (!hrtimer_active(&controller->early_tx)) {
+ hrtimer_start_range_ns(&controller->early_tx,
+ ktime_set(0, 140 * NSEC_PER_USEC),
+ 40 * NSEC_PER_USEC,
+ HRTIMER_MODE_REL);
+ }
+ }
+out:
spin_unlock_irqrestore(&musb->lock, flags);
}
WARN_ON(1);
return 1;
}
+ if (cppi41_channel->hw_ep->ep_in.type != USB_ENDPOINT_XFER_BULK)
+ return 0;
if (cppi41_channel->is_tx)
return 1;
/* AM335x Advisory 1.0.13. No workaround for device RX mode */
if (cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)
return 0;
+ list_del_init(&cppi41_channel->tx_check);
if (is_tx) {
csr = musb_readw(epio, MUSB_TXCSR);
csr &= ~MUSB_TXCSR_DMAENAB;
cppi41_channel->controller = controller;
cppi41_channel->port_num = port;
cppi41_channel->is_tx = is_tx;
+ INIT_LIST_HEAD(&cppi41_channel->tx_check);
musb_dma = &cppi41_channel->channel;
musb_dma->private_data = cppi41_channel;
struct cppi41_dma_controller *controller = container_of(c,
struct cppi41_dma_controller, controller);
+ hrtimer_cancel(&controller->early_tx);
cppi41_dma_controller_stop(controller);
kfree(controller);
}
if (!controller)
goto kzalloc_fail;
+ hrtimer_init(&controller->early_tx, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ controller->early_tx.function = cppi41_recheck_tx_req;
+ INIT_LIST_HEAD(&controller->early_tx_list);
controller->musb = musb;
controller->controller.channel_alloc = cppi41_dma_channel_allocate;
/* this "gadget" abstracts/virtualizes the controller */
musb->g.name = musb_driver_name;
+#if IS_ENABLED(CONFIG_USB_MUSB_DUAL_ROLE)
musb->g.is_otg = 1;
+#elif IS_ENABLED(CONFIG_USB_MUSB_GADGET)
+ musb->g.is_otg = 0;
+#endif
musb_g_init_endpoints(musb);
in host mode, low speed.
config FSL_USB2_OTG
- bool "Freescale USB OTG Transceiver Driver"
+ tristate "Freescale USB OTG Transceiver Driver"
depends on USB_EHCI_FSL && USB_FSL_USB2 && PM_RUNTIME
+ depends on USB
select USB_OTG
select USB_PHY
help
config ISP1301_OMAP
tristate "Philips ISP1301 with OMAP OTG"
depends on I2C && ARCH_OMAP_OTG
+ depends on USB
select USB_PHY
help
If you say yes here you get support for the Philips ISP1301
return am_phy->id;
}
- ret = usb_phy_gen_create_phy(dev, &am_phy->usb_phy_gen,
- USB_PHY_TYPE_USB2, 0, false);
+ ret = usb_phy_gen_create_phy(dev, &am_phy->usb_phy_gen, NULL);
if (ret)
return ret;
platform_set_drvdata(pdev, am_phy);
return 0;
-
- return ret;
}
static int am335x_phy_remove(struct platform_device *pdev)
if (pd)
return;
pd = platform_device_register_simple("usb_phy_gen_xceiv", -1, NULL, 0);
- if (!pd) {
+ if (IS_ERR(pd)) {
pr_err("Unable to register generic usb transceiver\n");
+ pd = NULL;
return;
}
}
}
int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_gen_xceiv *nop,
- enum usb_phy_type type, u32 clk_rate, bool needs_vcc)
+ struct usb_phy_gen_xceiv_platform_data *pdata)
{
+ enum usb_phy_type type = USB_PHY_TYPE_USB2;
int err;
+ u32 clk_rate = 0;
+ bool needs_vcc = false;
+
+ nop->reset_active_low = true; /* default behaviour */
+
+ if (dev->of_node) {
+ struct device_node *node = dev->of_node;
+ enum of_gpio_flags flags = 0;
+
+ if (of_property_read_u32(node, "clock-frequency", &clk_rate))
+ clk_rate = 0;
+
+ needs_vcc = of_property_read_bool(node, "vcc-supply");
+ nop->gpio_reset = of_get_named_gpio_flags(node, "reset-gpios",
+ 0, &flags);
+ if (nop->gpio_reset == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+
+ nop->reset_active_low = flags & OF_GPIO_ACTIVE_LOW;
+
+ } else if (pdata) {
+ type = pdata->type;
+ clk_rate = pdata->clk_rate;
+ needs_vcc = pdata->needs_vcc;
+ nop->gpio_reset = pdata->gpio_reset;
+ } else {
+ nop->gpio_reset = -1;
+ }
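+	/*
+	 * Hypothetical DT node matching the properties parsed above:
+	 *
+	 *	usbphy: usb-phy {
+	 *		compatible = "usb-nop-xceiv";
+	 *		clock-frequency = <19200000>;
+	 *		vcc-supply = <&vcc3v3>;
+	 *		reset-gpios = <&gpio1 7 GPIO_ACTIVE_LOW>;
+	 *	};
+	 */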
+
nop->phy.otg = devm_kzalloc(dev, sizeof(*nop->phy.otg),
GFP_KERNEL);
if (!nop->phy.otg)
static int usb_phy_gen_xceiv_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct usb_phy_gen_xceiv_platform_data *pdata =
- dev_get_platdata(&pdev->dev);
struct usb_phy_gen_xceiv *nop;
- enum usb_phy_type type = USB_PHY_TYPE_USB2;
int err;
- u32 clk_rate = 0;
- bool needs_vcc = false;
nop = devm_kzalloc(dev, sizeof(*nop), GFP_KERNEL);
if (!nop)
return -ENOMEM;
- nop->reset_active_low = true; /* default behaviour */
-
- if (dev->of_node) {
- struct device_node *node = dev->of_node;
- enum of_gpio_flags flags;
-
- if (of_property_read_u32(node, "clock-frequency", &clk_rate))
- clk_rate = 0;
-
- needs_vcc = of_property_read_bool(node, "vcc-supply");
- nop->gpio_reset = of_get_named_gpio_flags(node, "reset-gpios",
- 0, &flags);
- if (nop->gpio_reset == -EPROBE_DEFER)
- return -EPROBE_DEFER;
-
- nop->reset_active_low = flags & OF_GPIO_ACTIVE_LOW;
-
- } else if (pdata) {
- type = pdata->type;
- clk_rate = pdata->clk_rate;
- needs_vcc = pdata->needs_vcc;
- nop->gpio_reset = pdata->gpio_reset;
- }
-
- err = usb_phy_gen_create_phy(dev, nop, type, clk_rate, needs_vcc);
+ err = usb_phy_gen_create_phy(dev, nop, dev_get_platdata(&pdev->dev));
if (err)
return err;
platform_set_drvdata(pdev, nop);
return 0;
-
- return err;
}
static int usb_phy_gen_xceiv_remove(struct platform_device *pdev)
#ifndef _PHY_GENERIC_H_
#define _PHY_GENERIC_H_
+#include <linux/usb/usb_phy_gen_xceiv.h>
+
struct usb_phy_gen_xceiv {
struct usb_phy phy;
struct device *dev;
void usb_gen_phy_shutdown(struct usb_phy *phy);
int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_gen_xceiv *nop,
- enum usb_phy_type type, u32 clk_rate, bool needs_vcc);
+ struct usb_phy_gen_xceiv_platform_data *pdata);
#endif
mxs_phy->clk = clk;
- platform_set_drvdata(pdev, &mxs_phy->phy);
+ platform_set_drvdata(pdev, mxs_phy);
ret = usb_add_phy_dev(&mxs_phy->phy);
if (ret)
clk_prepare_enable(priv->clk);
/* Set USB channels in the USBHS UGCTRL2 register */
- val = ioread32(priv->base);
+ val = ioread32(priv->base + USBHS_UGCTRL2_REG);
val &= ~(USBHS_UGCTRL2_USB0_HS | USBHS_UGCTRL2_USB2_SS);
val |= priv->ugctrl2;
- iowrite32(val, priv->base);
+ iowrite32(val, priv->base + USBHS_UGCTRL2_REG);
}
/* Shutdown USB channels */
tegra_phy->pad_regs = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
- if (!tegra_phy->regs) {
+ if (!tegra_phy->pad_regs) {
dev_err(&pdev->dev, "Failed to remap UTMI Pad regs\n");
return -ENOMEM;
}
static inline u8 twl6030_readb(struct twl6030_usb *twl, u8 module, u8 address)
{
- u8 data, ret = 0;
+ u8 data;
+ int ret;
ret = twl_i2c_read_u8(module, &data, address);
if (ret >= 0)
termios->c_cflag |= CRTSCTS;
}
+ /*
+	 * All FTDI UART chips are limited to CS7/8. We won't pretend to
+	 * support CS5/6 and instead revert the CSIZE setting.
+ */
+ if ((C_CSIZE(tty) != CS8) && (C_CSIZE(tty) != CS7)) {
+ dev_warn(ddev, "requested CSIZE setting not supported\n");
+
+ termios->c_cflag &= ~CSIZE;
+ if (old_termios)
+ termios->c_cflag |= old_termios->c_cflag & CSIZE;
+ else
+ termios->c_cflag |= CS8;
+ }
+
cflag = termios->c_cflag;
if (!old_termios)
} else {
urb_value |= FTDI_SIO_SET_DATA_PARITY_NONE;
}
- if (cflag & CSIZE) {
- switch (cflag & CSIZE) {
- case CS7:
- urb_value |= 7;
- dev_dbg(ddev, "Setting CS7\n");
- break;
- case CS8:
- urb_value |= 8;
- dev_dbg(ddev, "Setting CS8\n");
- break;
- default:
- dev_err(ddev, "CSIZE was set but not CS7-CS8\n");
- }
+ switch (cflag & CSIZE) {
+ case CS7:
+ urb_value |= 7;
+ dev_dbg(ddev, "Setting CS7\n");
+ break;
+ default:
+ case CS8:
+ urb_value |= 8;
+ dev_dbg(ddev, "Setting CS8\n");
+ break;
}
/* This is needed by the break command since it uses the same command
clear_bit_unlock(USB_SERIAL_WRITE_BUSY, &port->flags);
return result;
}
- /*
- * Try sending off another urb, unless called from completion handler
- * (in which case there will be no free urb or no data).
- */
- if (mem_flags != GFP_ATOMIC)
- goto retry;
- clear_bit_unlock(USB_SERIAL_WRITE_BUSY, &port->flags);
-
- return 0;
+ goto retry; /* try sending off another urb */
}
EXPORT_SYMBOL_GPL(usb_serial_generic_write_start);
return 0;
count = kfifo_in_locked(&port->write_fifo, buf, count, &port->lock);
- result = usb_serial_generic_write_start(port, GFP_KERNEL);
+ result = usb_serial_generic_write_start(port, GFP_ATOMIC);
if (result)
return result;
iflag = tty->termios.c_iflag;
/* Change the number of bits */
- if (cflag & CSIZE) {
- switch (cflag & CSIZE) {
- case CS5:
- lData = LCR_BITS_5;
- break;
+ switch (cflag & CSIZE) {
+ case CS5:
+ lData = LCR_BITS_5;
+ break;
- case CS6:
- lData = LCR_BITS_6;
- break;
+ case CS6:
+ lData = LCR_BITS_6;
+ break;
- case CS7:
- lData = LCR_BITS_7;
- break;
- default:
- case CS8:
- lData = LCR_BITS_8;
- break;
- }
+ case CS7:
+ lData = LCR_BITS_7;
+ break;
+
+ default:
+ case CS8:
+ lData = LCR_BITS_8;
+ break;
}
+
/* Change the Parity bit */
if (cflag & PARENB) {
if (cflag & PARODD) {
#define HUAWEI_PRODUCT_K4505 0x1464
#define HUAWEI_PRODUCT_K3765 0x1465
#define HUAWEI_PRODUCT_K4605 0x14C6
+#define HUAWEI_PRODUCT_E173S6 0x1C07
#define QUANTA_VENDOR_ID 0x0408
#define QUANTA_PRODUCT_Q101 0xEA02
#define ZTE_PRODUCT_MF628 0x0015
#define ZTE_PRODUCT_MF626 0x0031
#define ZTE_PRODUCT_MC2718 0xffe8
+#define ZTE_PRODUCT_AC2726 0xfff1
#define BENQ_VENDOR_ID 0x04a5
#define BENQ_PRODUCT_H10 0x4068
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c23, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t) &net_intf1_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173S6, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t) &net_intf1_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E1750, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t) &net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1441, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6D) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6E) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x72) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x73) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x74) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x75) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x78) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x79) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) },
{ USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) },
{ USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) },
0, 0, buf, 7, 100);
dev_dbg(&port->dev, "0xa1:0x21:0:0 %d - %7ph\n", i, buf);
- if (C_CSIZE(tty)) {
- switch (C_CSIZE(tty)) {
- case CS5:
- buf[6] = 5;
- break;
- case CS6:
- buf[6] = 6;
- break;
- case CS7:
- buf[6] = 7;
- break;
- default:
- case CS8:
- buf[6] = 8;
- }
- dev_dbg(&port->dev, "data bits = %d\n", buf[6]);
+ switch (C_CSIZE(tty)) {
+ case CS5:
+ buf[6] = 5;
+ break;
+ case CS6:
+ buf[6] = 6;
+ break;
+ case CS7:
+ buf[6] = 7;
+ break;
+ default:
+ case CS8:
+ buf[6] = 8;
}
+ dev_dbg(&port->dev, "data bits = %d\n", buf[6]);
/* For reference buf[0]:buf[3] baud rate value */
pl2303_encode_baudrate(tty, port, &buf[0]);
}
/* Set Data Length : 00:5bit, 01:6bit, 10:7bit, 11:8bit */
- if (cflag & CSIZE) {
- switch (cflag & CSIZE) {
- case CS5:
- buf[1] |= SET_UART_FORMAT_SIZE_5;
- break;
- case CS6:
- buf[1] |= SET_UART_FORMAT_SIZE_6;
- break;
- case CS7:
- buf[1] |= SET_UART_FORMAT_SIZE_7;
- break;
- default:
- case CS8:
- buf[1] |= SET_UART_FORMAT_SIZE_8;
- break;
- }
+ switch (cflag & CSIZE) {
+ case CS5:
+ buf[1] |= SET_UART_FORMAT_SIZE_5;
+ break;
+ case CS6:
+ buf[1] |= SET_UART_FORMAT_SIZE_6;
+ break;
+ case CS7:
+ buf[1] |= SET_UART_FORMAT_SIZE_7;
+ break;
+ default:
+ case CS8:
+ buf[1] |= SET_UART_FORMAT_SIZE_8;
+ break;
}
/* Set Stop bit2 : 0:1bit 1:2bit */
{ USB_DEVICE(0x19d2, 0xfffd) },
{ USB_DEVICE(0x19d2, 0xfffc) },
{ USB_DEVICE(0x19d2, 0xfffb) },
- /* AC2726, AC8710_V3 */
- { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xfff1, 0xff, 0xff, 0xff) },
+ /* AC8710_V3 */
{ USB_DEVICE(0x19d2, 0xfff6) },
{ USB_DEVICE(0x19d2, 0xfff7) },
{ USB_DEVICE(0x19d2, 0xfff8) },
static void wusb_dev_free(struct wusb_dev *wusb_dev)
{
- if (wusb_dev) {
- kfree(wusb_dev->set_gtk_req);
- usb_free_urb(wusb_dev->set_gtk_urb);
- kfree(wusb_dev);
- }
+ kfree(wusb_dev);
}
static struct wusb_dev *wusb_dev_alloc(struct wusbhc *wusbhc)
{
struct wusb_dev *wusb_dev;
- struct urb *urb;
- struct usb_ctrlrequest *req;
wusb_dev = kzalloc(sizeof(*wusb_dev), GFP_KERNEL);
if (wusb_dev == NULL)
INIT_WORK(&wusb_dev->devconnect_acked_work, wusbhc_devconnect_acked_work);
- urb = usb_alloc_urb(0, GFP_KERNEL);
- if (urb == NULL)
- goto err;
- wusb_dev->set_gtk_urb = urb;
-
- req = kmalloc(sizeof(*req), GFP_KERNEL);
- if (req == NULL)
- goto err;
- wusb_dev->set_gtk_req = req;
-
- req->bRequestType = USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE;
- req->bRequest = USB_REQ_SET_DESCRIPTOR;
- req->wValue = cpu_to_le16(USB_DT_KEY << 8 | wusbhc->gtk_index);
- req->wIndex = 0;
- req->wLength = cpu_to_le16(wusbhc->gtk.descr.bLength);
-
return wusb_dev;
err:
wusb_dev_free(wusb_dev);
/*
* Refresh the list of keep alives to emit in the MMC
*
- * Some devices don't respond to keep alives unless they've been
- * authenticated, so skip unauthenticated devices.
- *
 * We only publish the first four devices that have an upcoming timeout
* condition. Then when we are done processing those, we go for the
* next ones. We ignore the ones that have timed out already (they'll
if (wusb_dev == NULL)
continue;
- if (wusb_dev->usb_dev == NULL || !wusb_dev->usb_dev->authenticated)
+ if (wusb_dev->usb_dev == NULL)
continue;
if (time_after(jiffies, wusb_dev->entry_ts + tt)) {
*
* @wusbhc shall be referenced and unlocked
*/
-static void wusbhc_handle_dn_alive(struct wusbhc *wusbhc, struct wusb_dev *wusb_dev)
+static void wusbhc_handle_dn_alive(struct wusbhc *wusbhc, u8 srcaddr)
{
+ struct wusb_dev *wusb_dev;
+
mutex_lock(&wusbhc->mutex);
- wusb_dev->entry_ts = jiffies;
- __wusbhc_keep_alive(wusbhc);
+ wusb_dev = wusbhc_find_dev_by_addr(wusbhc, srcaddr);
+ if (wusb_dev == NULL) {
+ dev_dbg(wusbhc->dev, "ignoring DN_Alive from unconnected device %02x\n",
+ srcaddr);
+ } else {
+ wusb_dev->entry_ts = jiffies;
+ __wusbhc_keep_alive(wusbhc);
+ }
mutex_unlock(&wusbhc->mutex);
}
*
* @wusbhc shall be referenced and unlocked
*/
-static void wusbhc_handle_dn_disconnect(struct wusbhc *wusbhc, struct wusb_dev *wusb_dev)
+static void wusbhc_handle_dn_disconnect(struct wusbhc *wusbhc, u8 srcaddr)
{
struct device *dev = wusbhc->dev;
-
- dev_info(dev, "DN DISCONNECT: device 0x%02x going down\n", wusb_dev->addr);
+ struct wusb_dev *wusb_dev;
mutex_lock(&wusbhc->mutex);
- __wusbhc_dev_disconnect(wusbhc, wusb_port_by_idx(wusbhc, wusb_dev->port_idx));
+ wusb_dev = wusbhc_find_dev_by_addr(wusbhc, srcaddr);
+ if (wusb_dev == NULL) {
+ dev_dbg(dev, "ignoring DN DISCONNECT from unconnected device %02x\n",
+ srcaddr);
+ } else {
+ dev_info(dev, "DN DISCONNECT: device 0x%02x going down\n",
+ wusb_dev->addr);
+ __wusbhc_dev_disconnect(wusbhc, wusb_port_by_idx(wusbhc,
+ wusb_dev->port_idx));
+ }
mutex_unlock(&wusbhc->mutex);
}
struct wusb_dn_hdr *dn_hdr, size_t size)
{
struct device *dev = wusbhc->dev;
- struct wusb_dev *wusb_dev;
if (size < sizeof(struct wusb_dn_hdr)) {
dev_err(dev, "DN data shorter than DN header (%d < %d)\n",
(int)size, (int)sizeof(struct wusb_dn_hdr));
return;
}
-
- wusb_dev = wusbhc_find_dev_by_addr(wusbhc, srcaddr);
- if (wusb_dev == NULL && dn_hdr->bType != WUSB_DN_CONNECT) {
- dev_dbg(dev, "ignoring DN %d from unconnected device %02x\n",
- dn_hdr->bType, srcaddr);
- return;
- }
-
switch (dn_hdr->bType) {
case WUSB_DN_CONNECT:
wusbhc_handle_dn_connect(wusbhc, dn_hdr, size);
break;
case WUSB_DN_ALIVE:
- wusbhc_handle_dn_alive(wusbhc, wusb_dev);
+ wusbhc_handle_dn_alive(wusbhc, srcaddr);
break;
case WUSB_DN_DISCONNECT:
- wusbhc_handle_dn_disconnect(wusbhc, wusb_dev);
+ wusbhc_handle_dn_disconnect(wusbhc, srcaddr);
break;
case WUSB_DN_MASAVAILCHANGED:
case WUSB_DN_RWAKE:
#include <linux/export.h>
#include "wusbhc.h"
-static void wusbhc_set_gtk_callback(struct urb *urb);
-static void wusbhc_gtk_rekey_done_work(struct work_struct *work);
+static void wusbhc_gtk_rekey_work(struct work_struct *work);
int wusbhc_sec_create(struct wusbhc *wusbhc)
{
wusbhc->gtk.descr.bLength = sizeof(wusbhc->gtk.descr) + sizeof(wusbhc->gtk.data);
wusbhc->gtk.descr.bDescriptorType = USB_DT_KEY;
wusbhc->gtk.descr.bReserved = 0;
+ wusbhc->gtk_index = 0;
- wusbhc->gtk_index = wusb_key_index(0, WUSB_KEY_INDEX_TYPE_GTK,
- WUSB_KEY_INDEX_ORIGINATOR_HOST);
-
- INIT_WORK(&wusbhc->gtk_rekey_done_work, wusbhc_gtk_rekey_done_work);
+ INIT_WORK(&wusbhc->gtk_rekey_work, wusbhc_gtk_rekey_work);
return 0;
}
wusbhc_generate_gtk(wusbhc);
result = wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid,
- &wusbhc->gtk.descr.bKeyData, key_size);
+ &wusbhc->gtk.descr.bKeyData, key_size);
if (result < 0)
dev_err(wusbhc->dev, "cannot set GTK for the host: %d\n",
result);
*/
void wusbhc_sec_stop(struct wusbhc *wusbhc)
{
- cancel_work_sync(&wusbhc->gtk_rekey_done_work);
+ cancel_work_sync(&wusbhc->gtk_rekey_work);
}
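/*
 * Send the current GTK to one device with a SET_DESCRIPTOR(key) control
 * request; the key index encodes the rolling gtk_index.
 */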
static int wusb_dev_set_gtk(struct wusbhc *wusbhc, struct wusb_dev *wusb_dev)
{
struct usb_device *usb_dev = wusb_dev->usb_dev;
+ u8 key_index = wusb_key_index(wusbhc->gtk_index,
+ WUSB_KEY_INDEX_TYPE_GTK, WUSB_KEY_INDEX_ORIGINATOR_HOST);
return usb_control_msg(
usb_dev, usb_sndctrlpipe(usb_dev, 0),
USB_REQ_SET_DESCRIPTOR,
USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE,
- USB_DT_KEY << 8 | wusbhc->gtk_index, 0,
+ USB_DT_KEY << 8 | key_index, 0,
&wusbhc->gtk.descr, wusbhc->gtk.descr.bLength,
1000);
}
* Once all connected and authenticated devices have received the new
* GTK, switch the host to using it.
*/
-static void wusbhc_gtk_rekey_done_work(struct work_struct *work)
+static void wusbhc_gtk_rekey_work(struct work_struct *work)
{
- struct wusbhc *wusbhc = container_of(work, struct wusbhc, gtk_rekey_done_work);
+ struct wusbhc *wusbhc = container_of(work,
+ struct wusbhc, gtk_rekey_work);
size_t key_size = sizeof(wusbhc->gtk.data);
+ int port_idx;
+ struct wusb_dev *wusb_dev, *wusb_dev_next;
+ LIST_HEAD(rekey_list);
mutex_lock(&wusbhc->mutex);
+ /* generate the new key */
+ wusbhc_generate_gtk(wusbhc);
+ /* roll the gtk index. */
+ wusbhc->gtk_index = (wusbhc->gtk_index + 1) % (WUSB_KEY_INDEX_MAX + 1);
+ /*
+ * Save all connected devices on a list while holding wusbhc->mutex and
+ * take a reference to each one. Then submit the set key request to
+ * them after releasing the lock in order to avoid a deadlock.
+ */
+ for (port_idx = 0; port_idx < wusbhc->ports_max; port_idx++) {
+ wusb_dev = wusbhc->port[port_idx].wusb_dev;
+ if (!wusb_dev || !wusb_dev->usb_dev
+ || !wusb_dev->usb_dev->authenticated)
+ continue;
- if (--wusbhc->pending_set_gtks == 0)
- wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid, &wusbhc->gtk.descr.bKeyData, key_size);
-
+ wusb_dev_get(wusb_dev);
+ list_add_tail(&wusb_dev->rekey_node, &rekey_list);
+ }
mutex_unlock(&wusbhc->mutex);
-}
-static void wusbhc_set_gtk_callback(struct urb *urb)
-{
- struct wusbhc *wusbhc = urb->context;
+ /* Submit the rekey requests without holding wusbhc->mutex. */
+ list_for_each_entry_safe(wusb_dev, wusb_dev_next, &rekey_list,
+ rekey_node) {
+ list_del_init(&wusb_dev->rekey_node);
+ dev_dbg(&wusb_dev->usb_dev->dev, "%s: rekey device at port %d\n",
+ __func__, wusb_dev->port_idx);
+
+ if (wusb_dev_set_gtk(wusbhc, wusb_dev) < 0) {
+ dev_err(&wusb_dev->usb_dev->dev, "%s: rekey device at port %d failed\n",
+ __func__, wusb_dev->port_idx);
+ }
+ wusb_dev_put(wusb_dev);
+ }
- queue_work(wusbd, &wusbhc->gtk_rekey_done_work);
+ /* Switch the host controller to use the new GTK. */
+ mutex_lock(&wusbhc->mutex);
+ wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid,
+ &wusbhc->gtk.descr.bKeyData, key_size);
+ mutex_unlock(&wusbhc->mutex);
}
/**
*/
void wusbhc_gtk_rekey(struct wusbhc *wusbhc)
{
- static const size_t key_size = sizeof(wusbhc->gtk.data);
- int p;
-
- wusbhc_generate_gtk(wusbhc);
-
- for (p = 0; p < wusbhc->ports_max; p++) {
- struct wusb_dev *wusb_dev;
-
- wusb_dev = wusbhc->port[p].wusb_dev;
- if (!wusb_dev || !wusb_dev->usb_dev || !wusb_dev->usb_dev->authenticated)
- continue;
-
- usb_fill_control_urb(wusb_dev->set_gtk_urb, wusb_dev->usb_dev,
- usb_sndctrlpipe(wusb_dev->usb_dev, 0),
- (void *)wusb_dev->set_gtk_req,
- &wusbhc->gtk.descr, wusbhc->gtk.descr.bLength,
- wusbhc_set_gtk_callback, wusbhc);
- if (usb_submit_urb(wusb_dev->set_gtk_urb, GFP_KERNEL) == 0)
- wusbhc->pending_set_gtks++;
- }
- if (wusbhc->pending_set_gtks == 0)
- wusbhc->set_gtk(wusbhc, wusbhc->gtk_tkid, &wusbhc->gtk.descr.bKeyData, key_size);
+ /*
+ * We need to submit a URB to the downstream WUSB devices in order to
+ * change the group key. This can't be done while holding the
+ * wusbhc->mutex since that is also taken in the urb_enqueue routine
+ * and will cause a deadlock. Instead, queue a work item to do
+	 * it when the lock is not held.
+ */
+ queue_work(wusbd, &wusbhc->gtk_rekey_work);
}
struct kref refcnt;
struct wusbhc *wusbhc;
struct list_head cack_node; /* Connect-Ack list */
+ struct list_head rekey_node; /* GTK rekey list */
u8 port_idx;
u8 addr;
u8 beacon_type:4;
struct usb_wireless_cap_descriptor *wusb_cap_descr;
struct uwb_mas_bm availability;
struct work_struct devconnect_acked_work;
- struct urb *set_gtk_urb;
- struct usb_ctrlrequest *set_gtk_req;
struct usb_device *usb_dev;
};
} __attribute__((packed)) gtk;
u8 gtk_index;
u32 gtk_tkid;
- struct work_struct gtk_rekey_done_work;
- int pending_set_gtks;
+ struct work_struct gtk_rekey_work;
struct usb_encryption_descriptor *ccm1_etd;
};
/*
* Setup default attribute lists for various fabric->tf_cit_tmpl
*/
- TF_CIT_TMPL(fabric)->tfc_wwn_cit.ct_attrs = tcm_vhost_wwn_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_base_cit.ct_attrs = tcm_vhost_tpg_attrs;
- TF_CIT_TMPL(fabric)->tfc_tpg_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_param_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_np_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_base_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
- TF_CIT_TMPL(fabric)->tfc_tpg_nacl_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_wwn_cit.ct_attrs = tcm_vhost_wwn_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_base_cit.ct_attrs = tcm_vhost_tpg_attrs;
+ fabric->tf_cit_tmpl.tfc_tpg_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_param_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_np_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_base_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_attrib_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_auth_cit.ct_attrs = NULL;
+ fabric->tf_cit_tmpl.tfc_tpg_nacl_param_cit.ct_attrs = NULL;
/*
* Register the fabric for use within TCM
*/
/* terminator */
}
};
+MODULE_DEVICE_TABLE(platform, atmel_lcdfb_devtypes);
static struct atmel_lcdfb_config *
atmel_lcdfb_get_config(struct platform_device *pdev)
return -EINVAL;
}
case KYRO_IOCTL_UVSTRIDE:
- if (copy_to_user(argp, &deviceInfo.ulOverlayUVStride, sizeof(unsigned long)))
+ if (copy_to_user(argp, &deviceInfo.ulOverlayUVStride, sizeof(deviceInfo.ulOverlayUVStride)))
return -EFAULT;
break;
case KYRO_IOCTL_STRIDE:
- if (copy_to_user(argp, &deviceInfo.ulOverlayStride, sizeof(unsigned long)))
+ if (copy_to_user(argp, &deviceInfo.ulOverlayStride, sizeof(deviceInfo.ulOverlayStride)))
return -EFAULT;
break;
case KYRO_IOCTL_OVERLAY_OFFSET:
- if (copy_to_user(argp, &deviceInfo.ulOverlayOffset, sizeof(unsigned long)))
+ if (copy_to_user(argp, &deviceInfo.ulOverlayOffset, sizeof(deviceInfo.ulOverlayOffset)))
return -EFAULT;
break;
}
#define AVIVO_DC_LUTB_WHITE_OFFSET_GREEN 0x6cd4
#define AVIVO_DC_LUTB_WHITE_OFFSET_RED 0x6cd8
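+/*
+ * Pseudo-palette entries must match the (big-endian) framebuffer byte
+ * order: byte-swap the value and, unless fb_be_math() says the host
+ * already computes in big-endian, shift it back into the low bits.
+ */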
+#define FB_RIGHT_POS(p, bpp) (fb_be_math(p) ? 0 : (32 - (bpp)))
+
+static inline u32 offb_cmap_byteswap(struct fb_info *info, u32 value)
+{
+ u32 bpp = info->var.bits_per_pixel;
+
+ return cpu_to_be32(value) >> FB_RIGHT_POS(info, bpp);
+}
+
/*
* Set a single color register. The values supplied are already
* rounded down to the hardware's capabilities (according to the
mask <<= info->var.transp.offset;
value |= mask;
}
- pal[regno] = value;
+ pal[regno] = offb_cmap_byteswap(info, value);
return 0;
}
static void __iomem *offb_map_reg(struct device_node *np, int index,
unsigned long offset, unsigned long size)
{
- const u32 *addrp;
+ const __be32 *addrp;
u64 asize, taddr;
unsigned int flags;
}
of_node_put(pciparent);
} else if (dp && of_device_is_compatible(dp, "qemu,std-vga")) {
- const u32 io_of_addr[3] = { 0x01000000, 0x0, 0x0 };
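+		/*
+		 * of_translate_address() consumes big-endian cells; spell the
+		 * constant per host endianness so the in-memory layout is the
+		 * same __be32 value either way.
+		 */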
+#ifdef __BIG_ENDIAN
+ const __be32 io_of_addr[3] = { 0x01000000, 0x0, 0x0 };
+#else
+ const __be32 io_of_addr[3] = { 0x00000001, 0x0, 0x0 };
+#endif
u64 io_addr = of_translate_address(dp, io_of_addr);
if (io_addr != OF_BAD_ADDR) {
par->cmap_adr = ioremap(io_addr + 0x3c8, 2);
unsigned int flags, rsize, addr_prop = 0;
unsigned long max_size = 0;
u64 rstart, address = OF_BAD_ADDR;
- const u32 *pp, *addrp, *up;
+ const __be32 *pp, *addrp, *up;
u64 asize;
int foreign_endian = 0;
if (pp == NULL)
pp = of_get_property(dp, "depth", &len);
if (pp && len == sizeof(u32))
- depth = *pp;
+ depth = be32_to_cpup(pp);
pp = of_get_property(dp, "linux,bootx-width", &len);
if (pp == NULL)
pp = of_get_property(dp, "width", &len);
if (pp && len == sizeof(u32))
- width = *pp;
+ width = be32_to_cpup(pp);
pp = of_get_property(dp, "linux,bootx-height", &len);
if (pp == NULL)
pp = of_get_property(dp, "height", &len);
if (pp && len == sizeof(u32))
- height = *pp;
+ height = be32_to_cpup(pp);
pp = of_get_property(dp, "linux,bootx-linebytes", &len);
if (pp == NULL)
pp = of_get_property(dp, "linebytes", &len);
if (pp && len == sizeof(u32) && (*pp != 0xffffffffu))
- pitch = *pp;
+ pitch = be32_to_cpup(pp);
else
pitch = width * ((depth + 7) / 8);
struct omap_dss_device *in = ddata->in;
int r;
+ mutex_lock(&ddata->mutex);
+
dev_dbg(&ddata->spi->dev, "%s\n", __func__);
in->ops.sdi->set_timings(in, &ddata->videomode);
if (omapdss_device_is_enabled(dssdev))
return 0;
- mutex_lock(&ddata->mutex);
r = acx565akm_panel_power_on(dssdev);
- mutex_unlock(&ddata->mutex);
-
if (r)
return r;
* Power management
*/
+#if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_RUNTIME)
static int sh_mobile_meram_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
meram_write_reg(priv->base, common_regs[i], priv->regs[i]);
return 0;
}
+#endif /* CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME */
static UNIVERSAL_DEV_PM_OPS(sh_mobile_meram_dev_pm_ops,
sh_mobile_meram_suspend,
+ sizeof(u32) * 16, GFP_KERNEL);
if (!fbi) {
dev_err(&pdev->dev, "Failed to initialize framebuffer device\n");
- ret = -ENOMEM;
- goto failed;
+ return -ENOMEM;
}
strcpy(fbi->fb.fix.id, "VT8500 LCD");
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL) {
dev_err(&pdev->dev, "no I/O memory resource defined\n");
- ret = -ENODEV;
- goto failed_fbi;
+ return -ENODEV;
}
res = request_mem_region(res->start, resource_size(res), "vt8500lcd");
if (res == NULL) {
dev_err(&pdev->dev, "failed to request I/O memory\n");
- ret = -EBUSY;
- goto failed_fbi;
+ return -EBUSY;
}
fbi->regbase = ioremap(res->start, resource_size(res));
}
disp_timing = of_get_display_timings(pdev->dev.of_node);
- if (!disp_timing)
- return -EINVAL;
+ if (!disp_timing) {
+ ret = -EINVAL;
+ goto failed_free_io;
+ }
ret = of_get_fb_videomode(pdev->dev.of_node, &of_mode,
OF_USE_NATIVE_MODE);
if (ret)
- return ret;
+ goto failed_free_io;
ret = of_property_read_u32(pdev->dev.of_node, "bits-per-pixel", &bpp);
if (ret)
- return ret;
+ goto failed_free_io;
/* try allocating the framebuffer */
fb_mem_len = of_mode.xres * of_mode.yres * 2 * (bpp / 8);
GFP_KERNEL);
if (!fb_mem_virt) {
pr_err("%s: Failed to allocate framebuffer\n", __func__);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto failed_free_io;
}
fbi->fb.fix.smem_start = fb_mem_phys;
iounmap(fbi->regbase);
failed_free_res:
release_mem_region(res->start, resource_size(res));
-failed_fbi:
- kfree(fbi);
-failed:
return ret;
}
{
__le32 actual = cpu_to_le32(vb->num_pages);
- virtio_cwrite(vb->vdev, struct virtio_balloon_config, num_pages,
+ virtio_cwrite(vb->vdev, struct virtio_balloon_config, actual,
&actual);
}
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/of_address.h>
-#include <linux/miscdevice.h>
#define PM_RSTC 0x1c
#define PM_WDOG 0x24
#include <linux/platform_device.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/timer.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/watchdog.h>
-#include <linux/miscdevice.h>
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/miscdevice.h>
#include <linux/uaccess.h>
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/bitops.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/platform_device.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/watchdog.h>
-#include <linux/miscdevice.h>
#include <linux/moduleparam.h>
#include <linux/platform_device.h>
#if defined CONFIG_PNP
/* now that the user has specified an IO port and we haven't detected
* any devices, disable pnp support */
+ if (isapnp)
+ pnp_unregister_driver(&scl200wdt_pnp_driver);
isapnp = 0;
- pnp_unregister_driver(&scl200wdt_pnp_driver);
#endif
if (!request_region(io, io_len, SC1200_MODULE_NAME)) {
#include <linux/init.h>
#include <linux/types.h>
#include <linux/spinlock.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/pm_runtime.h>
#include <linux/fs.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/timer.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/notifier.h>
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/stmp3xxx_rtc_wdt.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>
-#include <linux/miscdevice.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/watchdog.h>
pfn = page_to_pfn(page);
- set_phys_to_machine(pfn, frame_list[i]);
-
#ifdef CONFIG_XEN_HAVE_PVMMU
- /* Link back into the page tables if not highmem. */
- if (xen_pv_domain() && !PageHighMem(page)) {
- int ret;
- ret = HYPERVISOR_update_va_mapping(
- (unsigned long)__va(pfn << PAGE_SHIFT),
- mfn_pte(frame_list[i], PAGE_KERNEL),
- 0);
- BUG_ON(ret);
+ if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+ set_phys_to_machine(pfn, frame_list[i]);
+
+ /* Link back into the page tables if not highmem. */
+ if (!PageHighMem(page)) {
+ int ret;
+ ret = HYPERVISOR_update_va_mapping(
+ (unsigned long)__va(pfn << PAGE_SHIFT),
+ mfn_pte(frame_list[i], PAGE_KERNEL),
+ 0);
+ BUG_ON(ret);
+ }
}
#endif
enum bp_state state = BP_DONE;
unsigned long pfn, i;
struct page *page;
- struct page *scratch_page;
int ret;
struct xen_memory_reservation reservation = {
.address_bits = 0,
scrub_page(page);
+#ifdef CONFIG_XEN_HAVE_PVMMU
/*
* Ballooned out frames are effectively replaced with
* a scratch frame. Ensure direct mappings and the
* p2m are consistent.
*/
- scratch_page = get_balloon_scratch_page();
-#ifdef CONFIG_XEN_HAVE_PVMMU
- if (xen_pv_domain() && !PageHighMem(page)) {
- ret = HYPERVISOR_update_va_mapping(
- (unsigned long)__va(pfn << PAGE_SHIFT),
- pfn_pte(page_to_pfn(scratch_page),
- PAGE_KERNEL_RO), 0);
- BUG_ON(ret);
- }
-#endif
if (!xen_feature(XENFEAT_auto_translated_physmap)) {
unsigned long p;
+ struct page *scratch_page = get_balloon_scratch_page();
+
+ if (!PageHighMem(page)) {
+ ret = HYPERVISOR_update_va_mapping(
+ (unsigned long)__va(pfn << PAGE_SHIFT),
+ pfn_pte(page_to_pfn(scratch_page),
+ PAGE_KERNEL_RO), 0);
+ BUG_ON(ret);
+ }
p = page_to_pfn(scratch_page);
__set_phys_to_machine(pfn, pfn_to_mfn(p));
+
+ put_balloon_scratch_page();
}
- put_balloon_scratch_page();
+#endif
balloon_append(pfn_to_page(pfn));
}
if (!xen_domain())
return -ENODEV;
- for_each_online_cpu(cpu)
- {
- per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
- if (per_cpu(balloon_scratch_page, cpu) == NULL) {
- pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
- return -ENOMEM;
+ if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+ for_each_online_cpu(cpu)
+ {
+ per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
+ if (per_cpu(balloon_scratch_page, cpu) == NULL) {
+ pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+ return -ENOMEM;
+ }
}
+ register_cpu_notifier(&balloon_cpu_notifier);
}
- register_cpu_notifier(&balloon_cpu_notifier);
pr_info("Initialising balloon driver\n");
ret = m2p_add_override(mfn, pages[i], kmap_ops ?
&kmap_ops[i] : NULL);
if (ret)
- return ret;
+ goto out;
}
+ out:
if (lazy)
arch_leave_lazy_mmu_mode();
ret = m2p_remove_override(pages[i], kmap_ops ?
&kmap_ops[i] : NULL);
if (ret)
- return ret;
+ goto out;
}
+ out:
if (lazy)
arch_leave_lazy_mmu_mode();
gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
PAGE_SIZE * max_nr_gframes);
if (gnttab_shared.addr == NULL) {
- pr_warn("Failed to ioremap gnttab share frames!\n");
+ pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
+ xen_hvm_resume_frames);
return -ENOMEM;
}
}
add.flags = XEN_PCI_DEV_EXTFN;
#ifdef CONFIG_ACPI
- handle = DEVICE_ACPI_HANDLE(&pci_dev->dev);
+ handle = ACPI_HANDLE(&pci_dev->dev);
if (!handle && pci_dev->bus->bridge)
- handle = DEVICE_ACPI_HANDLE(pci_dev->bus->bridge);
+ handle = ACPI_HANDLE(pci_dev->bus->bridge);
#ifdef CONFIG_PCI_IOV
if (!handle && pci_dev->is_virtfn)
- handle = DEVICE_ACPI_HANDLE(physfn->bus->bridge);
+ handle = ACPI_HANDLE(physfn->bus->bridge);
#endif
if (handle) {
acpi_status status;
{
struct page **pages = vma->vm_private_data;
int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ int rc;
if (!xen_feature(XENFEAT_auto_translated_physmap) || !numpgs || !pages)
return;
- xen_unmap_domain_mfn_range(vma, numpgs, pages);
- free_xenballooned_pages(numpgs, pages);
+ rc = xen_unmap_domain_mfn_range(vma, numpgs, pages);
+ if (rc == 0)
+ free_xenballooned_pages(numpgs, pages);
+ else
+ pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
+ numpgs, rc);
kfree(pages);
}
sg_dma_len(sgl) = 0;
return 0;
}
+ xen_dma_map_page(hwdev, pfn_to_page(map >> PAGE_SHIFT),
+ map & ~PAGE_MASK,
+ sg->length,
+ dir,
+ attrs);
sg->dma_address = xen_phys_to_bus(map);
} else {
/* we are not interested in the dma_addr returned by
#include "v9fs_vfs.h"
#include "fid.h"
-/**
- * v9fs_dentry_delete - called when dentry refcount equals 0
- * @dentry: dentry in question
- *
- * By returning 1 here we should remove cacheing of unused
- * dentry components.
- *
- */
-
-static int v9fs_dentry_delete(const struct dentry *dentry)
-{
- p9_debug(P9_DEBUG_VFS, " dentry: %s (%p)\n",
- dentry->d_name.name, dentry);
-
- return 1;
-}
-
/**
* v9fs_cached_dentry_delete - called when dentry refcount equals 0
* @dentry: dentry in question
};
const struct dentry_operations v9fs_dentry_operations = {
- .d_delete = v9fs_dentry_delete,
+ .d_delete = always_delete_dentry,
.d_release = v9fs_dentry_release,
};
Version 3.11
------------
-- Converted to use 2.3.x page cache [Dave Jones <dave@powertweak.com>]
+- Converted to use 2.3.x page cache [Dave Jones]
- Corruption in truncate() bugfix [Ken Tyler <kent@werple.net.au>]
Version 3.10
struct percpu_ref users;
atomic_t dead;
+ struct percpu_ref reqs;
+
unsigned long user_id;
struct __percpu kioctx_cpu *cpu;
struct page **ring_pages;
long nr_pages;
- struct rcu_head rcu_head;
struct work_struct free_work;
struct {
int i;
for (i = 0; i < ctx->nr_pages; i++) {
+ struct page *page;
pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i,
page_count(ctx->ring_pages[i]));
- put_page(ctx->ring_pages[i]);
+ page = ctx->ring_pages[i];
+ if (!page)
+ continue;
+ ctx->ring_pages[i] = NULL;
+ put_page(page);
}
put_aio_ring_file(ctx);
- if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages)
+ if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages) {
kfree(ctx->ring_pages);
+ ctx->ring_pages = NULL;
+ }
}
static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
unsigned long flags;
int rc;
+ rc = 0;
+
+ /* Make sure the old page hasn't already been changed */
+ spin_lock(&mapping->private_lock);
+ ctx = mapping->private_data;
+ if (ctx) {
+ pgoff_t idx;
+ spin_lock_irqsave(&ctx->completion_lock, flags);
+ idx = old->index;
+ if (idx < (pgoff_t)ctx->nr_pages) {
+ if (ctx->ring_pages[idx] != old)
+ rc = -EAGAIN;
+ } else
+ rc = -EINVAL;
+ spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ } else
+ rc = -EINVAL;
+ spin_unlock(&mapping->private_lock);
+
+ if (rc != 0)
+ return rc;
+
/* Writeback must be complete */
BUG_ON(PageWriteback(old));
- put_page(old);
+ get_page(new);
- rc = migrate_page_move_mapping(mapping, new, old, NULL, mode);
+ rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1);
if (rc != MIGRATEPAGE_SUCCESS) {
- get_page(old);
+ put_page(new);
return rc;
}
- get_page(new);
-
/* We can potentially race against kioctx teardown here. Use the
* address_space's private data lock to protect the mapping's
* private_data.
spin_lock_irqsave(&ctx->completion_lock, flags);
migrate_page_copy(new, old);
idx = old->index;
- if (idx < (pgoff_t)ctx->nr_pages)
- ctx->ring_pages[idx] = new;
+ if (idx < (pgoff_t)ctx->nr_pages) {
+ /* And only do the move if things haven't changed */
+ if (ctx->ring_pages[idx] == old)
+ ctx->ring_pages[idx] = new;
+ else
+ rc = -EAGAIN;
+ } else
+ rc = -EINVAL;
spin_unlock_irqrestore(&ctx->completion_lock, flags);
} else
rc = -EBUSY;
spin_unlock(&mapping->private_lock);
+ if (rc == MIGRATEPAGE_SUCCESS)
+ put_page(old);
+ else
+ put_page(new);
+
return rc;
}
#endif
struct aio_ring *ring;
unsigned nr_events = ctx->max_reqs;
struct mm_struct *mm = current->mm;
- unsigned long size, populate;
+ unsigned long size, unused;
int nr_pages;
int i;
struct file *file;
return -EAGAIN;
}
+ ctx->aio_ring_file = file;
+ nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring))
+ / sizeof(struct io_event);
+
+ ctx->ring_pages = ctx->internal_pages;
+ if (nr_pages > AIO_RING_PAGES) {
+ ctx->ring_pages = kcalloc(nr_pages, sizeof(struct page *),
+ GFP_KERNEL);
+ if (!ctx->ring_pages) {
+ put_aio_ring_file(ctx);
+ return -ENOMEM;
+ }
+ }
+
for (i = 0; i < nr_pages; i++) {
struct page *page;
page = find_or_create_page(file->f_inode->i_mapping,
SetPageUptodate(page);
SetPageDirty(page);
unlock_page(page);
+
+ ctx->ring_pages[i] = page;
}
- ctx->aio_ring_file = file;
- nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring))
- / sizeof(struct io_event);
+ ctx->nr_pages = i;
- ctx->ring_pages = ctx->internal_pages;
- if (nr_pages > AIO_RING_PAGES) {
- ctx->ring_pages = kcalloc(nr_pages, sizeof(struct page *),
- GFP_KERNEL);
- if (!ctx->ring_pages)
- return -ENOMEM;
+ if (unlikely(i != nr_pages)) {
+ aio_free_ring(ctx);
+ return -EAGAIN;
}
ctx->mmap_size = nr_pages * PAGE_SIZE;
down_write(&mm->mmap_sem);
ctx->mmap_base = do_mmap_pgoff(ctx->aio_ring_file, 0, ctx->mmap_size,
PROT_READ | PROT_WRITE,
- MAP_SHARED | MAP_POPULATE, 0, &populate);
+ MAP_SHARED, 0, &unused);
+ up_write(&mm->mmap_sem);
if (IS_ERR((void *)ctx->mmap_base)) {
- up_write(&mm->mmap_sem);
ctx->mmap_size = 0;
aio_free_ring(ctx);
return -EAGAIN;
pr_debug("mmap address: 0x%08lx\n", ctx->mmap_base);
- /* We must do this while still holding mmap_sem for write, as we
- * need to be protected against userspace attempting to mremap()
- * or munmap() the ring buffer.
- */
- ctx->nr_pages = get_user_pages(current, mm, ctx->mmap_base, nr_pages,
- 1, 0, ctx->ring_pages, NULL);
-
- /* Dropping the reference here is safe as the page cache will hold
- * onto the pages for us. It is also required so that page migration
- * can unmap the pages and get the right reference count.
- */
- for (i = 0; i < ctx->nr_pages; i++)
- put_page(ctx->ring_pages[i]);
-
- up_write(&mm->mmap_sem);
-
- if (unlikely(ctx->nr_pages != nr_pages)) {
- aio_free_ring(ctx);
- return -EAGAIN;
- }
-
ctx->user_id = ctx->mmap_base;
ctx->nr_events = nr_events; /* trusted copy */
return cancel(kiocb);
}
-static void free_ioctx_rcu(struct rcu_head *head)
+static void free_ioctx(struct work_struct *work)
{
- struct kioctx *ctx = container_of(head, struct kioctx, rcu_head);
+ struct kioctx *ctx = container_of(work, struct kioctx, free_work);
+
+ pr_debug("freeing %p\n", ctx);
+ aio_free_ring(ctx);
free_percpu(ctx->cpu);
kmem_cache_free(kioctx_cachep, ctx);
}
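+/*
+ * Runs when the last request reference is dropped. The final teardown
+ * can sleep (it tears down the ring), while a percpu_ref release may
+ * fire in atomic context, so punt the work to process context.
+ */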
+static void free_ioctx_reqs(struct percpu_ref *ref)
+{
+ struct kioctx *ctx = container_of(ref, struct kioctx, reqs);
+
+ INIT_WORK(&ctx->free_work, free_ioctx);
+ schedule_work(&ctx->free_work);
+}
+
/*
* When this function runs, the kioctx has been removed from the "hash table"
* and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -
* now it's safe to cancel any that need to be.
*/
-static void free_ioctx(struct work_struct *work)
+static void free_ioctx_users(struct percpu_ref *ref)
{
- struct kioctx *ctx = container_of(work, struct kioctx, free_work);
- struct aio_ring *ring;
+ struct kioctx *ctx = container_of(ref, struct kioctx, users);
struct kiocb *req;
- unsigned cpu, avail;
- DEFINE_WAIT(wait);
spin_lock_irq(&ctx->ctx_lock);
spin_unlock_irq(&ctx->ctx_lock);
- for_each_possible_cpu(cpu) {
- struct kioctx_cpu *kcpu = per_cpu_ptr(ctx->cpu, cpu);
-
- atomic_add(kcpu->reqs_available, &ctx->reqs_available);
- kcpu->reqs_available = 0;
- }
-
- while (1) {
- prepare_to_wait(&ctx->wait, &wait, TASK_UNINTERRUPTIBLE);
-
- ring = kmap_atomic(ctx->ring_pages[0]);
- avail = (ring->head <= ring->tail)
- ? ring->tail - ring->head
- : ctx->nr_events - ring->head + ring->tail;
-
- atomic_add(avail, &ctx->reqs_available);
- ring->head = ring->tail;
- kunmap_atomic(ring);
-
- if (atomic_read(&ctx->reqs_available) >= ctx->nr_events - 1)
- break;
-
- schedule();
- }
- finish_wait(&ctx->wait, &wait);
-
- WARN_ON(atomic_read(&ctx->reqs_available) > ctx->nr_events - 1);
-
- aio_free_ring(ctx);
-
- pr_debug("freeing %p\n", ctx);
-
- /*
- * Here the call_rcu() is between the wait_event() for reqs_active to
- * hit 0, and freeing the ioctx.
- *
- * aio_complete() decrements reqs_active, but it has to touch the ioctx
- * after to issue a wakeup so we use rcu.
- */
- call_rcu(&ctx->rcu_head, free_ioctx_rcu);
-}
-
-static void free_ioctx_ref(struct percpu_ref *ref)
-{
- struct kioctx *ctx = container_of(ref, struct kioctx, users);
-
- INIT_WORK(&ctx->free_work, free_ioctx);
- schedule_work(&ctx->free_work);
+ percpu_ref_kill(&ctx->reqs);
+ percpu_ref_put(&ctx->reqs);
}
static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
}
}
+static void aio_nr_sub(unsigned nr)
+{
+ spin_lock(&aio_nr_lock);
+ if (WARN_ON(aio_nr - nr > aio_nr))
+ aio_nr = 0;
+ else
+ aio_nr -= nr;
+ spin_unlock(&aio_nr_lock);
+}
+
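The WARN_ON() above is the standard unsigned-underflow check: if nr is larger than aio_nr, the subtraction wraps around, so the result exceeds the original value. A small runnable demonstration (plain C, not part of the patch):

#include <assert.h>

int main(void)
{
	unsigned aio_nr = 5, nr = 7;

	/* 5 - 7 wraps around to UINT_MAX - 1, which is > 5: caught */
	assert(aio_nr - nr > aio_nr);

	nr = 3;
	/* 5 - 3 == 2, which is <= 5: the subtraction is safe */
	assert(!(aio_nr - nr > aio_nr));
	return 0;
}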
/* ioctx_alloc
* Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.
*/
ctx->max_reqs = nr_events;
- if (percpu_ref_init(&ctx->users, free_ioctx_ref))
- goto out_freectx;
+ if (percpu_ref_init(&ctx->users, free_ioctx_users))
+ goto err;
+
+ if (percpu_ref_init(&ctx->reqs, free_ioctx_reqs))
+ goto err;
spin_lock_init(&ctx->ctx_lock);
spin_lock_init(&ctx->completion_lock);
ctx->cpu = alloc_percpu(struct kioctx_cpu);
if (!ctx->cpu)
- goto out_freeref;
+ goto err;
if (aio_setup_ring(ctx) < 0)
- goto out_freepcpu;
+ goto err;
atomic_set(&ctx->reqs_available, ctx->nr_events - 1);
ctx->req_batch = (ctx->nr_events - 1) / (num_possible_cpus() * 4);
if (aio_nr + nr_events > (aio_max_nr * 2UL) ||
aio_nr + nr_events < aio_nr) {
spin_unlock(&aio_nr_lock);
- goto out_cleanup;
+ err = -EAGAIN;
+ goto err_ctx;
}
aio_nr += ctx->max_reqs;
spin_unlock(&aio_nr_lock);
- percpu_ref_get(&ctx->users); /* io_setup() will drop this ref */
+ percpu_ref_get(&ctx->users); /* io_setup() will drop this ref */
+ percpu_ref_get(&ctx->reqs); /* free_ioctx_users() will drop this */
err = ioctx_add_table(ctx, mm);
if (err)
- goto out_cleanup_put;
+ goto err_cleanup;
pr_debug("allocated ioctx %p[%ld]: mm=%p mask=0x%x\n",
ctx, ctx->user_id, mm, ctx->nr_events);
return ctx;
-out_cleanup_put:
- percpu_ref_put(&ctx->users);
-out_cleanup:
- err = -EAGAIN;
+err_cleanup:
+ aio_nr_sub(ctx->max_reqs);
+err_ctx:
aio_free_ring(ctx);
-out_freepcpu:
+err:
free_percpu(ctx->cpu);
-out_freeref:
+ free_percpu(ctx->reqs.pcpu_count);
free_percpu(ctx->users.pcpu_count);
-out_freectx:
- put_aio_ring_file(ctx);
kmem_cache_free(kioctx_cachep, ctx);
pr_debug("error allocating ioctx %d\n", err);
return ERR_PTR(err);
* -EAGAIN with no ioctxs actually in use (as far as userspace
* could tell).
*/
- spin_lock(&aio_nr_lock);
- BUG_ON(aio_nr - ctx->max_reqs > aio_nr);
- aio_nr -= ctx->max_reqs;
- spin_unlock(&aio_nr_lock);
+ aio_nr_sub(ctx->max_reqs);
if (ctx->mmap_size)
vm_munmap(ctx->mmap_base, ctx->mmap_size);
if (unlikely(!req))
goto out_put;
+ percpu_ref_get(&ctx->reqs);
+
req->ki_ctx = ctx;
return req;
out_put:
return;
}
- /*
- * Take rcu_read_lock() in case the kioctx is being destroyed, as we
- * need to issue a wakeup after incrementing reqs_available.
- */
- rcu_read_lock();
-
if (iocb->ki_list.next) {
unsigned long flags;
if (waitqueue_active(&ctx->wait))
wake_up(&ctx->wait);
- rcu_read_unlock();
+ percpu_ref_put(&ctx->reqs);
}
EXPORT_SYMBOL(aio_complete);
return 0;
out_put_req:
put_reqs_available(ctx, 1);
+ percpu_ref_put(&ctx->reqs);
kiocb_free(req);
return ret;
}
static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
*page, unsigned int len, unsigned int offset,
- unsigned short max_sectors)
+ unsigned int max_sectors)
{
int retried_segments = 0;
struct bio_vec *bvec;
select XOR_BLOCKS
help
- Btrfs is a new filesystem with extents, writable snapshotting,
- support for multiple devices and many more features.
+ Btrfs is a general-purpose copy-on-write filesystem with extents,
+ writable snapshotting, support for multiple devices and many more
+ features focused on fault tolerance, repair and easy administration.
- Btrfs is highly experimental, and THE DISK FORMAT IS NOT YET
- FINALIZED. You should say N here unless you are interested in
- testing Btrfs with non-critical data.
+ The filesystem disk format is no longer unstable, and it's not
+ expected to change unless there are strong reasons to do so. If there
+ is a format change, file systems with an unchanged format will
+ continue to be mountable and usable by newer kernels.
+
+ For more information, please see the web pages at
+ http://btrfs.wiki.kernel.org.
To compile this file system support as a module, choose M here. The
module will be called btrfs.
spin_lock_irq(&workers->lock);
if (workers->stopping) {
spin_unlock_irq(&workers->lock);
+ ret = -EINVAL;
goto fail_kthread;
}
list_add_tail(&worker->worker_list, &workers->idle_list);
* the integrity of (super)-block write requests, do not
* enable the config option BTRFS_FS_CHECK_INTEGRITY to
* include and compile the integrity check tool.
+ *
+ * Expect millions of lines of information in the kernel log with
+ * check_int_print_mask enabled. Therefore set LOG_BUF_SHIFT in the
+ * kernel config to at least 26 (which is 64MB). The value is usually
+ * limited to 21 (which is 2MB) in init/Kconfig; the file needs to be
+ * changed as follows before LOG_BUF_SHIFT can be set to a high value:
+ * config LOG_BUF_SHIFT
+ * int "Kernel log buffer size (16 => 64KB, 17 => 128KB)"
+ * range 12 30
*/
#include <linux/sched.h>
#define BTRFSIC_PRINT_MASK_INITIAL_DATABASE 0x00000400
#define BTRFSIC_PRINT_MASK_NUM_COPIES 0x00000800
#define BTRFSIC_PRINT_MASK_TREE_WITH_ALL_MIRRORS 0x00001000
+#define BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH_VERBOSE 0x00002000
struct btrfsic_dev_state;
struct btrfsic_state;
static int btrfsic_read_block(struct btrfsic_state *state,
struct btrfsic_block_data_ctx *block_ctx);
static void btrfsic_dump_database(struct btrfsic_state *state);
-static void btrfsic_complete_bio_end_io(struct bio *bio, int err);
static int btrfsic_test_for_metadata(struct btrfsic_state *state,
char **datav, unsigned int num_pages);
static void btrfsic_process_written_block(struct btrfsic_dev_state *dev_state,
for (i = 0; i < num_pages;) {
struct bio *bio;
unsigned int j;
- DECLARE_COMPLETION_ONSTACK(complete);
bio = btrfs_io_bio_alloc(GFP_NOFS, num_pages - i);
if (!bio) {
}
bio->bi_bdev = block_ctx->dev->bdev;
bio->bi_sector = dev_bytenr >> 9;
- bio->bi_end_io = btrfsic_complete_bio_end_io;
- bio->bi_private = &complete;
for (j = i; j < num_pages; j++) {
ret = bio_add_page(bio, block_ctx->pagev[j],
"btrfsic: error, failed to add a single page!\n");
return -1;
}
- submit_bio(READ, bio);
-
- /* this will also unplug the queue */
- wait_for_completion(&complete);
-
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) {
+ if (submit_bio_wait(READ, bio)) {
printk(KERN_INFO
"btrfsic: read error at logical %llu dev %s!\n",
block_ctx->start, block_ctx->dev->name);
return block_ctx->len;
}
-static void btrfsic_complete_bio_end_io(struct bio *bio, int err)
-{
- complete((struct completion *)bio->bi_private);
-}
-
static void btrfsic_dump_database(struct btrfsic_state *state)
{
struct list_head *elem_all;
return submit_bh(rw, bh);
}
-void btrfsic_submit_bio(int rw, struct bio *bio)
+static void __btrfsic_submit_bio(int rw, struct bio *bio)
{
struct btrfsic_dev_state *dev_state;
- if (!btrfsic_is_initialized) {
- submit_bio(rw, bio);
+ if (!btrfsic_is_initialized)
return;
- }
mutex_lock(&btrfsic_mutex);
/* since btrfsic_submit_bio() is also called before
(rw & WRITE) && NULL != bio->bi_io_vec) {
unsigned int i;
u64 dev_bytenr;
+ u64 cur_bytenr;
int bio_is_patched;
char **mapped_datav;
GFP_NOFS);
if (!mapped_datav)
goto leave;
+ cur_bytenr = dev_bytenr;
for (i = 0; i < bio->bi_vcnt; i++) {
BUG_ON(bio->bi_io_vec[i].bv_len != PAGE_CACHE_SIZE);
mapped_datav[i] = kmap(bio->bi_io_vec[i].bv_page);
kfree(mapped_datav);
goto leave;
}
- if ((BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH |
- BTRFSIC_PRINT_MASK_VERBOSE) ==
- (dev_state->state->print_mask &
- (BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH |
- BTRFSIC_PRINT_MASK_VERBOSE)))
+ if (dev_state->state->print_mask &
+ BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH_VERBOSE)
printk(KERN_INFO
- "#%u: page=%p, len=%u, offset=%u\n",
- i, bio->bi_io_vec[i].bv_page,
- bio->bi_io_vec[i].bv_len,
+ "#%u: bytenr=%llu, len=%u, offset=%u\n",
+ i, cur_bytenr, bio->bi_io_vec[i].bv_len,
bio->bi_io_vec[i].bv_offset);
+ cur_bytenr += bio->bi_io_vec[i].bv_len;
}
btrfsic_process_written_block(dev_state, dev_bytenr,
mapped_datav, bio->bi_vcnt,
}
leave:
mutex_unlock(&btrfsic_mutex);
+}
+void btrfsic_submit_bio(int rw, struct bio *bio)
+{
+ __btrfsic_submit_bio(rw, bio);
submit_bio(rw, bio);
}
+int btrfsic_submit_bio_wait(int rw, struct bio *bio)
+{
+ __btrfsic_submit_bio(rw, bio);
+ return submit_bio_wait(rw, bio);
+}
+
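Each open-coded DECLARE_COMPLETION_ONSTACK / bi_end_io / wait_for_completion sequence deleted in this series collapses into a single submit_bio_wait() call, and callers now get the error directly from the return value instead of testing BIO_UPTODATE. One plausible shape for such a helper, reconstructed from the deleted pattern — a kernel-style sketch, not a standalone program; the real block-layer submit_bio_wait() may differ in detail:

#include <linux/bio.h>
#include <linux/completion.h>

struct submit_bio_ret {
	struct completion event;
	int error;
};

static void submit_bio_wait_endio(struct bio *bio, int error)
{
	struct submit_bio_ret *ret = bio->bi_private;

	ret->error = error;
	complete(&ret->event);
}

/* Submit a bio and sleep until it completes; 0 on success, -Exxx on error. */
static int submit_bio_wait(int rw, struct bio *bio)
{
	struct submit_bio_ret ret;

	init_completion(&ret.event);
	bio->bi_private = &ret;
	bio->bi_end_io = submit_bio_wait_endio;
	submit_bio(rw | REQ_SYNC, bio);
	wait_for_completion(&ret.event);

	return ret.error;
}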
int btrfsic_mount(struct btrfs_root *root,
struct btrfs_fs_devices *fs_devices,
int including_extent_data, u32 print_mask)
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
int btrfsic_submit_bh(int rw, struct buffer_head *bh);
void btrfsic_submit_bio(int rw, struct bio *bio);
+int btrfsic_submit_bio_wait(int rw, struct bio *bio);
#else
#define btrfsic_submit_bh submit_bh
#define btrfsic_submit_bio submit_bio
+#define btrfsic_submit_bio_wait submit_bio_wait
#endif
int btrfsic_mount(struct btrfs_root *root,
struct btrfs_ordered_sum *sums);
int btrfs_csum_one_bio(struct btrfs_root *root, struct inode *inode,
struct bio *bio, u64 file_start, int contig);
-int btrfs_csum_truncate(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct btrfs_path *path,
- u64 isize);
int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end,
struct list_head *list, int search_commit);
/* inode.c */
int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync);
void btrfs_drop_extent_cache(struct inode *inode, u64 start, u64 end,
int skip_pinned);
-int btrfs_replace_extent_cache(struct inode *inode, struct extent_map *replace,
- u64 start, u64 end, int skip_pinned,
- int modified);
extern const struct file_operations btrfs_file_operations;
int __btrfs_drop_extents(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct inode *inode,
dev_replace->tgtdev = tgt_device;
printk_in_rcu(KERN_INFO
- "btrfs: dev_replace from %s (devid %llu) to %s) started\n",
+ "btrfs: dev_replace from %s (devid %llu) to %s started\n",
src_device->missing ? "<missing disk>" :
rcu_str_deref(src_device->name),
src_device->devid,
int btrfs_commit_super(struct btrfs_root *root)
{
struct btrfs_trans_handle *trans;
- int ret;
mutex_lock(&root->fs_info->cleaner_mutex);
btrfs_run_delayed_iputs(root);
trans = btrfs_join_transaction(root);
if (IS_ERR(trans))
return PTR_ERR(trans);
- ret = btrfs_commit_transaction(trans, root);
- if (ret)
- return ret;
- /* run commit again to drop the original snapshot */
- trans = btrfs_join_transaction(root);
- if (IS_ERR(trans))
- return PTR_ERR(trans);
- ret = btrfs_commit_transaction(trans, root);
- if (ret)
- return ret;
- ret = btrfs_write_and_wait_transaction(NULL, root);
- if (ret) {
- btrfs_error(root->fs_info, ret,
- "Failed to sync btree inode to disk.");
- return ret;
- }
-
- ret = write_ctree_super(NULL, root, 0);
- return ret;
+ return btrfs_commit_transaction(trans, root);
}
int close_ctree(struct btrfs_root *root)
if (!path)
return -ENOMEM;
- if (metadata) {
- key.objectid = bytenr;
- key.type = BTRFS_METADATA_ITEM_KEY;
- key.offset = offset;
- } else {
- key.objectid = bytenr;
- key.type = BTRFS_EXTENT_ITEM_KEY;
- key.offset = offset;
- }
-
if (!trans) {
path->skip_locking = 1;
path->search_commit_root = 1;
}
+
+search_again:
+ key.objectid = bytenr;
+ key.offset = offset;
+ if (metadata)
+ key.type = BTRFS_METADATA_ITEM_KEY;
+ else
+ key.type = BTRFS_EXTENT_ITEM_KEY;
+
again:
ret = btrfs_search_slot(trans, root->fs_info->extent_root,
&key, path, 0, 0);
goto out_free;
if (ret > 0 && metadata && key.type == BTRFS_METADATA_ITEM_KEY) {
- metadata = 0;
if (path->slots[0]) {
path->slots[0]--;
btrfs_item_key_to_cpu(path->nodes[0], &key,
mutex_lock(&head->mutex);
mutex_unlock(&head->mutex);
btrfs_put_delayed_ref(&head->node);
- goto again;
+ goto search_again;
}
if (head->extent_op && head->extent_op->update_flags)
extent_flags |= head->extent_op->flags_to_set;
return err;
}
-static void repair_io_failure_callback(struct bio *bio, int err)
-{
- complete(bio->bi_private);
-}
-
/*
* this bypasses the standard btrfs submit functions deliberately, as
* the standard behavior is to write all copies in a raid setup. here we only
{
struct bio *bio;
struct btrfs_device *dev;
- DECLARE_COMPLETION_ONSTACK(compl);
u64 map_length = 0;
u64 sector;
struct btrfs_bio *bbio = NULL;
struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
int ret;
+ ASSERT(!(fs_info->sb->s_flags & MS_RDONLY));
BUG_ON(!mirror_num);
/* we can't repair anything in raid56 yet */
bio = btrfs_io_bio_alloc(GFP_NOFS, 1);
if (!bio)
return -EIO;
- bio->bi_private = &compl;
- bio->bi_end_io = repair_io_failure_callback;
bio->bi_size = 0;
map_length = length;
}
bio->bi_bdev = dev->bdev;
bio_add_page(bio, page, length, start - page_offset(page));
- btrfsic_submit_bio(WRITE_SYNC, bio);
- wait_for_completion(&compl);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) {
+ if (btrfsic_submit_bio_wait(WRITE_SYNC, bio)) {
/* try to remap that extent elsewhere? */
bio_put(bio);
btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_WRITE_ERRS);
unsigned long i, num_pages = num_extent_pages(eb->start, eb->len);
int ret = 0;
+ if (root->fs_info->sb->s_flags & MS_RDONLY)
+ return -EROFS;
+
for (i = 0; i < num_pages; i++) {
struct page *p = extent_buffer_page(eb, i);
ret = repair_io_failure(root->fs_info, start, PAGE_CACHE_SIZE,
u64 private;
u64 private_failure;
struct io_failure_record *failrec;
- struct btrfs_fs_info *fs_info;
+ struct inode *inode = page->mapping->host;
+ struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
struct extent_state *state;
int num_copies;
int did_repair = 0;
int ret;
- struct inode *inode = page->mapping->host;
private = 0;
ret = count_range_bits(&BTRFS_I(inode)->io_failure_tree, &private,
did_repair = 1;
goto out;
}
+ if (fs_info->sb->s_flags & MS_RDONLY)
+ goto out;
spin_lock(&BTRFS_I(inode)->io_tree.lock);
state = find_first_extent_bit_state(&BTRFS_I(inode)->io_tree,
if (state && state->start <= failrec->start &&
state->end >= failrec->start + failrec->len - 1) {
- fs_info = BTRFS_I(inode)->root->fs_info;
num_copies = btrfs_num_copies(fs_info, failrec->logical,
failrec->len);
if (num_copies > 1) {
old->extent_offset, fs_info,
path, record_one_backref,
old);
- BUG_ON(ret < 0 && ret != -ENOENT);
+ if (ret < 0 && ret != -ENOENT)
+ return false;
/* no backref to be processed for this extent */
if (!old->count) {
write_unlock(&em_tree->lock);
out:
- if (em)
- trace_btrfs_get_extent(root, em);
+ trace_btrfs_get_extent(root, em);
if (path)
btrfs_free_path(path);
err = mutex_lock_killable_nested(&dir->i_mutex, I_MUTEX_PARENT);
if (err == -EINTR)
- goto out;
+ goto out_drop_write;
dentry = lookup_one_len(vol_args->name, parent, namelen);
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
dput(dentry);
out_unlock_dir:
mutex_unlock(&dir->i_mutex);
+out_drop_write:
mnt_drop_write_file(file);
out:
kfree(vol_args);
WARN_ON(nr < 0);
}
}
+ list_splice_tail(&splice, &fs_info->ordered_roots);
spin_unlock(&fs_info->ordered_root_lock);
}
btrfs_put_ordered_extent(ordered);
break;
}
- if (ordered->file_offset + ordered->len < start) {
+ if (ordered->file_offset + ordered->len <= start) {
btrfs_put_ordered_extent(ordered);
break;
}
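The '<' to '<=' change above is a half-open-interval fix: an ordered extent covers [file_offset, file_offset + len), so an extent whose end equals start does not overlap the range being waited on. A runnable check of the boundary case (plain C, illustration only):

#include <assert.h>

int main(void)
{
	unsigned long long file_offset = 0, len = 4096, start = 4096;

	/* the extent covers bytes 0..4095; start == 4096 lies outside it */
	assert(file_offset + len <= start);	/* new test: no overlap */
	assert(!(file_offset + len < start));	/* old test missed this */
	return 0;
}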
root_objectid == BTRFS_CHUNK_TREE_OBJECTID ||
root_objectid == BTRFS_DEV_TREE_OBJECTID ||
root_objectid == BTRFS_TREE_LOG_OBJECTID ||
- root_objectid == BTRFS_CSUM_TREE_OBJECTID)
+ root_objectid == BTRFS_CSUM_TREE_OBJECTID ||
+ root_objectid == BTRFS_UUID_TREE_OBJECTID ||
+ root_objectid == BTRFS_QUOTA_TREE_OBJECTID)
return 1;
return 0;
}
}
/*
- * helper to update/delete the 'address of tree root -> reloc tree'
+ * helper to delete the 'address of tree root -> reloc tree'
* mapping
*/
-static int __update_reloc_root(struct btrfs_root *root, int del)
+static void __del_reloc_root(struct btrfs_root *root)
{
struct rb_node *rb_node;
struct mapping_node *node = NULL;
spin_lock(&rc->reloc_root_tree.lock);
rb_node = tree_search(&rc->reloc_root_tree.rb_root,
- root->commit_root->start);
+ root->node->start);
if (rb_node) {
node = rb_entry(rb_node, struct mapping_node, rb_node);
rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
spin_unlock(&rc->reloc_root_tree.lock);
if (!node)
- return 0;
+ return;
BUG_ON((struct btrfs_root *)node->data != root);
- if (!del) {
- spin_lock(&rc->reloc_root_tree.lock);
- node->bytenr = root->node->start;
- rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
- node->bytenr, &node->rb_node);
- spin_unlock(&rc->reloc_root_tree.lock);
- if (rb_node)
- backref_tree_panic(rb_node, -EEXIST, node->bytenr);
- } else {
- spin_lock(&root->fs_info->trans_lock);
- list_del_init(&root->root_list);
- spin_unlock(&root->fs_info->trans_lock);
- kfree(node);
+ spin_lock(&root->fs_info->trans_lock);
+ list_del_init(&root->root_list);
+ spin_unlock(&root->fs_info->trans_lock);
+ kfree(node);
+}
+
+/*
+ * helper to update the 'address of tree root -> reloc tree'
+ * mapping
+ */
+static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
+{
+ struct rb_node *rb_node;
+ struct mapping_node *node = NULL;
+ struct reloc_control *rc = root->fs_info->reloc_ctl;
+
+ spin_lock(&rc->reloc_root_tree.lock);
+ rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ root->node->start);
+ if (rb_node) {
+ node = rb_entry(rb_node, struct mapping_node, rb_node);
+ rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
}
+ spin_unlock(&rc->reloc_root_tree.lock);
+
+ if (!node)
+ return 0;
+ BUG_ON((struct btrfs_root *)node->data != root);
+
+ spin_lock(&rc->reloc_root_tree.lock);
+ node->bytenr = new_bytenr;
+ rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
+ node->bytenr, &node->rb_node);
+ spin_unlock(&rc->reloc_root_tree.lock);
+ if (rb_node)
+ backref_tree_panic(rb_node, -EEXIST, node->bytenr);
return 0;
}
{
struct btrfs_root *reloc_root;
struct btrfs_root_item *root_item;
- int del = 0;
int ret;
if (!root->reloc_root)
if (root->fs_info->reloc_ctl->merge_reloc_tree &&
btrfs_root_refs(root_item) == 0) {
root->reloc_root = NULL;
- del = 1;
+ __del_reloc_root(reloc_root);
}
- __update_reloc_root(reloc_root, del);
-
if (reloc_root->commit_root != reloc_root->node) {
btrfs_set_root_node(root_item, reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
while (!list_empty(list)) {
reloc_root = list_entry(list->next, struct btrfs_root,
root_list);
- __update_reloc_root(reloc_root, 1);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
kfree(reloc_root);
ret = merge_reloc_root(rc, root);
if (ret) {
- __update_reloc_root(reloc_root, 1);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
kfree(reloc_root);
btrfs_std_error(root->fs_info, ret);
if (!list_empty(&reloc_roots))
free_reloc_roots(&reloc_roots);
+
+ /* new reloc root may be added */
+ mutex_lock(&root->fs_info->reloc_mutex);
+ list_splice_init(&rc->reloc_roots, &reloc_roots);
+ mutex_unlock(&root->fs_info->reloc_mutex);
+ if (!list_empty(&reloc_roots))
+ free_reloc_roots(&reloc_roots);
}
BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root));
BUG_ON(rc->stage == UPDATE_DATA_PTRS &&
root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID);
+ if (root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) {
+ if (buf == root->node)
+ __update_reloc_root(root, cow->start);
+ }
+
level = btrfs_header_level(buf);
if (btrfs_header_generation(buf) <=
btrfs_root_last_snapshot(&root->root_item))
int is_metadata, int have_csum,
const u8 *csum, u64 generation,
u16 csum_size);
-static void scrub_complete_bio_end_io(struct bio *bio, int err);
static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
struct scrub_block *sblock_good,
int force_write);
BTRFS_DEV_STAT_CORRUPTION_ERRS);
}
- if (sctx->readonly && !sctx->is_dev_replace)
- goto did_not_correct_error;
+ if (sctx->readonly) {
+ ASSERT(!sctx->is_dev_replace);
+ goto out;
+ }
if (!is_metadata && !have_csum) {
struct scrub_fixup_nodatasum *fixup_nodatasum;
for (page_num = 0; page_num < sblock->page_count; page_num++) {
struct bio *bio;
struct scrub_page *page = sblock->pagev[page_num];
- DECLARE_COMPLETION_ONSTACK(complete);
if (page->dev->bdev == NULL) {
page->io_error = 1;
}
bio->bi_bdev = page->dev->bdev;
bio->bi_sector = page->physical >> 9;
- bio->bi_end_io = scrub_complete_bio_end_io;
- bio->bi_private = &complete;
bio_add_page(bio, page->page, PAGE_SIZE, 0);
- btrfsic_submit_bio(READ, bio);
-
- /* this will also unplug the queue */
- wait_for_completion(&complete);
-
- page->io_error = !test_bit(BIO_UPTODATE, &bio->bi_flags);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+ if (btrfsic_submit_bio_wait(READ, bio))
sblock->no_io_error_seen = 0;
+
bio_put(bio);
}
sblock->checksum_error = 1;
}
-static void scrub_complete_bio_end_io(struct bio *bio, int err)
-{
- complete((struct completion *)bio->bi_private);
-}
-
static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
struct scrub_block *sblock_good,
int force_write)
sblock_bad->checksum_error || page_bad->io_error) {
struct bio *bio;
int ret;
- DECLARE_COMPLETION_ONSTACK(complete);
if (!page_bad->dev->bdev) {
printk_ratelimited(KERN_WARNING
return -EIO;
bio->bi_bdev = page_bad->dev->bdev;
bio->bi_sector = page_bad->physical >> 9;
- bio->bi_end_io = scrub_complete_bio_end_io;
- bio->bi_private = &complete;
ret = bio_add_page(bio, page_good->page, PAGE_SIZE, 0);
if (PAGE_SIZE != ret) {
bio_put(bio);
return -EIO;
}
- btrfsic_submit_bio(WRITE, bio);
- /* this will also unplug the queue */
- wait_for_completion(&complete);
- if (!bio_flagged(bio, BIO_UPTODATE)) {
+ if (btrfsic_submit_bio_wait(WRITE, bio)) {
btrfs_dev_stat_inc_and_print(page_bad->dev,
BTRFS_DEV_STAT_WRITE_ERRS);
btrfs_dev_replace_stats_inc(
struct bio *bio;
struct btrfs_device *dev;
int ret;
- DECLARE_COMPLETION_ONSTACK(compl);
dev = sctx->wr_ctx.tgtdev;
if (!dev)
spin_unlock(&sctx->stat_lock);
return -ENOMEM;
}
- bio->bi_private = &compl;
- bio->bi_end_io = scrub_complete_bio_end_io;
bio->bi_size = 0;
bio->bi_sector = physical_for_dev_replace >> 9;
bio->bi_bdev = dev->bdev;
btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_WRITE_ERRS);
return -EIO;
}
- btrfsic_submit_bio(WRITE_SYNC, bio);
- wait_for_completion(&compl);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+ if (btrfsic_submit_bio_wait(WRITE_SYNC, bio))
goto leave_with_eio;
bio_put(bio);
}
if (!access_ok(VERIFY_READ, arg->clone_sources,
- sizeof(*arg->clone_sources *
- arg->clone_sources_count))) {
+ sizeof(*arg->clone_sources) *
+ arg->clone_sources_count)) {
ret = -EFAULT;
goto out;
}
} else {
printk(KERN_INFO "btrfs: setting nodatacow\n");
}
- info->compress_type = BTRFS_COMPRESS_NONE;
btrfs_clear_opt(info->mount_opt, COMPRESS);
btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
btrfs_set_opt(info->mount_opt, NODATACOW);
btrfs_set_fs_incompat(info, COMPRESS_LZO);
} else if (strncmp(args[0].from, "no", 2) == 0) {
compress_type = "no";
- info->compress_type = BTRFS_COMPRESS_NONE;
btrfs_clear_opt(info->mount_opt, COMPRESS);
btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
compress_force = false;
btrfs_set_opt(info->mount_opt, FORCE_COMPRESS);
pr_info("btrfs: force %s compression\n",
compress_type);
- } else
+ } else if (btrfs_test_opt(root, COMPRESS)) {
pr_info("btrfs: use %s compression\n",
compress_type);
+ }
break;
case Opt_ssd:
printk(KERN_INFO "btrfs: use ssd allocation scheme\n");
* We've got freeze protection passed with the transaction.
* Tell lockdep about it.
*/
- if (ac->newtrans->type < TRANS_JOIN_NOLOCK)
+ if (ac->newtrans->type & __TRANS_FREEZABLE)
rwsem_acquire_read(
&ac->root->fs_info->sb->s_writers.lock_map[SB_FREEZE_FS-1],
0, 1, _THIS_IP_);
* Tell lockdep we've released the freeze rwsem, since the
* async commit thread will be the one to unlock it.
*/
- if (trans->type < TRANS_JOIN_NOLOCK)
+ if (ac->newtrans->type & __TRANS_FREEZABLE)
rwsem_release(
&root->fs_info->sb->s_writers.lock_map[SB_FREEZE_FS-1],
1, _THIS_IP_);
ret = btrfs_truncate_inode_items(trans, log,
inode, 0, 0);
} else if (test_and_clear_bit(BTRFS_INODE_COPY_EVERYTHING,
- &BTRFS_I(inode)->runtime_flags)) {
+ &BTRFS_I(inode)->runtime_flags) ||
+ inode_only == LOG_INODE_EXISTS) {
if (inode_only == LOG_INODE_ALL)
fast_search = true;
max_key.type = BTRFS_XATTR_ITEM_KEY;
err = ret;
goto out_unlock;
}
- } else {
+ } else if (inode_only == LOG_INODE_ALL) {
struct extent_map_tree *tree = &BTRFS_I(inode)->extent_tree;
struct extent_map *em, *n;
{
struct bio_vec *prev;
struct request_queue *q = bdev_get_queue(bdev);
- unsigned short max_sectors = queue_max_sectors(q);
+ unsigned int max_sectors = queue_max_sectors(q);
struct bvec_merge_data bvm = {
.bi_bdev = bdev,
.bi_sector = sector,
if (err < 0) {
SetPageError(page);
goto out;
- } else if (err < PAGE_CACHE_SIZE) {
- /* zero fill remainder of page */
- zero_user_segment(page, err, PAGE_CACHE_SIZE);
+ } else {
+ if (err < PAGE_CACHE_SIZE) {
+ /* zero fill remainder of page */
+ zero_user_segment(page, err, PAGE_CACHE_SIZE);
+ } else {
+ flush_dcache_page(page);
+ }
}
SetPageUptodate(page);
- if (err == 0)
+ if (err >= 0)
ceph_readpage_to_fscache(inode, page);
out:
{
struct ceph_inode_info *ci = ceph_inode(inode);
+ if (!PageFsCache(page))
+ return;
+
fscache_wait_on_page_write(ci->fscache, page);
fscache_uncache_page(ci->fscache, page);
}
* caller should hold i_ceph_lock.
* caller will not hold session s_mutex if called from destroy_inode.
*/
-void __ceph_remove_cap(struct ceph_cap *cap)
+void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)
{
struct ceph_mds_session *session = cap->session;
struct ceph_inode_info *ci = cap->ci;
/* remove from session list */
spin_lock(&session->s_cap_lock);
+ /*
+ * s_cap_reconnect is protected by s_cap_lock. no one changes
+ * s_cap_gen while session is in the reconnect state.
+ */
+ if (queue_release &&
+ (!session->s_cap_reconnect ||
+ cap->cap_gen == session->s_cap_gen))
+ __queue_cap_release(session, ci->i_vino.ino, cap->cap_id,
+ cap->mseq, cap->issue_seq);
+
if (session->s_cap_iterator == cap) {
/* not yet, we are iterating over this very cap */
dout("__ceph_remove_cap delaying %p removal from session %p\n",
struct ceph_mds_cap_release *head;
struct ceph_mds_cap_item *item;
- spin_lock(&session->s_cap_lock);
BUG_ON(!session->s_num_cap_releases);
msg = list_first_entry(&session->s_cap_releases,
struct ceph_msg, list_head);
(int)CEPH_CAPS_PER_RELEASE,
(int)msg->front.iov_len);
}
- spin_unlock(&session->s_cap_lock);
}
/*
p = rb_first(&ci->i_caps);
while (p) {
struct ceph_cap *cap = rb_entry(p, struct ceph_cap, ci_node);
- struct ceph_mds_session *session = cap->session;
-
- __queue_cap_release(session, ceph_ino(inode), cap->cap_id,
- cap->mseq, cap->issue_seq);
p = rb_next(p);
- __ceph_remove_cap(cap);
+ __ceph_remove_cap(cap, true);
}
}
}
spin_unlock(&mdsc->cap_dirty_lock);
}
- __ceph_remove_cap(cap);
+ __ceph_remove_cap(cap, false);
}
/* else, we already released it */
if (!inode) {
dout(" i don't have ino %llx\n", vino.ino);
- if (op == CEPH_CAP_OP_IMPORT)
+ if (op == CEPH_CAP_OP_IMPORT) {
+ spin_lock(&session->s_cap_lock);
__queue_cap_release(session, vino.ino, cap_id,
mseq, seq);
+ spin_unlock(&session->s_cap_lock);
+ }
goto flush_cap_releases;
}
}
/* note next offset and last dentry name */
+ rinfo = &req->r_reply_info;
+ if (le32_to_cpu(rinfo->dir_dir->frag) != frag) {
+ frag = le32_to_cpu(rinfo->dir_dir->frag);
+ if (ceph_frag_is_leftmost(frag))
+ fi->next_offset = 2;
+ else
+ fi->next_offset = 0;
+ off = fi->next_offset;
+ }
fi->offset = fi->next_offset;
fi->last_readdir = req;
+ fi->frag = frag;
if (req->r_reply_info.dir_end) {
kfree(fi->last_name);
else
fi->next_offset = 0;
} else {
- rinfo = &req->r_reply_info;
err = note_last_dentry(fi,
rinfo->dir_dname[rinfo->dir_nr-1],
rinfo->dir_dname_len[rinfo->dir_nr-1]);
int issued = 0, implemented;
struct timespec mtime, atime, ctime;
u32 nsplits;
+ struct ceph_inode_frag *frag;
+ struct rb_node *rb_node;
struct ceph_buffer *xattr_blob = NULL;
int err = 0;
int queue_trunc = 0;
/* FIXME: move me up, if/when version reflects fragtree changes */
nsplits = le32_to_cpu(info->fragtree.nsplits);
mutex_lock(&ci->i_fragtree_mutex);
+ rb_node = rb_first(&ci->i_fragtree);
for (i = 0; i < nsplits; i++) {
u32 id = le32_to_cpu(info->fragtree.splits[i].frag);
- struct ceph_inode_frag *frag = __get_or_create_frag(ci, id);
-
- if (IS_ERR(frag))
- continue;
+ frag = NULL;
+ while (rb_node) {
+ frag = rb_entry(rb_node, struct ceph_inode_frag, node);
+ if (ceph_frag_compare(frag->frag, id) >= 0) {
+ if (frag->frag != id)
+ frag = NULL;
+ else
+ rb_node = rb_next(rb_node);
+ break;
+ }
+ rb_node = rb_next(rb_node);
+ rb_erase(&frag->node, &ci->i_fragtree);
+ kfree(frag);
+ frag = NULL;
+ }
+ if (!frag) {
+ frag = __get_or_create_frag(ci, id);
+ if (IS_ERR(frag))
+ continue;
+ }
frag->split_by = le32_to_cpu(info->fragtree.splits[i].by);
dout(" frag %x split by %d\n", frag->frag, frag->split_by);
}
+ while (rb_node) {
+ frag = rb_entry(rb_node, struct ceph_inode_frag, node);
+ rb_node = rb_next(rb_node);
+ rb_erase(&frag->node, &ci->i_fragtree);
+ kfree(frag);
+ }
mutex_unlock(&ci->i_fragtree_mutex);
/* were we issued a capability? */
struct ceph_mds_reply_inode *ininfo;
struct ceph_vino vino;
struct ceph_fs_client *fsc = ceph_sb_to_client(sb);
- int i = 0;
int err = 0;
dout("fill_trace %p is_dentry %d is_target %d\n", req,
}
}
+ if (rinfo->head->is_target) {
+ vino.ino = le64_to_cpu(rinfo->targeti.in->ino);
+ vino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
+
+ in = ceph_get_inode(sb, vino);
+ if (IS_ERR(in)) {
+ err = PTR_ERR(in);
+ goto done;
+ }
+ req->r_target_inode = in;
+
+ err = fill_inode(in, &rinfo->targeti, NULL,
+ session, req->r_request_started,
+ (le32_to_cpu(rinfo->head->result) == 0) ?
+ req->r_fmode : -1,
+ &req->r_caps_reservation);
+ if (err < 0) {
+ pr_err("fill_inode badness %p %llx.%llx\n",
+ in, ceph_vinop(in));
+ goto done;
+ }
+ }
+
/*
* ignore null lease/binding on snapdir ENOENT, or else we
* will have trouble splicing in the virtual snapdir later
ceph_dentry(req->r_old_dentry)->offset);
dn = req->r_old_dentry; /* use old_dentry */
- in = dn->d_inode;
}
/* null dentry? */
}
/* attach proper inode */
- ininfo = rinfo->targeti.in;
- vino.ino = le64_to_cpu(ininfo->ino);
- vino.snap = le64_to_cpu(ininfo->snapid);
- in = dn->d_inode;
- if (!in) {
- in = ceph_get_inode(sb, vino);
- if (IS_ERR(in)) {
- pr_err("fill_trace bad get_inode "
- "%llx.%llx\n", vino.ino, vino.snap);
- err = PTR_ERR(in);
- d_drop(dn);
- goto done;
- }
+ if (!dn->d_inode) {
+ ihold(in);
dn = splice_dentry(dn, in, &have_lease, true);
if (IS_ERR(dn)) {
err = PTR_ERR(dn);
goto done;
}
req->r_dentry = dn; /* may have spliced */
- ihold(in);
- } else if (ceph_ino(in) == vino.ino &&
- ceph_snap(in) == vino.snap) {
- ihold(in);
- } else {
+ } else if (dn->d_inode && dn->d_inode != in) {
dout(" %p links to %p %llx.%llx, not %llx.%llx\n",
- dn, in, ceph_ino(in), ceph_snap(in),
- vino.ino, vino.snap);
+ dn, dn->d_inode, ceph_vinop(dn->d_inode),
+ ceph_vinop(in));
have_lease = false;
- in = NULL;
}
if (have_lease)
update_dentry_lease(dn, rinfo->dlease, session,
req->r_request_started);
dout(" final dn %p\n", dn);
- i++;
- } else if ((req->r_op == CEPH_MDS_OP_LOOKUPSNAP ||
- req->r_op == CEPH_MDS_OP_MKSNAP) && !req->r_aborted) {
+ } else if (!req->r_aborted &&
+ (req->r_op == CEPH_MDS_OP_LOOKUPSNAP ||
+ req->r_op == CEPH_MDS_OP_MKSNAP)) {
struct dentry *dn = req->r_dentry;
/* fill out a snapdir LOOKUPSNAP dentry */
ininfo = rinfo->targeti.in;
vino.ino = le64_to_cpu(ininfo->ino);
vino.snap = le64_to_cpu(ininfo->snapid);
- in = ceph_get_inode(sb, vino);
- if (IS_ERR(in)) {
- pr_err("fill_inode get_inode badness %llx.%llx\n",
- vino.ino, vino.snap);
- err = PTR_ERR(in);
- d_delete(dn);
- goto done;
- }
dout(" linking snapped dir %p to dn %p\n", in, dn);
+ ihold(in);
dn = splice_dentry(dn, in, NULL, true);
if (IS_ERR(dn)) {
err = PTR_ERR(dn);
goto done;
}
req->r_dentry = dn; /* may have spliced */
- ihold(in);
- rinfo->head->is_dentry = 1; /* fool notrace handlers */
- }
-
- if (rinfo->head->is_target) {
- vino.ino = le64_to_cpu(rinfo->targeti.in->ino);
- vino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
-
- if (in == NULL || ceph_ino(in) != vino.ino ||
- ceph_snap(in) != vino.snap) {
- in = ceph_get_inode(sb, vino);
- if (IS_ERR(in)) {
- err = PTR_ERR(in);
- goto done;
- }
- }
- req->r_target_inode = in;
-
- err = fill_inode(in,
- &rinfo->targeti, NULL,
- session, req->r_request_started,
- (le32_to_cpu(rinfo->head->result) == 0) ?
- req->r_fmode : -1,
- &req->r_caps_reservation);
- if (err < 0) {
- pr_err("fill_inode badness %p %llx.%llx\n",
- in, ceph_vinop(in));
- goto done;
- }
}
-
done:
dout("fill_trace done err=%d\n", err);
return err;
struct qstr dname;
struct dentry *dn;
struct inode *in;
- int err = 0, i;
+ int err = 0, ret, i;
struct inode *snapdir = NULL;
struct ceph_mds_request_head *rhead = req->r_request->front.iov_base;
- u64 frag = le32_to_cpu(rhead->args.readdir.frag);
struct ceph_dentry_info *di;
+ u64 r_readdir_offset = req->r_readdir_offset;
+ u32 frag = le32_to_cpu(rhead->args.readdir.frag);
+
+ if (rinfo->dir_dir &&
+ le32_to_cpu(rinfo->dir_dir->frag) != frag) {
+ dout("readdir_prepopulate got new frag %x -> %x\n",
+ frag, le32_to_cpu(rinfo->dir_dir->frag));
+ frag = le32_to_cpu(rinfo->dir_dir->frag);
+ if (ceph_frag_is_leftmost(frag))
+ r_readdir_offset = 2;
+ else
+ r_readdir_offset = 0;
+ }
if (req->r_aborted)
return readdir_prepopulate_inodes_only(req, session);
ceph_fill_dirfrag(parent->d_inode, rinfo->dir_dir);
}
+ /* FIXME: release caps/leases if error occurs */
for (i = 0; i < rinfo->dir_nr; i++) {
struct ceph_vino vino;
err = -ENOMEM;
goto out;
}
- err = ceph_init_dentry(dn);
- if (err < 0) {
+ ret = ceph_init_dentry(dn);
+ if (ret < 0) {
dput(dn);
+ err = ret;
goto out;
}
} else if (dn->d_inode &&
spin_unlock(&parent->d_lock);
}
- di = dn->d_fsdata;
- di->offset = ceph_make_fpos(frag, i + req->r_readdir_offset);
-
/* inode */
if (dn->d_inode) {
in = dn->d_inode;
err = PTR_ERR(in);
goto out;
}
- dn = splice_dentry(dn, in, NULL, false);
- if (IS_ERR(dn))
- dn = NULL;
}
if (fill_inode(in, &rinfo->dir_in[i], NULL, session,
req->r_request_started, -1,
&req->r_caps_reservation) < 0) {
pr_err("fill_inode badness on %p\n", in);
+ if (!dn->d_inode)
+ iput(in);
+ d_drop(dn);
goto next_item;
}
- if (dn)
- update_dentry_lease(dn, rinfo->dir_dlease[i],
- req->r_session,
- req->r_request_started);
+
+ if (!dn->d_inode) {
+ dn = splice_dentry(dn, in, NULL, false);
+ if (IS_ERR(dn)) {
+ err = PTR_ERR(dn);
+ dn = NULL;
+ goto next_item;
+ }
+ }
+
+ di = dn->d_fsdata;
+ di->offset = ceph_make_fpos(frag, i + r_readdir_offset);
+
+ update_dentry_lease(dn, rinfo->dir_dlease[i],
+ req->r_session,
+ req->r_request_started);
next_item:
if (dn)
dput(dn);
}
- req->r_did_prepopulate = true;
+ if (err == 0)
+ req->r_did_prepopulate = true;
out:
if (snapdir) {
*/
struct ceph_reconnect_state {
+ int nr_caps;
struct ceph_pagelist *pagelist;
bool flock;
};
INIT_LIST_HEAD(&s->s_waiting);
INIT_LIST_HEAD(&s->s_unsafe);
s->s_num_cap_releases = 0;
+ s->s_cap_reconnect = 0;
s->s_cap_iterator = NULL;
INIT_LIST_HEAD(&s->s_cap_releases);
INIT_LIST_HEAD(&s->s_cap_releases_done);
req->r_unsafe_dir = NULL;
}
+ complete_all(&req->r_safe_completion);
+
ceph_mdsc_put_request(req);
}
dout("removing cap %p, ci is %p, inode is %p\n",
cap, ci, &ci->vfs_inode);
spin_lock(&ci->i_ceph_lock);
- __ceph_remove_cap(cap);
+ __ceph_remove_cap(cap, false);
if (!__ceph_is_any_real_caps(ci)) {
struct ceph_mds_client *mdsc =
ceph_sb_to_client(inode->i_sb)->mdsc;
session->s_trim_caps--;
if (oissued) {
/* we aren't the only cap.. just remove us */
- __queue_cap_release(session, ceph_ino(inode), cap->cap_id,
- cap->mseq, cap->issue_seq);
- __ceph_remove_cap(cap);
+ __ceph_remove_cap(cap, true);
} else {
/* try to drop referring dentries */
spin_unlock(&ci->i_ceph_lock);
unsigned num;
dout("discard_cap_releases mds%d\n", session->s_mds);
- spin_lock(&session->s_cap_lock);
/* zero out the in-progress message */
msg = list_first_entry(&session->s_cap_releases,
msg->front.iov_len = sizeof(*head);
list_add(&msg->list_head, &session->s_cap_releases);
}
-
- spin_unlock(&session->s_cap_lock);
}
/*
int mds = -1;
int err = -EAGAIN;
- if (req->r_err || req->r_got_result)
+ if (req->r_err || req->r_got_result) {
+ if (req->r_aborted)
+ __unregister_request(mdsc, req);
goto out;
+ }
if (req->r_timeout &&
time_after_eq(jiffies, req->r_started + req->r_timeout)) {
if (head->safe) {
req->r_got_safe = true;
__unregister_request(mdsc, req);
- complete_all(&req->r_safe_completion);
if (req->r_got_unsafe) {
/*
err = ceph_fill_trace(mdsc->fsc->sb, req, req->r_session);
if (err == 0) {
if (result == 0 && (req->r_op == CEPH_MDS_OP_READDIR ||
- req->r_op == CEPH_MDS_OP_LSSNAP) &&
- rinfo->dir_nr)
+ req->r_op == CEPH_MDS_OP_LSSNAP))
ceph_readdir_prepopulate(req, req->r_session);
ceph_unreserve_caps(mdsc, &req->r_caps_reservation);
}
cap->seq = 0; /* reset cap seq */
cap->issue_seq = 0; /* and issue_seq */
cap->mseq = 0; /* and migrate_seq */
+ cap->cap_gen = cap->session->s_cap_gen;
if (recon_state->flock) {
rec.v2.cap_id = cpu_to_le64(cap->cap_id);
} else {
err = ceph_pagelist_append(pagelist, &rec, reclen);
}
+
+ recon_state->nr_caps++;
out_free:
kfree(path);
out_dput:
struct rb_node *p;
int mds = session->s_mds;
int err = -ENOMEM;
+ int s_nr_caps;
struct ceph_pagelist *pagelist;
struct ceph_reconnect_state recon_state;
dout("session %p state %s\n", session,
session_state_name(session->s_state));
+ spin_lock(&session->s_gen_ttl_lock);
+ session->s_cap_gen++;
+ spin_unlock(&session->s_gen_ttl_lock);
+
+ spin_lock(&session->s_cap_lock);
+ /*
+ * notify __ceph_remove_cap() that we are composing cap reconnect.
+ * If a cap gets released before being added to the cap reconnect,
+ * __ceph_remove_cap() should skip queuing the cap release.
+ */
+ session->s_cap_reconnect = 1;
/* drop old cap expires; we're about to reestablish that state */
discard_cap_releases(mdsc, session);
+ spin_unlock(&session->s_cap_lock);
/* traverse this session's caps */
- err = ceph_pagelist_encode_32(pagelist, session->s_nr_caps);
+ s_nr_caps = session->s_nr_caps;
+ err = ceph_pagelist_encode_32(pagelist, s_nr_caps);
if (err)
goto fail;
+ recon_state.nr_caps = 0;
recon_state.pagelist = pagelist;
recon_state.flock = session->s_con.peer_features & CEPH_FEATURE_FLOCK;
err = iterate_session_caps(session, encode_caps_cb, &recon_state);
if (err < 0)
goto fail;
+ spin_lock(&session->s_cap_lock);
+ session->s_cap_reconnect = 0;
+ spin_unlock(&session->s_cap_lock);
+
/*
* snaprealms. we provide mds with the ino, seq (version), and
* parent for all of our realms. If the mds has any newer info,
if (recon_state.flock)
reply->hdr.version = cpu_to_le16(2);
- if (pagelist->length) {
- /* set up outbound data if we have any */
- reply->hdr.data_len = cpu_to_le32(pagelist->length);
- ceph_msg_data_add_pagelist(reply, pagelist);
+
+ /* raced with cap release? */
+ if (s_nr_caps != recon_state.nr_caps) {
+ struct page *page = list_first_entry(&pagelist->head,
+ struct page, lru);
+ __le32 *addr = kmap_atomic(page);
+ *addr = cpu_to_le32(recon_state.nr_caps);
+ kunmap_atomic(addr);
}
+
+ reply->hdr.data_len = cpu_to_le32(pagelist->length);
+ ceph_msg_data_add_pagelist(reply, pagelist);
ceph_con_send(&session->s_con, reply);
mutex_unlock(&session->s_mutex);
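The reconnect message encodes s_nr_caps before the caps are iterated, so if caps are released during the walk, the already-encoded 32-bit count at the head of the pagelist is patched in place via kmap_atomic(). A runnable userspace analogue of that fix-up — plain C, illustration only; the kernel additionally byte-swaps with cpu_to_le32():

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	unsigned char buf[64];
	uint32_t provisional = 8, actual = 5, out;

	/* count encoded first, before the entries are walked */
	memcpy(buf, &provisional, sizeof(provisional));

	/* ... entries appended; three of them raced away ... */

	/* overwrite the count in place once the real number is known */
	memcpy(buf, &actual, sizeof(actual));

	memcpy(&out, buf, sizeof(out));
	assert(out == 5);
	return 0;
}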
struct list_head s_caps; /* all caps issued by this session */
int s_nr_caps, s_trim_caps;
int s_num_cap_releases;
+ int s_cap_reconnect;
struct list_head s_cap_releases; /* waiting cap_release messages */
struct list_head s_cap_releases_done; /* ready to send */
struct ceph_cap *s_cap_iterator;
int fmode, unsigned issued, unsigned wanted,
unsigned cap, unsigned seq, u64 realmino, int flags,
struct ceph_cap_reservation *caps_reservation);
-extern void __ceph_remove_cap(struct ceph_cap *cap);
-static inline void ceph_remove_cap(struct ceph_cap *cap)
-{
- spin_lock(&cap->ci->i_ceph_lock);
- __ceph_remove_cap(cap);
- spin_unlock(&cap->ci->i_ceph_lock);
-}
+extern void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release);
extern void ceph_put_cap(struct ceph_mds_client *mdsc,
struct ceph_cap *cap);
int (*clone_range)(const unsigned int, struct cifsFileInfo *src_file,
struct cifsFileInfo *target_file, u64 src_off, u64 len,
u64 dest_off);
+ int (*validate_negotiate)(const unsigned int, struct cifs_tcon *);
};
struct smb_version_values {
const int netfid, __u64 *pExtAttrBits, __u64 *pMask);
extern void cifs_autodisable_serverino(struct cifs_sb_info *cifs_sb);
extern bool CIFSCouldBeMFSymlink(const struct cifs_fattr *fattr);
-extern int CIFSCheckMFSymlink(struct cifs_fattr *fattr,
- const unsigned char *path,
- struct cifs_sb_info *cifs_sb, unsigned int xid);
+extern int CIFSCheckMFSymlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ struct cifs_fattr *fattr,
+ const unsigned char *path);
extern int mdfour(unsigned char *, unsigned char *, int);
extern int E_md4hash(const unsigned char *passwd, unsigned char *p16,
const struct nls_table *codepage);
rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
(struct smb_hdr *) pSMBr, &bytes_returned, 0);
if (rc) {
- cifs_dbg(FYI, "Send error in QPathInfo = %d\n", rc);
+ cifs_dbg(FYI, "Send error in QFileInfo = %d", rc);
} else { /* decode response */
rc = validate_t2((struct smb_t2_rsp *)pSMBr);
rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
(struct smb_hdr *) pSMBr, &bytes_returned, 0);
if (rc) {
- cifs_dbg(FYI, "Send error in QPathInfo = %d\n", rc);
+ cifs_dbg(FYI, "Send error in UnixQFileInfo = %d", rc);
} else { /* decode response */
rc = validate_t2((struct smb_t2_rsp *)pSMBr);
rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
(struct smb_hdr *) pSMBr, &bytes_returned, 0);
if (rc) {
- cifs_dbg(FYI, "Send error in QPathInfo = %d\n", rc);
+ cifs_dbg(FYI, "Send error in UnixQPathInfo = %d", rc);
} else { /* decode response */
rc = validate_t2((struct smb_t2_rsp *)pSMBr);
static int
cifs_do_create(struct inode *inode, struct dentry *direntry, unsigned int xid,
struct tcon_link *tlink, unsigned oflags, umode_t mode,
- __u32 *oplock, struct cifs_fid *fid, int *created)
+ __u32 *oplock, struct cifs_fid *fid)
{
int rc = -ENOENT;
int create_options = CREATE_NOT_DIR;
.device = 0,
};
- *created |= FILE_CREATED;
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {
args.uid = current_fsuid();
if (inode->i_mode & S_ISGID)
cifs_add_pending_open(&fid, tlink, &open);
rc = cifs_do_create(inode, direntry, xid, tlink, oflags, mode,
- &oplock, &fid, opened);
+ &oplock, &fid);
if (rc) {
cifs_del_pending_open(&open);
goto out;
}
+ if ((oflags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL))
+ *opened |= FILE_CREATED;
+
rc = finish_open(file, direntry, generic_file_open, opened);
if (rc) {
if (server->ops->close)
struct TCP_Server_Info *server;
struct cifs_fid fid;
__u32 oplock;
- int created = FILE_CREATED;
cifs_dbg(FYI, "cifs_create parent inode = 0x%p name is: %s and dentry = 0x%p\n",
inode, direntry->d_name.name, direntry);
server->ops->new_lease_key(&fid);
rc = cifs_do_create(inode, direntry, xid, tlink, oflags, mode,
- &oplock, &fid, &created);
+ &oplock, &fid);
if (!rc && server->ops->close)
server->ops->close(xid, tcon, &fid);
/* check for Minshall+French symlinks */
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {
- int tmprc = CIFSCheckMFSymlink(&fattr, full_path, cifs_sb, xid);
+ int tmprc = CIFSCheckMFSymlink(xid, tcon, cifs_sb, &fattr,
+ full_path);
if (tmprc)
cifs_dbg(FYI, "CIFSCheckMFSymlink: %d\n", tmprc);
}
/* check for Minshall+French symlinks */
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {
- tmprc = CIFSCheckMFSymlink(&fattr, full_path, cifs_sb, xid);
+ tmprc = CIFSCheckMFSymlink(xid, tcon, cifs_sb, &fattr,
+ full_path);
if (tmprc)
cifs_dbg(FYI, "CIFSCheckMFSymlink: %d\n", tmprc);
}
#include <linux/mount.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
-#include <linux/btrfs.h>
#include "cifspdu.h"
#include "cifsglob.h"
#include "cifsproto.h"
#include "cifs_debug.h"
#include "cifsfs.h"
+#define CIFS_IOCTL_MAGIC 0xCF
+#define CIFS_IOC_COPYCHUNK_FILE _IOW(CIFS_IOCTL_MAGIC, 3, int)
+
static long cifs_ioctl_clone(unsigned int xid, struct file *dst_file,
unsigned long srcfd, u64 off, u64 len, u64 destoff)
{
cifs_dbg(FYI, "set compress flag rc %d\n", rc);
}
break;
- case BTRFS_IOC_CLONE:
+ case CIFS_IOC_COPYCHUNK_FILE:
rc = cifs_ioctl_clone(xid, filep, arg, 0, 0, 0);
break;
default:
int
-CIFSCheckMFSymlink(struct cifs_fattr *fattr,
- const unsigned char *path,
- struct cifs_sb_info *cifs_sb, unsigned int xid)
+CIFSCheckMFSymlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb, struct cifs_fattr *fattr,
+ const unsigned char *path)
{
- int rc = 0;
+ int rc;
u8 *buf = NULL;
unsigned int link_len = 0;
unsigned int bytes_read = 0;
- struct cifs_tcon *ptcon;
if (!CIFSCouldBeMFSymlink(fattr))
/* it's not a symlink */
return 0;
buf = kmalloc(CIFS_MF_SYMLINK_FILE_SIZE, GFP_KERNEL);
- if (!buf) {
- rc = -ENOMEM;
- goto out;
- }
+ if (!buf)
+ return -ENOMEM;
- ptcon = tlink_tcon(cifs_sb_tlink(cifs_sb));
- if ((ptcon->ses) && (ptcon->ses->server->ops->query_mf_symlink))
- rc = ptcon->ses->server->ops->query_mf_symlink(path, buf,
- &bytes_read, cifs_sb, xid);
+ if (tcon->ses->server->ops->query_mf_symlink)
+ rc = tcon->ses->server->ops->query_mf_symlink(path, buf,
+ &bytes_read, cifs_sb, xid);
else
- goto out;
+ rc = -ENOSYS;
- if (rc != 0)
+ if (rc)
goto out;
if (bytes_read == 0) /* not a symlink */
int rc;
unsigned int ret_data_len;
struct copychunk_ioctl *pcchunk;
- char *retbuf = NULL;
+ struct copychunk_ioctl_rsp *retbuf = NULL;
+ struct cifs_tcon *tcon;
+ int chunks_copied = 0;
+ bool chunk_sizes_updated = false;
pcchunk = kmalloc(sizeof(struct copychunk_ioctl), GFP_KERNEL);
/* Note: request_res_key sets res_key null only if rc !=0 */
if (rc)
- return rc;
+ goto cchunk_out;
/* For now array only one chunk long, will make more flexible later */
pcchunk->ChunkCount = __constant_cpu_to_le32(1);
pcchunk->Reserved = 0;
- pcchunk->SourceOffset = cpu_to_le64(src_off);
- pcchunk->TargetOffset = cpu_to_le64(dest_off);
- pcchunk->Length = cpu_to_le32(len);
pcchunk->Reserved2 = 0;
- /* Request that server copy to target from src file identified by key */
- rc = SMB2_ioctl(xid, tlink_tcon(trgtfile->tlink),
- trgtfile->fid.persistent_fid,
- trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,
- true /* is_fsctl */, (char *)pcchunk,
- sizeof(struct copychunk_ioctl), &retbuf, &ret_data_len);
+ tcon = tlink_tcon(trgtfile->tlink);
- /* BB need to special case rc = EINVAL to alter chunk size */
+ while (len > 0) {
+ pcchunk->SourceOffset = cpu_to_le64(src_off);
+ pcchunk->TargetOffset = cpu_to_le64(dest_off);
+ pcchunk->Length =
+ cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));
- cifs_dbg(FYI, "rc %d data length out %d\n", rc, ret_data_len);
+ /* Request server copy to target from src identified by key */
+ rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
+ trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,
+ true /* is_fsctl */, (char *)pcchunk,
+ sizeof(struct copychunk_ioctl), (char **)&retbuf,
+ &ret_data_len);
+ if (rc == 0) {
+ if (ret_data_len !=
+ sizeof(struct copychunk_ioctl_rsp)) {
+ cifs_dbg(VFS, "invalid cchunk response size\n");
+ rc = -EIO;
+ goto cchunk_out;
+ }
+ if (retbuf->TotalBytesWritten == 0) {
+ cifs_dbg(FYI, "no bytes copied\n");
+ rc = -EIO;
+ goto cchunk_out;
+ }
+ /*
+ * Check if server claimed to write more than we asked
+ */
+ if (le32_to_cpu(retbuf->TotalBytesWritten) >
+ le32_to_cpu(pcchunk->Length)) {
+ cifs_dbg(VFS, "invalid copy chunk response\n");
+ rc = -EIO;
+ goto cchunk_out;
+ }
+ if (le32_to_cpu(retbuf->ChunksWritten) != 1) {
+ cifs_dbg(VFS, "invalid num chunks written\n");
+ rc = -EIO;
+ goto cchunk_out;
+ }
+ chunks_copied++;
+
+ src_off += le32_to_cpu(retbuf->TotalBytesWritten);
+ dest_off += le32_to_cpu(retbuf->TotalBytesWritten);
+ len -= le32_to_cpu(retbuf->TotalBytesWritten);
+
+ cifs_dbg(FYI, "Chunks %d PartialChunk %d Total %d\n",
+ le32_to_cpu(retbuf->ChunksWritten),
+ le32_to_cpu(retbuf->ChunkBytesWritten),
+ le32_to_cpu(retbuf->TotalBytesWritten));
+ } else if (rc == -EINVAL) {
+ if (ret_data_len != sizeof(struct copychunk_ioctl_rsp))
+ goto cchunk_out;
+
+ cifs_dbg(FYI, "MaxChunks %d BytesChunk %d MaxCopy %d\n",
+ le32_to_cpu(retbuf->ChunksWritten),
+ le32_to_cpu(retbuf->ChunkBytesWritten),
+ le32_to_cpu(retbuf->TotalBytesWritten));
+
+ /*
+ * Check if this is the first request using these sizes
+ * (i.e. whether the copy already succeeded once with the
+ * original sizes, or whether the server already gave us
+ * different sizes on a previous request that we applied).
+ * If it is not, the server has no good reason to be
+ * returning an error now, so give up.
+ */
+ if ((chunks_copied != 0) || chunk_sizes_updated)
+ goto cchunk_out;
+
+ /* Check that server is not asking us to grow size */
+ if (le32_to_cpu(retbuf->ChunkBytesWritten) <
+ tcon->max_bytes_chunk)
+ tcon->max_bytes_chunk =
+ le32_to_cpu(retbuf->ChunkBytesWritten);
+ else
+ goto cchunk_out; /* server gave us bogus size */
+
+ /* No need to change MaxChunks since already set to 1 */
+ chunk_sizes_updated = true;
+ }
+ }
+cchunk_out:
kfree(pcchunk);
return rc;
}
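The rewritten loop copies in chunks capped by tcon->max_bytes_chunk and accepts exactly one downward renegotiation of that cap when the server rejects the first attempt with EINVAL. A compilable sketch of the same retry policy in plain C, where do_copy_chunk() is a hypothetical stand-in for the SMB2_ioctl() copychunk call:

#include <errno.h>
#include <stdint.h>

/* Hypothetical transport call standing in for the SMB2 copychunk ioctl:
 * copies up to 'want' bytes, reports bytes written and the server's
 * advertised maximum chunk size. */
extern int do_copy_chunk(uint64_t src_off, uint64_t dst_off, uint32_t want,
			 uint32_t *written, uint32_t *server_max);

int copy_range(uint64_t src_off, uint64_t dst_off, uint64_t len,
	       uint32_t max_chunk)
{
	int copied_ok = 0, sizes_updated = 0;

	while (len > 0) {
		uint32_t want = len < max_chunk ? (uint32_t)len : max_chunk;
		uint32_t written, server_max;
		int rc = do_copy_chunk(src_off, dst_off, want,
				       &written, &server_max);

		if (rc == 0) {
			if (written == 0 || written > want)
				return -EIO;	/* bogus server response */
			src_off += written;
			dst_off += written;
			len -= written;
			copied_ok = 1;
		} else if (rc == -EINVAL && !copied_ok && !sizes_updated &&
			   server_max < max_chunk) {
			max_chunk = server_max;	/* accept one shrink, retry */
			sizes_updated = 1;
		} else {
			return rc;	/* anything else: give up */
		}
	}
	return 0;
}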
.create_lease_buf = smb3_create_lease_buf,
.parse_lease_buf = smb3_parse_lease_buf,
.clone_range = smb2_clone_range,
+ .validate_negotiate = smb3_validate_negotiate,
};
struct smb_version_values smb20_values = {
return rc;
}
+int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+{
+ int rc = 0;
+ struct validate_negotiate_info_req vneg_inbuf;
+ struct validate_negotiate_info_rsp *pneg_rsp;
+ u32 rsplen;
+
+ cifs_dbg(FYI, "validate negotiate\n");
+
+ /*
+ * validation ioctl must be signed, so no point sending this if we
+ * cannot sign it. We could eventually change this to selectively
+ * sign just this, the first and only signed request on a connection.
+ * This is good enough for now since a user who wants better security
+ * would also enable signing on the mount. Having validation of
+ * negotiate info for signed connections helps reduce attack vectors
+ */
+ if (tcon->ses->server->sign == false)
+ return 0; /* validation requires signing */
+
+ vneg_inbuf.Capabilities =
+ cpu_to_le32(tcon->ses->server->vals->req_capabilities);
+ memcpy(vneg_inbuf.Guid, cifs_client_guid, SMB2_CLIENT_GUID_SIZE);
+
+ if (tcon->ses->sign)
+ vneg_inbuf.SecurityMode =
+ cpu_to_le16(SMB2_NEGOTIATE_SIGNING_REQUIRED);
+ else if (global_secflags & CIFSSEC_MAY_SIGN)
+ vneg_inbuf.SecurityMode =
+ cpu_to_le16(SMB2_NEGOTIATE_SIGNING_ENABLED);
+ else
+ vneg_inbuf.SecurityMode = 0;
+
+ vneg_inbuf.DialectCount = cpu_to_le16(1);
+ vneg_inbuf.Dialects[0] =
+ cpu_to_le16(tcon->ses->server->vals->protocol_id);
+
+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
+ (char *)&vneg_inbuf, sizeof(struct validate_negotiate_info_req),
+ (char **)&pneg_rsp, &rsplen);
+
+ if (rc != 0) {
+ cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
+ return -EIO;
+ }
+
+ if (rsplen != sizeof(struct validate_negotiate_info_rsp)) {
+ cifs_dbg(VFS, "invalid size of protocol negotiate response\n");
+ return -EIO;
+ }
+
+ /* check validate negotiate info response matches what we got earlier */
+ if (pneg_rsp->Dialect !=
+ cpu_to_le16(tcon->ses->server->vals->protocol_id))
+ goto vneg_out;
+
+ if (pneg_rsp->SecurityMode != cpu_to_le16(tcon->ses->server->sec_mode))
+ goto vneg_out;
+
+ /* do not validate server guid because not saved at negprot time yet */
+
+ if ((le32_to_cpu(pneg_rsp->Capabilities) | SMB2_NT_FIND |
+ SMB2_LARGE_FILES) != tcon->ses->server->capabilities)
+ goto vneg_out;
+
+ /* validate negotiate successful */
+ cifs_dbg(FYI, "validate negotiate info successful\n");
+ return 0;
+
+vneg_out:
+ cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n");
+ return -EIO;
+}
+
int
SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
const struct nls_table *nls_cp)
((tcon->share_flags & SHI1005_FLAGS_DFS) == 0))
cifs_dbg(VFS, "DFS capability contradicts DFS flag\n");
init_copy_chunk_defaults(tcon);
+ if (tcon->ses->server->ops->validate_negotiate)
+ rc = tcon->ses->server->ops->validate_negotiate(xid, tcon);
tcon_exit:
free_rsp_buf(resp_buftype, rsp);
kfree(unc_path);
rc = SendReceive2(xid, ses, iov, num_iovecs, &resp_buftype, 0);
rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base;
- if (rc != 0) {
+ if ((rc != 0) && (rc != -EINVAL)) {
if (tcon)
cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
goto ioctl_exit;
+ } else if (rc == -EINVAL) {
+ if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) &&
+ (opcode != FSCTL_SRV_COPYCHUNK)) {
+ if (tcon)
+ cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
+ goto ioctl_exit;
+ }
}
/* check if caller wants to look at return data or just return rc */
rc = SendReceive2(xid, ses, iov, num, &resp_buftype, 0);
rsp = (struct smb2_set_info_rsp *)iov[0].iov_base;
- if (rc != 0) {
+ if (rc != 0)
cifs_stats_fail_inc(tcon, SMB2_SET_INFO_HE);
- goto out;
- }
-out:
+
free_rsp_buf(resp_buftype, rsp);
kfree(iov);
return rc;
__le32 TotalBytesWritten;
} __packed;
-/* Response and Request are the same format */
-struct validate_negotiate_info {
+struct validate_negotiate_info_req {
__le32 Capabilities;
__u8 Guid[SMB2_CLIENT_GUID_SIZE];
__le16 SecurityMode;
__le16 DialectCount;
- __le16 Dialect[1];
+ __le16 Dialects[1]; /* dialect (someday maybe list) client asked for */
+} __packed;
+
+struct validate_negotiate_info_rsp {
+ __le32 Capabilities;
+ __u8 Guid[SMB2_CLIENT_GUID_SIZE];
+ __le16 SecurityMode;
+ __le16 Dialect; /* Dialect in use for the connection */
} __packed;
#define RSS_CAPABLE 0x00000001
struct smb2_lock_element *buf);
extern int SMB2_lease_break(const unsigned int xid, struct cifs_tcon *tcon,
__u8 *lease_key, const __le32 lease_state);
+extern int smb3_validate_negotiate(const unsigned int, struct cifs_tcon *);
#endif /* _SMB2PROTO_H */
#define FSCTL_LMR_REQUEST_RESILIENCY 0x001401D4 /* BB add struct */
#define FSCTL_LMR_GET_LINK_TRACK_INF 0x001400E8 /* BB add struct */
#define FSCTL_LMR_SET_LINK_TRACK_INF 0x001400EC /* BB add struct */
-#define FSCTL_VALIDATE_NEGOTIATE_INFO 0x00140204 /* BB add struct */
+#define FSCTL_VALIDATE_NEGOTIATE_INFO 0x00140204
/* Perform server-side data movement */
#define FSCTL_SRV_COPYCHUNK 0x001440F2
#define FSCTL_SRV_COPYCHUNK_WRITE 0x001480F2
struct configfs_dirent *sd = dentry->d_fsdata;
if (sd) {
- BUG_ON(sd->s_dentry != dentry);
/* Coordinate with configfs_readdir */
spin_lock(&configfs_dirent_lock);
- sd->s_dentry = NULL;
+ /* Coordinate with configfs_attach_attr(), which will increase
+ * sd->s_count and point sd->s_dentry at the newly allocated
+ * dentry. Only set sd->s_dentry to NULL when this dentry is
+ * the only sd owner; otherwise configfs_d_iput() may run just
+ * after configfs_attach_attr() and clear sd->s_dentry even
+ * though it is still in use.
+ */
+ if (atomic_read(&sd->s_count) <= 2)
+ sd->s_dentry = NULL;
+
spin_unlock(&configfs_dirent_lock);
configfs_put(sd);
}
iput(inode);
}
-/*
- * We _must_ delete our dentries on last dput, as the chain-to-parent
- * behavior is required to clear the parents of default_groups.
- */
-static int configfs_d_delete(const struct dentry *dentry)
-{
- return 1;
-}
-
const struct dentry_operations configfs_dentry_ops = {
.d_iput = configfs_d_iput,
- /* simple_delete_dentry() isn't exported */
- .d_delete = configfs_d_delete,
+ .d_delete = always_delete_dentry,
};
#ifdef CONFIG_LOCKDEP
struct configfs_attribute * attr = sd->s_element;
int error;
+ spin_lock(&configfs_dirent_lock);
dentry->d_fsdata = configfs_get(sd);
sd->s_dentry = dentry;
+ spin_unlock(&configfs_dirent_lock);
+
error = configfs_create(dentry, (attr->ca_mode & S_IALLUGO) | S_IFREG,
configfs_init_file);
if (error) {
while (nr) {
if (dump_interrupted())
return 0;
- n = vfs_write(file, addr, nr, &pos);
+ n = __kernel_write(file, addr, nr, &pos);
if (n <= 0)
return 0;
file->f_pos = pos;
{
unsigned mod = cprm->written & (align - 1);
if (align & (align - 1))
- return -EINVAL;
- return mod ? dump_skip(cprm, align - mod) : 0;
+ return 0;
+ return mod ? dump_skip(cprm, align - mod) : 1;
}
EXPORT_SYMBOL(dump_align);
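dump_align() now follows the same convention as dump_write()/dump_skip(): zero means failure, non-zero success, and a non-power-of-two alignment is rejected outright instead of returning -EINVAL. The padding math itself is the usual power-of-two trick, shown in a small runnable example (plain C, illustration only):

#include <assert.h>

int main(void)
{
	unsigned align = 8, written = 13;
	unsigned mod = written & (align - 1);	/* bytes past the boundary */

	assert((align & (align - 1)) == 0);	/* must be a power of two */
	assert(mod == 5);
	/* skipping align - mod bytes restores alignment */
	assert((written + (align - mod)) % align == 0);
	return 0;
}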
static struct kmem_cache *dentry_cache __read_mostly;
-/**
- * read_seqbegin_or_lock - begin a sequence number check or locking block
- * @lock: sequence lock
- * @seq : sequence number to be checked
- *
- * First try it once optimistically without taking the lock. If that fails,
- * take the lock. The sequence number is also used as a marker for deciding
- * whether to be a reader (even) or writer (odd).
- * N.B. seq must be initialized to an even number to begin with.
- */
-static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
-{
- if (!(*seq & 1)) /* Even */
- *seq = read_seqbegin(lock);
- else /* Odd */
- read_seqlock_excl(lock);
-}
-
-static inline int need_seqretry(seqlock_t *lock, int seq)
-{
- return !(seq & 1) && read_seqretry(lock, seq);
-}
-
-static inline void done_seqretry(seqlock_t *lock, int seq)
-{
- if (seq & 1)
- read_sequnlock_excl(lock);
-}
-
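The three helpers removed above survive elsewhere — d_walk() below still calls
need_seqretry() against rename_lock — so presumably they moved to a shared
header. For reference, the even/odd retry pattern they implement looks like:

	int seq = 0;				/* must start even */
again:
	read_seqbegin_or_lock(&rename_lock, &seq);
	/* ... lockless walk ... */
	if (need_seqretry(&rename_lock, seq)) {
		seq = 1;			/* odd: retry as exclusive reader */
		goto again;
	}
	done_seqretry(&rename_lock, seq);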
/*
* This is the single most critical data structure when it comes
* to the dcache: the hashtable for lookups. Somebody should try
* This hash-function tries to avoid losing too many bits of hash
* information, yet avoid using a prime hash-size or similar.
*/
-#define D_HASHBITS d_hash_shift
-#define D_HASHMASK d_hash_mask
static unsigned int d_hash_mask __read_mostly;
static unsigned int d_hash_shift __read_mostly;
unsigned int hash)
{
hash += (unsigned long) parent / L1_CACHE_BYTES;
- hash = hash + (hash >> D_HASHBITS);
- return dentry_hashtable + (hash & D_HASHMASK);
+ hash = hash + (hash >> d_hash_shift);
+ return dentry_hashtable + (hash & d_hash_mask);
}
/* Statistics gathering. */
if (!tcount)
return 0;
}
- mask = ~(~0ul << tcount*8);
+ mask = bytemask_from_count(tcount);
return unlikely(!!((a ^ b) & mask));
}
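bytemask_from_count() is the named form of the expression it replaces; on a
little-endian 64-bit build it reduces to the same open-coded mask (the helper
also has a big-endian variant, which is why the little-endian caveat is deleted
from the name-hashing comment later in this series):

	#define bytemask_from_count(cnt)	(~(~0ul << ((cnt) * 8)))

	/* e.g. cnt == 3 -> mask == 0x0000000000ffffff, so
	 * (a ^ b) & mask compares only the low three bytes,
	 * i.e. the first three characters of the name.
	 */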
{
list_del(&dentry->d_u.d_child);
/*
- * Inform try_to_ascend() that we are no longer attached to the
+ * Inform d_walk() that we are no longer attached to the
* dentry tree
*/
dentry->d_flags |= DCACHE_DENTRY_KILLED;
}
EXPORT_SYMBOL(shrink_dcache_sb);
-/*
- * This tries to ascend one level of parenthood, but
- * we can race with renaming, so we need to re-check
- * the parenthood after dropping the lock and check
- * that the sequence number still matches.
- */
-static struct dentry *try_to_ascend(struct dentry *old, unsigned seq)
-{
- struct dentry *new = old->d_parent;
-
- rcu_read_lock();
- spin_unlock(&old->d_lock);
- spin_lock(&new->d_lock);
-
- /*
- * might go back up the wrong parent if we have had a rename
- * or deletion
- */
- if (new != old->d_parent ||
- (old->d_flags & DCACHE_DENTRY_KILLED) ||
- need_seqretry(&rename_lock, seq)) {
- spin_unlock(&new->d_lock);
- new = NULL;
- }
- rcu_read_unlock();
- return new;
-}
-
/**
 * enum d_walk_ret - action to take during tree walk
 * @D_WALK_CONTINUE:	continue walk
*/
if (this_parent != parent) {
struct dentry *child = this_parent;
- this_parent = try_to_ascend(this_parent, seq);
- if (!this_parent)
+ this_parent = child->d_parent;
+
+ rcu_read_lock();
+ spin_unlock(&child->d_lock);
+ spin_lock(&this_parent->d_lock);
+
+ /*
+ * might go back up the wrong parent if we have had a rename
+ * or deletion
+ */
+ if (this_parent != child->d_parent ||
+ (child->d_flags & DCACHE_DENTRY_KILLED) ||
+ need_seqretry(&rename_lock, seq)) {
+ spin_unlock(&this_parent->d_lock);
+ rcu_read_unlock();
goto rename_retry;
+ }
+ rcu_read_unlock();
next = child->d_u.d_child.next;
goto resume;
}
static long
ecryptfs_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
- struct file *lower_file = NULL;
+ struct file *lower_file = ecryptfs_file_to_lower(file);
long rc = -ENOTTY;
- if (ecryptfs_file_to_private(file))
- lower_file = ecryptfs_file_to_lower(file);
if (lower_file->f_op->unlocked_ioctl)
rc = lower_file->f_op->unlocked_ioctl(lower_file, cmd, arg);
return rc;
static long
ecryptfs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
- struct file *lower_file = NULL;
+ struct file *lower_file = ecryptfs_file_to_lower(file);
long rc = -ENOIOCTLCMD;
- if (ecryptfs_file_to_private(file))
- lower_file = ecryptfs_file_to_lower(file);
if (lower_file->f_op && lower_file->f_op->compat_ioctl)
rc = lower_file->f_op->compat_ioctl(lower_file, cmd, arg);
return rc;
return 0;
}
-/*
- * Retaining negative dentries for an in-memory filesystem just wastes
- * memory and lookup time: arrange for them to be deleted immediately.
- */
-static int efivarfs_delete_dentry(const struct dentry *dentry)
-{
- return 1;
-}
-
static struct dentry_operations efivarfs_d_ops = {
.d_compare = efivarfs_d_compare,
.d_hash = efivarfs_d_hash,
- .d_delete = efivarfs_delete_dentry,
+ .d_delete = always_delete_dentry,
};
static struct dentry *efivarfs_alloc_dentry(struct dentry *parent, char *name)
goto error_tgt_fput;
/* Check if EPOLLWAKEUP is allowed */
- if ((epds.events & EPOLLWAKEUP) && !capable(CAP_BLOCK_SUSPEND))
- epds.events &= ~EPOLLWAKEUP;
+ ep_take_care_of_epollwakeup(&epds);
/*
* We have to check that the file structure underneath the file descriptor
}
}
}
- if (op == EPOLL_CTL_DEL && is_file_epoll(tf.file)) {
- tep = tf.file->private_data;
- mutex_lock_nested(&tep->mtx, 1);
- }
/*
* Try to lookup the file inside our RB tree, Since we grabbed "mtx"
if (retval)
return retval;
- retval = audit_bprm(bprm);
- if (retval)
- return retval;
-
retval = -ENOENT;
retry:
read_lock(&binfmt_lock);
ret = search_binary_handler(bprm);
if (ret >= 0) {
+ audit_bprm(bprm);
trace_sched_process_exec(current, old_pid, bprm);
ptrace_event(PTRACE_EVENT_EXEC, old_vpid);
current->did_exec = 1;
sb->s_blocksize - offset : towrite;
tmp_bh.b_state = 0;
+ tmp_bh.b_size = sb->s_blocksize;
err = ext2_get_block(inode, blk, &tmp_bh, 1);
if (err < 0)
goto out;
/* Translate # of blks to # of clusters */
#define EXT4_NUM_B2C(sbi, blks) (((blks) + (sbi)->s_cluster_ratio - 1) >> \
(sbi)->s_cluster_bits)
+/* Mask out the low bits to get the starting block of the cluster */
+#define EXT4_PBLK_CMASK(s, pblk) ((pblk) & \
+ ~((ext4_fsblk_t) (s)->s_cluster_ratio - 1))
+#define EXT4_LBLK_CMASK(s, lblk) ((lblk) & \
+ ~((ext4_lblk_t) (s)->s_cluster_ratio - 1))
+/* Get the cluster offset */
+#define EXT4_PBLK_COFF(s, pblk) ((pblk) & \
+ ((ext4_fsblk_t) (s)->s_cluster_ratio - 1))
+#define EXT4_LBLK_COFF(s, lblk) ((lblk) & \
+ ((ext4_lblk_t) (s)->s_cluster_ratio - 1))
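A worked example, assuming a bigalloc filesystem with s_cluster_ratio == 16
(the ratio is always a power of two, so these are simple bit masks):

	/* lblk == 35 (0b100011), ratio == 16: */
	EXT4_LBLK_CMASK(sbi, 35);	/* == 32, first block of the cluster */
	EXT4_LBLK_COFF(sbi, 35);	/* ==  3, offset within the cluster  */

The hunks below replace the open-coded "& ~(ratio - 1)" and "& (ratio - 1)"
expressions scattered through the extents, mballoc and inode code with these
named macros.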
/*
* Structure of a blocks group descriptor
if (WARN_ON_ONCE(err)) {
ext4_journal_abort_handle(where, line, __func__, bh,
handle, err);
+ ext4_error_inode(inode, where, line,
+ bh->b_blocknr,
+ "journal_dirty_metadata failed: "
+ "handle type %u started at line %u, "
+ "credits %u/%u, errcode %d",
+ handle->h_type,
+ handle->h_line_no,
+ handle->h_requested_credits,
+ handle->h_buffer_credits, err);
}
} else {
if (inode)
{
ext4_fsblk_t block = ext4_ext_pblock(ext);
int len = ext4_ext_get_actual_len(ext);
+ ext4_lblk_t lblock = le32_to_cpu(ext->ee_block);
+ ext4_lblk_t last = lblock + len - 1;
- if (len == 0)
+ if (lblock > last)
return 0;
return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
}
if (depth == 0) {
/* leaf entries */
struct ext4_extent *ext = EXT_FIRST_EXTENT(eh);
+ struct ext4_super_block *es = EXT4_SB(inode->i_sb)->s_es;
+ ext4_fsblk_t pblock = 0;
+ ext4_lblk_t lblock = 0;
+ ext4_lblk_t prev = 0;
+ int len = 0;
while (entries) {
if (!ext4_valid_extent(inode, ext))
return 0;
+
+ /* Check for overlapping extents */
+ lblock = le32_to_cpu(ext->ee_block);
+ len = ext4_ext_get_actual_len(ext);
+ if ((lblock <= prev) && prev) {
+ pblock = ext4_ext_pblock(ext);
+ es->s_last_error_block = cpu_to_le64(pblock);
+ return 0;
+ }
ext++;
entries--;
+ prev = lblock + len - 1;
}
} else {
struct ext4_extent_idx *ext_idx = EXT_FIRST_INDEX(eh);
depth = ext_depth(inode);
if (!path[depth].p_ext)
goto out;
- b2 = le32_to_cpu(path[depth].p_ext->ee_block);
- b2 &= ~(sbi->s_cluster_ratio - 1);
+ b2 = EXT4_LBLK_CMASK(sbi, le32_to_cpu(path[depth].p_ext->ee_block));
/*
* get the next allocated block if the extent in the path
b2 = ext4_ext_next_allocated_block(path);
if (b2 == EXT_MAX_BLOCKS)
goto out;
- b2 &= ~(sbi->s_cluster_ratio - 1);
+ b2 = EXT4_LBLK_CMASK(sbi, b2);
}
/* check for wrap through zero on extent logical start block*/
* extent, we have to mark the cluster as used (store negative
* cluster number in partial_cluster).
*/
- unaligned = pblk & (sbi->s_cluster_ratio - 1);
+ unaligned = EXT4_PBLK_COFF(sbi, pblk);
if (unaligned && (ee_len == num) &&
(*partial_cluster != -((long long)EXT4_B2C(sbi, pblk))))
*partial_cluster = EXT4_B2C(sbi, pblk);
* accidentally freeing it later on
*/
pblk = ext4_ext_pblock(ex);
- if (pblk & (sbi->s_cluster_ratio - 1))
+ if (EXT4_PBLK_COFF(sbi, pblk))
*partial_cluster =
-((long long)EXT4_B2C(sbi, pblk));
ex--;
{
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
ext4_lblk_t lblk_start, lblk_end;
- lblk_start = lblk & (~(sbi->s_cluster_ratio - 1));
+ lblk_start = EXT4_LBLK_CMASK(sbi, lblk);
lblk_end = lblk_start + sbi->s_cluster_ratio - 1;
return ext4_find_delalloc_range(inode, lblk_start, lblk_end);
trace_ext4_get_reserved_cluster_alloc(inode, lblk_start, num_blks);
/* Check towards left side */
- c_offset = lblk_start & (sbi->s_cluster_ratio - 1);
+ c_offset = EXT4_LBLK_COFF(sbi, lblk_start);
if (c_offset) {
- lblk_from = lblk_start & (~(sbi->s_cluster_ratio - 1));
+ lblk_from = EXT4_LBLK_CMASK(sbi, lblk_start);
lblk_to = lblk_from + c_offset - 1;
if (ext4_find_delalloc_range(inode, lblk_from, lblk_to))
}
/* Now check towards right. */
- c_offset = (lblk_start + num_blks) & (sbi->s_cluster_ratio - 1);
+ c_offset = EXT4_LBLK_COFF(sbi, lblk_start + num_blks);
if (allocated_clusters && c_offset) {
lblk_from = lblk_start + num_blks;
lblk_to = lblk_from + (sbi->s_cluster_ratio - c_offset) - 1;
struct ext4_ext_path *path)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
- ext4_lblk_t c_offset = map->m_lblk & (sbi->s_cluster_ratio-1);
+ ext4_lblk_t c_offset = EXT4_LBLK_COFF(sbi, map->m_lblk);
ext4_lblk_t ex_cluster_start, ex_cluster_end;
ext4_lblk_t rr_cluster_start;
ext4_lblk_t ee_block = le32_to_cpu(ex->ee_block);
(rr_cluster_start == ex_cluster_start)) {
if (rr_cluster_start == ex_cluster_end)
ee_start += ee_len - 1;
- map->m_pblk = (ee_start & ~(sbi->s_cluster_ratio - 1)) +
- c_offset;
+ map->m_pblk = EXT4_PBLK_CMASK(sbi, ee_start) + c_offset;
map->m_len = min(map->m_len,
(unsigned) sbi->s_cluster_ratio - c_offset);
/*
*/
map->m_flags &= ~EXT4_MAP_FROM_CLUSTER;
newex.ee_block = cpu_to_le32(map->m_lblk);
- cluster_offset = map->m_lblk & (sbi->s_cluster_ratio-1);
+ cluster_offset = EXT4_LBLK_COFF(sbi, map->m_lblk);
/*
* If we are doing bigalloc, check to see if the extent returned
* needed so that future calls to get_implied_cluster_alloc()
* work correctly.
*/
- offset = map->m_lblk & (sbi->s_cluster_ratio - 1);
+ offset = EXT4_LBLK_COFF(sbi, map->m_lblk);
ar.len = EXT4_NUM_B2C(sbi, offset+allocated);
ar.goal -= offset;
ar.logical -= offset;
*/
static int ext4_da_reserve_metadata(struct inode *inode, ext4_lblk_t lblock)
{
- int retries = 0;
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
struct ext4_inode_info *ei = EXT4_I(inode);
unsigned int md_needed;
* in order to allocate nrblocks
* worse case is one extent per block
*/
-repeat:
spin_lock(&ei->i_block_reservation_lock);
/*
* ext4_calc_metadata_amount() has side effects, which we have
ei->i_da_metadata_calc_len = save_len;
ei->i_da_metadata_calc_last_lblock = save_last_lblock;
spin_unlock(&ei->i_block_reservation_lock);
- if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
- cond_resched();
- goto repeat;
- }
return -ENOSPC;
}
ei->i_reserved_meta_blocks += md_needed;
*/
static int ext4_da_reserve_space(struct inode *inode, ext4_lblk_t lblock)
{
- int retries = 0;
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
struct ext4_inode_info *ei = EXT4_I(inode);
unsigned int md_needed;
* in order to allocate nrblocks
* worse case is one extent per block
*/
-repeat:
spin_lock(&ei->i_block_reservation_lock);
/*
* ext4_calc_metadata_amount() has side effects, which we have
ei->i_da_metadata_calc_len = save_len;
ei->i_da_metadata_calc_last_lblock = save_last_lblock;
spin_unlock(&ei->i_block_reservation_lock);
- if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
- cond_resched();
- goto repeat;
- }
dquot_release_reservation_block(inode, EXT4_C2B(sbi, 1));
return -ENOSPC;
}
{
struct ext4_prealloc_space *pa;
pa = container_of(head, struct ext4_prealloc_space, u.pa_rcu);
+
+ BUG_ON(atomic_read(&pa->pa_count));
+ BUG_ON(pa->pa_deleted == 0);
kmem_cache_free(ext4_pspace_cachep, pa);
}
ext4_group_t grp;
ext4_fsblk_t grp_blk;
- if (!atomic_dec_and_test(&pa->pa_count) || pa->pa_free != 0)
- return;
-
/* in this short window concurrent discard can set pa_deleted */
spin_lock(&pa->pa_lock);
+ if (!atomic_dec_and_test(&pa->pa_count) || pa->pa_free != 0) {
+ spin_unlock(&pa->pa_lock);
+ return;
+ }
+
if (pa->pa_deleted == 1) {
spin_unlock(&pa->pa_lock);
return;
ext4_get_group_no_and_offset(sb, goal, &group, &block);
/* set up allocation goals */
- ac->ac_b_ex.fe_logical = ar->logical & ~(sbi->s_cluster_ratio - 1);
+ ac->ac_b_ex.fe_logical = EXT4_LBLK_CMASK(sbi, ar->logical);
ac->ac_status = AC_STATUS_CONTINUE;
ac->ac_sb = sb;
ac->ac_inode = ar->inode;
* blocks at the beginning or the end unless we are explicitly
* requested to avoid doing so.
*/
- overflow = block & (sbi->s_cluster_ratio - 1);
+ overflow = EXT4_PBLK_COFF(sbi, block);
if (overflow) {
if (flags & EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER) {
overflow = sbi->s_cluster_ratio - overflow;
count += overflow;
}
}
- overflow = count & (sbi->s_cluster_ratio - 1);
+ overflow = EXT4_LBLK_COFF(sbi, count);
if (overflow) {
if (flags & EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER) {
if (count > overflow)
}
ext4_es_unregister_shrinker(sbi);
- del_timer(&sbi->s_err_report);
+ del_timer_sync(&sbi->s_err_report);
ext4_release_system_zone(sb);
ext4_mb_release(sb);
ext4_ext_release(sb);
}
-static ext4_fsblk_t ext4_calculate_resv_clusters(struct ext4_sb_info *sbi)
+static ext4_fsblk_t ext4_calculate_resv_clusters(struct super_block *sb)
{
ext4_fsblk_t resv_clusters;
+ /*
+ * There's no need to reserve anything when we aren't using extents.
+ * The space estimates are exact, there are no unwritten extents,
+ * hole punching doesn't need new metadata... This is needed especially
+ * to keep ext2/3 backward compatibility.
+ */
+ if (!EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_EXTENTS))
+ return 0;
/*
* By default we reserve 2% or 4096 clusters, whichever is smaller.
* This should cover the situations where we can not afford to run
* allocation would require 1, or 2 blocks, higher numbers are
* very rare.
*/
- resv_clusters = ext4_blocks_count(sbi->s_es) >> sbi->s_cluster_bits;
+ resv_clusters = ext4_blocks_count(EXT4_SB(sb)->s_es) >>
+ EXT4_SB(sb)->s_cluster_bits;
do_div(resv_clusters, 50);
resv_clusters = min_t(ext4_fsblk_t, resv_clusters, 4096);
"available");
}
- err = ext4_reserve_clusters(sbi, ext4_calculate_resv_clusters(sbi));
+ err = ext4_reserve_clusters(sbi, ext4_calculate_resv_clusters(sb));
if (err) {
ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for "
- "reserved pool", ext4_calculate_resv_clusters(sbi));
+ "reserved pool", ext4_calculate_resv_clusters(sb));
goto failed_mount4a;
}
}
failed_mount3:
ext4_es_unregister_shrinker(sbi);
- del_timer(&sbi->s_err_report);
+ del_timer_sync(&sbi->s_err_report);
if (sbi->s_flex_groups)
ext4_kvfree(sbi->s_flex_groups);
percpu_counter_destroy(&sbi->s_freeclusters_counter);
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
+ struct address_space *mapping = inode->i_mapping;
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_holder gh;
int rv;
if (rv != 1)
goto out; /* dio not valid, fall back to buffered i/o */
+ /*
+ * Now since we are holding a deferred (CW) lock at this point, you
+ * might be wondering why this is ever needed. There is a case however
+ * where we've granted a deferred local lock against a cached exclusive
+ * glock. That is ok provided all granted local locks are deferred, but
+ * it also means that it is possible to encounter pages which are
+ * cached and possibly also mapped. So here we check for that and sort
+ * them out ahead of the dio. The glock state machine will take care of
+ * everything else.
+ *
+ * If in fact the cached glock state (gl->gl_state) is deferred (CW) in
+	 * the first place, mapping->nrpages will always be zero.
+ */
+ if (mapping->nrpages) {
+ loff_t lstart = offset & (PAGE_CACHE_SIZE - 1);
+ loff_t len = iov_length(iov, nr_segs);
+ loff_t end = PAGE_ALIGN(offset + len) - 1;
+
+ rv = 0;
+ if (len == 0)
+ goto out;
+ if (test_and_clear_bit(GIF_SW_PAGED, &ip->i_flags))
+ unmap_shared_mapping_range(ip->i_inode.i_mapping, offset, len);
+ rv = filemap_write_and_wait_range(mapping, lstart, end);
+ if (rv)
+ return rv;
+ truncate_inode_pages_range(mapping, lstart, end);
+ }
+
rv = __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iov,
offset, nr_segs, gfs2_get_block_direct,
NULL, NULL, 0);
struct task_struct *gh_owner = NULL;
char flags_buf[32];
+ rcu_read_lock();
if (gh->gh_owner_pid)
gh_owner = pid_task(gh->gh_owner_pid, PIDTYPE_PID);
gfs2_print_dbg(seq, " H: s:%s f:%s e:%d p:%ld [%s] %pS\n",
gh->gh_owner_pid ? (long)pid_nr(gh->gh_owner_pid) : -1,
gh_owner ? gh_owner->comm : "(ended)",
(void *)gh->gh_ip);
+ rcu_read_unlock();
return 0;
}
gi->nhash = 0;
}
/* Skip entries for other sb and dead entries */
- } while (gi->sdp != gi->gl->gl_sbd || __lockref_is_dead(&gl->gl_lockref));
+ } while (gi->sdp != gi->gl->gl_sbd ||
+ __lockref_is_dead(&gi->gl->gl_lockref));
return 0;
}
if (ip && !S_ISREG(ip->i_inode.i_mode))
ip = NULL;
- if (ip && test_and_clear_bit(GIF_SW_PAGED, &ip->i_flags))
- unmap_shared_mapping_range(ip->i_inode.i_mapping, 0, 0);
+ if (ip) {
+ if (test_and_clear_bit(GIF_SW_PAGED, &ip->i_flags))
+ unmap_shared_mapping_range(ip->i_inode.i_mapping, 0, 0);
+ inode_dio_wait(&ip->i_inode);
+ }
if (!test_and_clear_bit(GLF_DIRTY, &gl->gl_flags))
return;
return error;
}
+ if (gh->gh_state != LM_ST_DEFERRED)
+ inode_dio_wait(&ip->i_inode);
+
if ((ip->i_diskflags & GFS2_DIF_TRUNC_IN_PROG) &&
(gl->gl_state == LM_ST_EXCLUSIVE) &&
(gh->gh_state == LM_ST_EXCLUSIVE)) {
if (d != NULL)
dentry = d;
if (dentry->d_inode) {
- if (!(*opened & FILE_OPENED))
+ if (!(*opened & FILE_OPENED)) {
+ if (d == NULL)
+ dget(dentry);
return finish_no_open(file, dentry);
+ }
dput(d);
return 0;
}
static void control_lvb_read(struct lm_lockstruct *ls, uint32_t *lvb_gen,
char *lvb_bits)
{
- uint32_t gen;
+ __le32 gen;
memcpy(lvb_bits, ls->ls_control_lvb, GDLM_LVB_SIZE);
- memcpy(&gen, lvb_bits, sizeof(uint32_t));
+ memcpy(&gen, lvb_bits, sizeof(__le32));
*lvb_gen = le32_to_cpu(gen);
}
static void control_lvb_write(struct lm_lockstruct *ls, uint32_t lvb_gen,
char *lvb_bits)
{
- uint32_t gen;
+ __le32 gen;
memcpy(ls->ls_control_lvb, lvb_bits, GDLM_LVB_SIZE);
gen = cpu_to_le32(lvb_gen);
- memcpy(ls->ls_control_lvb, &gen, sizeof(uint32_t));
+ memcpy(ls->ls_control_lvb, &gen, sizeof(__le32));
}
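The uint32_t-to-__le32 change is purely an annotation fix: the value was already
little-endian on the wire, but the __le32 type lets sparse (make C=1) catch a
missing conversion. A minimal illustration, assuming a sparse-checked build:

	__le32 gen = cpu_to_le32(lvb_gen);	/* ok */
	uint32_t host = le32_to_cpu(gen);	/* ok, explicit conversion */
	/* uint32_t raw = gen;    sparse warns: restricted __le32 misuse */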
static int all_jid_bits_clear(char *lvb)
struct buffer_head *bh = bd->bd_bh;
struct gfs2_glock *gl = bd->bd_gl;
- gfs2_remove_from_ail(bd);
- bd->bd_bh = NULL;
bh->b_private = NULL;
bd->bd_blkno = bh->b_blocknr;
+ gfs2_remove_from_ail(bd); /* drops ref on bh */
+ bd->bd_bh = NULL;
bd->bd_ops = &gfs2_revoke_lops;
sdp->sd_log_num_revoke++;
atomic_inc(&gl->gl_revokes);
struct address_space *mapping = bh->b_page->mapping;
struct gfs2_sbd *sdp = gfs2_mapping2sbd(mapping);
struct gfs2_bufdata *bd = bh->b_private;
+ int was_pinned = 0;
if (test_clear_buffer_pinned(bh)) {
trace_gfs2_pin(bd, 0);
tr->tr_num_databuf_rm++;
}
tr->tr_touched = 1;
+ was_pinned = 1;
brelse(bh);
}
if (bd) {
spin_lock(&sdp->sd_ail_lock);
if (bd->bd_tr) {
gfs2_trans_add_revoke(sdp, bd);
+ } else if (was_pinned) {
+ bh->b_private = NULL;
+ kmem_cache_free(gfs2_bufdata_cachep, bd);
}
spin_unlock(&sdp->sd_ail_lock);
}
if (IS_ERR(s))
goto error_bdev;
- if (s->s_root)
+ if (s->s_root) {
+ /*
+ * s_umount nests inside bd_mutex during
+ * __invalidate_device(). blkdev_put() acquires
+ * bd_mutex and can't be called under s_umount. Drop
+ * s_umount temporarily. This is safe as we're
+ * holding an active reference.
+ */
+ up_write(&s->s_umount);
blkdev_put(bdev, mode);
+ down_write(&s->s_umount);
+ }
memset(&args, 0, sizeof(args));
args.ar_quota = GFS2_QUOTA_DEFAULT;
struct buffer_head *bh;
struct page *page;
void *kaddr, *ptr;
- struct gfs2_quota q, *qp;
+ struct gfs2_quota q;
int err, nbytes;
u64 size;
return err;
err = -EIO;
- qp = &q;
- qp->qu_value = be64_to_cpu(qp->qu_value);
- qp->qu_value += change;
- qp->qu_value = cpu_to_be64(qp->qu_value);
- qd->qd_qb.qb_value = qp->qu_value;
+ be64_add_cpu(&q.qu_value, change);
+ qd->qd_qb.qb_value = q.qu_value;
if (fdq) {
if (fdq->d_fieldmask & FS_DQ_BSOFT) {
- qp->qu_warn = cpu_to_be64(fdq->d_blk_softlimit >> sdp->sd_fsb2bb_shift);
- qd->qd_qb.qb_warn = qp->qu_warn;
+ q.qu_warn = cpu_to_be64(fdq->d_blk_softlimit >> sdp->sd_fsb2bb_shift);
+ qd->qd_qb.qb_warn = q.qu_warn;
}
if (fdq->d_fieldmask & FS_DQ_BHARD) {
- qp->qu_limit = cpu_to_be64(fdq->d_blk_hardlimit >> sdp->sd_fsb2bb_shift);
- qd->qd_qb.qb_limit = qp->qu_limit;
+ q.qu_limit = cpu_to_be64(fdq->d_blk_hardlimit >> sdp->sd_fsb2bb_shift);
+ qd->qd_qb.qb_limit = q.qu_limit;
}
if (fdq->d_fieldmask & FS_DQ_BCOUNT) {
- qp->qu_value = cpu_to_be64(fdq->d_bcount >> sdp->sd_fsb2bb_shift);
- qd->qd_qb.qb_value = qp->qu_value;
+ q.qu_value = cpu_to_be64(fdq->d_bcount >> sdp->sd_fsb2bb_shift);
+ qd->qd_qb.qb_value = q.qu_value;
}
}
/* Write the quota into the quota file on disk */
- ptr = qp;
+ ptr = &q;
nbytes = sizeof(struct gfs2_quota);
get_a_page:
page = find_or_create_page(mapping, index, GFP_NOFS);
rgd->rd_flags |= (GFS2_RDF_UPTODATE | GFS2_RDF_CHECK);
rgd->rd_free_clone = rgd->rd_free;
}
- if (be32_to_cpu(GFS2_MAGIC) != rgd->rd_rgl->rl_magic) {
+ if (cpu_to_be32(GFS2_MAGIC) != rgd->rd_rgl->rl_magic) {
rgd->rd_rgl->rl_unlinked = cpu_to_be32(count_unlinked(rgd));
gfs2_rgrp_ondisk2lvb(rgd->rd_rgl,
rgd->rd_bits[0].bi_bh->b_data);
if (rgd->rd_flags & GFS2_RDF_UPTODATE)
return 0;
- if (be32_to_cpu(GFS2_MAGIC) != rgd->rd_rgl->rl_magic)
+ if (cpu_to_be32(GFS2_MAGIC) != rgd->rd_rgl->rl_magic)
return gfs2_rgrp_bh_get(rgd);
rl_flags = be32_to_cpu(rgd->rd_rgl->rl_flags);
u16 embed_count;
};
-static void hfsplus_end_io_sync(struct bio *bio, int err)
-{
- if (err)
- clear_bit(BIO_UPTODATE, &bio->bi_flags);
- complete(bio->bi_private);
-}
-
/*
 * hfsplus_submit_bio - Perform block I/O
* @sb: super block of volume for I/O
int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
void *buf, void **data, int rw)
{
- DECLARE_COMPLETION_ONSTACK(wait);
struct bio *bio;
int ret = 0;
u64 io_size;
bio = bio_alloc(GFP_NOIO, 1);
bio->bi_sector = sector;
bio->bi_bdev = sb->s_bdev;
- bio->bi_end_io = hfsplus_end_io_sync;
- bio->bi_private = &wait;
if (!(rw & WRITE) && data)
*data = (u8 *)buf + offset;
buf = (u8 *)buf + len;
}
- submit_bio(rw, bio);
- wait_for_completion(&wait);
-
- if (!bio_flagged(bio, BIO_UPTODATE))
- ret = -EIO;
-
+ ret = submit_bio_wait(rw, bio);
out:
bio_put(bio);
return ret < 0 ? ret : 0;
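Both this hunk and the logfs one further down replace the same open-coded
completion boilerplate with submit_bio_wait(). Schematically (the end_io helper
named here stands in for the kind of callback being deleted):

	/* before */
	DECLARE_COMPLETION_ONSTACK(wait);
	bio->bi_end_io  = my_end_io;	/* hypothetical: complete(bio->bi_private) */
	bio->bi_private = &wait;
	submit_bio(rw, bio);
	wait_for_completion(&wait);
	ret = bio_flagged(bio, BIO_UPTODATE) ? 0 : -EIO;

	/* after */
	ret = submit_bio_wait(rw, bio);	/* 0 on success, negative errno on failure */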
#define FILE_HOSTFS_I(file) HOSTFS_I(file_inode(file))
-static int hostfs_d_delete(const struct dentry *dentry)
-{
- return 1;
-}
-
-static const struct dentry_operations hostfs_dentry_ops = {
- .d_delete = hostfs_d_delete,
-};
-
/* Changed in hostfs_args before the kernel starts running */
static char *root_ino = "";
static int append = 0;
sb->s_blocksize_bits = 10;
sb->s_magic = HOSTFS_SUPER_MAGIC;
sb->s_op = &hostfs_sbops;
- sb->s_d_op = &hostfs_dentry_ops;
+ sb->s_d_op = &simple_dentry_operations;
sb->s_maxbytes = MAX_LFS_FILESIZE;
/* NULL is printed as <NULL> by sprintf: avoid that. */
read_lock(&journal->j_state_lock);
#ifdef CONFIG_JBD2_DEBUG
if (!tid_geq(journal->j_commit_request, tid)) {
- printk(KERN_EMERG
+ printk(KERN_ERR
"%s: error: j_commit_request=%d, tid=%d\n",
__func__, journal->j_commit_request, tid);
}
}
read_unlock(&journal->j_state_lock);
- if (unlikely(is_journal_aborted(journal))) {
- printk(KERN_EMERG "journal commit I/O error\n");
+ if (unlikely(is_journal_aborted(journal)))
err = -EIO;
- }
return err;
}
if (JBD2_HAS_COMPAT_FEATURE(journal, JBD2_FEATURE_COMPAT_CHECKSUM) &&
JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
/* Can't have checksum v1 and v2 on at the same time! */
- printk(KERN_ERR "JBD: Can't enable checksumming v1 and v2 "
+ printk(KERN_ERR "JBD2: Can't enable checksumming v1 and v2 "
"at the same time!\n");
goto out;
}
if (!jbd2_verify_csum_type(journal, sb)) {
- printk(KERN_ERR "JBD: Unknown checksum type\n");
+ printk(KERN_ERR "JBD2: Unknown checksum type\n");
goto out;
}
if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
if (IS_ERR(journal->j_chksum_driver)) {
- printk(KERN_ERR "JBD: Cannot load crc32c driver.\n");
+ printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
err = PTR_ERR(journal->j_chksum_driver);
journal->j_chksum_driver = NULL;
goto out;
/* Check superblock checksum */
if (!jbd2_superblock_csum_verify(journal, sb)) {
- printk(KERN_ERR "JBD: journal checksum error\n");
+ printk(KERN_ERR "JBD2: journal checksum error\n");
goto out;
}
journal->j_chksum_driver = crypto_alloc_shash("crc32c",
0, 0);
if (IS_ERR(journal->j_chksum_driver)) {
- printk(KERN_ERR "JBD: Cannot load crc32c "
+ printk(KERN_ERR "JBD2: Cannot load crc32c "
"driver.\n");
journal->j_chksum_driver = NULL;
return 0;
#ifdef CONFIG_JBD2_DEBUG
int n = atomic_read(&nr_journal_heads);
if (n)
- printk(KERN_EMERG "JBD2: leaked %d journal_heads!\n", n);
+ printk(KERN_ERR "JBD2: leaked %d journal_heads!\n", n);
#endif
jbd2_remove_jbd_stats_proc_entry();
jbd2_journal_destroy_caches();
be32_to_cpu(tmp->h_sequence))) {
brelse(obh);
success = -EIO;
- printk(KERN_ERR "JBD: Invalid "
+ printk(KERN_ERR "JBD2: Invalid "
"checksum recovering "
"block %llu in log\n",
blocknr);
jbd2_alloc(jh2bh(jh)->b_size,
GFP_NOFS);
if (!frozen_buffer) {
- printk(KERN_EMERG
+ printk(KERN_ERR
"%s: OOM for frozen_buffer\n",
__func__);
JBUFFER_TRACE(jh, "oom!");
if (!jh->b_committed_data) {
committed_data = jbd2_alloc(jh2bh(jh)->b_size, GFP_NOFS);
if (!committed_data) {
- printk(KERN_EMERG "%s: No memory for committed data\n",
+ printk(KERN_ERR "%s: No memory for committed data\n",
__func__);
err = -ENOMEM;
goto out;
* once a transaction -bzzz
*/
jh->b_modified = 1;
- J_ASSERT_JH(jh, handle->h_buffer_credits > 0);
+ if (handle->h_buffer_credits <= 0) {
+ ret = -ENOSPC;
+ goto out_unlock_bh;
+ }
handle->h_buffer_credits--;
}
JBUFFER_TRACE(jh, "fastpath");
if (unlikely(jh->b_transaction !=
journal->j_running_transaction)) {
- printk(KERN_EMERG "JBD: %s: "
+ printk(KERN_ERR "JBD2: %s: "
"jh->b_transaction (%llu, %p, %u) != "
"journal->j_running_transaction (%p, %u)",
journal->j_devname,
JBUFFER_TRACE(jh, "already on other transaction");
if (unlikely(jh->b_transaction !=
journal->j_committing_transaction)) {
- printk(KERN_EMERG "JBD: %s: "
+ printk(KERN_ERR "JBD2: %s: "
"jh->b_transaction (%llu, %p, %u) != "
"journal->j_committing_transaction (%p, %u)",
journal->j_devname,
ret = -EINVAL;
}
if (unlikely(jh->b_next_transaction != transaction)) {
- printk(KERN_EMERG "JBD: %s: "
+ printk(KERN_ERR "JBD2: %s: "
"jh->b_next_transaction (%llu, %p, %u) != "
"transaction (%p, %u)",
journal->j_devname,
jbd2_journal_put_journal_head(jh);
out:
JBUFFER_TRACE(jh, "exit");
- WARN_ON(ret); /* All errors are bugs, so dump the stack */
return ret;
}
* Retaining negative dentries for an in-memory filesystem just wastes
* memory and lookup time: arrange for them to be deleted immediately.
*/
-static int simple_delete_dentry(const struct dentry *dentry)
+int always_delete_dentry(const struct dentry *dentry)
{
return 1;
}
+EXPORT_SYMBOL(always_delete_dentry);
+
+const struct dentry_operations simple_dentry_operations = {
+ .d_delete = always_delete_dentry,
+};
+EXPORT_SYMBOL(simple_dentry_operations);
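With these exported, a filesystem that wants its dentries dropped on last dput
can delete its private always-return-1 helper and use either form; both patterns
appear in the other hunks of this series:

	/* alongside other dentry ops (helper names hypothetical): */
	static const struct dentry_operations my_d_ops = {
		.d_compare = my_d_compare,
		.d_delete  = always_delete_dentry,
	};

	/* or, when no other ops are needed: */
	sb->s_d_op = &simple_dentry_operations;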
/*
* Lookup the data. This is trivial - if the dentry didn't already
*/
struct dentry *simple_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
{
- static const struct dentry_operations simple_dentry_operations = {
- .d_delete = simple_delete_dentry,
- };
-
if (dentry->d_name.len > NAME_MAX)
return ERR_PTR(-ENAMETOOLONG);
if (!dentry->d_sb->s_d_op)
#define PAGE_OFS(ofs) ((ofs) & (PAGE_SIZE-1))
-static void request_complete(struct bio *bio, int err)
-{
- complete((struct completion *)bio->bi_private);
-}
-
static int sync_request(struct page *page, struct block_device *bdev, int rw)
{
struct bio bio;
struct bio_vec bio_vec;
- struct completion complete;
bio_init(&bio);
bio.bi_max_vecs = 1;
bio.bi_size = PAGE_SIZE;
bio.bi_bdev = bdev;
bio.bi_sector = page->index * (PAGE_SIZE >> 9);
- init_completion(&complete);
- bio.bi_private = &complete;
- bio.bi_end_io = request_complete;
- submit_bio(rw, &bio);
- wait_for_completion(&complete);
- return test_bit(BIO_UPTODATE, &bio.bi_flags) ? 0 : -EIO;
+ return submit_bio_wait(rw, &bio);
}
static int bdev_readpage(void *_sb, struct page *page)
if (!lockref_get_not_dead(&parent->d_lockref)) {
nd->path.dentry = NULL;
- rcu_read_unlock();
- return -ECHILD;
+ goto out;
}
/*
* do a "get_unaligned()" if this helps and is sufficiently
* fast.
*
- * - Little-endian machines (so that we can generate the mask
- * of low bytes efficiently). Again, we *could* do a byte
- * swapping load on big-endian architectures if that is not
- * expensive enough to make the optimization worthless.
- *
* - non-CONFIG_DEBUG_PAGEALLOC configurations (so that we
* do not trap on the (extremely unlikely) case of a page
* crossing operation.
if (!len)
goto done;
}
- mask = ~(~0ul << len*8);
+ mask = bytemask_from_count(len);
hash += mask & a;
done:
return fold_hash(hash);
*/
static inline int may_create(struct inode *dir, struct dentry *child)
{
+ audit_inode_child(dir, child, AUDIT_TYPE_CHILD_CREATE);
if (child->d_inode)
return -EEXIST;
if (IS_DEADDIR(dir))
#include <linux/nfs_fs.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
+#include "../nfs4_fs.h"
#include "../pnfs.h"
#include "../netns.h"
static inline sector_t normalize(sector_t s, int base)
{
sector_t tmp = s; /* Since do_div modifies its argument */
- return s - do_div(tmp, base);
+ return s - sector_div(tmp, base);
}
static inline sector_t normalize_up(sector_t s, int base)
#include <linux/sunrpc/cache.h>
#include <linux/sunrpc/svcauth.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
+#include <linux/nfs_fs.h>
+#include "nfs4_fs.h"
#include "dns_resolve.h"
#include "cache_lib.h"
#include "netns.h"
}
EXPORT_SYMBOL_GPL(nfs4_label_alloc);
#else
-void inline nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr,
+void nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr,
struct nfs4_label *label)
{
}
extern struct rpc_procinfo nfs4_procedures[];
#endif
+#ifdef CONFIG_NFS_V4_SECURITY_LABEL
+extern struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags);
+static inline void nfs4_label_free(struct nfs4_label *label)
+{
+ if (label) {
+ kfree(label->label);
+ kfree(label);
+ }
+ return;
+}
+#else
+static inline struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags) { return NULL; }
+static inline void nfs4_label_free(void *label) {}
+#endif /* CONFIG_NFS_V4_SECURITY_LABEL */
+
/* proc.c */
void nfs_close_context(struct nfs_open_context *ctx, int is_sync);
extern struct nfs_client *nfs_init_client(struct nfs_client *clp,
#ifndef __LINUX_FS_NFS_NFS4_FS_H
#define __LINUX_FS_NFS_NFS4_FS_H
+#if defined(CONFIG_NFS_V4_2)
+#define NFS4_MAX_MINOR_VERSION 2
+#elif defined(CONFIG_NFS_V4_1)
+#define NFS4_MAX_MINOR_VERSION 1
+#else
+#define NFS4_MAX_MINOR_VERSION 0
+#endif
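A hypothetical consumer of the new macro, rejecting mounts that request a minor
version the kernel was not built to support:

	if (minorversion > NFS4_MAX_MINOR_VERSION)	/* hypothetical check */
		return -EPROTONOSUPPORT;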
+
#if IS_ENABLED(CONFIG_NFS_V4)
#define NFS4_MAX_LOOP_ON_RECOVER (10)
calldata->roc_barrier);
nfs_set_open_stateid(state, &calldata->res.stateid, 0);
renew_lease(server, calldata->timestamp);
- nfs4_close_clear_stateid_flags(state,
- calldata->arg.fmode);
break;
+ case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_OLD_STATEID:
case -NFS4ERR_BAD_STATEID:
if (calldata->arg.fmode == 0)
break;
default:
- if (nfs4_async_handle_error(task, server, state) == -EAGAIN)
+ if (nfs4_async_handle_error(task, server, state) == -EAGAIN) {
rpc_restart_call_prepare(task);
+ goto out_release;
+ }
}
+ nfs4_close_clear_stateid_flags(state, calldata->arg.fmode);
+out_release:
nfs_release_seqid(calldata->arg.seqid);
nfs_refresh_inode(calldata->inode, calldata->res.fattr);
dprintk("%s: done, ret = %d!\n", __func__, task->tk_status);
dprintk("%s ERROR %d, Reset session\n", __func__,
task->tk_status);
nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);
- goto restart_call;
+ goto wait_on_recovery;
#endif /* CONFIG_NFS_V4_1 */
case -NFS4ERR_DELAY:
nfs_inc_server_stats(server, NFSIOS_DELAY);
trace_nfs4_delegreturn_exit(&data->args, &data->res, task->tk_status);
switch (task->tk_status) {
- case -NFS4ERR_STALE_STATEID:
- case -NFS4ERR_EXPIRED:
case 0:
renew_lease(data->res.server, data->timestamp);
break;
+ case -NFS4ERR_ADMIN_REVOKED:
+ case -NFS4ERR_DELEG_REVOKED:
+ case -NFS4ERR_BAD_STATEID:
+ case -NFS4ERR_OLD_STATEID:
+ case -NFS4ERR_STALE_STATEID:
+ case -NFS4ERR_EXPIRED:
+ task->tk_status = 0;
+ break;
default:
if (nfs4_async_handle_error(task, data->res.server, NULL) ==
-EAGAIN) {
return;
server = NFS_SERVER(lrp->args.inode);
- if (nfs4_async_handle_error(task, server, NULL) == -EAGAIN) {
+ switch (task->tk_status) {
+ default:
+ task->tk_status = 0;
+ case 0:
+ break;
+ case -NFS4ERR_DELAY:
+ if (nfs4_async_handle_error(task, server, NULL) != -EAGAIN)
+ break;
rpc_restart_call_prepare(task);
return;
}
static void next_decode_page(struct nfsd4_compoundargs *argp)
{
- argp->pagelist++;
argp->p = page_address(argp->pagelist[0]);
+ argp->pagelist++;
if (argp->pagelen < PAGE_SIZE) {
argp->end = argp->p + (argp->pagelen>>2);
argp->pagelen = 0;
len -= pages * PAGE_SIZE;
argp->p = (__be32 *)page_address(argp->pagelist[0]);
+ argp->pagelist++;
argp->end = argp->p + XDR_QUADLEN(PAGE_SIZE);
}
argp->p += XDR_QUADLEN(len);
return rp;
}
+static void
+nfsd_reply_cache_unhash(struct svc_cacherep *rp)
+{
+ hlist_del_init(&rp->c_hash);
+ list_del_init(&rp->c_lru);
+}
+
static void
nfsd_reply_cache_free_locked(struct svc_cacherep *rp)
{
rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
if (nfsd_cache_entry_expired(rp) ||
num_drc_entries >= max_drc_entries) {
- lru_put_end(rp);
+ nfsd_reply_cache_unhash(rp);
prune_cache_entries();
goto search_cache;
}
}
/*
- * Set various file attributes.
- * N.B. After this call fhp needs an fh_put
+ * Go over the attributes and take care of the small differences between
+ * NFS semantics and what Linux expects.
*/
-__be32
-nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
- int check_guard, time_t guardtime)
+static void
+nfsd_sanitize_attrs(struct inode *inode, struct iattr *iap)
{
- struct dentry *dentry;
- struct inode *inode;
- int accmode = NFSD_MAY_SATTR;
- umode_t ftype = 0;
- __be32 err;
- int host_err;
- int size_change = 0;
-
- if (iap->ia_valid & (ATTR_ATIME | ATTR_MTIME | ATTR_SIZE))
- accmode |= NFSD_MAY_WRITE|NFSD_MAY_OWNER_OVERRIDE;
- if (iap->ia_valid & ATTR_SIZE)
- ftype = S_IFREG;
-
- /* Get inode */
- err = fh_verify(rqstp, fhp, ftype, accmode);
- if (err)
- goto out;
-
- dentry = fhp->fh_dentry;
- inode = dentry->d_inode;
-
- /* Ignore any mode updates on symlinks */
- if (S_ISLNK(inode->i_mode))
- iap->ia_valid &= ~ATTR_MODE;
-
- if (!iap->ia_valid)
- goto out;
-
/*
* NFSv2 does not differentiate between "set-[ac]time-to-now"
* which only requires access, and "set-[ac]time-to-X" which
* convert to "set to now" instead of "set to explicit time"
*
* We only call inode_change_ok as the last test as technically
- * it is not an interface that we should be using. It is only
- * valid if the filesystem does not define it's own i_op->setattr.
+ * it is not an interface that we should be using.
*/
#define BOTH_TIME_SET (ATTR_ATIME_SET | ATTR_MTIME_SET)
#define MAX_TOUCH_TIME_ERROR (30*60)
iap->ia_valid &= ~BOTH_TIME_SET;
}
}
-
- /*
- * The size case is special.
- * It changes the file as well as the attributes.
- */
- if (iap->ia_valid & ATTR_SIZE) {
- if (iap->ia_size < inode->i_size) {
- err = nfsd_permission(rqstp, fhp->fh_export, dentry,
- NFSD_MAY_TRUNC|NFSD_MAY_OWNER_OVERRIDE);
- if (err)
- goto out;
- }
-
- host_err = get_write_access(inode);
- if (host_err)
- goto out_nfserr;
-
- size_change = 1;
- host_err = locks_verify_truncate(inode, NULL, iap->ia_size);
- if (host_err) {
- put_write_access(inode);
- goto out_nfserr;
- }
- }
/* sanitize the mode change */
if (iap->ia_valid & ATTR_MODE) {
iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID);
}
}
+}
- /* Change the attributes. */
+static __be32
+nfsd_get_write_access(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ struct iattr *iap)
+{
+ struct inode *inode = fhp->fh_dentry->d_inode;
+ int host_err;
- iap->ia_valid |= ATTR_CTIME;
+ if (iap->ia_size < inode->i_size) {
+ __be32 err;
- err = nfserr_notsync;
- if (!check_guard || guardtime == inode->i_ctime.tv_sec) {
- host_err = nfsd_break_lease(inode);
- if (host_err)
- goto out_nfserr;
- fh_lock(fhp);
+ err = nfsd_permission(rqstp, fhp->fh_export, fhp->fh_dentry,
+ NFSD_MAY_TRUNC | NFSD_MAY_OWNER_OVERRIDE);
+ if (err)
+ return err;
+ }
- host_err = notify_change(dentry, iap, NULL);
- err = nfserrno(host_err);
- fh_unlock(fhp);
+ host_err = get_write_access(inode);
+ if (host_err)
+ goto out_nfserrno;
+
+ host_err = locks_verify_truncate(inode, NULL, iap->ia_size);
+ if (host_err)
+ goto out_put_write_access;
+ return 0;
+
+out_put_write_access:
+ put_write_access(inode);
+out_nfserrno:
+ return nfserrno(host_err);
+}
+
+/*
+ * Set various file attributes. After this call fhp needs an fh_put.
+ */
+__be32
+nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ int check_guard, time_t guardtime)
+{
+ struct dentry *dentry;
+ struct inode *inode;
+ int accmode = NFSD_MAY_SATTR;
+ umode_t ftype = 0;
+ __be32 err;
+ int host_err;
+ int size_change = 0;
+
+ if (iap->ia_valid & (ATTR_ATIME | ATTR_MTIME | ATTR_SIZE))
+ accmode |= NFSD_MAY_WRITE|NFSD_MAY_OWNER_OVERRIDE;
+ if (iap->ia_valid & ATTR_SIZE)
+ ftype = S_IFREG;
+
+ /* Get inode */
+ err = fh_verify(rqstp, fhp, ftype, accmode);
+ if (err)
+ goto out;
+
+ dentry = fhp->fh_dentry;
+ inode = dentry->d_inode;
+
+ /* Ignore any mode updates on symlinks */
+ if (S_ISLNK(inode->i_mode))
+ iap->ia_valid &= ~ATTR_MODE;
+
+ if (!iap->ia_valid)
+ goto out;
+
+ nfsd_sanitize_attrs(inode, iap);
+
+ /*
+ * The size case is special, it changes the file in addition to the
+ * attributes.
+ */
+ if (iap->ia_valid & ATTR_SIZE) {
+ err = nfsd_get_write_access(rqstp, fhp, iap);
+ if (err)
+ goto out;
+ size_change = 1;
}
+
+ iap->ia_valid |= ATTR_CTIME;
+
+ if (check_guard && guardtime != inode->i_ctime.tv_sec) {
+ err = nfserr_notsync;
+ goto out_put_write_access;
+ }
+
+ host_err = nfsd_break_lease(inode);
+ if (host_err)
+ goto out_put_write_access_nfserror;
+
+ fh_lock(fhp);
+ host_err = notify_change(dentry, iap, NULL);
+ fh_unlock(fhp);
+
+out_put_write_access_nfserror:
+ err = nfserrno(host_err);
+out_put_write_access:
if (size_change)
put_write_access(inode);
if (!err)
commit_metadata(fhp);
out:
return err;
-
-out_nfserr:
- err = nfserrno(host_err);
- goto out;
}
#if defined(CONFIG_NFSD_V2_ACL) || \
return mask;
}
+static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
+{
+ int kill = 0;
+
+ spin_lock(&inode->i_lock);
+ if (!--pipe->files) {
+ inode->i_pipe = NULL;
+ kill = 1;
+ }
+ spin_unlock(&inode->i_lock);
+
+ if (kill)
+ free_pipe_info(pipe);
+}
+
static int
pipe_release(struct inode *inode, struct file *file)
{
- struct pipe_inode_info *pipe = inode->i_pipe;
- int kill = 0;
+ struct pipe_inode_info *pipe = file->private_data;
__pipe_lock(pipe);
if (file->f_mode & FMODE_READ)
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
}
- spin_lock(&inode->i_lock);
- if (!--pipe->files) {
- inode->i_pipe = NULL;
- kill = 1;
- }
- spin_unlock(&inode->i_lock);
__pipe_unlock(pipe);
- if (kill)
- free_pipe_info(pipe);
-
+ put_pipe_info(inode, pipe);
return 0;
}
{
struct pipe_inode_info *pipe;
bool is_pipe = inode->i_sb->s_magic == PIPEFS_MAGIC;
- int kill = 0;
int ret;
filp->f_version = 0;
goto err;
err:
- spin_lock(&inode->i_lock);
- if (!--pipe->files) {
- inode->i_pipe = NULL;
- kill = 1;
- }
- spin_unlock(&inode->i_lock);
__pipe_unlock(pipe);
- if (kill)
- free_pipe_info(pipe);
+
+ put_pipe_info(inode, pipe);
return ret;
}
goto out_free_page;
}
- kloginuid = make_kuid(file->f_cred->user_ns, loginuid);
- if (!uid_valid(kloginuid)) {
- length = -EINVAL;
- goto out_free_page;
+
+	/* is userspace trying to explicitly UNSET the loginuid? */
+ if (loginuid == AUDIT_UID_UNSET) {
+ kloginuid = INVALID_UID;
+ } else {
+ kloginuid = make_kuid(file->f_cred->user_ns, loginuid);
+ if (!uid_valid(kloginuid)) {
+ length = -EINVAL;
+ goto out_free_page;
+ }
}
length = audit_set_loginuid(kloginuid);
.follow_link = proc_follow_link,
};
-/*
- * As some entries in /proc are volatile, we want to
- * get rid of unused dentries. This could be made
- * smarter: we could keep a "volatile" flag in the
- * inode to indicate which ones to keep.
- */
-static int proc_delete_dentry(const struct dentry * dentry)
-{
- return 1;
-}
-
-static const struct dentry_operations proc_dentry_operations =
-{
- .d_delete = proc_delete_dentry,
-};
-
/*
* Don't create negative dentries here, return -ENOENT by hand
* instead.
inode = proc_get_inode(dir->i_sb, de);
if (!inode)
return ERR_PTR(-ENOMEM);
- d_set_d_op(dentry, &proc_dentry_operations);
+ d_set_d_op(dentry, &simple_dentry_operations);
d_add(dentry, inode);
return NULL;
}
{
struct proc_dir_entry *pde = PDE(file_inode(file));
unsigned long rv = -EIO;
- unsigned long (*get_area)(struct file *, unsigned long, unsigned long,
- unsigned long, unsigned long) = NULL;
+
if (use_pde(pde)) {
+ typeof(proc_reg_get_unmapped_area) *get_area;
+
+ get_area = pde->proc_fops->get_unmapped_area;
#ifdef CONFIG_MMU
- get_area = current->mm->get_unmapped_area;
+ if (!get_area)
+ get_area = current->mm->get_unmapped_area;
#endif
- if (pde->proc_fops->get_unmapped_area)
- get_area = pde->proc_fops->get_unmapped_area;
+
if (get_area)
rv = get_area(file, orig_addr, len, pgoff, flags);
+ else
+ rv = orig_addr;
unuse_pde(pde);
}
return rv;
.setattr = proc_setattr,
};
-static int ns_delete_dentry(const struct dentry *dentry)
-{
- /* Don't cache namespace inodes when not in use */
- return 1;
-}
-
static char *ns_dname(struct dentry *dentry, char *buffer, int buflen)
{
struct inode *inode = dentry->d_inode;
const struct dentry_operations ns_dentry_operations =
{
- .d_delete = ns_delete_dentry,
+ .d_delete = always_delete_dentry,
.d_dname = ns_dname,
};
pstore_get_records(0);
kmsg_dump_register(&pstore_dumper);
- pstore_register_console();
- pstore_register_ftrace();
+
+ if ((psi->flags & PSTORE_FLAGS_FRAGILE) == 0) {
+ pstore_register_console();
+ pstore_register_ftrace();
+ }
if (pstore_update_ms >= 0) {
pstore_timer.expires = jiffies +
If unsure, say N.
+choice
+ prompt "File decompression options"
+ depends on SQUASHFS
+ help
+	  Squashfs supports two options for decompressing file
+	  data.  Traditionally Squashfs has decompressed into an
+	  intermediate buffer and then copied it into the page cache.
+	  Squashfs now also supports decompressing directly into
+	  the page cache.
+
+ If unsure, select "Decompress file data into an intermediate buffer"
+
+config SQUASHFS_FILE_CACHE
+ bool "Decompress file data into an intermediate buffer"
+ help
+	  Decompress file data into an intermediate buffer and then
+	  copy it into the page cache.
+
+config SQUASHFS_FILE_DIRECT
+ bool "Decompress files directly into the page cache"
+ help
+ Directly decompress file data into the page cache.
+ Doing so can significantly improve performance because
+ it eliminates a memcpy and it also removes the lock contention
+ on the single buffer.
+
+endchoice
+
+choice
+ prompt "Decompressor parallelisation options"
+ depends on SQUASHFS
+ help
+ Squashfs now supports three parallelisation options for
+ decompression. Each one exhibits various trade-offs between
+ decompression performance and CPU and memory usage.
+
+	  If in doubt, select "Single threaded decompression"
+
+config SQUASHFS_DECOMP_SINGLE
+ bool "Single threaded compression"
+ help
+ Traditionally Squashfs has used single-threaded decompression.
+ Only one block (data or metadata) can be decompressed at any
+ one time. This limits CPU and memory usage to a minimum.
+
+config SQUASHFS_DECOMP_MULTI
+ bool "Use multiple decompressors for parallel I/O"
+ help
+	  By default Squashfs uses a single decompressor, which gives
+	  poor performance on parallel I/O workloads on multi-CPU
+	  machines, since readers must wait for it to become available.
+
+ If you have a parallel I/O workload and your system has enough memory,
+ using this option may improve overall I/O performance.
+
+ This decompressor implementation uses up to two parallel
+ decompressors per core. It dynamically allocates decompressors
+ on a demand basis.
+
+config SQUASHFS_DECOMP_MULTI_PERCPU
+ bool "Use percpu multiple decompressors for parallel I/O"
+ help
+	  By default Squashfs uses a single decompressor, which gives
+	  poor performance on parallel I/O workloads on multi-CPU
+	  machines, since readers must wait for it to become available.
+
+ This decompressor implementation uses a maximum of one
+ decompressor per core. It uses percpu variables to ensure
+ decompression is load-balanced across the cores.
+
+endchoice
+
config SQUASHFS_XATTR
bool "Squashfs XATTR support"
depends on SQUASHFS
obj-$(CONFIG_SQUASHFS) += squashfs.o
squashfs-y += block.o cache.o dir.o export.o file.o fragment.o id.o inode.o
squashfs-y += namei.o super.o symlink.o decompressor.o
+squashfs-$(CONFIG_SQUASHFS_FILE_CACHE) += file_cache.o
+squashfs-$(CONFIG_SQUASHFS_FILE_DIRECT) += file_direct.o page_actor.o
+squashfs-$(CONFIG_SQUASHFS_DECOMP_SINGLE) += decompressor_single.o
+squashfs-$(CONFIG_SQUASHFS_DECOMP_MULTI) += decompressor_multi.o
+squashfs-$(CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU) += decompressor_multi_percpu.o
squashfs-$(CONFIG_SQUASHFS_XATTR) += xattr.o xattr_id.o
squashfs-$(CONFIG_SQUASHFS_LZO) += lzo_wrapper.o
squashfs-$(CONFIG_SQUASHFS_XZ) += xz_wrapper.o
#include "squashfs_fs_sb.h"
#include "squashfs.h"
#include "decompressor.h"
+#include "page_actor.h"
/*
* Read the metadata block length, this is stored in the first two
* generated a larger block - this does occasionally happen with compression
* algorithms).
*/
-int squashfs_read_data(struct super_block *sb, void **buffer, u64 index,
- int length, u64 *next_index, int srclength, int pages)
+int squashfs_read_data(struct super_block *sb, u64 index, int length,
+ u64 *next_index, struct squashfs_page_actor *output)
{
struct squashfs_sb_info *msblk = sb->s_fs_info;
struct buffer_head **bh;
int offset = index & ((1 << msblk->devblksize_log2) - 1);
u64 cur_index = index >> msblk->devblksize_log2;
- int bytes, compressed, b = 0, k = 0, page = 0, avail;
+ int bytes, compressed, b = 0, k = 0, avail, i;
- bh = kcalloc(((srclength + msblk->devblksize - 1)
+ bh = kcalloc(((output->length + msblk->devblksize - 1)
>> msblk->devblksize_log2) + 1, sizeof(*bh), GFP_KERNEL);
if (bh == NULL)
return -ENOMEM;
*next_index = index + length;
TRACE("Block @ 0x%llx, %scompressed size %d, src size %d\n",
- index, compressed ? "" : "un", length, srclength);
+ index, compressed ? "" : "un", length, output->length);
- if (length < 0 || length > srclength ||
+ if (length < 0 || length > output->length ||
(index + length) > msblk->bytes_used)
goto read_failure;
TRACE("Block @ 0x%llx, %scompressed size %d\n", index,
compressed ? "" : "un", length);
- if (length < 0 || length > srclength ||
+ if (length < 0 || length > output->length ||
(index + length) > msblk->bytes_used)
goto block_release;
ll_rw_block(READ, b - 1, bh + 1);
}
+ for (i = 0; i < b; i++) {
+ wait_on_buffer(bh[i]);
+ if (!buffer_uptodate(bh[i]))
+ goto block_release;
+ }
+
if (compressed) {
- length = squashfs_decompress(msblk, buffer, bh, b, offset,
- length, srclength, pages);
+ length = squashfs_decompress(msblk, bh, b, offset, length,
+ output);
if (length < 0)
goto read_failure;
} else {
* Block is uncompressed.
*/
int in, pg_offset = 0;
+ void *data = squashfs_first_page(output);
for (bytes = length; k < b; k++) {
in = min(bytes, msblk->devblksize - offset);
bytes -= in;
- wait_on_buffer(bh[k]);
- if (!buffer_uptodate(bh[k]))
- goto block_release;
while (in) {
if (pg_offset == PAGE_CACHE_SIZE) {
- page++;
+ data = squashfs_next_page(output);
pg_offset = 0;
}
avail = min_t(int, in, PAGE_CACHE_SIZE -
pg_offset);
- memcpy(buffer[page] + pg_offset,
- bh[k]->b_data + offset, avail);
+ memcpy(data + pg_offset, bh[k]->b_data + offset,
+ avail);
in -= avail;
pg_offset += avail;
offset += avail;
offset = 0;
put_bh(bh[k]);
}
+ squashfs_finish_page(output);
}
kfree(bh);
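The call sites below (the cache code and the table reader) all follow the same
shape: wrap the destination pages in an actor, hand it to squashfs_read_data(),
then free the actor. Condensed from those hunks:

	struct squashfs_page_actor *actor;

	actor = squashfs_page_actor_init(data, pages, length);
	if (actor == NULL)
		return -ENOMEM;		/* sketch: real callers also clean up */
	err = squashfs_read_data(sb, block, length, NULL, actor);
	kfree(actor);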
#include "squashfs_fs.h"
#include "squashfs_fs_sb.h"
#include "squashfs.h"
+#include "page_actor.h"
/*
* Look-up block in cache, and increment usage count. If not in cache, read
entry->error = 0;
spin_unlock(&cache->lock);
- entry->length = squashfs_read_data(sb, entry->data,
- block, length, &entry->next_index,
- cache->block_size, cache->pages);
+ entry->length = squashfs_read_data(sb, block, length,
+ &entry->next_index, entry->actor);
spin_lock(&cache->lock);
kfree(cache->entry[i].data[j]);
kfree(cache->entry[i].data);
}
+ kfree(cache->entry[i].actor);
}
kfree(cache->entry);
goto cleanup;
}
}
+
+ entry->actor = squashfs_page_actor_init(entry->data,
+ cache->pages, 0);
+ if (entry->actor == NULL) {
+ ERROR("Failed to allocate %s cache entry\n", name);
+ goto cleanup;
+ }
}
return cache;
int pages = (length + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
int i, res;
void *table, *buffer, **data;
+ struct squashfs_page_actor *actor;
table = buffer = kmalloc(length, GFP_KERNEL);
if (table == NULL)
goto failed;
}
+ actor = squashfs_page_actor_init(data, pages, length);
+ if (actor == NULL) {
+ res = -ENOMEM;
+ goto failed2;
+ }
+
for (i = 0; i < pages; i++, buffer += PAGE_CACHE_SIZE)
data[i] = buffer;
- res = squashfs_read_data(sb, data, block, length |
- SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, length, pages);
+ res = squashfs_read_data(sb, block, length |
+ SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, actor);
kfree(data);
+ kfree(actor);
if (res < 0)
goto failed;
return table;
+failed2:
+ kfree(data);
failed:
kfree(table);
return ERR_PTR(res);
#include "squashfs_fs_sb.h"
#include "decompressor.h"
#include "squashfs.h"
+#include "page_actor.h"
/*
* This file (and decompressor.h) implements a decompressor framework for
*/
static const struct squashfs_decompressor squashfs_lzma_unsupported_comp_ops = {
- NULL, NULL, NULL, LZMA_COMPRESSION, "lzma", 0
+ NULL, NULL, NULL, NULL, LZMA_COMPRESSION, "lzma", 0
};
#ifndef CONFIG_SQUASHFS_LZO
static const struct squashfs_decompressor squashfs_lzo_comp_ops = {
- NULL, NULL, NULL, LZO_COMPRESSION, "lzo", 0
+ NULL, NULL, NULL, NULL, LZO_COMPRESSION, "lzo", 0
};
#endif
#ifndef CONFIG_SQUASHFS_XZ
static const struct squashfs_decompressor squashfs_xz_comp_ops = {
- NULL, NULL, NULL, XZ_COMPRESSION, "xz", 0
+ NULL, NULL, NULL, NULL, XZ_COMPRESSION, "xz", 0
};
#endif
#ifndef CONFIG_SQUASHFS_ZLIB
static const struct squashfs_decompressor squashfs_zlib_comp_ops = {
- NULL, NULL, NULL, ZLIB_COMPRESSION, "zlib", 0
+ NULL, NULL, NULL, NULL, ZLIB_COMPRESSION, "zlib", 0
};
#endif
static const struct squashfs_decompressor squashfs_unknown_comp_ops = {
- NULL, NULL, NULL, 0, "unknown", 0
+ NULL, NULL, NULL, NULL, 0, "unknown", 0
};
static const struct squashfs_decompressor *decompressor[] = {
}
-void *squashfs_decompressor_init(struct super_block *sb, unsigned short flags)
+static void *get_comp_opts(struct super_block *sb, unsigned short flags)
{
struct squashfs_sb_info *msblk = sb->s_fs_info;
- void *strm, *buffer = NULL;
+ void *buffer = NULL, *comp_opts;
+ struct squashfs_page_actor *actor = NULL;
int length = 0;
/*
*/
if (SQUASHFS_COMP_OPTS(flags)) {
buffer = kmalloc(PAGE_CACHE_SIZE, GFP_KERNEL);
- if (buffer == NULL)
- return ERR_PTR(-ENOMEM);
+ if (buffer == NULL) {
+ comp_opts = ERR_PTR(-ENOMEM);
+ goto out;
+ }
+
+ actor = squashfs_page_actor_init(&buffer, 1, 0);
+ if (actor == NULL) {
+ comp_opts = ERR_PTR(-ENOMEM);
+ goto out;
+ }
- length = squashfs_read_data(sb, &buffer,
- sizeof(struct squashfs_super_block), 0, NULL,
- PAGE_CACHE_SIZE, 1);
+ length = squashfs_read_data(sb,
+ sizeof(struct squashfs_super_block), 0, NULL, actor);
if (length < 0) {
- strm = ERR_PTR(length);
- goto finished;
+ comp_opts = ERR_PTR(length);
+ goto out;
}
}
- strm = msblk->decompressor->init(msblk, buffer, length);
+ comp_opts = squashfs_comp_opts(msblk, buffer, length);
-finished:
+out:
+ kfree(actor);
kfree(buffer);
+ return comp_opts;
+}
+
+
+void *squashfs_decompressor_setup(struct super_block *sb, unsigned short flags)
+{
+ struct squashfs_sb_info *msblk = sb->s_fs_info;
+ void *stream, *comp_opts = get_comp_opts(sb, flags);
+
+ if (IS_ERR(comp_opts))
+ return comp_opts;
+
+ stream = squashfs_decompressor_create(msblk, comp_opts);
+ if (IS_ERR(stream))
+ kfree(comp_opts);
- return strm;
+ return stream;
}
*/
struct squashfs_decompressor {
- void *(*init)(struct squashfs_sb_info *, void *, int);
+ void *(*init)(struct squashfs_sb_info *, void *);
+ void *(*comp_opts)(struct squashfs_sb_info *, void *, int);
void (*free)(void *);
- int (*decompress)(struct squashfs_sb_info *, void **,
- struct buffer_head **, int, int, int, int, int);
+ int (*decompress)(struct squashfs_sb_info *, void *,
+ struct buffer_head **, int, int, int,
+ struct squashfs_page_actor *);
int id;
char *name;
int supported;
};
-static inline void squashfs_decompressor_free(struct squashfs_sb_info *msblk,
- void *s)
+static inline void *squashfs_comp_opts(struct squashfs_sb_info *msblk,
+ void *buff, int length)
{
- if (msblk->decompressor)
- msblk->decompressor->free(s);
-}
-
-static inline int squashfs_decompress(struct squashfs_sb_info *msblk,
- void **buffer, struct buffer_head **bh, int b, int offset, int length,
- int srclength, int pages)
-{
- return msblk->decompressor->decompress(msblk, buffer, bh, b, offset,
- length, srclength, pages);
+ return msblk->decompressor->comp_opts ?
+ msblk->decompressor->comp_opts(msblk, buff, length) : NULL;
}
#ifdef CONFIG_SQUASHFS_XZ
--- /dev/null
+/*
+ * Copyright (c) 2013
+ * Minchan Kim <minchan@kernel.org>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/buffer_head.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/cpumask.h>
+
+#include "squashfs_fs.h"
+#include "squashfs_fs_sb.h"
+#include "decompressor.h"
+#include "squashfs.h"
+
+/*
+ * This file implements multi-threaded decompression in the
+ * decompressor framework
+ */
+
+
+/*
+ * The multiplier of two is there because a CPU can issue a new
+ * I/O request while it is still waiting on a previous one.
+ */
+#define MAX_DECOMPRESSOR (num_online_cpus() * 2)
+
+
+int squashfs_max_decompressors(void)
+{
+ return MAX_DECOMPRESSOR;
+}
+
+
+struct squashfs_stream {
+ void *comp_opts;
+ struct list_head strm_list;
+ struct mutex mutex;
+ int avail_decomp;
+ wait_queue_head_t wait;
+};
+
+
+struct decomp_stream {
+ void *stream;
+ struct list_head list;
+};
+
+
+static void put_decomp_stream(struct decomp_stream *decomp_strm,
+ struct squashfs_stream *stream)
+{
+ mutex_lock(&stream->mutex);
+ list_add(&decomp_strm->list, &stream->strm_list);
+ mutex_unlock(&stream->mutex);
+ wake_up(&stream->wait);
+}
+
+void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
+ void *comp_opts)
+{
+ struct squashfs_stream *stream;
+ struct decomp_stream *decomp_strm = NULL;
+ int err = -ENOMEM;
+
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ goto out;
+
+ stream->comp_opts = comp_opts;
+ mutex_init(&stream->mutex);
+ INIT_LIST_HEAD(&stream->strm_list);
+ init_waitqueue_head(&stream->wait);
+
+ /*
+	 * Always allocate at least one default decompressor, so
+	 * that if dynamically allocating a new one fails later we
+	 * can fall back to the default and the filesystem keeps
+	 * working.
+ */
+ decomp_strm = kmalloc(sizeof(*decomp_strm), GFP_KERNEL);
+ if (!decomp_strm)
+ goto out;
+
+ decomp_strm->stream = msblk->decompressor->init(msblk,
+ stream->comp_opts);
+ if (IS_ERR(decomp_strm->stream)) {
+ err = PTR_ERR(decomp_strm->stream);
+ goto out;
+ }
+
+ list_add(&decomp_strm->list, &stream->strm_list);
+ stream->avail_decomp = 1;
+ return stream;
+
+out:
+ kfree(decomp_strm);
+ kfree(stream);
+ return ERR_PTR(err);
+}
+
+
+void squashfs_decompressor_destroy(struct squashfs_sb_info *msblk)
+{
+ struct squashfs_stream *stream = msblk->stream;
+ if (stream) {
+ struct decomp_stream *decomp_strm;
+
+ while (!list_empty(&stream->strm_list)) {
+ decomp_strm = list_entry(stream->strm_list.prev,
+ struct decomp_stream, list);
+ list_del(&decomp_strm->list);
+ msblk->decompressor->free(decomp_strm->stream);
+ kfree(decomp_strm);
+ stream->avail_decomp--;
+ }
+ WARN_ON(stream->avail_decomp);
+ kfree(stream->comp_opts);
+ kfree(stream);
+ }
+}
+
+
+static struct decomp_stream *get_decomp_stream(struct squashfs_sb_info *msblk,
+ struct squashfs_stream *stream)
+{
+ struct decomp_stream *decomp_strm;
+
+ while (1) {
+ mutex_lock(&stream->mutex);
+
+		/* There is an available decomp_stream */
+ if (!list_empty(&stream->strm_list)) {
+ decomp_strm = list_entry(stream->strm_list.prev,
+ struct decomp_stream, list);
+ list_del(&decomp_strm->list);
+ mutex_unlock(&stream->mutex);
+ break;
+ }
+
+		/*
+		 * If no decompressor is available and the pool is
+		 * already full, wait for another user to release one.
+		 */
+ if (stream->avail_decomp >= MAX_DECOMPRESSOR)
+ goto wait;
+
+		/* Let's allocate a new decomp */
+ decomp_strm = kmalloc(sizeof(*decomp_strm), GFP_KERNEL);
+ if (!decomp_strm)
+ goto wait;
+
+ decomp_strm->stream = msblk->decompressor->init(msblk,
+ stream->comp_opts);
+ if (IS_ERR(decomp_strm->stream)) {
+ kfree(decomp_strm);
+ goto wait;
+ }
+
+ stream->avail_decomp++;
+ WARN_ON(stream->avail_decomp > MAX_DECOMPRESSOR);
+
+ mutex_unlock(&stream->mutex);
+ break;
+wait:
+		/*
+		 * If system memory is tight, wait for another user to
+		 * release a decompressor instead of pressuring the VM,
+		 * which could cause page cache thrashing.
+		 */
+ mutex_unlock(&stream->mutex);
+ wait_event(stream->wait,
+ !list_empty(&stream->strm_list));
+ }
+
+ return decomp_strm;
+}
+
+
+int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
+ int b, int offset, int length, struct squashfs_page_actor *output)
+{
+ int res;
+ struct squashfs_stream *stream = msblk->stream;
+ struct decomp_stream *decomp_stream = get_decomp_stream(msblk, stream);
+ res = msblk->decompressor->decompress(msblk, decomp_stream->stream,
+ bh, b, offset, length, output);
+ put_decomp_stream(decomp_stream, stream);
+ if (res < 0)
+ ERROR("%s decompression failed, data probably corrupt\n",
+ msblk->decompressor->name);
+ return res;
+}
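The file above is built around a classic bounded-pool idiom: a mutex-protected free list plus a wait queue, where wake_up() in put_decomp_stream() pairs with a wait_event() whose predicate is re-checked under the mutex on the next loop pass, so spurious wakeups and races with other waiters are harmless. Reduced to its skeleton (an illustrative sketch only, not part of the patch; struct obj, pool_get() and pool_put() are hypothetical names):

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/wait.h>

struct obj {
	struct list_head list;
};

struct pool {
	struct list_head free;	/* idle objects */
	struct mutex lock;	/* protects 'free' and 'count' */
	int count;		/* objects allocated so far */
	wait_queue_head_t wait;	/* tasks waiting for an idle object */
};

static struct obj *pool_get(struct pool *p, int max)
{
	struct obj *o;

	while (1) {
		mutex_lock(&p->lock);
		if (!list_empty(&p->free)) {	/* reuse an idle object */
			o = list_first_entry(&p->free, struct obj, list);
			list_del(&o->list);
			mutex_unlock(&p->lock);
			return o;
		}
		if (p->count < max) {		/* grow the pool, best effort */
			o = kmalloc(sizeof(*o), GFP_KERNEL);
			if (o) {
				p->count++;
				mutex_unlock(&p->lock);
				return o;
			}
		}
		mutex_unlock(&p->lock);
		/* the predicate is re-checked under the lock next pass */
		wait_event(p->wait, !list_empty(&p->free));
	}
}

static void pool_put(struct pool *p, struct obj *o)
{
	mutex_lock(&p->lock);
	list_add(&o->list, &p->free);
	mutex_unlock(&p->lock);
	wake_up(&p->wait);
}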
--- /dev/null
+/*
+ * Copyright (c) 2013
+ * Phillip Lougher <phillip@squashfs.org.uk>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/percpu.h>
+#include <linux/buffer_head.h>
+
+#include "squashfs_fs.h"
+#include "squashfs_fs_sb.h"
+#include "decompressor.h"
+#include "squashfs.h"
+
+/*
+ * This file implements multi-threaded decompression using percpu
+ * variables, one decompressor stream per cpu core.
+ */
+
+struct squashfs_stream {
+ void *stream;
+};
+
+void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
+ void *comp_opts)
+{
+ struct squashfs_stream *stream;
+ struct squashfs_stream __percpu *percpu;
+ int err, cpu;
+
+ percpu = alloc_percpu(struct squashfs_stream);
+ if (percpu == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ for_each_possible_cpu(cpu) {
+ stream = per_cpu_ptr(percpu, cpu);
+ stream->stream = msblk->decompressor->init(msblk, comp_opts);
+ if (IS_ERR(stream->stream)) {
+ err = PTR_ERR(stream->stream);
+ goto out;
+ }
+ }
+
+ kfree(comp_opts);
+ return (__force void *) percpu;
+
+out:
+ for_each_possible_cpu(cpu) {
+ stream = per_cpu_ptr(percpu, cpu);
+ if (!IS_ERR_OR_NULL(stream->stream))
+ msblk->decompressor->free(stream->stream);
+ }
+ free_percpu(percpu);
+ return ERR_PTR(err);
+}
+
+void squashfs_decompressor_destroy(struct squashfs_sb_info *msblk)
+{
+ struct squashfs_stream __percpu *percpu =
+ (struct squashfs_stream __percpu *) msblk->stream;
+ struct squashfs_stream *stream;
+ int cpu;
+
+ if (msblk->stream) {
+ for_each_possible_cpu(cpu) {
+ stream = per_cpu_ptr(percpu, cpu);
+ msblk->decompressor->free(stream->stream);
+ }
+ free_percpu(percpu);
+ }
+}
+
+int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
+ int b, int offset, int length, struct squashfs_page_actor *output)
+{
+ struct squashfs_stream __percpu *percpu =
+ (struct squashfs_stream __percpu *) msblk->stream;
+ struct squashfs_stream *stream = get_cpu_ptr(percpu);
+ int res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+ offset, length, output);
+ put_cpu_ptr(stream);
+
+ if (res < 0)
+ ERROR("%s decompression failed, data probably corrupt\n",
+ msblk->decompressor->name);
+
+ return res;
+}
+
+int squashfs_max_decompressors(void)
+{
+ return num_possible_cpus();
+}
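In this variant, get_cpu_ptr() in squashfs_decompress() disables preemption for as long as the per-cpu stream is in use; that is what makes the lockless access safe, and the flip side is that the decompressor must not sleep while it runs. Schematically (illustrative sketch only; percpu_use() is a hypothetical name):

#include <linux/percpu.h>

/* Illustrative: the per-cpu access pattern used by this variant. */
static int percpu_use(struct squashfs_stream __percpu *percpu)
{
	/* Disables preemption; we own this CPU's stream until put. */
	struct squashfs_stream *stream = get_cpu_ptr(percpu);
	int res = 0;

	/* ... work with stream->stream; must not sleep here ... */

	put_cpu_ptr(stream);	/* re-enables preemption */
	return res;
}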
--- /dev/null
+/*
+ * Copyright (c) 2013
+ * Phillip Lougher <phillip@squashfs.org.uk>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/buffer_head.h>
+
+#include "squashfs_fs.h"
+#include "squashfs_fs_sb.h"
+#include "decompressor.h"
+#include "squashfs.h"
+
+/*
+ * This file implements single-threaded decompression in the
+ * decompressor framework
+ */
+
+struct squashfs_stream {
+ void *stream;
+ struct mutex mutex;
+};
+
+void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
+ void *comp_opts)
+{
+ struct squashfs_stream *stream;
+ int err = -ENOMEM;
+
+ stream = kmalloc(sizeof(*stream), GFP_KERNEL);
+ if (stream == NULL)
+ goto out;
+
+ stream->stream = msblk->decompressor->init(msblk, comp_opts);
+ if (IS_ERR(stream->stream)) {
+ err = PTR_ERR(stream->stream);
+ goto out;
+ }
+
+ kfree(comp_opts);
+ mutex_init(&stream->mutex);
+ return stream;
+
+out:
+ kfree(stream);
+ return ERR_PTR(err);
+}
+
+void squashfs_decompressor_destroy(struct squashfs_sb_info *msblk)
+{
+ struct squashfs_stream *stream = msblk->stream;
+
+ if (stream) {
+ msblk->decompressor->free(stream->stream);
+ kfree(stream);
+ }
+}
+
+int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
+ int b, int offset, int length, struct squashfs_page_actor *output)
+{
+ int res;
+ struct squashfs_stream *stream = msblk->stream;
+
+ mutex_lock(&stream->mutex);
+ res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+ offset, length, output);
+ mutex_unlock(&stream->mutex);
+
+ if (res < 0)
+ ERROR("%s decompression failed, data probably corrupt\n",
+ msblk->decompressor->name);
+
+ return res;
+}
+
+int squashfs_max_decompressors(void)
+{
+ return 1;
+}
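All three variants above implement the same create/destroy/decompress/max-decompressors contract, so the rest of squashfs is variant-agnostic: the new squashfs_decompressor_setup() (declared in squashfs.h below) only has to parse the on-disk compressor options and hand them to whichever squashfs_decompressor_create() was built in. A plausible sketch of that glue, assuming a get_comp_opts() helper built on the squashfs_comp_opts() wrapper shown at the top of this section:

#include <linux/fs.h>
#include <linux/slab.h>
#include "squashfs_fs_sb.h"
#include "decompressor.h"
#include "squashfs.h"

/*
 * Sketch only: the setup glue implied by the interfaces in this patch.
 * get_comp_opts() is an assumed helper wrapping squashfs_comp_opts().
 */
void *squashfs_decompressor_setup(struct super_block *sb, unsigned short flags)
{
	struct squashfs_sb_info *msblk = sb->s_fs_info;
	void *stream, *comp_opts = get_comp_opts(sb, flags);

	if (IS_ERR(comp_opts))
		return comp_opts;

	/* Hand the parsed options to the variant selected at build time. */
	stream = squashfs_decompressor_create(msblk, comp_opts);
	if (IS_ERR(stream))
		kfree(comp_opts);

	return stream;
}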
return le32_to_cpu(size);
}
-
-static int squashfs_readpage(struct file *file, struct page *page)
+/* Copy data into page cache */
+void squashfs_copy_cache(struct page *page, struct squashfs_cache_entry *buffer,
+ int bytes, int offset)
{
struct inode *inode = page->mapping->host;
struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
- int bytes, i, offset = 0, sparse = 0;
- struct squashfs_cache_entry *buffer = NULL;
void *pageaddr;
-
- int mask = (1 << (msblk->block_log - PAGE_CACHE_SHIFT)) - 1;
- int index = page->index >> (msblk->block_log - PAGE_CACHE_SHIFT);
- int start_index = page->index & ~mask;
- int end_index = start_index | mask;
- int file_end = i_size_read(inode) >> msblk->block_log;
-
- TRACE("Entered squashfs_readpage, page index %lx, start block %llx\n",
- page->index, squashfs_i(inode)->start);
-
- if (page->index >= ((i_size_read(inode) + PAGE_CACHE_SIZE - 1) >>
- PAGE_CACHE_SHIFT))
- goto out;
-
- if (index < file_end || squashfs_i(inode)->fragment_block ==
- SQUASHFS_INVALID_BLK) {
- /*
- * Reading a datablock from disk. Need to read block list
- * to get location and block size.
- */
- u64 block = 0;
- int bsize = read_blocklist(inode, index, &block);
- if (bsize < 0)
- goto error_out;
-
- if (bsize == 0) { /* hole */
- bytes = index == file_end ?
- (i_size_read(inode) & (msblk->block_size - 1)) :
- msblk->block_size;
- sparse = 1;
- } else {
- /*
- * Read and decompress datablock.
- */
- buffer = squashfs_get_datablock(inode->i_sb,
- block, bsize);
- if (buffer->error) {
- ERROR("Unable to read page, block %llx, size %x"
- "\n", block, bsize);
- squashfs_cache_put(buffer);
- goto error_out;
- }
- bytes = buffer->length;
- }
- } else {
- /*
- * Datablock is stored inside a fragment (tail-end packed
- * block).
- */
- buffer = squashfs_get_fragment(inode->i_sb,
- squashfs_i(inode)->fragment_block,
- squashfs_i(inode)->fragment_size);
-
- if (buffer->error) {
- ERROR("Unable to read page, block %llx, size %x\n",
- squashfs_i(inode)->fragment_block,
- squashfs_i(inode)->fragment_size);
- squashfs_cache_put(buffer);
- goto error_out;
- }
- bytes = i_size_read(inode) & (msblk->block_size - 1);
- offset = squashfs_i(inode)->fragment_offset;
- }
+ int i, mask = (1 << (msblk->block_log - PAGE_CACHE_SHIFT)) - 1;
+ int start_index = page->index & ~mask, end_index = start_index | mask;
/*
* Loop copying datablock into pages. As the datablock likely covers
for (i = start_index; i <= end_index && bytes > 0; i++,
bytes -= PAGE_CACHE_SIZE, offset += PAGE_CACHE_SIZE) {
struct page *push_page;
- int avail = sparse ? 0 : min_t(int, bytes, PAGE_CACHE_SIZE);
+ int avail = buffer ? min_t(int, bytes, PAGE_CACHE_SIZE) : 0;
TRACE("bytes %d, i %d, available_bytes %d\n", bytes, i, avail);
if (i != page->index)
page_cache_release(push_page);
}
+}
+
+/* Read datablock stored packed inside a fragment (tail-end packed block) */
+static int squashfs_readpage_fragment(struct page *page)
+{
+ struct inode *inode = page->mapping->host;
+ struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
+ struct squashfs_cache_entry *buffer = squashfs_get_fragment(inode->i_sb,
+ squashfs_i(inode)->fragment_block,
+ squashfs_i(inode)->fragment_size);
+ int res = buffer->error;
+
+ if (res)
+ ERROR("Unable to read page, block %llx, size %x\n",
+ squashfs_i(inode)->fragment_block,
+ squashfs_i(inode)->fragment_size);
+ else
+ squashfs_copy_cache(page, buffer, i_size_read(inode) &
+ (msblk->block_size - 1),
+ squashfs_i(inode)->fragment_offset);
+
+ squashfs_cache_put(buffer);
+ return res;
+}
- if (!sparse)
- squashfs_cache_put(buffer);
+static int squashfs_readpage_sparse(struct page *page, int index, int file_end)
+{
+ struct inode *inode = page->mapping->host;
+ struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
+ int bytes = index == file_end ?
+ (i_size_read(inode) & (msblk->block_size - 1)) :
+ msblk->block_size;
+ squashfs_copy_cache(page, NULL, bytes, 0);
return 0;
+}
+
+static int squashfs_readpage(struct file *file, struct page *page)
+{
+ struct inode *inode = page->mapping->host;
+ struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
+ int index = page->index >> (msblk->block_log - PAGE_CACHE_SHIFT);
+ int file_end = i_size_read(inode) >> msblk->block_log;
+ int res;
+ void *pageaddr;
+
+ TRACE("Entered squashfs_readpage, page index %lx, start block %llx\n",
+ page->index, squashfs_i(inode)->start);
+
+ if (page->index >= ((i_size_read(inode) + PAGE_CACHE_SIZE - 1) >>
+ PAGE_CACHE_SHIFT))
+ goto out;
+
+ if (index < file_end || squashfs_i(inode)->fragment_block ==
+ SQUASHFS_INVALID_BLK) {
+ u64 block = 0;
+ int bsize = read_blocklist(inode, index, &block);
+ if (bsize < 0)
+ goto error_out;
+
+ if (bsize == 0)
+ res = squashfs_readpage_sparse(page, index, file_end);
+ else
+ res = squashfs_readpage_block(page, block, bsize);
+ } else
+ res = squashfs_readpage_fragment(page);
+
+ if (!res)
+ return 0;
error_out:
SetPageError(page);
--- /dev/null
+/*
+ * Copyright (c) 2013
+ * Phillip Lougher <phillip@squashfs.org.uk>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/fs.h>
+#include <linux/vfs.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/pagemap.h>
+#include <linux/mutex.h>
+
+#include "squashfs_fs.h"
+#include "squashfs_fs_sb.h"
+#include "squashfs_fs_i.h"
+#include "squashfs.h"
+
+/* Read separately compressed datablock and memcopy into page cache */
+int squashfs_readpage_block(struct page *page, u64 block, int bsize)
+{
+ struct inode *i = page->mapping->host;
+ struct squashfs_cache_entry *buffer = squashfs_get_datablock(i->i_sb,
+ block, bsize);
+ int res = buffer->error;
+
+ if (res)
+ ERROR("Unable to read page, block %llx, size %x\n", block,
+ bsize);
+ else
+ squashfs_copy_cache(page, buffer, buffer->length, 0);
+
+ squashfs_cache_put(buffer);
+ return res;
+}
--- /dev/null
+/*
+ * Copyright (c) 2013
+ * Phillip Lougher <phillip@squashfs.org.uk>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/fs.h>
+#include <linux/vfs.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/pagemap.h>
+#include <linux/mutex.h>
+
+#include "squashfs_fs.h"
+#include "squashfs_fs_sb.h"
+#include "squashfs_fs_i.h"
+#include "squashfs.h"
+#include "page_actor.h"
+
+static int squashfs_read_cache(struct page *target_page, u64 block, int bsize,
+ int pages, struct page **page);
+
+/* Read separately compressed datablock directly into page cache */
+int squashfs_readpage_block(struct page *target_page, u64 block, int bsize)
+{
+ struct inode *inode = target_page->mapping->host;
+ struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
+
+ int file_end = (i_size_read(inode) - 1) >> PAGE_CACHE_SHIFT;
+ int mask = (1 << (msblk->block_log - PAGE_CACHE_SHIFT)) - 1;
+ int start_index = target_page->index & ~mask;
+ int end_index = start_index | mask;
+ int i, n, pages, missing_pages, bytes, res = -ENOMEM;
+ struct page **page;
+ struct squashfs_page_actor *actor;
+ void *pageaddr;
+
+ if (end_index > file_end)
+ end_index = file_end;
+
+ pages = end_index - start_index + 1;
+
+ page = kmalloc(sizeof(void *) * pages, GFP_KERNEL);
+ if (page == NULL)
+ return res;
+
+ /*
+ * Create a "page actor" which will kmap and kunmap the
+ * page cache pages appropriately within the decompressor
+ */
+ actor = squashfs_page_actor_init_special(page, pages, 0);
+ if (actor == NULL)
+ goto out;
+
+ /* Try to grab all the pages covered by the Squashfs block */
+ for (missing_pages = 0, i = 0, n = start_index; i < pages; i++, n++) {
+ page[i] = (n == target_page->index) ? target_page :
+ grab_cache_page_nowait(target_page->mapping, n);
+
+ if (page[i] == NULL) {
+ missing_pages++;
+ continue;
+ }
+
+ if (PageUptodate(page[i])) {
+ unlock_page(page[i]);
+ page_cache_release(page[i]);
+ page[i] = NULL;
+ missing_pages++;
+ }
+ }
+
+ if (missing_pages) {
+		/*
+		 * Couldn't get one or more pages.  Either this page has
+		 * been VM reclaimed while others are still in the page
+		 * cache and uptodate, or we're racing with another thread
+		 * in squashfs_readpage also trying to grab them.  Fall
+		 * back to using an intermediate buffer.
+		 */
+ res = squashfs_read_cache(target_page, block, bsize, pages,
+ page);
+ if (res < 0)
+ goto mark_errored;
+
+ goto out;
+ }
+
+ /* Decompress directly into the page cache buffers */
+ res = squashfs_read_data(inode->i_sb, block, bsize, NULL, actor);
+ if (res < 0)
+ goto mark_errored;
+
+ /* Last page may have trailing bytes not filled */
+ bytes = res % PAGE_CACHE_SIZE;
+ if (bytes) {
+ pageaddr = kmap_atomic(page[pages - 1]);
+ memset(pageaddr + bytes, 0, PAGE_CACHE_SIZE - bytes);
+ kunmap_atomic(pageaddr);
+ }
+
+ /* Mark pages as uptodate, unlock and release */
+ for (i = 0; i < pages; i++) {
+ flush_dcache_page(page[i]);
+ SetPageUptodate(page[i]);
+ unlock_page(page[i]);
+ if (page[i] != target_page)
+ page_cache_release(page[i]);
+ }
+
+ kfree(actor);
+ kfree(page);
+
+ return 0;
+
+mark_errored:
+	/*
+	 * Decompression failed; mark the pages as errored.  The
+	 * target_page is dealt with by the caller.
+	 */
+ for (i = 0; i < pages; i++) {
+ if (page[i] == NULL || page[i] == target_page)
+ continue;
+ flush_dcache_page(page[i]);
+ SetPageError(page[i]);
+ unlock_page(page[i]);
+ page_cache_release(page[i]);
+ }
+
+out:
+ kfree(actor);
+ kfree(page);
+ return res;
+}
+
+
+static int squashfs_read_cache(struct page *target_page, u64 block, int bsize,
+ int pages, struct page **page)
+{
+ struct inode *i = target_page->mapping->host;
+ struct squashfs_cache_entry *buffer = squashfs_get_datablock(i->i_sb,
+ block, bsize);
+ int bytes = buffer->length, res = buffer->error, n, offset = 0;
+ void *pageaddr;
+
+ if (res) {
+ ERROR("Unable to read page, block %llx, size %x\n", block,
+ bsize);
+ goto out;
+ }
+
+ for (n = 0; n < pages && bytes > 0; n++,
+ bytes -= PAGE_CACHE_SIZE, offset += PAGE_CACHE_SIZE) {
+ int avail = min_t(int, bytes, PAGE_CACHE_SIZE);
+
+ if (page[n] == NULL)
+ continue;
+
+ pageaddr = kmap_atomic(page[n]);
+ squashfs_copy_data(pageaddr, buffer, offset, avail);
+ memset(pageaddr + avail, 0, PAGE_CACHE_SIZE - avail);
+ kunmap_atomic(pageaddr);
+ flush_dcache_page(page[n]);
+ SetPageUptodate(page[n]);
+ unlock_page(page[n]);
+ if (page[n] != target_page)
+ page_cache_release(page[n]);
+ }
+
+out:
+ squashfs_cache_put(buffer);
+ return res;
+}
#include "squashfs_fs_sb.h"
#include "squashfs.h"
#include "decompressor.h"
+#include "page_actor.h"
struct squashfs_lzo {
void *input;
void *output;
};
-static void *lzo_init(struct squashfs_sb_info *msblk, void *buff, int len)
+static void *lzo_init(struct squashfs_sb_info *msblk, void *buff)
{
int block_size = max_t(int, msblk->block_size, SQUASHFS_METADATA_SIZE);
}
-static int lzo_uncompress(struct squashfs_sb_info *msblk, void **buffer,
- struct buffer_head **bh, int b, int offset, int length, int srclength,
- int pages)
+static int lzo_uncompress(struct squashfs_sb_info *msblk, void *strm,
+ struct buffer_head **bh, int b, int offset, int length,
+ struct squashfs_page_actor *output)
{
- struct squashfs_lzo *stream = msblk->stream;
- void *buff = stream->input;
+ struct squashfs_lzo *stream = strm;
+ void *buff = stream->input, *data;
int avail, i, bytes = length, res;
- size_t out_len = srclength;
-
- mutex_lock(&msblk->read_data_mutex);
+ size_t out_len = output->length;
for (i = 0; i < b; i++) {
- wait_on_buffer(bh[i]);
- if (!buffer_uptodate(bh[i]))
- goto block_release;
-
avail = min(bytes, msblk->devblksize - offset);
memcpy(buff, bh[i]->b_data + offset, avail);
buff += avail;
goto failed;
res = bytes = (int)out_len;
- for (i = 0, buff = stream->output; bytes && i < pages; i++) {
- avail = min_t(int, bytes, PAGE_CACHE_SIZE);
- memcpy(buffer[i], buff, avail);
- buff += avail;
- bytes -= avail;
+ data = squashfs_first_page(output);
+ buff = stream->output;
+ while (data) {
+ if (bytes <= PAGE_CACHE_SIZE) {
+ memcpy(data, buff, bytes);
+ break;
+ } else {
+ memcpy(data, buff, PAGE_CACHE_SIZE);
+ buff += PAGE_CACHE_SIZE;
+ bytes -= PAGE_CACHE_SIZE;
+ data = squashfs_next_page(output);
+ }
}
+ squashfs_finish_page(output);
- mutex_unlock(&msblk->read_data_mutex);
return res;
-block_release:
- for (; i < b; i++)
- put_bh(bh[i]);
-
failed:
- mutex_unlock(&msblk->read_data_mutex);
-
- ERROR("lzo decompression failed, data probably corrupt\n");
return -EIO;
}
--- /dev/null
+/*
+ * Copyright (c) 2013
+ * Phillip Lougher <phillip@squashfs.org.uk>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/pagemap.h>
+#include "page_actor.h"
+
+/*
+ * This file contains implementations of page_actor for decompressing into
+ * an intermediate buffer, and for decompressing directly into the
+ * page cache.
+ *
+ * Calling code must not sleep between calls to squashfs_first_page()
+ * and squashfs_finish_page(): the direct implementation maps pages
+ * with kmap_atomic(), which disables preemption.
+ */
+
+/* Implementation of page_actor for decompressing into intermediate buffer */
+static void *cache_first_page(struct squashfs_page_actor *actor)
+{
+ actor->next_page = 1;
+ return actor->buffer[0];
+}
+
+static void *cache_next_page(struct squashfs_page_actor *actor)
+{
+ if (actor->next_page == actor->pages)
+ return NULL;
+
+ return actor->buffer[actor->next_page++];
+}
+
+static void cache_finish_page(struct squashfs_page_actor *actor)
+{
+ /* empty */
+}
+
+struct squashfs_page_actor *squashfs_page_actor_init(void **buffer,
+ int pages, int length)
+{
+ struct squashfs_page_actor *actor = kmalloc(sizeof(*actor), GFP_KERNEL);
+
+ if (actor == NULL)
+ return NULL;
+
+ actor->length = length ? : pages * PAGE_CACHE_SIZE;
+ actor->buffer = buffer;
+ actor->pages = pages;
+ actor->next_page = 0;
+ actor->squashfs_first_page = cache_first_page;
+ actor->squashfs_next_page = cache_next_page;
+ actor->squashfs_finish_page = cache_finish_page;
+ return actor;
+}
+
+/* Implementation of page_actor for decompressing directly into page cache. */
+static void *direct_first_page(struct squashfs_page_actor *actor)
+{
+ actor->next_page = 1;
+ return actor->pageaddr = kmap_atomic(actor->page[0]);
+}
+
+static void *direct_next_page(struct squashfs_page_actor *actor)
+{
+ if (actor->pageaddr)
+ kunmap_atomic(actor->pageaddr);
+
+ return actor->pageaddr = actor->next_page == actor->pages ? NULL :
+ kmap_atomic(actor->page[actor->next_page++]);
+}
+
+static void direct_finish_page(struct squashfs_page_actor *actor)
+{
+ if (actor->pageaddr)
+ kunmap_atomic(actor->pageaddr);
+}
+
+struct squashfs_page_actor *squashfs_page_actor_init_special(struct page **page,
+ int pages, int length)
+{
+ struct squashfs_page_actor *actor = kmalloc(sizeof(*actor), GFP_KERNEL);
+
+ if (actor == NULL)
+ return NULL;
+
+ actor->length = length ? : pages * PAGE_CACHE_SIZE;
+ actor->page = page;
+ actor->pages = pages;
+ actor->next_page = 0;
+ actor->pageaddr = NULL;
+ actor->squashfs_first_page = direct_first_page;
+ actor->squashfs_next_page = direct_next_page;
+ actor->squashfs_finish_page = direct_finish_page;
+ return actor;
+}
--- /dev/null
+#ifndef PAGE_ACTOR_H
+#define PAGE_ACTOR_H
+/*
+ * Copyright (c) 2013
+ * Phillip Lougher <phillip@squashfs.org.uk>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef CONFIG_SQUASHFS_FILE_DIRECT
+struct squashfs_page_actor {
+ void **page;
+ int pages;
+ int length;
+ int next_page;
+};
+
+static inline struct squashfs_page_actor *squashfs_page_actor_init(void **page,
+ int pages, int length)
+{
+ struct squashfs_page_actor *actor = kmalloc(sizeof(*actor), GFP_KERNEL);
+
+ if (actor == NULL)
+ return NULL;
+
+ actor->length = length ? : pages * PAGE_CACHE_SIZE;
+ actor->page = page;
+ actor->pages = pages;
+ actor->next_page = 0;
+ return actor;
+}
+
+static inline void *squashfs_first_page(struct squashfs_page_actor *actor)
+{
+ actor->next_page = 1;
+ return actor->page[0];
+}
+
+static inline void *squashfs_next_page(struct squashfs_page_actor *actor)
+{
+ return actor->next_page == actor->pages ? NULL :
+ actor->page[actor->next_page++];
+}
+
+static inline void squashfs_finish_page(struct squashfs_page_actor *actor)
+{
+ /* empty */
+}
+#else
+struct squashfs_page_actor {
+ union {
+ void **buffer;
+ struct page **page;
+ };
+ void *pageaddr;
+ void *(*squashfs_first_page)(struct squashfs_page_actor *);
+ void *(*squashfs_next_page)(struct squashfs_page_actor *);
+ void (*squashfs_finish_page)(struct squashfs_page_actor *);
+ int pages;
+ int length;
+ int next_page;
+};
+
+extern struct squashfs_page_actor *squashfs_page_actor_init(void **, int, int);
+extern struct squashfs_page_actor *squashfs_page_actor_init_special(struct page
+ **, int, int);
+static inline void *squashfs_first_page(struct squashfs_page_actor *actor)
+{
+ return actor->squashfs_first_page(actor);
+}
+static inline void *squashfs_next_page(struct squashfs_page_actor *actor)
+{
+ return actor->squashfs_next_page(actor);
+}
+static inline void squashfs_finish_page(struct squashfs_page_actor *actor)
+{
+ actor->squashfs_finish_page(actor);
+}
+#endif
+#endif
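Decompressor back-ends consume the actor only through the three inline helpers, so the same loop works unchanged against both the buffer-backed and the page-cache-backed implementations; this is exactly the shape of the lzo/xz/zlib conversions elsewhere in this series. A minimal consumer sketch (illustrative only; example_fill() is a hypothetical name):

#include <linux/kernel.h>
#include <linux/pagemap.h>
#include <linux/string.h>
#include "page_actor.h"

/*
 * Illustrative consumer of the page actor API (not part of the patch).
 * Copies 'length' bytes from 'src' into the actor's pages.
 */
static void example_fill(struct squashfs_page_actor *actor, void *src,
			 int length)
{
	void *page = squashfs_first_page(actor);

	while (page && length > 0) {
		int avail = min_t(int, length, PAGE_CACHE_SIZE);

		memcpy(page, src, avail);
		src += avail;
		length -= avail;
		page = squashfs_next_page(actor);
	}
	/* Required: unmaps the last page in the direct implementation. */
	squashfs_finish_page(actor);
}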
#define WARNING(s, args...) pr_warning("SQUASHFS: "s, ## args)
/* block.c */
-extern int squashfs_read_data(struct super_block *, void **, u64, int, u64 *,
- int, int);
+extern int squashfs_read_data(struct super_block *, u64, int, u64 *,
+ struct squashfs_page_actor *);
/* cache.c */
extern struct squashfs_cache *squashfs_cache_init(char *, int, int);
/* decompressor.c */
extern const struct squashfs_decompressor *squashfs_lookup_decompressor(int);
-extern void *squashfs_decompressor_init(struct super_block *, unsigned short);
+extern void *squashfs_decompressor_setup(struct super_block *, unsigned short);
+
+/* decompressor_xxx.c */
+extern void *squashfs_decompressor_create(struct squashfs_sb_info *, void *);
+extern void squashfs_decompressor_destroy(struct squashfs_sb_info *);
+extern int squashfs_decompress(struct squashfs_sb_info *, struct buffer_head **,
+ int, int, int, struct squashfs_page_actor *);
+extern int squashfs_max_decompressors(void);
/* export.c */
extern __le64 *squashfs_read_inode_lookup_table(struct super_block *, u64, u64,
extern __le64 *squashfs_read_fragment_index_table(struct super_block *,
u64, u64, unsigned int);
+/* file.c */
+void squashfs_copy_cache(struct page *, struct squashfs_cache_entry *, int,
+ int);
+
+/* file_xxx.c */
+extern int squashfs_readpage_block(struct page *, u64, int);
+
/* id.c */
extern int squashfs_get_id(struct super_block *, unsigned int, unsigned int *);
extern __le64 *squashfs_read_id_index_table(struct super_block *, u64, u64,
wait_queue_head_t wait_queue;
struct squashfs_cache *cache;
void **data;
+ struct squashfs_page_actor *actor;
};
struct squashfs_sb_info {
__le64 *id_table;
__le64 *fragment_index;
__le64 *xattr_id_table;
- struct mutex read_data_mutex;
struct mutex meta_index_mutex;
struct meta_index *meta_index;
- void *stream;
+ struct squashfs_stream *stream;
__le64 *inode_lookup_table;
u64 inode_table;
u64 directory_table;
msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
msblk->devblksize_log2 = ffz(~msblk->devblksize);
- mutex_init(&msblk->read_data_mutex);
mutex_init(&msblk->meta_index_mutex);
/*
goto failed_mount;
/* Allocate read_page block */
- msblk->read_page = squashfs_cache_init("data", 1, msblk->block_size);
+ msblk->read_page = squashfs_cache_init("data",
+ squashfs_max_decompressors(), msblk->block_size);
if (msblk->read_page == NULL) {
ERROR("Failed to allocate read_page block\n");
goto failed_mount;
}
- msblk->stream = squashfs_decompressor_init(sb, flags);
+ msblk->stream = squashfs_decompressor_setup(sb, flags);
if (IS_ERR(msblk->stream)) {
err = PTR_ERR(msblk->stream);
msblk->stream = NULL;
squashfs_cache_delete(msblk->block_cache);
squashfs_cache_delete(msblk->fragment_cache);
squashfs_cache_delete(msblk->read_page);
- squashfs_decompressor_free(msblk, msblk->stream);
+ squashfs_decompressor_destroy(msblk);
kfree(msblk->inode_lookup_table);
kfree(msblk->fragment_index);
kfree(msblk->id_table);
squashfs_cache_delete(sbi->block_cache);
squashfs_cache_delete(sbi->fragment_cache);
squashfs_cache_delete(sbi->read_page);
- squashfs_decompressor_free(sbi, sbi->stream);
+ squashfs_decompressor_destroy(sbi);
kfree(sbi->id_table);
kfree(sbi->fragment_index);
kfree(sbi->meta_index);
#include "squashfs_fs_sb.h"
#include "squashfs.h"
#include "decompressor.h"
+#include "page_actor.h"
struct squashfs_xz {
struct xz_dec *state;
struct xz_buf buf;
};
-struct comp_opts {
+struct disk_comp_opts {
__le32 dictionary_size;
__le32 flags;
};
-static void *squashfs_xz_init(struct squashfs_sb_info *msblk, void *buff,
- int len)
+struct comp_opts {
+ int dict_size;
+};
+
+static void *squashfs_xz_comp_opts(struct squashfs_sb_info *msblk,
+ void *buff, int len)
{
- struct comp_opts *comp_opts = buff;
- struct squashfs_xz *stream;
- int dict_size = msblk->block_size;
- int err, n;
+ struct disk_comp_opts *comp_opts = buff;
+ struct comp_opts *opts;
+ int err = 0, n;
+
+ opts = kmalloc(sizeof(*opts), GFP_KERNEL);
+ if (opts == NULL) {
+ err = -ENOMEM;
+ goto out2;
+ }
if (comp_opts) {
/* check compressor options are the expected length */
if (len < sizeof(*comp_opts)) {
err = -EIO;
- goto failed;
+ goto out;
}
- dict_size = le32_to_cpu(comp_opts->dictionary_size);
+ opts->dict_size = le32_to_cpu(comp_opts->dictionary_size);
/* the dictionary size should be 2^n or 2^n+2^(n+1) */
- n = ffs(dict_size) - 1;
- if (dict_size != (1 << n) && dict_size != (1 << n) +
+ n = ffs(opts->dict_size) - 1;
+ if (opts->dict_size != (1 << n) && opts->dict_size != (1 << n) +
(1 << (n + 1))) {
err = -EIO;
- goto failed;
+ goto out;
}
- }
+ } else
+ /* use defaults */
+ opts->dict_size = max_t(int, msblk->block_size,
+ SQUASHFS_METADATA_SIZE);
+
+ return opts;
+
+out:
+ kfree(opts);
+out2:
+ return ERR_PTR(err);
+}
+
- dict_size = max_t(int, dict_size, SQUASHFS_METADATA_SIZE);
+static void *squashfs_xz_init(struct squashfs_sb_info *msblk, void *buff)
+{
+ struct comp_opts *comp_opts = buff;
+ struct squashfs_xz *stream;
+ int err;
stream = kmalloc(sizeof(*stream), GFP_KERNEL);
if (stream == NULL) {
goto failed;
}
- stream->state = xz_dec_init(XZ_PREALLOC, dict_size);
+ stream->state = xz_dec_init(XZ_PREALLOC, comp_opts->dict_size);
if (stream->state == NULL) {
kfree(stream);
err = -ENOMEM;
}
-static int squashfs_xz_uncompress(struct squashfs_sb_info *msblk, void **buffer,
- struct buffer_head **bh, int b, int offset, int length, int srclength,
- int pages)
+static int squashfs_xz_uncompress(struct squashfs_sb_info *msblk, void *strm,
+ struct buffer_head **bh, int b, int offset, int length,
+ struct squashfs_page_actor *output)
{
enum xz_ret xz_err;
- int avail, total = 0, k = 0, page = 0;
- struct squashfs_xz *stream = msblk->stream;
-
- mutex_lock(&msblk->read_data_mutex);
+ int avail, total = 0, k = 0;
+ struct squashfs_xz *stream = strm;
xz_dec_reset(stream->state);
stream->buf.in_pos = 0;
stream->buf.in_size = 0;
stream->buf.out_pos = 0;
stream->buf.out_size = PAGE_CACHE_SIZE;
- stream->buf.out = buffer[page++];
+ stream->buf.out = squashfs_first_page(output);
do {
if (stream->buf.in_pos == stream->buf.in_size && k < b) {
avail = min(length, msblk->devblksize - offset);
length -= avail;
- wait_on_buffer(bh[k]);
- if (!buffer_uptodate(bh[k]))
- goto release_mutex;
-
stream->buf.in = bh[k]->b_data + offset;
stream->buf.in_size = avail;
stream->buf.in_pos = 0;
offset = 0;
}
- if (stream->buf.out_pos == stream->buf.out_size
- && page < pages) {
- stream->buf.out = buffer[page++];
- stream->buf.out_pos = 0;
- total += PAGE_CACHE_SIZE;
+ if (stream->buf.out_pos == stream->buf.out_size) {
+ stream->buf.out = squashfs_next_page(output);
+ if (stream->buf.out != NULL) {
+ stream->buf.out_pos = 0;
+ total += PAGE_CACHE_SIZE;
+ }
}
xz_err = xz_dec_run(stream->state, &stream->buf);
put_bh(bh[k++]);
} while (xz_err == XZ_OK);
- if (xz_err != XZ_STREAM_END) {
- ERROR("xz_dec_run error, data probably corrupt\n");
- goto release_mutex;
- }
-
- if (k < b) {
- ERROR("xz_uncompress error, input remaining\n");
- goto release_mutex;
- }
+ squashfs_finish_page(output);
- total += stream->buf.out_pos;
- mutex_unlock(&msblk->read_data_mutex);
- return total;
+ if (xz_err != XZ_STREAM_END || k < b)
+ goto out;
-release_mutex:
- mutex_unlock(&msblk->read_data_mutex);
+ return total + stream->buf.out_pos;
+out:
for (; k < b; k++)
put_bh(bh[k]);
const struct squashfs_decompressor squashfs_xz_comp_ops = {
.init = squashfs_xz_init,
+ .comp_opts = squashfs_xz_comp_opts,
.free = squashfs_xz_free,
.decompress = squashfs_xz_uncompress,
.id = XZ_COMPRESSION,
#include "squashfs_fs_sb.h"
#include "squashfs.h"
#include "decompressor.h"
+#include "page_actor.h"
-static void *zlib_init(struct squashfs_sb_info *dummy, void *buff, int len)
+static void *zlib_init(struct squashfs_sb_info *dummy, void *buff)
{
z_stream *stream = kmalloc(sizeof(z_stream), GFP_KERNEL);
if (stream == NULL)
}
-static int zlib_uncompress(struct squashfs_sb_info *msblk, void **buffer,
- struct buffer_head **bh, int b, int offset, int length, int srclength,
- int pages)
+static int zlib_uncompress(struct squashfs_sb_info *msblk, void *strm,
+ struct buffer_head **bh, int b, int offset, int length,
+ struct squashfs_page_actor *output)
{
- int zlib_err, zlib_init = 0;
- int k = 0, page = 0;
- z_stream *stream = msblk->stream;
-
- mutex_lock(&msblk->read_data_mutex);
+ int zlib_err, zlib_init = 0, k = 0;
+ z_stream *stream = strm;
- stream->avail_out = 0;
+ stream->avail_out = PAGE_CACHE_SIZE;
+ stream->next_out = squashfs_first_page(output);
stream->avail_in = 0;
do {
if (stream->avail_in == 0 && k < b) {
int avail = min(length, msblk->devblksize - offset);
length -= avail;
- wait_on_buffer(bh[k]);
- if (!buffer_uptodate(bh[k]))
- goto release_mutex;
-
stream->next_in = bh[k]->b_data + offset;
stream->avail_in = avail;
offset = 0;
}
- if (stream->avail_out == 0 && page < pages) {
- stream->next_out = buffer[page++];
- stream->avail_out = PAGE_CACHE_SIZE;
+ if (stream->avail_out == 0) {
+ stream->next_out = squashfs_next_page(output);
+ if (stream->next_out != NULL)
+ stream->avail_out = PAGE_CACHE_SIZE;
}
if (!zlib_init) {
zlib_err = zlib_inflateInit(stream);
if (zlib_err != Z_OK) {
- ERROR("zlib_inflateInit returned unexpected "
- "result 0x%x, srclength %d\n",
- zlib_err, srclength);
- goto release_mutex;
+ squashfs_finish_page(output);
+ goto out;
}
zlib_init = 1;
}
put_bh(bh[k++]);
} while (zlib_err == Z_OK);
- if (zlib_err != Z_STREAM_END) {
- ERROR("zlib_inflate error, data probably corrupt\n");
- goto release_mutex;
- }
+ squashfs_finish_page(output);
- zlib_err = zlib_inflateEnd(stream);
- if (zlib_err != Z_OK) {
- ERROR("zlib_inflate error, data probably corrupt\n");
- goto release_mutex;
- }
+ if (zlib_err != Z_STREAM_END)
+ goto out;
- if (k < b) {
- ERROR("zlib_uncompress error, data remaining\n");
- goto release_mutex;
- }
+ zlib_err = zlib_inflateEnd(stream);
+ if (zlib_err != Z_OK)
+ goto out;
- length = stream->total_out;
- mutex_unlock(&msblk->read_data_mutex);
- return length;
+ if (k < b)
+ goto out;
-release_mutex:
- mutex_unlock(&msblk->read_data_mutex);
+ return stream->total_out;
+out:
for (; k < b; k++)
put_bh(bh[k]);
if (!of)
goto err_out;
- mutex_init(&of->mutex);
+ /*
+ * The following is done to give a different lockdep key to
+ * @of->mutex for files which implement mmap. This is a rather
+ * crude way to avoid false positive lockdep warning around
+ * mm->mmap_sem - mmap nests @of->mutex under mm->mmap_sem and
+ * reading /sys/block/sda/trace/act_mask grabs sr_mutex, under
+ * which mm->mmap_sem nests, while holding @of->mutex. As each
+ * open file has a separate mutex, it's okay as long as those don't
+ * happen on the same file. At this point, we can't easily give
+	 * each file a separate locking class.  Let's differentiate on
+	 * whether the file is bin or not for now.  Although the two
+	 * branches below look identical, each mutex_init() invocation
+	 * is a macro that defines its own static lockdep class key, so
+	 * bin and non-bin files end up in different lock classes.
+	 */
+ if (sysfs_is_bin(attr_sd))
+ mutex_init(&of->mutex);
+ else
+ mutex_init(&of->mutex);
+
of->sd = attr_sd;
of->file = file;
int committed; /* xaction was committed */
int logflags; /* logging flags */
int error; /* error return value */
+ int cancel_flags = 0;
ASSERT(XFS_IFORK_Q(ip) == 0);
if (rsvd)
tp->t_flags |= XFS_TRANS_RESERVE;
error = xfs_trans_reserve(tp, &M_RES(mp)->tr_addafork, blks, 0);
- if (error)
- goto error0;
+ if (error) {
+ xfs_trans_cancel(tp, 0);
+ return error;
+ }
+ cancel_flags = XFS_TRANS_RELEASE_LOG_RES;
xfs_ilock(ip, XFS_ILOCK_EXCL);
error = xfs_trans_reserve_quota_nblks(tp, ip, blks, 0, rsvd ?
XFS_QMOPT_RES_REGBLKS | XFS_QMOPT_FORCE_RES :
XFS_QMOPT_RES_REGBLKS);
- if (error) {
- xfs_iunlock(ip, XFS_ILOCK_EXCL);
- xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES);
- return error;
- }
+ if (error)
+ goto trans_cancel;
+ cancel_flags |= XFS_TRANS_ABORT;
if (XFS_IFORK_Q(ip))
- goto error1;
+ goto trans_cancel;
if (ip->i_d.di_aformat != XFS_DINODE_FMT_EXTENTS) {
/*
* For inodes coming from pre-6.2 filesystems.
}
ASSERT(ip->i_d.di_anextents == 0);
- xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
+ xfs_trans_ijoin(tp, ip, 0);
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
switch (ip->i_d.di_format) {
default:
ASSERT(0);
error = XFS_ERROR(EINVAL);
- goto error1;
+ goto trans_cancel;
}
ASSERT(ip->i_afp == NULL);
if (logflags)
xfs_trans_log_inode(tp, ip, logflags);
if (error)
- goto error2;
+ goto bmap_cancel;
if (!xfs_sb_version_hasattr(&mp->m_sb) ||
(!xfs_sb_version_hasattr2(&mp->m_sb) && version == 2)) {
__int64_t sbfields = 0;
error = xfs_bmap_finish(&tp, &flist, &committed);
if (error)
- goto error2;
- return xfs_trans_commit(tp, XFS_TRANS_RELEASE_LOG_RES);
-error2:
+ goto bmap_cancel;
+ error = xfs_trans_commit(tp, XFS_TRANS_RELEASE_LOG_RES);
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ return error;
+
+bmap_cancel:
xfs_bmap_cancel(&flist);
-error1:
+trans_cancel:
+ xfs_trans_cancel(tp, cancel_flags);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
-error0:
- xfs_trans_cancel(tp, XFS_TRANS_RELEASE_LOG_RES|XFS_TRANS_ABORT);
return error;
}
* blocks at the end of the file which do not start at the previous data block,
* we will try to align the new blocks at stripe unit boundaries.
*
- * Returns 0 in bma->aeof if the file (fork) is empty as any new write will be
+ * Returns 1 in bma->aeof if the file (fork) is empty as any new write will be
* at, or past the EOF.
*/
STATIC int
bma->aeof = 0;
error = xfs_bmap_last_extent(NULL, bma->ip, whichfork, &rec,
&is_empty);
- if (error || is_empty)
+ if (error)
return error;
+ if (is_empty) {
+ bma->aeof = 1;
+ return 0;
+ }
+
/*
* Check if we are allocation or past the last extent, or at least into
* the last delayed allocated extent.
int isaligned;
int tryagain;
int error;
+ int stripe_align;
ASSERT(ap->length);
mp = ap->ip->i_mount;
+
+ /* stripe alignment for allocation is determined by mount parameters */
+ stripe_align = 0;
+ if (mp->m_swidth && (mp->m_flags & XFS_MOUNT_SWALLOC))
+ stripe_align = mp->m_swidth;
+ else if (mp->m_dalign)
+ stripe_align = mp->m_dalign;
+
align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0;
if (unlikely(align)) {
error = xfs_bmap_extsize_align(mp, &ap->got, &ap->prev,
ASSERT(!error);
ASSERT(ap->length);
}
+
nullfb = *ap->firstblock == NULLFSBLOCK;
fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, *ap->firstblock);
if (nullfb) {
*/
if (!ap->flist->xbf_low && ap->aeof) {
if (!ap->offset) {
- args.alignment = mp->m_dalign;
+ args.alignment = stripe_align;
atype = args.type;
isaligned = 1;
/*
* of minlen+alignment+slop doesn't go up
* between the calls.
*/
- if (blen > mp->m_dalign && blen <= args.maxlen)
- nextminlen = blen - mp->m_dalign;
+ if (blen > stripe_align && blen <= args.maxlen)
+ nextminlen = blen - stripe_align;
else
nextminlen = args.minlen;
- if (nextminlen + mp->m_dalign > args.minlen + 1)
+ if (nextminlen + stripe_align > args.minlen + 1)
args.minalignslop =
- nextminlen + mp->m_dalign -
+ nextminlen + stripe_align -
args.minlen - 1;
else
args.minalignslop = 0;
*/
args.type = atype;
args.fsbno = ap->blkno;
- args.alignment = mp->m_dalign;
+ args.alignment = stripe_align;
args.minlen = nextminlen;
args.minalignslop = 0;
isaligned = 1;
XFS_BUF_UNWRITE(bp);
XFS_BUF_READ(bp);
XFS_BUF_SET_ADDR(bp, xfs_fsb_to_db(ip, imap.br_startblock));
- xfsbdstrat(mp, bp);
+
+ if (XFS_FORCED_SHUTDOWN(mp)) {
+ error = XFS_ERROR(EIO);
+ break;
+ }
+ xfs_buf_iorequest(bp);
error = xfs_buf_iowait(bp);
if (error) {
xfs_buf_ioerror_alert(bp,
XFS_BUF_UNDONE(bp);
XFS_BUF_UNREAD(bp);
XFS_BUF_WRITE(bp);
- xfsbdstrat(mp, bp);
+
+ if (XFS_FORCED_SHUTDOWN(mp)) {
+ error = XFS_ERROR(EIO);
+ break;
+ }
+ xfs_buf_iorequest(bp);
error = xfs_buf_iowait(bp);
if (error) {
xfs_buf_ioerror_alert(bp,
bp->b_flags |= XBF_READ;
bp->b_ops = ops;
- xfsbdstrat(target->bt_mount, bp);
+ if (XFS_FORCED_SHUTDOWN(target->bt_mount)) {
+ xfs_buf_relse(bp);
+ return NULL;
+ }
+ xfs_buf_iorequest(bp);
xfs_buf_iowait(bp);
return bp;
}
* This is meant for userdata errors; metadata bufs come with
* iodone functions attached, so that we can track down errors.
*/
-STATIC int
+int
xfs_bioerror_relse(
struct xfs_buf *bp)
{
ASSERT(xfs_buf_islocked(bp));
bp->b_flags |= XBF_WRITE;
- bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q);
+ bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q | XBF_WRITE_FAIL);
xfs_bdstrat_cb(bp);
return error;
}
-/*
- * Wrapper around bdstrat so that we can stop data from going to disk in case
- * we are shutting down the filesystem. Typically user data goes thru this
- * path; one of the exceptions is the superblock.
- */
-void
-xfsbdstrat(
- struct xfs_mount *mp,
- struct xfs_buf *bp)
-{
- if (XFS_FORCED_SHUTDOWN(mp)) {
- trace_xfs_bdstrat_shut(bp, _RET_IP_);
- xfs_bioerror_relse(bp);
- return;
- }
-
- xfs_buf_iorequest(bp);
-}
-
STATIC void
_xfs_buf_ioend(
xfs_buf_t *bp,
struct xfs_buf *bp;
bp = list_first_entry(&dispose, struct xfs_buf, b_lru);
list_del_init(&bp->b_lru);
+ if (bp->b_flags & XBF_WRITE_FAIL) {
+ xfs_alert(btp->bt_mount,
+"Corruption Alert: Buffer at block 0x%llx had permanent write failures!\n"
+"Please run xfs_repair to determine the extent of the problem.",
+ (long long)bp->b_bn);
+ }
xfs_buf_rele(bp);
}
if (loop++ != 0)
blk_start_plug(&plug);
list_for_each_entry_safe(bp, n, io_list, b_list) {
- bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC);
+ bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC | XBF_WRITE_FAIL);
bp->b_flags |= XBF_WRITE;
if (!wait) {
#define XBF_ASYNC (1 << 4) /* initiator will not wait for completion */
#define XBF_DONE (1 << 5) /* all pages in the buffer uptodate */
#define XBF_STALE (1 << 6) /* buffer has been staled, do not find it */
+#define XBF_WRITE_FAIL (1 << 24)/* async writes have failed on this buffer */
/* I/O hints for the BIO layer */
#define XBF_SYNCIO (1 << 10)/* treat this buffer as synchronous I/O */
{ XBF_ASYNC, "ASYNC" }, \
{ XBF_DONE, "DONE" }, \
{ XBF_STALE, "STALE" }, \
+ { XBF_WRITE_FAIL, "WRITE_FAIL" }, \
{ XBF_SYNCIO, "SYNCIO" }, \
{ XBF_FUA, "FUA" }, \
{ XBF_FLUSH, "FLUSH" }, \
{ _XBF_DELWRI_Q, "DELWRI_Q" }, \
{ _XBF_COMPOUND, "COMPOUND" }
+
/*
* Internal state flags.
*/
/* Buffer Read and Write Routines */
extern int xfs_bwrite(struct xfs_buf *bp);
-
-extern void xfsbdstrat(struct xfs_mount *, struct xfs_buf *);
-
extern void xfs_buf_ioend(xfs_buf_t *, int);
extern void xfs_buf_ioerror(xfs_buf_t *, int);
extern void xfs_buf_ioerror_alert(struct xfs_buf *, const char *func);
#define xfs_buf_zero(bp, off, len) \
xfs_buf_iomove((bp), (off), (len), NULL, XBRW_ZERO)
+extern int xfs_bioerror_relse(struct xfs_buf *);
+
static inline int xfs_buf_geterror(xfs_buf_t *bp)
{
return bp ? bp->b_error : ENOMEM;
#define XFS_BUF_ZEROFLAGS(bp) \
((bp)->b_flags &= ~(XBF_READ|XBF_WRITE|XBF_ASYNC| \
- XBF_SYNCIO|XBF_FUA|XBF_FLUSH))
+ XBF_SYNCIO|XBF_FUA|XBF_FLUSH| \
+ XBF_WRITE_FAIL))
void xfs_buf_stale(struct xfs_buf *bp);
#define XFS_BUF_UNSTALE(bp) ((bp)->b_flags &= ~XBF_STALE)
}
}
+/*
+ * Buffer IO error rate limiting. Limit it to no more than 10 messages per 30
+ * seconds so as to not spam logs too much on repeated detection of the same
+ * buffer being bad.
+ */
+
+DEFINE_RATELIMIT_STATE(xfs_buf_write_fail_rl_state, 30 * HZ, 10);
+
STATIC uint
xfs_buf_item_push(
struct xfs_log_item *lip,
trace_xfs_buf_item_push(bip);
+ /* has a previous flush failed due to IO errors? */
+ if ((bp->b_flags & XBF_WRITE_FAIL) &&
+ ___ratelimit(&xfs_buf_write_fail_rl_state, "XFS:")) {
+ xfs_warn(bp->b_target->bt_mount,
+"Detected failing async write on buffer block 0x%llx. Retrying async write.\n",
+ (long long)bp->b_bn);
+ }
+
if (!xfs_buf_delwri_queue(bp, buffer_list))
rval = XFS_ITEM_FLUSHING;
xfs_buf_unlock(bp);
xfs_buf_ioerror(bp, 0); /* errno of 0 unsets the flag */
- if (!XFS_BUF_ISSTALE(bp)) {
- bp->b_flags |= XBF_WRITE | XBF_ASYNC | XBF_DONE;
+ if (!(bp->b_flags & (XBF_STALE|XBF_WRITE_FAIL))) {
+ bp->b_flags |= XBF_WRITE | XBF_ASYNC |
+ XBF_DONE | XBF_WRITE_FAIL;
xfs_buf_iorequest(bp);
} else {
xfs_buf_relse(bp);
*/
int /* error */
xfs_dir2_node_removename(
- xfs_da_args_t *args) /* operation arguments */
+ struct xfs_da_args *args) /* operation arguments */
{
- xfs_da_state_blk_t *blk; /* leaf block */
+ struct xfs_da_state_blk *blk; /* leaf block */
int error; /* error return value */
int rval; /* operation return value */
- xfs_da_state_t *state; /* btree cursor */
+ struct xfs_da_state *state; /* btree cursor */
trace_xfs_dir2_node_removename(args);
state->mp = args->dp->i_mount;
state->blocksize = state->mp->m_dirblksize;
state->node_ents = state->mp->m_dir_node_ents;
- /*
- * Look up the entry we're deleting, set up the cursor.
- */
+
+ /* Look up the entry we're deleting, set up the cursor. */
error = xfs_da3_node_lookup_int(state, &rval);
if (error)
- rval = error;
- /*
- * Didn't find it, upper layer screwed up.
- */
+ goto out_free;
+
+ /* Didn't find it, upper layer screwed up. */
if (rval != EEXIST) {
- xfs_da_state_free(state);
- return rval;
+ error = rval;
+ goto out_free;
}
+
blk = &state->path.blk[state->path.active - 1];
ASSERT(blk->magic == XFS_DIR2_LEAFN_MAGIC);
ASSERT(state->extravalid);
error = xfs_dir2_leafn_remove(args, blk->bp, blk->index,
&state->extrablk, &rval);
if (error)
- return error;
+ goto out_free;
/*
* Fix the hash values up the btree.
*/
*/
if (!error)
error = xfs_dir2_node_to_leaf(state);
+out_free:
xfs_da_state_free(state);
return error;
}
struct xfs_mount *mp,
struct fstrim_range __user *urange)
{
- struct request_queue *q = mp->m_ddev_targp->bt_bdev->bd_disk->queue;
+ struct request_queue *q = bdev_get_queue(mp->m_ddev_targp->bt_bdev);
unsigned int granularity = q->limits.discard_granularity;
struct fstrim_range range;
xfs_daddr_t start, end, minlen;
* matter as trimming blocks is an advisory interface.
*/
if (range.start >= XFS_FSB_TO_B(mp, mp->m_sb.sb_dblocks) ||
- range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp)))
+ range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp)) ||
+ range.len < mp->m_sb.sb_blocksize)
return -XFS_ERROR(EINVAL);
start = BTOBB(range.start);
*/
nfree = 0;
for (agno = nagcount - 1; agno >= oagcount; agno--, new -= agsize) {
+ __be32 *agfl_bno;
+
/*
* AG freespace header block
*/
agfl->agfl_seqno = cpu_to_be32(agno);
uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_uuid);
}
+
+ agfl_bno = XFS_BUF_TO_AGFL_BNO(mp, bp);
for (bucket = 0; bucket < XFS_AGFL_SIZE(mp); bucket++)
- agfl->agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);
+ agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);
error = xfs_bwrite(bp);
xfs_buf_relse(bp);
return -XFS_ERROR(EPERM);
if (copy_from_user(&al_hreq, arg, sizeof(xfs_fsop_attrlist_handlereq_t)))
return -XFS_ERROR(EFAULT);
- if (al_hreq.buflen > XATTR_LIST_MAX)
+ if (al_hreq.buflen < sizeof(struct attrlist) ||
+ al_hreq.buflen > XATTR_LIST_MAX)
return -XFS_ERROR(EINVAL);
/*
if (copy_from_user(&al_hreq, arg,
sizeof(compat_xfs_fsop_attrlist_handlereq_t)))
return -XFS_ERROR(EFAULT);
- if (al_hreq.buflen > XATTR_LIST_MAX)
+ if (al_hreq.buflen < sizeof(struct attrlist) ||
+ al_hreq.buflen > XATTR_LIST_MAX)
return -XFS_ERROR(EINVAL);
/*
}
if (!gid_eq(igid, gid)) {
if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_GQUOTA_ON(mp)) {
- ASSERT(!XFS_IS_PQUOTA_ON(mp));
+ ASSERT(xfs_sb_version_has_pquotino(&mp->m_sb) ||
+ !XFS_IS_PQUOTA_ON(mp));
ASSERT(mask & ATTR_GID);
ASSERT(gdqp);
olddquot2 = xfs_qm_vop_chown(tp, ip,
bp->b_io_length = nbblks;
bp->b_error = 0;
- xfsbdstrat(log->l_mp, bp);
+ if (XFS_FORCED_SHUTDOWN(log->l_mp))
+ return XFS_ERROR(EIO);
+
+ xfs_buf_iorequest(bp);
error = xfs_buf_iowait(bp);
if (error)
xfs_buf_ioerror_alert(bp, __func__);
XFS_BUF_READ(bp);
XFS_BUF_UNASYNC(bp);
bp->b_ops = &xfs_sb_buf_ops;
- xfsbdstrat(log->l_mp, bp);
+
+ if (XFS_FORCED_SHUTDOWN(log->l_mp)) {
+ xfs_buf_relse(bp);
+ return XFS_ERROR(EIO);
+ }
+
+ xfs_buf_iorequest(bp);
error = xfs_buf_iowait(bp);
if (error) {
xfs_buf_ioerror_alert(bp, __func__);
#include "xfs_fsops.h"
#include "xfs_trace.h"
#include "xfs_icache.h"
+#include "xfs_dinode.h"
#ifdef HAVE_PERCPU_SB
* Set the inode cluster size.
* This may still be overridden by the file system
* block size if it is larger than the chosen cluster size.
+ *
+ * For v5 filesystems, scale the cluster size with the inode size to
+ * keep a constant ratio of inodes per cluster buffer, but only if mkfs
+ * has set the inode alignment value appropriately for larger cluster
+ * sizes.
*/
mp->m_inode_cluster_size = XFS_INODE_BIG_CLUSTER_SIZE;
+ if (xfs_sb_version_hascrc(&mp->m_sb)) {
+ int new_size = mp->m_inode_cluster_size;
+
+ new_size *= mp->m_sb.sb_inodesize / XFS_DINODE_MIN_SIZE;
+ if (mp->m_sb.sb_inoalignmt >= XFS_B_TO_FSBT(mp, new_size))
+ mp->m_inode_cluster_size = new_size;
+ xfs_info(mp, "Using inode cluster size of %d bytes",
+ mp->m_inode_cluster_size);
+ }
/*
* Set inode alignment fields
__uint8_t m_blkbb_log; /* blocklog - BBSHIFT */
__uint8_t m_agno_log; /* log #ag's */
__uint8_t m_agino_log; /* #bits for agino in inum */
- __uint16_t m_inode_cluster_size;/* min inode buf size */
+ uint m_inode_cluster_size;/* min inode buf size */
uint m_blockmask; /* sb_blocksize-1 */
uint m_blockwsize; /* sb_blocksize in words */
uint m_blockwmask; /* blockwsize-1 */
{
struct xfs_mount *mp = dqp->q_mount;
struct xfs_quotainfo *qi = mp->m_quotainfo;
- struct xfs_dquot *gdqp = NULL;
- struct xfs_dquot *pdqp = NULL;
xfs_dqlock(dqp);
if ((dqp->dq_flags & XFS_DQ_FREEING) || dqp->q_nrefs != 0) {
return EAGAIN;
}
- /*
- * If this quota has a hint attached, prepare for releasing it now.
- */
- gdqp = dqp->q_gdquot;
- if (gdqp) {
- xfs_dqlock(gdqp);
- dqp->q_gdquot = NULL;
- }
-
- pdqp = dqp->q_pdquot;
- if (pdqp) {
- xfs_dqlock(pdqp);
- dqp->q_pdquot = NULL;
- }
-
dqp->dq_flags |= XFS_DQ_FREEING;
xfs_dqflock(dqp);
XFS_STATS_DEC(xs_qm_dquot_unused);
xfs_qm_dqdestroy(dqp);
+ return 0;
+}
+
+/*
+ * Release the group or project dquot pointers the user dquots may be
+ * carrying around as a hint, and proceed to purge the user dquot cache if
+ * requested.
+ */
+STATIC int
+xfs_qm_dqpurge_hints(
+ struct xfs_dquot *dqp,
+ void *data)
+{
+ struct xfs_dquot *gdqp = NULL;
+ struct xfs_dquot *pdqp = NULL;
+ uint flags = *((uint *)data);
+
+ xfs_dqlock(dqp);
+ if (dqp->dq_flags & XFS_DQ_FREEING) {
+ xfs_dqunlock(dqp);
+ return EAGAIN;
+ }
+
+ /* If this quota has a hint attached, prepare for releasing it now */
+ gdqp = dqp->q_gdquot;
+ if (gdqp)
+ dqp->q_gdquot = NULL;
+
+ pdqp = dqp->q_pdquot;
+ if (pdqp)
+ dqp->q_pdquot = NULL;
+
+ xfs_dqunlock(dqp);
if (gdqp)
- xfs_qm_dqput(gdqp);
+ xfs_qm_dqrele(gdqp);
if (pdqp)
- xfs_qm_dqput(pdqp);
+ xfs_qm_dqrele(pdqp);
+
+ if (flags & XFS_QMOPT_UQUOTA)
+ return xfs_qm_dqpurge(dqp, NULL);
+
return 0;
}
struct xfs_mount *mp,
uint flags)
{
- if (flags & XFS_QMOPT_UQUOTA)
- xfs_qm_dquot_walk(mp, XFS_DQ_USER, xfs_qm_dqpurge, NULL);
+	/*
+	 * We have to release the group/project dquot hints from the user
+	 * dquots first, if they are there; otherwise we would run into an
+	 * infinite loop while walking the radix tree to purge the other
+	 * dquot types, since their refcounts are non-zero as long as user
+	 * dquots hold them as hints.
+	 *
+	 * Calling the special xfs_qm_dqpurge_hints() also ends up going
+	 * through the general xfs_qm_dqpurge() for the user dquot cache
+	 * if requested.
+	 */
+ xfs_qm_dquot_walk(mp, XFS_DQ_USER, xfs_qm_dqpurge_hints, &flags);
+
if (flags & XFS_QMOPT_GQUOTA)
xfs_qm_dquot_walk(mp, XFS_DQ_GROUP, xfs_qm_dqpurge, NULL);
if (flags & XFS_QMOPT_PQUOTA)
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
ASSERT(XFS_IS_QUOTA_RUNNING(mp));
- if (udqp) {
+ if (udqp && XFS_IS_UQUOTA_ON(mp)) {
ASSERT(ip->i_udquot == NULL);
- ASSERT(XFS_IS_UQUOTA_ON(mp));
ASSERT(ip->i_d.di_uid == be32_to_cpu(udqp->q_core.d_id));
ip->i_udquot = xfs_qm_dqhold(udqp);
xfs_trans_mod_dquot(tp, udqp, XFS_TRANS_DQ_ICOUNT, 1);
}
- if (gdqp) {
+ if (gdqp && XFS_IS_GQUOTA_ON(mp)) {
ASSERT(ip->i_gdquot == NULL);
- ASSERT(XFS_IS_GQUOTA_ON(mp));
ASSERT(ip->i_d.di_gid == be32_to_cpu(gdqp->q_core.d_id));
ip->i_gdquot = xfs_qm_dqhold(gdqp);
xfs_trans_mod_dquot(tp, gdqp, XFS_TRANS_DQ_ICOUNT, 1);
}
- if (pdqp) {
+ if (pdqp && XFS_IS_PQUOTA_ON(mp)) {
ASSERT(ip->i_pdquot == NULL);
- ASSERT(XFS_IS_PQUOTA_ON(mp));
ASSERT(xfs_get_projid(ip) == be32_to_cpu(pdqp->q_core.d_id));
ip->i_pdquot = xfs_qm_dqhold(pdqp);
ASSERT(bp->b_iodone == NULL);
XFS_BUF_READ(bp);
bp->b_ops = ops;
- xfsbdstrat(tp->t_mountp, bp);
+
+ /*
+ * XXX(hch): clean up the error handling here to be less
+ * of a mess..
+ */
+ if (XFS_FORCED_SHUTDOWN(mp)) {
+ trace_xfs_bdstrat_shut(bp, _RET_IP_);
+ xfs_bioerror_relse(bp);
+ } else {
+ xfs_buf_iorequest(bp);
+ }
+
error = xfs_buf_iowait(bp);
if (error) {
xfs_buf_ioerror_alert(bp, __func__);
/*
* First time we log the inode in a transaction, bump the inode change
- * counter if it is configured for this to occur.
+ * counter if it is configured for this to occur. We don't use
+ * inode_inc_version() because there is no need for extra locking around
+ * i_version as we already hold the inode locked exclusively for
+ * metadata modification.
*/
if (!(ip->i_itemp->ili_item.li_desc->lid_flags & XFS_LID_DIRTY) &&
IS_I_VERSION(VFS_I(ip))) {
- inode_inc_iversion(VFS_I(ip));
- ip->i_d.di_changecount = VFS_I(ip)->i_version;
+ ip->i_d.di_changecount = ++VFS_I(ip)->i_version;
flags |= XFS_ILOG_CORE;
}
xfs_calc_inode_res(mp, 1) +
xfs_calc_buf_res(2, mp->m_sb.sb_sectsize) +
xfs_calc_buf_res(1, XFS_FSB_TO_B(mp, 1)) +
- MAX((__uint16_t)XFS_FSB_TO_B(mp, 1),
- XFS_INODE_CLUSTER_SIZE(mp)) +
+ max_t(uint, XFS_FSB_TO_B(mp, 1), XFS_INODE_CLUSTER_SIZE(mp)) +
xfs_calc_buf_res(1, 0) +
xfs_calc_buf_res(2 + XFS_IALLOC_BLOCKS(mp) +
mp->m_in_maxlevels, 0) +
* Should the subsystem abort the loading of an ACPI table if the
* table checksum is incorrect?
*/
+#ifndef ACPI_CHECKSUM_ABORT
#define ACPI_CHECKSUM_ABORT FALSE
+#endif
/*
* Generate a version of ACPICA that only supports "reduced hardware"
struct acpi_hotplug_profile {
struct kobject kobj;
bool enabled:1;
+ bool ignore:1;
enum acpi_hotplug_mode mode;
};
u32 ejectable:1;
u32 power_manageable:1;
u32 match_driver:1;
- u32 reserved:27;
+ u32 no_hotplug:1;
+ u32 reserved:26;
};
/* File System */
extern int acpi_bus_generate_netlink_event(const char*, const char*, u8, int);
void acpi_bus_private_data_handler(acpi_handle, void *);
int acpi_bus_get_private_data(acpi_handle, void **);
+void acpi_bus_no_hotplug(acpi_handle handle);
extern int acpi_notifier_call_chain(struct acpi_device *, u32, u32);
extern int register_acpi_notifier(struct notifier_block *);
extern int unregister_acpi_notifier(struct notifier_block *);
{
return acpi_find_child(handle, addr, false);
}
+void acpi_preset_companion(struct device *dev, acpi_handle parent, u64 addr);
int acpi_is_root_bridge(acpi_handle);
struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle);
-#define DEVICE_ACPI_HANDLE(dev) ((acpi_handle)ACPI_HANDLE(dev))
int acpi_enable_wakeup_device_power(struct acpi_device *dev, int state);
int acpi_disable_wakeup_device_power(struct acpi_device *dev);
/* Current ACPICA subsystem version in YYYYMMDD format */
-#define ACPI_CA_VERSION 0x20130927
+#define ACPI_CA_VERSION 0x20131115
#include <acpi/acconfig.h>
#include <acpi/actypes.h>
#endif
#ifndef pte_accessible
-# define pte_accessible(pte) ((void)(pte),1)
+# define pte_accessible(mm, pte) ((void)(pte), 1)
#endif
#ifndef flush_tlb_fix_spurious_fault
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
barrier();
#endif
- if (pmd_none(pmdval))
+ if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
return 1;
if (unlikely(pmd_bad(pmdval))) {
- if (!pmd_trans_huge(pmdval))
- pmd_clear_bad(pmd);
+ pmd_clear_bad(pmd);
return 1;
}
return 0;
#include <linux/thread_info.h>
-/*
- * We mask the PREEMPT_NEED_RESCHED bit so as not to confuse all current users
- * that think a non-zero value indicates we cannot preempt.
- */
+#define PREEMPT_ENABLED (0)
+
static __always_inline int preempt_count(void)
{
- return current_thread_info()->preempt_count & ~PREEMPT_NEED_RESCHED;
+ return current_thread_info()->preempt_count;
}
static __always_inline int *preempt_count_ptr(void)
return ¤t_thread_info()->preempt_count;
}
-/*
- * We now loose PREEMPT_NEED_RESCHED and cause an extra reschedule; however the
- * alternative is loosing a reschedule. Better schedule too often -- also this
- * should be a very rare operation.
- */
static __always_inline void preempt_count_set(int pc)
{
*preempt_count_ptr() = pc;
task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
} while (0)
-/*
- * We fold the NEED_RESCHED bit into the preempt count such that
- * preempt_enable() can decrement and test for needing to reschedule with a
- * single instruction.
- *
- * We invert the actual bit, so that when the decrement hits 0 we know we both
- * need to resched (the bit is cleared) and can resched (no preempt count).
- */
-
static __always_inline void set_preempt_need_resched(void)
{
- *preempt_count_ptr() &= ~PREEMPT_NEED_RESCHED;
}
static __always_inline void clear_preempt_need_resched(void)
{
- *preempt_count_ptr() |= PREEMPT_NEED_RESCHED;
}
static __always_inline bool test_preempt_need_resched(void)
{
- return !(*preempt_count_ptr() & PREEMPT_NEED_RESCHED);
+ return false;
}
/*
static __always_inline bool __preempt_count_dec_and_test(void)
{
- return !--*preempt_count_ptr();
+	/*
+	 * Because load-store architectures cannot do per-cpu atomic
+	 * operations, we cannot use PREEMPT_NEED_RESCHED here: the
+	 * folded bit might get lost.
+	 */
+ return !--*preempt_count_ptr() && tif_need_resched();
}
/*
*/
static __always_inline bool should_resched(void)
{
- return unlikely(!*preempt_count_ptr());
+ return unlikely(!preempt_count() && tif_need_resched());
}
#ifdef CONFIG_PREEMPT
--- /dev/null
+
+#include <linux/hardirq.h>
+
+/*
+ * may_use_simd - whether it is allowable at this time to issue SIMD
+ * instructions or access the SIMD register file
+ *
+ * As architectures typically don't preserve the SIMD register file when
+ * taking an interrupt, !in_interrupt() should be a reasonable default.
+ */
+static __must_check inline bool may_use_simd(void)
+{
+ return !in_interrupt();
+}
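
A minimal sketch of how a caller might pair this helper with an
arch-specific SIMD region; kernel_neon_begin()/kernel_neon_end() and the
transform_* functions are illustrative, not part of this patch:

	static void do_transform(u8 *dst, const u8 *src, size_t len)
	{
		if (may_use_simd()) {
			kernel_neon_begin();		/* arch SIMD enter */
			transform_simd(dst, src, len);	/* hypothetical fast path */
			kernel_neon_end();
		} else {
			transform_scalar(dst, src, len); /* hypothetical fallback */
		}
	}
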
return (val + c->high_bits) & ~rhs;
}
+#ifndef zero_bytemask
+#ifdef CONFIG_64BIT
+#define zero_bytemask(mask) (~0ul << fls64(mask))
+#else
+#define zero_bytemask(mask) (~0ul << fls(mask))
+#endif /* CONFIG_64BIT */
+#endif /* zero_bytemask */
+
#endif /* _ASM_WORD_AT_A_TIME_H */
--- /dev/null
+/*
+ * Shared async block cipher helpers
+ */
+
+#ifndef _CRYPTO_ABLK_HELPER_H
+#define _CRYPTO_ABLK_HELPER_H
+
+#include <linux/crypto.h>
+#include <linux/kernel.h>
+#include <crypto/cryptd.h>
+
+struct async_helper_ctx {
+ struct cryptd_ablkcipher *cryptd_tfm;
+};
+
+extern int ablk_set_key(struct crypto_ablkcipher *tfm, const u8 *key,
+ unsigned int key_len);
+
+extern int __ablk_encrypt(struct ablkcipher_request *req);
+
+extern int ablk_encrypt(struct ablkcipher_request *req);
+
+extern int ablk_decrypt(struct ablkcipher_request *req);
+
+extern void ablk_exit(struct crypto_tfm *tfm);
+
+extern int ablk_init_common(struct crypto_tfm *tfm, const char *drv_name);
+
+extern int ablk_init(struct crypto_tfm *tfm);
+
+#endif /* _CRYPTO_ABLK_HELPER_H */
return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
}
-#endif /* _CRYPTO_ALGAPI_H */
+noinline unsigned long __crypto_memneq(const void *a, const void *b, size_t size);
+
+/**
+ * crypto_memneq - Compare two areas of memory without leaking
+ * timing information.
+ *
+ * @a: One area of memory
+ * @b: Another area of memory
+ * @size: The size of the area.
+ *
+ * Returns 0 when data is equal, 1 otherwise.
+ */
+static inline int crypto_memneq(const void *a, const void *b, size_t size)
+{
+ return __crypto_memneq(a, b, size) != 0UL ? 1 : 0;
+}
+#endif /* _CRYPTO_ALGAPI_H */
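
A hedged usage sketch: comparing an authentication tag with the
constant-time helper, so that control flow does not depend on how many
leading bytes matched (the tag buffers and authsize are hypothetical
caller state):

	/* Reject a forged MAC without leaking match length via timing. */
	if (crypto_memneq(received_tag, computed_tag, authsize))
		return -EBADMSG;
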
__be32 enckeylen;
};
-#endif /* _CRYPTO_AUTHENC_H */
+struct crypto_authenc_keys {
+ const u8 *authkey;
+ const u8 *enckey;
+
+ unsigned int authkeylen;
+ unsigned int enckeylen;
+};
+int crypto_authenc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key,
+ unsigned int keylen);
+
+#endif /* _CRYPTO_AUTHENC_H */
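
A sketch, assuming a setkey path that receives an RTA-encoded authenc()
key (key and keylen are the caller's parameters):

	struct crypto_authenc_keys keys;
	int err;

	/* Split the combined key into its two halves. */
	err = crypto_authenc_extractkeys(&keys, key, keylen);
	if (err)
		return err;
	/* keys.authkey/keys.authkeylen feed the inner hash;
	 * keys.enckey/keys.enckeylen feed the block cipher. */
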
--- /dev/null
+/*
+ * Hash Info: Hash algorithms information
+ *
+ * Copyright (c) 2013 Dmitry Kasatkin <d.kasatkin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#ifndef _CRYPTO_HASH_INFO_H
+#define _CRYPTO_HASH_INFO_H
+
+#include <crypto/sha.h>
+#include <crypto/md5.h>
+
+#include <uapi/linux/hash_info.h>
+
+/* not defined in include/crypto/ */
+#define RMD128_DIGEST_SIZE 16
+#define RMD160_DIGEST_SIZE 20
+#define RMD256_DIGEST_SIZE 32
+#define RMD320_DIGEST_SIZE 40
+
+/* not defined in include/crypto/ */
+#define WP512_DIGEST_SIZE 64
+#define WP384_DIGEST_SIZE 48
+#define WP256_DIGEST_SIZE 32
+
+/* not defined in include/crypto/ */
+#define TGR128_DIGEST_SIZE 16
+#define TGR160_DIGEST_SIZE 20
+#define TGR192_DIGEST_SIZE 24
+
+extern const char *const hash_algo_name[HASH_ALGO__LAST];
+extern const int hash_digest_size[HASH_ALGO__LAST];
+
+#endif /* _CRYPTO_HASH_INFO_H */
#define _LINUX_PUBLIC_KEY_H
#include <linux/mpi.h>
+#include <crypto/hash_info.h>
enum pkey_algo {
PKEY_ALGO_DSA,
PKEY_ALGO__LAST
};
-extern const char *const pkey_algo[PKEY_ALGO__LAST];
+extern const char *const pkey_algo_name[PKEY_ALGO__LAST];
+extern const struct public_key_algorithm *pkey_algo[PKEY_ALGO__LAST];
-enum pkey_hash_algo {
- PKEY_HASH_MD4,
- PKEY_HASH_MD5,
- PKEY_HASH_SHA1,
- PKEY_HASH_RIPE_MD_160,
- PKEY_HASH_SHA256,
- PKEY_HASH_SHA384,
- PKEY_HASH_SHA512,
- PKEY_HASH_SHA224,
- PKEY_HASH__LAST
-};
-
-extern const char *const pkey_hash_algo[PKEY_HASH__LAST];
+/* asymmetric key implementation supports only up to SHA224 */
+#define PKEY_HASH__LAST (HASH_ALGO_SHA224 + 1)
enum pkey_id_type {
PKEY_ID_PGP, /* OpenPGP generated key ID */
PKEY_ID_TYPE__LAST
};
-extern const char *const pkey_id_type[PKEY_ID_TYPE__LAST];
+extern const char *const pkey_id_type_name[PKEY_ID_TYPE__LAST];
/*
* Cryptographic data for the public-key subtype of the asymmetric key type.
#define PKEY_CAN_DECRYPT 0x02
#define PKEY_CAN_SIGN 0x04
#define PKEY_CAN_VERIFY 0x08
+ enum pkey_algo pkey_algo : 8;
enum pkey_id_type id_type : 8;
union {
MPI mpi[5];
u8 *digest;
u8 digest_size; /* Number of bytes in digest */
u8 nr_mpi; /* Occupancy of mpi[] */
- enum pkey_hash_algo pkey_hash_algo : 8;
+ enum pkey_algo pkey_algo : 8;
+ enum hash_algo pkey_hash_algo : 8;
union {
MPI mpi[2];
struct {
{
sg_set_page(&sg1[num - 1], (void *)sg2, 0, 0);
sg1[num - 1].page_link &= ~0x02;
+ sg1[num - 1].page_link |= 0x01;
}
static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
if (sg_is_last(sg))
return NULL;
- return (++sg)->length ? sg : (void *)sg_page(sg);
+ return (++sg)->length ? sg : sg_chain_ptr(sg);
}
static inline void scatterwalk_crypto_chain(struct scatterlist *head,
{0x1002, 0x9645, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO2|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9647, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
{0x1002, 0x9648, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
- {0x1002, 0x9649, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
+ {0x1002, 0x9649, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO2|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
{0x1002, 0x964a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x964b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x964c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
* @offset: The current GPU offset, which can have different meanings
* depending on the memory type. For SYSTEM type memory, it should be 0.
* @cur_placement: Hint of current placement.
+ * @wu_mutex: Wait unreserved mutex.
*
* Base class for TTM buffer object, that deals with data placement and CPU
* mappings. GPU mappings are really up to the driver, but for simpler GPUs
struct reservation_object *resv;
struct reservation_object ttm_resv;
+ struct mutex wu_mutex;
};
/**
size_t count, loff_t *f_pos, bool write);
extern void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
-
+extern int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo);
#endif
/**
* function ttm_eu_reserve_buffers
*
- * @ticket: [out] ww_acquire_ctx returned by call.
+ * @ticket: [out] ww_acquire_ctx filled in by call, or NULL if only
+ * non-blocking reserves should be tried.
* @list: thread private list of ttm_validate_buffer structs.
*
* Tries to reserve bos pointed to by the list entries for validation.
#include <drm/drm_hashtab.h>
#include <linux/kref.h>
#include <linux/rcupdate.h>
+#include <linux/dma-buf.h>
#include <ttm/ttm_memory.h>
/**
ttm_fence_type,
ttm_buffer_type,
ttm_lock_type,
+ ttm_prime_type,
ttm_driver_type0 = 256,
ttm_driver_type1,
ttm_driver_type2,
enum ttm_ref_type ref_type);
};
+
+/**
+ * struct ttm_prime_object - Modified base object that is prime-aware
+ *
+ * @base: struct ttm_base_object that we derive from
+ * @mutex: Mutex protecting the @dma_buf member.
+ * @size: Size of the dma_buf associated with this object
+ * @real_type: Type of the underlying object. Needed since we're setting
+ * the value of @base::object_type to ttm_prime_type
+ * @dma_buf: Non ref-counted pointer to a struct dma_buf created from this
+ * object.
+ * @refcount_release: The underlying object's release method. Needed since
+ * we set @base::refcount_release to our own release method.
+ */
+
+struct ttm_prime_object {
+ struct ttm_base_object base;
+ struct mutex mutex;
+ size_t size;
+ enum ttm_object_type real_type;
+ struct dma_buf *dma_buf;
+ void (*refcount_release) (struct ttm_base_object **);
+};
+
/**
* ttm_base_object_init
*
/**
* ttm_object device init - initialize a struct ttm_object_device
*
+ * @mem_glob: struct ttm_mem_global for memory accounting.
* @hash_order: Order of hash table used to hash the base objects.
+ * @ops: DMA buf ops for prime objects of this device.
*
* This function is typically called on device initialization to prepare
* data structures needed for ttm base and ref objects.
*/
-extern struct ttm_object_device *ttm_object_device_init
- (struct ttm_mem_global *mem_glob, unsigned int hash_order);
+extern struct ttm_object_device *
+ttm_object_device_init(struct ttm_mem_global *mem_glob,
+ unsigned int hash_order,
+ const struct dma_buf_ops *ops);
/**
* ttm_object_device_release - release data held by a ttm_object_device
#define ttm_base_object_kfree(__object, __base)\
kfree_rcu(__object, __base.rhead)
+
+extern int ttm_prime_object_init(struct ttm_object_file *tfile,
+ size_t size,
+ struct ttm_prime_object *prime,
+ bool shareable,
+ enum ttm_object_type type,
+ void (*refcount_release)
+ (struct ttm_base_object **),
+ void (*ref_obj_release)
+ (struct ttm_base_object *,
+ enum ttm_ref_type ref_type));
+
+static inline enum ttm_object_type
+ttm_base_object_type(struct ttm_base_object *base)
+{
+ return (base->object_type == ttm_prime_type) ?
+ container_of(base, struct ttm_prime_object, base)->real_type :
+ base->object_type;
+}
+extern int ttm_prime_fd_to_handle(struct ttm_object_file *tfile,
+ int fd, u32 *handle);
+extern int ttm_prime_handle_to_fd(struct ttm_object_file *tfile,
+ uint32_t handle, uint32_t flags,
+ int *prime_fd);
+
+#define ttm_prime_object_kfree(__obj, __prime) \
+ kfree_rcu(__obj, __prime.base.rhead)
#endif
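
As a rough illustration of the new PRIME entry points (error handling
abbreviated; this is a sketch, not code from the patch):

	int ret, fd;
	u32 handle2;

	/* Export a shareable base object as a dma-buf fd ... */
	ret = ttm_prime_handle_to_fd(tfile, handle, DRM_CLOEXEC, &fd);
	if (ret)
		return ret;

	/* ... and import it back; both handles name the same object. */
	ret = ttm_prime_fd_to_handle(tfile, fd, &handle2);
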
--- /dev/null
+/* Big capacity key type.
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _KEYS_BIG_KEY_TYPE_H
+#define _KEYS_BIG_KEY_TYPE_H
+
+#include <linux/key-type.h>
+
+extern struct key_type key_type_big_key;
+
+extern int big_key_instantiate(struct key *key, struct key_preparsed_payload *prep);
+extern void big_key_revoke(struct key *key);
+extern void big_key_destroy(struct key *key);
+extern void big_key_describe(const struct key *big_key, struct seq_file *m);
+extern long big_key_read(const struct key *key, char __user *buffer, size_t buflen);
+
+#endif /* _KEYS_BIG_KEY_TYPE_H */
/* Keyring key type
*
- * Copyright (C) 2008 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2008, 2013 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
#define _KEYS_KEYRING_TYPE_H
#include <linux/key.h>
-#include <linux/rcupdate.h>
-
-/*
- * the keyring payload contains a list of the keys to which the keyring is
- * subscribed
- */
-struct keyring_list {
- struct rcu_head rcu; /* RCU deletion hook */
- unsigned short maxkeys; /* max keys this list can hold */
- unsigned short nkeys; /* number of keys currently held */
- unsigned short delkey; /* key to be unlinked by RCU */
- struct key __rcu *keys[0];
-};
-
+#include <linux/assoc_array.h>
#endif /* _KEYS_KEYRING_TYPE_H */
--- /dev/null
+/* System keyring containing trusted public keys.
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _KEYS_SYSTEM_KEYRING_H
+#define _KEYS_SYSTEM_KEYRING_H
+
+#ifdef CONFIG_SYSTEM_TRUSTED_KEYRING
+
+#include <linux/key.h>
+
+extern struct key *system_trusted_keyring;
+
+#endif
+
+#endif /* _KEYS_SYSTEM_KEYRING_H */
#include <acpi/acpi_numa.h>
#include <asm/acpi.h>
+static inline acpi_handle acpi_device_handle(struct acpi_device *adev)
+{
+ return adev ? adev->handle : NULL;
+}
+
+#define ACPI_COMPANION(dev) ((dev)->acpi_node.companion)
+#define ACPI_COMPANION_SET(dev, adev) ACPI_COMPANION(dev) = (adev)
+#define ACPI_HANDLE(dev) acpi_device_handle(ACPI_COMPANION(dev))
+
+static inline const char *acpi_dev_name(struct acpi_device *adev)
+{
+ return dev_name(&adev->dev);
+}
+
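A driver-side sketch of the new companion accessors (illustrative only;
dev is the driver's struct device):

	struct acpi_device *adev = ACPI_COMPANION(dev);
	acpi_handle handle = ACPI_HANDLE(dev);	/* NULL without a companion */

	if (adev)
		dev_info(dev, "ACPI companion: %s\n", acpi_dev_name(adev));
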
enum acpi_irq_model_id {
ACPI_IRQ_MODEL_PIC = 0,
ACPI_IRQ_MODEL_IOAPIC,
#define acpi_disabled 1
+#define ACPI_COMPANION(dev) (NULL)
+#define ACPI_COMPANION_SET(dev, adev) do { } while (0)
+#define ACPI_HANDLE(dev) (NULL)
+
+static inline const char *acpi_dev_name(struct acpi_device *adev)
+{
+ return NULL;
+}
+
static inline void acpi_early_init(void) { }
static inline int early_acpi_boot_init(void)
--- /dev/null
+/* Generic associative array implementation.
+ *
+ * See Documentation/assoc_array.txt for information.
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _LINUX_ASSOC_ARRAY_H
+#define _LINUX_ASSOC_ARRAY_H
+
+#ifdef CONFIG_ASSOCIATIVE_ARRAY
+
+#include <linux/types.h>
+
+#define ASSOC_ARRAY_KEY_CHUNK_SIZE BITS_PER_LONG /* Key data retrieved in chunks of this size */
+
+/*
+ * Generic associative array.
+ */
+struct assoc_array {
+ struct assoc_array_ptr *root; /* The node at the root of the tree */
+ unsigned long nr_leaves_on_tree;
+};
+
+/*
+ * Operations on objects and index keys for use by array manipulation routines.
+ */
+struct assoc_array_ops {
+ /* Method to get a chunk of an index key from caller-supplied data */
+ unsigned long (*get_key_chunk)(const void *index_key, int level);
+
+ /* Method to get a piece of an object's index key */
+ unsigned long (*get_object_key_chunk)(const void *object, int level);
+
+ /* Is this the object we're looking for? */
+ bool (*compare_object)(const void *object, const void *index_key);
+
+ /* At what bit position does an object's key first differ from an
+ * index key? (-1 if they are identical)
+ */
+ int (*diff_objects)(const void *object, const void *index_key);
+
+ /* Method to free an object. */
+ void (*free_object)(void *object);
+};
+
+/*
+ * Access and manipulation functions.
+ */
+struct assoc_array_edit;
+
+static inline void assoc_array_init(struct assoc_array *array)
+{
+ array->root = NULL;
+ array->nr_leaves_on_tree = 0;
+}
+
+extern int assoc_array_iterate(const struct assoc_array *array,
+ int (*iterator)(const void *object,
+ void *iterator_data),
+ void *iterator_data);
+extern void *assoc_array_find(const struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key);
+extern void assoc_array_destroy(struct assoc_array *array,
+ const struct assoc_array_ops *ops);
+extern struct assoc_array_edit *assoc_array_insert(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key,
+ void *object);
+extern void assoc_array_insert_set_object(struct assoc_array_edit *edit,
+ void *object);
+extern struct assoc_array_edit *assoc_array_delete(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key);
+extern struct assoc_array_edit *assoc_array_clear(struct assoc_array *array,
+ const struct assoc_array_ops *ops);
+extern void assoc_array_apply_edit(struct assoc_array_edit *edit);
+extern void assoc_array_cancel_edit(struct assoc_array_edit *edit);
+extern int assoc_array_gc(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ bool (*iterator)(void *object, void *iterator_data),
+ void *iterator_data);
+
+#endif /* CONFIG_ASSOCIATIVE_ARRAY */
+#endif /* _LINUX_ASSOC_ARRAY_H */
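
To make the ops contract concrete, a minimal sketch assuming objects
that embed a single unsigned-long key; struct myobj and all myobj_*
names are hypothetical, the patch only defines the interface:

	struct myobj {
		unsigned long	key;	/* object pointers must have bit 0 clear */
		/* ... payload ... */
	};

	static unsigned long myobj_get_key_chunk(const void *index_key, int level)
	{
		return *(const unsigned long *)index_key; /* key fits one chunk */
	}

	static unsigned long myobj_get_object_key_chunk(const void *object, int level)
	{
		return ((const struct myobj *)object)->key;
	}

	static bool myobj_compare_object(const void *object, const void *index_key)
	{
		return ((const struct myobj *)object)->key ==
			*(const unsigned long *)index_key;
	}

	static int myobj_diff_objects(const void *object, const void *index_key)
	{
		unsigned long diff = ((const struct myobj *)object)->key ^
			*(const unsigned long *)index_key;

		return diff ? __ffs(diff) : -1; /* lowest differing bit, or -1 */
	}

	static void myobj_free_object(void *object)
	{
		kfree(object);
	}

	static const struct assoc_array_ops myobj_ops = {
		.get_key_chunk		= myobj_get_key_chunk,
		.get_object_key_chunk	= myobj_get_object_key_chunk,
		.compare_object		= myobj_compare_object,
		.diff_objects		= myobj_diff_objects,
		.free_object		= myobj_free_object,
	};

Insertion is then two-phase, preallocating an edit script and applying
it separately:

	struct assoc_array_edit *edit;

	edit = assoc_array_insert(&arr, &myobj_ops, &obj->key, obj);
	if (IS_ERR(edit))
		return PTR_ERR(edit);
	assoc_array_apply_edit(edit);
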
--- /dev/null
+/* Private definitions for the generic associative array implementation.
+ *
+ * See Documentation/assoc_array.txt for information.
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _LINUX_ASSOC_ARRAY_PRIV_H
+#define _LINUX_ASSOC_ARRAY_PRIV_H
+
+#ifdef CONFIG_ASSOCIATIVE_ARRAY
+
+#include <linux/assoc_array.h>
+
+#define ASSOC_ARRAY_FAN_OUT 16 /* Number of slots per node */
+#define ASSOC_ARRAY_FAN_MASK (ASSOC_ARRAY_FAN_OUT - 1)
+#define ASSOC_ARRAY_LEVEL_STEP (ilog2(ASSOC_ARRAY_FAN_OUT))
+#define ASSOC_ARRAY_LEVEL_STEP_MASK (ASSOC_ARRAY_LEVEL_STEP - 1)
+#define ASSOC_ARRAY_KEY_CHUNK_MASK (ASSOC_ARRAY_KEY_CHUNK_SIZE - 1)
+#define ASSOC_ARRAY_KEY_CHUNK_SHIFT (ilog2(BITS_PER_LONG))
+
+/*
+ * Undefined type representing a pointer with type information in the bottom
+ * two bits.
+ */
+struct assoc_array_ptr;
+
+/*
+ * An N-way node in the tree.
+ *
+ * Each slot contains one of four things:
+ *
+ * (1) Nothing (NULL).
+ *
+ * (2) A leaf object (pointer type 0).
+ *
+ * (3) A next-level node (pointer type 1, subtype 0).
+ *
+ * (4) A shortcut (pointer type 1, subtype 1).
+ *
+ * The tree is optimised for search-by-ID, but permits reasonable iteration
+ * also.
+ *
+ * The tree is navigated by constructing an index key consisting of an array of
+ * segments, where each segment is ilog2(ASSOC_ARRAY_FAN_OUT) bits in size.
+ *
+ * The segments correspond to levels of the tree (the first segment is used at
+ * level 0, the second at level 1, etc.).
+ */
+struct assoc_array_node {
+ struct assoc_array_ptr *back_pointer;
+ u8 parent_slot;
+ struct assoc_array_ptr *slots[ASSOC_ARRAY_FAN_OUT];
+ unsigned long nr_leaves_on_branch;
+};
+
+/*
+ * A shortcut through the index space out to where a collection of nodes/leaves
+ * with the same IDs live.
+ */
+struct assoc_array_shortcut {
+ struct assoc_array_ptr *back_pointer;
+ int parent_slot;
+ int skip_to_level;
+ struct assoc_array_ptr *next_node;
+ unsigned long index_key[];
+};
+
+/*
+ * Preallocation cache.
+ */
+struct assoc_array_edit {
+ struct rcu_head rcu;
+ struct assoc_array *array;
+ const struct assoc_array_ops *ops;
+ const struct assoc_array_ops *ops_for_excised_subtree;
+ struct assoc_array_ptr *leaf;
+ struct assoc_array_ptr **leaf_p;
+ struct assoc_array_ptr *dead_leaf;
+ struct assoc_array_ptr *new_meta[3];
+ struct assoc_array_ptr *excised_meta[1];
+ struct assoc_array_ptr *excised_subtree;
+ struct assoc_array_ptr **set_backpointers[ASSOC_ARRAY_FAN_OUT];
+ struct assoc_array_ptr *set_backpointers_to;
+ struct assoc_array_node *adjust_count_on;
+ long adjust_count_by;
+ struct {
+ struct assoc_array_ptr **ptr;
+ struct assoc_array_ptr *to;
+ } set[2];
+ struct {
+ u8 *p;
+ u8 to;
+ } set_parent_slot[1];
+ u8 segment_cache[ASSOC_ARRAY_FAN_OUT + 1];
+};
+
+/*
+ * Internal tree member pointers are marked in the bottom one or two bits to
+ * indicate what type they are so that we don't have to look behind every
+ * pointer to see what it points to.
+ *
+ * We provide functions to test type annotations and to create and translate
+ * the annotated pointers.
+ */
+#define ASSOC_ARRAY_PTR_TYPE_MASK 0x1UL
+#define ASSOC_ARRAY_PTR_LEAF_TYPE 0x0UL /* Points to leaf (or nowhere) */
+#define ASSOC_ARRAY_PTR_META_TYPE 0x1UL /* Points to node or shortcut */
+#define ASSOC_ARRAY_PTR_SUBTYPE_MASK 0x2UL
+#define ASSOC_ARRAY_PTR_NODE_SUBTYPE 0x0UL
+#define ASSOC_ARRAY_PTR_SHORTCUT_SUBTYPE 0x2UL
+
+static inline bool assoc_array_ptr_is_meta(const struct assoc_array_ptr *x)
+{
+ return (unsigned long)x & ASSOC_ARRAY_PTR_TYPE_MASK;
+}
+static inline bool assoc_array_ptr_is_leaf(const struct assoc_array_ptr *x)
+{
+ return !assoc_array_ptr_is_meta(x);
+}
+static inline bool assoc_array_ptr_is_shortcut(const struct assoc_array_ptr *x)
+{
+ return (unsigned long)x & ASSOC_ARRAY_PTR_SUBTYPE_MASK;
+}
+static inline bool assoc_array_ptr_is_node(const struct assoc_array_ptr *x)
+{
+ return !assoc_array_ptr_is_shortcut(x);
+}
+
+static inline void *assoc_array_ptr_to_leaf(const struct assoc_array_ptr *x)
+{
+ return (void *)((unsigned long)x & ~ASSOC_ARRAY_PTR_TYPE_MASK);
+}
+
+static inline
+unsigned long __assoc_array_ptr_to_meta(const struct assoc_array_ptr *x)
+{
+ return (unsigned long)x &
+ ~(ASSOC_ARRAY_PTR_SUBTYPE_MASK | ASSOC_ARRAY_PTR_TYPE_MASK);
+}
+static inline
+struct assoc_array_node *assoc_array_ptr_to_node(const struct assoc_array_ptr *x)
+{
+ return (struct assoc_array_node *)__assoc_array_ptr_to_meta(x);
+}
+static inline
+struct assoc_array_shortcut *assoc_array_ptr_to_shortcut(const struct assoc_array_ptr *x)
+{
+ return (struct assoc_array_shortcut *)__assoc_array_ptr_to_meta(x);
+}
+
+static inline
+struct assoc_array_ptr *__assoc_array_x_to_ptr(const void *p, unsigned long t)
+{
+ return (struct assoc_array_ptr *)((unsigned long)p | t);
+}
+static inline
+struct assoc_array_ptr *assoc_array_leaf_to_ptr(const void *p)
+{
+ return __assoc_array_x_to_ptr(p, ASSOC_ARRAY_PTR_LEAF_TYPE);
+}
+static inline
+struct assoc_array_ptr *assoc_array_node_to_ptr(const struct assoc_array_node *p)
+{
+ return __assoc_array_x_to_ptr(
+ p, ASSOC_ARRAY_PTR_META_TYPE | ASSOC_ARRAY_PTR_NODE_SUBTYPE);
+}
+static inline
+struct assoc_array_ptr *assoc_array_shortcut_to_ptr(const struct assoc_array_shortcut *p)
+{
+ return __assoc_array_x_to_ptr(
+ p, ASSOC_ARRAY_PTR_META_TYPE | ASSOC_ARRAY_PTR_SHORTCUT_SUBTYPE);
+}
+
+#endif /* CONFIG_ASSOCIATIVE_ARRAY */
+#endif /* _LINUX_ASSOC_ARRAY_PRIV_H */
void *lsm_rule;
};
+extern int is_audit_feature_set(int which);
+
extern int __init audit_register_class(int class, unsigned *list);
extern int audit_classify_syscall(int abi, unsigned syscall);
extern int audit_classify_arch(int arch);
extern void __audit_ipc_obj(struct kern_ipc_perm *ipcp);
extern void __audit_ipc_set_perm(unsigned long qbytes, uid_t uid, gid_t gid, umode_t mode);
-extern int __audit_bprm(struct linux_binprm *bprm);
+extern void __audit_bprm(struct linux_binprm *bprm);
extern int __audit_socketcall(int nargs, unsigned long *args);
extern int __audit_sockaddr(int len, void *addr);
extern void __audit_fd_pair(int fd1, int fd2);
if (unlikely(!audit_dummy_context()))
__audit_ipc_set_perm(qbytes, uid, gid, mode);
}
-static inline int audit_bprm(struct linux_binprm *bprm)
+static inline void audit_bprm(struct linux_binprm *bprm)
{
if (unlikely(!audit_dummy_context()))
- return __audit_bprm(bprm);
- return 0;
+ __audit_bprm(bprm);
}
static inline int audit_socketcall(int nargs, unsigned long *args)
{
static inline void audit_ipc_set_perm(unsigned long qbytes, uid_t uid,
gid_t gid, umode_t mode)
{ }
-static inline int audit_bprm(struct linux_binprm *bprm)
-{
- return 0;
-}
+static inline void audit_bprm(struct linux_binprm *bprm)
+{ }
static inline int audit_socketcall(int nargs, unsigned long *args)
{
return 0;
#include <uapi/linux/auxvec.h>
-#define AT_VECTOR_SIZE_BASE 19 /* NEW_AUX_ENT entries in auxiliary table */
+#define AT_VECTOR_SIZE_BASE 20 /* NEW_AUX_ENT entries in auxiliary table */
/* number of "#define AT_.*" above, minus {AT_NULL, AT_IGNORE, AT_NOTELF} */
#endif /* _LINUX_AUXVEC_H */
(1 << QUEUE_FLAG_SAME_COMP) | \
(1 << QUEUE_FLAG_ADD_RANDOM))
+#define QUEUE_FLAG_MQ_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
+ (1 << QUEUE_FLAG_SAME_COMP))
+
static inline void queue_lockdep_assert_held(struct request_queue *q)
{
if (q->queue_lock)
#endif
-#define uninitialized_var(x) x
-
#ifndef __HAVE_BUILTIN_BSWAP16__
/* icc has this, but it's called _bswap16 */
#define __HAVE_BUILTIN_BSWAP16__
/* The hash is always the low bits of hash_len */
#ifdef __LITTLE_ENDIAN
#define HASH_LEN_DECLARE u32 hash; u32 len;
+ #define bytemask_from_count(cnt) (~(~0ul << (cnt)*8))
#else
#define HASH_LEN_DECLARE u32 len; u32 hash;
+ #define bytemask_from_count(cnt) (~(~0ul >> (cnt)*8))
#endif
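
To illustrate the new macro: on a 64-bit little-endian build,
bytemask_from_count(3) evaluates to ~(~0ul << 24) = 0x0000000000ffffff,
keeping the first three bytes of a name loaded as one word; the
big-endian variant keeps the three most significant bytes instead. A
hedged sketch of the intended use:

	unsigned long word = load_unaligned_zeropad(name);
	unsigned long mask = bytemask_from_count(len);	/* len < sizeof(long) */

	word &= mask;	/* only the first 'len' name bytes survive */
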
/*
unsigned long segment_boundary_mask;
};
+struct acpi_device;
+
struct acpi_dev_node {
#ifdef CONFIG_ACPI
- void *handle;
+ struct acpi_device *companion;
#endif
};
return container_of(kobj, struct device, kobj);
}
-#ifdef CONFIG_ACPI
-#define ACPI_HANDLE(dev) ((dev)->acpi_node.handle)
-#define ACPI_HANDLE_SET(dev, _handle_) (dev)->acpi_node.handle = (_handle_)
-#else
-#define ACPI_HANDLE(dev) (NULL)
-#define ACPI_HANDLE_SET(dev, _handle_) do { } while (0)
-#endif
-
/* Get the wakeup routines, which depend on struct device */
#include <linux/pm_wakeup.h>
/**
* enum dma_status - DMA transaction status
- * @DMA_SUCCESS: transaction completed successfully
+ * @DMA_COMPLETE: transaction completed
* @DMA_IN_PROGRESS: transaction not yet processed
* @DMA_PAUSED: transaction is paused
* @DMA_ERROR: transaction failed
*/
enum dma_status {
- DMA_SUCCESS,
+ DMA_COMPLETE,
DMA_IN_PROGRESS,
DMA_PAUSED,
DMA_ERROR,
* @DMA_CTRL_ACK - if clear, the descriptor cannot be reused until the client
* acknowledges receipt, i.e. has had a chance to establish any dependency
* chains
- * @DMA_COMPL_SKIP_SRC_UNMAP - set to disable dma-unmapping the source buffer(s)
- * @DMA_COMPL_SKIP_DEST_UNMAP - set to disable dma-unmapping the destination(s)
- * @DMA_COMPL_SRC_UNMAP_SINGLE - set to do the source dma-unmapping as single
- * (if not set, do the source dma-unmapping as page)
- * @DMA_COMPL_DEST_UNMAP_SINGLE - set to do the destination dma-unmapping as single
- * (if not set, do the destination dma-unmapping as page)
* @DMA_PREP_PQ_DISABLE_P - prevent generation of P while generating Q
* @DMA_PREP_PQ_DISABLE_Q - prevent generation of Q while generating P
* @DMA_PREP_CONTINUE - indicate to a driver that it is reusing buffers as
enum dma_ctrl_flags {
DMA_PREP_INTERRUPT = (1 << 0),
DMA_CTRL_ACK = (1 << 1),
- DMA_COMPL_SKIP_SRC_UNMAP = (1 << 2),
- DMA_COMPL_SKIP_DEST_UNMAP = (1 << 3),
- DMA_COMPL_SRC_UNMAP_SINGLE = (1 << 4),
- DMA_COMPL_DEST_UNMAP_SINGLE = (1 << 5),
- DMA_PREP_PQ_DISABLE_P = (1 << 6),
- DMA_PREP_PQ_DISABLE_Q = (1 << 7),
- DMA_PREP_CONTINUE = (1 << 8),
- DMA_PREP_FENCE = (1 << 9),
+ DMA_PREP_PQ_DISABLE_P = (1 << 2),
+ DMA_PREP_PQ_DISABLE_Q = (1 << 3),
+ DMA_PREP_CONTINUE = (1 << 4),
+ DMA_PREP_FENCE = (1 << 5),
};
/**
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
typedef void (*dma_async_tx_callback)(void *dma_async_param);
+
+struct dmaengine_unmap_data {
+ u8 to_cnt;
+ u8 from_cnt;
+ u8 bidi_cnt;
+ struct device *dev;
+ struct kref kref;
+ size_t len;
+ dma_addr_t addr[0];
+};
+
/**
* struct dma_async_tx_descriptor - async transaction descriptor
* ---dma generic offload fields---
dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
dma_async_tx_callback callback;
void *callback_param;
+ struct dmaengine_unmap_data *unmap;
#ifdef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
struct dma_async_tx_descriptor *next;
struct dma_async_tx_descriptor *parent;
#endif
};
+#ifdef CONFIG_DMA_ENGINE
+static inline void dma_set_unmap(struct dma_async_tx_descriptor *tx,
+ struct dmaengine_unmap_data *unmap)
+{
+ kref_get(&unmap->kref);
+ tx->unmap = unmap;
+}
+
+struct dmaengine_unmap_data *
+dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags);
+void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap);
+#else
+static inline void dma_set_unmap(struct dma_async_tx_descriptor *tx,
+ struct dmaengine_unmap_data *unmap)
+{
+}
+static inline struct dmaengine_unmap_data *
+dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags)
+{
+ return NULL;
+}
+static inline void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap)
+{
+}
+#endif
+
+static inline void dma_descriptor_unmap(struct dma_async_tx_descriptor *tx)
+{
+ if (tx->unmap) {
+ dmaengine_unmap_put(tx->unmap);
+ tx->unmap = NULL;
+ }
+}
+
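A sketch of the intended driver-side flow under these new hooks; the
page/offset names are hypothetical and failure paths are abbreviated:

	struct dmaengine_unmap_data *unmap;

	unmap = dmaengine_get_unmap_data(dev, 2, GFP_NOWAIT);
	if (!unmap)
		return -ENOMEM;

	unmap->to_cnt = 1;		/* addr[0] is DMA_TO_DEVICE */
	unmap->from_cnt = 1;		/* addr[1] is DMA_FROM_DEVICE */
	unmap->len = len;
	unmap->addr[0] = dma_map_page(dev, src_page, src_off, len,
				      DMA_TO_DEVICE);
	unmap->addr[1] = dma_map_page(dev, dst_page, dst_off, len,
				      DMA_FROM_DEVICE);

	dma_set_unmap(tx, unmap);	/* tx now holds a reference */
	dmaengine_unmap_put(unmap);	/* drop the allocation reference */

When the transaction completes, dma_descriptor_unmap() drops the
descriptor's reference, and the final put tears down both mappings.
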
#ifndef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
static inline void txd_lock(struct dma_async_tx_descriptor *txd)
{
{
if (last_complete <= last_used) {
if ((cookie <= last_complete) || (cookie > last_used))
- return DMA_SUCCESS;
+ return DMA_COMPLETE;
} else {
if ((cookie <= last_complete) && (cookie > last_used))
- return DMA_SUCCESS;
+ return DMA_COMPLETE;
}
return DMA_IN_PROGRESS;
}
}
static inline enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
{
- return DMA_SUCCESS;
+ return DMA_COMPLETE;
}
static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
{
- return DMA_SUCCESS;
+ return DMA_COMPLETE;
}
static inline void dma_issue_pending_all(void)
{
struct efi_variable var;
struct list_head list;
struct kobject kobj;
+ bool scanning;
+ bool deleting;
};
#if defined(CONFIG_EFI_VARS) || defined(CONFIG_EFI_VARS_MODULE)
int efivars_sysfs_init(void);
+#define EFIVARS_DATA_SIZE_MAX 1024
+
#endif /* CONFIG_EFI_VARS */
#endif /* _LINUX_EFI_H */
extern int simple_write_end(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *page, void *fsdata);
+extern int always_delete_dentry(const struct dentry *);
extern struct inode *alloc_anon_inode(struct super_block *);
+extern const struct dentry_operations simple_dentry_operations;
extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned int flags);
extern ssize_t generic_read_dir(struct file *, char __user *, size_t, loff_t *);
#ifdef CONFIG_PERF_EVENTS
int perf_refcount;
struct hlist_head __percpu *perf_events;
+
+ int (*perf_perm)(struct ftrace_event_call *,
+ struct perf_event *);
#endif
};
} \
early_initcall(trace_init_flags_##name);
+#define __TRACE_EVENT_PERF_PERM(name, expr...) \
+ static int perf_perm_##name(struct ftrace_event_call *tp_event, \
+ struct perf_event *p_event) \
+ { \
+ return ({ expr; }); \
+ } \
+ static int __init trace_init_perf_perm_##name(void) \
+ { \
+ event_##name.perf_perm = &perf_perm_##name; \
+ return 0; \
+ } \
+ early_initcall(trace_init_perf_perm_##name);
+
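A hedged example of attaching a permission check to an event with the
new macro; the event name is hypothetical, while is_sampling_event() is
an existing perf helper and p_event is in scope inside expr:

	__TRACE_EVENT_PERF_PERM(my_event,
		is_sampling_event(p_event) ? -EPERM : 0);
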
#define PERF_MAX_TRACE_SIZE 2048
#define MAX_FILTER_STR_VAL 256 /* Should handle KSYM_SYMBOL_LEN */
#define __LINUX_GPIO_DRIVER_H
#include <linux/types.h>
+#include <linux/module.h>
struct device;
struct gpio_desc;
+struct of_phandle_args;
+struct device_node;
struct seq_file;
/**
int gpiod_lock_as_irq(struct gpio_desc *desc);
void gpiod_unlock_as_irq(struct gpio_desc *desc);
+enum gpio_lookup_flags {
+ GPIO_ACTIVE_HIGH = (0 << 0),
+ GPIO_ACTIVE_LOW = (1 << 0),
+ GPIO_OPEN_DRAIN = (1 << 1),
+ GPIO_OPEN_SOURCE = (1 << 2),
+};
+
/**
* Lookup table for associating GPIOs to specific devices and functions using
* platform data.
*/
unsigned int idx;
/*
- * mask of GPIOF_* values
+ * mask of GPIO_* values
*/
- unsigned long flags;
+ enum gpio_lookup_flags flags;
};
/*
#include <linux/hid.h>
#include <linux/hid-sensor-ids.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/trigger.h>
/**
* struct hid_sensor_hub_attribute_info - Attribute info
s32 units;
s32 unit_expo;
s32 size;
+ s32 logical_minimum;
+ s32 logical_maximum;
};
/**
struct platform_device *pdev;
unsigned usage_id;
bool data_ready;
+ struct iio_trigger *trigger;
struct hid_sensor_hub_attribute_info poll;
struct hid_sensor_hub_attribute_info report_state;
struct hid_sensor_hub_attribute_info power_state;
#define HID_USAGE_SENSOR_PROP_REPORT_STATE 0x200316
#define HID_USAGE_SENSOR_PROY_POWER_STATE 0x200319
+/* Power state enumerations */
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_UNDEFINED_ENUM 0x00
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D0_FULL_POWER_ENUM 0x01
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D1_LOW_POWER_ENUM 0x02
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D2_STANDBY_WITH_WAKE_ENUM 0x03
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D3_SLEEP_WITH_WAKE_ENUM 0x04
+#define HID_USAGE_SENSOR_PROP_POWER_STATE_D4_POWER_OFF_ENUM 0x05
+
+/* Report State enumerations */
+#define HID_USAGE_SENSOR_PROP_REPORTING_STATE_NO_EVENTS_ENUM 0x00
+#define HID_USAGE_SENSOR_PROP_REPORTING_STATE_ALL_EVENTS_ENUM 0x01
+
#endif
void hugepage_put_subpool(struct hugepage_subpool *spool);
int PageHuge(struct page *page);
+int PageHeadHuge(struct page *page_head);
void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
bool isolate_huge_page(struct page *page, struct list_head *list);
void putback_active_hugepage(struct page *page);
bool is_hugepage_active(struct page *page);
-void copy_huge_page(struct page *dst, struct page *src);
#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
return 0;
}
+static inline int PageHeadHuge(struct page *page_head)
+{
+ return 0;
+}
+
static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
{
}
return 0;
}
-#define isolate_huge_page(p, l) false
-#define putback_active_hugepage(p) do {} while (0)
-#define is_hugepage_active(x) false
-static inline void copy_huge_page(struct page *dst, struct page *src)
+static inline bool isolate_huge_page(struct page *page, struct list_head *list)
{
+ return false;
}
+#define putback_active_hugepage(p) do {} while (0)
+#define is_hugepage_active(x) false
static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
unsigned long address, unsigned long end, pgprot_t newprot)
#include <uapi/linux/ipv6.h>
#define ipv6_optlen(p) (((p)->hdrlen+1) << 3)
+#define ipv6_authlen(p) (((p)->hdrlen+2) << 2)
/*
* This structure contains configuration options per IPv6 link.
*/
};
typedef enum irqreturn irqreturn_t;
-#define IRQ_RETVAL(x) ((x) != IRQ_NONE)
+#define IRQ_RETVAL(x) ((x) ? IRQ_HANDLED : IRQ_NONE)
#endif
(__x < 0) ? -__x : __x; \
})
-#if defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP)
+#if defined(CONFIG_MMU) && \
+ (defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP))
void might_fault(void);
#else
static inline void might_fault(void) { }
extern size_t vmcoreinfo_size;
extern size_t vmcoreinfo_max_size;
+/* flag to track if kexec reboot is in progress */
+extern bool kexec_in_progress;
+
int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
unsigned long long *crash_size, unsigned long long *crash_base);
int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
const void *data; /* Raw data */
size_t datalen; /* Raw datalen */
size_t quotalen; /* Quota length for proposed payload */
+ bool trusted; /* True if key is trusted */
};
typedef int (*request_key_actor_t)(struct key_construction *key,
*/
size_t def_datalen;
+ /* Default key search algorithm. */
+ unsigned def_lookup_type;
+#define KEYRING_SEARCH_LOOKUP_DIRECT 0x0000 /* Direct lookup by description. */
+#define KEYRING_SEARCH_LOOKUP_ITERATE 0x0001 /* Iterative search. */
+
/* vet a description */
int (*vet_description)(const char *description);
#include <linux/sysctl.h>
#include <linux/rwsem.h>
#include <linux/atomic.h>
+#include <linux/assoc_array.h>
#ifdef __KERNEL__
#include <linux/uidgid.h>
struct keyring_list;
struct keyring_name;
+struct keyring_index_key {
+ struct key_type *type;
+ const char *description;
+ size_t desc_len;
+};
+
/*****************************************************************************/
/*
* key reference with possession attribute handling
typedef struct __key_reference_with_attributes *key_ref_t;
static inline key_ref_t make_key_ref(const struct key *key,
- unsigned long possession)
+ bool possession)
{
return (key_ref_t) ((unsigned long) key | possession);
}
return (struct key *) ((unsigned long) key_ref & ~1UL);
}
-static inline unsigned long is_key_possessed(const key_ref_t key_ref)
+static inline bool is_key_possessed(const key_ref_t key_ref)
{
return (unsigned long) key_ref & 1UL;
}
struct list_head graveyard_link;
struct rb_node serial_node;
};
- struct key_type *type; /* type of key */
struct rw_semaphore sem; /* change vs change sem */
struct key_user *user; /* owner of this key */
void *security; /* security data for this key */
#define KEY_FLAG_NEGATIVE 5 /* set if key is negative */
#define KEY_FLAG_ROOT_CAN_CLEAR 6 /* set if key can be cleared by root without permission */
#define KEY_FLAG_INVALIDATED 7 /* set if key has been invalidated */
+#define KEY_FLAG_TRUSTED 8 /* set if key is trusted */
+#define KEY_FLAG_TRUSTED_ONLY 9 /* set if keyring only accepts links to trusted keys */
- /* the description string
- * - this is used to match a key against search criteria
- * - this should be a printable string
+ /* the key type and key description string
+ * - the desc is used to match a key against search criteria
+ * - it should be a printable string
* - eg: for krb5 AFS, this might be "afs@REDHAT.COM"
*/
- char *description;
+ union {
+ struct keyring_index_key index_key;
+ struct {
+ struct key_type *type; /* type of key */
+ char *description;
+ };
+ };
/* type specific data
* - this is used by the keyring type to index the name
* whatever
*/
union {
- unsigned long value;
- void __rcu *rcudata;
- void *data;
- struct keyring_list __rcu *subscriptions;
- } payload;
+ union {
+ unsigned long value;
+ void __rcu *rcudata;
+ void *data;
+ void *data2[2];
+ } payload;
+ struct assoc_array keys;
+ };
};
extern struct key *key_alloc(struct key_type *type,
#define KEY_ALLOC_IN_QUOTA 0x0000 /* add to quota, reject if would overrun */
#define KEY_ALLOC_QUOTA_OVERRUN 0x0001 /* add to quota, permit even if overrun */
#define KEY_ALLOC_NOT_IN_QUOTA 0x0002 /* not in quota */
+#define KEY_ALLOC_TRUSTED 0x0004 /* Key should be flagged as trusted */
extern void key_revoke(struct key *key);
extern void key_invalidate(struct key *key);
extern void key_put(struct key *key);
-static inline struct key *key_get(struct key *key)
+static inline struct key *__key_get(struct key *key)
{
- if (key)
- atomic_inc(&key->usage);
+ atomic_inc(&key->usage);
return key;
}
+static inline struct key *key_get(struct key *key)
+{
+ return key ? __key_get(key) : key;
+}
+
static inline void key_ref_put(key_ref_t key_ref)
{
key_put(key_ref_to_ptr(key_ref));
ATA_HORKAGE_DUMP_ID = (1 << 16), /* dump IDENTIFY data */
ATA_HORKAGE_MAX_SEC_LBA48 = (1 << 17), /* Set max sects to 65535 */
ATA_HORKAGE_ATAPI_DMADIR = (1 << 18), /* device requires dmadir */
+ ATA_HORKAGE_NO_NCQ_TRIM = (1 << 19), /* don't use queued TRIM */
/* DMA mask for user DMA control: User visible values; DO NOT
renumber */
#define USE_CMPXCHG_LOCKREF \
(IS_ENABLED(CONFIG_ARCH_USE_CMPXCHG_LOCKREF) && \
- IS_ENABLED(CONFIG_SMP) && !BLOATED_SPINLOCKS)
+ IS_ENABLED(CONFIG_SMP) && SPINLOCK_SIZE <= 4)
struct lockref {
union {
return ret;
}
+#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__)
+
+#ifndef mul_u64_u32_shr
+static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
+{
+ return (u64)(((unsigned __int128)a * mul) >> shift);
+}
+#endif /* mul_u64_u32_shr */
+
+#else
+
+#ifndef mul_u64_u32_shr
+static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
+{
+ u32 ah, al;
+ u64 ret;
+
+ al = a;
+ ah = a >> 32;
+
+ ret = ((u64)al * mul) >> shift;
+ if (ah)
+ ret += ((u64)ah * mul) << (32 - shift);
+
+ return ret;
+}
+#endif /* mul_u64_u32_shr */
+
+#endif
+
#endif /* _LINUX_MATH64_H */
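
The fallback relies on splitting a into 32-bit halves: with
a = (ah << 32) + al, (a * mul) >> shift equals
((al * mul) >> shift) + ((ah * mul) << (32 - shift)) for shift <= 32,
and each partial product fits in 64 bits; the high term is an exact
integer, so only the low term's truncation matters, matching the full
128-bit result. A quick check with hand-picked values (not from the
patch):

	/* (0x100000005 * 7) >> 4 = 0x700000023 >> 4 = 0x70000002 */
	u64 r = mul_u64_u32_shr(0x100000005ULL, 7, 4);	/* == 0x70000002 */
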
struct sec_pmic_dev {
struct device *dev;
struct sec_platform_data *pdata;
- struct regmap *regmap;
+ struct regmap *regmap_pmic;
+ struct regmap *regmap_rtc;
struct i2c_client *i2c;
struct i2c_client *rtc;
#define PHY_ID_KSZ8021 0x00221555
#define PHY_ID_KSZ8031 0x00221556
#define PHY_ID_KSZ8041 0x00221510
+/* undocumented */
+#define PHY_ID_KSZ8041RNLI 0x00221537
#define PHY_ID_KSZ8051 0x00221550
/* same id: ks8001 Rev. A/B, and ks8721 Rev 3. */
#define PHY_ID_KSZ8001 0x0022161A
struct page *newpage, struct page *page);
extern int migrate_page_move_mapping(struct address_space *mapping,
struct page *newpage, struct page *page,
- struct buffer_head *head, enum migrate_mode mode);
+ struct buffer_head *head, enum migrate_mode mode,
+ int extra_count);
#else
static inline void putback_lru_pages(struct list_head *l) {}
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_NUMA_BALANCING
+extern bool pmd_trans_migrating(pmd_t pmd);
+extern void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd);
extern int migrate_misplaced_page(struct page *page,
struct vm_area_struct *vma, int node);
extern bool migrate_ratelimited(int node);
#else
+static inline bool pmd_trans_migrating(pmd_t pmd)
+{
+ return false;
+}
+static inline void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd)
+{
+}
static inline int migrate_misplaced_page(struct page *page,
struct vm_area_struct *vma, int node)
{
#endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */
#if USE_SPLIT_PTE_PTLOCKS
-#if BLOATED_SPINLOCKS
-void __init ptlock_cache_init(void);
+#if ALLOC_SPLIT_PTLOCKS
extern bool ptlock_alloc(struct page *page);
extern void ptlock_free(struct page *page);
{
return page->ptl;
}
-#else /* BLOATED_SPINLOCKS */
-static inline void ptlock_cache_init(void) {}
+#else /* ALLOC_SPLIT_PTLOCKS */
static inline bool ptlock_alloc(struct page *page)
{
return true;
{
return &page->ptl;
}
-#endif /* BLOATED_SPINLOCKS */
+#endif /* ALLOC_SPLIT_PTLOCKS */
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
{
return &mm->page_table_lock;
}
-static inline void ptlock_cache_init(void) {}
static inline bool ptlock_init(struct page *page) { return true; }
static inline void pte_lock_deinit(struct page *page) {}
#endif /* USE_SPLIT_PTE_PTLOCKS */
-static inline void pgtable_init(void)
-{
- ptlock_cache_init();
- pgtable_cache_init();
-}
-
static inline bool pgtable_page_ctor(struct page *page)
{
inc_zone_page_state(page, NR_PAGETABLE);
#define USE_SPLIT_PTE_PTLOCKS (NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS)
#define USE_SPLIT_PMD_PTLOCKS (USE_SPLIT_PTE_PTLOCKS && \
IS_ENABLED(CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK))
+#define ALLOC_SPLIT_PTLOCKS (SPINLOCK_SIZE > BITS_PER_LONG/8)
/*
* Each physical page in the system has a struct page associated with
/* First double word block */
unsigned long flags; /* Atomic flags, some possibly
* updated asynchronously */
- struct address_space *mapping; /* If low bit clear, points to
- * inode address_space, or NULL.
- * If page mapped as anonymous
- * memory, low bit is set, and
- * it points to anon_vma object:
- * see PAGE_MAPPING_ANON below.
- */
+ union {
+ struct address_space *mapping; /* If low bit clear, points to
+ * inode address_space, or NULL.
+ * If page mapped as anonymous
+ * memory, low bit is set, and
+ * it points to anon_vma object:
+ * see PAGE_MAPPING_ANON below.
+ */
+ void *s_mem; /* slab first object */
+ };
+
/* Second double word */
struct {
union {
pgoff_t index; /* Our offset within mapping. */
- void *freelist; /* slub/slob first free object */
+ void *freelist; /* sl[aou]b first free object */
bool pfmemalloc; /* If set by the page allocator,
* ALLOC_NO_WATERMARKS was set
* and the low watermark was not
* this page is only used to
* free other pages.
*/
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
- pgtable_t pmd_huge_pte; /* protected by page->ptl */
-#endif
};
union {
};
atomic_t _count; /* Usage count, see below. */
};
+ unsigned int active; /* SLAB */
};
};
struct list_head list; /* slobs list of pages */
struct slab *slab_page; /* slab fields */
+ struct rcu_head rcu_head; /* Used by SLAB
+ * when destroying via RCU
+ */
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
+ pgtable_t pmd_huge_pte; /* protected by page->ptl */
+#endif
};
/* Remainder is not double word aligned */
* system if PG_buddy is set.
*/
#if USE_SPLIT_PTE_PTLOCKS
-#if BLOATED_SPINLOCKS
+#if ALLOC_SPLIT_PTLOCKS
spinlock_t *ptl;
#else
spinlock_t ptl;
/* numa_scan_seq prevents two threads setting pte_numa */
int numa_scan_seq;
+#endif
+#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
+ /*
+ * An operation with batched TLB flushing is going on. Anything that
+ * can move process memory needs to flush the TLB when moving a
+ * PROT_NONE or PROT_NUMA mapped page.
+ */
+ bool tlb_flush_pending;
#endif
struct uprobes_state uprobes_state;
};
return mm->cpu_vm_mask_var;
}
+#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
+/*
+ * Memory barriers to keep this state in sync are graciously provided by
+ * the page table locks, outside of which no page table modifications happen.
+ * The barriers below prevent the compiler from re-ordering the instructions
+ * around the memory barriers that are already present in the code.
+ */
+static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
+{
+ barrier();
+ return mm->tlb_flush_pending;
+}
+static inline void set_tlb_flush_pending(struct mm_struct *mm)
+{
+ mm->tlb_flush_pending = true;
+
+ /*
+ * Guarantee that the tlb_flush_pending store does not leak into the
+ * critical section updating the page tables
+ */
+ smp_mb__before_spinlock();
+}
+/* Clearing is done after a TLB flush, which also provides a barrier. */
+static inline void clear_tlb_flush_pending(struct mm_struct *mm)
+{
+ barrier();
+ mm->tlb_flush_pending = false;
+}
+#else
+static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
+{
+ return false;
+}
+static inline void set_tlb_flush_pending(struct mm_struct *mm)
+{
+}
+static inline void clear_tlb_flush_pending(struct mm_struct *mm)
+{
+}
+#endif
+
#endif /* _LINUX_MM_TYPES_H */
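
The intended protocol, as a sketch with the locking context simplified
(ptl, vma, start and end stand in for the real caller's state):

	spin_lock(ptl);
	set_tlb_flush_pending(mm);	/* ordered before the PTE writes */
	/* ... rewrite PTEs to PROT_NONE/PROT_NUMA ... */
	spin_unlock(ptl);

	flush_tlb_range(vma, start, end);
	clear_tlb_flush_pending(mm);	/* flush done; it supplies the barrier */

Anything that moves process memory first takes the page table lock and
tests mm_tlb_flush_pending(); if it is set, the mover must flush before
migrating a PROT_NONE-mapped page, since another CPU may still hold a
stale, writable TLB entry for it.
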
struct {
__u8 is_msix : 1;
__u8 multiple: 3; /* log2 number of messages */
- __u8 maskbit : 1; /* mask-pending bit supported ? */
- __u8 is_64 : 1; /* Address size: 0=32bit 1=64bit */
- __u8 pos; /* Location of the msi capability */
- __u16 entry_nr; /* specific enabled entry */
- unsigned default_irq; /* default pre-assigned irq */
+ __u8 maskbit : 1; /* mask-pending bit supported ? */
+ __u8 is_64 : 1; /* Address size: 0=32bit 1=64bit */
+ __u8 pos; /* Location of the msi capability */
+ __u16 entry_nr; /* specific enabled entry */
+ unsigned default_irq; /* default pre-assigned irq */
} msi_attrib;
u32 masked; /* mask bits */
int offset, size_t size, int flags);
ssize_t (*splice_read)(struct socket *sock, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags);
- void (*set_peek_off)(struct sock *sk, int val);
+ int (*set_peek_off)(struct sock *sk, int val);
};
#define DECLARE_SOCKADDR(type, dst, src) \
unsigned char perm_addr[MAX_ADDR_LEN]; /* permanent hw address */
unsigned char addr_assign_type; /* hw address assignment type */
unsigned char addr_len; /* hardware address length */
- unsigned char neigh_priv_len;
+ unsigned short neigh_priv_len;
unsigned short dev_id; /* Used to differentiate devices
* that share the same link
* layer address
return dev->header_ops->parse(skb, haddr);
}
+static inline int dev_rebuild_header(struct sk_buff *skb)
+{
+ const struct net_device *dev = skb->dev;
+
+ if (!dev->header_ops || !dev->header_ops->rebuild)
+ return 0;
+ return dev->header_ops->rebuild(skb);
+}
+
typedef int gifconf_func_t(struct net_device * dev, char __user * bufptr, int len);
int register_gifconf(unsigned int family, gifconf_func_t *gifconf);
static inline int unregister_gifconf(unsigned int family)
dev->gso_max_size = size;
}
+static inline void skb_gso_error_unwind(struct sk_buff *skb, __be16 protocol,
+ int pulled_hlen, u16 mac_offset,
+ int mac_len)
+{
+ skb->protocol = protocol;
+ skb->encapsulation = 1;
+ skb_push(skb, pulled_hlen);
+ skb_reset_transport_header(skb);
+ skb->mac_header = mac_offset;
+ skb->network_header = skb->mac_header + mac_len;
+ skb->mac_len = mac_len;
+}
+
static inline bool netif_is_macvlan(struct net_device *dev)
{
return dev->priv_flags & IFF_MACVLAN;
#define NFS4_VERSION 4
#define NFS4_MINOR_VERSION 0
-#if defined(CONFIG_NFS_V4_2)
-#define NFS4_MAX_MINOR_VERSION 2
-#else
-#if defined(CONFIG_NFS_V4_1)
-#define NFS4_MAX_MINOR_VERSION 1
-#else
-#define NFS4_MAX_MINOR_VERSION 0
-#endif /* CONFIG_NFS_V4_1 */
-#endif /* CONFIG_NFS_V4_2 */
-
#define NFS4_DEBUG 1
/* Index of predefined Linux client operations */
extern int nfs_mountpoint_expiry_timeout;
extern void nfs_release_automount_timer(void);
-/*
- * linux/fs/nfs/nfs4proc.c
- */
-#ifdef CONFIG_NFS_V4_SECURITY_LABEL
-extern struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags);
-static inline void nfs4_label_free(struct nfs4_label *label)
-{
- if (label) {
- kfree(label->label);
- kfree(label);
- }
- return;
-}
-#else
-static inline struct nfs4_label *nfs4_label_alloc(struct nfs_server *server, gfp_t flags) { return NULL; }
-static inline void nfs4_label_free(void *label) {}
-#endif
-
/*
* linux/fs/nfs/unlink.c
*/
struct padata_serial_queue __percpu *squeue;
atomic_t reorder_objects;
atomic_t refcnt;
+ atomic_t seq_nr;
struct padata_cpumask cpumask;
spinlock_t lock ____cacheline_aligned;
- spinlock_t seq_lock;
- unsigned int seq_nr;
unsigned int processed;
struct timer_list timer;
};
while (!pci_is_root_bus(pbus))
pbus = pbus->parent;
- return DEVICE_ACPI_HANDLE(pbus->bridge);
+ return ACPI_HANDLE(pbus->bridge);
}
static inline acpi_handle acpi_pci_get_bridge_handle(struct pci_bus *pbus)
else
dev = &pbus->self->dev;
- return DEVICE_ACPI_HANDLE(dev);
+ return ACPI_HANDLE(dev);
}
void acpi_pci_add_bus(struct pci_bus *bus);
#include <linux/irqreturn.h>
#include <uapi/linux/pci.h>
-/* Include the ID list */
#include <linux/pci_ids.h>
/*
*
* 7:3 = slot
* 2:0 = function
- * PCI_DEVFN(), PCI_SLOT(), and PCI_FUNC() are defined uapi/linux/pci.h
+ *
+ * PCI_DEVFN(), PCI_SLOT(), and PCI_FUNC() are defined in uapi/linux/pci.h.
* In the interest of not exposing interfaces to user-space unnecessarily,
- * the following kernel only defines are being added here.
+ * the following kernel-only defines are being added here.
*/
#define PCI_DEVID(bus, devfn) ((((u16)bus) << 8) | devfn)
/* return bus from PCI devid = ((u16)bus_number) << 8) | devfn */
/* Reset is NOT asserted (Use to deassert reset) */
pcie_deassert_reset = (__force pcie_reset_state_t) 1,
- /* Use #PERST to reset PCI-E device */
+ /* Use #PERST to reset PCIe device */
pcie_warm_reset = (__force pcie_reset_state_t) 2,
- /* Use PCI-E Hot Reset to reset device */
+ /* Use PCIe Hot Reset to reset device */
pcie_hot_reset = (__force pcie_reset_state_t) 3
};
unsigned int class; /* 3 bytes: (base,sub,prog-if) */
u8 revision; /* PCI revision, low byte of class word */
u8 hdr_type; /* PCI header type (`multi' flag masked out) */
- u8 pcie_cap; /* PCI-E capability offset */
+ u8 pcie_cap; /* PCIe capability offset */
u8 msi_cap; /* MSI capability offset */
u8 msix_cap; /* MSI-X capability offset */
- u8 pcie_mpss:3; /* PCI-E Max Payload Size Supported */
+ u8 pcie_mpss:3; /* PCIe Max Payload Size Supported */
u8 rom_base_reg; /* which config register controls the ROM */
- u8 pin; /* which interrupt pin this device uses */
- u16 pcie_flags_reg; /* cached PCI-E Capabilities Register */
+ u8 pin; /* which interrupt pin this device uses */
+ u16 pcie_flags_reg; /* cached PCIe Capabilities Register */
struct pci_driver *driver; /* which driver has allocated this device */
u64 dma_mask; /* Mask of the bits of bus address this
unsigned int d3cold_delay; /* D3cold->D0 transition time in ms */
#ifdef CONFIG_PCIEASPM
- struct pcie_link_state *link_state; /* ASPM link state. */
+ struct pcie_link_state *link_state; /* ASPM link state */
#endif
pci_channel_state_t error_state; /* current connectivity state */
bool match_driver; /* Skip attaching driver */
/* These fields are used by common fixups */
- unsigned int transparent:1; /* Transparent PCI bridge */
+ unsigned int transparent:1; /* Subtractive decode PCI bridge */
unsigned int multifunction:1;/* Part of multi-function device */
/* keep track of device state */
unsigned int is_added:1;
unsigned int block_cfg_access:1; /* config space access is blocked */
unsigned int broken_parity_status:1; /* Device generates false positive parity */
unsigned int irq_reroute_variant:2; /* device needs IRQ rerouting variant */
- unsigned int msi_enabled:1;
+ unsigned int msi_enabled:1;
unsigned int msix_enabled:1;
unsigned int ari_enabled:1; /* ARI forwarding */
unsigned int is_managed:1;
if (dev->is_virtfn)
dev = dev->physfn;
#endif
-
return dev;
}
char name[48];
unsigned short bridge_ctl; /* manage NO_ISA/FBB/et al behaviors */
- pci_bus_flags_t bus_flags; /* Inherited by child busses */
+ pci_bus_flags_t bus_flags; /* inherited by child buses */
struct device *bridge;
struct device dev;
struct bin_attribute *legacy_io; /* legacy I/O for this bus */
#define to_pci_bus(n) container_of(n, struct pci_bus, dev)
/*
- * Returns true if the pci bus is root (behind host-pci bridge),
+ * Returns true if the PCI bus is root (behind host-PCI bridge),
* false otherwise
*
* Some code assumes that "bus->self == NULL" means that bus is a root bus.
#define PCIBIOS_BUFFER_TOO_SMALL 0x89
/*
- * Translate above to generic errno for passing back through non-pci.
+ * Translate above to generic errno for passing back through non-PCI code.
*/
static inline int pcibios_err_to_errno(int err)
{
struct list_head list; /* for IDs added at runtime */
};
-/* ---------------------------------------------------------------- */
-/** PCI Error Recovery System (PCI-ERS). If a PCI device driver provides
- * a set of callbacks in struct pci_error_handlers, then that device driver
- * will be notified of PCI bus errors, and will be driven to recovery
- * when an error occurs.
+
+/*
+ * PCI Error Recovery System (PCI-ERS). If a PCI device driver provides
+ * a set of callbacks in struct pci_error_handlers, that device driver
+ * will be notified of PCI bus errors, and will be driven to recovery
+ * when an error occurs.
*/
typedef unsigned int __bitwise pci_ers_result_t;
void (*resume)(struct pci_dev *dev);
};
-/* ---------------------------------------------------------------- */
struct module;
struct pci_driver {
extern struct bus_type pci_bus_type;
-/* Do NOT directly access these two variables, unless you are arch specific pci
- * code, or pci core code. */
+/* Do NOT directly access these two variables, unless you are arch-specific PCI
+ * code, or PCI core code. */
extern struct list_head pci_root_buses; /* list of all known PCI buses */
-/* Some device drivers need know if pci is initiated */
+/* Some device drivers need to know if PCI is initialized */
int no_pci_devices(void);
void pcibios_resource_survey_bus(struct pci_bus *bus);
void pcibios_remove_bus(struct pci_bus *bus);
void pcibios_fixup_bus(struct pci_bus *);
int __must_check pcibios_enable_device(struct pci_dev *, int mask);
-/* Architecture specific versions may override this (weak) */
+/* Architecture-specific versions may override this (weak) */
char *pcibios_setup(char *str);
/* Used only when drivers/pci/setup.c is used */
int __must_check pci_assign_resource(struct pci_dev *dev, int i);
int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
int pci_select_bars(struct pci_dev *dev, unsigned long flags);
+bool pci_device_is_present(struct pci_dev *pdev);
/* ROM control related routines */
int pci_enable_rom(struct pci_dev *pdev);
/*
* PCI domain support. Sometimes called PCI segment (eg by ACPI),
- * a PCI domain is defined to be a set of PCI busses which share
+ * a PCI domain is defined to be a set of PCI buses which share
* configuration space.
*/
#ifdef CONFIG_PCI_DOMAINS
/* Anonymous variables would be nice... */
#define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class, \
class_shift, hook) \
- static const struct pci_fixup __pci_fixup_##name __used \
+ static const struct pci_fixup __PASTE(__pci_fixup_##name,__LINE__) __used \
__attribute__((__section__(#section), aligned((sizeof(void *))))) \
= { vendor, device, class, class_shift, hook };
#define DECLARE_PCI_FIXUP_CLASS_EARLY(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_early, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_HEADER(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_header, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_FINAL(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_final, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_ENABLE(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_enable, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_RESUME(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume, \
- resume##vendor##device##hook, vendor, device, class, \
+ resume##hook, vendor, device, class, \
class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_RESUME_EARLY(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume_early, \
- resume_early##vendor##device##hook, vendor, device, \
+ resume_early##hook, vendor, device, \
class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_SUSPEND(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_suspend, \
- suspend##vendor##device##hook, vendor, device, class, \
+ suspend##hook, vendor, device, class, \
class_shift, hook)
#define DECLARE_PCI_FIXUP_EARLY(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_early, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_HEADER(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_header, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_FINAL(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_final, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_ENABLE(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_enable, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_RESUME(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume, \
- resume##vendor##device##hook, vendor, device, \
+ resume##hook, vendor, device, \
PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_RESUME_EARLY(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume_early, \
- resume_early##vendor##device##hook, vendor, device, \
+ resume_early##hook, vendor, device, \
PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_SUSPEND(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_suspend, \
- suspend##vendor##device##hook, vendor, device, \
+ suspend##hook, vendor, device, \
PCI_ANY_ID, 0, hook)
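
With __LINE__ pasted into the generated symbol, the fixup's variable name no
longer embeds the vendor and device tokens, so those arguments may be
arbitrary expressions rather than pasteable literals. A sketch
(quirk_example is hypothetical):

	/* an expression as the device ID would not have pasted into a
	 * valid identifier under the old vendor##device##hook scheme */
	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1000 + 0x234,
				quirk_example);
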
#ifdef CONFIG_PCI_QUIRKS
extern unsigned long pci_hotplug_io_size;
extern unsigned long pci_hotplug_mem_size;
-/* Architecture specific versions may override these (weak) */
+/* Architecture-specific versions may override these (weak) */
int pcibios_add_platform_entries(struct pci_dev *dev);
void pcibios_disable_device(struct pci_dev *dev);
void pcibios_set_master(struct pci_dev *dev);
* @hardware_test: Called to run a specified hardware test on the specified
* slot.
* @get_power_status: Called to get the current power status of a slot.
- * If this field is NULL, the value passed in the struct hotplug_slot_info
- * will be used when this value is requested by a user.
+ * If this field is NULL, the value passed in the struct hotplug_slot_info
+ * will be used when this value is requested by a user.
* @get_attention_status: Called to get the current attention status of a slot.
* If this field is NULL, the value passed in the struct hotplug_slot_info
* will be used when this value is requested by a user.
void pci_configure_slot(struct pci_dev *dev);
#endif
-
#define PCIE_PORT_SERVICE_VC (1 << PCIE_PORT_SERVICE_VC_SHIFT)
struct pcie_device {
- int irq; /* Service IRQ/MSI/MSI-X Vector */
+ int irq; /* Service IRQ/MSI/MSI-X Vector */
struct pci_dev *port; /* Root/Upstream/Downstream Port */
u32 service; /* Port service this device represents */
void *priv_data; /* Service Private Data */
__PCPU_DUMMY_ATTRS char __pcpu_scope_##name; \
extern __PCPU_DUMMY_ATTRS char __pcpu_unique_##name; \
__PCPU_DUMMY_ATTRS char __pcpu_unique_##name; \
+ extern __PCPU_ATTRS(sec) __typeof__(type) name; \
__PCPU_ATTRS(sec) PER_CPU_DEF_ATTRIBUTES __weak \
__typeof__(type) name
#else
#define ITCCHEN BIT(23)
/* ch_status parameter of callback function, possible values */
-#define DMA_COMPLETE 1
-#define DMA_CC_ERROR 2
-#define DMA_TC1_ERROR 3
-#define DMA_TC2_ERROR 4
+#define EDMA_DMA_COMPLETE 1
+#define EDMA_DMA_CC_ERROR 2
+#define EDMA_DMA_TC1_ERROR 3
+#define EDMA_DMA_TC2_ERROR 4
enum address_mode {
INCR = 0,
char *buf;
size_t bufsize;
struct mutex read_mutex; /* serialize open/read/close */
+ int flags;
int (*open)(struct pstore_info *psi);
int (*close)(struct pstore_info *psi);
ssize_t (*read)(u64 *id, enum pstore_type_id *type,
void *data;
};
+#define PSTORE_FLAGS_FRAGILE 1
+
#ifdef CONFIG_PSTORE
extern int pstore_register(struct pstore_info *);
extern bool pstore_cannot_block_path(enum kmsg_dump_reason reason);
* Architecture-specific implementations of sys_reboot commands.
*/
+extern void migrate_to_reboot_cpu(void);
extern void machine_restart(char *cmd);
extern void machine_halt(void);
extern void machine_power_off(void);
extern int rtnl_is_locked(void);
#ifdef CONFIG_PROVE_LOCKING
extern int lockdep_rtnl_is_held(void);
+#else
+static inline int lockdep_rtnl_is_held(void)
+{
+ return 1;
+}
#endif /* #ifdef CONFIG_PROVE_LOCKING */
/**
.sum_exec_runtime = 0, \
}
-#define PREEMPT_ENABLED (PREEMPT_NEED_RESCHED)
-
#ifdef CONFIG_PREEMPT_COUNT
#define PREEMPT_DISABLED (1 + PREEMPT_ENABLED)
#else
unsigned int balance_interval; /* initialise to 1. units in ms. */
unsigned int nr_balance_failed; /* initialise to 0 */
- u64 last_update;
-
/* idle_balance() stats */
u64 max_newidle_lb_cost;
unsigned long next_decay_max_lb_cost;
struct uts_namespace;
struct load_weight {
- unsigned long weight, inv_weight;
+ unsigned long weight;
+ u32 inv_weight;
};
struct sched_avg {
* @xfrm_policy_delete_security:
* @ctx contains the xfrm_sec_ctx.
* Authorize deletion of xp->security.
- * @xfrm_state_alloc_security:
+ * @xfrm_state_alloc:
* @x contains the xfrm_state being added to the Security Association
* Database by the XFRM system.
* @sec_ctx contains the security context information being provided by
* the user-level SA generation program (e.g., setkey or racoon).
- * @secid contains the secid from which to take the mls portion of the context.
* Allocate a security structure to the x->security field; the security
* field is initialized to NULL when the xfrm_state is allocated. Set the
- * context to correspond to either sec_ctx or polsec, with the mls portion
- * taken from secid in the latter case.
- * Return 0 if operation was successful (memory to allocate, legal context).
+ * context to correspond to sec_ctx. Return 0 if operation was successful
+ * (memory to allocate, legal context).
+ * @xfrm_state_alloc_acquire:
+ * @x contains the xfrm_state being added to the Security Association
+ * Database by the XFRM system.
+ * @polsec contains the policy's security context.
+ * @secid contains the secid from which to take the mls portion of the
+ * context.
+ * Allocate a security structure to the x->security field; the security
+ * field is initialized to NULL when the xfrm_state is allocated. Set the
+ * context to correspond to polsec, with the MLS portion taken from secid.
+ * Return 0 if operation was successful (memory to allocate, legal context).
* @xfrm_state_free_security:
* @x contains the xfrm_state.
* Deallocate x->security.
int (*xfrm_policy_clone_security) (struct xfrm_sec_ctx *old_ctx, struct xfrm_sec_ctx **new_ctx);
void (*xfrm_policy_free_security) (struct xfrm_sec_ctx *ctx);
int (*xfrm_policy_delete_security) (struct xfrm_sec_ctx *ctx);
- int (*xfrm_state_alloc_security) (struct xfrm_state *x,
- struct xfrm_user_sec_ctx *sec_ctx,
- u32 secid);
+ int (*xfrm_state_alloc) (struct xfrm_state *x,
+ struct xfrm_user_sec_ctx *sec_ctx);
+ int (*xfrm_state_alloc_acquire) (struct xfrm_state *x,
+ struct xfrm_sec_ctx *polsec,
+ u32 secid);
void (*xfrm_state_free_security) (struct xfrm_state *x);
int (*xfrm_state_delete_security) (struct xfrm_state *x);
int (*xfrm_policy_lookup) (struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir);
spin_unlock(&sl->lock);
}
+/**
+ * read_seqbegin_or_lock - begin a sequence number check or locking block
+ * @lock: sequence lock
+ * @seq : sequence number to be checked
+ *
+ * First try it once optimistically without taking the lock. If that fails,
+ * take the lock. The sequence number is also used as a marker for deciding
+ * whether to be a reader (even) or writer (odd).
+ * N.B. seq must be initialized to an even number to begin with.
+ */
+static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
+{
+ if (!(*seq & 1)) /* Even */
+ *seq = read_seqbegin(lock);
+ else /* Odd */
+ read_seqlock_excl(lock);
+}
+
+static inline int need_seqretry(seqlock_t *lock, int seq)
+{
+ return !(seq & 1) && read_seqretry(lock, seq);
+}
+
+static inline void done_seqretry(seqlock_t *lock, int seq)
+{
+ if (seq & 1)
+ read_sequnlock_excl(lock);
+}
+
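A minimal usage sketch of the three helpers just added (foo_lock is a
hypothetical seqlock_t): start with an even seq for a lockless pass, and
fall back to taking the lock on retry:

	int seq = 0;			/* even: first pass is lockless */
retry:
	read_seqbegin_or_lock(&foo_lock, &seq);
	/* ... copy the protected data into locals ... */
	if (need_seqretry(&foo_lock, seq)) {
		seq = 1;		/* odd: take the lock this time */
		goto retry;
	}
	done_seqretry(&foo_lock, seq);
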
static inline void read_seqlock_excl_bh(seqlock_t *sl)
{
spin_lock_bh(&sl->lock);
extern int shmem_fill_super(struct super_block *sb, void *data, int silent);
extern struct file *shmem_file_setup(const char *name,
loff_t size, unsigned long flags);
+extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
+ unsigned long flags);
extern int shmem_zero_setup(struct vm_area_struct *);
extern int shmem_lock(struct file *file, int lock, struct user_struct *user);
extern void shmem_unlock_mapping(struct address_space *mapping);
skb->mac_header += offset;
}
+static inline void skb_pop_mac_header(struct sk_buff *skb)
+{
+ skb->mac_header = skb->network_header;
+}
+
static inline void skb_probe_transport_header(struct sk_buff *skb,
const int offset_hint)
{
unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len);
+/**
+ * pskb_trim_rcsum - trim received skb and update checksum
+ * @skb: buffer to trim
+ * @len: new length
+ *
+ * This is exactly the same as pskb_trim except that it ensures the
+ * checksum of received packets is still valid after the operation.
+ */
+
+static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
+{
+ if (likely(len >= skb->len))
+ return 0;
+ if (skb->ip_summed == CHECKSUM_COMPLETE)
+ skb->ip_summed = CHECKSUM_NONE;
+ return __pskb_trim(skb, len);
+}
+
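A hypothetical receive-path fragment showing the intended use: trimming
link-layer padding while keeping the checksum state coherent (ip_len and
the drop label are placeholders):

	if (pskb_trim_rcsum(skb, ip_len))
		goto drop;	/* trim failed, packet unusable */
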
#define skb_queue_walk(queue, skb) \
for (skb = (queue)->next; \
skb != (struct sk_buff *)(queue); \
__wsum skb_checksum(const struct sk_buff *skb, int offset, int len,
__wsum csum);
-/**
- * pskb_trim_rcsum - trim received skb and update checksum
- * @skb: buffer to trim
- * @len: new length
- *
- * This is exactly the same as pskb_trim except that it ensures the
- * checksum of received packets are still valid after the operation.
- */
-
-static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
-{
- if (likely(len >= skb->len))
- return 0;
- if (skb->ip_summed == CHECKSUM_COMPLETE) {
- __wsum adj = skb_checksum(skb, len, skb->len - len, 0);
-
- skb->csum = csum_sub(skb->csum, adj);
- }
- return __pskb_trim(skb, len);
-}
-
static inline void *skb_header_pointer(const struct sk_buff *skb, int offset,
int len, void *buffer)
{
* Ethernet MAC Drivers should call this function in their hard_xmit()
* function immediately before giving the sk_buff to the MAC hardware.
*
+ * Specifically, one should make absolutely sure that this function is
+ * called before TX completion of this packet can trigger. Otherwise
+ * the packet could potentially already be freed.
+ *
* @skb: A socket buffer.
*/
static inline void skb_tx_timestamp(struct sk_buff *skb)
* }
* rcu_read_unlock();
*
- * See also the comment on struct slab_rcu in mm/slab.c.
+ * This is useful if we need to approach a kernel structure obliquely,
+ * from its address obtained without the usual locking. We can lock
+ * the structure to stabilize it and check it's still at the given address,
+ * only if we can be sure that the memory has not meanwhile been reused
+ * for some other kind of object (which our subsystem's lock might corrupt).
+ *
+ * Call rcu_read_lock() before reading the address, then rcu_read_unlock()
+ * after taking the spinlock within the structure expected at that address.
*/
#define SLAB_DESTROY_BY_RCU 0x00080000UL /* Defer freeing slabs to RCU */
#define SLAB_MEM_SPREAD 0x00100000UL /* Spread some memory over cpuset */
/**
* kmalloc - allocate memory
* @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kcalloc).
+ * @flags: the type of memory to allocate.
*
* kmalloc is the normal method of allocating memory
* for objects smaller than page size in the kernel.
+ *
+ * The @flags argument may be one of:
+ *
+ * %GFP_USER - Allocate memory on behalf of user. May sleep.
+ *
+ * %GFP_KERNEL - Allocate normal kernel ram. May sleep.
+ *
+ * %GFP_ATOMIC - Allocation will not sleep. May use emergency pools.
+ * For example, use this inside interrupt handlers.
+ *
+ * %GFP_HIGHUSER - Allocate pages from high memory.
+ *
+ * %GFP_NOIO - Do not do any I/O at all while trying to get memory.
+ *
+ * %GFP_NOFS - Do not make any fs calls while trying to get memory.
+ *
+ * %GFP_NOWAIT - Allocation will not sleep.
+ *
+ * %GFP_THISNODE - Allocate node-local memory only.
+ *
+ * %GFP_DMA - Allocation suitable for DMA.
+ * Should only be used for kmalloc() caches. Otherwise, use a
+ * slab created with SLAB_DMA.
+ *
+ * Also it is possible to set different flags by OR'ing
+ * in one or more of the following additional @flags:
+ *
+ * %__GFP_COLD - Request cache-cold pages instead of
+ * trying to return cache-warm pages.
+ *
+ * %__GFP_HIGH - This allocation has high priority and may use emergency pools.
+ *
+ * %__GFP_NOFAIL - Indicate that this allocation is in no way allowed to fail
+ * (think twice before using).
+ *
+ * %__GFP_NORETRY - If memory is not immediately available,
+ * then give up at once.
+ *
+ * %__GFP_NOWARN - If allocation fails, don't issue any warnings.
+ *
+ * %__GFP_REPEAT - If allocation fails initially, try once more before failing.
+ *
+ * There are other flags available as well, but these are not intended
+ * for general use, and so are not documented here. For a full list of
+ * potential flags, always refer to linux/gfp.h.
*/
static __always_inline void *kmalloc(size_t size, gfp_t flags)
{
int cache_show(struct kmem_cache *s, struct seq_file *m);
void print_slabinfo_header(struct seq_file *m);
-/**
- * kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * The @flags argument may be one of:
- *
- * %GFP_USER - Allocate memory on behalf of user. May sleep.
- *
- * %GFP_KERNEL - Allocate normal kernel ram. May sleep.
- *
- * %GFP_ATOMIC - Allocation will not sleep. May use emergency pools.
- * For example, use this inside interrupt handlers.
- *
- * %GFP_HIGHUSER - Allocate pages from high memory.
- *
- * %GFP_NOIO - Do not do any I/O at all while trying to get memory.
- *
- * %GFP_NOFS - Do not make any fs calls while trying to get memory.
- *
- * %GFP_NOWAIT - Allocation will not sleep.
- *
- * %GFP_THISNODE - Allocate node-local memory only.
- *
- * %GFP_DMA - Allocation suitable for DMA.
- * Should only be used for kmalloc() caches. Otherwise, use a
- * slab created with SLAB_DMA.
- *
- * Also it is possible to set different flags by OR'ing
- * in one or more of the following additional @flags:
- *
- * %__GFP_COLD - Request cache-cold pages instead of
- * trying to return cache-warm pages.
- *
- * %__GFP_HIGH - This allocation has high priority and may use emergency pools.
- *
- * %__GFP_NOFAIL - Indicate that this allocation is in no way allowed to fail
- * (think twice before using).
- *
- * %__GFP_NORETRY - If memory is not immediately available,
- * then give up at once.
- *
- * %__GFP_NOWARN - If allocation fails, don't issue any warnings.
- *
- * %__GFP_REPEAT - If allocation fails initially, try once more before failing.
- *
- * There are other flags available as well, but these are not intended
- * for general use, and so are not documented here. For a full list of
- * potential flags, always refer to linux/gfp.h.
- *
- * kmalloc is the normal method of allocating memory
- * in the kernel.
- */
-static __always_inline void *kmalloc(size_t size, gfp_t flags);
-
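
To illustrate the flag combinations documented above, a small hedged sketch:
a base type OR'd with a modifier (buf and len are hypothetical):

	/* atomic context: no sleeping, and suppress the failure warning */
	buf = kmalloc(len, GFP_ATOMIC | __GFP_NOWARN);
	if (!buf)
		return -ENOMEM;
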
/**
* kmalloc_array - allocate memory for an array.
* @n: number of elements.
size_t colour; /* cache colouring range */
unsigned int colour_off; /* colour offset */
- struct kmem_cache *slabp_cache;
- unsigned int slab_size;
+ struct kmem_cache *freelist_cache;
+ unsigned int freelist_size;
/* constructor func */
void (*ctor)(void *obj);
enum stat_item {
ALLOC_FASTPATH, /* Allocation from cpu slab */
ALLOC_SLOWPATH, /* Allocation by getting a new cpu slab */
- FREE_FASTPATH, /* Free to cpu slub */
+ FREE_FASTPATH, /* Free to cpu slab */
FREE_SLOWPATH, /* Freeing not to cpu slab */
FREE_FROZEN, /* Freeing to frozen slab */
FREE_ADD_PARTIAL, /* Freeing moves slab to partial list */
#define TEGRA_POWERGATE_3D0 TEGRA_POWERGATE_3D
+#ifdef CONFIG_ARCH_TEGRA
int tegra_powergate_is_powered(int id);
int tegra_powergate_power_on(int id);
int tegra_powergate_power_off(int id);
/* Must be called with clk disabled, and returns with clk enabled */
int tegra_powergate_sequence_power_up(int id, struct clk *clk);
+#else
+static inline int tegra_powergate_is_powered(int id)
+{
+ return -ENOSYS;
+}
+
+static inline int tegra_powergate_power_on(int id)
+{
+ return -ENOSYS;
+}
+
+static inline int tegra_powergate_power_off(int id)
+{
+ return -ENOSYS;
+}
+
+static inline int tegra_powergate_remove_clamping(int id)
+{
+ return -ENOSYS;
+}
+
+static inline int tegra_powergate_sequence_power_up(int id, struct clk *clk)
+{
+ return -ENOSYS;
+}
+#endif
#endif /* _MACH_TEGRA_POWERGATE_H_ */
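
The -ENOSYS stubs let code shared across SoCs build with CONFIG_ARCH_TEGRA
disabled. A caller sketch (the error handling is illustrative only):

	err = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_3D, clk);
	if (err && err != -ENOSYS)
		return err;	/* real failure on actual Tegra hardware */
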
#define TRACE_EVENT_FLAGS(event, flag)
+#define TRACE_EVENT_PERF_PERM(event, expr...)
+
#endif /* DECLARE_TRACE */
#ifndef TRACE_EVENT
#define TRACE_EVENT_FLAGS(event, flag)
+#define TRACE_EVENT_PERF_PERM(event, expr...)
+
#endif /* ifdef TRACE_EVENT (see note above) */
* @sg: scatter gather buffer list, the buffer size of each element in
* the list (except the last) must be divisible by the endpoint's
* max packet size if no_sg_constraint isn't set in 'struct usb_bus'
+ * (FIXME: scatter-gather under xHCI is broken for periodic transfers.
+ * Do not use urb->sg for interrupt endpoints for now, only bulk.)
* @num_mapped_sgs: (internal) number of mapped sg entries
* @num_sgs: number of entries in the sg list
* @transfer_buffer_length: How big is transfer_buffer. The transfer may
#define WUSB_KEY_INDEX_TYPE_GTK 2
#define WUSB_KEY_INDEX_ORIGINATOR_HOST 0
#define WUSB_KEY_INDEX_ORIGINATOR_DEVICE 1
+/* bits 0-3 used for the key index. */
+#define WUSB_KEY_INDEX_MAX 15
/* A CCM Nonce, defined in WUSB1.0[6.4.1] */
struct aes_ccm_nonce {
kuid_t owner;
kgid_t group;
unsigned int proc_inum;
+
+ /* Register of per-UID persistent keyrings for this namespace */
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+ struct key *persistent_keyring_register;
+ struct rw_semaphore persistent_keyring_register_sem;
+#endif
};
extern struct user_namespace init_user_ns;
__ret; \
})
+#define __wait_event_cmd(wq, condition, cmd1, cmd2) \
+ (void)___wait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
+ cmd1; schedule(); cmd2)
+
+/**
+ * wait_event_cmd - sleep until a condition gets true
+ * @wq: the waitqueue to wait on
+ * @condition: a C expression for the event to wait for
+ * @cmd1: the command to execute before sleeping
+ * @cmd2: the command to execute after sleeping
+ *
+ * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
+ * @condition evaluates to true. The @condition is checked each time
+ * the waitqueue @wq is woken up.
+ *
+ * wake_up() has to be called after changing any variable that could
+ * change the result of the wait condition.
+ */
+#define wait_event_cmd(wq, condition, cmd1, cmd2) \
+do { \
+ if (condition) \
+ break; \
+ __wait_event_cmd(wq, condition, cmd1, cmd2); \
+} while (0)
+
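A usage sketch resembling the RAID resync case this macro appears aimed at:
dropping a spinlock around the sleep (the conf fields are hypothetical):

	wait_event_cmd(conf->wait_barrier, !conf->barrier,
		       spin_unlock_irq(&conf->resync_lock), /* before sleeping */
		       spin_lock_irq(&conf->resync_lock));  /* after waking */
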
#define __wait_event_interruptible(wq, condition) \
___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0, \
schedule())
struct vb2_mem_ops {
void *(*alloc)(void *alloc_ctx, unsigned long size, gfp_t gfp_flags);
void (*put)(void *buf_priv);
- struct dma_buf *(*get_dmabuf)(void *buf_priv);
+ struct dma_buf *(*get_dmabuf)(void *buf_priv, unsigned long flags);
void *(*get_userptr)(void *alloc_ctx, unsigned long vaddr,
unsigned long size, int write);
int ip_ra_control(struct sock *sk, unsigned char on,
void (*destructor)(struct sock *));
-int ip_recv_error(struct sock *sk, struct msghdr *msg, int len);
+int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len);
void ip_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
u32 info, u8 *payload);
void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
__be32 identification;
};
-#define IP6_MF 0x0001
+#define IP6_MF 0x0001
+#define IP6_OFFSET 0xFFF8
#include <net/sock.h>
int ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, int addr_len);
-int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len);
-int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len);
+int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len);
+int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len);
void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err, __be16 port,
u32 info, u8 *payload);
void ipv6_local_error(struct sock *sk, int err, struct flowi6 *fl6, u32 info);
#define LLC_S_PF_IS_1(pdu) ((pdu->ctrl_2 & LLC_S_PF_BIT_MASK) ? 1 : 0)
#define PDU_SUPV_GET_Nr(pdu) ((pdu->ctrl_2 & 0xFE) >> 1)
-#define PDU_GET_NEXT_Vr(sn) (++sn & ~LLC_2_SEQ_NBR_MODULO)
+#define PDU_GET_NEXT_Vr(sn) (((sn) + 1) & ~LLC_2_SEQ_NBR_MODULO)
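
The rewrite matters because the old form mutated its argument: ++sn
incremented the caller's variable as a side effect of merely computing the
next value. With the new form (the llc pointer is illustrative):

	next = PDU_GET_NEXT_Vr(llc->vR);	/* llc->vR itself is unchanged */
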
/* FRMR information field macros */
/* Compatibility glue so we can support IPv6 when it's compiled as a module */
struct pingv6_ops {
- int (*ipv6_recv_error)(struct sock *sk, struct msghdr *msg, int len);
+ int (*ipv6_recv_error)(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len);
int (*ip6_datagram_recv_ctl)(struct sock *sk, struct msghdr *msg,
struct sk_buff *skb);
int (*icmpv6_err_convert)(u8 type, u8 code, int *err);
#define SCTP_NEED_FRTX 0x1
#define SCTP_DONT_FRTX 0x2
__u16 rtt_in_progress:1, /* This chunk used for RTT calc? */
+ resent:1, /* Has this chunk ever been resent. */
has_tsn:1, /* Does this chunk have a TSN yet? */
has_ssn:1, /* Does this chunk have a SSN yet? */
singleton:1, /* Only chunk in the packet? */
/* Corked? */
char cork;
-
- /* Is this structure empty? */
- char empty;
};
void sctp_outq_init(struct sctp_association *, struct sctp_outq *);
/* How many duplicated TSNs have we seen? */
int numduptsns;
- /* Number of seconds of idle time before an association is closed.
- * In the association context, this is really used as a boolean
- * since the real timeout is stored in the timeouts array
- */
- __u32 autoclose;
-
/* These are to support
* "SCTP Extensions for Dynamic Reconfiguration of IP Addresses
* and Enforcement of Flow and Message Limits"
};
struct cg_proto {
- void (*enter_memory_pressure)(struct sock *sk);
struct res_counter memory_allocated; /* Current allocated memory. */
struct percpu_counter sockets_allocated; /* Current number of sockets. */
int memory_pressure;
struct proto *prot = sk->sk_prot;
for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto))
- if (cg_proto->memory_pressure)
- cg_proto->memory_pressure = 0;
+ cg_proto->memory_pressure = 0;
}
}
struct proto *prot = sk->sk_prot;
for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto))
- cg_proto->enter_memory_pressure(sk);
+ cg_proto->memory_pressure = 1;
}
sk->sk_prot->enter_memory_pressure(sk);
};
struct ib_udata {
- void __user *inbuf;
+ const void __user *inbuf;
void __user *outbuf;
size_t inlen;
size_t outlen;
*/
unsigned ordered_tag:1;
+ /* True if the controller does not support WRITE SAME */
+ unsigned no_write_same:1;
+
/*
* Countdown for host blocking with no commands outstanding.
*/
/* Don't resume host in EH */
unsigned eh_noresume:1;
+ /* The controller does not support WRITE SAME */
+ unsigned no_write_same:1;
+
/*
* Optional work queue to be utilized by the transport
*/
{
struct snd_sg_buf *sgbuf = dmab->private_data;
dma_addr_t addr = sgbuf->table[offset >> PAGE_SHIFT].addr;
- addr &= PAGE_MASK;
+ addr &= ~((dma_addr_t)PAGE_SIZE - 1);
return addr + offset % PAGE_SIZE;
}
SND_SOC_DAPM_INIT_REG_VAL(wreg, wshift, winvert), \
.kcontrol_news = wcontrols, .num_kcontrols = 1}
#define SND_SOC_DAPM_MUX(wname, wreg, wshift, winvert, wcontrols) \
-{ .id = snd_soc_dapm_mux, .name = wname, .reg = wreg, \
+{ .id = snd_soc_dapm_mux, .name = wname, \
+ SND_SOC_DAPM_INIT_REG_VAL(wreg, wshift, winvert), \
.kcontrol_news = wcontrols, .num_kcontrols = 1}
#define SND_SOC_DAPM_VIRT_MUX(wname, wreg, wshift, winvert, wcontrols) \
{ .id = snd_soc_dapm_virt_mux, .name = wname, \
sense_reason_t (*parse_cdb)(struct se_cmd *cmd);
u32 (*get_device_type)(struct se_device *);
sector_t (*get_blocks)(struct se_device *);
+ sector_t (*get_alignment_offset_lbas)(struct se_device *);
+ /* lbppbe = logical blocks per physical block exponent. see SBC-3 */
+ unsigned int (*get_lbppbe)(struct se_device *);
+ unsigned int (*get_io_min)(struct se_device *);
+ unsigned int (*get_io_opt)(struct se_device *);
unsigned char *(*get_sense_buffer)(struct se_cmd *);
bool (*get_write_cache)(struct se_device *);
};
/* fabric independent task management response values */
enum tcm_tmrsp_table {
+ TMR_FUNCTION_FAILED = 0,
TMR_FUNCTION_COMPLETE = 1,
TMR_TASK_DOES_NOT_EXIST = 2,
TMR_LUN_DOES_NOT_EXIST = 3,
struct t10_alua_tg_pt_gp {
u16 tg_pt_gp_id;
int tg_pt_gp_valid_id;
+ int tg_pt_gp_alua_supported_states;
int tg_pt_gp_alua_access_status;
int tg_pt_gp_alua_access_type;
int tg_pt_gp_nonop_delay_msecs;
int tg_pt_gp_trans_delay_msecs;
- int tg_pt_gp_implict_trans_secs;
+ int tg_pt_gp_implicit_trans_secs;
int tg_pt_gp_pref;
int tg_pt_gp_write_metadata;
/* Used by struct t10_alua_tg_pt_gp->tg_pt_gp_md_buf_len */
/* Used for sense data */
void *sense_buffer;
struct list_head se_delayed_node;
- struct list_head se_lun_node;
struct list_head se_qf_node;
struct se_device *se_dev;
struct se_dev_entry *se_deve;
#define CMD_T_SENT (1 << 4)
#define CMD_T_STOP (1 << 5)
#define CMD_T_FAILED (1 << 6)
-#define CMD_T_LUN_STOP (1 << 7)
-#define CMD_T_LUN_FE_STOP (1 << 8)
-#define CMD_T_DEV_ACTIVE (1 << 9)
-#define CMD_T_REQUEST_STOP (1 << 10)
-#define CMD_T_BUSY (1 << 11)
+#define CMD_T_DEV_ACTIVE (1 << 7)
+#define CMD_T_REQUEST_STOP (1 << 8)
+#define CMD_T_BUSY (1 << 9)
spinlock_t t_state_lock;
struct completion t_transport_stop_comp;
- struct completion transport_lun_fe_stop_comp;
- struct completion transport_lun_stop_comp;
struct work_struct work;
/* backend private data */
void *priv;
+
+ /* Used for lun->lun_ref counting */
+ bool lun_ref_active;
};
struct se_ua {
u32 acl_index;
#define MAX_ACL_TAG_SIZE 64
char acl_tag[MAX_ACL_TAG_SIZE];
- u64 num_cmds;
- u64 read_bytes;
- u64 write_bytes;
- spinlock_t stats_lock;
/* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */
atomic_t acl_pr_ref_count;
struct se_dev_entry **device_list;
u32 unmap_granularity;
u32 unmap_granularity_alignment;
u32 max_write_same_len;
+ u32 max_bytes_per_io;
struct se_device *da_dev;
struct config_group da_group;
};
+struct se_port_stat_grps {
+ struct config_group stat_group;
+ struct config_group scsi_port_group;
+ struct config_group scsi_tgt_port_group;
+ struct config_group scsi_transport_group;
+};
+
+struct se_lun {
+#define SE_LUN_LINK_MAGIC 0xffff7771
+ u32 lun_link_magic;
+ /* See transport_lun_status_table */
+ enum transport_lun_status_table lun_status;
+ u32 lun_access;
+ u32 lun_flags;
+ u32 unpacked_lun;
+ atomic_t lun_acl_count;
+ spinlock_t lun_acl_lock;
+ spinlock_t lun_sep_lock;
+ struct completion lun_shutdown_comp;
+ struct list_head lun_acl_list;
+ struct se_device *lun_se_dev;
+ struct se_port *lun_sep;
+ struct config_group lun_group;
+ struct se_port_stat_grps port_stat_grps;
+ struct completion lun_ref_comp;
+ struct percpu_ref lun_ref;
+};
+
struct se_dev_stat_grps {
struct config_group stat_group;
struct config_group scsi_dev_group;
/* Pointer to transport specific device structure */
u32 dev_index;
u64 creation_time;
- u32 num_resets;
- u64 num_cmds;
- u64 read_bytes;
- u64 write_bytes;
- spinlock_t stats_lock;
+ atomic_long_t num_resets;
+ atomic_long_t num_cmds;
+ atomic_long_t read_bytes;
+ atomic_long_t write_bytes;
/* Active commands on this virtual SE device */
atomic_t simple_cmds;
atomic_t dev_ordered_id;
struct se_subsystem_api *transport;
/* Linked list for struct se_hba struct se_device list */
struct list_head dev_list;
+ struct se_lun xcopy_lun;
};
struct se_hba {
struct se_subsystem_api *transport;
};
-struct se_port_stat_grps {
- struct config_group stat_group;
- struct config_group scsi_port_group;
- struct config_group scsi_tgt_port_group;
- struct config_group scsi_transport_group;
-};
-
-struct se_lun {
-#define SE_LUN_LINK_MAGIC 0xffff7771
- u32 lun_link_magic;
- /* See transport_lun_status_table */
- enum transport_lun_status_table lun_status;
- u32 lun_access;
- u32 lun_flags;
- u32 unpacked_lun;
- atomic_t lun_acl_count;
- spinlock_t lun_acl_lock;
- spinlock_t lun_cmd_lock;
- spinlock_t lun_sep_lock;
- struct completion lun_shutdown_comp;
- struct list_head lun_cmd_list;
- struct list_head lun_acl_list;
- struct se_device *lun_se_dev;
- struct se_port *lun_sep;
- struct config_group lun_group;
- struct se_port_stat_grps port_stat_grps;
-};
-
struct scsi_port_stats {
u64 cmd_pdus;
u64 tx_data_octets;
struct target_fabric_configfs_template tf_cit_tmpl;
};
-#define TF_CIT_TMPL(tf) (&(tf)->tf_cit_tmpl)
void __target_execute_cmd(struct se_cmd *);
int transport_lookup_tmr_lun(struct se_cmd *, u32);
+struct se_node_acl *core_tpg_get_initiator_node_acl(struct se_portal_group *tpg,
+ unsigned char *);
struct se_node_acl *core_tpg_check_initiator_node_acl(struct se_portal_group *,
unsigned char *);
void core_tpg_clear_object_luns(struct se_portal_group *);
{ EXTENT_FLAG_LOGGING, "LOGGING" }, \
{ EXTENT_FLAG_FILLING, "FILLING" })
-TRACE_EVENT(btrfs_get_extent,
+TRACE_EVENT_CONDITION(btrfs_get_extent,
TP_PROTO(struct btrfs_root *root, struct extent_map *map),
TP_ARGS(root, map),
+ TP_CONDITION(map),
+
TP_STRUCT__entry(
__field( u64, root_objectid )
__field( u64, start )
#define TRACE_EVENT_FLAGS(name, value) \
__TRACE_EVENT_FLAGS(name, value)
+#undef TRACE_EVENT_PERF_PERM
+#define TRACE_EVENT_PERF_PERM(name, expr...) \
+ __TRACE_EVENT_PERF_PERM(name, expr)
+
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
#undef TRACE_EVENT_FLAGS
#define TRACE_EVENT_FLAGS(event, flag)
+#undef TRACE_EVENT_PERF_PERM
+#define TRACE_EVENT_PERF_PERM(event, expr...)
+
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
/*
__data_size += (len) * sizeof(type);
#undef __string
-#define __string(item, src) __dynamic_array(char, item, strlen(src) + 1)
+#define __string(item, src) __dynamic_array(char, item, \
+ strlen((src) ? (const char *)(src) : "(null)") + 1)
#undef DECLARE_EVENT_CLASS
#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
#undef __assign_str
#define __assign_str(dst, src) \
- strcpy(__get_str(dst), src);
+ strcpy(__get_str(dst), (src) ? (const char *)(src) : "(null)");
#undef TP_fast_assign
#define TP_fast_assign(args...) args
#define RADEON_INFO_SI_TILE_MODE_ARRAY 0x16
/* query if CP DMA is supported on the compute ring */
#define RADEON_INFO_SI_CP_DMA_COMPUTE 0x17
+/* CIK macrotile mode array */
+#define RADEON_INFO_CIK_MACROTILE_MODE_ARRAY 0x18
+/* query the number of render backends */
+#define RADEON_INFO_SI_BACKEND_ENABLED_MASK 0x19
struct drm_radeon_info {
#define DRM_VMW_PARAM_FIFO_CAPS 4
#define DRM_VMW_PARAM_MAX_FB_SIZE 5
#define DRM_VMW_PARAM_FIFO_HW_VERSION 6
+#define DRM_VMW_PARAM_MAX_SURF_MEMORY 7
/**
* struct drm_vmw_getparam_arg
#define AUDIT_MAKE_EQUIV 1015 /* Append to watched tree */
#define AUDIT_TTY_GET 1016 /* Get TTY auditing status */
#define AUDIT_TTY_SET 1017 /* Set TTY auditing status */
+#define AUDIT_SET_FEATURE 1018 /* Turn an audit feature on or off */
+#define AUDIT_GET_FEATURE 1019 /* Get which features are enabled */
+#define AUDIT_FEATURE_CHANGE 1020 /* audit log listing feature changes */
#define AUDIT_FIRST_USER_MSG 1100 /* Userspace messages mostly uninteresting to kernel */
#define AUDIT_USER_AVC 1107 /* We filter this differently */
#define AUDIT_PERM_READ 4
#define AUDIT_PERM_ATTR 8
+/* MAX_AUDIT_MESSAGE_LENGTH is set in audit:lib/libaudit.h as:
+ * 8970 // PATH_MAX*2+CONTEXT_SIZE*2+11+256+1
+ * max header+body+trailer: 44 + 29 + 32 + 262 + 7 + pad
+ */
+#define AUDIT_MESSAGE_TEXT_MAX 8560
+
struct audit_status {
__u32 mask; /* Bit mask for valid entries */
__u32 enabled; /* 1 = enabled, 0 = disabled */
__u32 backlog; /* messages waiting in queue */
};
+struct audit_features {
+#define AUDIT_FEATURE_VERSION 1
+ __u32 vers;
+ __u32 mask; /* which bits we are dealing with */
+ __u32 features; /* which feature to enable/disable */
+ __u32 lock; /* which features to lock */
+};
+
+#define AUDIT_FEATURE_ONLY_UNSET_LOGINUID 0
+#define AUDIT_FEATURE_LOGINUID_IMMUTABLE 1
+#define AUDIT_LAST_FEATURE AUDIT_FEATURE_LOGINUID_IMMUTABLE
+
+#define audit_feature_valid(x) ((x) >= 0 && (x) <= AUDIT_LAST_FEATURE)
+#define AUDIT_FEATURE_TO_MASK(x) (1 << ((x) & 31)) /* mask for __u32 */
+
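For illustration, how userspace might fill the new AUDIT_SET_FEATURE payload
to enable and lock a single feature (a sketch, not taken from the patch):

	struct audit_features f = {
		.vers     = AUDIT_FEATURE_VERSION,
		.mask     = AUDIT_FEATURE_TO_MASK(AUDIT_FEATURE_LOGINUID_IMMUTABLE),
		.features = AUDIT_FEATURE_TO_MASK(AUDIT_FEATURE_LOGINUID_IMMUTABLE),
		.lock     = AUDIT_FEATURE_TO_MASK(AUDIT_FEATURE_LOGINUID_IMMUTABLE),
	};
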
struct audit_tty_status {
__u32 enabled; /* 1 = enabled, 0 = disabled */
__u32 log_passwd; /* 1 = enabled, 0 = disabled */
};
+#define AUDIT_UID_UNSET (unsigned int)-1
+
/* audit_rule_data supports filter rules with both integer and string
* fields. It corresponds with AUDIT_ADD_RULE, AUDIT_DEL_RULE and
* AUDIT_LIST_RULES requests.
__u64 data;
} EPOLL_PACKED;
-
+#ifdef CONFIG_PM_SLEEP
+static inline void ep_take_care_of_epollwakeup(struct epoll_event *epev)
+{
+ if ((epev->events & EPOLLWAKEUP) && !capable(CAP_BLOCK_SUSPEND))
+ epev->events &= ~EPOLLWAKEUP;
+}
+#else
+static inline void ep_take_care_of_epollwakeup(struct epoll_event *epev)
+{
+ epev->events &= ~EPOLLWAKEUP;
+}
+#endif
#endif /* _UAPI_LINUX_EVENTPOLL_H */
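
A sketch of the intended call site in the epoll_ctl() path, before the event
is stored (epds is the usual copied-in epoll_event):

	/* silently drop EPOLLWAKEUP unless the caller may block suspend */
	ep_take_care_of_epollwakeup(&epds);
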
#define GENL_ID_GENERATE 0
#define GENL_ID_CTRL NLMSG_MIN_TYPE
#define GENL_ID_VFS_DQUOT (NLMSG_MIN_TYPE + 1)
+#define GENL_ID_PMCRAID (NLMSG_MIN_TYPE + 2)
/**************************************************************************
* Controller
--- /dev/null
+/*
+ * Hash Info: Hash algorithms information
+ *
+ * Copyright (c) 2013 Dmitry Kasatkin <d.kasatkin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#ifndef _UAPI_LINUX_HASH_INFO_H
+#define _UAPI_LINUX_HASH_INFO_H
+
+enum hash_algo {
+ HASH_ALGO_MD4,
+ HASH_ALGO_MD5,
+ HASH_ALGO_SHA1,
+ HASH_ALGO_RIPE_MD_160,
+ HASH_ALGO_SHA256,
+ HASH_ALGO_SHA384,
+ HASH_ALGO_SHA512,
+ HASH_ALGO_SHA224,
+ HASH_ALGO_RIPE_MD_128,
+ HASH_ALGO_RIPE_MD_256,
+ HASH_ALGO_RIPE_MD_320,
+ HASH_ALGO_WP_256,
+ HASH_ALGO_WP_384,
+ HASH_ALGO_WP_512,
+ HASH_ALGO_TGR_128,
+ HASH_ALGO_TGR_160,
+ HASH_ALGO_TGR_192,
+ HASH_ALGO__LAST
+};
+
+#endif /* _UAPI_LINUX_HASH_INFO_H */
IFLA_HSR_UNSPEC,
IFLA_HSR_SLAVE1,
IFLA_HSR_SLAVE2,
- IFLA_HSR_MULTICAST_SPEC,
+ IFLA_HSR_MULTICAST_SPEC, /* Last byte of supervision addr */
+ IFLA_HSR_SUPERVISION_ADDR, /* Supervision frame multicast addr */
+ IFLA_HSR_SEQ_NR,
__IFLA_HSR_MAX,
};
#define KEY_BRIGHTNESS_ZERO 244 /* brightness off, use ambient */
#define KEY_DISPLAY_OFF 245 /* display device to off state */
-#define KEY_WIMAX 246
+#define KEY_WWAN 246 /* Wireless WAN (LTE, UMTS, GSM, etc.) */
+#define KEY_WIMAX KEY_WWAN
#define KEY_RFKILL 247 /* Key that controls all radios */
#define KEY_MICMUTE 248 /* Mute / unmute the microphone */
#define BTN_DPAD_LEFT 0x222
#define BTN_DPAD_RIGHT 0x223
+#define KEY_ALS_TOGGLE 0x230 /* Ambient light sensor */
+
#define BTN_TRIGGER_HAPPY 0x2c0
#define BTN_TRIGGER_HAPPY1 0x2c0
#define BTN_TRIGGER_HAPPY2 0x2c1
#define SW_FRONT_PROXIMITY 0x0b /* set = front proximity sensor active */
#define SW_ROTATE_LOCK 0x0c /* set = rotate locked/disabled */
#define SW_LINEIN_INSERT 0x0d /* set = inserted */
+#define SW_MUTE_DEVICE 0x0e /* set = device disabled */
#define SW_MAX 0x0f
#define SW_CNT (SW_MAX+1)
#define KEYCTL_REJECT 19 /* reject a partially constructed key */
#define KEYCTL_INSTANTIATE_IOV 20 /* instantiate a partially constructed key */
#define KEYCTL_INVALIDATE 21 /* invalidate a key */
+#define KEYCTL_GET_PERSISTENT 22 /* get a user's persistent keyring */
#endif /* _LINUX_KEYCTL_H */
#include <linux/virtio_ring.h>
-#ifndef __KERNEL__
-#define ALIGN(a, x) (((a) + (x) - 1) & ~((x) - 1))
-#define __aligned(x) __attribute__ ((aligned(x)))
-#endif
-
-#define mic_aligned_size(x) ALIGN(sizeof(x), 8)
+#define __mic_align(a, x) (((a) + (x) - 1) & ~((x) - 1))
/**
* struct mic_device_desc: Virtio device information shared between the
__u8 feature_len;
__u8 config_len;
__u8 status;
- __u64 config[0];
-} __aligned(8);
+ __le64 config[0];
+} __attribute__ ((aligned(8)));
/**
* struct mic_device_ctrl: Per virtio device information in the device page
* @h2c_vdev_db: The doorbell number to be used by host. Set by guest.
*/
struct mic_device_ctrl {
- __u64 vdev;
+ __le64 vdev;
__u8 config_change;
__u8 vdev_reset;
__u8 guest_ack;
__u8 used_address_updated;
__s8 c2h_vdev_db;
__s8 h2c_vdev_db;
-} __aligned(8);
+} __attribute__ ((aligned(8)));
/**
* struct mic_bootparam: Virtio device independent information in device page
* @shutdown_card: Set to 1 by the host when a card shutdown is initiated
*/
struct mic_bootparam {
- __u32 magic;
+ __le32 magic;
__s8 c2h_shutdown_db;
__s8 h2c_shutdown_db;
__s8 h2c_config_db;
__u8 shutdown_status;
__u8 shutdown_card;
-} __aligned(8);
+} __attribute__ ((aligned(8)));
/**
* struct mic_device_page: High level representation of the device page
* @num: The number of entries in the virtio_ring
*/
struct mic_vqconfig {
- __u64 address;
- __u64 used_address;
- __u16 num;
-} __aligned(8);
+ __le64 address;
+ __le64 used_address;
+ __le16 num;
+} __attribute__ ((aligned(8)));
/*
* The alignment to use between consumer and producer parts of vring.
*/
struct _mic_vring_info {
__u16 avail_idx;
- int magic;
+ __le32 magic;
};
/**
int len;
};
-#define mic_aligned_desc_size(d) ALIGN(mic_desc_size(d), 8)
+#define mic_aligned_desc_size(d) __mic_align(mic_desc_size(d), 8)
#ifndef INTEL_MIC_CARD
static inline unsigned mic_desc_size(const struct mic_device_desc *desc)
{
- return mic_aligned_size(*desc)
- + desc->num_vq * mic_aligned_size(struct mic_vqconfig)
- + desc->feature_len * 2
- + desc->config_len;
+ return sizeof(*desc) + desc->num_vq * sizeof(struct mic_vqconfig)
+ + desc->feature_len * 2 + desc->config_len;
}
static inline struct mic_vqconfig *
}
static inline unsigned mic_total_desc_size(struct mic_device_desc *desc)
{
- return mic_aligned_desc_size(desc) +
- mic_aligned_size(struct mic_device_ctrl);
+ return mic_aligned_desc_size(desc) + sizeof(struct mic_device_ctrl);
}
#endif
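
Since the shared fields are now explicitly little-endian, host-side readers
are expected to convert on access. A hypothetical accessor:

	u64 addr = le64_to_cpu(vqconfig->address);	/* byte-swaps on BE hosts */
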
};
enum {
+ /* NETLINK_DIAG_NONE, standard nl API requires this attribute! */
NETLINK_DIAG_MEMINFO,
NETLINK_DIAG_GROUPS,
NETLINK_DIAG_RX_RING,
};
enum {
+ /* PACKET_DIAG_NONE, standard nl API requires this attribute! */
PACKET_DIAG_INFO,
PACKET_DIAG_MCLIST,
PACKET_DIAG_RX_RING,
* PCI to PCI Bridge Specification
* PCI System Design Guide
*
- * For hypertransport information, please consult the following manuals
- * from http://www.hypertransport.org
+ * For HyperTransport information, please consult the following manuals
+ * from http://www.hypertransport.org
*
- * The Hypertransport I/O Link Specification
+ * The HyperTransport I/O Link Specification
*/
#ifndef LINUX_PCI_REGS_H
#define PCI_COMMAND_INVALIDATE 0x10 /* Use memory write and invalidate */
#define PCI_COMMAND_VGA_PALETTE 0x20 /* Enable palette snooping */
#define PCI_COMMAND_PARITY 0x40 /* Enable parity checking */
-#define PCI_COMMAND_WAIT 0x80 /* Enable address/data stepping */
+#define PCI_COMMAND_WAIT 0x80 /* Enable address/data stepping */
#define PCI_COMMAND_SERR 0x100 /* Enable SERR */
#define PCI_COMMAND_FAST_BACK 0x200 /* Enable back-to-back writes */
#define PCI_COMMAND_INTX_DISABLE 0x400 /* INTx Emulation Disable */
#define PCI_STATUS 0x06 /* 16 bits */
#define PCI_STATUS_INTERRUPT 0x08 /* Interrupt status */
#define PCI_STATUS_CAP_LIST 0x10 /* Support Capability List */
-#define PCI_STATUS_66MHZ 0x20 /* Support 66 Mhz PCI 2.1 bus */
+#define PCI_STATUS_66MHZ 0x20 /* Support 66 MHz PCI 2.1 bus */
#define PCI_STATUS_UDF 0x40 /* Support User Definable Features [obsolete] */
#define PCI_STATUS_FAST_BACK 0x80 /* Accept fast-back to back */
#define PCI_STATUS_PARITY 0x100 /* Detected parity error */
#define PCI_CAP_ID_CHSWP 0x06 /* CompactPCI HotSwap */
#define PCI_CAP_ID_PCIX 0x07 /* PCI-X */
#define PCI_CAP_ID_HT 0x08 /* HyperTransport */
-#define PCI_CAP_ID_VNDR 0x09 /* Vendor specific */
+#define PCI_CAP_ID_VNDR 0x09 /* Vendor-Specific */
#define PCI_CAP_ID_DBG 0x0A /* Debug port */
#define PCI_CAP_ID_CCRC 0x0B /* CompactPCI Central Resource Control */
-#define PCI_CAP_ID_SHPC 0x0C /* PCI Standard Hot-Plug Controller */
+#define PCI_CAP_ID_SHPC 0x0C /* PCI Standard Hot-Plug Controller */
#define PCI_CAP_ID_SSVID 0x0D /* Bridge subsystem vendor/device ID */
#define PCI_CAP_ID_AGP3 0x0E /* AGP Target PCI-PCI bridge */
#define PCI_CAP_ID_SECDEV 0x0F /* Secure Device */
-#define PCI_CAP_ID_EXP 0x10 /* PCI Express */
+#define PCI_CAP_ID_EXP 0x10 /* PCI Express */
#define PCI_CAP_ID_MSIX 0x11 /* MSI-X */
#define PCI_CAP_ID_SATA 0x12 /* SATA Data/Index Conf. */
#define PCI_CAP_ID_AF 0x13 /* PCI Advanced Features */
#define PCI_AGP_COMMAND_RQ_MASK 0xff000000 /* Master: Maximum number of requests */
#define PCI_AGP_COMMAND_SBA 0x0200 /* Sideband addressing enabled */
#define PCI_AGP_COMMAND_AGP 0x0100 /* Allow processing of AGP transactions */
-#define PCI_AGP_COMMAND_64BIT 0x0020 /* Allow processing of 64-bit addresses */
-#define PCI_AGP_COMMAND_FW 0x0010 /* Force FW transfers */
+#define PCI_AGP_COMMAND_64BIT 0x0020 /* Allow processing of 64-bit addresses */
+#define PCI_AGP_COMMAND_FW 0x0010 /* Force FW transfers */
#define PCI_AGP_COMMAND_RATE4 0x0004 /* Use 4x rate */
#define PCI_AGP_COMMAND_RATE2 0x0002 /* Use 2x rate */
#define PCI_AGP_COMMAND_RATE1 0x0001 /* Use 1x rate */
#define PCI_MSIX_PBA_OFFSET 0xfffffff8 /* Offset into specified BAR */
#define PCI_CAP_MSIX_SIZEOF 12 /* size of MSIX registers */
-/* MSI-X entry's format */
+/* MSI-X Table entry format */
#define PCI_MSIX_ENTRY_SIZE 16
#define PCI_MSIX_ENTRY_LOWER_ADDR 0
#define PCI_MSIX_ENTRY_UPPER_ADDR 4
#define PCI_X_CMD_SPLIT_16 0x0060 /* Max 16 */
#define PCI_X_CMD_SPLIT_32 0x0070 /* Max 32 */
#define PCI_X_CMD_MAX_SPLIT 0x0070 /* Max Outstanding Split Transactions */
-#define PCI_X_CMD_VERSION(x) (((x) >> 12) & 3) /* Version */
+#define PCI_X_CMD_VERSION(x) (((x) >> 12) & 3) /* Version */
#define PCI_X_STATUS 4 /* PCI-X capabilities */
#define PCI_X_STATUS_DEVFN 0x000000ff /* A copy of devfn */
#define PCI_X_STATUS_BUS 0x0000ff00 /* A copy of bus nr */
/* PCI Bridge Subsystem ID registers */
-#define PCI_SSVID_VENDOR_ID 4 /* PCI-Bridge subsystem vendor id register */
-#define PCI_SSVID_DEVICE_ID 6 /* PCI-Bridge subsystem device id register */
+#define PCI_SSVID_VENDOR_ID 4 /* PCI Bridge subsystem vendor ID */
+#define PCI_SSVID_DEVICE_ID 6 /* PCI Bridge subsystem device ID */
/* PCI Express capability registers */
#define PCI_EXP_LNKCTL_CLKREQ_EN 0x0100 /* Enable clkreq */
#define PCI_EXP_LNKCTL_HAWD 0x0200 /* Hardware Autonomous Width Disable */
#define PCI_EXP_LNKCTL_LBMIE 0x0400 /* Link Bandwidth Management Interrupt Enable */
-#define PCI_EXP_LNKCTL_LABIE 0x0800 /* Lnk Autonomous Bandwidth Interrupt Enable */
+#define PCI_EXP_LNKCTL_LABIE 0x0800 /* Link Autonomous Bandwidth Interrupt Enable */
#define PCI_EXP_LNKSTA 18 /* Link Status */
#define PCI_EXP_LNKSTA_CLS 0x000f /* Current Link Speed */
#define PCI_EXP_LNKSTA_CLS_2_5GB 0x0001 /* Current Link Speed 2.5GT/s */
#define PCI_EXP_LNKSTA_CLS_5_0GB 0x0002 /* Current Link Speed 5.0GT/s */
-#define PCI_EXP_LNKSTA_NLW 0x03f0 /* Nogotiated Link Width */
+#define PCI_EXP_LNKSTA_NLW 0x03f0 /* Negotiated Link Width */
#define PCI_EXP_LNKSTA_NLW_SHIFT 4 /* start of NLW mask in link status */
#define PCI_EXP_LNKSTA_LT 0x0800 /* Link Training */
#define PCI_EXP_LNKSTA_SLC 0x1000 /* Slot Clock Configuration */
#define PCI_EXT_CAP_ID_MFVC 0x08 /* Multi-Function VC Capability */
#define PCI_EXT_CAP_ID_VC9 0x09 /* same as _VC */
#define PCI_EXT_CAP_ID_RCRB 0x0A /* Root Complex RB? */
-#define PCI_EXT_CAP_ID_VNDR 0x0B /* Vendor Specific */
+#define PCI_EXT_CAP_ID_VNDR 0x0B /* Vendor-Specific */
#define PCI_EXT_CAP_ID_CAC 0x0C /* Config Access - obsolete */
#define PCI_EXT_CAP_ID_ACS 0x0D /* Access Control Services */
#define PCI_EXT_CAP_ID_ARI 0x0E /* Alternate Routing ID */
#define PCI_EXT_CAP_ID_MRIOV 0x11 /* Multi Root I/O Virtualization */
#define PCI_EXT_CAP_ID_MCAST 0x12 /* Multicast */
#define PCI_EXT_CAP_ID_PRI 0x13 /* Page Request Interface */
-#define PCI_EXT_CAP_ID_AMD_XXX 0x14 /* reserved for AMD */
-#define PCI_EXT_CAP_ID_REBAR 0x15 /* resizable BAR */
-#define PCI_EXT_CAP_ID_DPA 0x16 /* dynamic power alloc */
-#define PCI_EXT_CAP_ID_TPH 0x17 /* TPH request */
-#define PCI_EXT_CAP_ID_LTR 0x18 /* latency tolerance reporting */
-#define PCI_EXT_CAP_ID_SECPCI 0x19 /* Secondary PCIe */
+#define PCI_EXT_CAP_ID_AMD_XXX 0x14 /* Reserved for AMD */
+#define PCI_EXT_CAP_ID_REBAR 0x15 /* Resizable BAR */
+#define PCI_EXT_CAP_ID_DPA 0x16 /* Dynamic Power Allocation */
+#define PCI_EXT_CAP_ID_TPH 0x17 /* TPH Requester */
+#define PCI_EXT_CAP_ID_LTR 0x18 /* Latency Tolerance Reporting */
+#define PCI_EXT_CAP_ID_SECPCI 0x19 /* Secondary PCIe Capability */
#define PCI_EXT_CAP_ID_PMUX 0x1A /* Protocol Multiplexing */
#define PCI_EXT_CAP_ID_PASID 0x1B /* Process Address Space ID */
#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PASID
#define PCI_ERR_ROOT_COR_RCV 0x00000001 /* ERR_COR Received */
/* Multi ERR_COR Received */
#define PCI_ERR_ROOT_MULTI_COR_RCV 0x00000002
-/* ERR_FATAL/NONFATAL Recevied */
+/* ERR_FATAL/NONFATAL Received */
#define PCI_ERR_ROOT_UNCOR_RCV 0x00000004
-/* Multi ERR_FATAL/NONFATAL Recevied */
+/* Multi ERR_FATAL/NONFATAL Received */
#define PCI_ERR_ROOT_MULTI_UNCOR_RCV 0x00000008
#define PCI_ERR_ROOT_FIRST_FATAL 0x00000010 /* First Fatal */
#define PCI_ERR_ROOT_NONFATAL_RCV 0x00000020 /* Non-Fatal Received */
/* Virtual Channel */
#define PCI_VC_PORT_REG1 4
-#define PCI_VC_REG1_EVCC 0x7 /* extended vc count */
+#define PCI_VC_REG1_EVCC 0x7 /* extended VC count */
#define PCI_VC_PORT_REG2 8
#define PCI_VC_REG2_32_PHASE 0x2
#define PCI_VC_REG2_64_PHASE 0x4
#define PCI_VNDR_HEADER_LEN(x) (((x) >> 20) & 0xfff)
/*
- * Hypertransport sub capability types
+ * HyperTransport sub capability types
*
* Unfortunately there are both 3 bit and 5 bit capability types defined
* in the HT spec, catering for that is a little messy. You probably don't
#define HT_CAPTYPE_DIRECT_ROUTE 0xB0 /* Direct routing configuration */
#define HT_CAPTYPE_VCSET 0xB8 /* Virtual Channel configuration */
#define HT_CAPTYPE_ERROR_RETRY 0xC0 /* Retry on error configuration */
-#define HT_CAPTYPE_GEN3 0xD0 /* Generation 3 hypertransport configuration */
-#define HT_CAPTYPE_PM 0xE0 /* Hypertransport powermanagement configuration */
+#define HT_CAPTYPE_GEN3 0xD0 /* Generation 3 HyperTransport configuration */
+#define HT_CAPTYPE_PM 0xE0 /* HyperTransport power management configuration */
#define HT_CAP_SIZEOF_LONG 28 /* slave & primary */
#define HT_CAP_SIZEOF_SHORT 24 /* host & secondary */
#define PCI_PRI_ALLOC_REQ 0x0c /* PRI max reqs allowed */
#define PCI_EXT_CAP_PRI_SIZEOF 16
-/* PASID capability */
+/* Process Address Space ID */
#define PCI_PASID_CAP 0x04 /* PASID feature register */
#define PCI_PASID_CAP_EXEC 0x02 /* Exec permissions Supported */
-#define PCI_PASID_CAP_PRIV 0x04 /* Priviledge Mode Supported */
+#define PCI_PASID_CAP_PRIV 0x04 /* Privilege Mode Supported */
#define PCI_PASID_CTRL 0x06 /* PASID control register */
#define PCI_PASID_CTRL_ENABLE 0x01 /* Enable bit */
#define PCI_PASID_CTRL_EXEC 0x02 /* Exec permissions Enable */
-#define PCI_PASID_CTRL_PRIV 0x04 /* Priviledge Mode Enable */
+#define PCI_PASID_CTRL_PRIV 0x04 /* Privilege Mode Enable */
#define PCI_EXT_CAP_PASID_SIZEOF 8
/* Single Root I/O Virtualization */
#define PCI_ACS_CTRL 0x06 /* ACS Control Register */
#define PCI_ACS_EGRESS_CTL_V 0x08 /* ACS Egress Control Vector */
-#define PCI_VSEC_HDR 4 /* extended cap - vendor specific */
+#define PCI_VSEC_HDR 4 /* extended cap - vendor-specific */
#define PCI_VSEC_HDR_LEN_SHIFT 20 /* shift for length field */
-/* sata capability */
+/* SATA capability */
#define PCI_SATA_REGS 4 /* SATA REGs specifier */
#define PCI_SATA_REGS_MASK 0xF /* location - BAR#/inline */
#define PCI_SATA_REGS_INLINE 0xF /* REGS in config space */
#define PCI_SATA_SIZEOF_SHORT 8
#define PCI_SATA_SIZEOF_LONG 16
-/* resizable BARs */
+/* Resizable BARs */
#define PCI_REBAR_CTRL 8 /* control register */
#define PCI_REBAR_CTRL_NBAR_MASK (7 << 5) /* mask for # bars */
#define PCI_REBAR_CTRL_NBAR_SHIFT 5 /* shift for # bars */
-/* dynamic power allocation */
+/* Dynamic Power Allocation */
#define PCI_DPA_CAP 4 /* capability register */
#define PCI_DPA_CAP_SUBSTATE_MASK 0x1F /* # substates - 1 */
#define PCI_DPA_BASE_SIZEOF 16 /* size with 0 substates */
*
* { u64 weight; } && PERF_SAMPLE_WEIGHT
* { u64 data_src; } && PERF_SAMPLE_DATA_SRC
+ * { u64 transaction; } && PERF_SAMPLE_TRANSACTION
* };
*/
PERF_RECORD_SAMPLE = 9,
#define _MD_P_H
#include <linux/types.h>
+#include <asm/byteorder.h>
/*
* RAID superblock.
};
enum {
+ /* UNIX_DIAG_NONE, standard nl API requires this attribute! */
UNIX_DIAG_NAME,
UNIX_DIAG_VFS,
UNIX_DIAG_PEER,
#include <sound/compress_params.h>
-#define SNDRV_COMPRESS_VERSION SNDRV_PROTOCOL_VERSION(0, 1, 1)
+#define SNDRV_COMPRESS_VERSION SNDRV_PROTOCOL_VERSION(0, 1, 2)
/**
* struct snd_compressed_buffer: compressed buffer
* @fragment_size: size of buffer fragment in bytes
struct snd_compr_tstamp {
__u32 byte_offset;
__u32 copied_total;
- snd_pcm_uframes_t pcm_frames;
- snd_pcm_uframes_t pcm_io_frames;
+ __u32 pcm_frames;
+ __u32 pcm_io_frames;
__u32 sampling_rate;
};
struct blkif_request_rw {
uint8_t nr_segments; /* number of segments */
blkif_vdev_t handle; /* only for read/write requests */
-#ifdef CONFIG_X86_64
+#ifndef CONFIG_X86_32
uint32_t _pad1; /* offsetof(blkif_request,u.rw.id) == 8 */
#endif
uint64_t id; /* private guest value, echoed in resp */
uint8_t flag; /* BLKIF_DISCARD_SECURE or zero. */
#define BLKIF_DISCARD_SECURE (1<<0) /* ignored if discard-secure=0 */
blkif_vdev_t _pad1; /* only for read/write requests */
-#ifdef CONFIG_X86_64
+#ifndef CONFIG_X86_32
uint32_t _pad2; /* offsetof(blkif_req..,u.discard.id)==8*/
#endif
uint64_t id; /* private guest value, echoed in resp */
struct blkif_request_other {
uint8_t _pad1;
blkif_vdev_t _pad2; /* only for read/write requests */
-#ifdef CONFIG_X86_64
+#ifndef CONFIG_X86_32
uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/
#endif
uint64_t id; /* private guest value, echoed in resp */
struct blkif_request_indirect {
uint8_t indirect_op;
uint16_t nr_segments;
-#ifdef CONFIG_X86_64
+#ifndef CONFIG_X86_32
uint32_t _pad1; /* offsetof(blkif_...,u.indirect.id) == 8 */
#endif
uint64_t id;
blkif_vdev_t handle;
uint16_t _pad2;
grant_ref_t indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST];
-#ifdef CONFIG_X86_64
+#ifndef CONFIG_X86_32
uint32_t _pad3; /* make it 64 byte aligned */
#else
uint64_t _pad3; /* make it 64 byte aligned */
depends on AUDITSYSCALL
select FSNOTIFY
-config AUDIT_LOGINUID_IMMUTABLE
- bool "Make audit loginuid immutable"
- depends on AUDIT
- help
- The config option toggles if a task setting its loginuid requires
- CAP_SYS_AUDITCONTROL or if that task should require no special permissions
- but should instead only allow setting its loginuid if it was never
- previously set. On systems which use systemd or a similar central
- process to restart login services this should be set to true. On older
- systems in which an admin would typically have to directly stop and
- start processes this should be set to false. Setting this to true allows
- one to drop potentially dangerous capabilites from the login tasks,
- but may not be backwards compatible with older init systems.
-
source "kernel/irq/Kconfig"
source "kernel/time/Kconfig"
config ARCH_SUPPORTS_NUMA_BALANCING
bool
+#
+# For architectures that know their GCC __int128 support is sound
+#
+config ARCH_SUPPORTS_INT128
+ bool
+
# For architectures that (ab)use NUMA to represent different memory regions
# all cpu-local but of different latencies, such as SuperH.
#
default 0 if BASE_FULL
default 1 if !BASE_FULL
+config SYSTEM_TRUSTED_KEYRING
+ bool "Provide system-wide ring of trusted keys"
+ depends on KEYS
+ help
+ Provide a system keyring to which trusted keys can be added. Keys in
+ the keyring are considered to be trusted. Keys may be added at will
+ by the kernel from compiled-in data and from hardware key stores, but
+ userspace may only add extra keys if those keys can be verified by
+ keys already in the keyring.
+
+ Keys in this keyring are used by module signature checking.
+
menuconfig MODULES
bool "Enable loadable module support"
option modules
config MODULE_SIG
bool "Module signature verification"
depends on MODULES
+ select SYSTEM_TRUSTED_KEYRING
select KEYS
select CRYPTO
select ASYMMETRIC_KEY_TYPE
mem_init();
kmem_cache_init();
percpu_init_late();
- pgtable_init();
+ pgtable_cache_init();
vmalloc_init();
}
*/
static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
{
+ struct file *shm_file;
+
+ shm_file = shp->shm_file;
+ shp->shm_file = NULL;
ns->shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT;
shm_rmid(ns, shp);
shm_unlock(shp);
- if (!is_file_hugepages(shp->shm_file))
- shmem_lock(shp->shm_file, 0, shp->mlock_user);
+ if (!is_file_hugepages(shm_file))
+ shmem_lock(shm_file, 0, shp->mlock_user);
else if (shp->mlock_user)
- user_shm_unlock(file_inode(shp->shm_file)->i_size,
- shp->mlock_user);
- fput (shp->shm_file);
+ user_shm_unlock(file_inode(shm_file)->i_size, shp->mlock_user);
+ fput(shm_file);
ipc_rcu_putref(shp, shm_rcu_free);
}
ipc_lock_object(&shp->shm_perm);
if (!ns_capable(ns->user_ns, CAP_IPC_LOCK)) {
kuid_t euid = current_euid();
- err = -EPERM;
if (!uid_eq(euid, shp->shm_perm.uid) &&
- !uid_eq(euid, shp->shm_perm.cuid))
+ !uid_eq(euid, shp->shm_perm.cuid)) {
+ err = -EPERM;
goto out_unlock0;
- if (cmd == SHM_LOCK && !rlimit(RLIMIT_MEMLOCK))
+ }
+ if (cmd == SHM_LOCK && !rlimit(RLIMIT_MEMLOCK)) {
+ err = -EPERM;
goto out_unlock0;
+ }
}
shm_file = shp->shm_file;
+
+ /* check if shm_destroy() is tearing down shp */
+ if (shm_file == NULL) {
+ err = -EIDRM;
+ goto out_unlock0;
+ }
+
if (is_file_hugepages(shm_file))
goto out_unlock0;
goto out_unlock;
ipc_lock_object(&shp->shm_perm);
+
+ /* check if shm_destroy() is tearing down shp */
+ if (shp->shm_file == NULL) {
+ ipc_unlock_object(&shp->shm_perm);
+ err = -EIDRM;
+ goto out_unlock;
+ }
+
path = shp->shm_file->f_path;
path_get(&path);
shp->shm_nattch++;
config_data.gz
timeconst.h
hz.bc
+x509_certificate_list
obj-y += up.o
endif
obj-$(CONFIG_UID16) += uid16.o
+obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o
obj-$(CONFIG_MODULES) += module.o
-obj-$(CONFIG_MODULE_SIG) += module_signing.o modsign_pubkey.o modsign_certificate.o
+obj-$(CONFIG_MODULE_SIG) += module_signing.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
obj-$(CONFIG_KEXEC) += kexec.o
$(obj)/timeconst.h: $(obj)/hz.bc $(src)/timeconst.bc FORCE
$(call if_changed,bc)
-ifeq ($(CONFIG_MODULE_SIG),y)
+###############################################################################
+#
+# Roll all the X.509 certificates that we can find together and pull them into
+# the kernel so that they get loaded into the system trusted keyring during
+# boot.
#
-# Pull the signing certificate and any extra certificates into the kernel
+# We look in the source root and the build root for all files whose name ends
+# in ".x509". Unfortunately, this will generate duplicate filenames, so we
+# have make canonicalise the pathnames and then sort them to discard the
+# duplicates.
#
+###############################################################################
+ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y)
+X509_CERTIFICATES-y := $(wildcard *.x509) $(wildcard $(srctree)/*.x509)
+X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += $(objtree)/signing_key.x509
+X509_CERTIFICATES-raw := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \
+ $(or $(realpath $(CERT)),$(CERT))))
+X509_CERTIFICATES := $(subst $(realpath $(objtree))/,,$(X509_CERTIFICATES-raw))
+
+ifeq ($(X509_CERTIFICATES),)
+$(warning *** No X.509 certificates found ***)
+endif
+
+ifneq ($(wildcard $(obj)/.x509.list),)
+ifneq ($(shell cat $(obj)/.x509.list),$(X509_CERTIFICATES))
+$(info X.509 certificate list changed)
+$(shell rm $(obj)/.x509.list)
+endif
+endif
+
+kernel/system_certificates.o: $(obj)/x509_certificate_list
-quiet_cmd_touch = TOUCH $@
- cmd_touch = touch $@
+quiet_cmd_x509certs = CERTS $@
+ cmd_x509certs = cat $(X509_CERTIFICATES) /dev/null >$@ $(foreach X509,$(X509_CERTIFICATES),; echo " - Including cert $(X509)")
-extra_certificates:
- $(call cmd,touch)
+targets += $(obj)/x509_certificate_list
+$(obj)/x509_certificate_list: $(X509_CERTIFICATES) $(obj)/.x509.list
+ $(call if_changed,x509certs)
-kernel/modsign_certificate.o: signing_key.x509 extra_certificates
+targets += $(obj)/.x509.list
+$(obj)/.x509.list:
+ @echo $(X509_CERTIFICATES) >$@
+endif
+
+clean-files := x509_certificate_list .x509.list
+ifeq ($(CONFIG_MODULE_SIG),y)
###############################################################################
#
# If module signing is requested, say by allyesconfig, but a key has not been
#ifdef CONFIG_SECURITY
#include <linux/security.h>
#endif
-#include <net/netlink.h>
#include <linux/freezer.h>
#include <linux/tty.h>
#include <linux/pid_namespace.h>
static DECLARE_WAIT_QUEUE_HEAD(kauditd_wait);
static DECLARE_WAIT_QUEUE_HEAD(audit_backlog_wait);
+static struct audit_features af = {.vers = AUDIT_FEATURE_VERSION,
+ .mask = -1,
+ .features = 0,
+ .lock = 0,};
+
+static char *audit_feature_names[2] = {
+ "only_unset_loginuid",
+ "loginuid_immutable",
+};
+
+
/* Serialize requests from userspace. */
DEFINE_MUTEX(audit_cmd_mutex);
return -EOPNOTSUPP;
case AUDIT_GET:
case AUDIT_SET:
+ case AUDIT_GET_FEATURE:
+ case AUDIT_SET_FEATURE:
case AUDIT_LIST_RULES:
case AUDIT_ADD_RULE:
case AUDIT_DEL_RULE:
int rc = 0;
uid_t uid = from_kuid(&init_user_ns, current_uid());
- if (!audit_enabled) {
+ if (!audit_enabled && msg_type != AUDIT_USER_AVC) {
*ab = NULL;
return rc;
}
return rc;
}
+int is_audit_feature_set(int i)
+{
+ return af.features & AUDIT_FEATURE_TO_MASK(i);
+}
+
+
+static int audit_get_feature(struct sk_buff *skb)
+{
+ u32 seq;
+
+ seq = nlmsg_hdr(skb)->nlmsg_seq;
+
+	audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_GET_FEATURE, 0, 0,
+			 &af, sizeof(af));
+
+ return 0;
+}
+
+static void audit_log_feature_change(int which, u32 old_feature, u32 new_feature,
+ u32 old_lock, u32 new_lock, int res)
+{
+ struct audit_buffer *ab;
+
+ ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_FEATURE_CHANGE);
+	audit_log_format(ab, "feature=%s old=%d new=%d old_lock=%d new_lock=%d res=%d",
+ audit_feature_names[which], !!old_feature, !!new_feature,
+ !!old_lock, !!new_lock, res);
+ audit_log_end(ab);
+}
+
+static int audit_set_feature(struct sk_buff *skb)
+{
+ struct audit_features *uaf;
+ int i;
+
+	BUILD_BUG_ON(AUDIT_LAST_FEATURE + 1 > ARRAY_SIZE(audit_feature_names));
+ uaf = nlmsg_data(nlmsg_hdr(skb));
+
+ /* if there is ever a version 2 we should handle that here */
+
+ for (i = 0; i <= AUDIT_LAST_FEATURE; i++) {
+ u32 feature = AUDIT_FEATURE_TO_MASK(i);
+ u32 old_feature, new_feature, old_lock, new_lock;
+
+ /* if we are not changing this feature, move along */
+ if (!(feature & uaf->mask))
+ continue;
+
+ old_feature = af.features & feature;
+ new_feature = uaf->features & feature;
+ new_lock = (uaf->lock | af.lock) & feature;
+ old_lock = af.lock & feature;
+
+ /* are we changing a locked feature? */
+ if ((af.lock & feature) && (new_feature != old_feature)) {
+ audit_log_feature_change(i, old_feature, new_feature,
+ old_lock, new_lock, 0);
+ return -EPERM;
+ }
+ }
+ /* nothing invalid, do the changes */
+ for (i = 0; i <= AUDIT_LAST_FEATURE; i++) {
+ u32 feature = AUDIT_FEATURE_TO_MASK(i);
+ u32 old_feature, new_feature, old_lock, new_lock;
+
+ /* if we are not changing this feature, move along */
+ if (!(feature & uaf->mask))
+ continue;
+
+ old_feature = af.features & feature;
+ new_feature = uaf->features & feature;
+ old_lock = af.lock & feature;
+ new_lock = (uaf->lock | af.lock) & feature;
+
+ if (new_feature != old_feature)
+ audit_log_feature_change(i, old_feature, new_feature,
+ old_lock, new_lock, 1);
+
+ if (new_feature)
+ af.features |= feature;
+ else
+ af.features &= ~feature;
+ af.lock |= new_lock;
+ }
+
+ return 0;
+}
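
For orientation, a hypothetical userspace sketch — not part of the patch — of driving this interface. It assumes the uapi definitions that accompany this series (struct audit_features, AUDIT_SET_FEATURE, AUDIT_FEATURE_VERSION, AUDIT_FEATURE_TO_MASK()) and the standard NETLINK_AUDIT transport; it locks one feature bit at its current value without changing it.

/* Hypothetical userspace sketch: lock a single audit feature bit.
 * Assumes the uapi additions in this series (struct audit_features,
 * AUDIT_SET_FEATURE, AUDIT_FEATURE_VERSION, AUDIT_FEATURE_TO_MASK).
 */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/audit.h>

static int audit_lock_feature(unsigned int which)
{
	struct {
		struct nlmsghdr nlh;
		struct audit_features af;
	} req;
	struct sockaddr_nl addr = { .nl_family = AF_NETLINK };
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_AUDIT);

	if (fd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = sizeof(req);
	req.nlh.nlmsg_type = AUDIT_SET_FEATURE;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req.af.vers = AUDIT_FEATURE_VERSION;
	req.af.mask = AUDIT_FEATURE_TO_MASK(which);	/* touch only this bit */
	req.af.lock = AUDIT_FEATURE_TO_MASK(which);	/* lock current value */

	if (sendto(fd, &req, sizeof(req), 0,
		   (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}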
+
static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
{
u32 seq;
switch (msg_type) {
case AUDIT_GET:
+ memset(&status_set, 0, sizeof(status_set));
status_set.enabled = audit_enabled;
status_set.failure = audit_failure;
status_set.pid = audit_pid;
&status_set, sizeof(status_set));
break;
case AUDIT_SET:
- if (nlh->nlmsg_len < sizeof(struct audit_status))
+ if (nlmsg_len(nlh) < sizeof(struct audit_status))
return -EINVAL;
status_get = (struct audit_status *)data;
if (status_get->mask & AUDIT_STATUS_ENABLED) {
if (status_get->mask & AUDIT_STATUS_BACKLOG_LIMIT)
err = audit_set_backlog_limit(status_get->backlog_limit);
break;
+ case AUDIT_GET_FEATURE:
+ err = audit_get_feature(skb);
+ if (err)
+ return err;
+ break;
+ case AUDIT_SET_FEATURE:
+ err = audit_set_feature(skb);
+ if (err)
+ return err;
+ break;
case AUDIT_USER:
case AUDIT_FIRST_USER_MSG ... AUDIT_LAST_USER_MSG:
case AUDIT_FIRST_USER_MSG2 ... AUDIT_LAST_USER_MSG2:
}
audit_log_common_recv_msg(&ab, msg_type);
if (msg_type != AUDIT_USER_TTY)
- audit_log_format(ab, " msg='%.1024s'",
+ audit_log_format(ab, " msg='%.*s'",
+ AUDIT_MESSAGE_TEXT_MAX,
(char *)data);
else {
int size;
struct task_struct *tsk = current;
spin_lock(&tsk->sighand->siglock);
- s.enabled = tsk->signal->audit_tty != 0;
+ s.enabled = tsk->signal->audit_tty;
s.log_passwd = tsk->signal->audit_tty_log_passwd;
spin_unlock(&tsk->sighand->siglock);
memset(&s, 0, sizeof(s));
/* guard against past and future API changes */
- memcpy(&s, data, min(sizeof(s), (size_t)nlh->nlmsg_len));
+ memcpy(&s, data, min_t(size_t, sizeof(s), nlmsg_len(nlh)));
if ((s.enabled != 0 && s.enabled != 1) ||
(s.log_passwd != 0 && s.log_passwd != 1))
return -EINVAL;
remove_wait_queue(&audit_backlog_wait, &wait);
}
-/* Obtain an audit buffer. This routine does locking to obtain the
- * audit buffer, but then no locking is required for calls to
- * audit_log_*format. If the tsk is a task that is currently in a
- * syscall, then the syscall is marked as auditable and an audit record
- * will be written at syscall exit. If there is no associated task, tsk
- * should be NULL. */
-
/**
* audit_log_start - obtain an audit buffer
* @ctx: audit_context (may be NULL)
u32 sessionid = audit_get_sessionid(current);
uid_t auid = from_kuid(&init_user_ns, audit_get_loginuid(current));
- audit_log_format(ab, " auid=%u ses=%u\n", auid, sessionid);
+ audit_log_format(ab, " auid=%u ses=%u", auid, sessionid);
}
void audit_log_key(struct audit_buffer *ab, char *key)
}
}
+ /* log the audit_names record type */
+ audit_log_format(ab, " nametype=");
+	switch (n->type) {
+ case AUDIT_TYPE_NORMAL:
+ audit_log_format(ab, "NORMAL");
+ break;
+ case AUDIT_TYPE_PARENT:
+ audit_log_format(ab, "PARENT");
+ break;
+ case AUDIT_TYPE_CHILD_DELETE:
+ audit_log_format(ab, "DELETE");
+ break;
+ case AUDIT_TYPE_CHILD_CREATE:
+ audit_log_format(ab, "CREATE");
+ break;
+ default:
+ audit_log_format(ab, "UNKNOWN");
+ break;
+ }
+
audit_log_fcaps(ab, n);
audit_log_end(ab);
}
int fd;
int flags;
} mmap;
+ struct {
+ int argc;
+ } execve;
};
int fds[2];
case AUDIT_DEVMINOR:
case AUDIT_EXIT:
case AUDIT_SUCCESS:
+ case AUDIT_INODE:
/* bit ops are only useful on syscall args */
if (f->op == Audit_bitmask || f->op == Audit_bittest)
return -EINVAL;
f->lsm_rule = NULL;
/* Support legacy tests for a valid loginuid */
- if ((f->type == AUDIT_LOGINUID) && (f->val == ~0U)) {
+ if ((f->type == AUDIT_LOGINUID) && (f->val == AUDIT_UID_UNSET)) {
f->type = AUDIT_LOGINUID_SET;
f->val = 0;
}
/* Number of target pids per aux struct. */
#define AUDIT_AUX_PIDS 16
-struct audit_aux_data_execve {
- struct audit_aux_data d;
- int argc;
- int envc;
- struct mm_struct *mm;
-};
-
struct audit_aux_data_pids {
struct audit_aux_data d;
pid_t target_pid[AUDIT_AUX_PIDS];
struct audit_cap_data new_pcap;
};
-struct audit_aux_data_capset {
- struct audit_aux_data d;
- pid_t pid;
- struct audit_cap_data cap;
-};
-
struct audit_tree_refs {
struct audit_tree_refs *next;
struct audit_chunk *c[31];
break;
case AUDIT_INODE:
if (name)
- result = (name->ino == f->val);
+ result = audit_comparator(name->ino, f->op, f->val);
else if (ctx) {
list_for_each_entry(n, &ctx->names_list, list) {
if (audit_comparator(n->ino, f->op, f->val)) {
return 0; /* Return if not auditing. */
state = audit_filter_task(tsk, &key);
- if (state == AUDIT_DISABLED)
+ if (state == AUDIT_DISABLED) {
+ clear_tsk_thread_flag(tsk, TIF_SYSCALL_AUDIT);
return 0;
+ }
if (!(context = audit_alloc_context(state))) {
kfree(key);
}
static void audit_log_execve_info(struct audit_context *context,
- struct audit_buffer **ab,
- struct audit_aux_data_execve *axi)
+ struct audit_buffer **ab)
{
int i, len;
size_t len_sent = 0;
const char __user *p;
char *buf;
- if (axi->mm != current->mm)
- return; /* execve failed, no additional info */
-
- p = (const char __user *)axi->mm->arg_start;
+ p = (const char __user *)current->mm->arg_start;
- audit_log_format(*ab, "argc=%d", axi->argc);
+ audit_log_format(*ab, "argc=%d", context->execve.argc);
/*
* we need some kernel buffer to hold the userspace args. Just
return;
}
- for (i = 0; i < axi->argc; i++) {
+ for (i = 0; i < context->execve.argc; i++) {
len = audit_log_single_execve_arg(context, ab, i,
&len_sent, p, buf);
if (len <= 0)
audit_log_format(ab, "fd=%d flags=0x%x", context->mmap.fd,
context->mmap.flags);
break; }
+ case AUDIT_EXECVE: {
+ audit_log_execve_info(context, &ab);
+ break; }
}
audit_log_end(ab);
}
switch (aux->type) {
- case AUDIT_EXECVE: {
- struct audit_aux_data_execve *axi = (void *)aux;
- audit_log_execve_info(context, &ab, axi);
- break; }
-
case AUDIT_BPRM_FCAPS: {
struct audit_aux_data_bprm_fcaps *axs = (void *)aux;
audit_log_format(ab, "fver=%x", axs->fcap_ver);
/* global counter which is incremented every time something logs in */
static atomic_t session_id = ATOMIC_INIT(0);
+static int audit_set_loginuid_perm(kuid_t loginuid)
+{
+ /* if we are unset, we don't need privs */
+ if (!audit_loginuid_set(current))
+ return 0;
+	/* AUDIT_FEATURE_LOGINUID_IMMUTABLE means never, ever allow a change */
+ if (is_audit_feature_set(AUDIT_FEATURE_LOGINUID_IMMUTABLE))
+ return -EPERM;
+ /* it is set, you need permission */
+ if (!capable(CAP_AUDIT_CONTROL))
+ return -EPERM;
+ /* reject if this is not an unset and we don't allow that */
+ if (is_audit_feature_set(AUDIT_FEATURE_ONLY_UNSET_LOGINUID) && uid_valid(loginuid))
+ return -EPERM;
+ return 0;
+}
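
An illustrative userspace view of these rules (sketch, not from the patch): loginuid is exposed as /proc/self/loginuid, conventionally written once by pam_loginuid. Under the checks above, the first write needs no privilege, later rewrites need CAP_AUDIT_CONTROL, and the loginuid_immutable feature forbids them outright.

/* Illustrative sketch: set the login UID via the standard procfs
 * interface.  First write succeeds unprivileged; rewrites are subject
 * to the permission checks in audit_set_loginuid_perm() above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int set_loginuid(unsigned int uid)
{
	char buf[16];
	int fd, n;

	fd = open("/proc/self/loginuid", O_WRONLY);
	if (fd < 0)
		return -1;
	n = snprintf(buf, sizeof(buf), "%u", uid);
	if (write(fd, buf, n) != n) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}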
+
+static void audit_log_set_loginuid(kuid_t koldloginuid, kuid_t kloginuid,
+ unsigned int oldsessionid, unsigned int sessionid,
+ int rc)
+{
+ struct audit_buffer *ab;
+ uid_t uid, ologinuid, nloginuid;
+
+ uid = from_kuid(&init_user_ns, task_uid(current));
+ ologinuid = from_kuid(&init_user_ns, koldloginuid);
+	nloginuid = from_kuid(&init_user_ns, kloginuid);
+
+ ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_LOGIN);
+ if (!ab)
+ return;
+ audit_log_format(ab, "pid=%d uid=%u old auid=%u new auid=%u old "
+ "ses=%u new ses=%u res=%d", current->pid, uid, ologinuid,
+ nloginuid, oldsessionid, sessionid, !rc);
+ audit_log_end(ab);
+}
+
/**
* audit_set_loginuid - set current task's audit_context loginuid
* @loginuid: loginuid value
int audit_set_loginuid(kuid_t loginuid)
{
struct task_struct *task = current;
- struct audit_context *context = task->audit_context;
- unsigned int sessionid;
+ unsigned int oldsessionid, sessionid = (unsigned int)-1;
+ kuid_t oldloginuid;
+ int rc;
-#ifdef CONFIG_AUDIT_LOGINUID_IMMUTABLE
- if (audit_loginuid_set(task))
- return -EPERM;
-#else /* CONFIG_AUDIT_LOGINUID_IMMUTABLE */
- if (!capable(CAP_AUDIT_CONTROL))
- return -EPERM;
-#endif /* CONFIG_AUDIT_LOGINUID_IMMUTABLE */
+ oldloginuid = audit_get_loginuid(current);
+ oldsessionid = audit_get_sessionid(current);
- sessionid = atomic_inc_return(&session_id);
- if (context && context->in_syscall) {
- struct audit_buffer *ab;
+ rc = audit_set_loginuid_perm(loginuid);
+ if (rc)
+ goto out;
+
+ /* are we setting or clearing? */
+ if (uid_valid(loginuid))
+ sessionid = atomic_inc_return(&session_id);
- ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_LOGIN);
- if (ab) {
- audit_log_format(ab, "login pid=%d uid=%u "
- "old auid=%u new auid=%u"
- " old ses=%u new ses=%u",
- task->pid,
- from_kuid(&init_user_ns, task_uid(task)),
- from_kuid(&init_user_ns, task->loginuid),
- from_kuid(&init_user_ns, loginuid),
- task->sessionid, sessionid);
- audit_log_end(ab);
- }
- }
task->sessionid = sessionid;
task->loginuid = loginuid;
- return 0;
+out:
+ audit_log_set_loginuid(oldloginuid, loginuid, oldsessionid, sessionid, rc);
+ return rc;
}
/**
context->ipc.has_perm = 1;
}
-int __audit_bprm(struct linux_binprm *bprm)
+void __audit_bprm(struct linux_binprm *bprm)
{
- struct audit_aux_data_execve *ax;
struct audit_context *context = current->audit_context;
- ax = kmalloc(sizeof(*ax), GFP_KERNEL);
- if (!ax)
- return -ENOMEM;
-
- ax->argc = bprm->argc;
- ax->envc = bprm->envc;
- ax->mm = bprm->mm;
- ax->d.type = AUDIT_EXECVE;
- ax->d.next = context->aux;
- context->aux = (void *)ax;
- return 0;
+ context->type = AUDIT_EXECVE;
+ context->execve.argc = bprm->argc;
}
#ifdef CONFIG_SMP
DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS));
#endif
- DEFINE(BLOATED_SPINLOCKS, sizeof(spinlock_t) > sizeof(int));
+ DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
/* End of constants */
}
static DEFINE_MUTEX(cgroup_root_mutex);
+/*
+ * cgroup destruction makes heavy use of work items and there can be a lot
+ * of concurrent destructions. Use a separate workqueue so that cgroup
+ * destruction work items don't end up filling up max_active of system_wq
+ * which may lead to deadlock.
+ */
+static struct workqueue_struct *cgroup_destroy_wq;
+
/*
* Generate an array of cgroup subsystem pointers. At boot time, this is
* populated with the built in subsystems, and modular subsystems are
static int cgroup_destroy_locked(struct cgroup *cgrp);
static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
bool is_add);
+static int cgroup_file_release(struct inode *inode, struct file *file);
/**
* cgroup_css - obtain a cgroup's css for the specified subsystem
struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);
INIT_WORK(&cgrp->destroy_work, cgroup_free_fn);
- schedule_work(&cgrp->destroy_work);
+ queue_work(cgroup_destroy_wq, &cgrp->destroy_work);
}
static void cgroup_diput(struct dentry *dentry, struct inode *inode)
struct cgroup *cgrp = dentry->d_fsdata;
BUG_ON(!(cgroup_is_dead(cgrp)));
+
+ /*
+ * XXX: cgrp->id is only used to look up css's. As cgroup
+ * and css's lifetimes will be decoupled, it should be made
+ * per-subsystem and moved to css->id so that lookups are
+ * successful until the target css is released.
+ */
+ idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
+ cgrp->id = -1;
+
call_rcu(&cgrp->rcu_head, cgroup_free_rcu);
} else {
struct cfent *cfe = __d_cfe(dentry);
iput(inode);
}
-static int cgroup_delete(const struct dentry *d)
-{
- return 1;
-}
-
static void remove_dir(struct dentry *d)
{
struct dentry *parent = dget(d->d_parent);
{
static const struct dentry_operations cgroup_dops = {
.d_iput = cgroup_diput,
- .d_delete = cgroup_delete,
+ .d_delete = always_delete_dentry,
};
struct inode *inode =
.read = seq_read,
.write = cgroup_file_write,
.llseek = seq_lseek,
- .release = single_release,
+ .release = cgroup_file_release,
};
static int cgroup_file_open(struct inode *inode, struct file *file)
ret = cft->release(inode, file);
if (css->ss)
css_put(css);
+ if (file->f_op == &cgroup_seqfile_operations)
+ single_release(inode, file);
return ret;
}
* css_put(). dput() requires process context which we don't have.
*/
INIT_WORK(&css->destroy_work, css_free_work_fn);
- schedule_work(&css->destroy_work);
+ queue_work(cgroup_destroy_wq, &css->destroy_work);
}
static void css_release(struct percpu_ref *ref)
struct cgroup_subsys_state *css =
container_of(ref, struct cgroup_subsys_state, refcnt);
+ rcu_assign_pointer(css->cgroup->subsys[css->ss->subsys_id], NULL);
call_rcu(&css->rcu_head, css_free_rcu_fn);
}
list_add_tail_rcu(&cgrp->sibling, &cgrp->parent->children);
root->number_of_cgroups++;
- /* each css holds a ref to the cgroup's dentry and the parent css */
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = css_ar[ss->subsys_id];
-
- dget(dentry);
- css_get(css->parent);
- }
-
/* hold a ref to the parent's dentry */
dget(parent->dentry);
if (err)
goto err_destroy;
+ /* each css holds a ref to the cgroup's dentry and parent css */
+ dget(dentry);
+ css_get(css->parent);
+
+ /* mark it consumed for error path */
+ css_ar[ss->subsys_id] = NULL;
+
if (ss->broken_hierarchy && !ss->warned_broken_hierarchy &&
parent->parent) {
pr_warning("cgroup: %s (%d) created nested cgroup for controller \"%s\" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.\n",
return err;
err_destroy:
+ for_each_root_subsys(root, ss) {
+ struct cgroup_subsys_state *css = css_ar[ss->subsys_id];
+
+ if (css) {
+ percpu_ref_cancel_init(&css->refcnt);
+ ss->css_free(css);
+ }
+ }
cgroup_destroy_locked(cgrp);
mutex_unlock(&cgroup_mutex);
mutex_unlock(&dentry->d_inode->i_mutex);
container_of(ref, struct cgroup_subsys_state, refcnt);
INIT_WORK(&css->destroy_work, css_killed_work_fn);
- schedule_work(&css->destroy_work);
+ queue_work(cgroup_destroy_wq, &css->destroy_work);
}
/**
* will be invoked to perform the rest of destruction once the
* percpu refs of all css's are confirmed to be killed.
*/
- for_each_root_subsys(cgrp->root, ss)
- kill_css(cgroup_css(cgrp, ss));
+ for_each_root_subsys(cgrp->root, ss) {
+ struct cgroup_subsys_state *css = cgroup_css(cgrp, ss);
+
+ if (css)
+ kill_css(css);
+ }
/*
* Mark @cgrp dead. This prevents further task migration and child
/* delete this cgroup from parent->children */
list_del_rcu(&cgrp->sibling);
- /*
- * We should remove the cgroup object from idr before its grace
- * period starts, so we won't be looking up a cgroup while the
- * cgroup is being freed.
- */
- idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
- cgrp->id = -1;
-
dput(d);
set_bit(CGRP_RELEASABLE, &parent->flags);
return err;
}
+static int __init cgroup_wq_init(void)
+{
+ /*
+ * There isn't much point in executing destruction path in
+ * parallel. Good chunk is serialized with cgroup_mutex anyway.
+ * Use 1 for @max_active.
+ *
+ * We would prefer to do this in cgroup_init() above, but that
+ * is called before init_workqueues(): so leave this until after.
+ */
+ cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+ BUG_ON(!cgroup_destroy_wq);
+ return 0;
+}
+core_initcall(cgroup_wq_init);
+
/*
* proc_cgroup_show()
* - Print task's cgroup paths into seq_file, one line for each hierarchy
need_loop = task_has_mempolicy(tsk) ||
!nodes_intersects(*newmems, tsk->mems_allowed);
- if (need_loop)
+ if (need_loop) {
+ local_irq_disable();
write_seqcount_begin(&tsk->mems_allowed_seq);
+ }
nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1);
mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP2);
tsk->mems_allowed = *newmems;
- if (need_loop)
+ if (need_loop) {
write_seqcount_end(&tsk->mems_allowed_seq);
+ local_irq_enable();
+ }
task_unlock(tsk);
}
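
Why the irq-disable bracket matters: a seqcount reader spins until the sequence is even, so a reader running in interrupt context on the same CPU as an in-flight writer would livelock. A minimal sketch of the matching read side, in the spirit of the kernel's get_mems_allowed() idiom (the function name here is illustrative):

/* Sketch of the reader side under the same seqcount (illustrative;
 * mirrors the get_mems_allowed() retry idiom).  An interrupt handler
 * running this loop against a half-finished update on the same CPU
 * would spin forever -- hence local_irq_disable() around the writer.
 */
static nodemask_t read_mems_allowed(struct task_struct *tsk)
{
	nodemask_t mask;
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&tsk->mems_allowed_seq);
		mask = tsk->mems_allowed;
	} while (read_seqcount_retry(&tsk->mems_allowed_seq, seq));

	return mask;
}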
if (event->state != PERF_EVENT_STATE_ACTIVE)
return;
+ perf_pmu_disable(event->pmu);
+
event->state = PERF_EVENT_STATE_INACTIVE;
if (event->pending_disable) {
event->pending_disable = 0;
ctx->nr_freq--;
if (event->attr.exclusive || !cpuctx->active_oncpu)
cpuctx->exclusive = 0;
+
+ perf_pmu_enable(event->pmu);
}
static void
struct perf_event_context *ctx)
{
u64 tstamp = perf_event_time(event);
+ int ret = 0;
if (event->state <= PERF_EVENT_STATE_OFF)
return 0;
*/
smp_wmb();
+ perf_pmu_disable(event->pmu);
+
if (event->pmu->add(event, PERF_EF_START)) {
event->state = PERF_EVENT_STATE_INACTIVE;
event->oncpu = -1;
- return -EAGAIN;
+ ret = -EAGAIN;
+ goto out;
}
event->tstamp_running += tstamp - event->tstamp_stopped;
if (event->attr.exclusive)
cpuctx->exclusive = 1;
- return 0;
+out:
+ perf_pmu_enable(event->pmu);
+
+ return ret;
}
static int
if (!event_filter_match(event))
continue;
+ perf_pmu_disable(event->pmu);
+
hwc = &event->hw;
if (hwc->interrupts == MAX_INTERRUPTS) {
}
if (!event->attr.freq || !event->attr.sample_freq)
- continue;
+ goto next;
/*
* stop the event and update event->count
perf_adjust_period(event, period, delta, false);
event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
+ next:
+ perf_pmu_enable(event->pmu);
}
perf_pmu_enable(ctx->pmu);
{
int cpu;
- if (event->cpu != -1) {
- swevent_hlist_put_cpu(event, event->cpu);
- return;
- }
-
for_each_possible_cpu(cpu)
swevent_hlist_put_cpu(event, cpu);
}
int err;
int cpu, failed_cpu;
- if (event->cpu != -1)
- return swevent_hlist_get_cpu(event, event->cpu);
-
get_online_cpus();
for_each_possible_cpu(cpu) {
err = swevent_hlist_get_cpu(event, cpu);
static inline int init_kernel_text(unsigned long addr)
{
if (addr >= (unsigned long)_sinittext &&
- addr <= (unsigned long)_einittext)
+ addr < (unsigned long)_einittext)
return 1;
return 0;
}
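
The `<=` to `<` change reflects half-open interval semantics: by the usual linker-script convention, the end symbol (`_einittext`, `_etext`) labels the first byte past the section, so an inclusive compare wrongly accepts one address beyond it. A minimal sketch under that assumption:

/* Half-open ranges with linker symbols (assumed conventional layout):
 * [start, end) -- the end symbol is one past the last byte, so
 * membership must use '<', never '<='.
 */
static inline int in_text_range(unsigned long addr,
				unsigned long start, unsigned long end)
{
	return addr >= start && addr < end;	/* end itself is outside */
}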
int core_kernel_text(unsigned long addr)
{
if (addr >= (unsigned long)_stext &&
- addr <= (unsigned long)_etext)
+ addr < (unsigned long)_etext)
return 1;
if (system_state == SYSTEM_BOOTING &&
spin_lock_init(&mm->page_table_lock);
mm_init_aio(mm);
mm_init_owner(mm, p);
+ clear_tlb_flush_pending(mm);
if (likely(!mm_alloc_pgd(mm))) {
mm->def_flags = 0;
bool pm_freezing;
bool pm_nosig_freezing;
+/*
+ * Temporary export for the deadlock workaround in ata_scsi_hotplug().
+ * Remove once the hack becomes unnecessary.
+ */
+EXPORT_SYMBOL_GPL(pm_freezing);
+
/* protects freezing and frozen transitions */
static DEFINE_SPINLOCK(freezer_lock);
return -EINVAL;
address -= key->both.offset;
+ if (unlikely(!access_ok(rw, uaddr, sizeof(u32))))
+ return -EFAULT;
+
/*
* PROCESS_PRIVATE futexes are fast.
* As the mm cannot disappear under us and the 'key' only needs
* but access_ok() should be faster than find_vma()
*/
if (!fshared) {
- if (unlikely(!access_ok(VERIFY_WRITE, uaddr, sizeof(u32))))
- return -EFAULT;
key->private.mm = mm;
key->private.address = address;
get_futex_key_refs(key);
put_page(page);
/* serialize against __split_huge_page_splitting() */
local_irq_disable();
- if (likely(__get_user_pages_fast(address, 1, 1, &page) == 1)) {
+ if (likely(__get_user_pages_fast(address, 1, !ro, &page) == 1)) {
page_head = compound_head(page);
/*
* page_head is valid pointer but we must pin
bool is_early = desc->action &&
desc->action->flags & IRQF_EARLY_RESUME;
- if (is_early != want_early)
+ if (!is_early && want_early)
continue;
raw_spin_lock_irqsave(&desc->lock, flags);
size_t vmcoreinfo_size;
size_t vmcoreinfo_max_size = sizeof(vmcoreinfo_data);
+/* Flag to indicate we are going to kexec a new kernel */
+bool kexec_in_progress = false;
+
/* Location of the reserved area for the crash kernel */
struct resource crashk_res = {
.name = "Crash kernel",
} else
#endif
{
+ kexec_in_progress = true;
kernel_restart_prepare(NULL);
+ migrate_to_reboot_cpu();
printk(KERN_EMERG "Starting new kernel\n");
machine_shutdown();
}
+++ /dev/null
-#include <linux/export.h>
-
-#define GLOBAL(name) \
- .globl VMLINUX_SYMBOL(name); \
- VMLINUX_SYMBOL(name):
-
- .section ".init.data","aw"
-
-GLOBAL(modsign_certificate_list)
- .incbin "signing_key.x509"
- .incbin "extra_certificates"
-GLOBAL(modsign_certificate_list_end)
+++ /dev/null
-/* Public keys for module signature verification
- *
- * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public Licence
- * as published by the Free Software Foundation; either version
- * 2 of the Licence, or (at your option) any later version.
- */
-
-#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/cred.h>
-#include <linux/err.h>
-#include <keys/asymmetric-type.h>
-#include "module-internal.h"
-
-struct key *modsign_keyring;
-
-extern __initconst const u8 modsign_certificate_list[];
-extern __initconst const u8 modsign_certificate_list_end[];
-
-/*
- * We need to make sure ccache doesn't cache the .o file as it doesn't notice
- * if modsign.pub changes.
- */
-static __initconst const char annoy_ccache[] = __TIME__ "foo";
-
-/*
- * Load the compiled-in keys
- */
-static __init int module_verify_init(void)
-{
- pr_notice("Initialise module verification\n");
-
- modsign_keyring = keyring_alloc(".module_sign",
- KUIDT_INIT(0), KGIDT_INIT(0),
- current_cred(),
- ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
- KEY_USR_VIEW | KEY_USR_READ),
- KEY_ALLOC_NOT_IN_QUOTA, NULL);
- if (IS_ERR(modsign_keyring))
- panic("Can't allocate module signing keyring\n");
-
- return 0;
-}
-
-/*
- * Must be initialised before we try and load the keys into the keyring.
- */
-device_initcall(module_verify_init);
-
-/*
- * Load the compiled-in keys
- */
-static __init int load_module_signing_keys(void)
-{
- key_ref_t key;
- const u8 *p, *end;
- size_t plen;
-
- pr_notice("Loading module verification certificates\n");
-
- end = modsign_certificate_list_end;
- p = modsign_certificate_list;
- while (p < end) {
- /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
- * than 256 bytes in size.
- */
- if (end - p < 4)
- goto dodgy_cert;
- if (p[0] != 0x30 &&
- p[1] != 0x82)
- goto dodgy_cert;
- plen = (p[2] << 8) | p[3];
- plen += 4;
- if (plen > end - p)
- goto dodgy_cert;
-
- key = key_create_or_update(make_key_ref(modsign_keyring, 1),
- "asymmetric",
- NULL,
- p,
- plen,
- (KEY_POS_ALL & ~KEY_POS_SETATTR) |
- KEY_USR_VIEW,
- KEY_ALLOC_NOT_IN_QUOTA);
- if (IS_ERR(key))
- pr_err("MODSIGN: Problem loading in-kernel X.509 certificate (%ld)\n",
- PTR_ERR(key));
- else
- pr_notice("MODSIGN: Loaded cert '%s'\n",
- key_ref_to_ptr(key)->description);
- p += plen;
- }
-
- return 0;
-
-dodgy_cert:
- pr_err("MODSIGN: Problem parsing in-kernel X.509 certificate list\n");
- return 0;
-}
-late_initcall(load_module_signing_keys);
* 2 of the Licence, or (at your option) any later version.
*/
-extern struct key *modsign_keyring;
-
extern int mod_verify_sig(const void *mod, unsigned long *_modlen);
#include <crypto/public_key.h>
#include <crypto/hash.h>
#include <keys/asymmetric-type.h>
+#include <keys/system_keyring.h>
#include "module-internal.h"
/*
*/
struct module_signature {
u8 algo; /* Public-key crypto algorithm [enum pkey_algo] */
- u8 hash; /* Digest algorithm [enum pkey_hash_algo] */
+ u8 hash; /* Digest algorithm [enum hash_algo] */
u8 id_type; /* Key identifier type [enum pkey_id_type] */
u8 signer_len; /* Length of signer's name */
u8 key_id_len; /* Length of key identifier */
/*
* Digest the module contents.
*/
-static struct public_key_signature *mod_make_digest(enum pkey_hash_algo hash,
+static struct public_key_signature *mod_make_digest(enum hash_algo hash,
const void *mod,
unsigned long modlen)
{
/* Allocate the hashing algorithm we're going to need and find out how
* big the hash operational data will be.
*/
- tfm = crypto_alloc_shash(pkey_hash_algo[hash], 0, 0);
+ tfm = crypto_alloc_shash(hash_algo_name[hash], 0, 0);
if (IS_ERR(tfm))
return (PTR_ERR(tfm) == -ENOENT) ? ERR_PTR(-ENOPKG) : ERR_CAST(tfm);
pr_debug("Look up: \"%s\"\n", id);
- key = keyring_search(make_key_ref(modsign_keyring, 1),
+ key = keyring_search(make_key_ref(system_trusted_keyring, 1),
&key_type_asymmetric, id);
if (IS_ERR(key))
pr_warn("Request for unknown module key '%s' err %ld\n",
return -ENOPKG;
if (ms.hash >= PKEY_HASH__LAST ||
- !pkey_hash_algo[ms.hash])
+ !hash_algo_name[ms.hash])
return -ENOPKG;
key = request_asymmetric_key(sig, ms.signer_len,
static int padata_cpu_hash(struct parallel_data *pd)
{
+ unsigned int seq_nr;
int cpu_index;
/*
* seq_nr mod. number of cpus in use.
*/
- spin_lock(&pd->seq_lock);
- cpu_index = pd->seq_nr % cpumask_weight(pd->cpumask.pcpu);
- pd->seq_nr++;
- spin_unlock(&pd->seq_lock);
+ seq_nr = atomic_inc_return(&pd->seq_nr);
+ cpu_index = seq_nr % cpumask_weight(pd->cpumask.pcpu);
return padata_index_to_cpu(pd, cpu_index);
}
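
The conversion above replaces a lock-protected counter with a lockless atomic round-robin. The same idea in portable C11 atomics (illustrative sketch, not the kernel code): fetch-and-add returns the old value, so adding one back mirrors atomic_inc_return(), and starting the counter at -1 makes the first object land on index 0.

/* Lockless round-robin selection with C11 atomics (illustrative). */
#include <stdatomic.h>

static atomic_uint seq_nr = ATOMIC_VAR_INIT(-1u);

static unsigned int next_cpu_index(unsigned int nr_cpus)
{
	/* fetch_add returns the old value; +1 mirrors atomic_inc_return() */
	unsigned int seq = atomic_fetch_add(&seq_nr, 1u) + 1u;

	return seq % nr_cpus;
}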
padata_init_pqueues(pd);
padata_init_squeues(pd);
setup_timer(&pd->timer, padata_reorder_timer, (unsigned long)pd);
- pd->seq_nr = 0;
+ atomic_set(&pd->seq_nr, -1);
atomic_set(&pd->reorder_objects, 0);
atomic_set(&pd->refcnt, 0);
pd->pinst = pinst;
list_for_each_entry(tmp, &pm_vt_switch_list, head) {
if (tmp->dev == dev) {
list_del(&tmp->head);
+ kfree(tmp);
break;
}
}
{
struct memory_bitmap *bm1, *bm2;
- BUG_ON(!(forbidden_pages_map && free_pages_map));
+ if (WARN_ON(!(forbidden_pages_map && free_pages_map)))
+ return;
bm1 = forbidden_pages_map;
bm2 = free_pages_map;
data->swap = swsusp_resume_device ?
swap_type_of(swsusp_resume_device, 0, NULL) : -1;
data->mode = O_RDONLY;
+ data->free_bitmaps = false;
error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE);
if (error)
pm_notifier_call_chain(PM_POST_HIBERNATION);
static int rcu_idle_lazy_gp_delay = RCU_IDLE_LAZY_GP_DELAY;
module_param(rcu_idle_lazy_gp_delay, int, 0644);
-extern int tick_nohz_enabled;
+extern int tick_nohz_active;
/*
* Try to advance callbacks for all flavors of RCU on the current CPU, but
int tne;
/* Handle nohz enablement switches conservatively. */
- tne = ACCESS_ONCE(tick_nohz_enabled);
+ tne = ACCESS_ONCE(tick_nohz_active);
if (tne != rdtp->tick_nohz_enabled_snap) {
if (rcu_cpu_has_callbacks(cpu, NULL))
invoke_rcu_core(); /* force nohz to see update. */
}
EXPORT_SYMBOL(unregister_reboot_notifier);
-static void migrate_to_reboot_cpu(void)
+void migrate_to_reboot_cpu(void)
{
/* The boot cpu is always logical cpu 0 */
int cpu = reboot_cpu;
} while (need_resched());
}
EXPORT_SYMBOL(preempt_schedule);
+#endif /* CONFIG_PREEMPT */
/*
* this is the entry point to schedule() from kernel preemption
exception_exit(prev_state);
}
-#endif /* CONFIG_PREEMPT */
-
int default_wake_function(wait_queue_t *curr, unsigned mode, int wake_flags,
void *key)
{
cpumask_clear_cpu(rq->cpu, old_rd->span);
/*
- * If we dont want to free the old_rt yet then
+	 * If we don't want to free the old_rd yet then
* set old_rd to NULL to skip the freeing later
* in this function:
*/
static void update_top_cache_domain(int cpu)
{
struct sched_domain *sd;
+ struct sched_domain *busy_sd = NULL;
int id = cpu;
int size = 1;
if (sd) {
id = cpumask_first(sched_domain_span(sd));
size = cpumask_weight(sched_domain_span(sd));
- rcu_assign_pointer(per_cpu(sd_busy, cpu), sd->parent);
+ busy_sd = sd->parent; /* sd_busy */
}
+ rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
per_cpu(sd_llc_size, cpu) = size;
* die on a /0 trap.
*/
sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span);
+ sg->sgp->power_orig = sg->sgp->power;
/*
* Make sure the first group of this domain contains the
update_sysctl();
}
-#if BITS_PER_LONG == 32
-# define WMULT_CONST (~0UL)
-#else
-# define WMULT_CONST (1UL << 32)
-#endif
-
+#define WMULT_CONST (~0U)
#define WMULT_SHIFT 32
-/*
- * Shift right and round:
- */
-#define SRR(x, y) (((x) + (1UL << ((y) - 1))) >> (y))
+static void __update_inv_weight(struct load_weight *lw)
+{
+ unsigned long w;
+
+ if (likely(lw->inv_weight))
+ return;
+
+ w = scale_load_down(lw->weight);
+
+ if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
+ lw->inv_weight = 1;
+ else if (unlikely(!w))
+ lw->inv_weight = WMULT_CONST;
+ else
+ lw->inv_weight = WMULT_CONST / w;
+}
/*
- * delta *= weight / lw
+ * delta_exec * weight / lw.weight
+ * OR
+ * (delta_exec * (weight * lw->inv_weight)) >> WMULT_SHIFT
+ *
+ * Either weight := NICE_0_LOAD and lw \e prio_to_wmult[], in which case
+ * we're guaranteed shift stays positive because inv_weight is guaranteed to
+ * fit 32 bits, and NICE_0_LOAD gives another 10 bits; therefore shift >= 22.
+ *
+ * Or, weight <= lw.weight (because lw.weight is the runqueue weight), thus
+ * weight/lw.weight <= 1, and therefore our shift will also be positive.
*/
-static unsigned long
-calc_delta_mine(unsigned long delta_exec, unsigned long weight,
- struct load_weight *lw)
+static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
{
- u64 tmp;
+ u64 fact = scale_load_down(weight);
+ int shift = WMULT_SHIFT;
- /*
- * weight can be less than 2^SCHED_LOAD_RESOLUTION for task group sched
- * entities since MIN_SHARES = 2. Treat weight as 1 if less than
- * 2^SCHED_LOAD_RESOLUTION.
- */
- if (likely(weight > (1UL << SCHED_LOAD_RESOLUTION)))
- tmp = (u64)delta_exec * scale_load_down(weight);
- else
- tmp = (u64)delta_exec;
+ __update_inv_weight(lw);
- if (!lw->inv_weight) {
- unsigned long w = scale_load_down(lw->weight);
-
- if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
- lw->inv_weight = 1;
- else if (unlikely(!w))
- lw->inv_weight = WMULT_CONST;
- else
- lw->inv_weight = WMULT_CONST / w;
+ if (unlikely(fact >> 32)) {
+ while (fact >> 32) {
+ fact >>= 1;
+ shift--;
+ }
}
- /*
- * Check whether we'd overflow the 64-bit multiplication:
- */
- if (unlikely(tmp > WMULT_CONST))
- tmp = SRR(SRR(tmp, WMULT_SHIFT/2) * lw->inv_weight,
- WMULT_SHIFT/2);
- else
- tmp = SRR(tmp * lw->inv_weight, WMULT_SHIFT);
+ /* hint to use a 32x32->64 mul */
+ fact = (u64)(u32)fact * lw->inv_weight;
- return (unsigned long)min(tmp, (u64)(unsigned long)LONG_MAX);
+ while (fact >> 32) {
+ fact >>= 1;
+ shift--;
+ }
+
+ return mul_u64_u32_shr(delta_exec, fact, shift);
}
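
A worked example of the fixed-point scheme, with assumed sample numbers (userspace demo, not the kernel code): delta * weight / lw.weight is computed as (delta * (weight * inv_weight)) >> 32, where inv_weight = ~0u / lw.weight. With weight = 1024 (NICE_0_LOAD scaled down) and lw.weight = 2048, the exact answer for delta = 1000000 is 500000; the fixed-point result lands within 1 of that. The kernel uses mul_u64_u32_shr() for the final multiply to avoid a full 64x64 overflow; the plain multiply below is safe only for the small demo values.

/* Userspace demo of __calc_delta()'s fixed-point arithmetic. */
#include <stdint.h>
#include <stdio.h>

static uint64_t calc_delta_demo(uint64_t delta, uint32_t weight,
				uint32_t lw_weight)
{
	uint32_t inv_weight = UINT32_MAX / lw_weight;
	uint64_t fact = (uint64_t)weight * inv_weight;
	int shift = 32;

	while (fact >> 32) {	/* renormalise so fact fits in 32 bits */
		fact >>= 1;
		shift--;
	}
	/* kernel: mul_u64_u32_shr(delta, fact, shift) */
	return (delta * fact) >> shift;
}

int main(void)
{
	/* 1000000ns at weight 1024 on a queue weighing 2048 -> ~500000 */
	printf("%llu\n",
	       (unsigned long long)calc_delta_demo(1000000, 1024, 2048));
	return 0;
}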
#endif /* CONFIG_FAIR_GROUP_SCHED */
static __always_inline
-void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec);
+void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec);
/**************************************************************
* Scheduling class tree data structure manipulation methods:
/*
* delta /= w
*/
-static inline unsigned long
-calc_delta_fair(unsigned long delta, struct sched_entity *se)
+static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
{
if (unlikely(se->load.weight != NICE_0_LOAD))
- delta = calc_delta_mine(delta, NICE_0_LOAD, &se->load);
+ delta = __calc_delta(delta, NICE_0_LOAD, &se->load);
return delta;
}
update_load_add(&lw, se->load.weight);
load = &lw;
}
- slice = calc_delta_mine(slice, se->load.weight, load);
+ slice = __calc_delta(slice, se->load.weight, load);
}
return slice;
}
#endif
/*
- * Update the current task's runtime statistics. Skip current tasks that
- * are not in our scheduling class.
+ * Update the current task's runtime statistics.
*/
-static inline void
-__update_curr(struct cfs_rq *cfs_rq, struct sched_entity *curr,
- unsigned long delta_exec)
-{
- unsigned long delta_exec_weighted;
-
- schedstat_set(curr->statistics.exec_max,
- max((u64)delta_exec, curr->statistics.exec_max));
-
- curr->sum_exec_runtime += delta_exec;
- schedstat_add(cfs_rq, exec_clock, delta_exec);
- delta_exec_weighted = calc_delta_fair(delta_exec, curr);
-
- curr->vruntime += delta_exec_weighted;
- update_min_vruntime(cfs_rq);
-}
-
static void update_curr(struct cfs_rq *cfs_rq)
{
struct sched_entity *curr = cfs_rq->curr;
u64 now = rq_clock_task(rq_of(cfs_rq));
- unsigned long delta_exec;
+ u64 delta_exec;
if (unlikely(!curr))
return;
- /*
- * Get the amount of time the current task was running
- * since the last time we changed load (this cannot
- * overflow on 32 bits):
- */
- delta_exec = (unsigned long)(now - curr->exec_start);
- if (!delta_exec)
+ delta_exec = now - curr->exec_start;
+ if (unlikely((s64)delta_exec <= 0))
return;
- __update_curr(cfs_rq, curr, delta_exec);
curr->exec_start = now;
+ schedstat_set(curr->statistics.exec_max,
+ max(delta_exec, curr->statistics.exec_max));
+
+ curr->sum_exec_runtime += delta_exec;
+ schedstat_add(cfs_rq, exec_clock, delta_exec);
+
+ curr->vruntime += calc_delta_fair(delta_exec, curr);
+ update_min_vruntime(cfs_rq);
+
if (entity_is_task(curr)) {
struct task_struct *curtask = task_of(curr);
(vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ)))
continue;
+ /*
+ * Skip inaccessible VMAs to avoid any confusion between
+ * PROT_NONE and NUMA hinting ptes
+ */
+ if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+ continue;
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
}
}
-static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq,
- unsigned long delta_exec)
+static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
/* dock delta_exec before expiring quota (as it could span periods) */
cfs_rq->runtime_remaining -= delta_exec;
}
static __always_inline
-void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, unsigned long delta_exec)
+void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
if (!cfs_bandwidth_used() || !cfs_rq->runtime_enabled)
return;
return rq_clock_task(rq_of(cfs_rq));
}
-static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq,
- unsigned long delta_exec) {}
+static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) {}
static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
static void check_enqueue_throttle(struct cfs_rq *cfs_rq) {}
static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
*/
for_each_cpu(cpu, sched_group_cpus(sdg)) {
- struct sched_group *sg = cpu_rq(cpu)->sd->groups;
+ struct sched_group_power *sgp;
+ struct rq *rq = cpu_rq(cpu);
+
+ /*
+ * build_sched_domains() -> init_sched_groups_power()
+ * gets here before we've attached the domains to the
+ * runqueues.
+ *
+ * Use power_of(), which is set irrespective of domains
+ * in update_cpu_power().
+ *
+ * This avoids power/power_orig from being 0 and
+ * causing divide-by-zero issues on boot.
+ *
+ * Runtime updates will correct power_orig.
+ */
+ if (unlikely(!rq->sd)) {
+ power_orig += power_of(cpu);
+ power += power_of(cpu);
+ continue;
+ }
- power_orig += sg->sgp->power_orig;
- power += sg->sgp->power;
+ sgp = rq->sd->groups->sgp;
+ power_orig += sgp->power_orig;
+ power += sgp->power;
}
} else {
/*
{
struct rq *rq = rq_of_rt_rq(rt_rq);
+#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Change rq's cpupri only if rt_rq is the top queue.
+ */
+ if (&rq->rt != rt_rq)
+ return;
+#endif
if (rq->online && prio < prev_prio)
cpupri_set(&rq->rd->cpupri, rq->cpu, prio);
}
{
struct rq *rq = rq_of_rt_rq(rt_rq);
+#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Change rq's cpupri only if rt_rq is the top queue.
+ */
+ if (&rq->rt != rt_rq)
+ return;
+#endif
if (rq->online && rt_rq->highest_prio.curr != prev_prio)
cpupri_set(&rq->rd->cpupri, rq->cpu, rt_rq->highest_prio.curr);
}
--- /dev/null
+#include <linux/export.h>
+#include <linux/init.h>
+
+ __INITRODATA
+
+ .align 8
+ .globl VMLINUX_SYMBOL(system_certificate_list)
+VMLINUX_SYMBOL(system_certificate_list):
+__cert_list_start:
+ .incbin "kernel/x509_certificate_list"
+__cert_list_end:
+
+ .align 8
+ .globl VMLINUX_SYMBOL(system_certificate_list_size)
+VMLINUX_SYMBOL(system_certificate_list_size):
+#ifdef CONFIG_64BIT
+ .quad __cert_list_end - __cert_list_start
+#else
+ .long __cert_list_end - __cert_list_start
+#endif
--- /dev/null
+/* System trusted keyring for trusted public keys
+ *
+ * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/cred.h>
+#include <linux/err.h>
+#include <keys/asymmetric-type.h>
+#include <keys/system_keyring.h>
+#include "module-internal.h"
+
+struct key *system_trusted_keyring;
+EXPORT_SYMBOL_GPL(system_trusted_keyring);
+
+extern __initconst const u8 system_certificate_list[];
+extern __initconst const unsigned long system_certificate_list_size;
+
+/*
+ * Load the compiled-in keys
+ */
+static __init int system_trusted_keyring_init(void)
+{
+ pr_notice("Initialise system trusted keyring\n");
+
+ system_trusted_keyring =
+ keyring_alloc(".system_keyring",
+ KUIDT_INIT(0), KGIDT_INIT(0), current_cred(),
+ ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH),
+ KEY_ALLOC_NOT_IN_QUOTA, NULL);
+ if (IS_ERR(system_trusted_keyring))
+ panic("Can't allocate system trusted keyring\n");
+
+ set_bit(KEY_FLAG_TRUSTED_ONLY, &system_trusted_keyring->flags);
+ return 0;
+}
+
+/*
+ * Must be initialised before we try and load the keys into the keyring.
+ */
+device_initcall(system_trusted_keyring_init);
+
+/*
+ * Load the compiled-in list of X.509 certificates.
+ */
+static __init int load_system_certificate_list(void)
+{
+ key_ref_t key;
+ const u8 *p, *end;
+ size_t plen;
+
+ pr_notice("Loading compiled-in X.509 certificates\n");
+
+ p = system_certificate_list;
+ end = p + system_certificate_list_size;
+ while (p < end) {
+ /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
+ * than 256 bytes in size.
+ */
+ if (end - p < 4)
+ goto dodgy_cert;
+		if (p[0] != 0x30 ||
+		    p[1] != 0x82)
+ goto dodgy_cert;
+ plen = (p[2] << 8) | p[3];
+ plen += 4;
+ if (plen > end - p)
+ goto dodgy_cert;
+
+ key = key_create_or_update(make_key_ref(system_trusted_keyring, 1),
+ "asymmetric",
+ NULL,
+ p,
+ plen,
+ ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ),
+ KEY_ALLOC_NOT_IN_QUOTA |
+ KEY_ALLOC_TRUSTED);
+ if (IS_ERR(key)) {
+ pr_err("Problem loading in-kernel X.509 certificate (%ld)\n",
+ PTR_ERR(key));
+ } else {
+ pr_notice("Loaded X.509 cert '%s'\n",
+ key_ref_to_ptr(key)->description);
+ key_ref_put(key);
+ }
+ p += plen;
+ }
+
+ return 0;
+
+dodgy_cert:
+ pr_err("Problem parsing in-kernel X.509 certificate list\n");
+ return 0;
+}
+late_initcall(load_system_certificate_list);
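
To make the parsing loop concrete, here is the DER header it expects, factored into a standalone sketch (assumption: every compiled-in cert uses the two-byte long-form length, i.e. tag bytes 0x30 0x82). For example, the header 30 82 04 a6 means SEQUENCE, payload length 0x04a6 (1190 bytes), plus the 4 header bytes themselves.

/* Sketch of the per-certificate length computation used above. */
static size_t der_cert_len(const unsigned char *p, size_t remaining)
{
	size_t plen;

	if (remaining < 4 || p[0] != 0x30 || p[1] != 0x82)
		return 0;			/* not a long-form SEQUENCE */
	plen = ((size_t)p[2] << 8) | p[3];	/* payload length */
	plen += 4;				/* include the header itself */
	return plen <= remaining ? plen : 0;
}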
*/
ktime_t tick_next_period;
ktime_t tick_period;
+
+/*
+ * tick_do_timer_cpu is a timer core internal variable which holds the CPU NR
+ * which is responsible for calling do_timer(), i.e. the timekeeping stuff. This
+ * variable has two functions:
+ *
+ * 1) Prevent a thundering herd issue of a gazillion of CPUs trying to grab the
+ * timekeeping lock all at once. Only the CPU which is assigned to do the
+ * update is handling it.
+ *
+ * 2) Hand off the duty in the NOHZ idle case by setting the value to
+ *    TICK_DO_TIMER_NONE, i.e. a non-existing CPU. So the next CPU which looks
+ *    at it will take over and keep the timekeeping alive. The handover
+ * procedure also covers cpu hotplug.
+ */
int tick_do_timer_cpu __read_mostly = TICK_DO_TIMER_BOOT;
/*
/*
* NO HZ enabled ?
*/
-int tick_nohz_enabled __read_mostly = 1;
-
+static int tick_nohz_enabled __read_mostly = 1;
+int tick_nohz_active __read_mostly;
/*
* Enable / Disable tickless mode
*/
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
ktime_t now, idle;
- if (!tick_nohz_enabled)
+ if (!tick_nohz_active)
return -1;
now = ktime_get();
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
ktime_t now, iowait;
- if (!tick_nohz_enabled)
+ if (!tick_nohz_active)
return -1;
now = ktime_get();
return false;
}
- if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE))
+ if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE)) {
+ ts->sleep_length = (ktime_t) { .tv64 = NSEC_PER_SEC/HZ };
return false;
+ }
if (need_resched())
return false;
local_irq_disable();
ts = &__get_cpu_var(tick_cpu_sched);
- /*
- * set ts->inidle unconditionally. even if the system did not
- * switch to nohz mode the cpu frequency governers rely on the
- * update of the idle time accounting in tick_nohz_start_idle().
- */
ts->inidle = 1;
__tick_nohz_idle_enter(ts);
struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
ktime_t next;
- if (!tick_nohz_enabled)
+ if (!tick_nohz_active)
return;
local_irq_disable();
local_irq_enable();
return;
}
-
+ tick_nohz_active = 1;
ts->nohz_mode = NOHZ_MODE_LOWRES;
/*
}
#ifdef CONFIG_NO_HZ_COMMON
- if (tick_nohz_enabled)
+ if (tick_nohz_enabled) {
ts->nohz_mode = NOHZ_MODE_HIGHRES;
+ tick_nohz_active = 1;
+ }
#endif
}
#endif /* HIGH_RES_TIMERS */
tk->xtime_nsec -= remainder;
tk->xtime_nsec += 1ULL << tk->shift;
tk->ntp_error += remainder << tk->ntp_error_shift;
-
+ tk->ntp_error -= (1ULL << tk->shift) << tk->ntp_error_shift;
}
#else
#define old_vsyscall_fixup(tk)
/*
* The APs use this path later in boot
*/
- base = kmalloc_node(sizeof(*base),
- GFP_KERNEL | __GFP_ZERO,
- cpu_to_node(cpu));
+ base = kzalloc_node(sizeof(*base), GFP_KERNEL,
+ cpu_to_node(cpu));
if (!base)
return -ENOMEM;
static int __register_ftrace_function(struct ftrace_ops *ops)
{
- if (unlikely(ftrace_disabled))
- return -ENODEV;
-
if (FTRACE_WARN_ON(ops == &global_ops))
return -EINVAL;
{
int ret;
- if (ftrace_disabled)
- return -ENODEV;
-
if (WARN_ON(!(ops->flags & FTRACE_OPS_FL_ENABLED)))
return -EBUSY;
int cpu;
int ret = 0;
- for_each_online_cpu(cpu) {
+ for_each_possible_cpu(cpu) {
ret = ftrace_profile_init_cpu(cpu);
if (ret)
break;
static int ftrace_startup(struct ftrace_ops *ops, int command)
{
bool hash_enable = true;
+ int ret;
if (unlikely(ftrace_disabled))
return -ENODEV;
+ ret = __register_ftrace_function(ops);
+ if (ret)
+ return ret;
+
ftrace_start_up++;
command |= FTRACE_UPDATE_CALLS;
return 0;
}
-static void ftrace_shutdown(struct ftrace_ops *ops, int command)
+static int ftrace_shutdown(struct ftrace_ops *ops, int command)
{
bool hash_disable = true;
+ int ret;
if (unlikely(ftrace_disabled))
- return;
+ return -ENODEV;
+
+ ret = __unregister_ftrace_function(ops);
+ if (ret)
+ return ret;
ftrace_start_up--;
/*
}
if (!command || !ftrace_enabled)
- return;
+ return 0;
ftrace_run_update_code(command);
+ return 0;
}
static void ftrace_startup_sysctl(void)
if (i == FTRACE_FUNC_HASHSIZE)
return;
- ret = __register_ftrace_function(&trace_probe_ops);
- if (!ret)
- ret = ftrace_startup(&trace_probe_ops, 0);
+ ret = ftrace_startup(&trace_probe_ops, 0);
ftrace_probe_registered = 1;
}
static void __disable_ftrace_function_probe(void)
{
- int ret;
int i;
if (!ftrace_probe_registered)
}
/* no more funcs left */
- ret = __unregister_ftrace_function(&trace_probe_ops);
- if (!ret)
- ftrace_shutdown(&trace_probe_ops, 0);
+ ftrace_shutdown(&trace_probe_ops, 0);
ftrace_probe_registered = 0;
}
static inline int ftrace_init_dyn_debugfs(struct dentry *d_tracer) { return 0; }
static inline void ftrace_startup_enable(int command) { }
/* Keep as macros so we do not need to define the commands */
-# define ftrace_startup(ops, command) \
- ({ \
- (ops)->flags |= FTRACE_OPS_FL_ENABLED; \
- 0; \
+# define ftrace_startup(ops, command) \
+ ({ \
+ int ___ret = __register_ftrace_function(ops); \
+ if (!___ret) \
+ (ops)->flags |= FTRACE_OPS_FL_ENABLED; \
+ ___ret; \
})
-# define ftrace_shutdown(ops, command) do { } while (0)
+# define ftrace_shutdown(ops, command) __unregister_ftrace_function(ops)
+
# define ftrace_startup_sysctl() do { } while (0)
# define ftrace_shutdown_sysctl() do { } while (0)
mutex_lock(&ftrace_lock);
- ret = __register_ftrace_function(ops);
- if (!ret)
- ret = ftrace_startup(ops, 0);
+ ret = ftrace_startup(ops, 0);
mutex_unlock(&ftrace_lock);
int ret;
mutex_lock(&ftrace_lock);
- ret = __unregister_ftrace_function(ops);
- if (!ret)
- ftrace_shutdown(ops, 0);
+ ret = ftrace_shutdown(ops, 0);
mutex_unlock(&ftrace_lock);
return ret;
return NOTIFY_DONE;
}
+/* Just a place holder for function graph */
+static struct ftrace_ops fgraph_ops __read_mostly = {
+ .func = ftrace_stub,
+ .flags = FTRACE_OPS_FL_STUB | FTRACE_OPS_FL_GLOBAL |
+ FTRACE_OPS_FL_RECURSION_SAFE,
+};
+
int register_ftrace_graph(trace_func_graph_ret_t retfunc,
trace_func_graph_ent_t entryfunc)
{
ftrace_graph_return = retfunc;
ftrace_graph_entry = entryfunc;
- ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);
+ ret = ftrace_startup(&fgraph_ops, FTRACE_START_FUNC_RET);
out:
mutex_unlock(&ftrace_lock);
ftrace_graph_active--;
ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
ftrace_graph_entry = ftrace_graph_entry_stub;
- ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);
+ ftrace_shutdown(&fgraph_ops, FTRACE_STOP_FUNC_RET);
unregister_pm_notifier(&ftrace_suspend_notifier);
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
static int perf_trace_event_perm(struct ftrace_event_call *tp_event,
struct perf_event *p_event)
{
+ if (tp_event->perf_perm) {
+ int ret = tp_event->perf_perm(tp_event, p_event);
+ if (ret)
+ return ret;
+ }
+
/* The ftrace function trace is allowed only for root. */
if (ftrace_event_is_function(tp_event) &&
perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN))
int perf_trace_init(struct perf_event *p_event)
{
struct ftrace_event_call *tp_event;
- int event_id = p_event->attr.config;
+ u64 event_id = p_event->attr.config;
int ret = -EINVAL;
mutex_lock(&event_mutex);
/* Disable any running events */
__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
+ /* Access to events are within rcu_read_lock_sched() */
+ synchronize_sched();
+
down_write(&trace_event_sem);
__trace_remove_event_dirs(tr);
debugfs_remove_recursive(tr->event_dir);
if (!tr->sys_refcount_enter)
unregister_trace_sys_enter(ftrace_syscall_enter, tr);
mutex_unlock(&syscall_trace_lock);
- /*
- * Callers expect the event to be completely disabled on
- * return, so wait for current handlers to finish.
- */
- synchronize_sched();
}
static int reg_event_syscall_exit(struct ftrace_event_file *file,
if (!tr->sys_refcount_exit)
unregister_trace_sys_exit(ftrace_syscall_exit, tr);
mutex_unlock(&syscall_trace_lock);
- /*
- * Callers expect the event to be completely disabled on
- * return, so wait for current handlers to finish.
- */
- synchronize_sched();
}
static int __init init_syscall_trace(struct ftrace_event_call *call)
.owner = GLOBAL_ROOT_UID,
.group = GLOBAL_ROOT_GID,
.proc_inum = PROC_USER_INIT_INO,
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+ .persistent_keyring_register_sem =
+ __RWSEM_INITIALIZER(init_user_ns.persistent_keyring_register_sem),
+#endif
};
EXPORT_SYMBOL_GPL(init_user_ns);
set_cred_user_ns(new, ns);
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+ init_rwsem(&ns->persistent_keyring_register_sem);
+#endif
return 0;
}
do {
parent = ns->parent;
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+ key_put(ns->persistent_keyring_register);
+#endif
proc_free_inum(ns->proc_inum);
kmem_cache_free(user_ns_cachep, ns);
ns = parent;
/* I: attributes used when instantiating standard unbound pools on demand */
static struct workqueue_attrs *unbound_std_wq_attrs[NR_STD_WORKER_POOLS];
+/* I: attributes used when instantiating ordered pools on demand */
+static struct workqueue_attrs *ordered_wq_attrs[NR_STD_WORKER_POOLS];
+
struct workqueue_struct *system_wq __read_mostly;
EXPORT_SYMBOL(system_wq);
struct workqueue_struct *system_highpri_wq __read_mostly;
static inline void debug_work_deactivate(struct work_struct *work) { }
#endif
-/* allocate ID and assign it to @pool */
+/**
+ * worker_pool_assign_id - allocate ID and assign it to @pool
+ * @pool: the pool pointer of interest
+ *
+ * Returns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned
+ * successfully, -errno on failure.
+ */
static int worker_pool_assign_id(struct worker_pool *pool)
{
int ret;
lockdep_assert_held(&wq_pool_mutex);
- ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_KERNEL);
+ ret = idr_alloc(&worker_pool_idr, pool, 0, WORK_OFFQ_POOL_NONE,
+ GFP_KERNEL);
if (ret >= 0) {
pool->id = ret;
return 0;
debug_work_activate(work);
- /* if dying, only works from the same workqueue are allowed */
+ /* if draining, only works from the same workqueue are allowed */
if (unlikely(wq->flags & __WQ_DRAINING) &&
WARN_ON_ONCE(!is_chained_work(wq)))
return;
if (IS_ERR(worker->task))
goto fail;
+ set_user_nice(worker->task, pool->attrs->nice);
+
+ /* prevent userland from meddling with cpumask of workqueue workers */
+ worker->task->flags |= PF_NO_SETAFFINITY;
+
/*
* set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
* online CPUs. It'll be re-applied when any of the CPUs come up.
*/
- set_user_nice(worker->task, pool->attrs->nice);
set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
- /* prevent userland from meddling with cpumask of workqueue workers */
- worker->task->flags |= PF_NO_SETAFFINITY;
-
/*
* The caller is responsible for ensuring %POOL_DISASSOCIATED
* remains stable across this function. See the comments above the
return false;
}
-static bool __flush_work(struct work_struct *work)
-{
- struct wq_barrier barr;
-
- if (start_flush_work(work, &barr)) {
- wait_for_completion(&barr.done);
- destroy_work_on_stack(&barr.work);
- return true;
- } else {
- return false;
- }
-}
-
/**
* flush_work - wait for a work to finish executing the last queueing instance
* @work: the work to flush
*/
bool flush_work(struct work_struct *work)
{
+ struct wq_barrier barr;
+
lock_map_acquire(&work->lockdep_map);
lock_map_release(&work->lockdep_map);
- return __flush_work(work);
+ if (start_flush_work(work, &barr)) {
+ wait_for_completion(&barr.done);
+ destroy_work_on_stack(&barr.work);
+ return true;
+ } else {
+ return false;
+ }
}
EXPORT_SYMBOL_GPL(flush_work);
static int alloc_and_link_pwqs(struct workqueue_struct *wq)
{
bool highpri = wq->flags & WQ_HIGHPRI;
- int cpu;
+ int cpu, ret;
if (!(wq->flags & WQ_UNBOUND)) {
wq->cpu_pwqs = alloc_percpu(struct pool_workqueue);
mutex_unlock(&wq->mutex);
}
return 0;
+ } else if (wq->flags & __WQ_ORDERED) {
+ ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
+ /* there should only be single pwq for ordering guarantee */
+ WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node ||
+ wq->pwqs.prev != &wq->dfl_pwq->pwqs_node),
+ "ordering guarantee broken for workqueue %s\n", wq->name);
+ return ret;
} else {
return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
}
INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
schedule_work_on(cpu, &wfc.work);
-
- /*
- * The work item is on-stack and can't lead to deadlock through
- * flushing. Use __flush_work() to avoid spurious lockdep warnings
- * when work_on_cpu()s are nested.
- */
- __flush_work(&wfc.work);
-
+ flush_work(&wfc.work);
return wfc.ret;
}
EXPORT_SYMBOL_GPL(work_on_cpu);
int std_nice[NR_STD_WORKER_POOLS] = { 0, HIGHPRI_NICE_LEVEL };
int i, cpu;
- /* make sure we have enough bits for OFFQ pool ID */
- BUILD_BUG_ON((1LU << (BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT)) <
- WORK_CPU_END * NR_STD_WORKER_POOLS);
-
WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));
pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
}
}
- /* create default unbound wq attrs */
+ /* create default unbound and ordered wq attrs */
for (i = 0; i < NR_STD_WORKER_POOLS; i++) {
struct workqueue_attrs *attrs;
BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
attrs->nice = std_nice[i];
unbound_std_wq_attrs[i] = attrs;
+
+ /*
+ * An ordered wq should have only one pwq as ordering is
+ * guaranteed by max_active which is enforced by pwqs.
+ * Turn off NUMA so that dfl_pwq is used for all nodes.
+ */
+ BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
+ attrs->nice = std_nice[i];
+ attrs->no_numa = true;
+ ordered_wq_attrs[i] = attrs;
}
system_wq = alloc_workqueue("events", 0, 0);
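
A usage sketch of what this buys callers (assumption: the long-standing alloc_ordered_workqueue() helper maps onto the unbound __WQ_ORDERED path wired up above): work items queued on such a queue execute strictly one at a time, in queueing order, on any CPU.

/* Hypothetical caller of an ordered workqueue (illustrative names). */
#include <linux/workqueue.h>

static struct workqueue_struct *my_ordered_wq;

static int __init my_wq_init(void)
{
	my_ordered_wq = alloc_ordered_workqueue("my_ordered", 0);
	return my_ordered_wq ? 0 : -ENOMEM;
}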
config BTREE
boolean
+config ASSOCIATIVE_ARRAY
+ bool
+ help
+ Generic associative array. Can be searched and iterated over whilst
+ it is being modified. It is also reasonably quick to search and
+ modify. The algorithms are non-recursive, and the trees are highly
+ capacious.
+
+ See:
+
+ Documentation/assoc_array.txt
+
+ for more information.
+
config HAS_IOMEM
boolean
depends on !NO_IOMEM
sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \
proportions.o flex_proportions.o prio_heap.o ratelimit.o show_mem.o \
is_single_threaded.o plist.o decompress.o kobject_uevent.o \
- earlycpio.o percpu-refcount.o percpu_ida.o
+ earlycpio.o
obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
lib-$(CONFIG_MMU) += ioremap.o
bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \
gcd.o lcm.o list_sort.o uuid.o flex_array.o iovec.o clz_ctz.o \
bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \
- percpu_ida.o
+ percpu-refcount.o percpu_ida.o
obj-y += string_helpers.o
obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o
obj-y += kstrtox.o
obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
obj-$(CONFIG_BTREE) += btree.o
+obj-$(CONFIG_ASSOCIATIVE_ARRAY) += assoc_array.o
obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
obj-$(CONFIG_DEBUG_LIST) += list_debug.o
obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o
--- /dev/null
+/* Generic associative array implementation.
+ *
+ * See Documentation/assoc_array.txt for information.
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+//#define DEBUG
+#include <linux/slab.h>
+#include <linux/err.h>
+#include <linux/assoc_array_priv.h>
+
+/*
+ * Iterate over an associative array. The caller must hold the RCU read lock
+ * or better.
+ */
+static int assoc_array_subtree_iterate(const struct assoc_array_ptr *root,
+ const struct assoc_array_ptr *stop,
+ int (*iterator)(const void *leaf,
+ void *iterator_data),
+ void *iterator_data)
+{
+ const struct assoc_array_shortcut *shortcut;
+ const struct assoc_array_node *node;
+ const struct assoc_array_ptr *cursor, *ptr, *parent;
+ unsigned long has_meta;
+ int slot, ret;
+
+ cursor = root;
+
+begin_node:
+ if (assoc_array_ptr_is_shortcut(cursor)) {
+ /* Descend through a shortcut */
+ shortcut = assoc_array_ptr_to_shortcut(cursor);
+ smp_read_barrier_depends();
+ cursor = ACCESS_ONCE(shortcut->next_node);
+ }
+
+ node = assoc_array_ptr_to_node(cursor);
+ smp_read_barrier_depends();
+ slot = 0;
+
+ /* We perform two passes of each node.
+ *
+ * The first pass does all the leaves in this node. This means we
+ * don't miss any leaves if the node is split up by insertion whilst
+ * we're iterating over the branches rooted here (we may, however, see
+ * some leaves twice).
+ */
+ has_meta = 0;
+ for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = ACCESS_ONCE(node->slots[slot]);
+ has_meta |= (unsigned long)ptr;
+ if (ptr && assoc_array_ptr_is_leaf(ptr)) {
+ /* We need a barrier between the read of the pointer
+ * and dereferencing the pointer - but only if we are
+ * actually going to dereference it.
+ */
+ smp_read_barrier_depends();
+
+ /* Invoke the callback */
+ ret = iterator(assoc_array_ptr_to_leaf(ptr),
+ iterator_data);
+ if (ret)
+ return ret;
+ }
+ }
+
+ /* The second pass attends to all the metadata pointers. If we follow
+ * one of these we may find that we don't come back here, but rather go
+ * back to a replacement node with the leaves in a different layout.
+ *
+ * We are guaranteed to make progress, however, as the slot number for
+ * a particular portion of the key space cannot change - and we
+ * continue at the back pointer + 1.
+ */
+ if (!(has_meta & ASSOC_ARRAY_PTR_META_TYPE))
+ goto finished_node;
+ slot = 0;
+
+continue_node:
+ node = assoc_array_ptr_to_node(cursor);
+ smp_read_barrier_depends();
+
+ for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = ACCESS_ONCE(node->slots[slot]);
+ if (assoc_array_ptr_is_meta(ptr)) {
+ cursor = ptr;
+ goto begin_node;
+ }
+ }
+
+finished_node:
+ /* Move up to the parent (may need to skip back over a shortcut) */
+ parent = ACCESS_ONCE(node->back_pointer);
+ slot = node->parent_slot;
+ if (parent == stop)
+ return 0;
+
+ if (assoc_array_ptr_is_shortcut(parent)) {
+ shortcut = assoc_array_ptr_to_shortcut(parent);
+ smp_read_barrier_depends();
+ cursor = parent;
+ parent = ACCESS_ONCE(shortcut->back_pointer);
+ slot = shortcut->parent_slot;
+ if (parent == stop)
+ return 0;
+ }
+
+ /* Ascend to next slot in parent node */
+ cursor = parent;
+ slot++;
+ goto continue_node;
+}
+
+/**
+ * assoc_array_iterate - Pass all objects in the array to a callback
+ * @array: The array to iterate over.
+ * @iterator: The callback function.
+ * @iterator_data: Private data for the callback function.
+ *
+ * Iterate over all the objects in an associative array. Each one will be
+ * presented to the iterator function.
+ *
+ * If the array is being modified concurrently with the iteration then it is
+ * possible that some objects in the array will be passed to the iterator
+ * callback more than once - though every object should be passed at least
+ * once. If this is undesirable then the caller must lock against modification
+ * for the duration of this function.
+ *
+ * The function will return 0 if no objects were in the array or else it will
+ * return the result of the last iterator function called. Iteration stops
+ * immediately if any call to the iteration function results in a non-zero
+ * return.
+ *
+ * The caller should hold the RCU read lock or better if concurrent
+ * modification is possible.
+ */
+int assoc_array_iterate(const struct assoc_array *array,
+ int (*iterator)(const void *object,
+ void *iterator_data),
+ void *iterator_data)
+{
+ struct assoc_array_ptr *root = ACCESS_ONCE(array->root);
+
+ if (!root)
+ return 0;
+ return assoc_array_subtree_iterate(root, NULL, iterator, iterator_data);
+}
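As a usage sketch (my_array and the callback are hypothetical, not part of this file), a caller could count the objects in an array under the RCU read lock:

    /* Hypothetical iterator: counts objects; a non-zero return stops the walk */
    static int count_object(const void *object, void *iterator_data)
    {
            unsigned long *count = iterator_data;

            (*count)++;
            return 0;
    }

    unsigned long nr = 0;

    rcu_read_lock();
    assoc_array_iterate(&my_array, count_object, &nr);
    rcu_read_unlock();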
+
+enum assoc_array_walk_status {
+ assoc_array_walk_tree_empty,
+ assoc_array_walk_found_terminal_node,
+ assoc_array_walk_found_wrong_shortcut,
+};
+
+struct assoc_array_walk_result {
+ struct {
+ struct assoc_array_node *node; /* Node in which leaf might be found */
+ int level;
+ int slot;
+ } terminal_node;
+ struct {
+ struct assoc_array_shortcut *shortcut;
+ int level;
+ int sc_level;
+ unsigned long sc_segments;
+ unsigned long dissimilarity;
+ } wrong_shortcut;
+};
+
+/*
+ * Navigate through the internal tree looking for the closest node to the key.
+ */
+static enum assoc_array_walk_status
+assoc_array_walk(const struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key,
+ struct assoc_array_walk_result *result)
+{
+ struct assoc_array_shortcut *shortcut;
+ struct assoc_array_node *node;
+ struct assoc_array_ptr *cursor, *ptr;
+ unsigned long sc_segments, dissimilarity;
+ unsigned long segments;
+ int level, sc_level, next_sc_level;
+ int slot;
+
+ pr_devel("-->%s()\n", __func__);
+
+ cursor = ACCESS_ONCE(array->root);
+ if (!cursor)
+ return assoc_array_walk_tree_empty;
+
+ level = 0;
+
+ /* Use segments from the key for the new leaf to navigate through the
+ * internal tree, skipping through nodes and shortcuts that are on
+ * route to the destination. Eventually we'll come to a slot that is
+ * either empty or contains a leaf at which point we've found a node in
+ * which the leaf we're looking for might be found or into which it
+ * should be inserted.
+ */
+jumped:
+ segments = ops->get_key_chunk(index_key, level);
+ pr_devel("segments[%d]: %lx\n", level, segments);
+
+ if (assoc_array_ptr_is_shortcut(cursor))
+ goto follow_shortcut;
+
+consider_node:
+ node = assoc_array_ptr_to_node(cursor);
+ smp_read_barrier_depends();
+
+ slot = segments >> (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
+ slot &= ASSOC_ARRAY_FAN_MASK;
+ ptr = ACCESS_ONCE(node->slots[slot]);
+
+ pr_devel("consider slot %x [ix=%d type=%lu]\n",
+ slot, level, (unsigned long)ptr & 3);
+
+ if (!assoc_array_ptr_is_meta(ptr)) {
+ /* The node doesn't have a node/shortcut pointer in the slot
+ * corresponding to the index key that we have to follow.
+ */
+ result->terminal_node.node = node;
+ result->terminal_node.level = level;
+ result->terminal_node.slot = slot;
+ pr_devel("<--%s() = terminal_node\n", __func__);
+ return assoc_array_walk_found_terminal_node;
+ }
+
+ if (assoc_array_ptr_is_node(ptr)) {
+ /* There is a pointer to a node in the slot corresponding to
+ * this index key segment, so we need to follow it.
+ */
+ cursor = ptr;
+ level += ASSOC_ARRAY_LEVEL_STEP;
+ if ((level & ASSOC_ARRAY_KEY_CHUNK_MASK) != 0)
+ goto consider_node;
+ goto jumped;
+ }
+
+ /* There is a shortcut in the slot corresponding to the index key
+ * segment. We follow the shortcut if its partial index key matches
+ * this leaf's. Otherwise we need to split the shortcut.
+ */
+ cursor = ptr;
+follow_shortcut:
+ shortcut = assoc_array_ptr_to_shortcut(cursor);
+ smp_read_barrier_depends();
+ pr_devel("shortcut to %d\n", shortcut->skip_to_level);
+ sc_level = level + ASSOC_ARRAY_LEVEL_STEP;
+ BUG_ON(sc_level > shortcut->skip_to_level);
+
+ do {
+ /* Check the leaf against the shortcut's index key a word at a
+ * time, trimming the final word (the shortcut stores the index
+ * key completely from the root to the shortcut's target).
+ */
+ if ((sc_level & ASSOC_ARRAY_KEY_CHUNK_MASK) == 0)
+ segments = ops->get_key_chunk(index_key, sc_level);
+
+ sc_segments = shortcut->index_key[sc_level >> ASSOC_ARRAY_KEY_CHUNK_SHIFT];
+ dissimilarity = segments ^ sc_segments;
+
+ if (round_up(sc_level, ASSOC_ARRAY_KEY_CHUNK_SIZE) > shortcut->skip_to_level) {
+ /* Trim segments that are beyond the shortcut */
+ int shift = shortcut->skip_to_level & ASSOC_ARRAY_KEY_CHUNK_MASK;
+ dissimilarity &= ~(ULONG_MAX << shift);
+ next_sc_level = shortcut->skip_to_level;
+ } else {
+ next_sc_level = sc_level + ASSOC_ARRAY_KEY_CHUNK_SIZE;
+ next_sc_level = round_down(next_sc_level, ASSOC_ARRAY_KEY_CHUNK_SIZE);
+ }
+
+ if (dissimilarity != 0) {
+ /* This shortcut points elsewhere */
+ result->wrong_shortcut.shortcut = shortcut;
+ result->wrong_shortcut.level = level;
+ result->wrong_shortcut.sc_level = sc_level;
+ result->wrong_shortcut.sc_segments = sc_segments;
+ result->wrong_shortcut.dissimilarity = dissimilarity;
+ return assoc_array_walk_found_wrong_shortcut;
+ }
+
+ sc_level = next_sc_level;
+ } while (sc_level < shortcut->skip_to_level);
+
+ /* The shortcut matches the leaf's index to this point. */
+ cursor = ACCESS_ONCE(shortcut->next_node);
+ if (((level ^ sc_level) & ~ASSOC_ARRAY_KEY_CHUNK_MASK) != 0) {
+ level = sc_level;
+ goto jumped;
+ } else {
+ level = sc_level;
+ goto consider_node;
+ }
+}
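The walk is driven entirely by the caller-supplied ops table. As a rough sketch of what such a table might look like for objects keyed by a single unsigned long (struct my_obj and the my_* names are hypothetical, assuming ASSOC_ARRAY_KEY_CHUNK_SIZE == BITS_PER_LONG):

    struct my_obj {
            unsigned long   key;
            /* ... payload ... */
    };

    static unsigned long my_get_key_chunk(const void *index_key, int level)
    {
            /* Single-word keys: chunk 0 is the whole key, the rest is zero */
            return level == 0 ? *(const unsigned long *)index_key : 0;
    }

    static unsigned long my_get_object_key_chunk(const void *object, int level)
    {
            return level == 0 ? ((const struct my_obj *)object)->key : 0;
    }

    static bool my_compare_object(const void *object, const void *index_key)
    {
            return ((const struct my_obj *)object)->key ==
                    *(const unsigned long *)index_key;
    }

    static int my_diff_objects(const void *object, const void *index_key)
    {
            /* Bit position of the first difference, or -1 if identical */
            unsigned long diff = ((const struct my_obj *)object)->key ^
                                 *(const unsigned long *)index_key;

            return diff ? __ffs(diff) : -1;
    }

    static void my_free_object(void *object)
    {
            kfree(object);
    }

    static const struct assoc_array_ops my_ops = {
            .get_key_chunk          = my_get_key_chunk,
            .get_object_key_chunk   = my_get_object_key_chunk,
            .compare_object         = my_compare_object,
            .diff_objects           = my_diff_objects,
            .free_object            = my_free_object,
    };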
+
+/**
+ * assoc_array_find - Find an object by index key
+ * @array: The associative array to search.
+ * @ops: The operations to use.
+ * @index_key: The key to the object.
+ *
+ * Find an object in an associative array by walking through the internal tree
+ * to the node that should contain the object and then searching the leaves
+ * there. NULL is returned if the requested object was not found in the array.
+ *
+ * The caller must hold the RCU read lock or better.
+ */
+void *assoc_array_find(const struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key)
+{
+ struct assoc_array_walk_result result;
+ const struct assoc_array_node *node;
+ const struct assoc_array_ptr *ptr;
+ const void *leaf;
+ int slot;
+
+ if (assoc_array_walk(array, ops, index_key, &result) !=
+ assoc_array_walk_found_terminal_node)
+ return NULL;
+
+ node = result.terminal_node.node;
+ smp_read_barrier_depends();
+
+	/* If the target key is available to us, it has to be pointed to by
+ * the terminal node.
+ */
+ for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = ACCESS_ONCE(node->slots[slot]);
+ if (ptr && assoc_array_ptr_is_leaf(ptr)) {
+ /* We need a barrier between the read of the pointer
+ * and dereferencing the pointer - but only if we are
+ * actually going to dereference it.
+ */
+ leaf = assoc_array_ptr_to_leaf(ptr);
+ smp_read_barrier_depends();
+ if (ops->compare_object(leaf, index_key))
+ return (void *)leaf;
+ }
+ }
+
+ return NULL;
+}
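A lookup sketch using the hypothetical ops table above:

    unsigned long key = 0x1234;
    struct my_obj *obj;

    rcu_read_lock();
    obj = assoc_array_find(&my_array, &my_ops, &key);
    /* take a reference here if obj must outlive the RCU read lock */
    rcu_read_unlock();
    if (!obj)
            return -ENOENT;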
+
+/*
+ * Destructively iterate over an associative array. The caller must prevent
+ * other simultaneous accesses.
+ */
+static void assoc_array_destroy_subtree(struct assoc_array_ptr *root,
+ const struct assoc_array_ops *ops)
+{
+ struct assoc_array_shortcut *shortcut;
+ struct assoc_array_node *node;
+ struct assoc_array_ptr *cursor, *parent = NULL;
+ int slot = -1;
+
+ pr_devel("-->%s()\n", __func__);
+
+ cursor = root;
+ if (!cursor) {
+ pr_devel("empty\n");
+ return;
+ }
+
+move_to_meta:
+ if (assoc_array_ptr_is_shortcut(cursor)) {
+ /* Descend through a shortcut */
+ pr_devel("[%d] shortcut\n", slot);
+ BUG_ON(!assoc_array_ptr_is_shortcut(cursor));
+ shortcut = assoc_array_ptr_to_shortcut(cursor);
+ BUG_ON(shortcut->back_pointer != parent);
+ BUG_ON(slot != -1 && shortcut->parent_slot != slot);
+ parent = cursor;
+ cursor = shortcut->next_node;
+ slot = -1;
+ BUG_ON(!assoc_array_ptr_is_node(cursor));
+ }
+
+ pr_devel("[%d] node\n", slot);
+ node = assoc_array_ptr_to_node(cursor);
+ BUG_ON(node->back_pointer != parent);
+ BUG_ON(slot != -1 && node->parent_slot != slot);
+ slot = 0;
+
+continue_node:
+ pr_devel("Node %p [back=%p]\n", node, node->back_pointer);
+ for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ struct assoc_array_ptr *ptr = node->slots[slot];
+ if (!ptr)
+ continue;
+ if (assoc_array_ptr_is_meta(ptr)) {
+ parent = cursor;
+ cursor = ptr;
+ goto move_to_meta;
+ }
+
+ if (ops) {
+ pr_devel("[%d] free leaf\n", slot);
+ ops->free_object(assoc_array_ptr_to_leaf(ptr));
+ }
+ }
+
+ parent = node->back_pointer;
+ slot = node->parent_slot;
+ pr_devel("free node\n");
+ kfree(node);
+ if (!parent)
+ return; /* Done */
+
+ /* Move back up to the parent (may need to free a shortcut on
+ * the way up) */
+ if (assoc_array_ptr_is_shortcut(parent)) {
+ shortcut = assoc_array_ptr_to_shortcut(parent);
+ BUG_ON(shortcut->next_node != cursor);
+ cursor = parent;
+ parent = shortcut->back_pointer;
+ slot = shortcut->parent_slot;
+ pr_devel("free shortcut\n");
+ kfree(shortcut);
+ if (!parent)
+ return;
+
+ BUG_ON(!assoc_array_ptr_is_node(parent));
+ }
+
+ /* Ascend to next slot in parent node */
+ pr_devel("ascend to %p[%d]\n", parent, slot);
+ cursor = parent;
+ node = assoc_array_ptr_to_node(cursor);
+ slot++;
+ goto continue_node;
+}
+
+/**
+ * assoc_array_destroy - Destroy an associative array
+ * @array: The array to destroy.
+ * @ops: The operations to use.
+ *
+ * Discard all metadata and free all objects in an associative array. The
+ * array will be empty and ready to use again upon completion. This function
+ * cannot fail.
+ *
+ * The caller must prevent all other accesses whilst this takes place as no
+ * attempt is made to adjust pointers gracefully to permit RCU readlock-holding
+ * accesses to continue. On the other hand, no memory allocation is required.
+ */
+void assoc_array_destroy(struct assoc_array *array,
+ const struct assoc_array_ops *ops)
+{
+ assoc_array_destroy_subtree(array->root, ops);
+ array->root = NULL;
+}
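Note that, unlike the edit-script operations, this observes no RCU grace period, so a sketch of safe use must exclude readers as well as writers (my_sem is a hypothetical guard that readers also take):

    down_write(&my_sem);
    assoc_array_destroy(&my_array, &my_ops);  /* frees the objects too */
    up_write(&my_sem);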
+
+/*
+ * Handle insertion into an empty tree.
+ */
+static bool assoc_array_insert_in_empty_tree(struct assoc_array_edit *edit)
+{
+ struct assoc_array_node *new_n0;
+
+ pr_devel("-->%s()\n", __func__);
+
+ new_n0 = kzalloc(sizeof(struct assoc_array_node), GFP_KERNEL);
+ if (!new_n0)
+ return false;
+
+ edit->new_meta[0] = assoc_array_node_to_ptr(new_n0);
+ edit->leaf_p = &new_n0->slots[0];
+ edit->adjust_count_on = new_n0;
+ edit->set[0].ptr = &edit->array->root;
+ edit->set[0].to = assoc_array_node_to_ptr(new_n0);
+
+ pr_devel("<--%s() = ok [no root]\n", __func__);
+ return true;
+}
+
+/*
+ * Handle insertion into a terminal node.
+ */
+static bool assoc_array_insert_into_terminal_node(struct assoc_array_edit *edit,
+ const struct assoc_array_ops *ops,
+ const void *index_key,
+ struct assoc_array_walk_result *result)
+{
+ struct assoc_array_shortcut *shortcut, *new_s0;
+ struct assoc_array_node *node, *new_n0, *new_n1, *side;
+ struct assoc_array_ptr *ptr;
+ unsigned long dissimilarity, base_seg, blank;
+ size_t keylen;
+ bool have_meta;
+ int level, diff;
+ int slot, next_slot, free_slot, i, j;
+
+ node = result->terminal_node.node;
+ level = result->terminal_node.level;
+ edit->segment_cache[ASSOC_ARRAY_FAN_OUT] = result->terminal_node.slot;
+
+ pr_devel("-->%s()\n", __func__);
+
+ /* We arrived at a node which doesn't have an onward node or shortcut
+ * pointer that we have to follow. This means that (a) the leaf we
+ * want must go here (either by insertion or replacement) or (b) we
+ * need to split this node and insert in one of the fragments.
+ */
+ free_slot = -1;
+
+ /* Firstly, we have to check the leaves in this node to see if there's
+ * a matching one we should replace in place.
+ */
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ ptr = node->slots[i];
+ if (!ptr) {
+ free_slot = i;
+ continue;
+ }
+ if (ops->compare_object(assoc_array_ptr_to_leaf(ptr), index_key)) {
+ pr_devel("replace in slot %d\n", i);
+ edit->leaf_p = &node->slots[i];
+ edit->dead_leaf = node->slots[i];
+ pr_devel("<--%s() = ok [replace]\n", __func__);
+ return true;
+ }
+ }
+
+ /* If there is a free slot in this node then we can just insert the
+ * leaf here.
+ */
+ if (free_slot >= 0) {
+ pr_devel("insert in free slot %d\n", free_slot);
+ edit->leaf_p = &node->slots[free_slot];
+ edit->adjust_count_on = node;
+ pr_devel("<--%s() = ok [insert]\n", __func__);
+ return true;
+ }
+
+ /* The node has no spare slots - so we're either going to have to split
+ * it or insert another node before it.
+ *
+ * Whatever, we're going to need at least two new nodes - so allocate
+ * those now. We may also need a new shortcut, but we deal with that
+ * when we need it.
+ */
+ new_n0 = kzalloc(sizeof(struct assoc_array_node), GFP_KERNEL);
+ if (!new_n0)
+ return false;
+ edit->new_meta[0] = assoc_array_node_to_ptr(new_n0);
+ new_n1 = kzalloc(sizeof(struct assoc_array_node), GFP_KERNEL);
+ if (!new_n1)
+ return false;
+ edit->new_meta[1] = assoc_array_node_to_ptr(new_n1);
+
+ /* We need to find out how similar the leaves are. */
+ pr_devel("no spare slots\n");
+ have_meta = false;
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ ptr = node->slots[i];
+ if (assoc_array_ptr_is_meta(ptr)) {
+ edit->segment_cache[i] = 0xff;
+ have_meta = true;
+ continue;
+ }
+ base_seg = ops->get_object_key_chunk(
+ assoc_array_ptr_to_leaf(ptr), level);
+ base_seg >>= level & ASSOC_ARRAY_KEY_CHUNK_MASK;
+ edit->segment_cache[i] = base_seg & ASSOC_ARRAY_FAN_MASK;
+ }
+
+ if (have_meta) {
+ pr_devel("have meta\n");
+ goto split_node;
+ }
+
+ /* The node contains only leaves */
+ dissimilarity = 0;
+ base_seg = edit->segment_cache[0];
+ for (i = 1; i < ASSOC_ARRAY_FAN_OUT; i++)
+ dissimilarity |= edit->segment_cache[i] ^ base_seg;
+
+ pr_devel("only leaves; dissimilarity=%lx\n", dissimilarity);
+
+ if ((dissimilarity & ASSOC_ARRAY_FAN_MASK) == 0) {
+ /* The old leaves all cluster in the same slot. We will need
+		 * to insert a shortcut if the new leaf wants to cluster with them.
+ */
+ if ((edit->segment_cache[ASSOC_ARRAY_FAN_OUT] ^ base_seg) == 0)
+ goto all_leaves_cluster_together;
+
+ /* Otherwise we can just insert a new node ahead of the old
+ * one.
+ */
+ goto present_leaves_cluster_but_not_new_leaf;
+ }
+
+split_node:
+ pr_devel("split node\n");
+
+ /* We need to split the current node; we know that the node doesn't
+ * simply contain a full set of leaves that cluster together (it
+ * contains meta pointers and/or non-clustering leaves).
+ *
+ * We need to expel at least two leaves out of a set consisting of the
+ * leaves in the node and the new leaf.
+ *
+ * We need a new node (n0) to replace the current one and a new node to
+	 * take the expelled leaves (n1).
+ */
+ edit->set[0].to = assoc_array_node_to_ptr(new_n0);
+ new_n0->back_pointer = node->back_pointer;
+ new_n0->parent_slot = node->parent_slot;
+ new_n1->back_pointer = assoc_array_node_to_ptr(new_n0);
+ new_n1->parent_slot = -1; /* Need to calculate this */
+
+do_split_node:
+ pr_devel("do_split_node\n");
+
+ new_n0->nr_leaves_on_branch = node->nr_leaves_on_branch;
+ new_n1->nr_leaves_on_branch = 0;
+
+ /* Begin by finding two matching leaves. There have to be at least two
+ * that match - even if there are meta pointers - because any leaf that
+ * would match a slot with a meta pointer in it must be somewhere
+ * behind that meta pointer and cannot be here. Further, given N
+ * remaining leaf slots, we now have N+1 leaves to go in them.
+ */
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ slot = edit->segment_cache[i];
+ if (slot != 0xff)
+ for (j = i + 1; j < ASSOC_ARRAY_FAN_OUT + 1; j++)
+ if (edit->segment_cache[j] == slot)
+ goto found_slot_for_multiple_occupancy;
+ }
+found_slot_for_multiple_occupancy:
+ pr_devel("same slot: %x %x [%02x]\n", i, j, slot);
+ BUG_ON(i >= ASSOC_ARRAY_FAN_OUT);
+ BUG_ON(j >= ASSOC_ARRAY_FAN_OUT + 1);
+ BUG_ON(slot >= ASSOC_ARRAY_FAN_OUT);
+
+ new_n1->parent_slot = slot;
+
+ /* Metadata pointers cannot change slot */
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++)
+ if (assoc_array_ptr_is_meta(node->slots[i]))
+ new_n0->slots[i] = node->slots[i];
+ else
+ new_n0->slots[i] = NULL;
+ BUG_ON(new_n0->slots[slot] != NULL);
+ new_n0->slots[slot] = assoc_array_node_to_ptr(new_n1);
+
+ /* Filter the leaf pointers between the new nodes */
+ free_slot = -1;
+ next_slot = 0;
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ if (assoc_array_ptr_is_meta(node->slots[i]))
+ continue;
+ if (edit->segment_cache[i] == slot) {
+ new_n1->slots[next_slot++] = node->slots[i];
+ new_n1->nr_leaves_on_branch++;
+ } else {
+ do {
+ free_slot++;
+ } while (new_n0->slots[free_slot] != NULL);
+ new_n0->slots[free_slot] = node->slots[i];
+ }
+ }
+
+ pr_devel("filtered: f=%x n=%x\n", free_slot, next_slot);
+
+ if (edit->segment_cache[ASSOC_ARRAY_FAN_OUT] != slot) {
+ do {
+ free_slot++;
+ } while (new_n0->slots[free_slot] != NULL);
+ edit->leaf_p = &new_n0->slots[free_slot];
+ edit->adjust_count_on = new_n0;
+ } else {
+ edit->leaf_p = &new_n1->slots[next_slot++];
+ edit->adjust_count_on = new_n1;
+ }
+
+ BUG_ON(next_slot <= 1);
+
+ edit->set_backpointers_to = assoc_array_node_to_ptr(new_n0);
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ if (edit->segment_cache[i] == 0xff) {
+ ptr = node->slots[i];
+ BUG_ON(assoc_array_ptr_is_leaf(ptr));
+ if (assoc_array_ptr_is_node(ptr)) {
+ side = assoc_array_ptr_to_node(ptr);
+ edit->set_backpointers[i] = &side->back_pointer;
+ } else {
+ shortcut = assoc_array_ptr_to_shortcut(ptr);
+ edit->set_backpointers[i] = &shortcut->back_pointer;
+ }
+ }
+ }
+
+ ptr = node->back_pointer;
+ if (!ptr)
+ edit->set[0].ptr = &edit->array->root;
+ else if (assoc_array_ptr_is_node(ptr))
+ edit->set[0].ptr = &assoc_array_ptr_to_node(ptr)->slots[node->parent_slot];
+ else
+ edit->set[0].ptr = &assoc_array_ptr_to_shortcut(ptr)->next_node;
+ edit->excised_meta[0] = assoc_array_node_to_ptr(node);
+ pr_devel("<--%s() = ok [split node]\n", __func__);
+ return true;
+
+present_leaves_cluster_but_not_new_leaf:
+ /* All the old leaves cluster in the same slot, but the new leaf wants
+ * to go into a different slot, so we create a new node to hold the new
+ * leaf and a pointer to a new node holding all the old leaves.
+ */
+ pr_devel("present leaves cluster but not new leaf\n");
+
+ new_n0->back_pointer = node->back_pointer;
+ new_n0->parent_slot = node->parent_slot;
+ new_n0->nr_leaves_on_branch = node->nr_leaves_on_branch;
+ new_n1->back_pointer = assoc_array_node_to_ptr(new_n0);
+ new_n1->parent_slot = edit->segment_cache[0];
+ new_n1->nr_leaves_on_branch = node->nr_leaves_on_branch;
+ edit->adjust_count_on = new_n0;
+
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++)
+ new_n1->slots[i] = node->slots[i];
+
+	new_n0->slots[edit->segment_cache[0]] = assoc_array_node_to_ptr(new_n1);
+ edit->leaf_p = &new_n0->slots[edit->segment_cache[ASSOC_ARRAY_FAN_OUT]];
+
+ edit->set[0].ptr = &assoc_array_ptr_to_node(node->back_pointer)->slots[node->parent_slot];
+ edit->set[0].to = assoc_array_node_to_ptr(new_n0);
+ edit->excised_meta[0] = assoc_array_node_to_ptr(node);
+ pr_devel("<--%s() = ok [insert node before]\n", __func__);
+ return true;
+
+all_leaves_cluster_together:
+ /* All the leaves, new and old, want to cluster together in this node
+ * in the same slot, so we have to replace this node with a shortcut to
+ * skip over the identical parts of the key and then place a pair of
+ * nodes, one inside the other, at the end of the shortcut and
+ * distribute the keys between them.
+ *
+ * Firstly we need to work out where the leaves start diverging as a
+ * bit position into their keys so that we know how big the shortcut
+ * needs to be.
+ *
+ * We only need to make a single pass of N of the N+1 leaves because if
+ * any keys differ between themselves at bit X then at least one of
+ * them must also differ with the base key at bit X or before.
+ */
+ pr_devel("all leaves cluster together\n");
+ diff = INT_MAX;
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ int x = ops->diff_objects(assoc_array_ptr_to_leaf(node->slots[i]),
+ index_key);
+ if (x < diff) {
+ BUG_ON(x < 0);
+ diff = x;
+ }
+ }
+ BUG_ON(diff == INT_MAX);
+ BUG_ON(diff < level + ASSOC_ARRAY_LEVEL_STEP);
+
+ keylen = round_up(diff, ASSOC_ARRAY_KEY_CHUNK_SIZE);
+ keylen >>= ASSOC_ARRAY_KEY_CHUNK_SHIFT;
+
+ new_s0 = kzalloc(sizeof(struct assoc_array_shortcut) +
+ keylen * sizeof(unsigned long), GFP_KERNEL);
+ if (!new_s0)
+ return false;
+ edit->new_meta[2] = assoc_array_shortcut_to_ptr(new_s0);
+
+ edit->set[0].to = assoc_array_shortcut_to_ptr(new_s0);
+ new_s0->back_pointer = node->back_pointer;
+ new_s0->parent_slot = node->parent_slot;
+ new_s0->next_node = assoc_array_node_to_ptr(new_n0);
+ new_n0->back_pointer = assoc_array_shortcut_to_ptr(new_s0);
+ new_n0->parent_slot = 0;
+ new_n1->back_pointer = assoc_array_node_to_ptr(new_n0);
+ new_n1->parent_slot = -1; /* Need to calculate this */
+
+ new_s0->skip_to_level = level = diff & ~ASSOC_ARRAY_LEVEL_STEP_MASK;
+ pr_devel("skip_to_level = %d [diff %d]\n", level, diff);
+ BUG_ON(level <= 0);
+
+ for (i = 0; i < keylen; i++)
+ new_s0->index_key[i] =
+ ops->get_key_chunk(index_key, i * ASSOC_ARRAY_KEY_CHUNK_SIZE);
+
+ blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
+ pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank);
+ new_s0->index_key[keylen - 1] &= ~blank;
+
+ /* This now reduces to a node splitting exercise for which we'll need
+ * to regenerate the disparity table.
+ */
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ ptr = node->slots[i];
+ base_seg = ops->get_object_key_chunk(assoc_array_ptr_to_leaf(ptr),
+ level);
+ base_seg >>= level & ASSOC_ARRAY_KEY_CHUNK_MASK;
+ edit->segment_cache[i] = base_seg & ASSOC_ARRAY_FAN_MASK;
+ }
+
+ base_seg = ops->get_key_chunk(index_key, level);
+ base_seg >>= level & ASSOC_ARRAY_KEY_CHUNK_MASK;
+ edit->segment_cache[ASSOC_ARRAY_FAN_OUT] = base_seg & ASSOC_ARRAY_FAN_MASK;
+ goto do_split_node;
+}
+
+/*
+ * Handle insertion into the middle of a shortcut.
+ */
+static bool assoc_array_insert_mid_shortcut(struct assoc_array_edit *edit,
+ const struct assoc_array_ops *ops,
+ struct assoc_array_walk_result *result)
+{
+ struct assoc_array_shortcut *shortcut, *new_s0, *new_s1;
+ struct assoc_array_node *node, *new_n0, *side;
+ unsigned long sc_segments, dissimilarity, blank;
+ size_t keylen;
+ int level, sc_level, diff;
+ int sc_slot;
+
+ shortcut = result->wrong_shortcut.shortcut;
+ level = result->wrong_shortcut.level;
+ sc_level = result->wrong_shortcut.sc_level;
+ sc_segments = result->wrong_shortcut.sc_segments;
+ dissimilarity = result->wrong_shortcut.dissimilarity;
+
+ pr_devel("-->%s(ix=%d dis=%lx scix=%d)\n",
+ __func__, level, dissimilarity, sc_level);
+
+ /* We need to split a shortcut and insert a node between the two
+ * pieces. Zero-length pieces will be dispensed with entirely.
+ *
+	 * First of all, we need to find out at which level the first
+	 * difference occurred.
+ */
+ diff = __ffs(dissimilarity);
+ diff &= ~ASSOC_ARRAY_LEVEL_STEP_MASK;
+ diff += sc_level & ~ASSOC_ARRAY_KEY_CHUNK_MASK;
+ pr_devel("diff=%d\n", diff);
+
+ if (!shortcut->back_pointer) {
+ edit->set[0].ptr = &edit->array->root;
+ } else if (assoc_array_ptr_is_node(shortcut->back_pointer)) {
+ node = assoc_array_ptr_to_node(shortcut->back_pointer);
+ edit->set[0].ptr = &node->slots[shortcut->parent_slot];
+ } else {
+ BUG();
+ }
+
+ edit->excised_meta[0] = assoc_array_shortcut_to_ptr(shortcut);
+
+ /* Create a new node now since we're going to need it anyway */
+ new_n0 = kzalloc(sizeof(struct assoc_array_node), GFP_KERNEL);
+ if (!new_n0)
+ return false;
+ edit->new_meta[0] = assoc_array_node_to_ptr(new_n0);
+ edit->adjust_count_on = new_n0;
+
+ /* Insert a new shortcut before the new node if this segment isn't of
+ * zero length - otherwise we just connect the new node directly to the
+ * parent.
+ */
+ level += ASSOC_ARRAY_LEVEL_STEP;
+ if (diff > level) {
+ pr_devel("pre-shortcut %d...%d\n", level, diff);
+ keylen = round_up(diff, ASSOC_ARRAY_KEY_CHUNK_SIZE);
+ keylen >>= ASSOC_ARRAY_KEY_CHUNK_SHIFT;
+
+ new_s0 = kzalloc(sizeof(struct assoc_array_shortcut) +
+ keylen * sizeof(unsigned long), GFP_KERNEL);
+ if (!new_s0)
+ return false;
+ edit->new_meta[1] = assoc_array_shortcut_to_ptr(new_s0);
+ edit->set[0].to = assoc_array_shortcut_to_ptr(new_s0);
+ new_s0->back_pointer = shortcut->back_pointer;
+ new_s0->parent_slot = shortcut->parent_slot;
+ new_s0->next_node = assoc_array_node_to_ptr(new_n0);
+ new_s0->skip_to_level = diff;
+
+ new_n0->back_pointer = assoc_array_shortcut_to_ptr(new_s0);
+ new_n0->parent_slot = 0;
+
+ memcpy(new_s0->index_key, shortcut->index_key,
+ keylen * sizeof(unsigned long));
+
+ blank = ULONG_MAX << (diff & ASSOC_ARRAY_KEY_CHUNK_MASK);
+ pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, diff, blank);
+ new_s0->index_key[keylen - 1] &= ~blank;
+ } else {
+ pr_devel("no pre-shortcut\n");
+ edit->set[0].to = assoc_array_node_to_ptr(new_n0);
+ new_n0->back_pointer = shortcut->back_pointer;
+ new_n0->parent_slot = shortcut->parent_slot;
+ }
+
+ side = assoc_array_ptr_to_node(shortcut->next_node);
+ new_n0->nr_leaves_on_branch = side->nr_leaves_on_branch;
+
+ /* We need to know which slot in the new node is going to take a
+ * metadata pointer.
+ */
+ sc_slot = sc_segments >> (diff & ASSOC_ARRAY_KEY_CHUNK_MASK);
+ sc_slot &= ASSOC_ARRAY_FAN_MASK;
+
+ pr_devel("new slot %lx >> %d -> %d\n",
+ sc_segments, diff & ASSOC_ARRAY_KEY_CHUNK_MASK, sc_slot);
+
+ /* Determine whether we need to follow the new node with a replacement
+ * for the current shortcut. We could in theory reuse the current
+ * shortcut if its parent slot number doesn't change - but that's a
+ * 1-in-16 chance so not worth expending the code upon.
+ */
+ level = diff + ASSOC_ARRAY_LEVEL_STEP;
+ if (level < shortcut->skip_to_level) {
+ pr_devel("post-shortcut %d...%d\n", level, shortcut->skip_to_level);
+ keylen = round_up(shortcut->skip_to_level, ASSOC_ARRAY_KEY_CHUNK_SIZE);
+ keylen >>= ASSOC_ARRAY_KEY_CHUNK_SHIFT;
+
+ new_s1 = kzalloc(sizeof(struct assoc_array_shortcut) +
+ keylen * sizeof(unsigned long), GFP_KERNEL);
+ if (!new_s1)
+ return false;
+ edit->new_meta[2] = assoc_array_shortcut_to_ptr(new_s1);
+
+ new_s1->back_pointer = assoc_array_node_to_ptr(new_n0);
+ new_s1->parent_slot = sc_slot;
+ new_s1->next_node = shortcut->next_node;
+ new_s1->skip_to_level = shortcut->skip_to_level;
+
+ new_n0->slots[sc_slot] = assoc_array_shortcut_to_ptr(new_s1);
+
+ memcpy(new_s1->index_key, shortcut->index_key,
+ keylen * sizeof(unsigned long));
+
+ edit->set[1].ptr = &side->back_pointer;
+ edit->set[1].to = assoc_array_shortcut_to_ptr(new_s1);
+ } else {
+ pr_devel("no post-shortcut\n");
+
+ /* We don't have to replace the pointed-to node as long as we
+ * use memory barriers to make sure the parent slot number is
+ * changed before the back pointer (the parent slot number is
+ * irrelevant to the old parent shortcut).
+ */
+ new_n0->slots[sc_slot] = shortcut->next_node;
+ edit->set_parent_slot[0].p = &side->parent_slot;
+ edit->set_parent_slot[0].to = sc_slot;
+ edit->set[1].ptr = &side->back_pointer;
+ edit->set[1].to = assoc_array_node_to_ptr(new_n0);
+ }
+
+ /* Install the new leaf in a spare slot in the new node. */
+ if (sc_slot == 0)
+ edit->leaf_p = &new_n0->slots[1];
+ else
+ edit->leaf_p = &new_n0->slots[0];
+
+ pr_devel("<--%s() = ok [split shortcut]\n", __func__);
+	return true;
+}
+
+/**
+ * assoc_array_insert - Script insertion of an object into an associative array
+ * @array: The array to insert into.
+ * @ops: The operations to use.
+ * @index_key: The key to insert at.
+ * @object: The object to insert.
+ *
+ * Precalculate and preallocate a script for the insertion or replacement of an
+ * object in an associative array. This results in an edit script that can
+ * either be applied or cancelled.
+ *
+ * The function returns a pointer to an edit script or -ENOMEM.
+ *
+ * The caller should lock against other modifications and must continue to hold
+ * the lock until assoc_array_apply_edit() has been called.
+ *
+ * Accesses to the tree may take place concurrently with this function,
+ * provided they hold the RCU read lock.
+ */
+struct assoc_array_edit *assoc_array_insert(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key,
+ void *object)
+{
+ struct assoc_array_walk_result result;
+ struct assoc_array_edit *edit;
+
+ pr_devel("-->%s()\n", __func__);
+
+ /* The leaf pointer we're given must not have the bottom bit set as we
+ * use those for type-marking the pointer. NULL pointers are also not
+ * allowed as they indicate an empty slot but we have to allow them
+ * here as they can be updated later.
+ */
+ BUG_ON(assoc_array_ptr_is_meta(object));
+
+ edit = kzalloc(sizeof(struct assoc_array_edit), GFP_KERNEL);
+ if (!edit)
+ return ERR_PTR(-ENOMEM);
+ edit->array = array;
+ edit->ops = ops;
+ edit->leaf = assoc_array_leaf_to_ptr(object);
+ edit->adjust_count_by = 1;
+
+ switch (assoc_array_walk(array, ops, index_key, &result)) {
+ case assoc_array_walk_tree_empty:
+ /* Allocate a root node if there isn't one yet */
+ if (!assoc_array_insert_in_empty_tree(edit))
+ goto enomem;
+ return edit;
+
+ case assoc_array_walk_found_terminal_node:
+ /* We found a node that doesn't have a node/shortcut pointer in
+ * the slot corresponding to the index key that we have to
+ * follow.
+ */
+ if (!assoc_array_insert_into_terminal_node(edit, ops, index_key,
+ &result))
+ goto enomem;
+ return edit;
+
+ case assoc_array_walk_found_wrong_shortcut:
+ /* We found a shortcut that didn't match our key in a slot we
+ * needed to follow.
+ */
+ if (!assoc_array_insert_mid_shortcut(edit, ops, &result))
+ goto enomem;
+ return edit;
+ }
+
+enomem:
+ /* Clean up after an out of memory error */
+ pr_devel("enomem\n");
+ assoc_array_cancel_edit(edit);
+ return ERR_PTR(-ENOMEM);
+}
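Putting the edit-script flow together, an insertion might look like the following sketch (my_* names are hypothetical). All allocation happens here, so applying the script later cannot fail; assoc_array_cancel_edit() would instead discard it without touching the array:

    struct assoc_array_edit *edit;

    down_write(&my_sem);                    /* exclude other modifiers */
    edit = assoc_array_insert(&my_array, &my_ops, &obj->key, obj);
    if (IS_ERR(edit)) {
            up_write(&my_sem);
            return PTR_ERR(edit);
    }
    assoc_array_apply_edit(edit);           /* edit itself is freed via RCU */
    up_write(&my_sem);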
+
+/**
+ * assoc_array_insert_set_object - Set the new object pointer in an edit script
+ * @edit: The edit script to modify.
+ * @object: The object pointer to set.
+ *
+ * Change the object to be inserted in an edit script. The object pointed to
+ * by the old pointer is not freed.  This must be done prior to applying the
+ * script.
+ */
+void assoc_array_insert_set_object(struct assoc_array_edit *edit, void *object)
+{
+ BUG_ON(!object);
+ edit->leaf = assoc_array_leaf_to_ptr(object);
+}
+
+struct assoc_array_delete_collapse_context {
+ struct assoc_array_node *node;
+ const void *skip_leaf;
+ int slot;
+};
+
+/*
+ * Subtree collapse to node iterator.
+ */
+static int assoc_array_delete_collapse_iterator(const void *leaf,
+ void *iterator_data)
+{
+ struct assoc_array_delete_collapse_context *collapse = iterator_data;
+
+ if (leaf == collapse->skip_leaf)
+ return 0;
+
+ BUG_ON(collapse->slot >= ASSOC_ARRAY_FAN_OUT);
+
+ collapse->node->slots[collapse->slot++] = assoc_array_leaf_to_ptr(leaf);
+ return 0;
+}
+
+/**
+ * assoc_array_delete - Script deletion of an object from an associative array
+ * @array: The array to search.
+ * @ops: The operations to use.
+ * @index_key: The key to the object.
+ *
+ * Precalculate and preallocate a script for the deletion of an object from an
+ * associative array. This results in an edit script that can either be
+ * applied or cancelled.
+ *
+ * The function returns a pointer to an edit script if the object was found,
+ * NULL if the object was not found or -ENOMEM.
+ *
+ * The caller should lock against other modifications and must continue to hold
+ * the lock until assoc_array_apply_edit() has been called.
+ *
+ * Accesses to the tree may take place concurrently with this function,
+ * provided they hold the RCU read lock.
+ */
+struct assoc_array_edit *assoc_array_delete(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ const void *index_key)
+{
+ struct assoc_array_delete_collapse_context collapse;
+ struct assoc_array_walk_result result;
+ struct assoc_array_node *node, *new_n0;
+ struct assoc_array_edit *edit;
+ struct assoc_array_ptr *ptr;
+ bool has_meta;
+ int slot, i;
+
+ pr_devel("-->%s()\n", __func__);
+
+ edit = kzalloc(sizeof(struct assoc_array_edit), GFP_KERNEL);
+ if (!edit)
+ return ERR_PTR(-ENOMEM);
+ edit->array = array;
+ edit->ops = ops;
+ edit->adjust_count_by = -1;
+
+ switch (assoc_array_walk(array, ops, index_key, &result)) {
+ case assoc_array_walk_found_terminal_node:
+ /* We found a node that should contain the leaf we've been
+ * asked to remove - *if* it's in the tree.
+ */
+ pr_devel("terminal_node\n");
+ node = result.terminal_node.node;
+
+ for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = node->slots[slot];
+ if (ptr &&
+ assoc_array_ptr_is_leaf(ptr) &&
+ ops->compare_object(assoc_array_ptr_to_leaf(ptr),
+ index_key))
+ goto found_leaf;
+ }
+ case assoc_array_walk_tree_empty:
+ case assoc_array_walk_found_wrong_shortcut:
+ default:
+ assoc_array_cancel_edit(edit);
+ pr_devel("not found\n");
+ return NULL;
+ }
+
+found_leaf:
+ BUG_ON(array->nr_leaves_on_tree <= 0);
+
+ /* In the simplest form of deletion we just clear the slot and release
+ * the leaf after a suitable interval.
+ */
+ edit->dead_leaf = node->slots[slot];
+ edit->set[0].ptr = &node->slots[slot];
+ edit->set[0].to = NULL;
+ edit->adjust_count_on = node;
+
+ /* If that concludes erasure of the last leaf, then delete the entire
+ * internal array.
+ */
+ if (array->nr_leaves_on_tree == 1) {
+ edit->set[1].ptr = &array->root;
+ edit->set[1].to = NULL;
+ edit->adjust_count_on = NULL;
+ edit->excised_subtree = array->root;
+ pr_devel("all gone\n");
+ return edit;
+ }
+
+ /* However, we'd also like to clear up some metadata blocks if we
+ * possibly can.
+ *
+ * We go for a simple algorithm of: if this node has FAN_OUT or fewer
+ * leaves in it, then attempt to collapse it - and attempt to
+ * recursively collapse up the tree.
+ *
+ * We could also try and collapse in partially filled subtrees to take
+ * up space in this node.
+ */
+ if (node->nr_leaves_on_branch <= ASSOC_ARRAY_FAN_OUT + 1) {
+ struct assoc_array_node *parent, *grandparent;
+ struct assoc_array_ptr *ptr;
+
+ /* First of all, we need to know if this node has metadata so
+ * that we don't try collapsing if all the leaves are already
+ * here.
+ */
+ has_meta = false;
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ ptr = node->slots[i];
+ if (assoc_array_ptr_is_meta(ptr)) {
+ has_meta = true;
+ break;
+ }
+ }
+
+ pr_devel("leaves: %ld [m=%d]\n",
+ node->nr_leaves_on_branch - 1, has_meta);
+
+ /* Look further up the tree to see if we can collapse this node
+ * into a more proximal node too.
+ */
+ parent = node;
+ collapse_up:
+ pr_devel("collapse subtree: %ld\n", parent->nr_leaves_on_branch);
+
+ ptr = parent->back_pointer;
+ if (!ptr)
+ goto do_collapse;
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ struct assoc_array_shortcut *s = assoc_array_ptr_to_shortcut(ptr);
+ ptr = s->back_pointer;
+ if (!ptr)
+ goto do_collapse;
+ }
+
+ grandparent = assoc_array_ptr_to_node(ptr);
+ if (grandparent->nr_leaves_on_branch <= ASSOC_ARRAY_FAN_OUT + 1) {
+ parent = grandparent;
+ goto collapse_up;
+ }
+
+ do_collapse:
+ /* There's no point collapsing if the original node has no meta
+ * pointers to discard and if we didn't merge into one of that
+		 * node's ancestors.
+ */
+ if (has_meta || parent != node) {
+ node = parent;
+
+ /* Create a new node to collapse into */
+ new_n0 = kzalloc(sizeof(struct assoc_array_node), GFP_KERNEL);
+ if (!new_n0)
+ goto enomem;
+ edit->new_meta[0] = assoc_array_node_to_ptr(new_n0);
+
+ new_n0->back_pointer = node->back_pointer;
+ new_n0->parent_slot = node->parent_slot;
+ new_n0->nr_leaves_on_branch = node->nr_leaves_on_branch;
+ edit->adjust_count_on = new_n0;
+
+ collapse.node = new_n0;
+ collapse.skip_leaf = assoc_array_ptr_to_leaf(edit->dead_leaf);
+ collapse.slot = 0;
+ assoc_array_subtree_iterate(assoc_array_node_to_ptr(node),
+ node->back_pointer,
+ assoc_array_delete_collapse_iterator,
+ &collapse);
+ pr_devel("collapsed %d,%lu\n", collapse.slot, new_n0->nr_leaves_on_branch);
+ BUG_ON(collapse.slot != new_n0->nr_leaves_on_branch - 1);
+
+ if (!node->back_pointer) {
+ edit->set[1].ptr = &array->root;
+ } else if (assoc_array_ptr_is_leaf(node->back_pointer)) {
+ BUG();
+ } else if (assoc_array_ptr_is_node(node->back_pointer)) {
+ struct assoc_array_node *p =
+ assoc_array_ptr_to_node(node->back_pointer);
+ edit->set[1].ptr = &p->slots[node->parent_slot];
+ } else if (assoc_array_ptr_is_shortcut(node->back_pointer)) {
+ struct assoc_array_shortcut *s =
+ assoc_array_ptr_to_shortcut(node->back_pointer);
+ edit->set[1].ptr = &s->next_node;
+ }
+ edit->set[1].to = assoc_array_node_to_ptr(new_n0);
+ edit->excised_subtree = assoc_array_node_to_ptr(node);
+ }
+ }
+
+ return edit;
+
+enomem:
+ /* Clean up after an out of memory error */
+ pr_devel("enomem\n");
+ assoc_array_cancel_edit(edit);
+ return ERR_PTR(-ENOMEM);
+}
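Deletion follows the same pattern; note that NULL is not an error here but means the key was simply absent (sketch, hypothetical names, modification lock held as above):

    edit = assoc_array_delete(&my_array, &my_ops, &key);
    if (IS_ERR(edit))
            return PTR_ERR(edit);
    if (edit)
            assoc_array_apply_edit(edit);   /* object freed after RCU grace */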
+
+/**
+ * assoc_array_clear - Script deletion of all objects from an associative array
+ * @array: The array to clear.
+ * @ops: The operations to use.
+ *
+ * Precalculate and preallocate a script for the deletion of all the objects
+ * from an associative array. This results in an edit script that can either
+ * be applied or cancelled.
+ *
+ * The function returns a pointer to an edit script if there are objects to be
+ * deleted, NULL if there are no objects in the array or -ENOMEM.
+ *
+ * The caller should lock against other modifications and must continue to hold
+ * the lock until assoc_array_apply_edit() has been called.
+ *
+ * Accesses to the tree may take place concurrently with this function,
+ * provided they hold the RCU read lock.
+ */
+struct assoc_array_edit *assoc_array_clear(struct assoc_array *array,
+ const struct assoc_array_ops *ops)
+{
+ struct assoc_array_edit *edit;
+
+ pr_devel("-->%s()\n", __func__);
+
+ if (!array->root)
+ return NULL;
+
+ edit = kzalloc(sizeof(struct assoc_array_edit), GFP_KERNEL);
+ if (!edit)
+ return ERR_PTR(-ENOMEM);
+ edit->array = array;
+ edit->ops = ops;
+ edit->set[1].ptr = &array->root;
+ edit->set[1].to = NULL;
+ edit->excised_subtree = array->root;
+ edit->ops_for_excised_subtree = ops;
+ pr_devel("all gone\n");
+ return edit;
+}
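Clearing is scripted the same way (sketch, hypothetical names as before):

    edit = assoc_array_clear(&my_array, &my_ops);
    if (IS_ERR(edit))
            return PTR_ERR(edit);
    if (edit)                               /* NULL: array was already empty */
            assoc_array_apply_edit(edit);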
+
+/*
+ * Handle the deferred destruction after an applied edit.
+ */
+static void assoc_array_rcu_cleanup(struct rcu_head *head)
+{
+ struct assoc_array_edit *edit =
+ container_of(head, struct assoc_array_edit, rcu);
+ int i;
+
+ pr_devel("-->%s()\n", __func__);
+
+ if (edit->dead_leaf)
+ edit->ops->free_object(assoc_array_ptr_to_leaf(edit->dead_leaf));
+ for (i = 0; i < ARRAY_SIZE(edit->excised_meta); i++)
+ if (edit->excised_meta[i])
+ kfree(assoc_array_ptr_to_node(edit->excised_meta[i]));
+
+ if (edit->excised_subtree) {
+ BUG_ON(assoc_array_ptr_is_leaf(edit->excised_subtree));
+ if (assoc_array_ptr_is_node(edit->excised_subtree)) {
+ struct assoc_array_node *n =
+ assoc_array_ptr_to_node(edit->excised_subtree);
+ n->back_pointer = NULL;
+ } else {
+ struct assoc_array_shortcut *s =
+ assoc_array_ptr_to_shortcut(edit->excised_subtree);
+ s->back_pointer = NULL;
+ }
+ assoc_array_destroy_subtree(edit->excised_subtree,
+ edit->ops_for_excised_subtree);
+ }
+
+ kfree(edit);
+}
+
+/**
+ * assoc_array_apply_edit - Apply an edit script to an associative array
+ * @edit: The script to apply.
+ *
+ * Apply an edit script to an associative array to effect an insertion,
+ * deletion or clearance. As the edit script includes preallocated memory,
+ * this is guaranteed not to fail.
+ *
+ * The edit script, dead objects and dead metadata will be scheduled for
+ * destruction after an RCU grace period to permit those doing read-only
+ * accesses on the array to continue to do so under the RCU read lock whilst
+ * the edit is taking place.
+ */
+void assoc_array_apply_edit(struct assoc_array_edit *edit)
+{
+ struct assoc_array_shortcut *shortcut;
+ struct assoc_array_node *node;
+ struct assoc_array_ptr *ptr;
+ int i;
+
+ pr_devel("-->%s()\n", __func__);
+
+ smp_wmb();
+ if (edit->leaf_p)
+ *edit->leaf_p = edit->leaf;
+
+ smp_wmb();
+ for (i = 0; i < ARRAY_SIZE(edit->set_parent_slot); i++)
+ if (edit->set_parent_slot[i].p)
+ *edit->set_parent_slot[i].p = edit->set_parent_slot[i].to;
+
+ smp_wmb();
+ for (i = 0; i < ARRAY_SIZE(edit->set_backpointers); i++)
+ if (edit->set_backpointers[i])
+ *edit->set_backpointers[i] = edit->set_backpointers_to;
+
+ smp_wmb();
+ for (i = 0; i < ARRAY_SIZE(edit->set); i++)
+ if (edit->set[i].ptr)
+ *edit->set[i].ptr = edit->set[i].to;
+
+ if (edit->array->root == NULL) {
+ edit->array->nr_leaves_on_tree = 0;
+ } else if (edit->adjust_count_on) {
+ node = edit->adjust_count_on;
+ for (;;) {
+ node->nr_leaves_on_branch += edit->adjust_count_by;
+
+ ptr = node->back_pointer;
+ if (!ptr)
+ break;
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ shortcut = assoc_array_ptr_to_shortcut(ptr);
+ ptr = shortcut->back_pointer;
+ if (!ptr)
+ break;
+ }
+ BUG_ON(!assoc_array_ptr_is_node(ptr));
+ node = assoc_array_ptr_to_node(ptr);
+ }
+
+ edit->array->nr_leaves_on_tree += edit->adjust_count_by;
+ }
+
+ call_rcu(&edit->rcu, assoc_array_rcu_cleanup);
+}
+
+/**
+ * assoc_array_cancel_edit - Discard an edit script.
+ * @edit: The script to discard.
+ *
+ * Free an edit script and all the preallocated data it holds without making
+ * any changes to the associative array it was intended for.
+ *
+ * NOTE! In the case of an insertion script, this does _not_ release the leaf
+ * that was to be inserted. That is left to the caller.
+ */
+void assoc_array_cancel_edit(struct assoc_array_edit *edit)
+{
+ struct assoc_array_ptr *ptr;
+ int i;
+
+ pr_devel("-->%s()\n", __func__);
+
+ /* Clean up after an out of memory error */
+ for (i = 0; i < ARRAY_SIZE(edit->new_meta); i++) {
+ ptr = edit->new_meta[i];
+ if (ptr) {
+ if (assoc_array_ptr_is_node(ptr))
+ kfree(assoc_array_ptr_to_node(ptr));
+ else
+ kfree(assoc_array_ptr_to_shortcut(ptr));
+ }
+ }
+ kfree(edit);
+}
+
+/**
+ * assoc_array_gc - Garbage collect an associative array.
+ * @array: The array to clean.
+ * @ops: The operations to use.
+ * @iterator: A callback function to pass judgement on each object.
+ * @iterator_data: Private data for the callback function.
+ *
+ * Collect garbage from an associative array and pack down the internal tree to
+ * save memory.
+ *
+ * The iterator function is asked to pass judgement upon each object in the
+ * array.  If it returns false, the object is discarded; if it returns true,
+ * the object is kept and the iterator must increment the object's usage
+ * count (or do whatever else it needs to do to retain it) before returning.
+ *
+ * This function returns 0 if successful or -ENOMEM if out of memory. In the
+ * latter case, the array is not changed.
+ *
+ * The caller should lock against other modifications and must continue to hold
+ * the lock until assoc_array_apply_edit() has been called.
+ *
+ * Accesses to the tree may take place concurrently with this function,
+ * provided they hold the RCU read lock.
+ */
+int assoc_array_gc(struct assoc_array *array,
+ const struct assoc_array_ops *ops,
+ bool (*iterator)(void *object, void *iterator_data),
+ void *iterator_data)
+{
+ struct assoc_array_shortcut *shortcut, *new_s;
+ struct assoc_array_node *node, *new_n;
+ struct assoc_array_edit *edit;
+ struct assoc_array_ptr *cursor, *ptr;
+ struct assoc_array_ptr *new_root, *new_parent, **new_ptr_pp;
+ unsigned long nr_leaves_on_tree;
+ int keylen, slot, nr_free, next_slot, i;
+
+ pr_devel("-->%s()\n", __func__);
+
+ if (!array->root)
+ return 0;
+
+ edit = kzalloc(sizeof(struct assoc_array_edit), GFP_KERNEL);
+ if (!edit)
+ return -ENOMEM;
+ edit->array = array;
+ edit->ops = ops;
+ edit->ops_for_excised_subtree = ops;
+ edit->set[0].ptr = &array->root;
+ edit->excised_subtree = array->root;
+
+ new_root = new_parent = NULL;
+ new_ptr_pp = &new_root;
+ cursor = array->root;
+
+descend:
+ /* If this point is a shortcut, then we need to duplicate it and
+ * advance the target cursor.
+ */
+ if (assoc_array_ptr_is_shortcut(cursor)) {
+ shortcut = assoc_array_ptr_to_shortcut(cursor);
+ keylen = round_up(shortcut->skip_to_level, ASSOC_ARRAY_KEY_CHUNK_SIZE);
+ keylen >>= ASSOC_ARRAY_KEY_CHUNK_SHIFT;
+ new_s = kmalloc(sizeof(struct assoc_array_shortcut) +
+ keylen * sizeof(unsigned long), GFP_KERNEL);
+ if (!new_s)
+ goto enomem;
+ pr_devel("dup shortcut %p -> %p\n", shortcut, new_s);
+ memcpy(new_s, shortcut, (sizeof(struct assoc_array_shortcut) +
+ keylen * sizeof(unsigned long)));
+ new_s->back_pointer = new_parent;
+ new_s->parent_slot = shortcut->parent_slot;
+ *new_ptr_pp = new_parent = assoc_array_shortcut_to_ptr(new_s);
+ new_ptr_pp = &new_s->next_node;
+ cursor = shortcut->next_node;
+ }
+
+ /* Duplicate the node at this position */
+ node = assoc_array_ptr_to_node(cursor);
+ new_n = kzalloc(sizeof(struct assoc_array_node), GFP_KERNEL);
+ if (!new_n)
+ goto enomem;
+ pr_devel("dup node %p -> %p\n", node, new_n);
+ new_n->back_pointer = new_parent;
+ new_n->parent_slot = node->parent_slot;
+ *new_ptr_pp = new_parent = assoc_array_node_to_ptr(new_n);
+ new_ptr_pp = NULL;
+ slot = 0;
+
+continue_node:
+ /* Filter across any leaves and gc any subtrees */
+ for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = node->slots[slot];
+ if (!ptr)
+ continue;
+
+ if (assoc_array_ptr_is_leaf(ptr)) {
+ if (iterator(assoc_array_ptr_to_leaf(ptr),
+ iterator_data))
+ /* The iterator will have done any reference
+ * counting on the object for us.
+ */
+ new_n->slots[slot] = ptr;
+ continue;
+ }
+
+ new_ptr_pp = &new_n->slots[slot];
+ cursor = ptr;
+ goto descend;
+ }
+
+ pr_devel("-- compress node %p --\n", new_n);
+
+ /* Count up the number of empty slots in this node and work out the
+ * subtree leaf count.
+ */
+ new_n->nr_leaves_on_branch = 0;
+ nr_free = 0;
+ for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = new_n->slots[slot];
+ if (!ptr)
+ nr_free++;
+ else if (assoc_array_ptr_is_leaf(ptr))
+ new_n->nr_leaves_on_branch++;
+ }
+ pr_devel("free=%d, leaves=%lu\n", nr_free, new_n->nr_leaves_on_branch);
+
+ /* See what we can fold in */
+ next_slot = 0;
+ for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ struct assoc_array_shortcut *s;
+ struct assoc_array_node *child;
+
+ ptr = new_n->slots[slot];
+ if (!ptr || assoc_array_ptr_is_leaf(ptr))
+ continue;
+
+ s = NULL;
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ s = assoc_array_ptr_to_shortcut(ptr);
+ ptr = s->next_node;
+ }
+
+ child = assoc_array_ptr_to_node(ptr);
+ new_n->nr_leaves_on_branch += child->nr_leaves_on_branch;
+
+ if (child->nr_leaves_on_branch <= nr_free + 1) {
+ /* Fold the child node into this one */
+ pr_devel("[%d] fold node %lu/%d [nx %d]\n",
+ slot, child->nr_leaves_on_branch, nr_free + 1,
+ next_slot);
+
+ /* We would already have reaped an intervening shortcut
+ * on the way back up the tree.
+ */
+ BUG_ON(s);
+
+ new_n->slots[slot] = NULL;
+ nr_free++;
+ if (slot < next_slot)
+ next_slot = slot;
+ for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
+ struct assoc_array_ptr *p = child->slots[i];
+ if (!p)
+ continue;
+ BUG_ON(assoc_array_ptr_is_meta(p));
+ while (new_n->slots[next_slot])
+ next_slot++;
+ BUG_ON(next_slot >= ASSOC_ARRAY_FAN_OUT);
+ new_n->slots[next_slot++] = p;
+ nr_free--;
+ }
+ kfree(child);
+ } else {
+ pr_devel("[%d] retain node %lu/%d [nx %d]\n",
+ slot, child->nr_leaves_on_branch, nr_free + 1,
+ next_slot);
+ }
+ }
+
+ pr_devel("after: %lu\n", new_n->nr_leaves_on_branch);
+
+ nr_leaves_on_tree = new_n->nr_leaves_on_branch;
+
+ /* Excise this node if it is singly occupied by a shortcut */
+ if (nr_free == ASSOC_ARRAY_FAN_OUT - 1) {
+ for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++)
+ if ((ptr = new_n->slots[slot]))
+ break;
+
+ if (assoc_array_ptr_is_meta(ptr) &&
+ assoc_array_ptr_is_shortcut(ptr)) {
+ pr_devel("excise node %p with 1 shortcut\n", new_n);
+ new_s = assoc_array_ptr_to_shortcut(ptr);
+ new_parent = new_n->back_pointer;
+ slot = new_n->parent_slot;
+ kfree(new_n);
+ if (!new_parent) {
+ new_s->back_pointer = NULL;
+ new_s->parent_slot = 0;
+ new_root = ptr;
+ goto gc_complete;
+ }
+
+ if (assoc_array_ptr_is_shortcut(new_parent)) {
+ /* We can discard any preceding shortcut also */
+ struct assoc_array_shortcut *s =
+ assoc_array_ptr_to_shortcut(new_parent);
+
+ pr_devel("excise preceding shortcut\n");
+
+ new_parent = new_s->back_pointer = s->back_pointer;
+ slot = new_s->parent_slot = s->parent_slot;
+ kfree(s);
+ if (!new_parent) {
+ new_s->back_pointer = NULL;
+ new_s->parent_slot = 0;
+ new_root = ptr;
+ goto gc_complete;
+ }
+ }
+
+ new_s->back_pointer = new_parent;
+ new_s->parent_slot = slot;
+ new_n = assoc_array_ptr_to_node(new_parent);
+ new_n->slots[slot] = ptr;
+ goto ascend_old_tree;
+ }
+ }
+
+ /* Excise any shortcuts we might encounter that point to nodes that
+ * only contain leaves.
+ */
+ ptr = new_n->back_pointer;
+ if (!ptr)
+ goto gc_complete;
+
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ new_s = assoc_array_ptr_to_shortcut(ptr);
+ new_parent = new_s->back_pointer;
+ slot = new_s->parent_slot;
+
+ if (new_n->nr_leaves_on_branch <= ASSOC_ARRAY_FAN_OUT) {
+ struct assoc_array_node *n;
+
+ pr_devel("excise shortcut\n");
+ new_n->back_pointer = new_parent;
+ new_n->parent_slot = slot;
+ kfree(new_s);
+ if (!new_parent) {
+ new_root = assoc_array_node_to_ptr(new_n);
+ goto gc_complete;
+ }
+
+ n = assoc_array_ptr_to_node(new_parent);
+ n->slots[slot] = assoc_array_node_to_ptr(new_n);
+ }
+ } else {
+ new_parent = ptr;
+ }
+ new_n = assoc_array_ptr_to_node(new_parent);
+
+ascend_old_tree:
+ ptr = node->back_pointer;
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ shortcut = assoc_array_ptr_to_shortcut(ptr);
+ slot = shortcut->parent_slot;
+ cursor = shortcut->back_pointer;
+ } else {
+ slot = node->parent_slot;
+ cursor = ptr;
+ }
+ BUG_ON(!ptr);
+ node = assoc_array_ptr_to_node(cursor);
+ slot++;
+ goto continue_node;
+
+gc_complete:
+ edit->set[0].to = new_root;
+ assoc_array_apply_edit(edit);
+ edit->array->nr_leaves_on_tree = nr_leaves_on_tree;
+ return 0;
+
+enomem:
+ pr_devel("enomem\n");
+ assoc_array_destroy_subtree(new_root, edit->ops);
+ kfree(edit);
+ return -ENOMEM;
+}
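A GC sketch, assuming the hypothetical my_obj additionally carries an atomic usage count and an in_use flag. With refcounted objects, free_object should drop a reference rather than kfree directly, since every leaf of the old tree is passed to free_object when the excised subtree is torn down:

    /* Judgement callback: keep live objects, taking a ref for the new tree */
    static bool my_judge(void *object, void *iterator_data)
    {
            struct my_obj *obj = object;

            if (!obj->in_use)
                    return false;           /* discard */
            atomic_inc(&obj->usage);
            return true;                    /* keep */
    }

    down_write(&my_sem);
    ret = assoc_array_gc(&my_array, &my_ops, my_judge, NULL);
    up_write(&my_sem);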
#include <linux/export.h>
#include <linux/lockref.h>
+#include <linux/mutex.h>
#if USE_CMPXCHG_LOCKREF
# define cmpxchg64_relaxed cmpxchg64
#endif
-/*
- * Allow architectures to override the default cpu_relax() within CMPXCHG_LOOP.
- * This is useful for architectures with an expensive cpu_relax().
- */
-#ifndef arch_mutex_cpu_relax
-# define arch_mutex_cpu_relax() cpu_relax()
-#endif
-
/*
* Note that the "cmpxchg()" reloads the "old" value for the
* failure case.
kfree(a);
}
EXPORT_SYMBOL_GPL(mpi_free);
+
+MODULE_DESCRIPTION("Multiprecision maths library");
+MODULE_LICENSE("GPL");
min(pool->nr_free, pool->percpu_batch_size));
}
-static inline unsigned alloc_local_tag(struct percpu_ida *pool,
- struct percpu_ida_cpu *tags)
+static inline unsigned alloc_local_tag(struct percpu_ida_cpu *tags)
{
int tag = -ENOSPC;
tags = this_cpu_ptr(pool->tag_cpu);
/* Fastpath */
- tag = alloc_local_tag(pool, tags);
+ tag = alloc_local_tag(tags);
if (likely(tag >= 0)) {
local_irq_restore(flags);
return tag;
config MEM_SOFT_DIRTY
bool "Track memory changes"
- depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY
+ depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
select PROC_PAGE_MONITOR
help
This option enables memory changes tracking by introducing a
bool migrate_scanner)
{
struct zone *zone = cc->zone;
+
+ if (cc->ignore_skip_hint)
+ return;
+
if (!page)
return;
if (mapping_cap_account_dirty(mapping)) {
unsigned long addr;
struct file *file = get_file(vma->vm_file);
+ /* mmap_region may free vma; grab the info now */
+ vm_flags = vma->vm_flags;
- addr = mmap_region(file, start, size,
- vma->vm_flags, pgoff);
+ addr = mmap_region(file, start, size, vm_flags, pgoff);
fput(file);
if (IS_ERR_VALUE(addr)) {
err = addr;
BUG_ON(addr != start);
err = 0;
}
- goto out;
+ goto out_freed;
}
mutex_lock(&mapping->i_mmap_mutex);
flush_dcache_mmap_lock(mapping);
out:
if (vma)
vm_flags = vma->vm_flags;
+out_freed:
if (likely(!has_write_lock))
up_read(&mm->mmap_sem);
else
ret = 0;
goto out_unlock;
}
+
if (unlikely(pmd_trans_splitting(pmd))) {
/* split huge page running from under us */
spin_unlock(src_ptl);
if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
return ERR_PTR(-EFAULT);
+ /* Full NUMA hinting faults to serialise migration in fault paths */
+ if ((flags & FOLL_NUMA) && pmd_numa(*pmd))
+ goto out;
+
page = pmd_page(*pmd);
VM_BUG_ON(!PageHead(page));
if (flags & FOLL_TOUCH) {
if (unlikely(!pmd_same(pmd, *pmdp)))
goto out_unlock;
+ /*
+ * If there are potential migrations, wait for completion and retry
+ * without disrupting NUMA hinting information. Do not relock and
+ * check_same as the page may no longer be mapped.
+ */
+ if (unlikely(pmd_trans_migrating(*pmdp))) {
+ spin_unlock(ptl);
+ wait_migrate_huge_page(vma->anon_vma, pmdp);
+ goto out;
+ }
+
page = pmd_page(pmd);
BUG_ON(is_huge_zero_page(page));
page_nid = page_to_nid(page);
/* If the page was locked, there are no parallel migrations */
if (page_locked)
goto clear_pmdnuma;
+ }
- /*
- * Otherwise wait for potential migrations and retry. We do
- * relock and check_same as the page may no longer be mapped.
- * As the fault is being retried, do not account for it.
- */
+ /* Migration could have started since the pmd_trans_migrating check */
+ if (!page_locked) {
spin_unlock(ptl);
wait_on_page_locked(page);
page_nid = -1;
goto out;
}
- /* Page is misplaced, serialise migrations and parallel THP splits */
+ /*
+ * Page is misplaced. The page lock serialises migrations; acquire the
+ * anon_vma to serialise THP splits.
+ */
get_page(page);
spin_unlock(ptl);
- if (!page_locked)
- lock_page(page);
anon_vma = page_lock_anon_vma_read(page);
/* Confirm the PMD did not change while page_table_lock was released */
goto out_unlock;
}
+ /* Bail if we fail to protect against THP splits for any reason */
+ if (unlikely(!anon_vma)) {
+ put_page(page);
+ page_nid = -1;
+ goto clear_pmdnuma;
+ }
+
/*
* Migrate the THP to the requested node, returns with page unlocked
* and pmd_numa cleared.
pmd = pmdp_get_and_clear(mm, old_addr, old_pmd);
VM_BUG_ON(!pmd_none(*new_pmd));
set_pmd_at(mm, new_addr, new_pmd, pmd_mksoft_dirty(pmd));
- if (new_ptl != old_ptl)
+ if (new_ptl != old_ptl) {
+ pgtable_t pgtable;
+
+ /*
+ * Move preallocated PTE page table if new_pmd is on
+ * different PMD page table.
+ */
+ pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
+ pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
+
spin_unlock(new_ptl);
+ }
spin_unlock(old_ptl);
}
out:
ret = 1;
if (!prot_numa) {
entry = pmdp_get_and_clear(mm, addr, pmd);
+ if (pmd_numa(entry))
+ entry = pmd_mknonnuma(entry);
entry = pmd_modify(entry, newprot);
ret = HPAGE_PMD_NR;
BUG_ON(pmd_write(entry));
*/
if (!is_huge_zero_page(page) &&
!pmd_numa(*pmd)) {
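+ /*
+ * Read the pmd without clearing it: clearing it, even briefly,
+ * would let a parallel fault observe a pmd_none entry while the
+ * NUMA hint is being set.
+ */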
- entry = pmdp_get_and_clear(mm, addr, pmd);
+ entry = *pmd;
entry = pmd_mknuma(entry);
ret = HPAGE_PMD_NR;
}
return 0;
}
-static void copy_gigantic_page(struct page *dst, struct page *src)
-{
- int i;
- struct hstate *h = page_hstate(src);
- struct page *dst_base = dst;
- struct page *src_base = src;
-
- for (i = 0; i < pages_per_huge_page(h); ) {
- cond_resched();
- copy_highpage(dst, src);
-
- i++;
- dst = mem_map_next(dst, dst_base, i);
- src = mem_map_next(src, src_base, i);
- }
-}
-
-void copy_huge_page(struct page *dst, struct page *src)
-{
- int i;
- struct hstate *h = page_hstate(src);
-
- if (unlikely(pages_per_huge_page(h) > MAX_ORDER_NR_PAGES)) {
- copy_gigantic_page(dst, src);
- return;
- }
-
- might_sleep();
- for (i = 0; i < pages_per_huge_page(h); i++) {
- cond_resched();
- copy_highpage(dst + i, src + i);
- }
-}
-
static void enqueue_huge_page(struct hstate *h, struct page *page)
{
int nid = page_to_nid(page);
}
EXPORT_SYMBOL_GPL(PageHuge);
+/*
+ * PageHeadHuge() returns true only for a hugetlbfs head page, not for
+ * normal pages or transparent huge pages.
+ */
+int PageHeadHuge(struct page *page_head)
+{
+ compound_page_dtor *dtor;
+
+ if (!PageHead(page_head))
+ return 0;
+
+ dtor = get_compound_page_dtor(page_head);
+
+ return dtor == free_huge_page;
+}
+EXPORT_SYMBOL_GPL(PageHeadHuge);
+
pgoff_t __basepage_index(struct page *page)
{
struct page *page_head = compound_head(page);
static size_t memcg_size(void)
{
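+ /*
+ * The trailing nodeinfo[] member is an array of per-node pointers,
+ * hence the sizeof of the pointer type.
+ */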
return sizeof(struct mem_cgroup) +
- nr_node_ids * sizeof(struct mem_cgroup_per_node);
+ nr_node_ids * sizeof(struct mem_cgroup_per_node *);
}
/* internal only representation about the status of kmem accounting. */
goto bypass;
if (unlikely(task_in_memcg_oom(current)))
- goto bypass;
+ goto nomem;
+
+ if (gfp_mask & __GFP_NOFAIL)
+ oom = false;
/*
* We always charge the cgroup the mm_struct belongs to.
static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ /*
+ * XXX: css_offline() would be where we should reparent all
+ * memory to prepare the cgroup for destruction. However,
+ * memcg does not do css_tryget() and res_counter charging
+ * under the same RCU lock region, which means that charging
+ * could race with offlining. Offlining only happens to
+ * cgroups with no tasks in them but charges can show up
+ * without any tasks from the swapin path when the target
+ * memcg is looked up from the swapout record and not from the
+ * current task as it usually is. A race like this can leak
+ * charges and put pages with stale cgroup pointers into
+ * circulation:
+ *
+ * #0 #1
+ * lookup_swap_cgroup_id()
+ * rcu_read_lock()
+ * mem_cgroup_lookup()
+ * css_tryget()
+ * rcu_read_unlock()
+ * disable css_tryget()
+ * call_rcu()
+ * offline_css()
+ * reparent_charges()
+ * res_counter_charge()
+ * css_put()
+ * css_free()
+ * pc->mem_cgroup = dead memcg
+ * add page to lru
+ *
+ * The bulk of the charges are still moved in offline_css() to
+ * avoid pinning a lot of pages in case a long-term reference
+ * like a swapout record is deferring the css_free() to long
+ * after offlining. But this makes sure we catch any charges
+ * made after offlining:
+ */
+ mem_cgroup_reparent_charges(memcg);
memcg_destroy_kmem(memcg);
__mem_cgroup_free(memcg);
BUG_ON(!PageHWPoison(p));
return SWAP_FAIL;
}
+ /*
+ * We pinned the head page for hwpoison handling,
+ * now we split the thp and we are interested in
+ * the hwpoisoned raw page, so move the refcount
+ * to it.
+ */
+ if (hpage != p) {
+ put_page(hpage);
+ get_page(p);
+ }
/* THP is split, so ppage should be the real poisoned page. */
ppage = p;
}
if (ret > 0)
ret = -EIO;
} else {
- set_page_hwpoison_huge_page(hpage);
- dequeue_hwpoisoned_huge_page(hpage);
- atomic_long_add(1 << compound_order(hpage),
- &num_poisoned_pages);
+ /* overcommit hugetlb page will be freed to buddy */
+ if (PageHuge(page)) {
+ set_page_hwpoison_huge_page(hpage);
+ dequeue_hwpoisoned_huge_page(hpage);
+ atomic_long_add(1 << compound_order(hpage),
+ &num_poisoned_pages);
+ } else {
+ SetPageHWPoison(page);
+ atomic_long_inc(&num_poisoned_pages);
+ }
}
return ret;
}
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
-#if USE_SPLIT_PTE_PTLOCKS && BLOATED_SPINLOCKS
-static struct kmem_cache *page_ptl_cachep;
-void __init ptlock_cache_init(void)
-{
- page_ptl_cachep = kmem_cache_create("page->ptl", sizeof(spinlock_t), 0,
- SLAB_PANIC, NULL);
-}
-
+#if USE_SPLIT_PTE_PTLOCKS && ALLOC_SPLIT_PTLOCKS
bool ptlock_alloc(struct page *page)
{
spinlock_t *ptl;
break;
vma = vma->vm_next;
}
+
+ if (PageHuge(page)) {
+ if (vma)
+ return alloc_huge_page_noerr(vma, address, 1);
+ else
+ return NULL;
+ }
/*
- * queue_pages_range() confirms that @page belongs to some vma,
- * so vma shouldn't be NULL.
+ * If !vma, alloc_page_vma() will use the task or system default policy.
*/
- BUG_ON(!vma);
-
- if (PageHuge(page))
- return alloc_huge_page_noerr(vma, address, 1);
return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
}
#else
if (nr_failed && (flags & MPOL_MF_STRICT))
err = -EIO;
} else
- putback_lru_pages(&pagelist);
+ putback_movable_pages(&pagelist);
up_write(&mm->mmap_sem);
mpol_out:
return;
}
- p += snprintf(p, maxlen, policy_modes[mode]);
+ p += snprintf(p, maxlen, "%s", policy_modes[mode]);
if (flags & MPOL_MODE_FLAGS) {
p += snprintf(p, buffer + maxlen - p, "=");
#include <linux/hugetlb_cgroup.h>
#include <linux/gfp.h>
#include <linux/balloon_compaction.h>
+#include <linux/mmu_notifier.h>
#include <asm/tlbflush.h>
*/
int migrate_page_move_mapping(struct address_space *mapping,
struct page *newpage, struct page *page,
- struct buffer_head *head, enum migrate_mode mode)
+ struct buffer_head *head, enum migrate_mode mode,
+ int extra_count)
{
- int expected_count = 0;
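+ /*
+ * One reference is held by migration itself; extra_count covers any
+ * additional references the caller is known to hold on the page.
+ */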
+ int expected_count = 1 + extra_count;
void **pslot;
if (!mapping) {
/* Anonymous page without mapping */
- if (page_count(page) != 1)
+ if (page_count(page) != expected_count)
return -EAGAIN;
return MIGRATEPAGE_SUCCESS;
}
pslot = radix_tree_lookup_slot(&mapping->page_tree,
page_index(page));
- expected_count = 2 + page_has_private(page);
+ expected_count += 1 + page_has_private(page);
if (page_count(page) != expected_count ||
radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
spin_unlock_irq(&mapping->tree_lock);
return MIGRATEPAGE_SUCCESS;
}
+/*
+ * Gigantic pages are so large that we do not guarantee that page++ pointer
+ * arithmetic will work across the entire page. We need something more
+ * specialized.
+ */
+static void __copy_gigantic_page(struct page *dst, struct page *src,
+ int nr_pages)
+{
+ int i;
+ struct page *dst_base = dst;
+ struct page *src_base = src;
+
+ for (i = 0; i < nr_pages; ) {
+ cond_resched();
+ copy_highpage(dst, src);
+
+ i++;
+ dst = mem_map_next(dst, dst_base, i);
+ src = mem_map_next(src, src_base, i);
+ }
+}
+
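+/* Copy any huge page: a hugetlbfs page (including gigantic) or a THP. */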
+static void copy_huge_page(struct page *dst, struct page *src)
+{
+ int i;
+ int nr_pages;
+
+ if (PageHuge(src)) {
+ /* hugetlbfs page */
+ struct hstate *h = page_hstate(src);
+ nr_pages = pages_per_huge_page(h);
+
+ if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) {
+ __copy_gigantic_page(dst, src, nr_pages);
+ return;
+ }
+ } else {
+ /* thp page */
+ BUG_ON(!PageTransHuge(src));
+ nr_pages = hpage_nr_pages(src);
+ }
+
+ for (i = 0; i < nr_pages; i++) {
+ cond_resched();
+ copy_highpage(dst + i, src + i);
+ }
+}
+
/*
* Copy the page to its new location
*/
BUG_ON(PageWriteback(page)); /* Writeback must be complete */
- rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode);
+ rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0);
if (rc != MIGRATEPAGE_SUCCESS)
return rc;
head = page_buffers(page);
- rc = migrate_page_move_mapping(mapping, newpage, page, head, mode);
+ rc = migrate_page_move_mapping(mapping, newpage, page, head, mode, 0);
if (rc != MIGRATEPAGE_SUCCESS)
return rc;
return 1;
}
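+
+/*
+ * THP migration holds the page lock for the duration of the migration,
+ * so a locked huge page is treated as potentially migrating and waiters
+ * simply sleep on the page lock.
+ */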
+bool pmd_trans_migrating(pmd_t pmd)
+{
+ struct page *page = pmd_page(pmd);
+ return PageLocked(page);
+}
+
+void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd)
+{
+ struct page *page = pmd_page(*pmd);
+ wait_on_page_locked(page);
+}
+
/*
* Attempt to migrate a misplaced page to the specified destination
* node. Caller is expected to have an elevated reference count on
struct page *page, int node)
{
spinlock_t *ptl;
- unsigned long haddr = address & HPAGE_PMD_MASK;
pg_data_t *pgdat = NODE_DATA(node);
int isolated = 0;
struct page *new_page = NULL;
struct mem_cgroup *memcg = NULL;
int page_lru = page_is_file_cache(page);
+ unsigned long mmun_start = address & HPAGE_PMD_MASK;
+ unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE;
+ pmd_t orig_entry;
/*
* Rate-limit the amount of data that is being migrated to a node.
goto out_fail;
}
+ if (mm_tlb_flush_pending(mm))
+ flush_tlb_range(vma, mmun_start, mmun_end);
+
/* Prepare a page as a migration target */
__set_page_locked(new_page);
SetPageSwapBacked(new_page);
WARN_ON(PageLRU(new_page));
/* Recheck the target PMD */
+ mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
ptl = pmd_lock(mm, pmd);
- if (unlikely(!pmd_same(*pmd, entry))) {
+ if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
+fail_putback:
spin_unlock(ptl);
+ mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
/* Reverse changes made by migrate_page_copy() */
if (TestClearPageActive(new_page))
putback_lru_page(page);
mod_zone_page_state(page_zone(page),
NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR);
- goto out_fail;
+
+ goto out_unlock;
}
/*
*/
mem_cgroup_prepare_migration(page, new_page, &memcg);
+ orig_entry = *pmd;
entry = mk_pmd(new_page, vma->vm_page_prot);
- entry = pmd_mknonnuma(entry);
- entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
entry = pmd_mkhuge(entry);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
- pmdp_clear_flush(vma, haddr, pmd);
- set_pmd_at(mm, haddr, pmd, entry);
- page_add_new_anon_rmap(new_page, vma, haddr);
+ /*
+ * Clear the old entry under pagetable lock and establish the new PTE.
+ * Any parallel GUP will either observe the old page blocking on the
+ * page lock, block on the page table lock or observe the new page.
+ * The SetPageUptodate on the new page and page_add_new_anon_rmap
+ * guarantee the copy is visible before the pagetable update.
+ */
+ flush_cache_range(vma, mmun_start, mmun_end);
+ page_add_new_anon_rmap(new_page, vma, mmun_start);
+ pmdp_clear_flush(vma, mmun_start, pmd);
+ set_pmd_at(mm, mmun_start, pmd, entry);
+ flush_tlb_range(vma, mmun_start, mmun_end);
update_mmu_cache_pmd(vma, address, &entry);
+
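+ /*
+ * A parallel get_user_pages() gained a reference after the pmd was
+ * switched: roll back to the original entry and give up on this
+ * migration attempt.
+ */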
+ if (page_count(page) != 2) {
+ set_pmd_at(mm, mmun_start, pmd, orig_entry);
+ flush_tlb_range(vma, mmun_start, mmun_end);
+ update_mmu_cache_pmd(vma, address, &entry);
+ page_remove_rmap(new_page);
+ goto fail_putback;
+ }
+
page_remove_rmap(page);
+
/*
* Finish the charge transaction under the page table lock to
* prevent split_huge_page() from dividing up the charge
*/
mem_cgroup_end_migration(memcg, page, new_page, true);
spin_unlock(ptl);
+ mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
unlock_page(new_page);
unlock_page(page);
out_fail:
count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR);
out_dropref:
- entry = pmd_mknonnuma(entry);
- set_pmd_at(mm, haddr, pmd, entry);
- update_mmu_cache_pmd(vma, address, &entry);
+ ptl = pmd_lock(mm, pmd);
+ if (pmd_same(*pmd, entry)) {
+ entry = pmd_mknonnuma(entry);
+ set_pmd_at(mm, mmun_start, pmd, entry);
+ update_mmu_cache_pmd(vma, address, &entry);
+ }
+ spin_unlock(ptl);
+out_unlock:
unlock_page(page);
put_page(page);
return 0;
/**
* munlock_vma_page - munlock a vma page
- * @page - page to be unlocked
+ * @page - page to be unlocked, either a normal page or THP page head
+ *
+ * returns the size of the page as a page mask (0 for normal page,
+ * HPAGE_PMD_NR - 1 for THP head page)
*
* called from munlock()/munmap() path with page supposedly on the LRU.
* When we munlock a page, because the vma where we found the page is being
*/
unsigned int munlock_vma_page(struct page *page)
{
- unsigned int page_mask = 0;
+ unsigned int nr_pages;
BUG_ON(!PageLocked(page));
if (TestClearPageMlocked(page)) {
- unsigned int nr_pages = hpage_nr_pages(page);
+ nr_pages = hpage_nr_pages(page);
mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
- page_mask = nr_pages - 1;
if (!isolate_lru_page(page))
__munlock_isolated_page(page);
else
__munlock_isolation_failed(page);
+ } else {
+ nr_pages = hpage_nr_pages(page);
}
- return page_mask;
+ /*
+ * Regardless of the original PageMlocked flag, we determine nr_pages
+ * after touching the flag. This leaves a possible race with a THP page
+ * split, such that a whole THP page was munlocked, but nr_pages == 1.
+ * Returning a smaller mask due to that is OK, the worst that can
+ * happen is subsequent useless scanning of the former tail pages.
+ * The NR_MLOCK accounting can however become broken.
+ */
+ return nr_pages - 1;
}
/**
{
int i;
int nr = pagevec_count(pvec);
- int delta_munlocked = -nr;
+ int delta_munlocked;
struct pagevec pvec_putback;
int pgrescued = 0;
+ pagevec_init(&pvec_putback, 0);
+
/* Phase 1: page isolation */
spin_lock_irq(&zone->lru_lock);
for (i = 0; i < nr; i++) {
/*
* We won't be munlocking this page in the next phase
* but we still need to release the follow_page_mask()
- * pin.
+ * pin. We cannot do it under lru_lock however. If it's
+ * the last pin, __page_cache_release would deadlock.
*/
+ pagevec_add(&pvec_putback, pvec->pages[i]);
pvec->pages[i] = NULL;
- put_page(page);
- delta_munlocked++;
}
}
+ delta_munlocked = -nr + pagevec_count(&pvec_putback);
__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
spin_unlock_irq(&zone->lru_lock);
+ /* Now we can release pins of pages that we are not munlocking */
+ pagevec_release(&pvec_putback);
+
/* Phase 2: page munlock */
- pagevec_init(&pvec_putback, 0);
for (i = 0; i < nr; i++) {
struct page *page = pvec->pages[i];
while (start < end) {
struct page *page = NULL;
- unsigned int page_mask, page_increm;
+ unsigned int page_mask;
+ unsigned long page_increm;
struct pagevec pvec;
struct zone *zone;
int zoneid;
goto next;
}
}
- page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);
+ /* It's a bug to munlock in the middle of a THP page */
+ VM_BUG_ON((start >> PAGE_SHIFT) & page_mask);
+ page_increm = 1 + page_mask;
start += page_increm * PAGE_SIZE;
next:
cond_resched();
pte_t ptent;
bool updated = false;
- ptent = ptep_modify_prot_start(mm, addr, pte);
if (!prot_numa) {
+ ptent = ptep_modify_prot_start(mm, addr, pte);
+ if (pte_numa(ptent))
+ ptent = pte_mknonnuma(ptent);
ptent = pte_modify(ptent, newprot);
updated = true;
} else {
struct page *page;
+ ptent = *pte;
page = vm_normal_page(vma, addr, oldpte);
if (page) {
if (!pte_numa(oldpte)) {
ptent = pte_mknuma(ptent);
+ set_pte_at(mm, addr, pte, ptent);
updated = true;
}
}
if (updated)
pages++;
- ptep_modify_prot_commit(mm, addr, pte, ptent);
+
+ /* Only !prot_numa always clears the pte */
+ if (!prot_numa)
+ ptep_modify_prot_commit(mm, addr, pte, ptent);
} else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) {
swp_entry_t entry = pte_to_swp_entry(oldpte);
BUG_ON(addr >= end);
pgd = pgd_offset(mm, addr);
flush_cache_range(vma, addr, end);
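+ /*
+ * Advertise the pending TLB flush: NUMA hinting faults check
+ * mm_tlb_flush_pending() and flush the range before migrating.
+ */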
+ set_tlb_flush_pending(mm);
do {
next = pgd_addr_end(addr, end);
if (pgd_none_or_clear_bad(pgd))
/* Only flush the TLB if we actually modified any entries: */
if (pages)
flush_tlb_range(vma, start, end);
+ clear_tlb_flush_pending(mm);
return pages;
}
static bool zone_local(struct zone *local_zone, struct zone *zone)
{
- return node_distance(local_zone->node, zone->node) == LOCAL_DISTANCE;
+ return local_zone->node == zone->node;
}
static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
* page was allocated in should have no effect on the
* time the page has in memory before being reclaimed.
*
- * When zone_reclaim_mode is enabled, try to stay in
- * local zones in the fastpath. If that fails, the
- * slowpath is entered, which will do another pass
- * starting with the local zones, but ultimately fall
- * back to remote zones that do not partake in the
- * fairness round-robin cycle of this zonelist.
+ * Try to stay in local zones in the fastpath. If
+ * that fails, the slowpath is entered, which will do
+ * another pass starting with the local zones, but
+ * ultimately fall back to remote zones that do not
+ * partake in the fairness round-robin cycle of this
+ * zonelist.
*/
if (alloc_flags & ALLOC_WMARK_LOW) {
if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
continue;
- if (zone_reclaim_mode &&
- !zone_local(preferred_zone, zone))
+ if (!zone_local(preferred_zone, zone))
continue;
}
/*
* thrash fairness information for zones that are not
* actually part of this zonelist's round-robin cycle.
*/
- if (zone_reclaim_mode && !zone_local(preferred_zone, zone))
+ if (!zone_local(preferred_zone, zone))
continue;
mod_zone_page_state(zone, NR_ALLOC_BATCH,
high_wmark_pages(zone) -
pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep)
{
+ struct mm_struct *mm = (vma)->vm_mm;
pte_t pte;
- pte = ptep_get_and_clear((vma)->vm_mm, address, ptep);
- if (pte_accessible(pte))
+ pte = ptep_get_and_clear(mm, address, ptep);
+ if (pte_accessible(mm, pte))
flush_tlb_page(vma, address);
return pte;
}
void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
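+ /*
+ * Clear the NUMA bit first: applying pmd_mknotpresent() to a
+ * pmd_numa entry would leave a pmd that pmd_present() still
+ * treats as present.
+ */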
+ pmd_t entry = *pmdp;
+ if (pmd_numa(entry))
+ entry = pmd_mknonnuma(entry);
- set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp));
+ set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
}
spinlock_t *ptl;
if (unlikely(PageHuge(page))) {
+ /* when pud is not present, pte will be NULL */
pte = huge_pte_offset(mm, address);
+ if (!pte)
+ return NULL;
+
ptl = huge_pte_lockptr(page_hstate(page), mm, pte);
goto check;
}
.d_dname = simple_dname
};
-/**
- * shmem_file_setup - get an unlinked file living in tmpfs
- * @name: name for dentry (to be seen in /proc/<pid>/maps
- * @size: size to be set for the file
- * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
- */
-struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
+static struct file *__shmem_file_setup(const char *name, loff_t size,
+ unsigned long flags, unsigned int i_flags)
{
struct file *res;
struct inode *inode;
if (!inode)
goto put_dentry;
+ inode->i_flags |= i_flags;
d_instantiate(path.dentry, inode);
inode->i_size = size;
clear_nlink(inode); /* It is unlinked */
shmem_unacct_size(flags, size);
return res;
}
+
+/**
+ * shmem_kernel_file_setup - get an unlinked file living in tmpfs which must be
+ * kernel internal. There will be NO LSM permission checks against the
+ * underlying inode. So users of this interface must do LSM checks at a
+ * higher layer. The one user is the big_key implementation. LSM checks
+ * are provided at the key level rather than the inode level.
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned long flags)
+{
+ return __shmem_file_setup(name, size, flags, S_PRIVATE);
+}
+
+/**
+ * shmem_file_setup - get an unlinked file living in tmpfs
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
+{
+ return __shmem_file_setup(name, size, flags, 0);
+}
EXPORT_SYMBOL_GPL(shmem_file_setup);
/**
*/
static bool pfmemalloc_active __read_mostly;
-/*
- * kmem_bufctl_t:
- *
- * Bufctl's are used for linking objs within a slab
- * linked offsets.
- *
- * This implementation relies on "struct page" for locating the cache &
- * slab an object belongs to.
- * This allows the bufctl structure to be small (one int), but limits
- * the number of objects a slab (not a cache) can contain when off-slab
- * bufctls are used. The limit is the size of the largest general cache
- * that does not use off-slab slabs.
- * For 32bit archs with 4 kB pages, is this 56.
- * This is not serious, as it is only for large objects, when it is unwise
- * to have too many per slab.
- * Note: This limit can be raised by introducing a general cache whose size
- * is less than 512 (PAGE_SIZE<<3), but greater than 256.
- */
-
-typedef unsigned int kmem_bufctl_t;
-#define BUFCTL_END (((kmem_bufctl_t)(~0U))-0)
-#define BUFCTL_FREE (((kmem_bufctl_t)(~0U))-1)
-#define BUFCTL_ACTIVE (((kmem_bufctl_t)(~0U))-2)
-#define SLAB_LIMIT (((kmem_bufctl_t)(~0U))-3)
-
-/*
- * struct slab_rcu
- *
- * slab_destroy on a SLAB_DESTROY_BY_RCU cache uses this structure to
- * arrange for kmem_freepages to be called via RCU. This is useful if
- * we need to approach a kernel structure obliquely, from its address
- * obtained without the usual locking. We can lock the structure to
- * stabilize it and check it's still at the given address, only if we
- * can be sure that the memory has not been meanwhile reused for some
- * other kind of object (which our subsystem's lock might corrupt).
- *
- * rcu_read_lock before reading the address, then rcu_read_unlock after
- * taking the spinlock within the structure expected at that address.
- */
-struct slab_rcu {
- struct rcu_head head;
- struct kmem_cache *cachep;
- void *addr;
-};
-
-/*
- * struct slab
- *
- * Manages the objs in a slab. Placed either at the beginning of mem allocated
- * for a slab, or allocated from an general cache.
- * Slabs are chained into three list: fully used, partial, fully free slabs.
- */
-struct slab {
- union {
- struct {
- struct list_head list;
- unsigned long colouroff;
- void *s_mem; /* including colour offset */
- unsigned int inuse; /* num of objs active in slab */
- kmem_bufctl_t free;
- unsigned short nodeid;
- };
- struct slab_rcu __slab_cover_slab_rcu;
- };
-};
-
/*
* struct array_cache
*
return page->slab_cache;
}
-static inline struct slab *virt_to_slab(const void *obj)
-{
- struct page *page = virt_to_head_page(obj);
-
- VM_BUG_ON(!PageSlab(page));
- return page->slab_page;
-}
-
-static inline void *index_to_obj(struct kmem_cache *cache, struct slab *slab,
+static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
unsigned int idx)
{
- return slab->s_mem + cache->size * idx;
+ return page->s_mem + cache->size * idx;
}
/*
* reciprocal_divide(offset, cache->reciprocal_buffer_size)
*/
static inline unsigned int obj_to_index(const struct kmem_cache *cache,
- const struct slab *slab, void *obj)
+ const struct page *page, void *obj)
{
- u32 offset = (obj - slab->s_mem);
+ u32 offset = (obj - page->s_mem);
return reciprocal_divide(offset, cache->reciprocal_buffer_size);
}
static size_t slab_mgmt_size(size_t nr_objs, size_t align)
{
- return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
+ return ALIGN(nr_objs * sizeof(unsigned int), align);
}
/*
* on it. For the latter case, the memory allocated for a
* slab is used for:
*
- * - The struct slab
- * - One kmem_bufctl_t for each object
+ * - One unsigned int for each object
* - Padding to respect alignment of @align
* - @buffer_size bytes for each object
*
mgmt_size = 0;
nr_objs = slab_size / buffer_size;
- if (nr_objs > SLAB_LIMIT)
- nr_objs = SLAB_LIMIT;
} else {
/*
* Ignore padding for the initial guess. The padding
* into the memory allocation when taking the padding
* into account.
*/
- nr_objs = (slab_size - sizeof(struct slab)) /
- (buffer_size + sizeof(kmem_bufctl_t));
+ nr_objs = (slab_size) / (buffer_size + sizeof(unsigned int));
/*
* This calculated number will be either the right
> slab_size)
nr_objs--;
- if (nr_objs > SLAB_LIMIT)
- nr_objs = SLAB_LIMIT;
-
mgmt_size = slab_mgmt_size(nr_objs, align);
}
*num = nr_objs;
return nc;
}
-static inline bool is_slab_pfmemalloc(struct slab *slabp)
+static inline bool is_slab_pfmemalloc(struct page *page)
{
- struct page *page = virt_to_page(slabp->s_mem);
-
return PageSlabPfmemalloc(page);
}
struct array_cache *ac)
{
struct kmem_cache_node *n = cachep->node[numa_mem_id()];
- struct slab *slabp;
+ struct page *page;
unsigned long flags;
if (!pfmemalloc_active)
return;
spin_lock_irqsave(&n->list_lock, flags);
- list_for_each_entry(slabp, &n->slabs_full, list)
- if (is_slab_pfmemalloc(slabp))
+ list_for_each_entry(page, &n->slabs_full, lru)
+ if (is_slab_pfmemalloc(page))
goto out;
- list_for_each_entry(slabp, &n->slabs_partial, list)
- if (is_slab_pfmemalloc(slabp))
+ list_for_each_entry(page, &n->slabs_partial, lru)
+ if (is_slab_pfmemalloc(page))
goto out;
- list_for_each_entry(slabp, &n->slabs_free, list)
- if (is_slab_pfmemalloc(slabp))
+ list_for_each_entry(page, &n->slabs_free, lru)
+ if (is_slab_pfmemalloc(page))
goto out;
pfmemalloc_active = false;
*/
n = cachep->node[numa_mem_id()];
if (!list_empty(&n->slabs_free) && force_refill) {
- struct slab *slabp = virt_to_slab(objp);
- ClearPageSlabPfmemalloc(virt_to_head_page(slabp->s_mem));
+ struct page *page = virt_to_head_page(objp);
+ ClearPageSlabPfmemalloc(page);
clear_obj_pfmemalloc(&objp);
recheck_pfmemalloc_active(cachep, ac);
return objp;
static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
{
- struct slab *slabp = virt_to_slab(objp);
- int nodeid = slabp->nodeid;
+ int nodeid = page_to_nid(virt_to_page(objp));
struct kmem_cache_node *n;
struct array_cache *alien = NULL;
int node;
* Make sure we are not freeing a object from another node to the array
* cache on this cpu.
*/
- if (likely(slabp->nodeid == node))
+ if (likely(nodeid == node))
return 0;
n = cachep->node[node];
{
int i;
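+ /* page->rcu_head shares storage with page->lru; make sure it fits */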
+ BUILD_BUG_ON(sizeof(((struct page *)NULL)->lru) <
+ sizeof(struct rcu_head));
kmem_cache = &kmem_cache_boot;
setup_node_pointer(kmem_cache);
slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid)
{
struct kmem_cache_node *n;
- struct slab *slabp;
+ struct page *page;
unsigned long flags;
int node;
continue;
spin_lock_irqsave(&n->list_lock, flags);
- list_for_each_entry(slabp, &n->slabs_full, list) {
+ list_for_each_entry(page, &n->slabs_full, lru) {
active_objs += cachep->num;
active_slabs++;
}
- list_for_each_entry(slabp, &n->slabs_partial, list) {
- active_objs += slabp->inuse;
+ list_for_each_entry(page, &n->slabs_partial, lru) {
+ active_objs += page->active;
active_slabs++;
}
- list_for_each_entry(slabp, &n->slabs_free, list)
+ list_for_each_entry(page, &n->slabs_free, lru)
num_slabs++;
free_objects += n->free_objects;
* did not request dmaable memory, we might get it, but that
* would be relatively rare and ignorable.
*/
-static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
+static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
+ int nodeid)
{
struct page *page;
int nr_pages;
- int i;
-
-#ifndef CONFIG_MMU
- /*
- * Nommu uses slab's for process anonymous memory allocations, and thus
- * requires __GFP_COMP to properly refcount higher order allocations
- */
- flags |= __GFP_COMP;
-#endif
flags |= cachep->allocflags;
if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
else
add_zone_page_state(page_zone(page),
NR_SLAB_UNRECLAIMABLE, nr_pages);
- for (i = 0; i < nr_pages; i++) {
- __SetPageSlab(page + i);
-
- if (page->pfmemalloc)
- SetPageSlabPfmemalloc(page + i);
- }
+ __SetPageSlab(page);
+ if (page->pfmemalloc)
+ SetPageSlabPfmemalloc(page);
memcg_bind_pages(cachep, cachep->gfporder);
if (kmemcheck_enabled && !(cachep->flags & SLAB_NOTRACK)) {
kmemcheck_mark_unallocated_pages(page, nr_pages);
}
- return page_address(page);
+ return page;
}
/*
* Interface to system's page release.
*/
-static void kmem_freepages(struct kmem_cache *cachep, void *addr)
+static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
{
- unsigned long i = (1 << cachep->gfporder);
- struct page *page = virt_to_page(addr);
- const unsigned long nr_freed = i;
+ const unsigned long nr_freed = (1 << cachep->gfporder);
kmemcheck_free_shadow(page, cachep->gfporder);
else
sub_zone_page_state(page_zone(page),
NR_SLAB_UNRECLAIMABLE, nr_freed);
- while (i--) {
- BUG_ON(!PageSlab(page));
- __ClearPageSlabPfmemalloc(page);
- __ClearPageSlab(page);
- page++;
- }
+
+ BUG_ON(!PageSlab(page));
+ __ClearPageSlabPfmemalloc(page);
+ __ClearPageSlab(page);
+ page_mapcount_reset(page);
+ page->mapping = NULL;
memcg_release_pages(cachep, cachep->gfporder);
if (current->reclaim_state)
current->reclaim_state->reclaimed_slab += nr_freed;
- free_memcg_kmem_pages((unsigned long)addr, cachep->gfporder);
+ __free_memcg_kmem_pages(page, cachep->gfporder);
}
static void kmem_rcu_free(struct rcu_head *head)
{
- struct slab_rcu *slab_rcu = (struct slab_rcu *)head;
- struct kmem_cache *cachep = slab_rcu->cachep;
+ struct kmem_cache *cachep;
+ struct page *page;
- kmem_freepages(cachep, slab_rcu->addr);
- if (OFF_SLAB(cachep))
- kmem_cache_free(cachep->slabp_cache, slab_rcu);
+ page = container_of(head, struct page, rcu_head);
+ cachep = page->slab_cache;
+
+ kmem_freepages(cachep, page);
}
#if DEBUG
/* Print some data about the neighboring objects, if they
* exist:
*/
- struct slab *slabp = virt_to_slab(objp);
+ struct page *page = virt_to_head_page(objp);
unsigned int objnr;
- objnr = obj_to_index(cachep, slabp, objp);
+ objnr = obj_to_index(cachep, page, objp);
if (objnr) {
- objp = index_to_obj(cachep, slabp, objnr - 1);
+ objp = index_to_obj(cachep, page, objnr - 1);
realobj = (char *)objp + obj_offset(cachep);
printk(KERN_ERR "Prev obj: start=%p, len=%d\n",
realobj, size);
print_objinfo(cachep, objp, 2);
}
if (objnr + 1 < cachep->num) {
- objp = index_to_obj(cachep, slabp, objnr + 1);
+ objp = index_to_obj(cachep, page, objnr + 1);
realobj = (char *)objp + obj_offset(cachep);
printk(KERN_ERR "Next obj: start=%p, len=%d\n",
realobj, size);
#endif
#if DEBUG
-static void slab_destroy_debugcheck(struct kmem_cache *cachep, struct slab *slabp)
+static void slab_destroy_debugcheck(struct kmem_cache *cachep,
+ struct page *page)
{
int i;
for (i = 0; i < cachep->num; i++) {
- void *objp = index_to_obj(cachep, slabp, i);
+ void *objp = index_to_obj(cachep, page, i);
if (cachep->flags & SLAB_POISON) {
#ifdef CONFIG_DEBUG_PAGEALLOC
}
}
#else
-static void slab_destroy_debugcheck(struct kmem_cache *cachep, struct slab *slabp)
+static void slab_destroy_debugcheck(struct kmem_cache *cachep,
+ struct page *page)
{
}
#endif
* Before calling the slab must have been unlinked from the cache. The
* cache-lock is not held/needed.
*/
-static void slab_destroy(struct kmem_cache *cachep, struct slab *slabp)
+static void slab_destroy(struct kmem_cache *cachep, struct page *page)
{
- void *addr = slabp->s_mem - slabp->colouroff;
+ void *freelist;
- slab_destroy_debugcheck(cachep, slabp);
+ freelist = page->freelist;
+ slab_destroy_debugcheck(cachep, page);
if (unlikely(cachep->flags & SLAB_DESTROY_BY_RCU)) {
- struct slab_rcu *slab_rcu;
+ struct rcu_head *head;
+
+ /*
+ * RCU free overlays the RCU head over the LRU.
+ * slab_page was likewise overlaid over the LRU, but it is not
+ * used from this point on, so we can reuse that space safely.
+ */
+ head = (void *)&page->rcu_head;
+ call_rcu(head, kmem_rcu_free);
- slab_rcu = (struct slab_rcu *)slabp;
- slab_rcu->cachep = cachep;
- slab_rcu->addr = addr;
- call_rcu(&slab_rcu->head, kmem_rcu_free);
} else {
- kmem_freepages(cachep, addr);
- if (OFF_SLAB(cachep))
- kmem_cache_free(cachep->slabp_cache, slabp);
+ kmem_freepages(cachep, page);
}
+
+ /*
+ * The freelist is no longer referenced from here on, so it can be
+ * freed now even though the page itself may be freed later in RCU
+ * context.
+ */
+ if (OFF_SLAB(cachep))
+ kmem_cache_free(cachep->freelist_cache, freelist);
}
/**
* use off-slab slabs. Needed to avoid a possible
* looping condition in cache_grow().
*/
- offslab_limit = size - sizeof(struct slab);
- offslab_limit /= sizeof(kmem_bufctl_t);
+ offslab_limit = size;
+ offslab_limit /= sizeof(unsigned int);
if (num > offslab_limit)
break;
int
__kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
{
- size_t left_over, slab_size, ralign;
+ size_t left_over, freelist_size, ralign;
gfp_t gfp;
int err;
size_t size = cachep->size;
if (!cachep->num)
return -E2BIG;
- slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
- + sizeof(struct slab), cachep->align);
+ freelist_size =
+ ALIGN(cachep->num * sizeof(unsigned int), cachep->align);
/*
* If the slab has been placed off-slab, and we have enough space then
* move it on-slab. This is at the expense of any extra colouring.
*/
- if (flags & CFLGS_OFF_SLAB && left_over >= slab_size) {
+ if (flags & CFLGS_OFF_SLAB && left_over >= freelist_size) {
flags &= ~CFLGS_OFF_SLAB;
- left_over -= slab_size;
+ left_over -= freelist_size;
}
if (flags & CFLGS_OFF_SLAB) {
/* really off slab. No need for manual alignment */
- slab_size =
- cachep->num * sizeof(kmem_bufctl_t) + sizeof(struct slab);
+ freelist_size = cachep->num * sizeof(unsigned int);
#ifdef CONFIG_PAGE_POISONING
/* If we're going to use the generic kernel_map_pages()
if (cachep->colour_off < cachep->align)
cachep->colour_off = cachep->align;
cachep->colour = left_over / cachep->colour_off;
- cachep->slab_size = slab_size;
+ cachep->freelist_size = freelist_size;
cachep->flags = flags;
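+ /* slab pages are always compound now; head-page flags cover the whole slab */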
- cachep->allocflags = 0;
+ cachep->allocflags = __GFP_COMP;
if (CONFIG_ZONE_DMA_FLAG && (flags & SLAB_CACHE_DMA))
cachep->allocflags |= GFP_DMA;
cachep->size = size;
cachep->reciprocal_buffer_size = reciprocal_value(size);
if (flags & CFLGS_OFF_SLAB) {
- cachep->slabp_cache = kmalloc_slab(slab_size, 0u);
+ cachep->freelist_cache = kmalloc_slab(freelist_size, 0u);
/*
* This is a possibility for one of the malloc_sizes caches.
* But since we go off slab only for object size greater than
* this should not happen at all.
* But leave a BUG_ON for some lucky dude.
*/
- BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));
+ BUG_ON(ZERO_OR_NULL_PTR(cachep->freelist_cache));
}
err = setup_cpu_cache(cachep, gfp);
{
struct list_head *p;
int nr_freed;
- struct slab *slabp;
+ struct page *page;
nr_freed = 0;
while (nr_freed < tofree && !list_empty(&n->slabs_free)) {
goto out;
}
- slabp = list_entry(p, struct slab, list);
+ page = list_entry(p, struct page, lru);
#if DEBUG
- BUG_ON(slabp->inuse);
+ BUG_ON(page->active);
#endif
- list_del(&slabp->list);
+ list_del(&page->lru);
/*
* Safe to drop the lock. The slab is no longer linked
* to the cache.
*/
n->free_objects -= cache->num;
spin_unlock_irq(&n->list_lock);
- slab_destroy(cache, slabp);
+ slab_destroy(cache, page);
nr_freed++;
}
out:
* descriptors in kmem_cache_create, we search through the malloc_sizes array.
* If we are creating a malloc_sizes cache here it would not be visible to
* kmem_find_general_cachep till the initialization is complete.
- * Hence we cannot have slabp_cache same as the original cache.
+ * Hence the freelist_cache cannot be the same as the original cache.
*/
-static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
- int colour_off, gfp_t local_flags,
- int nodeid)
+static void *alloc_slabmgmt(struct kmem_cache *cachep,
+ struct page *page, int colour_off,
+ gfp_t local_flags, int nodeid)
{
- struct slab *slabp;
+ void *freelist;
+ void *addr = page_address(page);
if (OFF_SLAB(cachep)) {
/* Slab management obj is off-slab. */
- slabp = kmem_cache_alloc_node(cachep->slabp_cache,
+ freelist = kmem_cache_alloc_node(cachep->freelist_cache,
local_flags, nodeid);
- /*
- * If the first object in the slab is leaked (it's allocated
- * but no one has a reference to it), we want to make sure
- * kmemleak does not treat the ->s_mem pointer as a reference
- * to the object. Otherwise we will not report the leak.
- */
- kmemleak_scan_area(&slabp->list, sizeof(struct list_head),
- local_flags);
- if (!slabp)
+ if (!freelist)
return NULL;
} else {
- slabp = objp + colour_off;
- colour_off += cachep->slab_size;
+ freelist = addr + colour_off;
+ colour_off += cachep->freelist_size;
}
- slabp->inuse = 0;
- slabp->colouroff = colour_off;
- slabp->s_mem = objp + colour_off;
- slabp->nodeid = nodeid;
- slabp->free = 0;
- return slabp;
+ page->active = 0;
+ page->s_mem = addr + colour_off;
+ return freelist;
}
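+
+/*
+ * The freelist is an array of object indices stored at page->freelist.
+ * page->active counts the objects in use; entries at index >= page->active
+ * hold the indices of the remaining free objects.
+ */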
-static inline kmem_bufctl_t *slab_bufctl(struct slab *slabp)
+static inline unsigned int *slab_freelist(struct page *page)
{
- return (kmem_bufctl_t *) (slabp + 1);
+ return (unsigned int *)(page->freelist);
}
static void cache_init_objs(struct kmem_cache *cachep,
- struct slab *slabp)
+ struct page *page)
{
int i;
for (i = 0; i < cachep->num; i++) {
- void *objp = index_to_obj(cachep, slabp, i);
+ void *objp = index_to_obj(cachep, page, i);
#if DEBUG
/* need to poison the objs? */
if (cachep->flags & SLAB_POISON)
if (cachep->ctor)
cachep->ctor(objp);
#endif
- slab_bufctl(slabp)[i] = i + 1;
+ slab_freelist(page)[i] = i;
}
- slab_bufctl(slabp)[i - 1] = BUFCTL_END;
}
static void kmem_flagcheck(struct kmem_cache *cachep, gfp_t flags)
}
}
-static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slabp,
+static void *slab_get_obj(struct kmem_cache *cachep, struct page *page,
int nodeid)
{
- void *objp = index_to_obj(cachep, slabp, slabp->free);
- kmem_bufctl_t next;
+ void *objp;
- slabp->inuse++;
- next = slab_bufctl(slabp)[slabp->free];
+ objp = index_to_obj(cachep, page, slab_freelist(page)[page->active]);
+ page->active++;
#if DEBUG
- slab_bufctl(slabp)[slabp->free] = BUFCTL_FREE;
- WARN_ON(slabp->nodeid != nodeid);
+ WARN_ON(page_to_nid(virt_to_page(objp)) != nodeid);
#endif
- slabp->free = next;
return objp;
}
-static void slab_put_obj(struct kmem_cache *cachep, struct slab *slabp,
+static void slab_put_obj(struct kmem_cache *cachep, struct page *page,
void *objp, int nodeid)
{
- unsigned int objnr = obj_to_index(cachep, slabp, objp);
-
+ unsigned int objnr = obj_to_index(cachep, page, objp);
#if DEBUG
+ unsigned int i;
+
/* Verify that the slab belongs to the intended node */
- WARN_ON(slabp->nodeid != nodeid);
+ WARN_ON(page_to_nid(virt_to_page(objp)) != nodeid);
- if (slab_bufctl(slabp)[objnr] + 1 <= SLAB_LIMIT + 1) {
- printk(KERN_ERR "slab: double free detected in cache "
- "'%s', objp %p\n", cachep->name, objp);
- BUG();
+ /* Verify double free bug */
+ for (i = page->active; i < cachep->num; i++) {
+ if (slab_freelist(page)[i] == objnr) {
+ printk(KERN_ERR "slab: double free detected in cache "
+ "'%s', objp %p\n", cachep->name, objp);
+ BUG();
+ }
}
#endif
- slab_bufctl(slabp)[objnr] = slabp->free;
- slabp->free = objnr;
- slabp->inuse--;
+ page->active--;
+ slab_freelist(page)[page->active] = objnr;
}
/*
* for the slab allocator to be able to lookup the cache and slab of a
* virtual address for kfree, ksize, and slab debugging.
*/
-static void slab_map_pages(struct kmem_cache *cache, struct slab *slab,
- void *addr)
+static void slab_map_pages(struct kmem_cache *cache, struct page *page,
+ void *freelist)
{
- int nr_pages;
- struct page *page;
-
- page = virt_to_page(addr);
-
- nr_pages = 1;
- if (likely(!PageCompound(page)))
- nr_pages <<= cache->gfporder;
-
- do {
- page->slab_cache = cache;
- page->slab_page = slab;
- page++;
- } while (--nr_pages);
+ page->slab_cache = cache;
+ page->freelist = freelist;
}
/*
* kmem_cache_alloc() when there are no active objs left in a cache.
*/
static int cache_grow(struct kmem_cache *cachep,
- gfp_t flags, int nodeid, void *objp)
+ gfp_t flags, int nodeid, struct page *page)
{
- struct slab *slabp;
+ void *freelist;
size_t offset;
gfp_t local_flags;
struct kmem_cache_node *n;
* Get mem for the objs. Attempt to allocate a physical page from
* 'nodeid'.
*/
- if (!objp)
- objp = kmem_getpages(cachep, local_flags, nodeid);
- if (!objp)
+ if (!page)
+ page = kmem_getpages(cachep, local_flags, nodeid);
+ if (!page)
goto failed;
/* Get slab management. */
- slabp = alloc_slabmgmt(cachep, objp, offset,
+ freelist = alloc_slabmgmt(cachep, page, offset,
local_flags & ~GFP_CONSTRAINT_MASK, nodeid);
- if (!slabp)
+ if (!freelist)
goto opps1;
- slab_map_pages(cachep, slabp, objp);
+ slab_map_pages(cachep, page, freelist);
- cache_init_objs(cachep, slabp);
+ cache_init_objs(cachep, page);
if (local_flags & __GFP_WAIT)
local_irq_disable();
spin_lock(&n->list_lock);
/* Make slab active. */
- list_add_tail(&slabp->list, &(n->slabs_free));
+ list_add_tail(&page->lru, &(n->slabs_free));
STATS_INC_GROWN(cachep);
n->free_objects += cachep->num;
spin_unlock(&n->list_lock);
return 1;
opps1:
- kmem_freepages(cachep, objp);
+ kmem_freepages(cachep, page);
failed:
if (local_flags & __GFP_WAIT)
local_irq_disable();
static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
unsigned long caller)
{
- struct page *page;
unsigned int objnr;
- struct slab *slabp;
+ struct page *page;
BUG_ON(virt_to_cache(objp) != cachep);
kfree_debugcheck(objp);
page = virt_to_head_page(objp);
- slabp = page->slab_page;
-
if (cachep->flags & SLAB_RED_ZONE) {
verify_redzone_free(cachep, objp);
*dbg_redzone1(cachep, objp) = RED_INACTIVE;
if (cachep->flags & SLAB_STORE_USER)
*dbg_userword(cachep, objp) = (void *)caller;
- objnr = obj_to_index(cachep, slabp, objp);
+ objnr = obj_to_index(cachep, page, objp);
BUG_ON(objnr >= cachep->num);
- BUG_ON(objp != index_to_obj(cachep, slabp, objnr));
+ BUG_ON(objp != index_to_obj(cachep, page, objnr));
-#ifdef CONFIG_DEBUG_SLAB_LEAK
- slab_bufctl(slabp)[objnr] = BUFCTL_FREE;
-#endif
if (cachep->flags & SLAB_POISON) {
#ifdef CONFIG_DEBUG_PAGEALLOC
if ((cachep->size % PAGE_SIZE)==0 && OFF_SLAB(cachep)) {
return objp;
}
-static void check_slabp(struct kmem_cache *cachep, struct slab *slabp)
-{
- kmem_bufctl_t i;
- int entries = 0;
-
- /* Check slab's freelist to see if this obj is there. */
- for (i = slabp->free; i != BUFCTL_END; i = slab_bufctl(slabp)[i]) {
- entries++;
- if (entries > cachep->num || i >= cachep->num)
- goto bad;
- }
- if (entries != cachep->num - slabp->inuse) {
-bad:
- printk(KERN_ERR "slab: Internal list corruption detected in "
- "cache '%s'(%d), slabp %p(%d). Tainted(%s). Hexdump:\n",
- cachep->name, cachep->num, slabp, slabp->inuse,
- print_tainted());
- print_hex_dump(KERN_ERR, "", DUMP_PREFIX_OFFSET, 16, 1, slabp,
- sizeof(*slabp) + cachep->num * sizeof(kmem_bufctl_t),
- 1);
- BUG();
- }
-}
#else
#define kfree_debugcheck(x) do { } while(0)
#define cache_free_debugcheck(x,objp,z) (objp)
-#define check_slabp(x,y) do { } while(0)
#endif
static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags,
while (batchcount > 0) {
struct list_head *entry;
- struct slab *slabp;
+ struct page *page;
/* Get slab alloc is to come from. */
entry = n->slabs_partial.next;
if (entry == &n->slabs_partial) {
goto must_grow;
}
- slabp = list_entry(entry, struct slab, list);
- check_slabp(cachep, slabp);
+ page = list_entry(entry, struct page, lru);
check_spinlock_acquired(cachep);
/*
* there must be at least one object available for
* allocation.
*/
- BUG_ON(slabp->inuse >= cachep->num);
+ BUG_ON(page->active >= cachep->num);
- while (slabp->inuse < cachep->num && batchcount--) {
+ while (page->active < cachep->num && batchcount--) {
STATS_INC_ALLOCED(cachep);
STATS_INC_ACTIVE(cachep);
STATS_SET_HIGH(cachep);
- ac_put_obj(cachep, ac, slab_get_obj(cachep, slabp,
+ ac_put_obj(cachep, ac, slab_get_obj(cachep, page,
node));
}
- check_slabp(cachep, slabp);
/* move slabp to correct slabp list: */
- list_del(&slabp->list);
- if (slabp->free == BUFCTL_END)
- list_add(&slabp->list, &n->slabs_full);
+ list_del(&page->lru);
+ if (page->active == cachep->num)
+ list_add(&page->lru, &n->slabs_full);
else
- list_add(&slabp->list, &n->slabs_partial);
+ list_add(&page->lru, &n->slabs_partial);
}
must_grow:
*dbg_redzone1(cachep, objp) = RED_ACTIVE;
*dbg_redzone2(cachep, objp) = RED_ACTIVE;
}
-#ifdef CONFIG_DEBUG_SLAB_LEAK
- {
- struct slab *slabp;
- unsigned objnr;
-
- slabp = virt_to_head_page(objp)->slab_page;
- objnr = (unsigned)(objp - slabp->s_mem) / cachep->size;
- slab_bufctl(slabp)[objnr] = BUFCTL_ACTIVE;
- }
-#endif
objp += obj_offset(cachep);
if (cachep->ctor && cachep->flags & SLAB_POISON)
cachep->ctor(objp);
* We may trigger various forms of reclaim on the allowed
* set and go into memory reserves if necessary.
*/
+ struct page *page;
+
if (local_flags & __GFP_WAIT)
local_irq_enable();
kmem_flagcheck(cache, flags);
- obj = kmem_getpages(cache, local_flags, numa_mem_id());
+ page = kmem_getpages(cache, local_flags, numa_mem_id());
if (local_flags & __GFP_WAIT)
local_irq_disable();
- if (obj) {
+ if (page) {
/*
* Insert into the appropriate per node queues
*/
- nid = page_to_nid(virt_to_page(obj));
- if (cache_grow(cache, flags, nid, obj)) {
+ nid = page_to_nid(page);
+ if (cache_grow(cache, flags, nid, page)) {
obj = ____cache_alloc_node(cache,
flags | GFP_THISNODE, nid);
if (!obj)
int nodeid)
{
struct list_head *entry;
- struct slab *slabp;
+ struct page *page;
struct kmem_cache_node *n;
void *obj;
int x;
goto must_grow;
}
- slabp = list_entry(entry, struct slab, list);
+ page = list_entry(entry, struct page, lru);
check_spinlock_acquired_node(cachep, nodeid);
- check_slabp(cachep, slabp);
STATS_INC_NODEALLOCS(cachep);
STATS_INC_ACTIVE(cachep);
STATS_SET_HIGH(cachep);
- BUG_ON(slabp->inuse == cachep->num);
+ BUG_ON(page->active == cachep->num);
- obj = slab_get_obj(cachep, slabp, nodeid);
- check_slabp(cachep, slabp);
+ obj = slab_get_obj(cachep, page, nodeid);
n->free_objects--;
/* move slabp to correct slabp list: */
- list_del(&slabp->list);
+ list_del(&page->lru);
- if (slabp->free == BUFCTL_END)
- list_add(&slabp->list, &n->slabs_full);
+ if (page->active == cachep->num)
+ list_add(&page->lru, &n->slabs_full);
else
- list_add(&slabp->list, &n->slabs_partial);
+ list_add(&page->lru, &n->slabs_partial);
spin_unlock(&n->list_lock);
goto done;
for (i = 0; i < nr_objects; i++) {
void *objp;
- struct slab *slabp;
+ struct page *page;
clear_obj_pfmemalloc(&objpp[i]);
objp = objpp[i];
- slabp = virt_to_slab(objp);
+ page = virt_to_head_page(objp);
n = cachep->node[node];
- list_del(&slabp->list);
+ list_del(&page->lru);
check_spinlock_acquired_node(cachep, node);
- check_slabp(cachep, slabp);
- slab_put_obj(cachep, slabp, objp, node);
+ slab_put_obj(cachep, page, objp, node);
STATS_DEC_ACTIVE(cachep);
n->free_objects++;
- check_slabp(cachep, slabp);
/* fixup slab chains */
- if (slabp->inuse == 0) {
+ if (page->active == 0) {
if (n->free_objects > n->free_limit) {
n->free_objects -= cachep->num;
/* No need to drop any previously held
* a different cache, refer to comments before
* alloc_slabmgmt.
*/
- slab_destroy(cachep, slabp);
+ slab_destroy(cachep, page);
} else {
- list_add(&slabp->list, &n->slabs_free);
+ list_add(&page->lru, &n->slabs_free);
}
} else {
/* Unconditionally move a slab to the end of the
* partial list on free - maximum time for the
* other objects to be freed, too.
*/
- list_add_tail(&slabp->list, &n->slabs_partial);
+ list_add_tail(&page->lru, &n->slabs_partial);
}
}
}
p = n->slabs_free.next;
while (p != &(n->slabs_free)) {
- struct slab *slabp;
+ struct page *page;
- slabp = list_entry(p, struct slab, list);
- BUG_ON(slabp->inuse);
+ page = list_entry(p, struct page, lru);
+ BUG_ON(page->active);
i++;
p = p->next;
#ifdef CONFIG_SLABINFO
void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
{
- struct slab *slabp;
+ struct page *page;
unsigned long active_objs;
unsigned long num_objs;
unsigned long active_slabs = 0;
check_irq_on();
spin_lock_irq(&n->list_lock);
- list_for_each_entry(slabp, &n->slabs_full, list) {
- if (slabp->inuse != cachep->num && !error)
+ list_for_each_entry(page, &n->slabs_full, lru) {
+ if (page->active != cachep->num && !error)
error = "slabs_full accounting error";
active_objs += cachep->num;
active_slabs++;
}
- list_for_each_entry(slabp, &n->slabs_partial, list) {
- if (slabp->inuse == cachep->num && !error)
- error = "slabs_partial inuse accounting error";
- if (!slabp->inuse && !error)
- error = "slabs_partial/inuse accounting error";
- active_objs += slabp->inuse;
+ list_for_each_entry(page, &n->slabs_partial, lru) {
+ if (page->active == cachep->num && !error)
+ error = "slabs_partial accounting error";
+ if (!page->active && !error)
+ error = "slabs_partial accounting error";
+ active_objs += page->active;
active_slabs++;
}
- list_for_each_entry(slabp, &n->slabs_free, list) {
- if (slabp->inuse && !error)
- error = "slabs_free/inuse accounting error";
+ list_for_each_entry(page, &n->slabs_free, lru) {
+ if (page->active && !error)
+ error = "slabs_free accounting error";
num_slabs++;
}
free_objects += n->free_objects;
return 1;
}
-static void handle_slab(unsigned long *n, struct kmem_cache *c, struct slab *s)
+static void handle_slab(unsigned long *n, struct kmem_cache *c,
+ struct page *page)
{
void *p;
- int i;
+ int i, j;
+
if (n[0] == n[1])
return;
- for (i = 0, p = s->s_mem; i < c->num; i++, p += c->size) {
- if (slab_bufctl(s)[i] != BUFCTL_ACTIVE)
+ for (i = 0, p = page->s_mem; i < c->num; i++, p += c->size) {
+ bool active = true;
+
+ for (j = page->active; j < c->num; j++) {
+ /* Skip freed item */
+ if (slab_freelist(page)[j] == i) {
+ active = false;
+ break;
+ }
+ }
+ if (!active)
continue;
+
if (!add_caller(n, (unsigned long)*dbg_userword(c, p)))
return;
}
static int leaks_show(struct seq_file *m, void *p)
{
struct kmem_cache *cachep = list_entry(p, struct kmem_cache, list);
- struct slab *slabp;
+ struct page *page;
struct kmem_cache_node *n;
const char *name;
unsigned long *x = m->private;
check_irq_on();
spin_lock_irq(&n->list_lock);
- list_for_each_entry(slabp, &n->slabs_full, list)
- handle_slab(x, cachep, slabp);
- list_for_each_entry(slabp, &n->slabs_partial, list)
- handle_slab(x, cachep, slabp);
+ list_for_each_entry(page, &n->slabs_full, lru)
+ handle_slab(x, cachep, page);
+ list_for_each_entry(page, &n->slabs_partial, lru)
+ handle_slab(x, cachep, page);
spin_unlock_irq(&n->list_lock);
}
name = cachep->name;
/*
* Maximum number of desirable partial slabs.
* The existence of more partial slabs makes kmem_cache_shrink
- * sort the partial list by the number of objects in the.
+ * sort the partial list by the number of objects in use.
*/
#define MAX_PARTIAL 10
* Hooks for other subsystems that check memory allocations. In a typical
* production configuration these hooks all should produce no code at all.
*/
+static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
+{
+ kmemleak_alloc(ptr, size, 1, flags);
+}
+
+static inline void kfree_hook(const void *x)
+{
+ kmemleak_free(x);
+}
+
static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
{
flags &= gfp_allowed_mask;
/*
* Enable debugging if selected on the kernel commandline.
*/
- if (slub_debug && (!slub_debug_slabs ||
- !strncmp(slub_debug_slabs, name, strlen(slub_debug_slabs))))
+ if (slub_debug && (!slub_debug_slabs || (name &&
+ !strncmp(slub_debug_slabs, name, strlen(slub_debug_slabs)))))
flags |= slub_debug;
return flags;
static inline void dec_slabs_node(struct kmem_cache *s, int node,
int objects) {}
+static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
+{
+ kmemleak_alloc(ptr, size, 1, flags);
+}
+
+static inline void kfree_hook(const void *x)
+{
+ kmemleak_free(x);
+}
+
static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
{ return 0; }
static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
- void *object) {}
+ void *object)
+{
+ kmemleak_alloc_recursive(object, s->object_size, 1, s->flags,
+ flags & gfp_allowed_mask);
+}
-static inline void slab_free_hook(struct kmem_cache *s, void *x) {}
+static inline void slab_free_hook(struct kmem_cache *s, void *x)
+{
+ kmemleak_free_recursive(x, s->flags);
+}
#endif /* CONFIG_SLUB_DEBUG */
* slab on the node for this slabcache. There are no concurrent accesses
* possible.
*
- * Note that this function only works on the kmalloc_node_cache
- * when allocating for the kmalloc_node_cache. This is used for bootstrapping
+ * Note that this function only works on the kmem_cache_node
+ * when allocating for the kmem_cache_node. This is used for bootstrapping
* memory on a fresh node that has no slab structures yet.
*/
static void early_kmem_cache_node_alloc(int node)
if (page)
ptr = page_address(page);
- kmemleak_alloc(ptr, size, 1, flags);
+ kmalloc_large_node_hook(ptr, size, flags);
return ptr;
}
page = virt_to_head_page(x);
if (unlikely(!PageSlab(page))) {
BUG_ON(!PageCompound(page));
- kmemleak_free(x);
+ kfree_hook(x);
__free_memcg_kmem_pages(page, compound_order(page));
return;
}
static void put_compound_page(struct page *page)
{
- /*
- * hugetlbfs pages cannot be split from under us. If this is a
- * hugetlbfs page, check refcount on head page and release the page if
- * the refcount becomes zero.
- */
- if (PageHuge(page)) {
- page = compound_head(page);
- if (put_page_testzero(page))
- __put_compound_page(page);
-
- return;
- }
-
if (unlikely(PageTail(page))) {
/* __split_huge_page_refcount can run under us */
struct page *page_head = compound_trans_head(page);
* still hot on arches that do not support
* this_cpu_cmpxchg_double().
*/
- if (PageSlab(page_head)) {
- if (PageTail(page)) {
+ if (PageSlab(page_head) || PageHeadHuge(page_head)) {
+ if (likely(PageTail(page))) {
+ /*
+ * __split_huge_page_refcount
+ * cannot race here.
+ */
+ VM_BUG_ON(!PageHead(page_head));
+ atomic_dec(&page->_mapcount);
if (put_page_testzero(page_head))
VM_BUG_ON(1);
-
- atomic_dec(&page->_mapcount);
- goto skip_lock_tail;
+ if (put_page_testzero(page_head))
+ __put_compound_page(page_head);
+ return;
} else
+ /*
+ * __split_huge_page_refcount
+ * run before us, "page" was a
+ * THP tail. The split
+ * page_head has been freed
+ * and reallocated as slab or
+ * hugetlbfs page of smaller
+ * order (only possible if
+ * reallocated as slab on
+ * x86).
+ */
goto skip_lock;
}
/*
/* __split_huge_page_refcount run before us */
compound_unlock_irqrestore(page_head, flags);
skip_lock:
- if (put_page_testzero(page_head))
- __put_single_page(page_head);
+ if (put_page_testzero(page_head)) {
+ /*
+ * The head page may have been
+ * freed and reallocated as a
+ * compound page of smaller
+ * order and then freed again.
+ * All we know is that it
+ * cannot have become: a THP
+ * page, a compound page of
+ * higher order, a tail page.
+ * That is because we still
+ * hold the refcount of the
+ * split THP tail and
+ * page_head was the THP head
+ * before the split.
+ */
+ if (PageHead(page_head))
+ __put_compound_page(page_head);
+ else
+ __put_single_page(page_head);
+ }
out_put_single:
if (put_page_testzero(page))
__put_single_page(page);
VM_BUG_ON(atomic_read(&page->_count) != 0);
compound_unlock_irqrestore(page_head, flags);
-skip_lock_tail:
if (put_page_testzero(page_head)) {
if (PageHead(page_head))
__put_compound_page(page_head);
* proper PT lock that already serializes against
* split_huge_page().
*/
+ unsigned long flags;
bool got = false;
- struct page *page_head;
-
- /*
- * If this is a hugetlbfs page it cannot be split under us. Simply
- * increment refcount for the head page.
- */
- if (PageHuge(page)) {
- page_head = compound_head(page);
- atomic_inc(&page_head->_count);
- got = true;
- } else {
- unsigned long flags;
+ struct page *page_head = compound_trans_head(page);
- page_head = compound_trans_head(page);
- if (likely(page != page_head &&
- get_page_unless_zero(page_head))) {
-
- /* Ref to put_compound_page() comment. */
- if (PageSlab(page_head)) {
- if (likely(PageTail(page))) {
- __get_page_tail_foll(page, false);
- return true;
- } else {
- put_page(page_head);
- return false;
- }
- }
-
- /*
- * page_head wasn't a dangling pointer but it
- * may not be a head page anymore by the time
- * we obtain the lock. That is ok as long as it
- * can't be freed from under us.
- */
- flags = compound_lock_irqsave(page_head);
- /* here __split_huge_page_refcount won't run anymore */
+ if (likely(page != page_head && get_page_unless_zero(page_head))) {
+ /* Refer to the put_compound_page() comment. */
+ if (PageSlab(page_head) || PageHeadHuge(page_head)) {
if (likely(PageTail(page))) {
+ /*
+ * This is a hugetlbfs page or a slab
+ * page. __split_huge_page_refcount
+ * cannot race here.
+ */
+ VM_BUG_ON(!PageHead(page_head));
__get_page_tail_foll(page, false);
- got = true;
- }
- compound_unlock_irqrestore(page_head, flags);
- if (unlikely(!got))
+ return true;
+ } else {
+ /*
+ * __split_huge_page_refcount ran
+ * before us, "page" was a THP
+ * tail. The split page_head has been
+ * freed and reallocated as slab or
+ * hugetlbfs page of smaller order
+ * (only possible if reallocated as
+ * slab on x86).
+ */
put_page(page_head);
+ return false;
+ }
+ }
+
+ /*
+ * page_head wasn't a dangling pointer but it
+ * may not be a head page anymore by the time
+ * we obtain the lock. That is ok as long as it
+ * can't be freed from under us.
+ */
+ flags = compound_lock_irqsave(page_head);
+ /* here __split_huge_page_refcount won't run anymore */
+ if (likely(PageTail(page))) {
+ __get_page_tail_foll(page, false);
+ got = true;
}
+ compound_unlock_irqrestore(page_head, flags);
+ if (unlikely(!got))
+ put_page(page_head);
}
return got;
}
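
The hunks above lean on a speculative-reference idiom: get_page_unless_zero()
takes a reference only while the refcount is still non-zero, so a page_head
that is concurrently being freed can never be resurrected. As a rough
illustration of that "increment unless zero" core (plain C11 atomics,
userspace sketch, not kernel code):

	#include <stdatomic.h>
	#include <stdbool.h>

	struct obj {
		atomic_int refcount;	/* 0 means the object is being freed */
	};

	/* Take a reference only if the object is provably still live. */
	static bool get_unless_zero(struct obj *o)
	{
		int c = atomic_load(&o->refcount);

		while (c != 0) {
			/* On failure the CAS reloads c, so we re-check for zero. */
			if (atomic_compare_exchange_weak(&o->refcount, &c, c + 1))
				return true;
		}
		return false;		/* too late: a concurrent put won */
	}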
.parse = eth_header_parse,
};
+static int vlan_passthru_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type,
+ const void *daddr, const void *saddr,
+ unsigned int len)
+{
+ struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
+ struct net_device *real_dev = vlan->real_dev;
+
+ return dev_hard_header(skb, real_dev, type, daddr, saddr, len);
+}
+
+static const struct header_ops vlan_passthru_header_ops = {
+ .create = vlan_passthru_hard_header,
+ .rebuild = dev_rebuild_header,
+ .parse = eth_header_parse,
+};
+
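
Why a dedicated passthru create op instead of reusing real_dev->header_ops:
dev_hard_header() always invokes the ops with the device it was handed, so a
VLAN device that borrows the real device's ops would have that create() run
against the VLAN net_device (and, with stacked VLANs, recurse into itself).
The passthru op resolves to the underlying device explicitly. A simplified
sketch of the dispatch, close to but not guaranteed identical to the kernel's
inline helper:

	static inline int dev_hard_header(struct sk_buff *skb, struct net_device *dev,
					  unsigned short type,
					  const void *daddr, const void *saddr,
					  unsigned int len)
	{
		if (!dev->header_ops || !dev->header_ops->create)
			return 0;

		/* "dev" is whatever device the caller passed in -- for a VLAN
		 * device that must be the VLAN dev itself, so its create() can
		 * redirect to vlan->real_dev as vlan_passthru_hard_header()
		 * does above.
		 */
		return dev->header_ops->create(skb, dev, type, daddr, saddr, len);
	}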
static struct device_type vlan_type = {
.name = "vlan",
};
dev->needed_headroom = real_dev->needed_headroom;
if (real_dev->features & NETIF_F_HW_VLAN_CTAG_TX) {
- dev->header_ops = real_dev->header_ops;
+ dev->header_ops = &vlan_passthru_header_ops;
dev->hard_header_len = real_dev->hard_header_len;
} else {
dev->header_ops = &vlan_header_ops;
config RPS
boolean
- depends on SMP && SYSFS && USE_GENERIC_SMP_HELPERS
+ depends on SMP && SYSFS
default y
config RFS_ACCEL
config XPS
boolean
- depends on SMP && USE_GENERIC_SMP_HELPERS
+ depends on SMP
default y
config NETPRIO_CGROUP
hard_iface->bat_iv.ogm_buff = ogm_buff;
batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
- batadv_ogm_packet->header.packet_type = BATADV_IV_OGM;
- batadv_ogm_packet->header.version = BATADV_COMPAT_VERSION;
- batadv_ogm_packet->header.ttl = 2;
+ batadv_ogm_packet->packet_type = BATADV_IV_OGM;
+ batadv_ogm_packet->version = BATADV_COMPAT_VERSION;
+ batadv_ogm_packet->ttl = 2;
batadv_ogm_packet->flags = BATADV_NO_FLAGS;
batadv_ogm_packet->reserved = 0;
batadv_ogm_packet->tq = BATADV_TQ_MAX_VALUE;
batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
batadv_ogm_packet->flags = BATADV_PRIMARIES_FIRST_HOP;
- batadv_ogm_packet->header.ttl = BATADV_TTL;
+ batadv_ogm_packet->ttl = BATADV_TTL;
}
/* when do we schedule our own ogm to be sent */
fwd_str, (packet_num > 0 ? "aggregated " : ""),
batadv_ogm_packet->orig,
ntohl(batadv_ogm_packet->seqno),
- batadv_ogm_packet->tq, batadv_ogm_packet->header.ttl,
+ batadv_ogm_packet->tq, batadv_ogm_packet->ttl,
(batadv_ogm_packet->flags & BATADV_DIRECTLINK ?
"on" : "off"),
hard_iface->net_dev->name,
/* multihomed peer assumed
* non-primary OGMs are only broadcasted on their interface
*/
- if ((directlink && (batadv_ogm_packet->header.ttl == 1)) ||
+ if ((directlink && (batadv_ogm_packet->ttl == 1)) ||
(forw_packet->own && (forw_packet->if_incoming != primary_if))) {
/* FIXME: what about aggregated packets ? */
batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
(forw_packet->own ? "Sending own" : "Forwarding"),
batadv_ogm_packet->orig,
ntohl(batadv_ogm_packet->seqno),
- batadv_ogm_packet->header.ttl,
+ batadv_ogm_packet->ttl,
forw_packet->if_incoming->net_dev->name,
forw_packet->if_incoming->net_dev->dev_addr);
*/
if ((!directlink) &&
(!(batadv_ogm_packet->flags & BATADV_DIRECTLINK)) &&
- (batadv_ogm_packet->header.ttl != 1) &&
+ (batadv_ogm_packet->ttl != 1) &&
/* own packets originating non-primary
* interfaces leave only that interface
* interface only - we still can aggregate
*/
if ((directlink) &&
- (new_bat_ogm_packet->header.ttl == 1) &&
+ (new_bat_ogm_packet->ttl == 1) &&
(forw_packet->if_incoming == if_incoming) &&
/* packets from direct neighbors or
struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
uint16_t tvlv_len;
- if (batadv_ogm_packet->header.ttl <= 1) {
+ if (batadv_ogm_packet->ttl <= 1) {
batadv_dbg(BATADV_DBG_BATMAN, bat_priv, "ttl exceeded\n");
return;
}
tvlv_len = ntohs(batadv_ogm_packet->tvlv_len);
- batadv_ogm_packet->header.ttl--;
+ batadv_ogm_packet->ttl--;
memcpy(batadv_ogm_packet->prev_sender, ethhdr->h_source, ETH_ALEN);
/* apply hop penalty */
batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
"Forwarding packet: tq: %i, ttl: %i\n",
- batadv_ogm_packet->tq, batadv_ogm_packet->header.ttl);
+ batadv_ogm_packet->tq, batadv_ogm_packet->ttl);
/* switch of primaries first hop flag when forwarding */
batadv_ogm_packet->flags &= ~BATADV_PRIMARIES_FIRST_HOP;
spin_unlock_bh(&neigh_node->bat_iv.lq_update_lock);
if (dup_status == BATADV_NO_DUP) {
- orig_node->last_ttl = batadv_ogm_packet->header.ttl;
- neigh_node->last_ttl = batadv_ogm_packet->header.ttl;
+ orig_node->last_ttl = batadv_ogm_packet->ttl;
+ neigh_node->last_ttl = batadv_ogm_packet->ttl;
}
batadv_bonding_candidate_add(bat_priv, orig_node, neigh_node);
* packet in an aggregation. Here we expect that the padding
* is always zero (or not 0x01)
*/
- if (batadv_ogm_packet->header.packet_type != BATADV_IV_OGM)
+ if (batadv_ogm_packet->packet_type != BATADV_IV_OGM)
return;
/* could be changed by schedule_own_packet() */
if_incoming->net_dev->dev_addr, batadv_ogm_packet->orig,
batadv_ogm_packet->prev_sender,
ntohl(batadv_ogm_packet->seqno), batadv_ogm_packet->tq,
- batadv_ogm_packet->header.ttl,
- batadv_ogm_packet->header.version, has_directlink_flag);
+ batadv_ogm_packet->ttl,
+ batadv_ogm_packet->version, has_directlink_flag);
rcu_read_lock();
list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) {
* seqno and similar ttl as the non-duplicate
*/
sameseq = orig_node->last_real_seqno == ntohl(batadv_ogm_packet->seqno);
- similar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->header.ttl;
+ similar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->ttl;
if (is_bidirect && ((dup_status == BATADV_NO_DUP) ||
(sameseq && similar_ttl)))
batadv_iv_ogm_orig_update(bat_priv, orig_node, ethhdr,
unicast_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data;
- switch (unicast_4addr_packet->u.header.packet_type) {
+ switch (unicast_4addr_packet->u.packet_type) {
case BATADV_UNICAST:
batadv_dbg(BATADV_DBG_DAT, bat_priv,
"* encapsulated within a UNICAST packet\n");
break;
default:
batadv_dbg(BATADV_DBG_DAT, bat_priv, "* type: Unknown (%u)!\n",
- unicast_4addr_packet->u.header.packet_type);
+ unicast_4addr_packet->u.packet_type);
}
break;
case BATADV_BCAST:
default:
batadv_dbg(BATADV_DBG_DAT, bat_priv,
"* encapsulated within an unknown packet type (0x%x)\n",
- unicast_4addr_packet->u.header.packet_type);
+ unicast_4addr_packet->u.packet_type);
}
}
batadv_add_counter(bat_priv, BATADV_CNT_FRAG_FWD_BYTES,
skb->len + ETH_HLEN);
- packet->header.ttl--;
+ packet->ttl--;
batadv_send_skb_packet(skb, neigh_node->if_incoming,
neigh_node->addr);
ret = true;
goto out_err;
/* Create one header to be copied to all fragments */
- frag_header.header.packet_type = BATADV_UNICAST_FRAG;
- frag_header.header.version = BATADV_COMPAT_VERSION;
- frag_header.header.ttl = BATADV_TTL;
+ frag_header.packet_type = BATADV_UNICAST_FRAG;
+ frag_header.version = BATADV_COMPAT_VERSION;
+ frag_header.ttl = BATADV_TTL;
frag_header.seqno = htons(atomic_inc_return(&bat_priv->frag_seqno));
frag_header.reserved = 0;
frag_header.no = 0;
goto free_skb;
}
- if (icmp_header->header.packet_type != BATADV_ICMP) {
+ if (icmp_header->packet_type != BATADV_ICMP) {
batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
"Error - can't send packet from char device: got bogus packet type (expected: BAT_ICMP)\n");
len = -EINVAL;
icmp_header->uid = socket_client->index;
- if (icmp_header->header.version != BATADV_COMPAT_VERSION) {
+ if (icmp_header->version != BATADV_COMPAT_VERSION) {
icmp_header->msg_type = BATADV_PARAMETER_PROBLEM;
- icmp_header->header.version = BATADV_COMPAT_VERSION;
+ icmp_header->version = BATADV_COMPAT_VERSION;
batadv_socket_add_packet(socket_client, icmp_header,
packet_len);
goto free_skb;
batadv_ogm_packet = (struct batadv_ogm_packet *)skb->data;
- if (batadv_ogm_packet->header.version != BATADV_COMPAT_VERSION) {
+ if (batadv_ogm_packet->version != BATADV_COMPAT_VERSION) {
batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
"Drop packet: incompatible batman version (%i)\n",
- batadv_ogm_packet->header.version);
+ batadv_ogm_packet->version);
goto err_free;
}
/* all receive handlers return whether they received or reused
* the supplied skb. if not, we have to free the skb.
*/
- idx = batadv_ogm_packet->header.packet_type;
+ idx = batadv_ogm_packet->packet_type;
ret = (*batadv_rx_handler[idx])(skb, hard_iface);
if (ret == NET_RX_DROP)
BUILD_BUG_ON(offsetof(struct batadv_unicast_packet, dest) != 4);
BUILD_BUG_ON(offsetof(struct batadv_unicast_tvlv_packet, dst) != 4);
BUILD_BUG_ON(offsetof(struct batadv_frag_packet, dest) != 4);
- BUILD_BUG_ON(offsetof(struct batadv_icmp_packet, icmph.dst) != 4);
- BUILD_BUG_ON(offsetof(struct batadv_icmp_packet_rr, icmph.dst) != 4);
+ BUILD_BUG_ON(offsetof(struct batadv_icmp_packet, dst) != 4);
+ BUILD_BUG_ON(offsetof(struct batadv_icmp_packet_rr, dst) != 4);
/* broadcast packet */
batadv_rx_handler[BATADV_BCAST] = batadv_recv_bcast_packet;
skb_reserve(skb, ETH_HLEN);
tvlv_buff = skb_put(skb, sizeof(*unicast_tvlv_packet) + tvlv_len);
unicast_tvlv_packet = (struct batadv_unicast_tvlv_packet *)tvlv_buff;
- unicast_tvlv_packet->header.packet_type = BATADV_UNICAST_TVLV;
- unicast_tvlv_packet->header.version = BATADV_COMPAT_VERSION;
- unicast_tvlv_packet->header.ttl = BATADV_TTL;
+ unicast_tvlv_packet->packet_type = BATADV_UNICAST_TVLV;
+ unicast_tvlv_packet->version = BATADV_COMPAT_VERSION;
+ unicast_tvlv_packet->ttl = BATADV_TTL;
unicast_tvlv_packet->reserved = 0;
unicast_tvlv_packet->tvlv_len = htons(tvlv_len);
unicast_tvlv_packet->align = 0;
{
if (orig_node->last_real_seqno != ntohl(ogm_packet->seqno))
return false;
- if (orig_node->last_ttl != ogm_packet->header.ttl + 1)
+ if (orig_node->last_ttl != ogm_packet->ttl + 1)
return false;
if (!batadv_compare_eth(ogm_packet->orig, ogm_packet->prev_sender))
return false;
coded_packet = (struct batadv_coded_packet *)skb_dest->data;
skb_reset_mac_header(skb_dest);
- coded_packet->header.packet_type = BATADV_CODED;
- coded_packet->header.version = BATADV_COMPAT_VERSION;
- coded_packet->header.ttl = packet1->header.ttl;
+ coded_packet->packet_type = BATADV_CODED;
+ coded_packet->version = BATADV_COMPAT_VERSION;
+ coded_packet->ttl = packet1->ttl;
/* Info about first unicast packet */
memcpy(coded_packet->first_source, first_source, ETH_ALEN);
memcpy(coded_packet->second_source, second_source, ETH_ALEN);
memcpy(coded_packet->second_orig_dest, packet2->dest, ETH_ALEN);
coded_packet->second_crc = packet_id2;
- coded_packet->second_ttl = packet2->header.ttl;
+ coded_packet->second_ttl = packet2->ttl;
coded_packet->second_ttvn = packet2->ttvn;
coded_packet->coded_len = htons(coding_len);
/* We only handle unicast packets */
payload = skb_network_header(skb);
packet = (struct batadv_unicast_packet *)payload;
- if (packet->header.packet_type != BATADV_UNICAST)
+ if (packet->packet_type != BATADV_UNICAST)
goto out;
/* Try to find a coding opportunity and send the skb if one is found */
/* Check for supported packet type */
payload = skb_network_header(skb);
packet = (struct batadv_unicast_packet *)payload;
- if (packet->header.packet_type != BATADV_UNICAST)
+ if (packet->packet_type != BATADV_UNICAST)
goto out;
/* Find existing nc_path or create a new */
ttvn = coded_packet_tmp.second_ttvn;
} else {
orig_dest = coded_packet_tmp.first_orig_dest;
- ttl = coded_packet_tmp.header.ttl;
+ ttl = coded_packet_tmp.ttl;
ttvn = coded_packet_tmp.first_ttvn;
}
/* Create decoded unicast packet */
unicast_packet = (struct batadv_unicast_packet *)skb->data;
- unicast_packet->header.packet_type = BATADV_UNICAST;
- unicast_packet->header.version = BATADV_COMPAT_VERSION;
- unicast_packet->header.ttl = ttl;
+ unicast_packet->packet_type = BATADV_UNICAST;
+ unicast_packet->version = BATADV_COMPAT_VERSION;
+ unicast_packet->ttl = ttl;
memcpy(unicast_packet->dest, orig_dest, ETH_ALEN);
unicast_packet->ttvn = ttvn;
BATADV_TVLV_ROAM = 0x05,
};
+#pragma pack(2)
/* the destination hardware field in the ARP frame is used to
* transport the claim type and the group id
*/
uint8_t type; /* bla_claimframe */
__be16 group; /* group id */
};
-
-struct batadv_header {
- uint8_t packet_type;
- uint8_t version; /* batman version field */
- uint8_t ttl;
- /* the parent struct has to add a byte after the header to make
- * everything 4 bytes aligned again
- */
-};
+#pragma pack()
/**
* struct batadv_ogm_packet - ogm (routing protocol) packet
- * @header: common batman packet header
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
* @flags: contains routing relevant flags - see enum batadv_iv_flags
* @tvlv_len: length of tvlv data following the ogm header
*/
struct batadv_ogm_packet {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version;
+ uint8_t ttl;
uint8_t flags;
__be32 seqno;
uint8_t orig[ETH_ALEN];
#define BATADV_OGM_HLEN sizeof(struct batadv_ogm_packet)
/**
- * batadv_icmp_header - common ICMP header
- * @header: common batman header
+ * batadv_icmp_header - common members among all the ICMP packets
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
* @msg_type: ICMP packet type
* @dst: address of the destination node
* @orig: address of the source node
* @uid: local ICMP socket identifier
+ * @align: not used - useful for alignment purposes only
+ *
+ * This structure is used for ICMP packets parsing only and it is never sent
+ * over the wire. The alignment field at the end is there to ensure that
+ * members are padded the same way as they are in real packets.
*/
struct batadv_icmp_header {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version;
+ uint8_t ttl;
uint8_t msg_type; /* see ICMP message types above */
uint8_t dst[ETH_ALEN];
uint8_t orig[ETH_ALEN];
uint8_t uid;
+ uint8_t align[3];
};
/**
* batadv_icmp_packet - ICMP packet
- * @icmph: common ICMP header
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
+ * @msg_type: ICMP packet type
+ * @dst: address of the destination node
+ * @orig: address of the source node
+ * @uid: local ICMP socket identifier
* @reserved: not used - useful for alignment
* @seqno: ICMP sequence number
*/
struct batadv_icmp_packet {
- struct batadv_icmp_header icmph;
+ uint8_t packet_type;
+ uint8_t version;
+ uint8_t ttl;
+ uint8_t msg_type; /* see ICMP message types above */
+ uint8_t dst[ETH_ALEN];
+ uint8_t orig[ETH_ALEN];
+ uint8_t uid;
uint8_t reserved;
__be16 seqno;
};
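
Flattening struct batadv_header into each packet structure must not change
the on-wire layout; that is exactly what the BUILD_BUG_ON(offsetof(...))
checks added earlier in this series assert. The same check can be reproduced
in a standalone C11 program (cut-down stand-in struct, not the full
definition):

	#include <assert.h>
	#include <stddef.h>
	#include <stdint.h>

	#define ETH_ALEN 6

	struct icmp_packet {		/* stand-in for batadv_icmp_packet */
		uint8_t packet_type;	/* was batadv_header.packet_type */
		uint8_t version;	/* was batadv_header.version */
		uint8_t ttl;		/* was batadv_header.ttl */
		uint8_t msg_type;
		uint8_t dst[ETH_ALEN];
		uint8_t orig[ETH_ALEN];
		uint8_t uid;
		uint8_t reserved;
	};

	/* dst must still begin at offset 4: three common header bytes plus
	 * msg_type, matching BUILD_BUG_ON(offsetof(..., dst) != 4) above.
	 */
	static_assert(offsetof(struct icmp_packet, dst) == 4,
		      "header flattening changed the on-wire layout");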
/**
* batadv_icmp_packet_rr - ICMP RouteRecord packet
- * @icmph: common ICMP header
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
+ * @msg_type: ICMP packet type
+ * @dst: address of the destination node
+ * @orig: address of the source node
+ * @uid: local ICMP socket identifier
* @rr_cur: number of entries in the rr array
* @seqno: ICMP sequence number
* @rr: route record array
*/
struct batadv_icmp_packet_rr {
- struct batadv_icmp_header icmph;
+ uint8_t packet_type;
+ uint8_t version;
+ uint8_t ttl;
+ uint8_t msg_type; /* see ICMP message types above */
+ uint8_t dst[ETH_ALEN];
+ uint8_t orig[ETH_ALEN];
+ uint8_t uid;
uint8_t rr_cur;
__be16 seqno;
uint8_t rr[BATADV_RR_LEN][ETH_ALEN];
*/
#pragma pack(2)
+/**
+ * struct batadv_unicast_packet - unicast packet for network payload
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
+ * @ttvn: translation table version number
+ * @dest: originator destination of the unicast packet
+ */
struct batadv_unicast_packet {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version;
+ uint8_t ttl;
uint8_t ttvn; /* destination translation table version number */
uint8_t dest[ETH_ALEN];
/* "4 bytes boundary + 2 bytes" long to make the payload after the
/**
* struct batadv_frag_packet - fragmented packet
- * @header: common batman packet header with type, compatversion, and ttl
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
* @dest: final destination used when routing fragments
* @orig: originator of the fragment used when merging the packet
* @no: fragment number within this sequence
* @total_size: size of the merged packet
*/
struct batadv_frag_packet {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version; /* batman version field */
+ uint8_t ttl;
#if defined(__BIG_ENDIAN_BITFIELD)
uint8_t no:4;
uint8_t reserved:4;
__be16 total_size;
};
+/**
+ * struct batadv_bcast_packet - broadcast packet for network payload
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
+ * @reserved: reserved byte for alignment
+ * @seqno: sequence identification
+ * @orig: originator of the broadcast packet
+ */
struct batadv_bcast_packet {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version; /* batman version field */
+ uint8_t ttl;
uint8_t reserved;
__be32 seqno;
uint8_t orig[ETH_ALEN];
*/
};
-#pragma pack()
-
/**
* struct batadv_coded_packet - network coded packet
- * @header: common batman packet header and ttl of first included packet
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
* @reserved: Align following fields to 2-byte boundaries
* @first_source: original source of first included packet
* @first_orig_dest: original destination of first included packet
* @coded_len: length of network coded part of the payload
*/
struct batadv_coded_packet {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version; /* batman version field */
+ uint8_t ttl;
uint8_t first_ttvn;
/* uint8_t first_dest[ETH_ALEN]; - saved in mac header destination */
uint8_t first_source[ETH_ALEN];
__be16 coded_len;
};
+#pragma pack()
+
/**
* struct batadv_unicast_tvlv - generic unicast packet with tvlv payload
- * @header: common batman packet header
+ * @packet_type: batman-adv packet type, part of the general header
+ * @version: batman-adv protocol version, part of the general header
+ * @ttl: time to live for this packet, part of the general header
* @reserved: reserved field (for packet alignment)
* @src: address of the source
* @dst: address of the destination
* @align: 2 bytes to align the header to a 4 byte boundary
*/
struct batadv_unicast_tvlv_packet {
- struct batadv_header header;
+ uint8_t packet_type;
+ uint8_t version; /* batman version field */
+ uint8_t ttl;
uint8_t reserved;
uint8_t dst[ETH_ALEN];
uint8_t src[ETH_ALEN];
* struct batadv_tvlv_tt_change - translation table diff data
* @flags: status indicators concerning the non-mesh client (see
* batadv_tt_client_flags)
- * @reserved: reserved field
+ * @reserved: reserved field - useful for alignment purposes only
* @addr: mac address of non-mesh client that triggered this tt change
* @vid: VLAN identifier
*/
struct batadv_tvlv_tt_change {
uint8_t flags;
- uint8_t reserved;
+ uint8_t reserved[3];
uint8_t addr[ETH_ALEN];
__be16 vid;
};
memcpy(icmph->dst, icmph->orig, ETH_ALEN);
memcpy(icmph->orig, primary_if->net_dev->dev_addr, ETH_ALEN);
icmph->msg_type = BATADV_ECHO_REPLY;
- icmph->header.ttl = BATADV_TTL;
+ icmph->ttl = BATADV_TTL;
res = batadv_send_skb_to_orig(skb, orig_node, NULL);
if (res != NET_XMIT_DROP)
icmp_packet = (struct batadv_icmp_packet *)skb->data;
/* send TTL exceeded if packet is an echo request (traceroute) */
- if (icmp_packet->icmph.msg_type != BATADV_ECHO_REQUEST) {
+ if (icmp_packet->msg_type != BATADV_ECHO_REQUEST) {
pr_debug("Warning - can't forward icmp packet from %pM to %pM: ttl exceeded\n",
- icmp_packet->icmph.orig, icmp_packet->icmph.dst);
+ icmp_packet->orig, icmp_packet->dst);
goto out;
}
goto out;
/* get routing information */
- orig_node = batadv_orig_hash_find(bat_priv, icmp_packet->icmph.orig);
+ orig_node = batadv_orig_hash_find(bat_priv, icmp_packet->orig);
if (!orig_node)
goto out;
icmp_packet = (struct batadv_icmp_packet *)skb->data;
- memcpy(icmp_packet->icmph.dst, icmp_packet->icmph.orig, ETH_ALEN);
- memcpy(icmp_packet->icmph.orig, primary_if->net_dev->dev_addr,
+ memcpy(icmp_packet->dst, icmp_packet->orig, ETH_ALEN);
+ memcpy(icmp_packet->orig, primary_if->net_dev->dev_addr,
ETH_ALEN);
- icmp_packet->icmph.msg_type = BATADV_TTL_EXCEEDED;
- icmp_packet->icmph.header.ttl = BATADV_TTL;
+ icmp_packet->msg_type = BATADV_TTL_EXCEEDED;
+ icmp_packet->ttl = BATADV_TTL;
if (batadv_send_skb_to_orig(skb, orig_node, NULL) != NET_XMIT_DROP)
ret = NET_RX_SUCCESS;
return batadv_recv_my_icmp_packet(bat_priv, skb);
/* TTL exceeded */
- if (icmph->header.ttl < 2)
+ if (icmph->ttl < 2)
return batadv_recv_icmp_ttl_exceeded(bat_priv, skb);
/* get routing information */
icmph = (struct batadv_icmp_header *)skb->data;
/* decrement ttl */
- icmph->header.ttl--;
+ icmph->ttl--;
/* route it */
if (batadv_send_skb_to_orig(skb, orig_node, recv_if) != NET_XMIT_DROP)
unicast_packet = (struct batadv_unicast_packet *)skb->data;
/* TTL exceeded */
- if (unicast_packet->header.ttl < 2) {
+ if (unicast_packet->ttl < 2) {
pr_debug("Warning - can't forward unicast packet from %pM to %pM: ttl exceeded\n",
ethhdr->h_source, unicast_packet->dest);
goto out;
/* decrement ttl */
unicast_packet = (struct batadv_unicast_packet *)skb->data;
- unicast_packet->header.ttl--;
+ unicast_packet->ttl--;
- switch (unicast_packet->header.packet_type) {
+ switch (unicast_packet->packet_type) {
case BATADV_UNICAST_4ADDR:
hdr_len = sizeof(struct batadv_unicast_4addr_packet);
break;
unicast_packet = (struct batadv_unicast_packet *)skb->data;
unicast_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data;
- is4addr = unicast_packet->header.packet_type == BATADV_UNICAST_4ADDR;
+ is4addr = unicast_packet->packet_type == BATADV_UNICAST_4ADDR;
/* the caller function should have already pulled 2 bytes */
if (is4addr)
hdr_size = sizeof(*unicast_4addr_packet);
if (batadv_is_my_mac(bat_priv, bcast_packet->orig))
goto out;
- if (bcast_packet->header.ttl < 2)
+ if (bcast_packet->ttl < 2)
goto out;
orig_node = batadv_orig_hash_find(bat_priv, bcast_packet->orig);
return false;
unicast_packet = (struct batadv_unicast_packet *)skb->data;
- unicast_packet->header.version = BATADV_COMPAT_VERSION;
+ unicast_packet->version = BATADV_COMPAT_VERSION;
/* batman packet type: unicast */
- unicast_packet->header.packet_type = BATADV_UNICAST;
+ unicast_packet->packet_type = BATADV_UNICAST;
/* set unicast ttl */
- unicast_packet->header.ttl = BATADV_TTL;
+ unicast_packet->ttl = BATADV_TTL;
/* copy the destination for faster routing */
memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN);
/* set the destination tt version number */
goto out;
uc_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data;
- uc_4addr_packet->u.header.packet_type = BATADV_UNICAST_4ADDR;
+ uc_4addr_packet->u.packet_type = BATADV_UNICAST_4ADDR;
memcpy(uc_4addr_packet->src, primary_if->net_dev->dev_addr, ETH_ALEN);
uc_4addr_packet->subtype = packet_subtype;
uc_4addr_packet->reserved = 0;
/* as we have a copy now, it is safe to decrease the TTL */
bcast_packet = (struct batadv_bcast_packet *)newskb->data;
- bcast_packet->header.ttl--;
+ bcast_packet->ttl--;
skb_reset_mac_header(newskb);
goto dropped;
bcast_packet = (struct batadv_bcast_packet *)skb->data;
- bcast_packet->header.version = BATADV_COMPAT_VERSION;
- bcast_packet->header.ttl = BATADV_TTL;
+ bcast_packet->version = BATADV_COMPAT_VERSION;
+ bcast_packet->ttl = BATADV_TTL;
/* batman packet type: broadcast */
- bcast_packet->header.packet_type = BATADV_BCAST;
+ bcast_packet->packet_type = BATADV_BCAST;
bcast_packet->reserved = 0;
/* hw address of first interface is the orig mac because only
struct sk_buff *skb, struct batadv_hard_iface *recv_if,
int hdr_size, struct batadv_orig_node *orig_node)
{
- struct batadv_header *batadv_header = (struct batadv_header *)skb->data;
+ struct batadv_bcast_packet *batadv_bcast_packet;
struct batadv_priv *bat_priv = netdev_priv(soft_iface);
__be16 ethertype = htons(ETH_P_BATMAN);
struct vlan_ethhdr *vhdr;
unsigned short vid;
bool is_bcast;
- is_bcast = (batadv_header->packet_type == BATADV_BCAST);
+ batadv_bcast_packet = (struct batadv_bcast_packet *)skb->data;
+ is_bcast = (batadv_bcast_packet->packet_type == BATADV_BCAST);
/* check if enough space is available for pulling, and pull */
if (!pskb_may_pull(skb, hdr_size))
skb_pull_rcsum(skb, hdr_size);
skb_reset_mac_header(skb);
- vid = batadv_get_vid(skb, hdr_size);
+ /* clean the netfilter state now that the batman-adv header has been
+ * removed
+ */
+ nf_reset(skb);
+
+ vid = batadv_get_vid(skb, 0);
ethhdr = eth_hdr(skb);
switch (ntohs(ethhdr->h_proto)) {
return;
tt_change_node->change.flags = flags;
- tt_change_node->change.reserved = 0;
+ memset(tt_change_node->change.reserved, 0,
+ sizeof(tt_change_node->change.reserved));
memcpy(tt_change_node->change.addr, common->addr, ETH_ALEN);
tt_change_node->change.vid = htons(common->vid);
ETH_ALEN);
tt_change->flags = tt_common_entry->flags;
tt_change->vid = htons(tt_common_entry->vid);
- tt_change->reserved = 0;
+ memset(tt_change->reserved, 0,
+ sizeof(tt_change->reserved));
tt_num_entries++;
tt_change++;
u32 old;
struct net_bridge_mdb_htable *mdb;
- spin_lock(&br->multicast_lock);
+ spin_lock_bh(&br->multicast_lock);
if (!netif_running(br->dev))
goto unlock;
}
unlock:
- spin_unlock(&br->multicast_lock);
+ spin_unlock_bh(&br->multicast_lock);
return err;
}
int br_handle_frame_finish(struct sk_buff *skb);
rx_handler_result_t br_handle_frame(struct sk_buff **pskb);
+static inline bool br_rx_handler_check_rcu(const struct net_device *dev)
+{
+ return rcu_dereference(dev->rx_handler) == br_handle_frame;
+}
+
+static inline struct net_bridge_port *br_port_get_check_rcu(const struct net_device *dev)
+{
+ return br_rx_handler_check_rcu(dev) ? br_port_get_rcu(dev) : NULL;
+}
+
/* br_ioctl.c */
int br_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd,
if (buf[0] != 0 || buf[1] != 0 || buf[2] != 0)
goto err;
- p = br_port_get_rcu(dev);
+ p = br_port_get_check_rcu(dev);
if (!p)
goto err;
__get_user(kmsg->msg_flags, &umsg->msg_flags))
return -EFAULT;
if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
- return -EINVAL;
+ kmsg->msg_namelen = sizeof(struct sockaddr_storage);
kmsg->msg_name = compat_ptr(tmp1);
kmsg->msg_iov = compat_ptr(tmp2);
kmsg->msg_control = compat_ptr(tmp3);
{
struct netdev_adjacent *upper;
- WARN_ON_ONCE(!rcu_read_lock_held());
+ WARN_ON_ONCE(!rcu_read_lock_held() && !lockdep_rtnl_is_held());
upper = list_entry_rcu((*iter)->next, struct netdev_adjacent, list);
.hdrsize = 0,
.name = "NET_DM",
.version = 2,
- .maxattr = NET_DM_CMD_MAX,
};
static DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);
neigh->parms->reachable_time :
0)));
neigh->nud_state = new;
+ notify = 1;
}
if (lladdr != neigh->ha) {
if (dev_hard_header(skb, dev, ntohs(skb->protocol), NULL, NULL,
skb->len) < 0 &&
- dev->header_ops->rebuild(skb))
+ dev_rebuild_header(skb))
return 0;
return dev_queue_xmit(skb);
!vlan_hw_offload_capable(netif_skb_features(skb),
skb->vlan_proto)) {
skb = __vlan_put_tag(skb, skb->vlan_proto, vlan_tx_tag_get(skb));
- if (unlikely(!skb))
- break;
+ if (unlikely(!skb)) {
+ /* This is actually a packet drop, but we
+ * don't want the code at the end of this
+ * function to try and re-queue a NULL skb.
+ */
+ status = NETDEV_TX_OK;
+ goto unlock_txq;
+ }
skb->vlan_tci = 0;
}
if (status == NETDEV_TX_OK)
txq_trans_update(txq);
}
+ unlock_txq:
__netif_tx_unlock(txq);
if (status == NETDEV_TX_OK)
if (x) {
int ret;
__u8 *eth;
+ struct iphdr *iph;
+
nhead = x->props.header_len - skb_headroom(skb);
if (nhead > 0) {
ret = pskb_expand_head(skb, nhead, 0, GFP_ATOMIC);
eth = (__u8 *) skb_push(skb, ETH_HLEN);
memcpy(eth, pkt_dev->hh, 12);
*(u16 *) ð[12] = protocol;
+
+ /* Update IPv4 header len as well as checksum value */
+ iph = ip_hdr(skb);
+ iph->tot_len = htons(skb->len - ETH_HLEN);
+ ip_send_check(iph);
}
}
return 1;
skb->tstamp.tv64 = 0;
skb->pkt_type = PACKET_HOST;
skb->skb_iif = 0;
+ skb->local_df = 0;
skb_dst_drop(skb);
skb->mark = 0;
secpath_reset(skb);
case SO_PEEK_OFF:
if (sock->ops->set_peek_off)
- sock->ops->set_peek_off(sk, val);
+ ret = sock->ops->set_peek_off(sk, val);
else
ret = -EOPNOTSUPP;
break;
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- usin->sin6_addr = flowlabel->dst;
fl6_sock_release(flowlabel);
}
}
.llseek = noop_llseek,
};
-static __init int setup_jprobe(void)
-{
- int ret = register_jprobe(&dccp_send_probe);
-
- if (ret) {
- request_module("dccp");
- ret = register_jprobe(&dccp_send_probe);
- }
- return ret;
-}
-
static __init int dccpprobe_init(void)
{
int ret = -ENOMEM;
if (!proc_create(procname, S_IRUSR, init_net.proc_net, &dccpprobe_fops))
goto err0;
- ret = setup_jprobe();
+ ret = register_jprobe(&dccp_send_probe);
+ if (ret) {
+ ret = request_module("dccp");
+ if (!ret)
+ ret = register_jprobe(&dccp_send_probe);
+ }
+
if (ret)
goto err1;
static bool seq_nr_after(u16 a, u16 b)
{
/* Remove inconsistency where
- * seq_nr_after(a, b) == seq_nr_before(a, b) */
+ * seq_nr_after(a, b) == seq_nr_before(a, b)
+ */
if ((int) b - a == 32768)
return false;
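
For 16-bit sequence numbers, "after" is normally decided by the sign of the
two's-complement difference; the half-way distance of 32768 is ambiguous
(both differences look negative), so the hunk above pins it to "not after",
keeping seq_nr_after() and seq_nr_before() mutually exclusive. A
self-contained version of the comparison (same logic as the HSR code, names
reused for clarity; not the kernel source verbatim):

	#include <stdbool.h>
	#include <stdint.h>

	static bool seq_nr_after(uint16_t a, uint16_t b)
	{
		/* Exactly half the sequence space apart: ambiguous, so pick
		 * "not after" to avoid after(a, b) == before(a, b).
		 */
		if ((int)b - (int)a == 32768)
			return false;

		return (int16_t)(b - a) < 0;	/* a is the newer number */
	}

	#define seq_nr_before(a, b)	seq_nr_after((b), (a))

	/* e.g. seq_nr_after(1, 65535) is true: 1 is two steps past 65535
	 * after wraparound, while seq_nr_after(65535, 1) is false.
	 */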
[IFLA_HSR_SLAVE1] = { .type = NLA_U32 },
[IFLA_HSR_SLAVE2] = { .type = NLA_U32 },
[IFLA_HSR_MULTICAST_SPEC] = { .type = NLA_U8 },
+ [IFLA_HSR_SUPERVISION_ADDR] = { .type = NLA_BINARY, .len = ETH_ALEN },
+ [IFLA_HSR_SEQ_NR] = { .type = NLA_U16 },
};
return hsr_dev_finalize(dev, link, multicast_spec);
}
+static int hsr_fill_info(struct sk_buff *skb, const struct net_device *dev)
+{
+ struct hsr_priv *hsr_priv;
+
+ hsr_priv = netdev_priv(dev);
+
+ if (hsr_priv->slave[0])
+ if (nla_put_u32(skb, IFLA_HSR_SLAVE1, hsr_priv->slave[0]->ifindex))
+ goto nla_put_failure;
+
+ if (hsr_priv->slave[1])
+ if (nla_put_u32(skb, IFLA_HSR_SLAVE2, hsr_priv->slave[1]->ifindex))
+ goto nla_put_failure;
+
+ if (nla_put(skb, IFLA_HSR_SUPERVISION_ADDR, ETH_ALEN,
+ hsr_priv->sup_multicast_addr) ||
+ nla_put_u16(skb, IFLA_HSR_SEQ_NR, hsr_priv->sequence_nr))
+ goto nla_put_failure;
+
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+
static struct rtnl_link_ops hsr_link_ops __read_mostly = {
.kind = "hsr",
.maxtype = IFLA_HSR_MAX,
.priv_size = sizeof(struct hsr_priv),
.setup = hsr_dev_setup,
.newlink = hsr_newlink,
+ .fill_info = hsr_fill_info,
};
hc06_ptr += 3;
} else {
/* compress nothing */
- memcpy(hc06_ptr, &hdr, 4);
+ memcpy(hc06_ptr, hdr, 4);
/* replace the top byte with new ECN | DSCP format */
*hc06_ptr = tmp;
hc06_ptr += 4;
static bool fib4_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg)
{
struct fib_result *result = (struct fib_result *) arg->result;
- struct net_device *dev = result->fi->fib_dev;
+ struct net_device *dev = NULL;
+
+ if (result->fi)
+ dev = result->fi->fib_dev;
/* do not accept result if the route does
* not meet the required prefix length
netdev_features_t enc_features;
int ghl = GRE_HEADER_SECTION;
struct gre_base_hdr *greh;
+ u16 mac_offset = skb->mac_header;
int mac_len = skb->mac_len;
__be16 protocol = skb->protocol;
int tnl_hlen;
} else
csum = false;
+ if (unlikely(!pskb_may_pull(skb, ghl)))
+ goto out;
+
/* setup inner skb. */
skb->protocol = greh->protocol;
skb->encapsulation = 0;
- if (unlikely(!pskb_may_pull(skb, ghl)))
- goto out;
-
__skb_pull(skb, ghl);
skb_reset_mac_header(skb);
skb_set_network_header(skb, skb_inner_network_offset(skb));
/* segment inner packet. */
enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);
segs = skb_mac_gso_segment(skb, enc_features);
- if (!segs || IS_ERR(segs))
+ if (!segs || IS_ERR(segs)) {
+ skb_gso_error_unwind(skb, protocol, ghl, mac_offset, mac_len);
goto out;
+ }
skb = segs;
tnl_hlen = skb_tnl_header_len(skb);
r->id.idiag_sport = inet->inet_sport;
r->id.idiag_dport = inet->inet_dport;
+
+ memset(&r->id.idiag_src, 0, sizeof(r->id.idiag_src));
+ memset(&r->id.idiag_dst, 0, sizeof(r->id.idiag_dst));
+
r->id.idiag_src[0] = inet->inet_rcv_saddr;
r->id.idiag_dst[0] = inet->inet_daddr;
r->idiag_family = tw->tw_family;
r->idiag_retrans = 0;
+
r->id.idiag_if = tw->tw_bound_dev_if;
sock_diag_save_cookie(tw, r->id.idiag_cookie);
+
r->id.idiag_sport = tw->tw_sport;
r->id.idiag_dport = tw->tw_dport;
+
+ memset(&r->id.idiag_src, 0, sizeof(r->id.idiag_src));
+ memset(&r->id.idiag_dst, 0, sizeof(r->id.idiag_dst));
+
r->id.idiag_src[0] = tw->tw_rcv_saddr;
r->id.idiag_dst[0] = tw->tw_daddr;
+
r->idiag_state = tw->tw_substate;
r->idiag_timer = 3;
r->idiag_expires = jiffies_to_msecs(tmo);
r->id.idiag_sport = inet->inet_sport;
r->id.idiag_dport = ireq->ir_rmt_port;
+
+ memset(&r->id.idiag_src, 0, sizeof(r->id.idiag_src));
+ memset(&r->id.idiag_dst, 0, sizeof(r->id.idiag_dst));
+
r->id.idiag_src[0] = ireq->ir_loc_addr;
r->id.idiag_dst[0] = ireq->ir_rmt_addr;
+
r->idiag_expires = jiffies_to_msecs(tmo);
r->idiag_rqueue = 0;
r->idiag_wqueue = 0;
iph->saddr, iph->daddr, tpi->key);
if (tunnel) {
+ skb_pop_mac_header(skb);
ip_tunnel_rcv(tunnel, skb, tpi, log_ecn_error);
return PACKET_RCVD;
}
if (cork->length + length > maxnonfragsize - fragheaderlen) {
ip_local_error(sk, EMSGSIZE, fl4->daddr, inet->inet_dport,
- mtu-exthdrlen);
+ mtu - (opt ? opt->optlen : 0));
return -EMSGSIZE;
}
mtu : 0xFFFF;
if (cork->length + size > maxnonfragsize - fragheaderlen) {
- ip_local_error(sk, EMSGSIZE, fl4->daddr, inet->inet_dport, mtu);
+ ip_local_error(sk, EMSGSIZE, fl4->daddr, inet->inet_dport,
+ mtu - (opt ? opt->optlen : 0));
return -EMSGSIZE;
}
/*
* Handle MSG_ERRQUEUE
*/
-int ip_recv_error(struct sock *sk, struct msghdr *msg, int len)
+int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
{
struct sock_exterr_skb *serr;
struct sk_buff *skb, *skb2;
serr->addr_offset);
sin->sin_port = serr->port;
memset(&sin->sin_zero, 0, sizeof(sin->sin_zero));
+ *addr_len = sizeof(*sin);
}
memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
static struct xt_target synproxy_tg4_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV4,
+ .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg4,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg4_check,
{
const struct nft_reject *priv = nft_expr_priv(expr);
- if (nla_put_be32(skb, NFTA_REJECT_TYPE, priv->type))
+ if (nla_put_be32(skb, NFTA_REJECT_TYPE, htonl(priv->type)))
goto nla_put_failure;
switch (priv->type) {
err = PTR_ERR(rt);
rt = NULL;
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
goto out;
}
if (flags & MSG_ERRQUEUE) {
if (family == AF_INET) {
- return ip_recv_error(sk, msg, len);
+ return ip_recv_error(sk, msg, len, addr_len);
#if IS_ENABLED(CONFIG_IPV6)
} else if (family == AF_INET6) {
- return pingv6_ops.ipv6_recv_error(sk, msg, len);
+ return pingv6_ops.ipv6_recv_error(sk, msg, len,
+ addr_len);
#endif
}
}
const struct net_protocol __rcu *inet_protos[MAX_INET_PROTOS] __read_mostly;
const struct net_offload __rcu *inet_offloads[MAX_INET_PROTOS] __read_mostly;
-/*
- * Add a protocol handler to the hash tables
- */
-
int inet_add_protocol(const struct net_protocol *prot, unsigned char protocol)
{
if (!prot->netns_ok) {
}
EXPORT_SYMBOL(inet_add_offload);
-/*
- * Remove a protocol from the hash tables.
- */
-
int inet_del_protocol(const struct net_protocol *prot, unsigned char protocol)
{
int ret;
goto out;
if (flags & MSG_ERRQUEUE) {
- err = ip_recv_error(sk, msg, len);
+ err = ip_recv_error(sk, msg, len, addr_len);
goto out;
}
do {
if (dma_async_is_tx_complete(tp->ucopy.dma_chan,
last_issued, &done,
- &used) == DMA_SUCCESS) {
+ &used) == DMA_COMPLETE) {
/* Safe to free early-copied skbs now */
__skb_queue_purge(&sk->sk_async_wait_queue);
break;
struct sk_buff *skb;
while ((skb = skb_peek(&sk->sk_async_wait_queue)) &&
(dma_async_is_complete(skb->dma_cookie, done,
- used) == DMA_SUCCESS)) {
+ used) == DMA_COMPLETE)) {
__skb_dequeue(&sk->sk_async_wait_queue);
kfree_skb(skb);
}
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
return err;
}
#include <linux/memcontrol.h>
#include <linux/module.h>
-static void memcg_tcp_enter_memory_pressure(struct sock *sk)
-{
- if (sk->sk_cgrp->memory_pressure)
- sk->sk_cgrp->memory_pressure = 1;
-}
-EXPORT_SYMBOL(memcg_tcp_enter_memory_pressure);
-
int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
{
/*
static int tcp_update_limit(struct mem_cgroup *memcg, u64 val)
{
struct cg_proto *cg_proto;
- u64 old_lim;
int i;
int ret;
if (val > RES_COUNTER_MAX)
val = RES_COUNTER_MAX;
- old_lim = res_counter_read_u64(&cg_proto->memory_allocated, RES_LIMIT);
ret = res_counter_set_limit(&cg_proto->memory_allocated, val);
if (ret)
return ret;
{
const struct iphdr *iph = skb_gro_network_header(skb);
__wsum wsum;
- __sum16 sum;
+
+ /* Don't bother verifying checksum if we're going to flush anyway. */
+ if (NAPI_GRO_CB(skb)->flush)
+ goto skip_csum;
+
+ wsum = skb->csum;
switch (skb->ip_summed) {
+ case CHECKSUM_NONE:
+ wsum = skb_checksum(skb, skb_gro_offset(skb), skb_gro_len(skb),
+ 0);
+
+ /* fall through */
+
case CHECKSUM_COMPLETE:
if (!tcp_v4_check(skb_gro_len(skb), iph->saddr, iph->daddr,
- skb->csum)) {
+ wsum)) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
break;
}
-flush:
+
NAPI_GRO_CB(skb)->flush = 1;
return NULL;
-
- case CHECKSUM_NONE:
- wsum = csum_tcpudp_nofold(iph->saddr, iph->daddr,
- skb_gro_len(skb), IPPROTO_TCP, 0);
- sum = csum_fold(skb_checksum(skb,
- skb_gro_offset(skb),
- skb_gro_len(skb),
- wsum));
- if (sum)
- goto flush;
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- break;
}
+skip_csum:
return tcp_gro_receive(head, skb);
}
__be16 sport, __be16 dport,
struct udp_table *udptable)
{
- struct sock *sk;
const struct iphdr *iph = ip_hdr(skb);
- if (unlikely(sk = skb_steal_sock(skb)))
- return sk;
- else
- return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport,
- iph->daddr, dport, inet_iif(skb),
- udptable);
+ return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport,
+ iph->daddr, dport, inet_iif(skb),
+ udptable);
}
struct sock *udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport,
err = PTR_ERR(rt);
rt = NULL;
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
goto out;
}
struct udp_sock *up = udp_sk(sk);
int ret;
+ if (flags & MSG_SENDPAGE_NOTLAST)
+ flags |= MSG_MORE;
+
if (!up->pending) {
struct msghdr msg = { .msg_flags = flags|MSG_MORE };
bool slow;
if (flags & MSG_ERRQUEUE)
- return ip_recv_error(sk, msg, len);
+ return ip_recv_error(sk, msg, len, addr_len);
try_again:
skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
kfree_skb(skb1);
}
-static void udp_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+/* For TCP sockets, sk_rx_dst is protected by socket lock
+ * For UDP, we use xchg() to guard against concurrent changes.
+ */
+static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
{
- struct dst_entry *dst = skb_dst(skb);
+ struct dst_entry *old;
dst_hold(dst);
- sk->sk_rx_dst = dst;
+ old = xchg(&sk->sk_rx_dst, dst);
+ dst_release(old);
}
/*
if (udp4_csum_init(skb, uh, proto))
goto csum_error;
- if (skb->sk) {
+ sk = skb_steal_sock(skb);
+ if (sk) {
+ struct dst_entry *dst = skb_dst(skb);
int ret;
- sk = skb->sk;
- if (unlikely(sk->sk_rx_dst == NULL))
- udp_sk_rx_dst_set(sk, skb);
+ if (unlikely(sk->sk_rx_dst != dst))
+ udp_sk_rx_dst_set(sk, dst);
ret = udp_queue_rcv_skb(sk, skb);
-
+ sock_put(sk);
/* a return value > 0 means to resubmit the input, but
* it wants the return to be -protocol, or 0
*/
void udp_v4_early_demux(struct sk_buff *skb)
{
- const struct iphdr *iph = ip_hdr(skb);
- const struct udphdr *uh = udp_hdr(skb);
+ struct net *net = dev_net(skb->dev);
+ const struct iphdr *iph;
+ const struct udphdr *uh;
struct sock *sk;
struct dst_entry *dst;
- struct net *net = dev_net(skb->dev);
int dif = skb->dev->ifindex;
/* validate the packet */
if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
return;
+ iph = ip_hdr(skb);
+ uh = udp_hdr(skb);
+
if (skb->pkt_type == PACKET_BROADCAST ||
skb->pkt_type == PACKET_MULTICAST)
sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
netdev_features_t features)
{
struct sk_buff *segs = ERR_PTR(-EINVAL);
+ u16 mac_offset = skb->mac_header;
int mac_len = skb->mac_len;
int tnl_hlen = skb_inner_mac_header(skb) - skb_transport_header(skb);
__be16 protocol = skb->protocol;
/* segment inner packet. */
enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);
segs = skb_mac_gso_segment(skb, enc_features);
- if (!segs || IS_ERR(segs))
+ if (!segs || IS_ERR(segs)) {
+ skb_gso_error_unwind(skb, protocol, tnl_hlen, mac_offset,
+ mac_len);
goto out;
+ }
outer_hlen = skb_tnl_header_len(skb);
skb = segs;
{
struct sk_buff *segs = ERR_PTR(-EINVAL);
unsigned int mss;
+ int offset;
+ __wsum csum;
+
+ if (skb->encapsulation &&
+ skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL) {
+ segs = skb_udp_tunnel_segment(skb, features);
+ goto out;
+ }
mss = skb_shinfo(skb)->gso_size;
if (unlikely(skb->len <= mss))
goto out;
}
+ /* Do software UFO. Complete and fill in the UDP checksum as
+ * HW cannot do checksum of UDP packets sent as multiple
+ * IP fragments.
+ */
+ offset = skb_checksum_start_offset(skb);
+ csum = skb_checksum(skb, offset, skb->len - offset, 0);
+ offset += skb->csum_offset;
+ *(__sum16 *)(skb->data + offset) = csum_fold(csum);
+ skb->ip_summed = CHECKSUM_NONE;
+
/* Fragment the skb. IP headers of the fragments are updated in
* inet_gso_segment()
*/
- if (skb->encapsulation && skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL)
- segs = skb_udp_tunnel_segment(skb, features);
- else {
- int offset;
- __wsum csum;
-
- /* Do software UFO. Complete and fill in the UDP checksum as
- * HW cannot do checksum of UDP packets sent as multiple
- * IP fragments.
- */
- offset = skb_checksum_start_offset(skb);
- csum = skb_checksum(skb, offset, skb->len - offset, 0);
- offset += skb->csum_offset;
- *(__sum16 *)(skb->data + offset) = csum_fold(csum);
- skb->ip_summed = CHECKSUM_NONE;
-
- segs = skb_segment(skb, features);
- }
+ segs = skb_segment(skb, features);
out:
return segs;
}
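
Software UFO has to finish the UDP checksum before segmenting because the
resulting IP fragments cannot carry per-fragment hardware checksum state.
The fold at the heart of it is ordinary Internet-checksum arithmetic; a
standalone sketch (simplified to a contiguous buffer rather than skb
fragments, so only an illustration of csum_fold()/skb_checksum()):

	#include <stddef.h>
	#include <stdint.h>

	/* Collapse a 32-bit one's-complement sum to 16 bits, then invert
	 * to get the value stored in the header.
	 */
	static uint16_t csum_fold32(uint32_t sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);	/* fold carries once */
		sum = (sum & 0xffff) + (sum >> 16);	/* fold any new carry */
		return (uint16_t)~sum;
	}

	static uint16_t inet_checksum(const uint8_t *data, size_t len)
	{
		uint32_t sum = 0;

		while (len > 1) {
			sum += (uint32_t)data[0] << 8 | data[1];
			data += 2;
			len -= 2;
		}
		if (len)				/* odd trailing byte */
			sum += (uint32_t)data[0] << 8;

		return csum_fold32(sum);
	}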
static void addrconf_join_anycast(struct inet6_ifaddr *ifp)
{
struct in6_addr addr;
- if (ifp->prefix_len == 127) /* RFC 6164 */
+ if (ifp->prefix_len >= 127) /* RFC 6164 */
return;
ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len);
if (ipv6_addr_any(&addr))
static void addrconf_leave_anycast(struct inet6_ifaddr *ifp)
{
struct in6_addr addr;
- if (ifp->prefix_len == 127) /* RFC 6164 */
+ if (ifp->prefix_len >= 127) /* RFC 6164 */
return;
ipv6_addr_prefix(&addr, &ifp->addr, ifp->prefix_len);
if (ipv6_addr_any(&addr))
if (sp_ifa->rt)
continue;
- sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0);
+ sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, false);
/* Failure cases are ignored */
if (!IS_ERR(sp_rt)) {
&inet6_addr_lst[i], addr_lst) {
unsigned long age;
- if (ifp->flags & IFA_F_PERMANENT)
+ /* When setting preferred_lft to a value not zero or
+ * infinity, while valid_lft is infinity
+ * IFA_F_PERMANENT has a non-infinity life time.
+ */
+ if ((ifp->flags & IFA_F_PERMANENT) &&
+ (ifp->prefered_lft == INFINITY_LIFE_TIME))
continue;
spin_lock(&ifp->lock);
ifp->flags |= IFA_F_DEPRECATED;
}
- if (time_before(ifp->tstamp + ifp->valid_lft * HZ, next))
+ if ((ifp->valid_lft != INFINITY_LIFE_TIME) &&
+ (time_before(ifp->tstamp + ifp->valid_lft * HZ, next)))
next = ifp->tstamp + ifp->valid_lft * HZ;
spin_unlock(&ifp->lock);
put_ifaddrmsg(nlh, ifa->prefix_len, ifa->flags, rt_scope(ifa->scope),
ifa->idev->dev->ifindex);
- if (!(ifa->flags&IFA_F_PERMANENT)) {
+ if (!((ifa->flags&IFA_F_PERMANENT) &&
+ (ifa->prefered_lft == INFINITY_LIFE_TIME))) {
preferred = ifa->prefered_lft;
valid = ifa->valid_lft;
if (preferred != INFINITY_LIFE_TIME) {
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- usin->sin6_addr = flowlabel->dst;
}
}
/*
* Handle MSG_ERRQUEUE
*/
-int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
{
struct ipv6_pinfo *np = inet6_sk(sk);
struct sock_exterr_skb *serr;
&sin->sin6_addr);
sin->sin6_scope_id = 0;
}
+ *addr_len = sizeof(*sin);
}
memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));
if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL) {
sin->sin6_family = AF_INET6;
sin->sin6_flowinfo = 0;
+ sin->sin6_port = 0;
if (skb->protocol == htons(ETH_P_IPV6)) {
sin->sin6_addr = ipv6_hdr(skb)->saddr;
if (np->rxopt.all)
/*
* Handle IPV6_RECVPATHMTU
*/
-int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len)
+int ipv6_recv_rxpmtu(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len)
{
struct ipv6_pinfo *np = inet6_sk(sk);
struct sk_buff *skb;
sin->sin6_port = 0;
sin->sin6_scope_id = mtu_info.ip6m_addr.sin6_scope_id;
sin->sin6_addr = mtu_info.ip6m_addr.sin6_addr;
+ *addr_len = sizeof(*sin);
}
put_cmsg(msg, SOL_IPV6, IPV6_PATHMTU, sizeof(mtu_info), &mtu_info);
static bool fib6_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg)
{
struct rt6_info *rt = (struct rt6_info *) arg->result;
- struct net_device *dev = rt->rt6i_idev->dev;
+ struct net_device *dev = NULL;
+
+ if (rt->rt6i_idev)
+ dev = rt->rt6i_idev->dev;
+
/* do not accept result if the route does
* not meet the required prefix length
*/
}
rcu_read_unlock_bh();
- IP6_INC_STATS_BH(dev_net(dst->dev),
- ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
+ IP6_INC_STATS(dev_net(dst->dev),
+ ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
kfree_skb(skb);
return -EINVAL;
}
fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
(opt ? opt->opt_nflen : 0);
- maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen - sizeof(struct frag_hdr);
+ maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
+ sizeof(struct frag_hdr);
if (mtu <= sizeof(struct ipv6hdr) + IPV6_MAXPLEN) {
- if (cork->length + length > sizeof(struct ipv6hdr) + IPV6_MAXPLEN - fragheaderlen) {
- ipv6_local_error(sk, EMSGSIZE, fl6, mtu-exthdrlen);
+ unsigned int maxnonfragsize, headersize;
+
+ headersize = sizeof(struct ipv6hdr) +
+ (opt ? opt->tot_len : 0) +
+ (dst_allfrag(&rt->dst) ?
+ sizeof(struct frag_hdr) : 0) +
+ rt->rt6i_nfheader_len;
+
+ maxnonfragsize = (np->pmtudisc >= IPV6_PMTUDISC_DO) ?
+ mtu : sizeof(struct ipv6hdr) + IPV6_MAXPLEN;
+
+ /* dontfrag active */
+ if ((cork->length + length > mtu - headersize) && dontfrag &&
+ (sk->sk_protocol == IPPROTO_UDP ||
+ sk->sk_protocol == IPPROTO_RAW)) {
+ ipv6_local_rxpmtu(sk, fl6, mtu - headersize +
+ sizeof(struct ipv6hdr));
+ goto emsgsize;
+ }
+
+ if (cork->length + length > maxnonfragsize - headersize) {
+emsgsize:
+ ipv6_local_error(sk, EMSGSIZE, fl6,
+ mtu - headersize +
+ sizeof(struct ipv6hdr));
return -EMSGSIZE;
}
}
* --yoshfuji
*/
- if ((length > mtu) && dontfrag && (sk->sk_protocol == IPPROTO_UDP ||
- sk->sk_protocol == IPPROTO_RAW)) {
- ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);
- return -EMSGSIZE;
- }
-
skb = skb_peek_tail(&sk->sk_write_queue);
cork->length += length;
if (((length > mtu) ||
static struct net_device_stats *ip6_get_stats(struct net_device *dev)
{
- struct pcpu_tstats sum = { 0 };
+ struct pcpu_tstats tmp, sum = { 0 };
int i;
for_each_possible_cpu(i) {
+ unsigned int start;
const struct pcpu_tstats *tstats = per_cpu_ptr(dev->tstats, i);
- sum.rx_packets += tstats->rx_packets;
- sum.rx_bytes += tstats->rx_bytes;
- sum.tx_packets += tstats->tx_packets;
- sum.tx_bytes += tstats->tx_bytes;
+ do {
+ start = u64_stats_fetch_begin_bh(&tstats->syncp);
+ tmp.rx_packets = tstats->rx_packets;
+ tmp.rx_bytes = tstats->rx_bytes;
+ tmp.tx_packets = tstats->tx_packets;
+ tmp.tx_bytes = tstats->tx_bytes;
+ } while (u64_stats_fetch_retry_bh(&tstats->syncp, start));
+
+ sum.rx_packets += tmp.rx_packets;
+ sum.rx_bytes += tmp.rx_bytes;
+ sum.tx_packets += tmp.tx_packets;
+ sum.tx_bytes += tmp.tx_bytes;
}
dev->stats.rx_packets = sum.rx_packets;
dev->stats.rx_bytes = sum.rx_bytes;
}
tstats = this_cpu_ptr(t->dev->tstats);
+ u64_stats_update_begin(&tstats->syncp);
tstats->rx_packets++;
tstats->rx_bytes += skb->len;
+ u64_stats_update_end(&tstats->syncp);
netif_rx(skb);
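
The syncp pairs added here are what make 64-bit counters readable on 32-bit
machines: the writer bumps a sequence count around the update, and the reader
(the u64_stats_fetch_begin_bh()/retry loop in ip6_get_stats() above) retries
until it sees an even, unchanged count. Schematically, in C11 (single writer
assumed, as with per-cpu stats; a real seqcount also needs the memory
barriers the kernel primitives provide):

	#include <stdatomic.h>
	#include <stdint.h>

	struct tstats {
		atomic_uint seq;	/* odd while an update is in flight */
		uint64_t rx_packets;
		uint64_t rx_bytes;
	};

	static void rx_account(struct tstats *s, uint64_t bytes)
	{
		atomic_fetch_add(&s->seq, 1);	/* begin: count goes odd */
		s->rx_packets++;
		s->rx_bytes += bytes;
		atomic_fetch_add(&s->seq, 1);	/* end: count goes even again */
	}

	static void rx_snapshot(struct tstats *s, uint64_t *packets, uint64_t *bytes)
	{
		unsigned int start;

		do {
			start = atomic_load(&s->seq);
			*packets = s->rx_packets;
			*bytes = s->rx_bytes;
			/* retry if a writer was or became active meanwhile */
		} while ((start & 1) || start != atomic_load(&s->seq));
	}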
struct ip6_tnl __rcu **tnls[2];
};
-static struct net_device_stats *vti6_get_stats(struct net_device *dev)
-{
- struct pcpu_tstats sum = { 0 };
- int i;
-
- for_each_possible_cpu(i) {
- const struct pcpu_tstats *tstats = per_cpu_ptr(dev->tstats, i);
-
- sum.rx_packets += tstats->rx_packets;
- sum.rx_bytes += tstats->rx_bytes;
- sum.tx_packets += tstats->tx_packets;
- sum.tx_bytes += tstats->tx_bytes;
- }
- dev->stats.rx_packets = sum.rx_packets;
- dev->stats.rx_bytes = sum.rx_bytes;
- dev->stats.tx_packets = sum.tx_packets;
- dev->stats.tx_bytes = sum.tx_bytes;
- return &dev->stats;
-}
-
#define for_each_vti6_tunnel_rcu(start) \
for (t = rcu_dereference(start); t; t = rcu_dereference(t->next))
}
tstats = this_cpu_ptr(t->dev->tstats);
+ u64_stats_update_begin(&tstats->syncp);
tstats->rx_packets++;
tstats->rx_bytes += skb->len;
+ u64_stats_update_end(&tstats->syncp);
skb->mark = 0;
secpath_reset(skb);
.ndo_start_xmit = vti6_tnl_xmit,
.ndo_do_ioctl = vti6_ioctl,
.ndo_change_mtu = vti6_change_mtu,
- .ndo_get_stats = vti6_get_stats,
+ .ndo_get_stats64 = ip_tunnel_get_stats64,
};
/**
static inline int vti6_dev_init_gen(struct net_device *dev)
{
struct ip6_tnl *t = netdev_priv(dev);
+ int i;
t->dev = dev;
t->net = dev_net(dev);
dev->tstats = alloc_percpu(struct pcpu_tstats);
if (!dev->tstats)
return -ENOMEM;
+ for_each_possible_cpu(i) {
+ struct pcpu_tstats *stats;
+ stats = per_cpu_ptr(dev->tstats, i);
+ u64_stats_init(&stats->syncp);
+ }
return 0;
}
ri->prefix_len == 0)
continue;
#endif
+ if (ri->prefix_len == 0 &&
+ !in6_dev->cnf.accept_ra_defrtr)
+ continue;
if (ri->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen)
continue;
rt6_route_rcv(skb->dev, (u8*)p, (p->nd_opt_len) << 3,
static struct xt_target synproxy_tg6_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV6,
+ .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg6,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg6_check,
/* Compatibility glue so we can support IPv6 when it's compiled as a module */
-static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len)
+static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
+ int *addr_len)
{
return -EAFNOSUPPORT;
}
}
EXPORT_SYMBOL(inet6_add_protocol);
-/*
- * Remove a protocol from the hash tables.
- */
-
int inet6_del_protocol(const struct inet6_protocol *prot, unsigned char protocol)
{
int ret;
return -EOPNOTSUPP;
if (flags & MSG_ERRQUEUE)
- return ipv6_recv_error(sk, msg, len);
+ return ipv6_recv_error(sk, msg, len, addr_len);
if (np->rxpmtu && np->rxopt.bits.rxpmtu)
- return ipv6_recv_rxpmtu(sk, msg, len);
+ return ipv6_recv_rxpmtu(sk, msg, len, addr_len);
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- daddr = &flowlabel->dst;
}
}
static int ip6_pkt_discard(struct sk_buff *skb);
static int ip6_pkt_discard_out(struct sk_buff *skb);
+static int ip6_pkt_prohibit(struct sk_buff *skb);
+static int ip6_pkt_prohibit_out(struct sk_buff *skb);
static void ip6_link_failure(struct sk_buff *skb);
static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
struct sk_buff *skb, u32 mtu);
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
-static int ip6_pkt_prohibit(struct sk_buff *skb);
-static int ip6_pkt_prohibit_out(struct sk_buff *skb);
-
static const struct rt6_info ip6_prohibit_entry_template = {
.dst = {
.__refcnt = ATOMIC_INIT(1),
goto out;
}
}
- rt->dst.output = ip6_pkt_discard_out;
- rt->dst.input = ip6_pkt_discard;
rt->rt6i_flags = RTF_REJECT|RTF_NONEXTHOP;
switch (cfg->fc_type) {
case RTN_BLACKHOLE:
rt->dst.error = -EINVAL;
+ rt->dst.output = dst_discard;
+ rt->dst.input = dst_discard;
break;
case RTN_PROHIBIT:
rt->dst.error = -EACCES;
+ rt->dst.output = ip6_pkt_prohibit_out;
+ rt->dst.input = ip6_pkt_prohibit;
break;
case RTN_THROW:
- rt->dst.error = -EAGAIN;
- break;
default:
- rt->dst.error = -ENETUNREACH;
+ rt->dst.error = (cfg->fc_type == RTN_THROW) ? -EAGAIN
+ : -ENETUNREACH;
+ rt->dst.output = ip6_pkt_discard_out;
+ rt->dst.input = ip6_pkt_discard;
break;
}
goto install_route;
else
rt->rt6i_gateway = *dest;
rt->rt6i_flags = ort->rt6i_flags;
- if ((ort->rt6i_flags & (RTF_DEFAULT | RTF_ADDRCONF)) ==
- (RTF_DEFAULT | RTF_ADDRCONF))
- rt6_set_from(rt, ort);
+ rt6_set_from(rt, ort);
rt->rt6i_metric = 0;
#ifdef CONFIG_IPV6_SUBTREES
return ip6_pkt_drop(skb, ICMPV6_NOROUTE, IPSTATS_MIB_OUTNOROUTES);
}
-#ifdef CONFIG_IPV6_MULTIPLE_TABLES
-
static int ip6_pkt_prohibit(struct sk_buff *skb)
{
return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_INNOROUTES);
return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_OUTNOROUTES);
}
-#endif
-
/*
* Allocate a dst for local (unicast / anycast) address.
*/
bool anycast)
{
struct net *net = dev_net(idev->dev);
- struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev, 0, NULL);
-
- if (!rt) {
- net_warn_ratelimited("Maximum number of routes reached, consider increasing route/max_size\n");
+ struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev,
+ DST_NOCOUNT, NULL);
+ if (!rt)
return ERR_PTR(-ENOMEM);
- }
in6_dev_hold(idev);
dev_put(dev);
}
+/* Generate icmpv6 with type/code ICMPV6_DEST_UNREACH/ICMPV6_ADDR_UNREACH
+ * if sufficient data bytes are available
+ */
+static int ipip6_err_gen_icmpv6_unreach(struct sk_buff *skb)
+{
+ const struct iphdr *iph = (const struct iphdr *) skb->data;
+ struct rt6_info *rt;
+ struct sk_buff *skb2;
+
+ if (!pskb_may_pull(skb, iph->ihl * 4 + sizeof(struct ipv6hdr) + 8))
+ return 1;
+
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+
+ if (!skb2)
+ return 1;
+
+ skb_dst_drop(skb2);
+ skb_pull(skb2, iph->ihl * 4);
+ skb_reset_network_header(skb2);
+
+ rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0, 0);
+
+ if (rt && rt->dst.dev)
+ skb2->dev = rt->dst.dev;
+
+ icmpv6_send(skb2, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
+
+ if (rt)
+ ip6_rt_put(rt);
+
+ kfree_skb(skb2);
+
+ return 0;
+}
static int ipip6_err(struct sk_buff *skb, u32 info)
{
-
-/* All the routers (except for Linux) return only
- 8 bytes of packet payload. It means, that precise relaying of
- ICMP in the real Internet is absolutely infeasible.
- */
const struct iphdr *iph = (const struct iphdr *)skb->data;
const int type = icmp_hdr(skb)->type;
const int code = icmp_hdr(skb)->code;
case ICMP_DEST_UNREACH:
switch (code) {
case ICMP_SR_FAILED:
- case ICMP_PORT_UNREACH:
/* Impossible event. */
return 0;
default:
goto out;
err = 0;
+ if (!ipip6_err_gen_icmpv6_unreach(skb))
+ goto out;
+
if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED)
goto out;
}
tstats = this_cpu_ptr(tunnel->dev->tstats);
+ u64_stats_update_begin(&tstats->syncp);
tstats->rx_packets++;
tstats->rx_bytes += skb->len;
+ u64_stats_update_end(&tstats->syncp);
netif_rx(skb);
if (tunnel->parms.iph.daddr && skb_dst(skb))
skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
- if (skb->len > mtu) {
+ if (skb->len > mtu && !skb_is_gso(skb)) {
icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
ip_rt_put(rt);
goto tx_error;
if (!new_skb) {
ip_rt_put(rt);
dev->stats.tx_dropped++;
- dev_kfree_skb(skb);
+ kfree_skb(skb);
return NETDEV_TX_OK;
}
if (skb->sk)
tos = INET_ECN_encapsulate(tos, ipv6_get_dsfield(iph6));
skb = iptunnel_handle_offloads(skb, false, SKB_GSO_SIT);
- if (IS_ERR(skb))
+ if (IS_ERR(skb)) {
+ ip_rt_put(rt);
goto out;
+ }
err = iptunnel_xmit(rt, skb, fl4.saddr, fl4.daddr, IPPROTO_IPV6, tos,
ttl, df, !net_eq(tunnel->net, dev_net(dev)));
tx_error_icmp:
dst_link_failure(skb);
tx_error:
- dev_kfree_skb(skb);
+ kfree_skb(skb);
out:
dev->stats.tx_errors++;
return NETDEV_TX_OK;
tx_err:
dev->stats.tx_errors++;
- dev_kfree_skb(skb);
+ kfree_skb(skb);
return NETDEV_TX_OK;
}
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- usin->sin6_addr = flowlabel->dst;
fl6_sock_release(flowlabel);
}
}
{
const struct ipv6hdr *iph = skb_gro_network_header(skb);
__wsum wsum;
- __sum16 sum;
+
+ /* Don't bother verifying checksum if we're going to flush anyway. */
+ if (NAPI_GRO_CB(skb)->flush)
+ goto skip_csum;
+
+ wsum = skb->csum;
switch (skb->ip_summed) {
+ case CHECKSUM_NONE:
+ wsum = skb_checksum(skb, skb_gro_offset(skb), skb_gro_len(skb),
+ wsum);
+
+ /* fall through */
+
case CHECKSUM_COMPLETE:
if (!tcp_v6_check(skb_gro_len(skb), &iph->saddr, &iph->daddr,
- skb->csum)) {
+ wsum)) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
break;
}
-flush:
+
NAPI_GRO_CB(skb)->flush = 1;
return NULL;
-
- case CHECKSUM_NONE:
- wsum = ~csum_unfold(csum_ipv6_magic(&iph->saddr, &iph->daddr,
- skb_gro_len(skb),
- IPPROTO_TCP, 0));
- sum = csum_fold(skb_checksum(skb,
- skb_gro_offset(skb),
- skb_gro_len(skb),
- wsum));
- if (sum)
- goto flush;
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- break;
}
+skip_csum:
return tcp_gro_receive(head, skb);
}
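With this rework both checksum states share a single validation path: CHECKSUM_NONE first folds the packet data into wsum via skb_checksum() and then falls through to the same tcp_v6_check() test used for CHECKSUM_COMPLETE, instead of keeping a separate csum_ipv6_magic()-based branch, and packets already marked for flushing skip the checksum work entirely.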
bool slow;
if (flags & MSG_ERRQUEUE)
- return ipv6_recv_error(sk, msg, len);
+ return ipv6_recv_error(sk, msg, len, addr_len);
if (np->rxpmtu && np->rxopt.bits.rxpmtu)
- return ipv6_recv_rxpmtu(sk, msg, len);
+ return ipv6_recv_rxpmtu(sk, msg, len, addr_len);
try_again:
skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- daddr = &flowlabel->dst;
}
}
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- daddr = &flowlabel->dst;
}
}
*addr_len = sizeof(*lsa);
if (flags & MSG_ERRQUEUE)
- return ipv6_recv_error(sk, msg, len);
+ return ipv6_recv_error(sk, msg, len, addr_len);
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
unsigned long cpu_flags;
size_t copied = 0;
u32 peek_seq = 0;
- u32 *seq;
+ u32 *seq, skb_len;
unsigned long used;
int target; /* Read at least this many bytes */
long timeo;
}
continue;
found_ok_skb:
+ skb_len = skb->len;
/* Ok so how much can we use? */
used = skb->len - offset;
if (len < used)
}
/* Partial read */
- if (used + offset < skb->len)
+ if (used + offset < skb_len)
continue;
} while (len > 0);
u32 *multi)
{
return ip1->ipcmp == ip2->ipcmp &&
- ip2->ccmp == ip2->ccmp;
+ ip1->ccmp == ip2->ccmp;
}
static inline int
#include <net/ip_vs.h>
#include <net/netfilter/nf_conntrack_core.h>
#include <net/netfilter/nf_conntrack_expect.h>
+#include <net/netfilter/nf_conntrack_seqadj.h>
#include <net/netfilter/nf_conntrack_helper.h>
#include <net/netfilter/nf_conntrack_zones.h>
if (CTINFO2DIR(ctinfo) != IP_CT_DIR_ORIGINAL)
return;
+ /* Applications may adjust TCP seqs */
+ if (cp->app && nf_ct_protonum(ct) == IPPROTO_TCP &&
+ !nfct_seqadj(ct) && !nfct_seqadj_ext_add(ct))
+ return;
+
/*
* The connection is not yet in the hashtable, so we update it.
* CIP->VIP will remain the same, so leave the tuple in
if (off == 0)
return 0;
+ if (unlikely(!seqadj)) {
+ WARN(1, "Wrong seqadj usage, missing nfct_seqadj_ext_add()\n");
+ return 0;
+ }
+
set_bit(IPS_SEQ_ADJUST_BIT, &ct->status);
spin_lock_bh(&ct->lock);
void nf_conntrack_tstamp_pernet_fini(struct net *net)
{
nf_conntrack_tstamp_fini_sysctl(net);
- nf_ct_extend_unregister(&tstamp_extend);
}
int nf_conntrack_tstamp_init(void)
int err, i = 0;
list_for_each_entry(chain, &table->chains, list) {
+ if (!(chain->flags & NFT_BASE_CHAIN))
+ continue;
+
err = nf_register_hook(&nft_base_chain(chain)->ops);
if (err < 0)
goto err;
return 0;
err:
list_for_each_entry(chain, &table->chains, list) {
+ if (!(chain->flags & NFT_BASE_CHAIN))
+ continue;
+
if (i-- <= 0)
break;
{
struct nft_chain *chain;
- list_for_each_entry(chain, &table->chains, list)
- nf_unregister_hook(&nft_base_chain(chain)->ops);
+ list_for_each_entry(chain, &table->chains, list) {
+ if (chain->flags & NFT_BASE_CHAIN)
+ nf_unregister_hook(&nft_base_chain(chain)->ops);
+ }
return 0;
}
return -ENOENT;
}
+static int nf_table_delrule_by_chain(struct nft_ctx *ctx)
+{
+ struct nft_rule *rule;
+ int err;
+
+ list_for_each_entry(rule, &ctx->chain->rules, list) {
+ err = nf_tables_delrule_one(ctx, rule);
+ if (err < 0)
+ return err;
+ }
+ return 0;
+}
+
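As the reworked nf_tables_delrule() below shows, the chain attribute becomes optional: when NFTA_RULE_CHAIN is present the old behaviour (delete one rule by handle, or flush the chain) is preserved, and when it is absent nf_table_delrule_by_chain() is applied to every chain in the table.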
static int nf_tables_delrule(struct sock *nlsk, struct sk_buff *skb,
const struct nlmsghdr *nlh,
const struct nlattr * const nla[])
const struct nft_af_info *afi;
struct net *net = sock_net(skb->sk);
const struct nft_table *table;
- struct nft_chain *chain;
- struct nft_rule *rule, *tmp;
+ struct nft_chain *chain = NULL;
+ struct nft_rule *rule;
int family = nfmsg->nfgen_family, err = 0;
struct nft_ctx ctx;
if (IS_ERR(table))
return PTR_ERR(table);
- chain = nf_tables_chain_lookup(table, nla[NFTA_RULE_CHAIN]);
- if (IS_ERR(chain))
- return PTR_ERR(chain);
+ if (nla[NFTA_RULE_CHAIN]) {
+ chain = nf_tables_chain_lookup(table, nla[NFTA_RULE_CHAIN]);
+ if (IS_ERR(chain))
+ return PTR_ERR(chain);
+ }
nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
- if (nla[NFTA_RULE_HANDLE]) {
- rule = nf_tables_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
- if (IS_ERR(rule))
- return PTR_ERR(rule);
+ if (chain) {
+ if (nla[NFTA_RULE_HANDLE]) {
+ rule = nf_tables_rule_lookup(chain,
+ nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule))
+ return PTR_ERR(rule);
- err = nf_tables_delrule_one(&ctx, rule);
- } else {
- /* Remove all rules in this chain */
- list_for_each_entry_safe(rule, tmp, &chain->rules, list) {
err = nf_tables_delrule_one(&ctx, rule);
+ } else {
+ err = nf_table_delrule_by_chain(&ctx);
+ }
+ } else {
+ list_for_each_entry(chain, &table->chains, list) {
+ ctx.chain = chain;
+ err = nf_table_delrule_by_chain(&ctx);
if (err < 0)
break;
}
struct netlink_callback *cb)
{
const struct nft_set *set;
- unsigned int idx = 0, s_idx = cb->args[0];
+ unsigned int idx, s_idx = cb->args[0];
struct nft_table *table, *cur_table = (struct nft_table *)cb->args[2];
if (cb->args[1])
return skb->len;
list_for_each_entry(table, &ctx->afi->tables, list) {
- if (cur_table && cur_table != table)
- continue;
+ if (cur_table) {
+ if (cur_table != table)
+ continue;
+ cur_table = NULL;
+ }
ctx->table = table;
+ idx = 0;
list_for_each_entry(set, &ctx->table->sets, list) {
if (idx < s_idx)
goto cont;
enum nft_registers dreg;
dreg = nft_type_to_reg(set->dtype);
- return nft_validate_data_load(ctx, dreg, &elem->data, set->dtype);
+ return nft_validate_data_load(ctx, dreg, &elem->data,
+ set->dtype == NFT_DATA_VERDICT ?
+ NFT_DATA_VERDICT : NFT_DATA_VALUE);
}
int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
#ifdef CONFIG_PROC_FS
remove_proc_entry("nfnetlink_log", net->nf.proc_netfilter);
#endif
+ nf_log_unset(net, &nfulnl_logger);
}
static struct pernet_operations nfnl_log_net_ops = {
{
struct nft_exthdr *priv = nft_expr_priv(expr);
struct nft_data *dest = &data[priv->dreg];
- unsigned int offset;
+ unsigned int offset = 0;
int err;
err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL);
add_timer(&ht->timer);
}
-static void htable_destroy(struct xt_hashlimit_htable *hinfo)
+static void htable_remove_proc_entry(struct xt_hashlimit_htable *hinfo)
{
struct hashlimit_net *hashlimit_net = hashlimit_pernet(hinfo->net);
struct proc_dir_entry *parent;
- del_timer_sync(&hinfo->timer);
-
if (hinfo->family == NFPROTO_IPV4)
parent = hashlimit_net->ipt_hashlimit;
else
parent = hashlimit_net->ip6t_hashlimit;
- if(parent != NULL)
+ if (parent != NULL)
remove_proc_entry(hinfo->name, parent);
+}
+static void htable_destroy(struct xt_hashlimit_htable *hinfo)
+{
+ del_timer_sync(&hinfo->timer);
+ htable_remove_proc_entry(hinfo);
htable_selective_cleanup(hinfo, select_all);
kfree(hinfo->name);
vfree(hinfo);
static void __net_exit hashlimit_proc_net_exit(struct net *net)
{
struct xt_hashlimit_htable *hinfo;
- struct proc_dir_entry *pde;
struct hashlimit_net *hashlimit_net = hashlimit_pernet(net);
- /* recent_net_exit() is called before recent_mt_destroy(). Make sure
- * that the parent xt_recent proc entry is is empty before trying to
- * remove it.
+ /* hashlimit_net_exit() is called before hashlimit_mt_destroy().
+ * Make sure that the parent ipt_hashlimit and ip6t_hashlimit proc
+ * entries are empty before trying to remove them.
*/
mutex_lock(&hashlimit_mutex);
- pde = hashlimit_net->ipt_hashlimit;
- if (pde == NULL)
- pde = hashlimit_net->ip6t_hashlimit;
-
hlist_for_each_entry(hinfo, &hashlimit_net->htables, node)
- remove_proc_entry(hinfo->name, pde);
-
+ htable_remove_proc_entry(hinfo);
hashlimit_net->ipt_hashlimit = NULL;
hashlimit_net->ip6t_hashlimit = NULL;
mutex_unlock(&hashlimit_mutex);
* Bit 17 is marked as already used since the VFS quota code
* also abused this API and relied on family == group ID, we
* cater to that by giving it a static family and group ID.
+ * Bit 18 is marked as already used since the PMCRAID driver
+ * did the same thing as the VFS quota code (maybe copied?)
*/
static unsigned long mc_group_start = 0x3 | BIT(GENL_ID_CTRL) |
- BIT(GENL_ID_VFS_DQUOT);
+ BIT(GENL_ID_VFS_DQUOT) |
+ BIT(GENL_ID_PMCRAID);
static unsigned long *mc_groups = &mc_group_start;
static unsigned long mc_groups_longs = 1;
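Assuming the uapi values GENL_ID_CTRL = 0x10, GENL_ID_VFS_DQUOT = 0x11 and GENL_ID_PMCRAID = 0x12, the initial bitmap works out to 0x3 | 1 << 16 | 1 << 17 | 1 << 18 = 0x00070003, i.e. multicast groups 0, 1, 16, 17 and 18 are reserved before any family registers.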
for (i = 0; i <= GENL_MAX_ID - GENL_MIN_ID; i++) {
if (id_gen_idx != GENL_ID_VFS_DQUOT &&
+ id_gen_idx != GENL_ID_PMCRAID &&
!genl_family_find_byid(id_gen_idx))
return id_gen_idx;
if (++id_gen_idx > GENL_MAX_ID)
{
int first_id;
int n_groups = family->n_mcgrps;
- int err, i;
+ int err = 0, i;
bool groups_allocated = false;
if (!n_groups)
} else if (strcmp(family->name, "NET_DM") == 0) {
first_id = 1;
BUG_ON(n_groups != 1);
- } else if (strcmp(family->name, "VFS_DQUOT") == 0) {
+ } else if (family->id == GENL_ID_VFS_DQUOT) {
first_id = GENL_ID_VFS_DQUOT;
BUG_ON(n_groups != 1);
+ } else if (family->id == GENL_ID_PMCRAID) {
+ first_id = GENL_ID_PMCRAID;
+ BUG_ON(n_groups != 1);
} else {
groups_allocated = true;
err = genl_allocate_reserve_groups(n_groups, &first_id);
static void __fanout_unlink(struct sock *sk, struct packet_sock *po);
static void __fanout_link(struct sock *sk, struct packet_sock *po);
+static struct net_device *packet_cached_dev_get(struct packet_sock *po)
+{
+ struct net_device *dev;
+
+ rcu_read_lock();
+ dev = rcu_dereference(po->cached_dev);
+ if (likely(dev))
+ dev_hold(dev);
+ rcu_read_unlock();
+
+ return dev;
+}
+
+static void packet_cached_dev_assign(struct packet_sock *po,
+ struct net_device *dev)
+{
+ rcu_assign_pointer(po->cached_dev, dev);
+}
+
+static void packet_cached_dev_reset(struct packet_sock *po)
+{
+ RCU_INIT_POINTER(po->cached_dev, NULL);
+}
+
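A minimal sketch of the caller pattern these helpers establish, mirroring the tpacket_snd()/packet_snd() hunks further down (the function name and error value here are illustrative only):

static int packet_xmit_sketch(struct packet_sock *po)
{
	struct net_device *dev;

	/* Takes a device reference under rcu_read_lock(); must be
	 * balanced with dev_put() once the frame has been handed off.
	 */
	dev = packet_cached_dev_get(po);
	if (dev == NULL)
		return -ENXIO;	/* nothing cached: socket not bound yet */

	/* ... build the frame and queue it on dev ... */

	dev_put(dev);
	return 0;
}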
/* register_prot_hook must be invoked with the po->bind_lock held,
* or from a context in which asynchronous accesses to the packet
* socket is not possible (packet_create()).
struct packet_sock *po = pkt_sk(sk);
if (!po->running) {
- if (po->fanout) {
+ if (po->fanout)
__fanout_link(sk, po);
- } else {
+ else
dev_add_pack(&po->prot_hook);
- rcu_assign_pointer(po->cached_dev, po->prot_hook.dev);
- }
sock_hold(sk);
po->running = 1;
struct packet_sock *po = pkt_sk(sk);
po->running = 0;
- if (po->fanout) {
+
+ if (po->fanout)
__fanout_unlink(sk, po);
- } else {
+ else
__dev_remove_pack(&po->prot_hook);
- RCU_INIT_POINTER(po->cached_dev, NULL);
- }
__sock_put(sk);
pkc = tx_ring ? &po->tx_ring.prb_bdqc : &po->rx_ring.prb_bdqc;
- spin_lock(&rb_queue->lock);
+ spin_lock_bh(&rb_queue->lock);
pkc->delete_blk_timer = 1;
- spin_unlock(&rb_queue->lock);
+ spin_unlock_bh(&rb_queue->lock);
prb_del_retire_blk_timer(pkc);
}
return tp_len;
}
-static struct net_device *packet_cached_dev_get(struct packet_sock *po)
-{
- struct net_device *dev;
-
- rcu_read_lock();
- dev = rcu_dereference(po->cached_dev);
- if (dev)
- dev_hold(dev);
- rcu_read_unlock();
-
- return dev;
-}
-
static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
{
struct sk_buff *skb;
mutex_lock(&po->pg_vec_lock);
- if (saddr == NULL) {
+ if (likely(saddr == NULL)) {
dev = packet_cached_dev_get(po);
proto = po->num;
addr = NULL;
* Get and verify the address.
*/
- if (saddr == NULL) {
+ if (likely(saddr == NULL)) {
dev = packet_cached_dev_get(po);
proto = po->num;
addr = NULL;
spin_lock(&po->bind_lock);
unregister_prot_hook(sk, false);
+ packet_cached_dev_reset(po);
+
if (po->prot_hook.dev) {
dev_put(po->prot_hook.dev);
po->prot_hook.dev = NULL;
spin_lock(&po->bind_lock);
unregister_prot_hook(sk, true);
+
po->num = protocol;
po->prot_hook.type = protocol;
if (po->prot_hook.dev)
dev_put(po->prot_hook.dev);
- po->prot_hook.dev = dev;
+ po->prot_hook.dev = dev;
po->ifindex = dev ? dev->ifindex : 0;
+ packet_cached_dev_assign(po, dev);
+
if (protocol == 0)
goto out_unlock;
po = pkt_sk(sk);
sk->sk_family = PF_PACKET;
po->num = proto;
- RCU_INIT_POINTER(po->cached_dev, NULL);
+
+ packet_cached_dev_reset(po);
sk->sk_destruct = packet_sock_destruct;
sk_refcnt_debug_inc(sk);
sk->sk_error_report(sk);
}
if (msg == NETDEV_UNREGISTER) {
+ packet_cached_dev_reset(po);
po->ifindex = -1;
if (po->prot_hook.dev)
dev_put(po->prot_hook.dev);
ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin);
/* due to this, we will claim to support iWARP devices unless we
check node_type. */
- if (ret || cm_id->device->node_type != RDMA_NODE_IB_CA)
+ if (ret || !cm_id->device ||
+ cm_id->device->node_type != RDMA_NODE_IB_CA)
ret = -EADDRNOTAVAIL;
rdsdebug("addr %pI4 ret %d node type %d\n",
&& rm->m_inc.i_hdr.h_flags & RDS_FLAG_CONG_BITMAP) {
rds_cong_map_updated(conn->c_fcong, ~(u64) 0);
scat = &rm->data.op_sg[sg];
- ret = sizeof(struct rds_header) + RDS_CONG_MAP_BYTES;
- ret = min_t(int, ret, scat->length - conn->c_xmit_data_off);
- return ret;
+ ret = max_t(int, RDS_CONG_MAP_BYTES, scat->length);
+ return sizeof(struct rds_header) + ret;
}
/* FIXME we may overallocate here */
if (msg->msg_name) {
struct sockaddr_rose *srose;
+ struct full_sockaddr_rose *full_srose = msg->msg_name;
memset(msg->msg_name, 0, sizeof(struct full_sockaddr_rose));
srose = msg->msg_name;
srose->srose_addr = rose->dest_addr;
srose->srose_call = rose->dest_call;
srose->srose_ndigis = rose->dest_ndigis;
- if (msg->msg_namelen >= sizeof(struct full_sockaddr_rose)) {
- struct full_sockaddr_rose *full_srose = (struct full_sockaddr_rose *)msg->msg_name;
- for (n = 0 ; n < rose->dest_ndigis ; n++)
- full_srose->srose_digis[n] = rose->dest_digis[n];
- msg->msg_namelen = sizeof(struct full_sockaddr_rose);
- } else {
- if (rose->dest_ndigis >= 1) {
- srose->srose_ndigis = 1;
- srose->srose_digi = rose->dest_digis[0];
- }
- msg->msg_namelen = sizeof(struct sockaddr_rose);
- }
+ for (n = 0 ; n < rose->dest_ndigis ; n++)
+ full_srose->srose_digis[n] = rose->dest_digis[n];
+ msg->msg_namelen = sizeof(struct full_sockaddr_rose);
}
skb_free_datagram(sk, skb);
{
struct tc_action_ops *a, **ap;
+ /* Must supply act, dump, cleanup and init */
+ if (!act->act || !act->dump || !act->cleanup || !act->init)
+ return -EINVAL;
+
+ /* Supply defaults */
+ if (!act->lookup)
+ act->lookup = tcf_hash_search;
+ if (!act->walk)
+ act->walk = tcf_generic_walker;
+
write_lock(&act_mod_lock);
for (ap = &act_base; (a = *ap) != NULL; ap = &a->next) {
if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) {
}
while ((a = act) != NULL) {
repeat:
- if (a->ops && a->ops->act) {
+ if (a->ops) {
ret = a->ops->act(skb, a, res);
if (TC_MUNGED & skb->tc_verd) {
/* copied already, allow trampling */
struct tc_action *a;
for (a = act; a; a = act) {
- if (a->ops && a->ops->cleanup) {
+ if (a->ops) {
if (a->ops->cleanup(a, bind) == ACT_P_DELETED)
module_put(a->ops->owner);
act = act->next;
{
int err = -EINVAL;
- if (a->ops == NULL || a->ops->dump == NULL)
+ if (a->ops == NULL)
return err;
return a->ops->dump(skb, a, bind, ref);
}
unsigned char *b = skb_tail_pointer(skb);
struct nlattr *nest;
- if (a->ops == NULL || a->ops->dump == NULL)
+ if (a->ops == NULL)
return err;
if (nla_put_string(skb, TCA_KIND, a->ops->kind))
a->ops = tc_lookup_action(tb[TCA_ACT_KIND]);
if (a->ops == NULL)
goto err_free;
- if (a->ops->lookup == NULL)
- goto err_mod;
err = -ENOENT;
if (a->ops->lookup(a, index) == 0)
goto err_mod;
memset(&a, 0, sizeof(struct tc_action));
a.ops = a_o;
- if (a_o->walk == NULL) {
- WARN(1, "tc_dump_action: %s !capable of dumping table\n",
- a_o->kind);
- goto out_module_put;
- }
-
nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
cb->nlh->nlmsg_type, sizeof(*t), 0);
if (!nlh)
&csum_idx_gen, &csum_hash_info);
if (IS_ERR(pc))
return PTR_ERR(pc);
- p = to_tcf_csum(pc);
ret = ACT_P_CREATED;
} else {
- p = to_tcf_csum(pc);
- if (!ovr) {
- tcf_hash_release(pc, bind, &csum_hash_info);
+ if (bind) /* don't override defaults */
+ return 0;
+ tcf_hash_release(pc, bind, &csum_hash_info);
+ if (!ovr)
return -EEXIST;
- }
}
+ p = to_tcf_csum(pc);
spin_lock_bh(&p->tcf_lock);
p->tcf_action = parm->action;
p->update_flags = parm->update_flags;
.act = tcf_csum,
.dump = tcf_csum_dump,
.cleanup = tcf_csum_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_csum_init,
- .walk = tcf_generic_walker
};
MODULE_DESCRIPTION("Checksum updating actions");
return PTR_ERR(pc);
ret = ACT_P_CREATED;
} else {
- if (!ovr) {
- tcf_hash_release(pc, bind, &gact_hash_info);
+ if (bind) /* don't override defaults */
+ return 0;
+ tcf_hash_release(pc, bind, &gact_hash_info);
+ if (!ovr)
return -EEXIST;
- }
}
gact = to_gact(pc);
.act = tcf_gact,
.dump = tcf_gact_dump,
.cleanup = tcf_gact_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_gact_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002-4)");
return PTR_ERR(pc);
ret = ACT_P_CREATED;
} else {
- if (!ovr) {
- tcf_ipt_release(to_ipt(pc), bind);
+ if (bind) /* don't override defaults */
+ return 0;
+ tcf_ipt_release(to_ipt(pc), bind);
+
+ if (!ovr)
return -EEXIST;
- }
}
ipt = to_ipt(pc);
.act = tcf_ipt,
.dump = tcf_ipt_dump,
.cleanup = tcf_ipt_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_ipt_init,
- .walk = tcf_generic_walker
};
static struct tc_action_ops act_xt_ops = {
.act = tcf_ipt,
.dump = tcf_ipt_dump,
.cleanup = tcf_ipt_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_ipt_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002-13)");
.act = tcf_mirred,
.dump = tcf_mirred_dump,
.cleanup = tcf_mirred_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_mirred_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002)");
&nat_idx_gen, &nat_hash_info);
if (IS_ERR(pc))
return PTR_ERR(pc);
- p = to_tcf_nat(pc);
ret = ACT_P_CREATED;
} else {
- p = to_tcf_nat(pc);
- if (!ovr) {
- tcf_hash_release(pc, bind, &nat_hash_info);
+ if (bind)
+ return 0;
+ tcf_hash_release(pc, bind, &nat_hash_info);
+ if (!ovr)
return -EEXIST;
- }
}
+ p = to_tcf_nat(pc);
spin_lock_bh(&p->tcf_lock);
p->old_addr = parm->old_addr;
.act = tcf_nat,
.dump = tcf_nat_dump,
.cleanup = tcf_nat_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_nat_init,
- .walk = tcf_generic_walker
};
MODULE_DESCRIPTION("Stateless NAT actions");
ret = ACT_P_CREATED;
} else {
p = to_pedit(pc);
- if (!ovr) {
- tcf_hash_release(pc, bind, &pedit_hash_info);
+ tcf_hash_release(pc, bind, &pedit_hash_info);
+ if (bind)
+ return 0;
+ if (!ovr)
return -EEXIST;
- }
+
if (p->tcfp_nkeys && p->tcfp_nkeys != parm->nkeys) {
keys = kmalloc(ksize, GFP_KERNEL);
if (keys == NULL)
.act = tcf_pedit,
.dump = tcf_pedit_dump,
.cleanup = tcf_pedit_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_pedit_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002-4)");
if (bind) {
police->tcf_bindcnt += 1;
police->tcf_refcnt += 1;
+ return 0;
}
if (ovr)
goto override;
- return ret;
+ /* not replacing */
+ return -EEXIST;
}
}
.act = tcf_act_police,
.dump = tcf_act_police_dump,
.cleanup = tcf_act_police_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_act_police_locate,
.walk = tcf_act_police_walker
};
ret = ACT_P_CREATED;
} else {
d = to_defact(pc);
- if (!ovr) {
- tcf_simp_release(d, bind);
+
+ if (bind)
+ return 0;
+ tcf_simp_release(d, bind);
+ if (!ovr)
return -EEXIST;
- }
+
reset_policy(d, defdata, parm);
}
.dump = tcf_simp_dump,
.cleanup = tcf_simp_cleanup,
.init = tcf_simp_init,
- .walk = tcf_generic_walker,
};
MODULE_AUTHOR("Jamal Hadi Salim(2005)");
ret = ACT_P_CREATED;
} else {
d = to_skbedit(pc);
- if (!ovr) {
- tcf_hash_release(pc, bind, &skbedit_hash_info);
+ if (bind)
+ return 0;
+ tcf_hash_release(pc, bind, &skbedit_hash_info);
+ if (!ovr)
return -EEXIST;
- }
}
spin_lock_bh(&d->tcf_lock);
.dump = tcf_skbedit_dump,
.cleanup = tcf_skbedit_cleanup,
.init = tcf_skbedit_init,
- .walk = tcf_generic_walker,
};
MODULE_AUTHOR("Alexander Duyck, <alexander.h.duyck@intel.com>");
sch_tree_lock(sch);
}
+ rate64 = tb[TCA_HTB_RATE64] ? nla_get_u64(tb[TCA_HTB_RATE64]) : 0;
+
+ ceil64 = tb[TCA_HTB_CEIL64] ? nla_get_u64(tb[TCA_HTB_CEIL64]) : 0;
+
+ psched_ratecfg_precompute(&cl->rate, &hopt->rate, rate64);
+ psched_ratecfg_precompute(&cl->ceil, &hopt->ceil, ceil64);
+
/* it used to be a nasty bug here, we have to check that node
* is really leaf before changing cl->un.leaf !
*/
if (!cl->level) {
- cl->quantum = hopt->rate.rate / q->rate2quantum;
+ u64 quantum = cl->rate.rate_bytes_ps;
+
+ do_div(quantum, q->rate2quantum);
+ cl->quantum = min_t(u64, quantum, INT_MAX);
+
if (!hopt->quantum && cl->quantum < 1000) {
pr_warning(
"HTB: quantum of class %X is small. Consider r2q change.\n",
cl->prio = TC_HTB_NUMPRIO - 1;
}
- rate64 = tb[TCA_HTB_RATE64] ? nla_get_u64(tb[TCA_HTB_RATE64]) : 0;
-
- ceil64 = tb[TCA_HTB_CEIL64] ? nla_get_u64(tb[TCA_HTB_CEIL64]) : 0;
-
- psched_ratecfg_precompute(&cl->rate, &hopt->rate, rate64);
- psched_ratecfg_precompute(&cl->ceil, &hopt->ceil, ceil64);
-
cl->buffer = PSCHED_TICKS2NS(hopt->buffer);
cl->cbuffer = PSCHED_TICKS2NS(hopt->cbuffer);
if (rnd < clg->a4) {
clg->state = 4;
return true;
- } else if (clg->a4 < rnd && rnd < clg->a1) {
+ } else if (clg->a4 < rnd && rnd < clg->a1 + clg->a4) {
clg->state = 3;
return true;
- } else if (clg->a1 < rnd)
+ } else if (clg->a1 + clg->a4 < rnd)
clg->state = 1;
break;
clg->state = 2;
if (net_random() < clg->a4)
return true;
+ break;
case 2:
if (net_random() < clg->a2)
clg->state = 1;
- if (clg->a3 > net_random())
+ if (net_random() > clg->a3)
return true;
}
#include <net/netlink.h>
#include <net/sch_generic.h>
#include <net/pkt_sched.h>
+#include <net/tcp.h>
/* Simple Token Bucket Filter.
};
+/* Time to Length: convert a time in ns to a length in bytes,
+ * to determine how many bytes can be sent in the given time.
+ */
+static u64 psched_ns_t2l(const struct psched_ratecfg *r,
+ u64 time_in_ns)
+{
+ /* The formula is:
+ * len = (time_in_ns * r->rate_bytes_ps) / NSEC_PER_SEC
+ */
+ u64 len = time_in_ns * r->rate_bytes_ps;
+
+ do_div(len, NSEC_PER_SEC);
+
+ if (unlikely(r->linklayer == TC_LINKLAYER_ATM)) {
+ do_div(len, 53);
+ len = len * 48;
+ }
+
+ if (len > r->overhead)
+ len -= r->overhead;
+ else
+ len = 0;
+
+ return len;
+}
+
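A quick userspace check of the same arithmetic under assumed numbers (no ATM framing, zero overhead): at 1 Mbit/s the byte rate is 125000 B/s, so an 8 ms buffer is worth exactly 1000 bytes.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t rate_bytes_ps = 125000;	/* 1 Mbit/s in bytes per second */
	uint64_t time_in_ns = 8000000;		/* 8 ms */
	uint64_t len = time_in_ns * rate_bytes_ps / 1000000000ULL;

	printf("%llu\n", (unsigned long long)len);	/* prints 1000 */
	return 0;
}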
+/*
+ * Return the length of the individual segments of a GSO packet,
+ * including all headers (MAC, IP, TCP/UDP)
+ */
+static unsigned int skb_gso_seglen(const struct sk_buff *skb)
+{
+ unsigned int hdr_len = skb_transport_header(skb) - skb_mac_header(skb);
+ const struct skb_shared_info *shinfo = skb_shinfo(skb);
+
+ if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)))
+ hdr_len += tcp_hdrlen(skb);
+ else
+ hdr_len += sizeof(struct udphdr);
+ return hdr_len + shinfo->gso_size;
+}
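A worked example with hypothetical framing: for a TCP GSO packet over Ethernet/IPv4 (14-byte MAC plus 20-byte IP header, so the transport header sits 34 bytes past the MAC header) with a 32-byte TCP header (timestamps enabled) and gso_size = 1448, skb_gso_seglen() returns 34 + 32 + 1448 = 1514 bytes, a full 1500-MTU Ethernet frame; it is this per-segment figure that tbf_enqueue() below compares against q->max_size.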
+
/* GSO packet is too big, segment it so that tbf can transmit
* each segment in time
*/
while (segs) {
nskb = segs->next;
segs->next = NULL;
- if (likely(segs->len <= q->max_size)) {
- qdisc_skb_cb(segs)->pkt_len = segs->len;
- ret = qdisc_enqueue(segs, q->qdisc);
- } else {
- ret = qdisc_reshape_fail(skb, sch);
- }
+ qdisc_skb_cb(segs)->pkt_len = segs->len;
+ ret = qdisc_enqueue(segs, q->qdisc);
if (ret != NET_XMIT_SUCCESS) {
if (net_xmit_drop_count(ret))
sch->qstats.drops++;
int ret;
if (qdisc_pkt_len(skb) > q->max_size) {
- if (skb_is_gso(skb))
+ if (skb_is_gso(skb) && skb_gso_seglen(skb) <= q->max_size)
return tbf_segment(skb, sch);
return qdisc_reshape_fail(skb, sch);
}
struct tbf_sched_data *q = qdisc_priv(sch);
struct nlattr *tb[TCA_TBF_MAX + 1];
struct tc_tbf_qopt *qopt;
- struct qdisc_rate_table *rtab = NULL;
- struct qdisc_rate_table *ptab = NULL;
struct Qdisc *child = NULL;
- int max_size, n;
+ struct psched_ratecfg rate;
+ struct psched_ratecfg peak;
+ u64 max_size;
+ s64 buffer, mtu;
u64 rate64 = 0, prate64 = 0;
err = nla_parse_nested(tb, TCA_TBF_MAX, opt, tbf_policy);
goto done;
qopt = nla_data(tb[TCA_TBF_PARMS]);
- rtab = qdisc_get_rtab(&qopt->rate, tb[TCA_TBF_RTAB]);
- if (rtab == NULL)
- goto done;
+ if (qopt->rate.linklayer == TC_LINKLAYER_UNAWARE)
+ qdisc_put_rtab(qdisc_get_rtab(&qopt->rate,
+ tb[TCA_TBF_RTAB]));
- if (qopt->peakrate.rate) {
- if (qopt->peakrate.rate > qopt->rate.rate)
- ptab = qdisc_get_rtab(&qopt->peakrate, tb[TCA_TBF_PTAB]);
- if (ptab == NULL)
- goto done;
- }
-
- for (n = 0; n < 256; n++)
- if (rtab->data[n] > qopt->buffer)
- break;
- max_size = (n << qopt->rate.cell_log) - 1;
- if (ptab) {
- int size;
-
- for (n = 0; n < 256; n++)
- if (ptab->data[n] > qopt->mtu)
- break;
- size = (n << qopt->peakrate.cell_log) - 1;
- if (size < max_size)
- max_size = size;
- }
- if (max_size < 0)
- goto done;
+ if (qopt->peakrate.linklayer == TC_LINKLAYER_UNAWARE)
+ qdisc_put_rtab(qdisc_get_rtab(&qopt->peakrate,
+ tb[TCA_TBF_PTAB]));
if (q->qdisc != &noop_qdisc) {
err = fifo_set_limit(q->qdisc, qopt->limit);
}
}
+ buffer = min_t(u64, PSCHED_TICKS2NS(qopt->buffer), ~0U);
+ mtu = min_t(u64, PSCHED_TICKS2NS(qopt->mtu), ~0U);
+
+ if (tb[TCA_TBF_RATE64])
+ rate64 = nla_get_u64(tb[TCA_TBF_RATE64]);
+ psched_ratecfg_precompute(&rate, &qopt->rate, rate64);
+
+ max_size = min_t(u64, psched_ns_t2l(&rate, buffer), ~0U);
+
+ if (qopt->peakrate.rate) {
+ if (tb[TCA_TBF_PRATE64])
+ prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]);
+ psched_ratecfg_precompute(&peak, &qopt->peakrate, prate64);
+ if (peak.rate_bytes_ps <= rate.rate_bytes_ps) {
+ pr_warn_ratelimited("sch_tbf: peakrate %llu is lower than or equals to rate %llu !\n",
+ peak.rate_bytes_ps, rate.rate_bytes_ps);
+ err = -EINVAL;
+ goto done;
+ }
+
+ max_size = min_t(u64, max_size, psched_ns_t2l(&peak, mtu));
+ }
+
+ if (max_size < psched_mtu(qdisc_dev(sch)))
+ pr_warn_ratelimited("sch_tbf: burst %llu is lower than device %s mtu (%u) !\n",
+ max_size, qdisc_dev(sch)->name,
+ psched_mtu(qdisc_dev(sch)));
+
+ if (!max_size) {
+ err = -EINVAL;
+ goto done;
+ }
+
sch_tree_lock(sch);
if (child) {
qdisc_tree_decrease_qlen(q->qdisc, q->qdisc->q.qlen);
q->tokens = q->buffer;
q->ptokens = q->mtu;
- if (tb[TCA_TBF_RATE64])
- rate64 = nla_get_u64(tb[TCA_TBF_RATE64]);
- psched_ratecfg_precompute(&q->rate, &rtab->rate, rate64);
- if (ptab) {
- if (tb[TCA_TBF_PRATE64])
- prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]);
- psched_ratecfg_precompute(&q->peak, &ptab->rate, prate64);
+ memcpy(&q->rate, &rate, sizeof(struct psched_ratecfg));
+ if (qopt->peakrate.rate) {
+ memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg));
q->peak_present = true;
} else {
q->peak_present = false;
sch_tree_unlock(sch);
err = 0;
done:
- if (rtab)
- qdisc_put_rtab(rtab);
- if (ptab)
- qdisc_put_rtab(ptab);
return err;
}
asoc->timeouts[SCTP_EVENT_TIMEOUT_HEARTBEAT] = 0;
asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay;
- asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] =
- min_t(unsigned long, sp->autoclose, net->sctp.max_autoclose) * HZ;
+ asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ;
/* Initializes the timers */
for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
asoc->peer.ipv6_address = 1;
INIT_LIST_HEAD(&asoc->asocs);
- asoc->autoclose = sp->autoclose;
-
asoc->default_stream = sp->default_stream;
asoc->default_ppid = sp->default_ppid;
asoc->default_flags = sp->default_flags;
* for a given destination transport address.
*/
- if (!tp->rto_pending) {
+ if (!chunk->resent && !tp->rto_pending) {
chunk->rtt_in_progress = 1;
tp->rto_pending = 1;
}
+
has_data = 1;
}
unsigned long timeout;
/* Restart the AUTOCLOSE timer when sending data. */
- if (sctp_state(asoc, ESTABLISHED) && asoc->autoclose) {
+ if (sctp_state(asoc, ESTABLISHED) &&
+ asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) {
timer = &asoc->timers[SCTP_EVENT_TIMEOUT_AUTOCLOSE];
timeout = asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE];
INIT_LIST_HEAD(&q->retransmit);
INIT_LIST_HEAD(&q->sacked);
INIT_LIST_HEAD(&q->abandoned);
-
- q->empty = 1;
}
/* Free the outqueue structure and any related pending chunks.
SCTP_INC_STATS(net, SCTP_MIB_OUTUNORDERCHUNKS);
else
SCTP_INC_STATS(net, SCTP_MIB_OUTORDERCHUNKS);
- q->empty = 0;
break;
}
} else {
transport->rto_pending = 0;
}
+ chunk->resent = 1;
+
/* Move the chunk to the retransmit queue. The chunks
* on the retransmit queue are always kept in order.
*/
if (chunk->fast_retransmit == SCTP_NEED_FRTX)
chunk->fast_retransmit = SCTP_DONT_FRTX;
- q->empty = 0;
q->asoc->stats.rtxchunks++;
break;
}
sctp_transport_reset_timers(transport);
- q->empty = 0;
-
/* Only let one DATA chunk get bundled with a
* COOKIE-ECHO chunk.
*/
"advertised peer ack point:0x%x\n", __func__, asoc, ctsn,
asoc->adv_peer_ack_point);
- /* See if all chunks are acked.
- * Make sure the empty queue handler will get run later.
- */
- q->empty = (list_empty(&q->out_chunk_list) &&
- list_empty(&q->retransmit));
- if (!q->empty)
- goto finish;
-
- list_for_each_entry(transport, transport_list, transports) {
- q->empty = q->empty && list_empty(&transport->transmitted);
- if (!q->empty)
- goto finish;
- }
-
- pr_debug("%s: sack queue is empty\n", __func__);
-finish:
- return q->empty;
+ return sctp_outq_is_empty(q);
}
-/* Is the outqueue empty? */
+/* Is the outqueue empty?
+ * The queue is empty when there is no pending data, no data in flight,
+ * and no pending retransmissions.
+ */
int sctp_outq_is_empty(const struct sctp_outq *q)
{
- return q->empty;
+ return q->out_qlen == 0 && q->outstanding_bytes == 0 &&
+ list_empty(&q->retransmit);
}
/********************************************************************
* instance).
*/
if (!tchunk->tsn_gap_acked &&
+ !tchunk->resent &&
tchunk->rtt_in_progress) {
tchunk->rtt_in_progress = 0;
rtt = jiffies - tchunk->sent_at;
*/
if (!tchunk->tsn_gap_acked) {
tchunk->tsn_gap_acked = 1;
- *highest_new_tsn_in_sack = tsn;
+ if (TSN_lt(*highest_new_tsn_in_sack, tsn))
+ *highest_new_tsn_in_sack = tsn;
bytes_acked += sctp_data_size(tchunk);
if (!tchunk->transport)
migrate_bytes += sctp_data_size(tchunk);
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>
+MODULE_SOFTDEP("pre: sctp");
MODULE_AUTHOR("Wei Yongjun <yjwei@cn.fujitsu.com>");
MODULE_DESCRIPTION("SCTP snooper");
MODULE_LICENSE("GPL");
.entry = jsctp_sf_eat_sack,
};
+static __init int sctp_setup_jprobe(void)
+{
+ int ret = register_jprobe(&sctp_recv_probe);
+
+ if (ret) {
+ if (request_module("sctp"))
+ goto out;
+ ret = register_jprobe(&sctp_recv_probe);
+ }
+
+out:
+ return ret;
+}
+
static __init int sctpprobe_init(void)
{
int ret = -ENOMEM;
&sctpprobe_fops))
goto free_kfifo;
- ret = register_jprobe(&sctp_recv_probe);
+ ret = sctp_setup_jprobe();
if (ret)
goto remove_proc;
SCTP_INC_STATS(net, SCTP_MIB_PASSIVEESTABS);
sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
- if (new_asoc->autoclose)
+ if (new_asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
SCTP_INC_STATS(net, SCTP_MIB_ACTIVEESTABS);
sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
- if (asoc->autoclose)
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
if (chunk->chunk_hdr->flags & SCTP_DATA_SACK_IMM)
force = SCTP_FORCE();
- if (asoc->autoclose) {
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) {
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
}
SCTP_CHUNK(chunk));
/* Count this as receiving DATA. */
- if (asoc->autoclose) {
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) {
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
}
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_T5_SHUTDOWN_GUARD));
- if (asoc->autoclose)
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_T2_SHUTDOWN));
- if (asoc->autoclose)
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
unsigned int optlen)
{
struct sctp_sock *sp = sctp_sk(sk);
+ struct net *net = sock_net(sk);
/* Applicable to UDP-style socket only */
if (sctp_style(sk, TCP))
if (copy_from_user(&sp->autoclose, optval, optlen))
return -EFAULT;
+ if (sp->autoclose > net->sctp.max_autoclose)
+ sp->autoclose = net->sctp.max_autoclose;
+
return 0;
}
{
struct sctp_rtoinfo rtoinfo;
struct sctp_association *asoc;
+ unsigned long rto_min, rto_max;
+ struct sctp_sock *sp = sctp_sk(sk);
if (optlen != sizeof (struct sctp_rtoinfo))
return -EINVAL;
if (!asoc && rtoinfo.srto_assoc_id && sctp_style(sk, UDP))
return -EINVAL;
+ rto_max = rtoinfo.srto_max;
+ rto_min = rtoinfo.srto_min;
+
+ if (rto_max)
+ rto_max = asoc ? msecs_to_jiffies(rto_max) : rto_max;
+ else
+ rto_max = asoc ? asoc->rto_max : sp->rtoinfo.srto_max;
+
+ if (rto_min)
+ rto_min = asoc ? msecs_to_jiffies(rto_min) : rto_min;
+ else
+ rto_min = asoc ? asoc->rto_min : sp->rtoinfo.srto_min;
+
+ if (rto_min > rto_max)
+ return -EINVAL;
+
if (asoc) {
if (rtoinfo.srto_initial != 0)
asoc->rto_initial =
msecs_to_jiffies(rtoinfo.srto_initial);
- if (rtoinfo.srto_max != 0)
- asoc->rto_max = msecs_to_jiffies(rtoinfo.srto_max);
- if (rtoinfo.srto_min != 0)
- asoc->rto_min = msecs_to_jiffies(rtoinfo.srto_min);
+ asoc->rto_max = rto_max;
+ asoc->rto_min = rto_min;
} else {
/* If there is no association or the association-id = 0
* set the values to the endpoint.
*/
- struct sctp_sock *sp = sctp_sk(sk);
-
if (rtoinfo.srto_initial != 0)
sp->rtoinfo.srto_initial = rtoinfo.srto_initial;
- if (rtoinfo.srto_max != 0)
- sp->rtoinfo.srto_max = rtoinfo.srto_max;
- if (rtoinfo.srto_min != 0)
- sp->rtoinfo.srto_min = rtoinfo.srto_min;
+ sp->rtoinfo.srto_max = rto_max;
+ sp->rtoinfo.srto_min = rto_min;
}
return 0;
extern int sysctl_sctp_rmem[3];
extern int sysctl_sctp_wmem[3];
-static int proc_sctp_do_hmac_alg(struct ctl_table *ctl,
- int write,
+static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos);
+static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos);
+static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write,
void __user *buffer, size_t *lenp,
-
loff_t *ppos);
+
static struct ctl_table sctp_table[] = {
{
.procname = "sctp_mem",
.data = &init_net.sctp.rto_min,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec_minmax,
+ .proc_handler = proc_sctp_do_rto_min,
.extra1 = &one,
- .extra2 = &timer_max
+ .extra2 = &init_net.sctp.rto_max
},
{
.procname = "rto_max",
.data = &init_net.sctp.rto_max,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec_minmax,
- .extra1 = &one,
+ .proc_handler = proc_sctp_do_rto_max,
+ .extra1 = &init_net.sctp.rto_min,
.extra2 = &timer_max
},
{
{ /* sentinel */ }
};
-static int proc_sctp_do_hmac_alg(struct ctl_table *ctl,
- int write,
+static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write,
void __user *buffer, size_t *lenp,
loff_t *ppos)
{
return ret;
}
+static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+{
+ struct net *net = current->nsproxy->net_ns;
+ int new_value;
+ struct ctl_table tbl;
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ int ret;
+
+ memset(&tbl, 0, sizeof(struct ctl_table));
+ tbl.maxlen = sizeof(unsigned int);
+
+ if (write)
+ tbl.data = &new_value;
+ else
+ tbl.data = &net->sctp.rto_min;
+ ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+ if (write) {
+ if (ret || new_value > max || new_value < min)
+ return -EINVAL;
+ net->sctp.rto_min = new_value;
+ }
+ return ret;
+}
+
+static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+{
+ struct net *net = current->nsproxy->net_ns;
+ int new_value;
+ struct ctl_table tbl;
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ int ret;
+
+ memset(&tbl, 0, sizeof(struct ctl_table));
+ tbl.maxlen = sizeof(unsigned int);
+
+ if (write)
+ tbl.data = &new_value;
+ else
+ tbl.data = &net->sctp.rto_max;
+ ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+ if (write) {
+ if (ret || new_value > max || new_value < min)
+ return -EINVAL;
+ net->sctp.rto_max = new_value;
+ }
+ return ret;
+}
+
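With the two handlers pointing at each other's live values through extra1/extra2, a write of, say, 6000 to /proc/sys/net/sctp/rto_min is rejected with -EINVAL whenever rto_max is currently below 6000 (and symmetrically for rto_max), closing the gap where the old proc_dointvec_minmax setup could leave rto_min greater than rto_max.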
int sctp_sysctl_net_register(struct net *net)
{
struct ctl_table *table;
u32 old_cwnd = t->cwnd;
u32 max_burst_bytes;
- if (t->burst_limited)
+ if (t->burst_limited || asoc->max_burst == 0)
return;
max_burst_bytes = t->flight_size + (asoc->max_burst * asoc->pathmtu);
if (copy_from_user(kmsg, umsg, sizeof(struct msghdr)))
return -EFAULT;
if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
- return -EINVAL;
+ kmsg->msg_namelen = sizeof(struct sockaddr_storage);
return 0;
}
static int
gss_refresh_null(struct rpc_task *task)
{
- return -EACCES;
+ return 0;
}
static __be32 *
umode_t mode;
};
-static int rpc_delete_dentry(const struct dentry *dentry)
-{
- return 1;
-}
-
-static const struct dentry_operations rpc_dentry_operations = {
- .d_delete = rpc_delete_dentry,
-};
-
static struct inode *
rpc_get_inode(struct super_block *sb, umode_t mode)
{
sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
sb->s_magic = RPCAUTH_GSSMAGIC;
sb->s_op = &s_ops;
- sb->s_d_op = &rpc_dentry_operations;
+ sb->s_d_op = &simple_dentry_operations;
sb->s_time_gran = 1;
inode = rpc_get_inode(sb, S_IFDIR | S_IRUGO | S_IXUGO);
static void tipc_core_stop(void)
{
tipc_netlink_stop();
- tipc_handler_stop();
tipc_cfg_stop();
tipc_subscr_stop();
tipc_nametbl_stop();
res = tipc_subscr_start();
if (!res)
res = tipc_cfg_init();
- if (res)
+ if (res) {
+ tipc_handler_stop();
tipc_core_stop();
-
+ }
return res;
}
static void __exit tipc_exit(void)
{
+ tipc_handler_stop();
tipc_core_stop_net();
tipc_core_stop();
pr_info("Deactivated\n");
{
struct queue_item *item;
+ spin_lock_bh(&qitem_lock);
if (!handler_enabled) {
pr_err("Signal request ignored by handler\n");
+ spin_unlock_bh(&qitem_lock);
return -ENOPROTOOPT;
}
- spin_lock_bh(&qitem_lock);
item = kmem_cache_alloc(tipc_queue_item_cache, GFP_ATOMIC);
if (!item) {
pr_err("Signal queue out of memory\n");
struct list_head *l, *n;
struct queue_item *item;
- if (!handler_enabled)
+ spin_lock_bh(&qitem_lock);
+ if (!handler_enabled) {
+ spin_unlock_bh(&qitem_lock);
return;
-
+ }
handler_enabled = 0;
+ spin_unlock_bh(&qitem_lock);
+
tasklet_kill(&tipc_tasklet);
spin_lock_bh(&qitem_lock);
int type;
head = head->next;
+ buf->next = NULL;
/* Ensure bearer is still enabled */
if (unlikely(!b_ptr->active))
return p_ptr;
}
-int tipc_deleteport(u32 ref)
+int tipc_deleteport(struct tipc_port *p_ptr)
{
- struct tipc_port *p_ptr;
struct sk_buff *buf = NULL;
- tipc_withdraw(ref, 0, NULL);
- p_ptr = tipc_port_lock(ref);
- if (!p_ptr)
- return -EINVAL;
+ tipc_withdraw(p_ptr, 0, NULL);
- tipc_ref_discard(ref);
- tipc_port_unlock(p_ptr);
+ spin_lock_bh(p_ptr->lock);
+ tipc_ref_discard(p_ptr->ref);
+ spin_unlock_bh(p_ptr->lock);
k_cancel_timer(&p_ptr->timer);
if (p_ptr->connected) {
}
-int tipc_publish(u32 ref, unsigned int scope, struct tipc_name_seq const *seq)
+int tipc_publish(struct tipc_port *p_ptr, unsigned int scope,
+ struct tipc_name_seq const *seq)
{
- struct tipc_port *p_ptr;
struct publication *publ;
u32 key;
- int res = -EINVAL;
- p_ptr = tipc_port_lock(ref);
- if (!p_ptr)
+ if (p_ptr->connected)
return -EINVAL;
+ key = p_ptr->ref + p_ptr->pub_count + 1;
+ if (key == p_ptr->ref)
+ return -EADDRINUSE;
- if (p_ptr->connected)
- goto exit;
- key = ref + p_ptr->pub_count + 1;
- if (key == ref) {
- res = -EADDRINUSE;
- goto exit;
- }
publ = tipc_nametbl_publish(seq->type, seq->lower, seq->upper,
scope, p_ptr->ref, key);
if (publ) {
list_add(&publ->pport_list, &p_ptr->publications);
p_ptr->pub_count++;
p_ptr->published = 1;
- res = 0;
+ return 0;
}
-exit:
- tipc_port_unlock(p_ptr);
- return res;
+ return -EINVAL;
}
-int tipc_withdraw(u32 ref, unsigned int scope, struct tipc_name_seq const *seq)
+int tipc_withdraw(struct tipc_port *p_ptr, unsigned int scope,
+ struct tipc_name_seq const *seq)
{
- struct tipc_port *p_ptr;
struct publication *publ;
struct publication *tpubl;
int res = -EINVAL;
- p_ptr = tipc_port_lock(ref);
- if (!p_ptr)
- return -EINVAL;
if (!seq) {
list_for_each_entry_safe(publ, tpubl,
&p_ptr->publications, pport_list) {
}
if (list_empty(&p_ptr->publications))
p_ptr->published = 0;
- tipc_port_unlock(p_ptr);
return res;
}
void tipc_acknowledge(u32 port_ref, u32 ack);
-int tipc_deleteport(u32 portref);
+int tipc_deleteport(struct tipc_port *p_ptr);
int tipc_portimportance(u32 portref, unsigned int *importance);
int tipc_set_portimportance(u32 portref, unsigned int importance);
int tipc_portunreturnable(u32 portref, unsigned int *isunreturnable);
int tipc_set_portunreturnable(u32 portref, unsigned int isunreturnable);
-int tipc_publish(u32 portref, unsigned int scope,
+int tipc_publish(struct tipc_port *p_ptr, unsigned int scope,
struct tipc_name_seq const *name_seq);
-int tipc_withdraw(u32 portref, unsigned int scope,
+int tipc_withdraw(struct tipc_port *p_ptr, unsigned int scope,
struct tipc_name_seq const *name_seq);
int tipc_connect(u32 portref, struct tipc_portid const *port);
* Delete TIPC port; this ensures no more messages are queued
* (also disconnects an active connection & sends a 'FIN-' to peer)
*/
- res = tipc_deleteport(tport->ref);
+ res = tipc_deleteport(tport);
/* Discard any remaining (connection-based) messages in receive queue */
__skb_queue_purge(&sk->sk_receive_queue);
*/
static int bind(struct socket *sock, struct sockaddr *uaddr, int uaddr_len)
{
+ struct sock *sk = sock->sk;
struct sockaddr_tipc *addr = (struct sockaddr_tipc *)uaddr;
- u32 portref = tipc_sk_port(sock->sk)->ref;
+ struct tipc_port *tport = tipc_sk_port(sock->sk);
+ int res = -EINVAL;
- if (unlikely(!uaddr_len))
- return tipc_withdraw(portref, 0, NULL);
+ lock_sock(sk);
+ if (unlikely(!uaddr_len)) {
+ res = tipc_withdraw(tport, 0, NULL);
+ goto exit;
+ }
- if (uaddr_len < sizeof(struct sockaddr_tipc))
- return -EINVAL;
- if (addr->family != AF_TIPC)
- return -EAFNOSUPPORT;
+ if (uaddr_len < sizeof(struct sockaddr_tipc)) {
+ res = -EINVAL;
+ goto exit;
+ }
+ if (addr->family != AF_TIPC) {
+ res = -EAFNOSUPPORT;
+ goto exit;
+ }
if (addr->addrtype == TIPC_ADDR_NAME)
addr->addr.nameseq.upper = addr->addr.nameseq.lower;
- else if (addr->addrtype != TIPC_ADDR_NAMESEQ)
- return -EAFNOSUPPORT;
+ else if (addr->addrtype != TIPC_ADDR_NAMESEQ) {
+ res = -EAFNOSUPPORT;
+ goto exit;
+ }
if ((addr->addr.nameseq.type < TIPC_RESERVED_TYPES) &&
(addr->addr.nameseq.type != TIPC_TOP_SRV) &&
- (addr->addr.nameseq.type != TIPC_CFG_SRV))
- return -EACCES;
+ (addr->addr.nameseq.type != TIPC_CFG_SRV)) {
+ res = -EACCES;
+ goto exit;
+ }
- return (addr->scope > 0) ?
- tipc_publish(portref, addr->scope, &addr->addr.nameseq) :
- tipc_withdraw(portref, -addr->scope, &addr->addr.nameseq);
+ res = (addr->scope > 0) ?
+ tipc_publish(tport, addr->scope, &addr->addr.nameseq) :
+ tipc_withdraw(tport, -addr->scope, &addr->addr.nameseq);
+exit:
+ release_sock(sk);
+ return res;
}
/**
static int unix_seqpacket_recvmsg(struct kiocb *, struct socket *,
struct msghdr *, size_t, int);
-static void unix_set_peek_off(struct sock *sk, int val)
+static int unix_set_peek_off(struct sock *sk, int val)
{
struct unix_sock *u = unix_sk(sk);
- mutex_lock(&u->readlock);
+ if (mutex_lock_interruptible(&u->readlock))
+ return -EINTR;
+
sk->sk_peek_off = val;
mutex_unlock(&u->readlock);
+
+ return 0;
}
int err;
unsigned int retries = 0;
- mutex_lock(&u->readlock);
+ err = mutex_lock_interruptible(&u->readlock);
+ if (err)
+ return err;
err = 0;
if (u->addr)
goto out;
addr_len = err;
- mutex_lock(&u->readlock);
+ err = mutex_lock_interruptible(&u->readlock);
+ if (err)
+ goto out;
err = -EINVAL;
if (u->addr)
render_opcode(out, "ASN1_OP_END_SET_OF%s,\n", act);
render_opcode(out, "_jump_target(%u),\n", entry);
break;
+ default:
+ break;
}
if (e->action)
render_opcode(out, "_action(ACT_%s),\n",
}
}
if (!defined $suppress_whiletrailers{$linenr} &&
+ defined($stat) && defined($cond) &&
$line =~ /\b(?:if|while|for)\s*\(/ && $line !~ /^.\s*#/) {
my ($s, $c) = ($stat, $cond);
kallsymopt="${kallsymopt} --all-symbols"
fi
- kallsymopt="${kallsymopt} --page-offset=$CONFIG_PAGE_OFFSET"
+ if [ -n "${CONFIG_ARM}" ] && [ -n "${CONFIG_PAGE_OFFSET}" ]; then
+ kallsymopt="${kallsymopt} --page-offset=$CONFIG_PAGE_OFFSET"
+ fi
local aflags="${KBUILD_AFLAGS} ${KBUILD_AFLAGS_KERNEL} \
${NOSTDINC_FLAGS} ${LINUXINCLUDE} ${KBUILD_CPPFLAGS}"
} elsif ($arch eq "blackfin") {
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s__mcount\$";
$mcount_adjust = -4;
-} elsif ($arch eq "tilegx") {
+} elsif ($arch eq "tilegx" || $arch eq "tile") {
+ # Default to the newer TILE-Gx architecture if only "tile" is given.
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s__mcount\$";
$type = ".quad";
$alignment = 8;
#include <tools/be_byteshift.h>
#include <tools/le_byteshift.h>
+#ifndef EM_ARCOMPACT
+#define EM_ARCOMPACT 93
+#endif
+
#ifndef EM_AARCH64
#define EM_AARCH64 183
#endif
case EM_S390:
custom_sort = sort_relative_table;
break;
+ case EM_ARCOMPACT:
case EM_ARM:
case EM_AARCH64:
case EM_MIPS:
# Object file lists
obj-$(CONFIG_SECURITY) += security.o capability.o
obj-$(CONFIG_SECURITYFS) += inode.o
-# Must precede capability.o in order to stack properly.
obj-$(CONFIG_SECURITY_SELINUX) += selinux/built-in.o
obj-$(CONFIG_SECURITY_SMACK) += smack/built-in.o
obj-$(CONFIG_AUDIT) += lsm_audit.o
static void audit_pre(struct audit_buffer *ab, void *ca)
{
struct common_audit_data *sa = ca;
- struct task_struct *tsk = sa->aad->tsk ? sa->aad->tsk : current;
if (aa_g_audit_header) {
audit_log_format(ab, "apparmor=");
if (sa->aad->profile) {
struct aa_profile *profile = sa->aad->profile;
- pid_t pid;
- rcu_read_lock();
- pid = rcu_dereference(tsk->real_parent)->pid;
- rcu_read_unlock();
- audit_log_format(ab, " parent=%d", pid);
if (profile->ns != root_ns) {
audit_log_format(ab, " namespace=");
audit_log_untrustedstring(ab, profile->ns->base.hname);
audit_log_format(ab, " name=");
audit_log_untrustedstring(ab, sa->aad->name);
}
-
- if (sa->aad->tsk) {
- audit_log_format(ab, " pid=%d comm=", tsk->pid);
- audit_log_untrustedstring(ab, tsk->comm);
- }
-
}
/**
if (sa->aad->type == AUDIT_APPARMOR_KILL)
(void)send_sig_info(SIGKILL, NULL,
- sa->aad->tsk ? sa->aad->tsk : current);
+ sa->u.tsk ? sa->u.tsk : current);
if (sa->aad->type == AUDIT_APPARMOR_ALLOWED)
return complain_error(sa->aad->error);
/**
* audit_caps - audit a capability
- * @profile: profile confining task (NOT NULL)
- * @task: task capability test was performed against (NOT NULL)
+ * @profile: profile being tested for confinement (NOT NULL)
* @cap: capability tested
* @error: error code returned by test
*
*
* Returns: 0 or sa->error on success, error code on failure
*/
-static int audit_caps(struct aa_profile *profile, struct task_struct *task,
- int cap, int error)
+static int audit_caps(struct aa_profile *profile, int cap, int error)
{
struct audit_cache *ent;
int type = AUDIT_APPARMOR_AUTO;
sa.type = LSM_AUDIT_DATA_CAP;
sa.aad = &aad;
sa.u.cap = cap;
- sa.aad->tsk = task;
sa.aad->op = OP_CAPABLE;
sa.aad->error = error;
/**
* aa_capable - test permission to use capability
- * @task: task doing capability test against (NOT NULL)
- * @profile: profile confining @task (NOT NULL)
+ * @profile: profile being tested against (NOT NULL)
* @cap: capability to be tested
* @audit: whether an audit record should be generated
*
*
* Returns: 0 on success, or else an error code.
*/
-int aa_capable(struct task_struct *task, struct aa_profile *profile, int cap,
- int audit)
+int aa_capable(struct aa_profile *profile, int cap, int audit)
{
int error = profile_capable(profile, cap);
return error;
}
- return audit_caps(profile, task, cap, error);
+ return audit_caps(profile, cap, error);
}
/**
* may_change_ptraced_domain - check if can change profile on ptraced task
- * @task: task we want to change profile of (NOT NULL)
* @to_profile: profile to change to (NOT NULL)
*
- * Check if the task is ptraced and if so if the tracing task is allowed
+ * Check if current is ptraced and if so if the tracing task is allowed
* to trace the new domain
*
* Returns: %0 or error if change not allowed
*/
-static int may_change_ptraced_domain(struct task_struct *task,
- struct aa_profile *to_profile)
+static int may_change_ptraced_domain(struct aa_profile *to_profile)
{
struct task_struct *tracer;
struct aa_profile *tracerp = NULL;
int error = 0;
rcu_read_lock();
- tracer = ptrace_parent(task);
+ tracer = ptrace_parent(current);
if (tracer)
/* released below */
tracerp = aa_get_task_profile(tracer);
if (!tracer || unconfined(tracerp))
goto out;
- error = aa_may_ptrace(tracer, tracerp, to_profile, PTRACE_MODE_ATTACH);
+ error = aa_may_ptrace(tracerp, to_profile, PTRACE_MODE_ATTACH);
out:
rcu_read_unlock();
}
if (bprm->unsafe & (LSM_UNSAFE_PTRACE | LSM_UNSAFE_PTRACE_CAP)) {
- error = may_change_ptraced_domain(current, new_profile);
+ error = may_change_ptraced_domain(new_profile);
if (error) {
aa_put_profile(new_profile);
goto audit;
}
}
- error = may_change_ptraced_domain(current, hat);
+ error = may_change_ptraced_domain(hat);
if (error) {
info = "ptraced";
error = -EPERM;
}
/* check if tracing task is allowed to trace target domain */
- error = may_change_ptraced_domain(current, target);
+ error = may_change_ptraced_domain(target);
if (error) {
info = "ptrace prevents transition";
goto audit;
void *profile;
const char *name;
const char *info;
- struct task_struct *tsk;
union {
void *target;
struct {
* This file contains AppArmor capability mediation definitions.
*
* Copyright (C) 1998-2008 Novell/SUSE
- * Copyright 2009-2010 Canonical Ltd.
+ * Copyright 2009-2013 Canonical Ltd.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
extern struct aa_fs_entry aa_fs_entry_caps[];
-int aa_capable(struct task_struct *task, struct aa_profile *profile, int cap,
- int audit);
+int aa_capable(struct aa_profile *profile, int cap, int audit);
static inline void aa_free_cap_rules(struct aa_caps *caps)
{
struct aa_profile;
-int aa_may_ptrace(struct task_struct *tracer_task, struct aa_profile *tracer,
- struct aa_profile *tracee, unsigned int mode);
+int aa_may_ptrace(struct aa_profile *tracer, struct aa_profile *tracee,
+ unsigned int mode);
int aa_ptrace(struct task_struct *tracer, struct task_struct *tracee,
unsigned int mode);
/**
* aa_may_ptrace - test if tracer task can trace the tracee
- * @tracer_task: task who will do the tracing (NOT NULL)
* @tracer: profile of the task doing the tracing (NOT NULL)
* @tracee: task to be traced
* @mode: whether PTRACE_MODE_READ || PTRACE_MODE_ATTACH
*
* Returns: %0 else error code if permission denied or error
*/
-int aa_may_ptrace(struct task_struct *tracer_task, struct aa_profile *tracer,
- struct aa_profile *tracee, unsigned int mode)
+int aa_may_ptrace(struct aa_profile *tracer, struct aa_profile *tracee,
+ unsigned int mode)
{
/* TODO: currently only based on capability, not extended ptrace
* rules,
if (unconfined(tracer) || tracer == tracee)
return 0;
/* log this capability request */
- return aa_capable(tracer_task, tracer, CAP_SYS_PTRACE, 1);
+ return aa_capable(tracer, CAP_SYS_PTRACE, 1);
}
/**
if (!unconfined(tracer_p)) {
struct aa_profile *tracee_p = aa_get_task_profile(tracee);
- error = aa_may_ptrace(tracer, tracer_p, tracee_p, mode);
+ error = aa_may_ptrace(tracer_p, tracee_p, mode);
error = aa_audit_ptrace(tracer_p, tracee_p, error);
aa_put_profile(tracee_p);
if (!error) {
profile = aa_cred_profile(cred);
if (!unconfined(profile))
- error = aa_capable(current, profile, cap, audit);
+ error = aa_capable(profile, cap, audit);
}
return error;
}
return 0;
}
-static int cap_xfrm_state_alloc_security(struct xfrm_state *x,
- struct xfrm_user_sec_ctx *sec_ctx,
- u32 secid)
+static int cap_xfrm_state_alloc(struct xfrm_state *x,
+ struct xfrm_user_sec_ctx *sec_ctx)
+{
+ return 0;
+}
+
+static int cap_xfrm_state_alloc_acquire(struct xfrm_state *x,
+ struct xfrm_sec_ctx *polsec,
+ u32 secid)
{
return 0;
}
set_to_cap_if_null(ops, xfrm_policy_clone_security);
set_to_cap_if_null(ops, xfrm_policy_free_security);
set_to_cap_if_null(ops, xfrm_policy_delete_security);
- set_to_cap_if_null(ops, xfrm_state_alloc_security);
+ set_to_cap_if_null(ops, xfrm_state_alloc);
+ set_to_cap_if_null(ops, xfrm_state_alloc_acquire);
set_to_cap_if_null(ops, xfrm_state_free_security);
set_to_cap_if_null(ops, xfrm_state_delete_security);
set_to_cap_if_null(ops, xfrm_policy_lookup);
};
int integrity_digsig_verify(const unsigned int id, const char *sig, int siglen,
- const char *digest, int digestlen)
+ const char *digest, int digestlen)
{
if (id >= INTEGRITY_KEYRING_MAX)
return -EINVAL;
}
}
- switch (sig[0]) {
+ switch (sig[1]) {
case 1:
- return digsig_verify(keyring[id], sig, siglen,
+ /* v1 API expect signature without xattr type */
+ return digsig_verify(keyring[id], sig + 1, siglen - 1,
digest, digestlen);
case 2:
return asymmetric_verify(keyring[id], sig, siglen,
#include "integrity.h"
-/*
- * signature format v2 - for using with asymmetric keys
- */
-struct signature_v2_hdr {
- uint8_t version; /* signature format version */
- uint8_t hash_algo; /* Digest algorithm [enum pkey_hash_algo] */
- uint32_t keyid; /* IMA key identifier - not X509/PGP specific*/
- uint16_t sig_size; /* signature size */
- uint8_t sig[0]; /* signature payload */
-} __packed;
-
/*
* Request an asymmetric key.
*/
goto out;
}
- xattr_len = rc - 1;
+ xattr_len = rc;
/* check value type */
switch (xattr_data->type) {
if (rc)
break;
rc = integrity_digsig_verify(INTEGRITY_KEYRING_EVM,
- xattr_data->digest, xattr_len,
+ (const char *)xattr_data, xattr_len,
calc.digest, sizeof(calc.digest));
if (!rc) {
/* we probably want to replace rsa with hmac here */
#include <linux/module.h>
#include <linux/xattr.h>
+#include <linux/evm.h>
-int posix_xattr_acl(char *xattr)
+int posix_xattr_acl(const char *xattr)
{
int xattr_len = strlen(xattr);
static void iint_free(struct integrity_iint_cache *iint)
{
+ kfree(iint->ima_hash);
+ iint->ima_hash = NULL;
iint->version = 0;
iint->flags = 0UL;
iint->ima_file_status = INTEGRITY_UNKNOWN;
select CRYPTO_HMAC
select CRYPTO_MD5
select CRYPTO_SHA1
+ select CRYPTO_HASH_INFO
select TCG_TPM if HAS_IOMEM && !UML
select TCG_TIS if TCG_TPM && X86
select TCG_IBMVTPM if TCG_TPM && PPC64
help
Disabling this option will disregard LSM based policy rules.
+choice
+ prompt "Default template"
+ default IMA_NG_TEMPLATE
+ depends on IMA
+ help
+ Select the default IMA measurement template.
+
+ The original 'ima' measurement list template contains a
+ hash, defined as 20 bytes, and a null terminated pathname,
+ limited to 255 characters. The 'ima-ng' measurement list
+ template permits both larger hash digests and longer
+ pathnames.
+
+ config IMA_TEMPLATE
+ bool "ima"
+ config IMA_NG_TEMPLATE
+ bool "ima-ng (default)"
+ config IMA_SIG_TEMPLATE
+ bool "ima-sig"
+endchoice
+
+config IMA_DEFAULT_TEMPLATE
+ string
+ depends on IMA
+ default "ima" if IMA_TEMPLATE
+ default "ima-ng" if IMA_NG_TEMPLATE
+ default "ima-sig" if IMA_SIG_TEMPLATE
+
+choice
+ prompt "Default integrity hash algorithm"
+ default IMA_DEFAULT_HASH_SHA1
+ depends on IMA
+ help
+ Select the default hash algorithm used for the measurement
+	  list, integrity appraisal and audit log. The compiled-in default
+	  hash algorithm can be overridden using the kernel command
+	  line 'ima_hash=' option.
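+	  For example, booting with 'ima_hash=sha256' selects SHA256 at
+	  runtime.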
+
+ config IMA_DEFAULT_HASH_SHA1
+ bool "SHA1 (default)"
+ depends on CRYPTO_SHA1
+
+ config IMA_DEFAULT_HASH_SHA256
+ bool "SHA256"
+ depends on CRYPTO_SHA256 && !IMA_TEMPLATE
+
+ config IMA_DEFAULT_HASH_SHA512
+ bool "SHA512"
+ depends on CRYPTO_SHA512 && !IMA_TEMPLATE
+
+ config IMA_DEFAULT_HASH_WP512
+ bool "WP512"
+ depends on CRYPTO_WP512 && !IMA_TEMPLATE
+endchoice
+
+config IMA_DEFAULT_HASH
+ string
+ depends on IMA
+ default "sha1" if IMA_DEFAULT_HASH_SHA1
+ default "sha256" if IMA_DEFAULT_HASH_SHA256
+ default "sha512" if IMA_DEFAULT_HASH_SHA512
+ default "wp512" if IMA_DEFAULT_HASH_WP512
+
config IMA_APPRAISE
bool "Appraise integrity measurements"
depends on IMA
obj-$(CONFIG_IMA) += ima.o
ima-y := ima_fs.o ima_queue.o ima_init.o ima_main.o ima_crypto.o ima_api.o \
- ima_policy.o
+ ima_policy.o ima_template.o ima_template_lib.o
ima-$(CONFIG_IMA_APPRAISE) += ima_appraise.o
#include "../integrity.h"
-enum ima_show_type { IMA_SHOW_BINARY, IMA_SHOW_ASCII };
+enum ima_show_type { IMA_SHOW_BINARY, IMA_SHOW_BINARY_NO_FIELD_LEN,
+ IMA_SHOW_ASCII };
enum tpm_pcrs { TPM_PCR0 = 0, TPM_PCR8 = 8 };
/* digest size for IMA, fits SHA1 or MD5 */
#define IMA_HASH_BITS 9
#define IMA_MEASURE_HTABLE_SIZE (1 << IMA_HASH_BITS)
+#define IMA_TEMPLATE_FIELD_ID_MAX_LEN 16
+#define IMA_TEMPLATE_NUM_FIELDS_MAX 15
+
+#define IMA_TEMPLATE_IMA_NAME "ima"
+#define IMA_TEMPLATE_IMA_FMT "d|n"
+
/* set during initialization */
extern int ima_initialized;
extern int ima_used_chip;
-extern char *ima_hash;
+extern int ima_hash_algo;
extern int ima_appraise;
-/* IMA inode template definition */
-struct ima_template_data {
- u8 digest[IMA_DIGEST_SIZE]; /* sha1/md5 measurement hash */
- char file_name[IMA_EVENT_NAME_LEN_MAX + 1]; /* name + \0 */
+/* IMA template field data definition */
+struct ima_field_data {
+ u8 *data;
+ u32 len;
+};
+
+/* IMA template field definition */
+struct ima_template_field {
+ const char field_id[IMA_TEMPLATE_FIELD_ID_MAX_LEN];
+ int (*field_init) (struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len, struct ima_field_data *field_data);
+ void (*field_show) (struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data);
+};
+
+/* IMA template descriptor definition */
+struct ima_template_desc {
+ char *name;
+ char *fmt;
+ int num_fields;
+ struct ima_template_field **fields;
};
struct ima_template_entry {
- u8 digest[IMA_DIGEST_SIZE]; /* sha1 or md5 measurement hash */
- const char *template_name;
- int template_len;
- struct ima_template_data template;
+ u8 digest[TPM_DIGEST_SIZE]; /* sha1 or md5 measurement hash */
+ struct ima_template_desc *template_desc; /* template descriptor */
+ u32 template_data_len;
+ struct ima_field_data template_data[0]; /* template related data */
};
struct ima_queue_entry {
void ima_fs_cleanup(void);
int ima_inode_alloc(struct inode *inode);
int ima_add_template_entry(struct ima_template_entry *entry, int violation,
- const char *op, struct inode *inode);
-int ima_calc_file_hash(struct file *file, char *digest);
-int ima_calc_buffer_hash(const void *data, int len, char *digest);
-int ima_calc_boot_aggregate(char *digest);
-void ima_add_violation(struct inode *inode, const unsigned char *filename,
+ const char *op, struct inode *inode,
+ const unsigned char *filename);
+int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash);
+int ima_calc_field_array_hash(struct ima_field_data *field_data,
+ struct ima_template_desc *desc, int num_fields,
+ struct ima_digest_data *hash);
+int __init ima_calc_boot_aggregate(struct ima_digest_data *hash);
+void ima_add_violation(struct file *file, const unsigned char *filename,
const char *op, const char *cause);
int ima_init_crypto(void);
+void ima_putc(struct seq_file *m, void *data, int datalen);
+void ima_print_digest(struct seq_file *m, u8 *digest, int size);
+struct ima_template_desc *ima_template_desc_current(void);
+int ima_init_template(void);
/*
* used to protect h_table and sha_table
int ima_get_action(struct inode *inode, int mask, int function);
int ima_must_measure(struct inode *inode, int mask, int function);
int ima_collect_measurement(struct integrity_iint_cache *iint,
- struct file *file);
+ struct file *file,
+ struct evm_ima_xattr_data **xattr_value,
+ int *xattr_len);
void ima_store_measurement(struct integrity_iint_cache *iint, struct file *file,
- const unsigned char *filename);
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len);
void ima_audit_measurement(struct integrity_iint_cache *iint,
const unsigned char *filename);
+int ima_alloc_init_template(struct integrity_iint_cache *iint,
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len, struct ima_template_entry **entry);
int ima_store_template(struct ima_template_entry *entry, int violation,
- struct inode *inode);
-void ima_template_show(struct seq_file *m, void *e, enum ima_show_type show);
+ struct inode *inode, const unsigned char *filename);
+void ima_free_template_entry(struct ima_template_entry *entry);
const char *ima_d_path(struct path *path, char **pathbuf);
/* rbtree tree calls to lookup, insert, delete
#ifdef CONFIG_IMA_APPRAISE
int ima_appraise_measurement(int func, struct integrity_iint_cache *iint,
- struct file *file, const unsigned char *filename);
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len);
int ima_must_appraise(struct inode *inode, int mask, enum ima_hooks func);
void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file);
enum integrity_status ima_get_cache_status(struct integrity_iint_cache *iint,
int func);
+void ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_digest_data *hash);
+int ima_read_xattr(struct dentry *dentry,
+ struct evm_ima_xattr_data **xattr_value);
#else
static inline int ima_appraise_measurement(int func,
struct integrity_iint_cache *iint,
struct file *file,
- const unsigned char *filename)
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len)
{
return INTEGRITY_UNKNOWN;
}
{
return INTEGRITY_UNKNOWN;
}
+
+static inline void ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value,
+ int xattr_len,
+ struct ima_digest_data *hash)
+{
+}
+
+static inline int ima_read_xattr(struct dentry *dentry,
+ struct evm_ima_xattr_data **xattr_value)
+{
+ return 0;
+}
+
#endif
/* LSM based policy rules require audit */
#include <linux/fs.h>
#include <linux/xattr.h>
#include <linux/evm.h>
+#include <crypto/hash_info.h>
#include "ima.h"
-static const char *IMA_TEMPLATE_NAME = "ima";
+/*
+ * ima_free_template_entry - free an existing template entry
+ */
+void ima_free_template_entry(struct ima_template_entry *entry)
+{
+ int i;
+
+ for (i = 0; i < entry->template_desc->num_fields; i++)
+ kfree(entry->template_data[i].data);
+
+ kfree(entry);
+}
+
+/*
+ * ima_alloc_init_template - create and initialize a new template entry
+ */
+int ima_alloc_init_template(struct integrity_iint_cache *iint,
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len, struct ima_template_entry **entry)
+{
+ struct ima_template_desc *template_desc = ima_template_desc_current();
+ int i, result = 0;
+
+ *entry = kzalloc(sizeof(**entry) + template_desc->num_fields *
+ sizeof(struct ima_field_data), GFP_NOFS);
+ if (!*entry)
+ return -ENOMEM;
+
+ (*entry)->template_desc = template_desc;
+ for (i = 0; i < template_desc->num_fields; i++) {
+ struct ima_template_field *field = template_desc->fields[i];
+ u32 len;
+
+ result = field->field_init(iint, file, filename,
+ xattr_value, xattr_len,
+ &((*entry)->template_data[i]));
+ if (result != 0)
+ goto out;
+
+ len = (*entry)->template_data[i].len;
+ (*entry)->template_data_len += sizeof(len);
+ (*entry)->template_data_len += len;
+ }
+ return 0;
+out:
+ ima_free_template_entry(*entry);
+ *entry = NULL;
+ return result;
+}
/*
* ima_store_template - store ima template measurements
* Returns 0 on success, error code otherwise
*/
int ima_store_template(struct ima_template_entry *entry,
- int violation, struct inode *inode)
+ int violation, struct inode *inode,
+ const unsigned char *filename)
{
const char *op = "add_template_measure";
const char *audit_cause = "hashing_error";
+ char *template_name = entry->template_desc->name;
int result;
-
- memset(entry->digest, 0, sizeof(entry->digest));
- entry->template_name = IMA_TEMPLATE_NAME;
- entry->template_len = sizeof(entry->template);
+ struct {
+ struct ima_digest_data hdr;
+ char digest[TPM_DIGEST_SIZE];
+ } hash;
if (!violation) {
- result = ima_calc_buffer_hash(&entry->template,
- entry->template_len,
- entry->digest);
+ int num_fields = entry->template_desc->num_fields;
+
+ /* this function uses default algo */
+ hash.hdr.algo = HASH_ALGO_SHA1;
+ result = ima_calc_field_array_hash(&entry->template_data[0],
+ entry->template_desc,
+ num_fields, &hash.hdr);
if (result < 0) {
integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode,
- entry->template_name, op,
+ template_name, op,
audit_cause, result, 0);
return result;
}
+ memcpy(entry->digest, hash.hdr.digest, hash.hdr.length);
}
- result = ima_add_template_entry(entry, violation, op, inode);
+ result = ima_add_template_entry(entry, violation, op, inode, filename);
return result;
}
* By extending the PCR with 0xFF's instead of with zeroes, the PCR
* value is invalidated.
*/
-void ima_add_violation(struct inode *inode, const unsigned char *filename,
+void ima_add_violation(struct file *file, const unsigned char *filename,
const char *op, const char *cause)
{
struct ima_template_entry *entry;
+ struct inode *inode = file->f_dentry->d_inode;
int violation = 1;
int result;
/* can overflow, only indicator */
atomic_long_inc(&ima_htable.violations);
- entry = kmalloc(sizeof(*entry), GFP_KERNEL);
- if (!entry) {
+ result = ima_alloc_init_template(NULL, file, filename,
+ NULL, 0, &entry);
+ if (result < 0) {
result = -ENOMEM;
goto err_out;
}
- memset(&entry->template, 0, sizeof(entry->template));
- strncpy(entry->template.file_name, filename, IMA_EVENT_NAME_LEN_MAX);
- result = ima_store_template(entry, violation, inode);
+ result = ima_store_template(entry, violation, inode, filename);
if (result < 0)
- kfree(entry);
+ ima_free_template_entry(entry);
err_out:
integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode, filename,
op, cause, result, 0);
* Return 0 on success, error code otherwise
*/
int ima_collect_measurement(struct integrity_iint_cache *iint,
- struct file *file)
+ struct file *file,
+ struct evm_ima_xattr_data **xattr_value,
+ int *xattr_len)
{
struct inode *inode = file_inode(file);
const char *filename = file->f_dentry->d_name.name;
int result = 0;
+ struct {
+ struct ima_digest_data hdr;
+ char digest[IMA_MAX_DIGEST_SIZE];
+ } hash;
+
+ if (xattr_value)
+ *xattr_len = ima_read_xattr(file->f_dentry, xattr_value);
if (!(iint->flags & IMA_COLLECTED)) {
u64 i_version = file_inode(file)->i_version;
- iint->ima_xattr.type = IMA_XATTR_DIGEST;
- result = ima_calc_file_hash(file, iint->ima_xattr.digest);
+ /* use default hash algorithm */
+ hash.hdr.algo = ima_hash_algo;
+
+ if (xattr_value)
+ ima_get_hash_algo(*xattr_value, *xattr_len, &hash.hdr);
+
+ result = ima_calc_file_hash(file, &hash.hdr);
if (!result) {
- iint->version = i_version;
- iint->flags |= IMA_COLLECTED;
+ int length = sizeof(hash.hdr) + hash.hdr.length;
+ void *tmpbuf = krealloc(iint->ima_hash, length,
+ GFP_NOFS);
+ if (tmpbuf) {
+ iint->ima_hash = tmpbuf;
+ memcpy(iint->ima_hash, &hash, length);
+ iint->version = i_version;
+ iint->flags |= IMA_COLLECTED;
+ } else
+ result = -ENOMEM;
}
}
if (result)
* Must be called with iint->mutex held.
*/
void ima_store_measurement(struct integrity_iint_cache *iint,
- struct file *file, const unsigned char *filename)
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len)
{
const char *op = "add_template_measure";
const char *audit_cause = "ENOMEM";
if (iint->flags & IMA_MEASURED)
return;
- entry = kmalloc(sizeof(*entry), GFP_KERNEL);
- if (!entry) {
+ result = ima_alloc_init_template(iint, file, filename,
+ xattr_value, xattr_len, &entry);
+ if (result < 0) {
integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode, filename,
op, audit_cause, result, 0);
return;
}
- memset(&entry->template, 0, sizeof(entry->template));
- memcpy(entry->template.digest, iint->ima_xattr.digest, IMA_DIGEST_SIZE);
- strcpy(entry->template.file_name,
- (strlen(filename) > IMA_EVENT_NAME_LEN_MAX) ?
- file->f_dentry->d_name.name : filename);
- result = ima_store_template(entry, violation, inode);
+ result = ima_store_template(entry, violation, inode, filename);
if (!result || result == -EEXIST)
iint->flags |= IMA_MEASURED;
if (result < 0)
- kfree(entry);
+ ima_free_template_entry(entry);
}
void ima_audit_measurement(struct integrity_iint_cache *iint,
const unsigned char *filename)
{
struct audit_buffer *ab;
- char hash[(IMA_DIGEST_SIZE * 2) + 1];
+ char hash[(iint->ima_hash->length * 2) + 1];
+ const char *algo_name = hash_algo_name[iint->ima_hash->algo];
+ char algo_hash[sizeof(hash) + strlen(algo_name) + 2];
int i;
if (iint->flags & IMA_AUDITED)
return;
- for (i = 0; i < IMA_DIGEST_SIZE; i++)
- hex_byte_pack(hash + (i * 2), iint->ima_xattr.digest[i]);
+ for (i = 0; i < iint->ima_hash->length; i++)
+ hex_byte_pack(hash + (i * 2), iint->ima_hash->digest[i]);
hash[i * 2] = '\0';
ab = audit_log_start(current->audit_context, GFP_KERNEL,
audit_log_format(ab, "file=");
audit_log_untrustedstring(ab, filename);
audit_log_format(ab, " hash=");
- audit_log_untrustedstring(ab, hash);
+ snprintf(algo_hash, sizeof(algo_hash), "%s:%s", algo_name, hash);
+ audit_log_untrustedstring(ab, algo_hash);
audit_log_task_info(ab, current);
audit_log_end(ab);
#include <linux/magic.h>
#include <linux/ima.h>
#include <linux/evm.h>
+#include <crypto/hash_info.h>
#include "ima.h"
}
static int ima_fix_xattr(struct dentry *dentry,
- struct integrity_iint_cache *iint)
+ struct integrity_iint_cache *iint)
{
- iint->ima_xattr.type = IMA_XATTR_DIGEST;
- return __vfs_setxattr_noperm(dentry, XATTR_NAME_IMA,
- (u8 *)&iint->ima_xattr,
- sizeof(iint->ima_xattr), 0);
+ int rc, offset;
+ u8 algo = iint->ima_hash->algo;
+
+ if (algo <= HASH_ALGO_SHA1) {
+ offset = 1;
+ iint->ima_hash->xattr.sha1.type = IMA_XATTR_DIGEST;
+ } else {
+ offset = 0;
+ iint->ima_hash->xattr.ng.type = IMA_XATTR_DIGEST_NG;
+ iint->ima_hash->xattr.ng.algo = algo;
+ }
+ rc = __vfs_setxattr_noperm(dentry, XATTR_NAME_IMA,
+ &iint->ima_hash->xattr.data[offset],
+ (sizeof(iint->ima_hash->xattr) - offset) +
+ iint->ima_hash->length, 0);
+ return rc;
}
/* Return specific func appraised cached result */
enum integrity_status ima_get_cache_status(struct integrity_iint_cache *iint,
int func)
{
- switch(func) {
+ switch (func) {
case MMAP_CHECK:
return iint->ima_mmap_status;
case BPRM_CHECK:
static void ima_set_cache_status(struct integrity_iint_cache *iint,
int func, enum integrity_status status)
{
- switch(func) {
+ switch (func) {
case MMAP_CHECK:
iint->ima_mmap_status = status;
break;
static void ima_cache_flags(struct integrity_iint_cache *iint, int func)
{
- switch(func) {
+ switch (func) {
case MMAP_CHECK:
iint->flags |= (IMA_MMAP_APPRAISED | IMA_APPRAISED);
break;
}
}
+void ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_digest_data *hash)
+{
+ struct signature_v2_hdr *sig;
+
+ if (!xattr_value || xattr_len < 2)
+ return;
+
+ switch (xattr_value->type) {
+ case EVM_IMA_XATTR_DIGSIG:
+ sig = (typeof(sig))xattr_value;
+ if (sig->version != 2 || xattr_len <= sizeof(*sig))
+ return;
+ hash->algo = sig->hash_algo;
+ break;
+ case IMA_XATTR_DIGEST_NG:
+ hash->algo = xattr_value->digest[0];
+ break;
+ case IMA_XATTR_DIGEST:
+ /* this is for backward compatibility */
+ if (xattr_len == 21) {
+ unsigned int zero = 0;
+ if (!memcmp(&xattr_value->digest[16], &zero, 4))
+ hash->algo = HASH_ALGO_MD5;
+ else
+ hash->algo = HASH_ALGO_SHA1;
+ } else if (xattr_len == 17)
+ hash->algo = HASH_ALGO_MD5;
+ break;
+ }
+}
+
+int ima_read_xattr(struct dentry *dentry,
+ struct evm_ima_xattr_data **xattr_value)
+{
+ struct inode *inode = dentry->d_inode;
+
+ if (!inode->i_op->getxattr)
+ return 0;
+
+ return vfs_getxattr_alloc(dentry, XATTR_NAME_IMA, (char **)xattr_value,
+ 0, GFP_NOFS);
+}
+
/*
* ima_appraise_measurement - appraise file measurement
*
* Return 0 on success, error code otherwise
*/
int ima_appraise_measurement(int func, struct integrity_iint_cache *iint,
- struct file *file, const unsigned char *filename)
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len)
{
struct dentry *dentry = file->f_dentry;
struct inode *inode = dentry->d_inode;
- struct evm_ima_xattr_data *xattr_value = NULL;
enum integrity_status status = INTEGRITY_UNKNOWN;
const char *op = "appraise_data";
char *cause = "unknown";
- int rc;
+ int rc = xattr_len, hash_start = 0;
if (!ima_appraise)
return 0;
if (!inode->i_op->getxattr)
return INTEGRITY_UNKNOWN;
- rc = vfs_getxattr_alloc(dentry, XATTR_NAME_IMA, (char **)&xattr_value,
- 0, GFP_NOFS);
if (rc <= 0) {
if (rc && rc != -ENODATA)
goto out;
goto out;
}
switch (xattr_value->type) {
+ case IMA_XATTR_DIGEST_NG:
+ /* first byte contains algorithm id */
+ hash_start = 1;
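+		/* fall through */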
case IMA_XATTR_DIGEST:
if (iint->flags & IMA_DIGSIG_REQUIRED) {
cause = "IMA signature required";
status = INTEGRITY_FAIL;
break;
}
- rc = memcmp(xattr_value->digest, iint->ima_xattr.digest,
- IMA_DIGEST_SIZE);
+ if (xattr_len - sizeof(xattr_value->type) - hash_start >=
+ iint->ima_hash->length)
+			/* the xattr length may be longer: an md5 hash in a
+			   previous version occupied 20 bytes in the xattr,
+			   instead of 16
+			 */
+ rc = memcmp(&xattr_value->digest[hash_start],
+ iint->ima_hash->digest,
+ iint->ima_hash->length);
+ else
+ rc = -EINVAL;
if (rc) {
cause = "invalid-hash";
status = INTEGRITY_FAIL;
case EVM_IMA_XATTR_DIGSIG:
iint->flags |= IMA_DIGSIG;
rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA,
- xattr_value->digest, rc - 1,
- iint->ima_xattr.digest,
- IMA_DIGEST_SIZE);
+ (const char *)xattr_value, rc,
+ iint->ima_hash->digest,
+ iint->ima_hash->length);
if (rc == -EOPNOTSUPP) {
status = INTEGRITY_UNKNOWN;
} else if (rc) {
ima_cache_flags(iint, func);
}
ima_set_cache_status(iint, func, status);
- kfree(xattr_value);
return status;
}
if (iint->flags & IMA_DIGSIG)
return;
- rc = ima_collect_measurement(iint, file);
+ rc = ima_collect_measurement(iint, file, NULL, NULL);
if (rc < 0)
return;
#include <linux/err.h>
#include <linux/slab.h>
#include <crypto/hash.h>
+#include <crypto/hash_info.h>
#include "ima.h"
static struct crypto_shash *ima_shash_tfm;
{
long rc;
- ima_shash_tfm = crypto_alloc_shash(ima_hash, 0, 0);
+ ima_shash_tfm = crypto_alloc_shash(hash_algo_name[ima_hash_algo], 0, 0);
if (IS_ERR(ima_shash_tfm)) {
rc = PTR_ERR(ima_shash_tfm);
- pr_err("Can not allocate %s (reason: %ld)\n", ima_hash, rc);
+ pr_err("Can not allocate %s (reason: %ld)\n",
+ hash_algo_name[ima_hash_algo], rc);
return rc;
}
return 0;
}
+static struct crypto_shash *ima_alloc_tfm(enum hash_algo algo)
+{
+ struct crypto_shash *tfm = ima_shash_tfm;
+ int rc;
+
+ if (algo != ima_hash_algo && algo < HASH_ALGO__LAST) {
+ tfm = crypto_alloc_shash(hash_algo_name[algo], 0, 0);
+ if (IS_ERR(tfm)) {
+ rc = PTR_ERR(tfm);
+ pr_err("Can not allocate %s (reason: %d)\n",
+ hash_algo_name[algo], rc);
+ }
+ }
+ return tfm;
+}
+
+static void ima_free_tfm(struct crypto_shash *tfm)
+{
+ if (tfm != ima_shash_tfm)
+ crypto_free_shash(tfm);
+}
+
/*
* Calculate the MD5/SHA1 file digest
*/
-int ima_calc_file_hash(struct file *file, char *digest)
+static int ima_calc_file_hash_tfm(struct file *file,
+ struct ima_digest_data *hash,
+ struct crypto_shash *tfm)
{
loff_t i_size, offset = 0;
char *rbuf;
int rc, read = 0;
struct {
struct shash_desc shash;
- char ctx[crypto_shash_descsize(ima_shash_tfm)];
+ char ctx[crypto_shash_descsize(tfm)];
} desc;
- desc.shash.tfm = ima_shash_tfm;
+ desc.shash.tfm = tfm;
desc.shash.flags = 0;
+ hash->length = crypto_shash_digestsize(tfm);
+
rc = crypto_shash_init(&desc.shash);
if (rc != 0)
return rc;
}
kfree(rbuf);
if (!rc)
- rc = crypto_shash_final(&desc.shash, digest);
+ rc = crypto_shash_final(&desc.shash, hash->digest);
if (read)
file->f_mode &= ~FMODE_READ;
out:
return rc;
}
+int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+{
+ struct crypto_shash *tfm;
+ int rc;
+
+ tfm = ima_alloc_tfm(hash->algo);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ rc = ima_calc_file_hash_tfm(file, hash, tfm);
+
+ ima_free_tfm(tfm);
+
+ return rc;
+}
+
/*
- * Calculate the hash of a given buffer
+ * Calculate the hash of template data
*/
-int ima_calc_buffer_hash(const void *data, int len, char *digest)
+static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data,
+ struct ima_template_desc *td,
+ int num_fields,
+ struct ima_digest_data *hash,
+ struct crypto_shash *tfm)
{
struct {
struct shash_desc shash;
- char ctx[crypto_shash_descsize(ima_shash_tfm)];
+ char ctx[crypto_shash_descsize(tfm)];
} desc;
+ int rc, i;
- desc.shash.tfm = ima_shash_tfm;
+ desc.shash.tfm = tfm;
desc.shash.flags = 0;
- return crypto_shash_digest(&desc.shash, data, len, digest);
+ hash->length = crypto_shash_digestsize(tfm);
+
+ rc = crypto_shash_init(&desc.shash);
+ if (rc != 0)
+ return rc;
+
+ for (i = 0; i < num_fields; i++) {
+ if (strcmp(td->name, IMA_TEMPLATE_IMA_NAME) != 0) {
+ rc = crypto_shash_update(&desc.shash,
+ (const u8 *) &field_data[i].len,
+ sizeof(field_data[i].len));
+ if (rc)
+ break;
+ }
+ rc = crypto_shash_update(&desc.shash, field_data[i].data,
+ field_data[i].len);
+ if (rc)
+ break;
+ }
+
+ if (!rc)
+ rc = crypto_shash_final(&desc.shash, hash->digest);
+
+ return rc;
+}
+
+int ima_calc_field_array_hash(struct ima_field_data *field_data,
+ struct ima_template_desc *desc, int num_fields,
+ struct ima_digest_data *hash)
+{
+ struct crypto_shash *tfm;
+ int rc;
+
+ tfm = ima_alloc_tfm(hash->algo);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ rc = ima_calc_field_array_hash_tfm(field_data, desc, num_fields,
+ hash, tfm);
+
+ ima_free_tfm(tfm);
+
+ return rc;
}
static void __init ima_pcrread(int idx, u8 *pcr)
/*
* Calculate the boot aggregate hash
*/
-int __init ima_calc_boot_aggregate(char *digest)
+static int __init ima_calc_boot_aggregate_tfm(char *digest,
+ struct crypto_shash *tfm)
{
- u8 pcr_i[IMA_DIGEST_SIZE];
+ u8 pcr_i[TPM_DIGEST_SIZE];
int rc, i;
struct {
struct shash_desc shash;
- char ctx[crypto_shash_descsize(ima_shash_tfm)];
+ char ctx[crypto_shash_descsize(tfm)];
} desc;
- desc.shash.tfm = ima_shash_tfm;
+ desc.shash.tfm = tfm;
desc.shash.flags = 0;
rc = crypto_shash_init(&desc.shash);
for (i = TPM_PCR0; i < TPM_PCR8; i++) {
ima_pcrread(i, pcr_i);
/* now accumulate with current aggregate */
- rc = crypto_shash_update(&desc.shash, pcr_i, IMA_DIGEST_SIZE);
+ rc = crypto_shash_update(&desc.shash, pcr_i, TPM_DIGEST_SIZE);
}
if (!rc)
crypto_shash_final(&desc.shash, digest);
return rc;
}
+
+int __init ima_calc_boot_aggregate(struct ima_digest_data *hash)
+{
+ struct crypto_shash *tfm;
+ int rc;
+
+ tfm = ima_alloc_tfm(hash->algo);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ hash->length = crypto_shash_digestsize(tfm);
+ rc = ima_calc_boot_aggregate_tfm(hash->digest, tfm);
+
+ ima_free_tfm(tfm);
+
+ return rc;
+}
* against concurrent list-extension
*/
rcu_read_lock();
- qe = list_entry_rcu(qe->later.next,
- struct ima_queue_entry, later);
+ qe = list_entry_rcu(qe->later.next, struct ima_queue_entry, later);
rcu_read_unlock();
(*pos)++;
{
}
-static void ima_putc(struct seq_file *m, void *data, int datalen)
+void ima_putc(struct seq_file *m, void *data, int datalen)
{
while (datalen--)
seq_putc(m, *(char *)data++);
* char[20]=template digest
* 32bit-le=template name size
* char[n]=template name
+ * 32bit-le=eventdata length (omitted for the 'ima' template)
* eventdata[n]=template specific data
*/
static int ima_measurements_show(struct seq_file *m, void *v)
struct ima_template_entry *e;
int namelen;
u32 pcr = CONFIG_IMA_MEASURE_PCR_IDX;
+ bool is_ima_template = false;
+ int i;
/* get entry */
e = qe->entry;
ima_putc(m, &pcr, sizeof pcr);
/* 2nd: template digest */
- ima_putc(m, e->digest, IMA_DIGEST_SIZE);
+ ima_putc(m, e->digest, TPM_DIGEST_SIZE);
/* 3rd: template name size */
- namelen = strlen(e->template_name);
+ namelen = strlen(e->template_desc->name);
ima_putc(m, &namelen, sizeof namelen);
/* 4th: template name */
- ima_putc(m, (void *)e->template_name, namelen);
+ ima_putc(m, e->template_desc->name, namelen);
+
+ /* 5th: template length (except for 'ima' template) */
+ if (strcmp(e->template_desc->name, IMA_TEMPLATE_IMA_NAME) == 0)
+ is_ima_template = true;
+
+ if (!is_ima_template)
+ ima_putc(m, &e->template_data_len,
+ sizeof(e->template_data_len));
+
+ /* 6th: template specific data */
+ for (i = 0; i < e->template_desc->num_fields; i++) {
+ enum ima_show_type show = IMA_SHOW_BINARY;
+ struct ima_template_field *field = e->template_desc->fields[i];
- /* 5th: template specific data */
- ima_template_show(m, (struct ima_template_data *)&e->template,
- IMA_SHOW_BINARY);
+ if (is_ima_template && strcmp(field->field_id, "d") == 0)
+ show = IMA_SHOW_BINARY_NO_FIELD_LEN;
+ field->field_show(m, show, &e->template_data[i]);
+ }
return 0;
}
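The binary record layout documented in the comment above can be walked
from userspace. The following reader is an illustrative sketch only, not
part of the patch: it assumes the conventional securityfs mount point,
handles 'ima-ng'/'ima-sig' style records (which carry the total
field-data length word), and elides most error handling.

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		const char *path =
			"/sys/kernel/security/ima/binary_runtime_measurements";
		uint32_t pcr, namelen, datalen;
		uint8_t digest[20];		/* TPM_DIGEST_SIZE */
		char name[64];
		FILE *f = fopen(path, "rb");

		if (!f)
			return 1;
		while (fread(&pcr, sizeof(pcr), 1, f) == 1) {
			if (fread(digest, 1, sizeof(digest), f) != sizeof(digest))
				break;
			if (fread(&namelen, sizeof(namelen), 1, f) != 1 ||
			    namelen >= sizeof(name))
				break;
			if (fread(name, 1, namelen, f) != namelen)
				break;
			name[namelen] = '\0';
			/* total length of the field data; the 'ima'
			 * template omits this word */
			if (fread(&datalen, sizeof(datalen), 1, f) != 1)
				break;
			fseek(f, datalen, SEEK_CUR);	/* skip field data */
			printf("PCR %u: template %s, %u bytes of data\n",
			       pcr, name, datalen);
		}
		fclose(f);
		return 0;
	}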
.release = seq_release,
};
-static void ima_print_digest(struct seq_file *m, u8 *digest)
+void ima_print_digest(struct seq_file *m, u8 *digest, int size)
{
int i;
- for (i = 0; i < IMA_DIGEST_SIZE; i++)
+ for (i = 0; i < size; i++)
seq_printf(m, "%02x", *(digest + i));
}
-void ima_template_show(struct seq_file *m, void *e, enum ima_show_type show)
-{
- struct ima_template_data *entry = e;
- int namelen;
-
- switch (show) {
- case IMA_SHOW_ASCII:
- ima_print_digest(m, entry->digest);
- seq_printf(m, " %s\n", entry->file_name);
- break;
- case IMA_SHOW_BINARY:
- ima_putc(m, entry->digest, IMA_DIGEST_SIZE);
-
- namelen = strlen(entry->file_name);
- ima_putc(m, &namelen, sizeof namelen);
- ima_putc(m, entry->file_name, namelen);
- default:
- break;
- }
-}
-
/* print in ascii */
static int ima_ascii_measurements_show(struct seq_file *m, void *v)
{
/* the list never shrinks, so we don't need a lock here */
struct ima_queue_entry *qe = v;
struct ima_template_entry *e;
+ int i;
/* get entry */
e = qe->entry;
seq_printf(m, "%2d ", CONFIG_IMA_MEASURE_PCR_IDX);
/* 2nd: SHA1 template hash */
- ima_print_digest(m, e->digest);
+ ima_print_digest(m, e->digest, TPM_DIGEST_SIZE);
/* 3rd: template name */
- seq_printf(m, " %s ", e->template_name);
+ seq_printf(m, " %s", e->template_desc->name);
/* 4th: template specific data */
- ima_template_show(m, (struct ima_template_data *)&e->template,
- IMA_SHOW_ASCII);
+ for (i = 0; i < e->template_desc->num_fields; i++) {
+ seq_puts(m, " ");
+ if (e->template_data[i].len == 0)
+ continue;
+
+ e->template_desc->fields[i]->field_show(m, IMA_SHOW_ASCII,
+ &e->template_data[i]);
+ }
+ seq_puts(m, "\n");
return 0;
}
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/err.h>
+#include <crypto/hash_info.h>
#include "ima.h"
/* name for boot aggregate entry */
static void __init ima_add_boot_aggregate(void)
{
struct ima_template_entry *entry;
+ struct integrity_iint_cache tmp_iint, *iint = &tmp_iint;
const char *op = "add_boot_aggregate";
const char *audit_cause = "ENOMEM";
int result = -ENOMEM;
- int violation = 1;
+ int violation = 0;
+ struct {
+ struct ima_digest_data hdr;
+ char digest[TPM_DIGEST_SIZE];
+ } hash;
- entry = kmalloc(sizeof(*entry), GFP_KERNEL);
- if (!entry)
- goto err_out;
+ memset(iint, 0, sizeof(*iint));
+ memset(&hash, 0, sizeof(hash));
+ iint->ima_hash = &hash.hdr;
+ iint->ima_hash->algo = HASH_ALGO_SHA1;
+ iint->ima_hash->length = SHA1_DIGEST_SIZE;
- memset(&entry->template, 0, sizeof(entry->template));
- strncpy(entry->template.file_name, boot_aggregate_name,
- IMA_EVENT_NAME_LEN_MAX);
if (ima_used_chip) {
- violation = 0;
- result = ima_calc_boot_aggregate(entry->template.digest);
+ result = ima_calc_boot_aggregate(&hash.hdr);
if (result < 0) {
audit_cause = "hashing_error";
- kfree(entry);
goto err_out;
}
}
- result = ima_store_template(entry, violation, NULL);
+
+ result = ima_alloc_init_template(iint, NULL, boot_aggregate_name,
+ NULL, 0, &entry);
+ if (result < 0)
+ return;
+
+ result = ima_store_template(entry, violation, NULL,
+ boot_aggregate_name);
if (result < 0)
- kfree(entry);
+ ima_free_template_entry(entry);
return;
err_out:
integrity_audit_msg(AUDIT_INTEGRITY_PCR, NULL, boot_aggregate_name, op,
int __init ima_init(void)
{
- u8 pcr_i[IMA_DIGEST_SIZE];
+ u8 pcr_i[TPM_DIGEST_SIZE];
int rc;
ima_used_chip = 0;
rc = ima_init_crypto();
if (rc)
return rc;
+ rc = ima_init_template();
+ if (rc != 0)
+ return rc;
+
ima_add_boot_aggregate(); /* boot aggregate must be first entry */
ima_init_policy();
#include <linux/slab.h>
#include <linux/xattr.h>
#include <linux/ima.h>
+#include <crypto/hash_info.h>
#include "ima.h"
int ima_appraise;
#endif
-char *ima_hash = "sha1";
+int ima_hash_algo = HASH_ALGO_SHA1;
+static int hash_setup_done;
+
static int __init hash_setup(char *str)
{
- if (strncmp(str, "md5", 3) == 0)
- ima_hash = "md5";
+ struct ima_template_desc *template_desc = ima_template_desc_current();
+ int i;
+
+ if (hash_setup_done)
+ return 1;
+
+ if (strcmp(template_desc->name, IMA_TEMPLATE_IMA_NAME) == 0) {
+ if (strncmp(str, "sha1", 4) == 0)
+ ima_hash_algo = HASH_ALGO_SHA1;
+ else if (strncmp(str, "md5", 3) == 0)
+ ima_hash_algo = HASH_ALGO_MD5;
+ goto out;
+ }
+
+ for (i = 0; i < HASH_ALGO__LAST; i++) {
+ if (strcmp(str, hash_algo_name[i]) == 0) {
+ ima_hash_algo = i;
+ break;
+ }
+ }
+out:
+ hash_setup_done = 1;
return 1;
}
__setup("ima_hash=", hash_setup);
pathname = dentry->d_name.name;
if (send_tomtou)
- ima_add_violation(inode, pathname,
- "invalid_pcr", "ToMToU");
+ ima_add_violation(file, pathname, "invalid_pcr", "ToMToU");
if (send_writers)
- ima_add_violation(inode, pathname,
+ ima_add_violation(file, pathname,
"invalid_pcr", "open_writers");
kfree(pathbuf);
}
{
struct inode *inode = file_inode(file);
struct integrity_iint_cache *iint;
+ struct ima_template_desc *template_desc = ima_template_desc_current();
char *pathbuf = NULL;
const char *pathname = NULL;
int rc = -ENOMEM, action, must_appraise, _func;
+ struct evm_ima_xattr_data *xattr_value = NULL, **xattr_ptr = NULL;
+ int xattr_len = 0;
if (!ima_initialized || !S_ISREG(inode->i_mode))
return 0;
goto out_digsig;
}
- rc = ima_collect_measurement(iint, file);
+ if (strcmp(template_desc->name, IMA_TEMPLATE_IMA_NAME) == 0) {
+ if (action & IMA_APPRAISE_SUBMASK)
+ xattr_ptr = &xattr_value;
+ } else
+ xattr_ptr = &xattr_value;
+
+ rc = ima_collect_measurement(iint, file, xattr_ptr, &xattr_len);
if (rc != 0)
goto out_digsig;
pathname = (const char *)file->f_dentry->d_name.name;
if (action & IMA_MEASURE)
- ima_store_measurement(iint, file, pathname);
+ ima_store_measurement(iint, file, pathname,
+ xattr_value, xattr_len);
if (action & IMA_APPRAISE_SUBMASK)
- rc = ima_appraise_measurement(_func, iint, file, pathname);
+ rc = ima_appraise_measurement(_func, iint, file, pathname,
+ xattr_value, xattr_len);
if (action & IMA_AUDIT)
ima_audit_measurement(iint, pathname);
kfree(pathbuf);
rc = -EACCES;
out:
mutex_unlock(&inode->i_mutex);
+ kfree(xattr_value);
if ((rc && must_appraise) && (ima_appraise & IMA_APPRAISE_ENFORCE))
return -EACCES;
return 0;
int ima_bprm_check(struct linux_binprm *bprm)
{
return process_measurement(bprm->file,
- (strcmp(bprm->filename, bprm->interp) == 0) ?
- bprm->filename : bprm->interp,
- MAY_EXEC, BPRM_CHECK);
+ (strcmp(bprm->filename, bprm->interp) == 0) ?
+ bprm->filename : bprm->interp,
+ MAY_EXEC, BPRM_CHECK);
}
/**
{
ima_rdwr_violation_check(file);
return process_measurement(file, NULL,
- mask & (MAY_READ | MAY_WRITE | MAY_EXEC),
- FILE_CHECK);
+ mask & (MAY_READ | MAY_WRITE | MAY_EXEC),
+ FILE_CHECK);
}
EXPORT_SYMBOL_GPL(ima_file_check);
{
int error;
+ hash_setup(CONFIG_IMA_DEFAULT_HASH);
error = ima_init();
if (!error)
ima_initialized = 1;
{.action = DONT_MEASURE,.fsmagic = SYSFS_MAGIC,.flags = IMA_FSMAGIC},
{.action = DONT_MEASURE,.fsmagic = DEBUGFS_MAGIC,.flags = IMA_FSMAGIC},
{.action = DONT_MEASURE,.fsmagic = TMPFS_MAGIC,.flags = IMA_FSMAGIC},
- {.action = DONT_MEASURE,.fsmagic = RAMFS_MAGIC,.flags = IMA_FSMAGIC},
{.action = DONT_MEASURE,.fsmagic = DEVPTS_SUPER_MAGIC,.flags = IMA_FSMAGIC},
{.action = DONT_MEASURE,.fsmagic = BINFMTFS_MAGIC,.flags = IMA_FSMAGIC},
{.action = DONT_MEASURE,.fsmagic = SECURITYFS_MAGIC,.flags = IMA_FSMAGIC},
key = ima_hash_key(digest_value);
rcu_read_lock();
hlist_for_each_entry_rcu(qe, &ima_htable.queue[key], hnext) {
- rc = memcmp(qe->entry->digest, digest_value, IMA_DIGEST_SIZE);
+ rc = memcmp(qe->entry->digest, digest_value, TPM_DIGEST_SIZE);
if (rc == 0) {
ret = qe;
break;
* and extend the pcr.
*/
int ima_add_template_entry(struct ima_template_entry *entry, int violation,
- const char *op, struct inode *inode)
+ const char *op, struct inode *inode,
+ const unsigned char *filename)
{
- u8 digest[IMA_DIGEST_SIZE];
+ u8 digest[TPM_DIGEST_SIZE];
const char *audit_cause = "hash_added";
char tpm_audit_cause[AUDIT_CAUSE_LEN_MAX];
int audit_info = 1;
}
out:
mutex_unlock(&ima_extend_list_mutex);
- integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode,
- entry->template.file_name,
+ integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode, filename,
op, audit_cause, result, audit_info);
return result;
}
--- /dev/null
+/*
+ * Copyright (C) 2013 Politecnico di Torino, Italy
+ * TORSEC group -- http://security.polito.it
+ *
+ * Author: Roberto Sassu <roberto.sassu@polito.it>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ *
+ * File: ima_template.c
+ * Helpers to manage template descriptors.
+ */
+#include <crypto/hash_info.h>
+
+#include "ima.h"
+#include "ima_template_lib.h"
+
+static struct ima_template_desc defined_templates[] = {
+ {.name = IMA_TEMPLATE_IMA_NAME, .fmt = IMA_TEMPLATE_IMA_FMT},
+ {.name = "ima-ng",.fmt = "d-ng|n-ng"},
+ {.name = "ima-sig",.fmt = "d-ng|n-ng|sig"},
+};
+
+static struct ima_template_field supported_fields[] = {
+ {.field_id = "d",.field_init = ima_eventdigest_init,
+ .field_show = ima_show_template_digest},
+ {.field_id = "n",.field_init = ima_eventname_init,
+ .field_show = ima_show_template_string},
+ {.field_id = "d-ng",.field_init = ima_eventdigest_ng_init,
+ .field_show = ima_show_template_digest_ng},
+ {.field_id = "n-ng",.field_init = ima_eventname_ng_init,
+ .field_show = ima_show_template_string},
+ {.field_id = "sig",.field_init = ima_eventsig_init,
+ .field_show = ima_show_template_sig},
+};
+
+static struct ima_template_desc *ima_template;
+static struct ima_template_desc *lookup_template_desc(const char *name);
+
+static int __init ima_template_setup(char *str)
+{
+ struct ima_template_desc *template_desc;
+ int template_len = strlen(str);
+
+ /*
+ * Verify that a template with the supplied name exists.
+ * If not, use CONFIG_IMA_DEFAULT_TEMPLATE.
+ */
+ template_desc = lookup_template_desc(str);
+ if (!template_desc)
+ return 1;
+
+ /*
+ * Verify whether the current hash algorithm is supported
+ * by the 'ima' template.
+ */
+ if (template_len == 3 && strcmp(str, IMA_TEMPLATE_IMA_NAME) == 0 &&
+ ima_hash_algo != HASH_ALGO_SHA1 && ima_hash_algo != HASH_ALGO_MD5) {
+ pr_err("IMA: template does not support hash alg\n");
+ return 1;
+ }
+
+ ima_template = template_desc;
+ return 1;
+}
+__setup("ima_template=", ima_template_setup);
+
+static struct ima_template_desc *lookup_template_desc(const char *name)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(defined_templates); i++) {
+ if (strcmp(defined_templates[i].name, name) == 0)
+ return defined_templates + i;
+ }
+
+ return NULL;
+}
+
+static struct ima_template_field *lookup_template_field(const char *field_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(supported_fields); i++)
+ if (strncmp(supported_fields[i].field_id, field_id,
+ IMA_TEMPLATE_FIELD_ID_MAX_LEN) == 0)
+ return &supported_fields[i];
+ return NULL;
+}
+
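+/* count the '|'-separated fields in a format string, e.g. "d-ng|n-ng" -> 2 */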
+static int template_fmt_size(const char *template_fmt)
+{
+ char c;
+ int template_fmt_len = strlen(template_fmt);
+ int i = 0, j = 0;
+
+ while (i < template_fmt_len) {
+ c = template_fmt[i];
+ if (c == '|')
+ j++;
+ i++;
+ }
+
+ return j + 1;
+}
+
+static int template_desc_init_fields(const char *template_fmt,
+ struct ima_template_field ***fields,
+ int *num_fields)
+{
+ char *c, *template_fmt_copy, *template_fmt_ptr;
+ int template_num_fields = template_fmt_size(template_fmt);
+ int i, result = 0;
+
+ if (template_num_fields > IMA_TEMPLATE_NUM_FIELDS_MAX)
+ return -EINVAL;
+
+ /* copying is needed as strsep() modifies the original buffer */
+ template_fmt_copy = kstrdup(template_fmt, GFP_KERNEL);
+ if (template_fmt_copy == NULL)
+ return -ENOMEM;
+
+ *fields = kzalloc(template_num_fields * sizeof(*fields), GFP_KERNEL);
+ if (*fields == NULL) {
+ result = -ENOMEM;
+ goto out;
+ }
+
+ template_fmt_ptr = template_fmt_copy;
+ for (i = 0; (c = strsep(&template_fmt_ptr, "|")) != NULL &&
+ i < template_num_fields; i++) {
+ struct ima_template_field *f = lookup_template_field(c);
+
+ if (!f) {
+ result = -ENOENT;
+ goto out;
+ }
+ (*fields)[i] = f;
+ }
+ *num_fields = i;
+out:
+ if (result < 0) {
+ kfree(*fields);
+ *fields = NULL;
+ }
+ kfree(template_fmt_copy);
+ return result;
+}
+
+static int init_defined_templates(void)
+{
+ int i = 0;
+ int result = 0;
+
+ /* Init defined templates. */
+ for (i = 0; i < ARRAY_SIZE(defined_templates); i++) {
+ struct ima_template_desc *template = &defined_templates[i];
+
+ result = template_desc_init_fields(template->fmt,
+ &(template->fields),
+ &(template->num_fields));
+ if (result < 0)
+ return result;
+ }
+ return result;
+}
+
+struct ima_template_desc *ima_template_desc_current(void)
+{
+ if (!ima_template)
+ ima_template =
+ lookup_template_desc(CONFIG_IMA_DEFAULT_TEMPLATE);
+ return ima_template;
+}
+
+int ima_init_template(void)
+{
+ int result;
+
+ result = init_defined_templates();
+ if (result < 0)
+ return result;
+
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (C) 2013 Politecnico di Torino, Italy
+ * TORSEC group -- http://security.polito.it
+ *
+ * Author: Roberto Sassu <roberto.sassu@polito.it>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ *
+ * File: ima_template_lib.c
+ * Library of supported template fields.
+ */
+#include <crypto/hash_info.h>
+
+#include "ima_template_lib.h"
+
+static bool ima_template_hash_algo_allowed(u8 algo)
+{
+ if (algo == HASH_ALGO_SHA1 || algo == HASH_ALGO_MD5)
+ return true;
+
+ return false;
+}
+
+enum data_formats {
+ DATA_FMT_DIGEST = 0,
+ DATA_FMT_DIGEST_WITH_ALGO,
+ DATA_FMT_EVENT_NAME,
+ DATA_FMT_STRING,
+ DATA_FMT_HEX
+};
+
+static int ima_write_template_field_data(const void *data, const u32 datalen,
+ enum data_formats datafmt,
+ struct ima_field_data *field_data)
+{
+ u8 *buf, *buf_ptr;
+ u32 buflen;
+
+ switch (datafmt) {
+ case DATA_FMT_EVENT_NAME:
+ buflen = IMA_EVENT_NAME_LEN_MAX + 1;
+ break;
+ case DATA_FMT_STRING:
+ buflen = datalen + 1;
+ break;
+ default:
+ buflen = datalen;
+ }
+
+ buf = kzalloc(buflen, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ memcpy(buf, data, datalen);
+
+ /*
+	 * Replace all space characters with underscores for event names and
+	 * strings. This avoids filenames with spaces, or ones that end with
+	 * the suffix ' (deleted)', being split into multiple template fields
+	 * when a measurement list is parsed (the space is the delimiter
+	 * character for measurement lists in ASCII format).
+ */
+ if (datafmt == DATA_FMT_EVENT_NAME || datafmt == DATA_FMT_STRING) {
+ for (buf_ptr = buf; buf_ptr - buf < datalen; buf_ptr++)
+ if (*buf_ptr == ' ')
+ *buf_ptr = '_';
+ }
+
+ field_data->data = buf;
+ field_data->len = buflen;
+ return 0;
+}
+
+static void ima_show_template_data_ascii(struct seq_file *m,
+ enum ima_show_type show,
+ enum data_formats datafmt,
+ struct ima_field_data *field_data)
+{
+	u8 *buf_ptr = field_data->data;
+	u32 buflen = field_data->len;
+
+ switch (datafmt) {
+ case DATA_FMT_DIGEST_WITH_ALGO:
+ buf_ptr = strnchr(field_data->data, buflen, ':');
+ if (buf_ptr != field_data->data)
+ seq_printf(m, "%s", field_data->data);
+
+ /* skip ':' and '\0' */
+ buf_ptr += 2;
+ buflen -= buf_ptr - field_data->data;
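+		/* fall through */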
+ case DATA_FMT_DIGEST:
+ case DATA_FMT_HEX:
+ if (!buflen)
+ break;
+ ima_print_digest(m, buf_ptr, buflen);
+ break;
+ case DATA_FMT_STRING:
+ seq_printf(m, "%s", buf_ptr);
+ break;
+ default:
+ break;
+ }
+}
+
+static void ima_show_template_data_binary(struct seq_file *m,
+ enum ima_show_type show,
+ enum data_formats datafmt,
+ struct ima_field_data *field_data)
+{
+ if (show != IMA_SHOW_BINARY_NO_FIELD_LEN)
+ ima_putc(m, &field_data->len, sizeof(u32));
+
+ if (!field_data->len)
+ return;
+
+ ima_putc(m, field_data->data, field_data->len);
+}
+
+static void ima_show_template_field_data(struct seq_file *m,
+ enum ima_show_type show,
+ enum data_formats datafmt,
+ struct ima_field_data *field_data)
+{
+ switch (show) {
+ case IMA_SHOW_ASCII:
+ ima_show_template_data_ascii(m, show, datafmt, field_data);
+ break;
+ case IMA_SHOW_BINARY:
+ case IMA_SHOW_BINARY_NO_FIELD_LEN:
+ ima_show_template_data_binary(m, show, datafmt, field_data);
+ break;
+ default:
+ break;
+ }
+}
+
+void ima_show_template_digest(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data)
+{
+ ima_show_template_field_data(m, show, DATA_FMT_DIGEST, field_data);
+}
+
+void ima_show_template_digest_ng(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data)
+{
+ ima_show_template_field_data(m, show, DATA_FMT_DIGEST_WITH_ALGO,
+ field_data);
+}
+
+void ima_show_template_string(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data)
+{
+ ima_show_template_field_data(m, show, DATA_FMT_STRING, field_data);
+}
+
+void ima_show_template_sig(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data)
+{
+ ima_show_template_field_data(m, show, DATA_FMT_HEX, field_data);
+}
+
+static int ima_eventdigest_init_common(u8 *digest, u32 digestsize, u8 hash_algo,
+ struct ima_field_data *field_data,
+ bool size_limit)
+{
+ /*
+ * digest formats:
+ * - DATA_FMT_DIGEST: digest
+ * - DATA_FMT_DIGEST_WITH_ALGO: [<hash algo>] + ':' + '\0' + digest,
+	 *   where <hash algo> is provided if the hash algorithm is not
+ * SHA1 or MD5
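+	 *   (e.g. for sha256: "sha256" ':' '\0' followed by the digest bytes)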
+ */
+ u8 buffer[CRYPTO_MAX_ALG_NAME + 2 + IMA_MAX_DIGEST_SIZE] = { 0 };
+ enum data_formats fmt = DATA_FMT_DIGEST;
+ u32 offset = 0;
+
+ if (!size_limit) {
+ fmt = DATA_FMT_DIGEST_WITH_ALGO;
+ if (hash_algo < HASH_ALGO__LAST)
+ offset += snprintf(buffer, CRYPTO_MAX_ALG_NAME + 1,
+ "%s", hash_algo_name[hash_algo]);
+ buffer[offset] = ':';
+ offset += 2;
+ }
+
+ if (digest)
+ memcpy(buffer + offset, digest, digestsize);
+ else
+ /*
+ * If digest is NULL, the event being recorded is a violation.
+		 * Make room for the digest by increasing the offset
+		 * by IMA_DIGEST_SIZE.
+ */
+ offset += IMA_DIGEST_SIZE;
+
+ return ima_write_template_field_data(buffer, offset + digestsize,
+ fmt, field_data);
+}
+
+/*
+ * This function writes the digest of an event (with size limit).
+ */
+int ima_eventdigest_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data)
+{
+ struct {
+ struct ima_digest_data hdr;
+ char digest[IMA_MAX_DIGEST_SIZE];
+ } hash;
+ u8 *cur_digest = NULL;
+ u32 cur_digestsize = 0;
+ struct inode *inode;
+ int result;
+
+ memset(&hash, 0, sizeof(hash));
+
+ if (!iint) /* recording a violation. */
+ goto out;
+
+ if (ima_template_hash_algo_allowed(iint->ima_hash->algo)) {
+ cur_digest = iint->ima_hash->digest;
+ cur_digestsize = iint->ima_hash->length;
+ goto out;
+ }
+
+ if (!file) /* missing info to re-calculate the digest */
+ return -EINVAL;
+
+ inode = file_inode(file);
+ hash.hdr.algo = ima_template_hash_algo_allowed(ima_hash_algo) ?
+ ima_hash_algo : HASH_ALGO_SHA1;
+ result = ima_calc_file_hash(file, &hash.hdr);
+ if (result) {
+ integrity_audit_msg(AUDIT_INTEGRITY_DATA, inode,
+ filename, "collect_data",
+ "failed", result, 0);
+ return result;
+ }
+ cur_digest = hash.hdr.digest;
+ cur_digestsize = hash.hdr.length;
+out:
+ return ima_eventdigest_init_common(cur_digest, cur_digestsize, -1,
+ field_data, true);
+}
+
+/*
+ * This function writes the digest of an event (without size limit).
+ */
+int ima_eventdigest_ng_init(struct integrity_iint_cache *iint,
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len, struct ima_field_data *field_data)
+{
+ u8 *cur_digest = NULL, hash_algo = HASH_ALGO__LAST;
+ u32 cur_digestsize = 0;
+
+ /* If iint is NULL, we are recording a violation. */
+ if (!iint)
+ goto out;
+
+ cur_digest = iint->ima_hash->digest;
+ cur_digestsize = iint->ima_hash->length;
+
+ hash_algo = iint->ima_hash->algo;
+out:
+ return ima_eventdigest_init_common(cur_digest, cur_digestsize,
+ hash_algo, field_data, false);
+}
+
+static int ima_eventname_init_common(struct integrity_iint_cache *iint,
+ struct file *file,
+ const unsigned char *filename,
+ struct ima_field_data *field_data,
+ bool size_limit)
+{
+ const char *cur_filename = NULL;
+ u32 cur_filename_len = 0;
+ enum data_formats fmt = size_limit ?
+ DATA_FMT_EVENT_NAME : DATA_FMT_STRING;
+
+ BUG_ON(filename == NULL && file == NULL);
+
+ if (filename) {
+ cur_filename = filename;
+ cur_filename_len = strlen(filename);
+
+ if (!size_limit || cur_filename_len <= IMA_EVENT_NAME_LEN_MAX)
+ goto out;
+ }
+
+ if (file) {
+ cur_filename = file->f_dentry->d_name.name;
+ cur_filename_len = strlen(cur_filename);
+ } else
+ /*
+		 * Truncate the filename if it is too long and
+		 * no file descriptor is available.
+ */
+ cur_filename_len = IMA_EVENT_NAME_LEN_MAX;
+out:
+ return ima_write_template_field_data(cur_filename, cur_filename_len,
+ fmt, field_data);
+}
+
+/*
+ * This function writes the name of an event (with size limit).
+ */
+int ima_eventname_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data)
+{
+ return ima_eventname_init_common(iint, file, filename,
+ field_data, true);
+}
+
+/*
+ * This function writes the name of an event (without size limit).
+ */
+int ima_eventname_ng_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data)
+{
+ return ima_eventname_init_common(iint, file, filename,
+ field_data, false);
+}
+
+/*
+ * ima_eventsig_init - include the file signature as part of the template data
+ */
+int ima_eventsig_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data)
+{
+ enum data_formats fmt = DATA_FMT_HEX;
+ int rc = 0;
+
+ if ((!xattr_value) || (xattr_value->type != EVM_IMA_XATTR_DIGSIG))
+ goto out;
+
+ rc = ima_write_template_field_data(xattr_value, xattr_len, fmt,
+ field_data);
+out:
+ return rc;
+}
--- /dev/null
+/*
+ * Copyright (C) 2013 Politecnico di Torino, Italy
+ * TORSEC group -- http://security.polito.it
+ *
+ * Author: Roberto Sassu <roberto.sassu@polito.it>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2 of the
+ * License.
+ *
+ * File: ima_template_lib.h
+ * Header for the library of supported template fields.
+ */
+#ifndef __LINUX_IMA_TEMPLATE_LIB_H
+#define __LINUX_IMA_TEMPLATE_LIB_H
+
+#include <linux/seq_file.h>
+#include "ima.h"
+
+void ima_show_template_digest(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data);
+void ima_show_template_digest_ng(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data);
+void ima_show_template_string(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data);
+void ima_show_template_sig(struct seq_file *m, enum ima_show_type show,
+ struct ima_field_data *field_data);
+int ima_eventdigest_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data);
+int ima_eventname_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data);
+int ima_eventdigest_ng_init(struct integrity_iint_cache *iint,
+ struct file *file, const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value,
+ int xattr_len, struct ima_field_data *field_data);
+int ima_eventname_ng_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data);
+int ima_eventsig_init(struct integrity_iint_cache *iint, struct file *file,
+ const unsigned char *filename,
+ struct evm_ima_xattr_data *xattr_value, int xattr_len,
+ struct ima_field_data *field_data);
+#endif /* __LINUX_IMA_TEMPLATE_LIB_H */
IMA_XATTR_DIGEST = 0x01,
EVM_XATTR_HMAC,
EVM_IMA_XATTR_DIGSIG,
+ IMA_XATTR_DIGEST_NG,
};
struct evm_ima_xattr_data {
u8 type;
u8 digest[SHA1_DIGEST_SIZE];
-} __attribute__((packed));
+} __packed;
+
+#define IMA_MAX_DIGEST_SIZE 64
+
+struct ima_digest_data {
+ u8 algo;
+ u8 length;
+ union {
+ struct {
+ u8 unused;
+ u8 type;
+ } sha1;
+ struct {
+ u8 type;
+ u8 algo;
+ } ng;
+ u8 data[2];
+ } xattr;
+ u8 digest[0];
+} __packed;
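+
+/*
+ * Resulting security.ima xattr layouts (cf. ima_fix_xattr()):
+ *   original: an IMA_XATTR_DIGEST type byte followed by the digest;
+ *   ng:       an IMA_XATTR_DIGEST_NG type byte, an algo byte, then the
+ *             digest.
+ */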
+
+/*
+ * signature format v2 - for using with asymmetric keys
+ */
+struct signature_v2_hdr {
+ uint8_t type; /* xattr type */
+ uint8_t version; /* signature format version */
+ uint8_t hash_algo; /* Digest algorithm [enum pkey_hash_algo] */
+ uint32_t keyid; /* IMA key identifier - not X509/PGP specific */
+ uint16_t sig_size; /* signature size */
+ uint8_t sig[0]; /* signature payload */
+} __packed;
/* integrity data associated with an inode */
struct integrity_iint_cache {
- struct rb_node rb_node; /* rooted in integrity_iint_tree */
+ struct rb_node rb_node; /* rooted in integrity_iint_tree */
struct inode *inode; /* back pointer to inode in question */
u64 version; /* track inode changes */
unsigned long flags;
- struct evm_ima_xattr_data ima_xattr;
enum integrity_status ima_file_status:4;
enum integrity_status ima_mmap_status:4;
enum integrity_status ima_bprm_status:4;
enum integrity_status ima_module_status:4;
enum integrity_status evm_status:4;
+ struct ima_digest_data *ima_hash;
};
/* rbtree tree calls to lookup, insert, delete
#ifdef CONFIG_INTEGRITY_SIGNATURE
int integrity_digsig_verify(const unsigned int id, const char *sig, int siglen,
- const char *digest, int digestlen);
+ const char *digest, int digestlen);
#else
config KEYS
bool "Enable access key retention support"
+ select ASSOCIATIVE_ARRAY
help
This option provides support for retaining authentication tokens and
access keys in the kernel.
If you are unsure as to whether this is required, answer N.
+config PERSISTENT_KEYRINGS
+ bool "Enable register of persistent per-UID keyrings"
+ depends on KEYS
+ help
+ This option provides a register of persistent per-UID keyrings,
+ primarily aimed at Kerberos key storage. The keyrings are persistent
+ in the sense that they stay around after all processes of that UID
+ have exited, not that they survive the machine being rebooted.
+
+ A particular keyring may be accessed by either the user whose keyring
+ it is or by a process with administrative privileges. The active
+	  LSMs get to rule on which admin-level processes get to access the
+ cache.
+
+ Keyrings are created and added into the register upon demand and get
+ removed if they expire (a default timeout is set upon creation).
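+
+	  From userspace a persistent keyring is typically obtained via
+	  keyctl(KEYCTL_GET_PERSISTENT, uid, destination_keyring); see the
+	  KEYCTL_GET_PERSISTENT handler added below.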
+
+config BIG_KEYS
+ bool "Large payload keys"
+ depends on KEYS
+ depends on TMPFS
+ help
+ This option provides support for holding large keys within the kernel
+ (for example Kerberos ticket caches). The data may be stored out to
+ swapspace by tmpfs.
+
+ If you are unsure as to whether this is required, answer N.
+
config TRUSTED_KEYS
tristate "TRUSTED KEYS"
depends on KEYS && TCG_TPM
obj-$(CONFIG_KEYS_COMPAT) += compat.o
obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_SYSCTL) += sysctl.o
+obj-$(CONFIG_PERSISTENT_KEYRINGS) += persistent.o
#
# Key types
#
+obj-$(CONFIG_BIG_KEYS) += big_key.o
obj-$(CONFIG_TRUSTED_KEYS) += trusted.o
obj-$(CONFIG_ENCRYPTED_KEYS) += encrypted-keys/
--- /dev/null
+/* Large capacity key type
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/file.h>
+#include <linux/shmem_fs.h>
+#include <linux/err.h>
+#include <keys/user-type.h>
+#include <keys/big_key-type.h>
+
+MODULE_LICENSE("GPL");
+
+/*
+ * If the data is under this limit, there's no point creating a shm file to
+ * hold it as the permanently resident metadata for the shmem fs will be at
+ * least as large as the data.
+ */
+#define BIG_KEY_FILE_THRESHOLD (sizeof(struct inode) + sizeof(struct dentry))
+
+/*
+ * big_key defined keys take an arbitrary string as the description and an
+ * arbitrary blob of data as the payload
+ */
+struct key_type key_type_big_key = {
+ .name = "big_key",
+ .def_lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT,
+ .instantiate = big_key_instantiate,
+ .match = user_match,
+ .revoke = big_key_revoke,
+ .destroy = big_key_destroy,
+ .describe = big_key_describe,
+ .read = big_key_read,
+};
+
+/*
+ * Instantiate a big key
+ */
+int big_key_instantiate(struct key *key, struct key_preparsed_payload *prep)
+{
+ struct path *path = (struct path *)&key->payload.data2;
+ struct file *file;
+ ssize_t written;
+ size_t datalen = prep->datalen;
+ int ret;
+
+ ret = -EINVAL;
+ if (datalen <= 0 || datalen > 1024 * 1024 || !prep->data)
+ goto error;
+
+ /* Set an arbitrary quota */
+ ret = key_payload_reserve(key, 16);
+ if (ret < 0)
+ goto error;
+
+ key->type_data.x[1] = datalen;
+
+ if (datalen > BIG_KEY_FILE_THRESHOLD) {
+ /* Create a shmem file to store the data in. This will permit the data
+ * to be swapped out if needed.
+ *
+ * TODO: Encrypt the stored data with a temporary key.
+ */
+ file = shmem_kernel_file_setup("", datalen, 0);
+ if (IS_ERR(file)) {
+ ret = PTR_ERR(file);
+ goto err_quota;
+ }
+
+ written = kernel_write(file, prep->data, prep->datalen, 0);
+ if (written != datalen) {
+ ret = written;
+ if (written >= 0)
+ ret = -ENOMEM;
+ goto err_fput;
+ }
+
+ /* Pin the mount and dentry to the key so that we can open it again
+ * later
+ */
+ *path = file->f_path;
+ path_get(path);
+ fput(file);
+ } else {
+ /* Just store the data in a buffer */
+ void *data = kmalloc(datalen, GFP_KERNEL);
+ if (!data) {
+ ret = -ENOMEM;
+ goto err_quota;
+ }
+
+ key->payload.data = memcpy(data, prep->data, prep->datalen);
+ }
+ return 0;
+
+err_fput:
+ fput(file);
+err_quota:
+ key_payload_reserve(key, 0);
+error:
+ return ret;
+}
+
+/*
+ * dispose of the links from a revoked keyring
+ * - called with the key sem write-locked
+ */
+void big_key_revoke(struct key *key)
+{
+ struct path *path = (struct path *)&key->payload.data2;
+
+ /* clear the quota */
+ key_payload_reserve(key, 0);
+ if (key_is_instantiated(key) && key->type_data.x[1] > BIG_KEY_FILE_THRESHOLD)
+ vfs_truncate(path, 0);
+}
+
+/*
+ * dispose of the data dangling from the corpse of a big_key key
+ */
+void big_key_destroy(struct key *key)
+{
+ if (key->type_data.x[1] > BIG_KEY_FILE_THRESHOLD) {
+ struct path *path = (struct path *)&key->payload.data2;
+ path_put(path);
+ path->mnt = NULL;
+ path->dentry = NULL;
+ } else {
+ kfree(key->payload.data);
+ key->payload.data = NULL;
+ }
+}
+
+/*
+ * describe the big_key key
+ */
+void big_key_describe(const struct key *key, struct seq_file *m)
+{
+ unsigned long datalen = key->type_data.x[1];
+
+ seq_puts(m, key->description);
+
+ if (key_is_instantiated(key))
+ seq_printf(m, ": %lu [%s]",
+ datalen,
+ datalen > BIG_KEY_FILE_THRESHOLD ? "file" : "buff");
+}
+
+/*
+ * read the key data
+ * - the key's semaphore is read-locked
+ */
+long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
+{
+ unsigned long datalen = key->type_data.x[1];
+ long ret;
+
+ if (!buffer || buflen < datalen)
+ return datalen;
+
+ if (datalen > BIG_KEY_FILE_THRESHOLD) {
+ struct path *path = (struct path *)&key->payload.data2;
+ struct file *file;
+ loff_t pos;
+
+ file = dentry_open(path, O_RDONLY, current_cred());
+ if (IS_ERR(file))
+ return PTR_ERR(file);
+
+ pos = 0;
+ ret = vfs_read(file, buffer, datalen, &pos);
+ fput(file);
+ if (ret >= 0 && ret != datalen)
+ ret = -EIO;
+ } else {
+ ret = datalen;
+ if (copy_to_user(buffer, key->payload.data, datalen) != 0)
+ ret = -EFAULT;
+ }
+
+ return ret;
+}
+
+/*
+ * Module stuff
+ */
+static int __init big_key_init(void)
+{
+ return register_key_type(&key_type_big_key);
+}
+
+static void __exit big_key_cleanup(void)
+{
+ unregister_key_type(&key_type_big_key);
+}
+
+module_init(big_key_init);
+module_exit(big_key_cleanup);
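For a quick check of the type above from userspace, something like the
following sketch will do (illustrative only, not part of the patch; it
assumes libkeyutils, a hypothetical description "testkey", and a payload
big enough to cross BIG_KEY_FILE_THRESHOLD so that the data lands in the
shmem file):

	#include <keyutils.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char payload[4096], out[4096];
		key_serial_t key;
		long n;

		memset(payload, 'x', sizeof(payload));

		/* 4096 bytes exceeds the inode+dentry threshold, so this
		 * payload is stored in a swappable shmem file */
		key = add_key("big_key", "testkey", payload, sizeof(payload),
			      KEY_SPEC_SESSION_KEYRING);
		if (key == -1) {
			perror("add_key");
			return 1;
		}

		n = keyctl_read(key, out, sizeof(out));
		printf("stored %zu bytes, read %ld back\n", sizeof(payload), n);
		return 0;
	}

Build with -lkeyutils; /proc/keys should then show the key annotated
"[file]" rather than "[buff]".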
case KEYCTL_INVALIDATE:
return keyctl_invalidate_key(arg2);
+ case KEYCTL_GET_PERSISTENT:
+ return keyctl_get_persistent(arg2, arg3);
+
default:
return -EOPNOTSUPP;
}
kleave("");
}
-/*
- * Garbage collect pointers from a keyring.
- *
- * Not called with any locks held. The keyring's key struct will not be
- * deallocated under us as only our caller may deallocate it.
- */
-static void key_gc_keyring(struct key *keyring, time_t limit)
-{
- struct keyring_list *klist;
- int loop;
-
- kenter("%x", key_serial(keyring));
-
- if (keyring->flags & ((1 << KEY_FLAG_INVALIDATED) |
- (1 << KEY_FLAG_REVOKED)))
- goto dont_gc;
-
- /* scan the keyring looking for dead keys */
- rcu_read_lock();
- klist = rcu_dereference(keyring->payload.subscriptions);
- if (!klist)
- goto unlock_dont_gc;
-
- loop = klist->nkeys;
- smp_rmb();
- for (loop--; loop >= 0; loop--) {
- struct key *key = rcu_dereference(klist->keys[loop]);
- if (key_is_dead(key, limit))
- goto do_gc;
- }
-
-unlock_dont_gc:
- rcu_read_unlock();
-dont_gc:
- kleave(" [no gc]");
- return;
-
-do_gc:
- rcu_read_unlock();
-
- keyring_gc(keyring, limit);
- kleave(" [gc]");
-}
-
/*
* Garbage collect a list of unreferenced, detached keys
*/
*/
found_keyring:
spin_unlock(&key_serial_lock);
- kdebug("scan keyring %d", key->serial);
- key_gc_keyring(key, limit);
+ keyring_gc(key, limit);
goto maybe_resched;
/* We found a dead key that is still referenced. Reset its type and
extern void key_type_put(struct key_type *ktype);
extern int __key_link_begin(struct key *keyring,
- const struct key_type *type,
- const char *description,
- unsigned long *_prealloc);
+ const struct keyring_index_key *index_key,
+ struct assoc_array_edit **_edit);
extern int __key_link_check_live_key(struct key *keyring, struct key *key);
-extern void __key_link(struct key *keyring, struct key *key,
- unsigned long *_prealloc);
+extern void __key_link(struct key *key, struct assoc_array_edit **_edit);
extern void __key_link_end(struct key *keyring,
- struct key_type *type,
- unsigned long prealloc);
+ const struct keyring_index_key *index_key,
+ struct assoc_array_edit *edit);
-extern key_ref_t __keyring_search_one(key_ref_t keyring_ref,
- const struct key_type *type,
- const char *description,
- key_perm_t perm);
+extern key_ref_t find_key_to_update(key_ref_t keyring_ref,
+ const struct keyring_index_key *index_key);
extern struct key *keyring_search_instkey(struct key *keyring,
key_serial_t target_id);
+extern int iterate_over_keyring(const struct key *keyring,
+ int (*func)(const struct key *key, void *data),
+ void *data);
+
typedef int (*key_match_func_t)(const struct key *, const void *);
+struct keyring_search_context {
+ struct keyring_index_key index_key;
+ const struct cred *cred;
+ key_match_func_t match;
+ const void *match_data;
+ unsigned flags;
+#define KEYRING_SEARCH_LOOKUP_TYPE 0x0001 /* [as type->def_lookup_type] */
+#define KEYRING_SEARCH_NO_STATE_CHECK 0x0002 /* Skip state checks */
+#define KEYRING_SEARCH_DO_STATE_CHECK 0x0004 /* Override NO_STATE_CHECK */
+#define KEYRING_SEARCH_NO_UPDATE_TIME 0x0008 /* Don't update times */
+#define KEYRING_SEARCH_NO_CHECK_PERM 0x0010 /* Don't check permissions */
+#define KEYRING_SEARCH_DETECT_TOO_DEEP 0x0020 /* Give an error on excessive depth */
+
+ int (*iterator)(const void *object, void *iterator_data);
+
+ /* Internal stuff */
+ int skipped_ret;
+ bool possessed;
+ key_ref_t result;
+ struct timespec now;
+};
+
extern key_ref_t keyring_search_aux(key_ref_t keyring_ref,
- const struct cred *cred,
- struct key_type *type,
- const void *description,
- key_match_func_t match,
- bool no_state_check);
-
-extern key_ref_t search_my_process_keyrings(struct key_type *type,
- const void *description,
- key_match_func_t match,
- bool no_state_check,
- const struct cred *cred);
-extern key_ref_t search_process_keyrings(struct key_type *type,
- const void *description,
- key_match_func_t match,
- const struct cred *cred);
+ struct keyring_search_context *ctx);
+
+extern key_ref_t search_my_process_keyrings(struct keyring_search_context *ctx);
+extern key_ref_t search_process_keyrings(struct keyring_search_context *ctx);
extern struct key *find_keyring_by_name(const char *name, bool skip_perm_check);
/*
* Determine whether a key is dead.
*/
-static inline bool key_is_dead(struct key *key, time_t limit)
+static inline bool key_is_dead(const struct key *key, time_t limit)
{
return
key->flags & ((1 << KEY_FLAG_DEAD) |
extern long keyctl_instantiate_key_common(key_serial_t,
const struct iovec *,
unsigned, size_t, key_serial_t);
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+extern long keyctl_get_persistent(uid_t, key_serial_t);
+extern unsigned persistent_keyring_expiry;
+#else
+static inline long keyctl_get_persistent(uid_t uid, key_serial_t destring)
+{
+ return -EOPNOTSUPP;
+}
+#endif
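For reference, the new operation can be driven from userspace roughly as
follows (an illustrative sketch, not part of the patch; it assumes a kernel
built with CONFIG_PERSISTENT_KEYRINGS and a libkeyutils recent enough to
wrap KEYCTL_GET_PERSISTENT):

	#include <keyutils.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Look up (creating if necessary) the calling user's
		 * persistent keyring and link it into the process keyring */
		key_serial_t ring = keyctl_get_persistent(getuid(),
						KEY_SPEC_PROCESS_KEYRING);
		if (ring == -1) {
			perror("keyctl_get_persistent");
			return 1;
		}
		printf("persistent keyring: %d\n", ring);
		return 0;
	}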
/*
* Debugging key validation
}
}
- desclen = strlen(desc) + 1;
- quotalen = desclen + type->def_datalen;
+ desclen = strlen(desc);
+ quotalen = desclen + 1 + type->def_datalen;
/* get hold of the key tracking for this user */
user = key_user_lookup(uid);
}
/* allocate and initialise the key and its description */
- key = kmem_cache_alloc(key_jar, GFP_KERNEL);
+ key = kmem_cache_zalloc(key_jar, GFP_KERNEL);
if (!key)
goto no_memory_2;
if (desc) {
- key->description = kmemdup(desc, desclen, GFP_KERNEL);
+ key->index_key.desc_len = desclen;
+ key->index_key.description = kmemdup(desc, desclen + 1, GFP_KERNEL);
if (!key->description)
goto no_memory_3;
}
atomic_set(&key->usage, 1);
init_rwsem(&key->sem);
lockdep_set_class(&key->sem, &type->lock_class);
- key->type = type;
+ key->index_key.type = type;
key->user = user;
key->quotalen = quotalen;
key->datalen = type->def_datalen;
key->uid = uid;
key->gid = gid;
key->perm = perm;
- key->flags = 0;
- key->expiry = 0;
- key->payload.data = NULL;
- key->security = NULL;
if (!(flags & KEY_ALLOC_NOT_IN_QUOTA))
key->flags |= 1 << KEY_FLAG_IN_QUOTA;
-
- memset(&key->type_data, 0, sizeof(key->type_data));
+ if (flags & KEY_ALLOC_TRUSTED)
+ key->flags |= 1 << KEY_FLAG_TRUSTED;
#ifdef KEY_DEBUGGING
key->magic = KEY_DEBUG_MAGIC;
struct key_preparsed_payload *prep,
struct key *keyring,
struct key *authkey,
- unsigned long *_prealloc)
+ struct assoc_array_edit **_edit)
{
int ret, awaken;
/* and link it into the destination keyring */
if (keyring)
- __key_link(keyring, key, _prealloc);
+ __key_link(key, _edit);
/* disable the authorisation key */
if (authkey)
struct key *authkey)
{
struct key_preparsed_payload prep;
- unsigned long prealloc;
+ struct assoc_array_edit *edit;
int ret;
memset(&prep, 0, sizeof(prep));
}
if (keyring) {
- ret = __key_link_begin(keyring, key->type, key->description,
- &prealloc);
+ ret = __key_link_begin(keyring, &key->index_key, &edit);
if (ret < 0)
goto error_free_preparse;
}
- ret = __key_instantiate_and_link(key, &prep, keyring, authkey,
- &prealloc);
+ ret = __key_instantiate_and_link(key, &prep, keyring, authkey, &edit);
if (keyring)
- __key_link_end(keyring, key->type, prealloc);
+ __key_link_end(keyring, &key->index_key, edit);
error_free_preparse:
if (key->type->preparse)
struct key *keyring,
struct key *authkey)
{
- unsigned long prealloc;
+ struct assoc_array_edit *edit;
struct timespec now;
int ret, awaken, link_ret = 0;
ret = -EBUSY;
if (keyring)
- link_ret = __key_link_begin(keyring, key->type,
- key->description, &prealloc);
+ link_ret = __key_link_begin(keyring, &key->index_key, &edit);
mutex_lock(&key_construction_mutex);
if (!test_bit(KEY_FLAG_INSTANTIATED, &key->flags)) {
/* mark the key as being negatively instantiated */
atomic_inc(&key->user->nikeys);
+ key->type_data.reject_error = -error;
+ smp_wmb();
set_bit(KEY_FLAG_NEGATIVE, &key->flags);
set_bit(KEY_FLAG_INSTANTIATED, &key->flags);
- key->type_data.reject_error = -error;
now = current_kernel_time();
key->expiry = now.tv_sec + timeout;
key_schedule_gc(key->expiry + key_gc_delay);
/* and link it into the destination keyring */
if (keyring && link_ret == 0)
- __key_link(keyring, key, &prealloc);
+ __key_link(key, &edit);
/* disable the authorisation key */
if (authkey)
mutex_unlock(&key_construction_mutex);
if (keyring)
- __key_link_end(keyring, key->type, prealloc);
+ __key_link_end(keyring, &key->index_key, edit);
/* wake up anyone waiting for a key to be constructed */
if (awaken)
/* this races with key_put(), but that doesn't matter since key_put()
* doesn't actually change the key
*/
- atomic_inc(&key->usage);
+ __key_get(key);
error:
spin_unlock(&key_serial_lock);
key_perm_t perm,
unsigned long flags)
{
- unsigned long prealloc;
+ struct keyring_index_key index_key = {
+ .description = description,
+ };
struct key_preparsed_payload prep;
+ struct assoc_array_edit *edit;
const struct cred *cred = current_cred();
- struct key_type *ktype;
struct key *keyring, *key = NULL;
key_ref_t key_ref;
int ret;
/* look up the key type to see if it's one of the registered kernel
* types */
- ktype = key_type_lookup(type);
- if (IS_ERR(ktype)) {
+ index_key.type = key_type_lookup(type);
+ if (IS_ERR(index_key.type)) {
key_ref = ERR_PTR(-ENODEV);
goto error;
}
key_ref = ERR_PTR(-EINVAL);
- if (!ktype->match || !ktype->instantiate ||
- (!description && !ktype->preparse))
+ if (!index_key.type->match || !index_key.type->instantiate ||
+ (!index_key.description && !index_key.type->preparse))
goto error_put_type;
keyring = key_ref_to_ptr(keyring_ref);
memset(&prep, 0, sizeof(prep));
prep.data = payload;
prep.datalen = plen;
- prep.quotalen = ktype->def_datalen;
- if (ktype->preparse) {
- ret = ktype->preparse(&prep);
+ prep.quotalen = index_key.type->def_datalen;
+ prep.trusted = flags & KEY_ALLOC_TRUSTED;
+ if (index_key.type->preparse) {
+ ret = index_key.type->preparse(&prep);
if (ret < 0) {
key_ref = ERR_PTR(ret);
goto error_put_type;
}
- if (!description)
- description = prep.description;
+ if (!index_key.description)
+ index_key.description = prep.description;
key_ref = ERR_PTR(-EINVAL);
- if (!description)
+ if (!index_key.description)
goto error_free_prep;
}
+ index_key.desc_len = strlen(index_key.description);
+
+ key_ref = ERR_PTR(-EPERM);
+ if (!prep.trusted && test_bit(KEY_FLAG_TRUSTED_ONLY, &keyring->flags))
+ goto error_free_prep;
+ flags |= prep.trusted ? KEY_ALLOC_TRUSTED : 0;
- ret = __key_link_begin(keyring, ktype, description, &prealloc);
+ ret = __key_link_begin(keyring, &index_key, &edit);
if (ret < 0) {
key_ref = ERR_PTR(ret);
goto error_free_prep;
* key of the same type and description in the destination keyring and
* update that instead if possible
*/
- if (ktype->update) {
- key_ref = __keyring_search_one(keyring_ref, ktype, description,
- 0);
- if (!IS_ERR(key_ref))
+ if (index_key.type->update) {
+ key_ref = find_key_to_update(keyring_ref, &index_key);
+ if (key_ref)
goto found_matching_key;
}
perm = KEY_POS_VIEW | KEY_POS_SEARCH | KEY_POS_LINK | KEY_POS_SETATTR;
perm |= KEY_USR_VIEW;
- if (ktype->read)
+ if (index_key.type->read)
perm |= KEY_POS_READ;
- if (ktype == &key_type_keyring || ktype->update)
+ if (index_key.type == &key_type_keyring ||
+ index_key.type->update)
perm |= KEY_POS_WRITE;
}
/* allocate a new key */
- key = key_alloc(ktype, description, cred->fsuid, cred->fsgid, cred,
- perm, flags);
+ key = key_alloc(index_key.type, index_key.description,
+ cred->fsuid, cred->fsgid, cred, perm, flags);
if (IS_ERR(key)) {
key_ref = ERR_CAST(key);
goto error_link_end;
}
/* instantiate it and link it into the target keyring */
- ret = __key_instantiate_and_link(key, &prep, keyring, NULL, &prealloc);
+ ret = __key_instantiate_and_link(key, &prep, keyring, NULL, &edit);
if (ret < 0) {
key_put(key);
key_ref = ERR_PTR(ret);
key_ref = make_key_ref(key, is_key_possessed(keyring_ref));
error_link_end:
- __key_link_end(keyring, ktype, prealloc);
+ __key_link_end(keyring, &index_key, edit);
error_free_prep:
- if (ktype->preparse)
- ktype->free_preparse(&prep);
+ if (index_key.type->preparse)
+ index_key.type->free_preparse(&prep);
error_put_type:
- key_type_put(ktype);
+ key_type_put(index_key.type);
error:
return key_ref;
/* we found a matching key, so we're going to try to update it
* - we can drop the locks first as we have the key pinned
*/
- __key_link_end(keyring, ktype, prealloc);
+ __key_link_end(keyring, &index_key, edit);
key_ref = __key_update(key_ref, &prep);
goto error_free_prep;
case KEYCTL_INVALIDATE:
return keyctl_invalidate_key((key_serial_t) arg2);
+ case KEYCTL_GET_PERSISTENT:
+ return keyctl_get_persistent((uid_t)arg2, (key_serial_t)arg3);
+
default:
return -EOPNOTSUPP;
}
/* Keyring handling
*
- * Copyright (C) 2004-2005, 2008 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2004-2005, 2008, 2013 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
#include <linux/seq_file.h>
#include <linux/err.h>
#include <keys/keyring-type.h>
+#include <keys/user-type.h>
+#include <linux/assoc_array_priv.h>
#include <linux/uaccess.h>
#include "internal.h"
-#define rcu_dereference_locked_keyring(keyring) \
- (rcu_dereference_protected( \
- (keyring)->payload.subscriptions, \
- rwsem_is_locked((struct rw_semaphore *)&(keyring)->sem)))
-
-#define rcu_deref_link_locked(klist, index, keyring) \
- (rcu_dereference_protected( \
- (klist)->keys[index], \
- rwsem_is_locked((struct rw_semaphore *)&(keyring)->sem)))
-
-#define MAX_KEYRING_LINKS \
- min_t(size_t, USHRT_MAX - 1, \
- ((PAGE_SIZE - sizeof(struct keyring_list)) / sizeof(struct key *)))
-
-#define KEY_LINK_FIXQUOTA 1UL
-
/*
 * When plumbing the depths of the key tree, this sets a hard limit
 * on how deep we're willing to go.
*/
#define KEYRING_NAME_HASH_SIZE (1 << 5)
+/*
+ * We mark pointers we pass to the associative array with bit 1 set if
+ * they're keyrings and clear otherwise.
+ */
+#define KEYRING_PTR_SUBTYPE 0x2UL
+
+static inline bool keyring_ptr_is_keyring(const struct assoc_array_ptr *x)
+{
+ return (unsigned long)x & KEYRING_PTR_SUBTYPE;
+}
+static inline struct key *keyring_ptr_to_key(const struct assoc_array_ptr *x)
+{
+ void *object = assoc_array_ptr_to_leaf(x);
+ return (struct key *)((unsigned long)object & ~KEYRING_PTR_SUBTYPE);
+}
+static inline void *keyring_key_to_ptr(struct key *key)
+{
+ if (key->type == &key_type_keyring)
+ return (void *)((unsigned long)key | KEYRING_PTR_SUBTYPE);
+ return key;
+}
+
static struct list_head keyring_name_hash[KEYRING_NAME_HASH_SIZE];
static DEFINE_RWLOCK(keyring_name_lock);
*/
static int keyring_instantiate(struct key *keyring,
struct key_preparsed_payload *prep);
-static int keyring_match(const struct key *keyring, const void *criterion);
static void keyring_revoke(struct key *keyring);
static void keyring_destroy(struct key *keyring);
static void keyring_describe(const struct key *keyring, struct seq_file *m);
struct key_type key_type_keyring = {
.name = "keyring",
- .def_datalen = sizeof(struct keyring_list),
+ .def_datalen = 0,
.instantiate = keyring_instantiate,
- .match = keyring_match,
+ .match = user_match,
.revoke = keyring_revoke,
.destroy = keyring_destroy,
.describe = keyring_describe,
ret = -EINVAL;
if (prep->datalen == 0) {
+ assoc_array_init(&keyring->keys);
/* make the keyring available by name if it has one */
keyring_publish_name(keyring);
ret = 0;
}
/*
- * Match keyrings on their name
+ * Multiply 64 bits by 32 bits to 96 bits and fold back to 64 bits. Ideally
+ * we'd fold the carry back in too, but that requires inline asm.
+ */
+static u64 mult_64x32_and_fold(u64 x, u32 y)
+{
+ u64 hi = (u64)(u32)(x >> 32) * y;
+ u64 lo = (u64)(u32)(x) * y;
+ return lo + ((u64)(u32)hi << 32) + (u32)(hi >> 32);
+}
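To see what the fold is doing, a userspace check along these lines
(illustrative only, not part of the patch; it assumes a compiler with
unsigned __int128) reconstructs the 96-bit product and the carry the
comment refers to:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t x = 0xdeadbeefcafebabeULL;
		uint32_t y = 9207;
		uint64_t hi = (uint64_t)(uint32_t)(x >> 32) * y;
		uint64_t lo = (uint64_t)(uint32_t)x * y;
		unsigned __int128 p = (unsigned __int128)x * y;
		uint64_t low = lo + ((uint64_t)(uint32_t)hi << 32);

		/* hi:lo really is the full 96-bit product x * y */
		assert(((unsigned __int128)hi << 32) + lo == p);

		/* bits 64..95 of the product equal hi >> 32 plus the carry
		 * out of the low-half addition - the carry that the kernel
		 * comment says it would ideally fold back in as well */
		assert((uint64_t)(p >> 64) == (hi >> 32) + (low < lo));
		return 0;
	}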
+
+/*
+ * Hash a key type and description.
+ */
+static unsigned long hash_key_type_and_desc(const struct keyring_index_key *index_key)
+{
+ const unsigned level_shift = ASSOC_ARRAY_LEVEL_STEP;
+ const unsigned long fan_mask = ASSOC_ARRAY_FAN_MASK;
+ const char *description = index_key->description;
+ unsigned long hash, type;
+ u32 piece;
+ u64 acc;
+ int n, desc_len = index_key->desc_len;
+
+ type = (unsigned long)index_key->type;
+
+ acc = mult_64x32_and_fold(type, desc_len + 13);
+ acc = mult_64x32_and_fold(acc, 9207);
+ for (;;) {
+ n = desc_len;
+ if (n <= 0)
+ break;
+ if (n > 4)
+ n = 4;
+ piece = 0;
+ memcpy(&piece, description, n);
+ description += n;
+ desc_len -= n;
+ acc = mult_64x32_and_fold(acc, piece);
+ acc = mult_64x32_and_fold(acc, 9207);
+ }
+
+ /* Fold the hash down to 32 bits if need be. */
+ hash = acc;
+ if (ASSOC_ARRAY_KEY_CHUNK_SIZE == 32)
+ hash ^= acc >> 32;
+
+	/* Squidge all the keyrings into a separate part of the tree from
+	 * ordinary keys by making sure the lowest level segment in the hash is
+ * zero for keyrings and non-zero otherwise.
+ */
+ if (index_key->type != &key_type_keyring && (hash & fan_mask) == 0)
+ return hash | (hash >> (ASSOC_ARRAY_KEY_CHUNK_SIZE - level_shift)) | 1;
+ if (index_key->type == &key_type_keyring && (hash & fan_mask) != 0)
+ return (hash + (hash << level_shift)) & ~fan_mask;
+ return hash;
+}
+
+/*
+ * Build the next index key chunk.
+ *
+ * On 32-bit systems the index key is laid out as:
+ *
+ * 0 4 5 9...
+ * hash desclen typeptr desc[]
+ *
+ * On 64-bit systems:
+ *
+ * 0 8 9 17...
+ * hash desclen typeptr desc[]
+ *
+ * We return it one word-sized chunk at a time.
*/
-static int keyring_match(const struct key *keyring, const void *description)
+static unsigned long keyring_get_key_chunk(const void *data, int level)
+{
+ const struct keyring_index_key *index_key = data;
+ unsigned long chunk = 0;
+ long offset = 0;
+ int desc_len = index_key->desc_len, n = sizeof(chunk);
+
+ level /= ASSOC_ARRAY_KEY_CHUNK_SIZE;
+ switch (level) {
+ case 0:
+ return hash_key_type_and_desc(index_key);
+ case 1:
+ return ((unsigned long)index_key->type << 8) | desc_len;
+ case 2:
+ if (desc_len == 0)
+ return (u8)((unsigned long)index_key->type >>
+ (ASSOC_ARRAY_KEY_CHUNK_SIZE - 8));
+ n--;
+ offset = 1;
+		/* fall through */
+	default:
+ offset += sizeof(chunk) - 1;
+ offset += (level - 3) * sizeof(chunk);
+ if (offset >= desc_len)
+ return 0;
+ desc_len -= offset;
+ if (desc_len > n)
+ desc_len = n;
+ offset += desc_len;
+ do {
+ chunk <<= 8;
+ chunk |= ((u8*)index_key->description)[--offset];
+ } while (--desc_len > 0);
+
+ if (level == 2) {
+ chunk <<= 8;
+ chunk |= (u8)((unsigned long)index_key->type >>
+ (ASSOC_ARRAY_KEY_CHUNK_SIZE - 8));
+ }
+ return chunk;
+ }
+}
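As a concrete illustration (a sketch under assumptions, not from the patch):
on a 64-bit little-endian box, for a key whose type pointer is T and whose
description is "foo" (desc_len 3), the chunks come out as

	level 0: hash_key_type_and_desc() of T and "foo"
	level 1: ((unsigned long)T << 8) | 3
	level 2: 'o' << 24 | 'o' << 16 | 'f' << 8 | (u8)((unsigned long)T >> 56)
	level 3+: 0

that is, the hash, then the type pointer and length, then the description
bytes packed little-endian with the type's top byte tucked underneath.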
+
+static unsigned long keyring_get_object_key_chunk(const void *object, int level)
+{
+ const struct key *key = keyring_ptr_to_key(object);
+ return keyring_get_key_chunk(&key->index_key, level);
+}
+
+static bool keyring_compare_object(const void *object, const void *data)
{
- return keyring->description &&
- strcmp(keyring->description, description) == 0;
+ const struct keyring_index_key *index_key = data;
+ const struct key *key = keyring_ptr_to_key(object);
+
+ return key->index_key.type == index_key->type &&
+ key->index_key.desc_len == index_key->desc_len &&
+ memcmp(key->index_key.description, index_key->description,
+ index_key->desc_len) == 0;
}
+/*
+ * Compare the index keys of a pair of objects and determine the bit position
+ * at which they differ - if they differ.
+ */
+static int keyring_diff_objects(const void *object, const void *data)
+{
+ const struct key *key_a = keyring_ptr_to_key(object);
+ const struct keyring_index_key *a = &key_a->index_key;
+ const struct keyring_index_key *b = data;
+ unsigned long seg_a, seg_b;
+ int level, i;
+
+ level = 0;
+ seg_a = hash_key_type_and_desc(a);
+ seg_b = hash_key_type_and_desc(b);
+ if ((seg_a ^ seg_b) != 0)
+ goto differ;
+
+ /* The number of bits contributed by the hash is controlled by a
+ * constant in the assoc_array headers. Everything else thereafter we
+ * can deal with as being machine word-size dependent.
+ */
+ level += ASSOC_ARRAY_KEY_CHUNK_SIZE / 8;
+ seg_a = a->desc_len;
+ seg_b = b->desc_len;
+ if ((seg_a ^ seg_b) != 0)
+ goto differ;
+
+ /* The next bit may not work on big endian */
+ level++;
+ seg_a = (unsigned long)a->type;
+ seg_b = (unsigned long)b->type;
+ if ((seg_a ^ seg_b) != 0)
+ goto differ;
+
+ level += sizeof(unsigned long);
+ if (a->desc_len == 0)
+ goto same;
+
+	i = 0;
+	/* Compare word-at-a-time only when both descriptions are
+	 * word-aligned; the byte loop below mops up any trailing bytes.
+	 */
+	if (!(((unsigned long)a->description | (unsigned long)b->description) &
+	      (sizeof(unsigned long) - 1))) {
+		while (i < (a->desc_len & ~(sizeof(unsigned long) - 1))) {
+			seg_a = *(unsigned long *)(a->description + i);
+			seg_b = *(unsigned long *)(b->description + i);
+			if ((seg_a ^ seg_b) != 0)
+				goto differ_plus_i;
+			i += sizeof(unsigned long);
+		}
+	}
+
+ for (; i < a->desc_len; i++) {
+ seg_a = *(unsigned char *)(a->description + i);
+ seg_b = *(unsigned char *)(b->description + i);
+ if ((seg_a ^ seg_b) != 0)
+ goto differ_plus_i;
+ }
+
+same:
+ return -1;
+
+differ_plus_i:
+ level += i;
+differ:
+ i = level * 8 + __ffs(seg_a ^ seg_b);
+ return i;
+}
+
+/*
+ * Free an object after stripping the keyring flag off of the pointer.
+ */
+static void keyring_free_object(void *object)
+{
+ key_put(keyring_ptr_to_key(object));
+}
+
+/*
+ * Operations for keyring management by the index-tree routines.
+ */
+static const struct assoc_array_ops keyring_assoc_array_ops = {
+ .get_key_chunk = keyring_get_key_chunk,
+ .get_object_key_chunk = keyring_get_object_key_chunk,
+ .compare_object = keyring_compare_object,
+ .diff_objects = keyring_diff_objects,
+ .free_object = keyring_free_object,
+};
+
/*
* Clean up a keyring when it is destroyed. Unpublish its name if it had one
* and dispose of its data.
*/
static void keyring_destroy(struct key *keyring)
{
- struct keyring_list *klist;
- int loop;
-
if (keyring->description) {
write_lock(&keyring_name_lock);
write_unlock(&keyring_name_lock);
}
- klist = rcu_access_pointer(keyring->payload.subscriptions);
- if (klist) {
- for (loop = klist->nkeys - 1; loop >= 0; loop--)
- key_put(rcu_access_pointer(klist->keys[loop]));
- kfree(klist);
- }
+ assoc_array_destroy(&keyring->keys, &keyring_assoc_array_ops);
}
/*
*/
static void keyring_describe(const struct key *keyring, struct seq_file *m)
{
- struct keyring_list *klist;
-
if (keyring->description)
seq_puts(m, keyring->description);
else
seq_puts(m, "[anon]");
if (key_is_instantiated(keyring)) {
- rcu_read_lock();
- klist = rcu_dereference(keyring->payload.subscriptions);
- if (klist)
- seq_printf(m, ": %u/%u", klist->nkeys, klist->maxkeys);
+ if (keyring->keys.nr_leaves_on_tree != 0)
+ seq_printf(m, ": %lu", keyring->keys.nr_leaves_on_tree);
else
seq_puts(m, ": empty");
- rcu_read_unlock();
}
}
+struct keyring_read_iterator_context {
+ size_t qty;
+ size_t count;
+ key_serial_t __user *buffer;
+};
+
+static int keyring_read_iterator(const void *object, void *data)
+{
+ struct keyring_read_iterator_context *ctx = data;
+ const struct key *key = keyring_ptr_to_key(object);
+ int ret;
+
+ kenter("{%s,%d},,{%zu/%zu}",
+ key->type->name, key->serial, ctx->count, ctx->qty);
+
+ if (ctx->count >= ctx->qty)
+ return 1;
+
+ ret = put_user(key->serial, ctx->buffer);
+ if (ret < 0)
+ return ret;
+ ctx->buffer++;
+ ctx->count += sizeof(key->serial);
+ return 0;
+}
+
/*
* Read a list of key IDs from the keyring's contents in binary form
*
- * The keyring's semaphore is read-locked by the caller.
+ * The keyring's semaphore is read-locked by the caller. This prevents someone
+ * from modifying it under us - which could cause us to read key IDs multiple
+ * times.
*/
static long keyring_read(const struct key *keyring,
char __user *buffer, size_t buflen)
{
- struct keyring_list *klist;
- struct key *key;
- size_t qty, tmp;
- int loop, ret;
+ struct keyring_read_iterator_context ctx;
+ unsigned long nr_keys;
+ int ret;
- ret = 0;
- klist = rcu_dereference_locked_keyring(keyring);
- if (klist) {
- /* calculate how much data we could return */
- qty = klist->nkeys * sizeof(key_serial_t);
-
- if (buffer && buflen > 0) {
- if (buflen > qty)
- buflen = qty;
-
- /* copy the IDs of the subscribed keys into the
- * buffer */
- ret = -EFAULT;
-
- for (loop = 0; loop < klist->nkeys; loop++) {
- key = rcu_deref_link_locked(klist, loop,
- keyring);
-
- tmp = sizeof(key_serial_t);
- if (tmp > buflen)
- tmp = buflen;
-
- if (copy_to_user(buffer,
- &key->serial,
- tmp) != 0)
- goto error;
-
- buflen -= tmp;
- if (buflen == 0)
- break;
- buffer += tmp;
- }
- }
+ kenter("{%d},,%zu", key_serial(keyring), buflen);
+
+ if (buflen & (sizeof(key_serial_t) - 1))
+ return -EINVAL;
+
+ nr_keys = keyring->keys.nr_leaves_on_tree;
+ if (nr_keys == 0)
+ return 0;
- ret = qty;
+ /* Calculate how much data we could return */
+ ctx.qty = nr_keys * sizeof(key_serial_t);
+
+ if (!buffer || !buflen)
+ return ctx.qty;
+
+	if (buflen < ctx.qty)
+		ctx.qty = buflen;
+
+ /* Copy the IDs of the subscribed keys into the buffer */
+ ctx.buffer = (key_serial_t __user *)buffer;
+ ctx.count = 0;
+ ret = assoc_array_iterate(&keyring->keys, keyring_read_iterator, &ctx);
+ if (ret < 0) {
+ kleave(" = %d [iterate]", ret);
+ return ret;
}
-error:
- return ret;
+ kleave(" = %zu [ok]", ctx.count);
+ return ctx.count;
}
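The userspace ABI is unchanged by this: reading a keyring still yields a
packed array of key_serial_t. An illustrative consumer (not part of the
patch; assumes libkeyutils):

	#include <keyutils.h>
	#include <stdio.h>

	int main(void)
	{
		key_serial_t ids[64];
		long i, n;

		/* Returns the total bytes available; at most sizeof(ids)
		 * of them were actually copied into the buffer */
		n = keyctl_read(KEY_SPEC_SESSION_KEYRING,
				(char *)ids, sizeof(ids));
		if (n < 0) {
			perror("keyctl_read");
			return 1;
		}
		if (n > (long)sizeof(ids))
			n = sizeof(ids);
		for (i = 0; i < n / (long)sizeof(key_serial_t); i++)
			printf("key %d\n", ids[i]);
		return 0;
	}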
/*
}
EXPORT_SYMBOL(keyring_alloc);
-/**
- * keyring_search_aux - Search a keyring tree for a key matching some criteria
- * @keyring_ref: A pointer to the keyring with possession indicator.
- * @cred: The credentials to use for permissions checks.
- * @type: The type of key to search for.
- * @description: Parameter for @match.
- * @match: Function to rule on whether or not a key is the one required.
- * @no_state_check: Don't check if a matching key is bad
- *
- * Search the supplied keyring tree for a key that matches the criteria given.
- * The root keyring and any linked keyrings must grant Search permission to the
- * caller to be searchable and keys can only be found if they too grant Search
- * to the caller. The possession flag on the root keyring pointer controls use
- * of the possessor bits in permissions checking of the entire tree. In
- * addition, the LSM gets to forbid keyring searches and key matches.
- *
- * The search is performed as a breadth-then-depth search up to the prescribed
- * limit (KEYRING_SEARCH_MAX_DEPTH).
- *
- * Keys are matched to the type provided and are then filtered by the match
- * function, which is given the description to use in any way it sees fit. The
- * match function may use any attributes of a key that it wishes to to
- * determine the match. Normally the match function from the key type would be
- * used.
- *
- * RCU is used to prevent the keyring key lists from disappearing without the
- * need to take lots of locks.
- *
- * Returns a pointer to the found key and increments the key usage count if
- * successful; -EAGAIN if no matching keys were found, or if expired or revoked
- * keys were found; -ENOKEY if only negative keys were found; -ENOTDIR if the
- * specified keyring wasn't a keyring.
- *
- * In the case of a successful return, the possession attribute from
- * @keyring_ref is propagated to the returned key reference.
+/*
+ * Iteration function to consider each key found.
*/
-key_ref_t keyring_search_aux(key_ref_t keyring_ref,
- const struct cred *cred,
- struct key_type *type,
- const void *description,
- key_match_func_t match,
- bool no_state_check)
+static int keyring_search_iterator(const void *object, void *iterator_data)
{
- struct {
- /* Need a separate keylist pointer for RCU purposes */
- struct key *keyring;
- struct keyring_list *keylist;
- int kix;
- } stack[KEYRING_SEARCH_MAX_DEPTH];
-
- struct keyring_list *keylist;
- struct timespec now;
- unsigned long possessed, kflags;
- struct key *keyring, *key;
- key_ref_t key_ref;
- long err;
- int sp, nkeys, kix;
+ struct keyring_search_context *ctx = iterator_data;
+ const struct key *key = keyring_ptr_to_key(object);
+ unsigned long kflags = key->flags;
- keyring = key_ref_to_ptr(keyring_ref);
- possessed = is_key_possessed(keyring_ref);
- key_check(keyring);
+ kenter("{%d}", key->serial);
- /* top keyring must have search permission to begin the search */
- err = key_task_permission(keyring_ref, cred, KEY_SEARCH);
- if (err < 0) {
- key_ref = ERR_PTR(err);
- goto error;
+ /* ignore keys not of this type */
+ if (key->type != ctx->index_key.type) {
+ kleave(" = 0 [!type]");
+ return 0;
}
- key_ref = ERR_PTR(-ENOTDIR);
- if (keyring->type != &key_type_keyring)
- goto error;
+ /* skip invalidated, revoked and expired keys */
+ if (ctx->flags & KEYRING_SEARCH_DO_STATE_CHECK) {
+ if (kflags & ((1 << KEY_FLAG_INVALIDATED) |
+ (1 << KEY_FLAG_REVOKED))) {
+ ctx->result = ERR_PTR(-EKEYREVOKED);
+ kleave(" = %d [invrev]", ctx->skipped_ret);
+ goto skipped;
+ }
- rcu_read_lock();
+ if (key->expiry && ctx->now.tv_sec >= key->expiry) {
+ ctx->result = ERR_PTR(-EKEYEXPIRED);
+ kleave(" = %d [expire]", ctx->skipped_ret);
+ goto skipped;
+ }
+ }
- now = current_kernel_time();
- err = -EAGAIN;
- sp = 0;
-
- /* firstly we should check to see if this top-level keyring is what we
- * are looking for */
- key_ref = ERR_PTR(-EAGAIN);
- kflags = keyring->flags;
- if (keyring->type == type && match(keyring, description)) {
- key = keyring;
- if (no_state_check)
- goto found;
+ /* keys that don't match */
+ if (!ctx->match(key, ctx->match_data)) {
+ kleave(" = 0 [!match]");
+ return 0;
+ }
- /* check it isn't negative and hasn't expired or been
- * revoked */
- if (kflags & (1 << KEY_FLAG_REVOKED))
- goto error_2;
- if (key->expiry && now.tv_sec >= key->expiry)
- goto error_2;
- key_ref = ERR_PTR(key->type_data.reject_error);
- if (kflags & (1 << KEY_FLAG_NEGATIVE))
- goto error_2;
- goto found;
+ /* key must have search permissions */
+ if (!(ctx->flags & KEYRING_SEARCH_NO_CHECK_PERM) &&
+ key_task_permission(make_key_ref(key, ctx->possessed),
+ ctx->cred, KEY_SEARCH) < 0) {
+ ctx->result = ERR_PTR(-EACCES);
+ kleave(" = %d [!perm]", ctx->skipped_ret);
+ goto skipped;
}
- /* otherwise, the top keyring must not be revoked, expired, or
- * negatively instantiated if we are to search it */
- key_ref = ERR_PTR(-EAGAIN);
- if (kflags & ((1 << KEY_FLAG_INVALIDATED) |
- (1 << KEY_FLAG_REVOKED) |
- (1 << KEY_FLAG_NEGATIVE)) ||
- (keyring->expiry && now.tv_sec >= keyring->expiry))
- goto error_2;
-
- /* start processing a new keyring */
-descend:
- kflags = keyring->flags;
- if (kflags & ((1 << KEY_FLAG_INVALIDATED) |
- (1 << KEY_FLAG_REVOKED)))
- goto not_this_keyring;
+ if (ctx->flags & KEYRING_SEARCH_DO_STATE_CHECK) {
+ /* we set a different error code if we pass a negative key */
+ if (kflags & (1 << KEY_FLAG_NEGATIVE)) {
+ smp_rmb();
+ ctx->result = ERR_PTR(key->type_data.reject_error);
+ kleave(" = %d [neg]", ctx->skipped_ret);
+ goto skipped;
+ }
+ }
- keylist = rcu_dereference(keyring->payload.subscriptions);
- if (!keylist)
- goto not_this_keyring;
+ /* Found */
+ ctx->result = make_key_ref(key, ctx->possessed);
+ kleave(" = 1 [found]");
+ return 1;
- /* iterate through the keys in this keyring first */
- nkeys = keylist->nkeys;
- smp_rmb();
- for (kix = 0; kix < nkeys; kix++) {
- key = rcu_dereference(keylist->keys[kix]);
- kflags = key->flags;
+skipped:
+ return ctx->skipped_ret;
+}
- /* ignore keys not of this type */
- if (key->type != type)
- continue;
+/*
+ * Search inside a keyring for a key. We can search by walking to it
+ * directly based on its index-key or we can iterate over the entire
+ * tree looking for it, based on the match function.
+ */
+static int search_keyring(struct key *keyring, struct keyring_search_context *ctx)
+{
+ if ((ctx->flags & KEYRING_SEARCH_LOOKUP_TYPE) ==
+ KEYRING_SEARCH_LOOKUP_DIRECT) {
+ const void *object;
+
+ object = assoc_array_find(&keyring->keys,
+ &keyring_assoc_array_ops,
+ &ctx->index_key);
+ return object ? ctx->iterator(object, ctx) : 0;
+ }
+ return assoc_array_iterate(&keyring->keys, ctx->iterator, ctx);
+}
- /* skip invalidated, revoked and expired keys */
- if (!no_state_check) {
- if (kflags & ((1 << KEY_FLAG_INVALIDATED) |
- (1 << KEY_FLAG_REVOKED)))
- continue;
+/*
+ * Search a tree of keyrings that point to other keyrings up to the maximum
+ * depth.
+ */
+static bool search_nested_keyrings(struct key *keyring,
+ struct keyring_search_context *ctx)
+{
+ struct {
+ struct key *keyring;
+ struct assoc_array_node *node;
+ int slot;
+ } stack[KEYRING_SEARCH_MAX_DEPTH];
- if (key->expiry && now.tv_sec >= key->expiry)
- continue;
- }
+ struct assoc_array_shortcut *shortcut;
+ struct assoc_array_node *node;
+ struct assoc_array_ptr *ptr;
+ struct key *key;
+ int sp = 0, slot;
- /* keys that don't match */
- if (!match(key, description))
- continue;
+ kenter("{%d},{%s,%s}",
+ keyring->serial,
+ ctx->index_key.type->name,
+ ctx->index_key.description);
- /* key must have search permissions */
- if (key_task_permission(make_key_ref(key, possessed),
- cred, KEY_SEARCH) < 0)
- continue;
+ if (ctx->index_key.description)
+ ctx->index_key.desc_len = strlen(ctx->index_key.description);
- if (no_state_check)
+ /* Check to see if this top-level keyring is what we are looking for
+ * and whether it is valid or not.
+ */
+ if (ctx->flags & KEYRING_SEARCH_LOOKUP_ITERATE ||
+ keyring_compare_object(keyring, &ctx->index_key)) {
+ ctx->skipped_ret = 2;
+ ctx->flags |= KEYRING_SEARCH_DO_STATE_CHECK;
+ switch (ctx->iterator(keyring_key_to_ptr(keyring), ctx)) {
+ case 1:
goto found;
-
- /* we set a different error code if we pass a negative key */
- if (kflags & (1 << KEY_FLAG_NEGATIVE)) {
- err = key->type_data.reject_error;
- continue;
+ case 2:
+ return false;
+ default:
+ break;
}
+ }
+
+ ctx->skipped_ret = 0;
+ if (ctx->flags & KEYRING_SEARCH_NO_STATE_CHECK)
+ ctx->flags &= ~KEYRING_SEARCH_DO_STATE_CHECK;
+ /* Start processing a new keyring */
+descend_to_keyring:
+ kdebug("descend to %d", keyring->serial);
+ if (keyring->flags & ((1 << KEY_FLAG_INVALIDATED) |
+ (1 << KEY_FLAG_REVOKED)))
+ goto not_this_keyring;
+
+	/* Search through the keys in this keyring before searching its
+ * subtrees.
+ */
+ if (search_keyring(keyring, ctx))
goto found;
- }
- /* search through the keyrings nested in this one */
- kix = 0;
-ascend:
- nkeys = keylist->nkeys;
- smp_rmb();
- for (; kix < nkeys; kix++) {
- key = rcu_dereference(keylist->keys[kix]);
- if (key->type != &key_type_keyring)
- continue;
+ /* Then manually iterate through the keyrings nested in this one.
+ *
+ * Start from the root node of the index tree. Because of the way the
+ * hash function has been set up, keyrings cluster on the leftmost
+ * branch of the root node (root slot 0) or in the root node itself.
+ * Non-keyrings avoid the leftmost branch of the root entirely (root
+ * slots 1-15).
+ */
+ ptr = ACCESS_ONCE(keyring->keys.root);
+ if (!ptr)
+ goto not_this_keyring;
- /* recursively search nested keyrings
- * - only search keyrings for which we have search permission
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ /* If the root is a shortcut, either the keyring only contains
+ * keyring pointers (everything clusters behind root slot 0) or
+ * doesn't contain any keyring pointers.
*/
- if (sp >= KEYRING_SEARCH_MAX_DEPTH)
+ shortcut = assoc_array_ptr_to_shortcut(ptr);
+ smp_read_barrier_depends();
+ if ((shortcut->index_key[0] & ASSOC_ARRAY_FAN_MASK) != 0)
+ goto not_this_keyring;
+
+ ptr = ACCESS_ONCE(shortcut->next_node);
+ node = assoc_array_ptr_to_node(ptr);
+ goto begin_node;
+ }
+
+ node = assoc_array_ptr_to_node(ptr);
+ smp_read_barrier_depends();
+
+ ptr = node->slots[0];
+ if (!assoc_array_ptr_is_meta(ptr))
+ goto begin_node;
+
+descend_to_node:
+ /* Descend to a more distal node in this keyring's content tree and go
+ * through that.
+ */
+ kdebug("descend");
+ if (assoc_array_ptr_is_shortcut(ptr)) {
+ shortcut = assoc_array_ptr_to_shortcut(ptr);
+ smp_read_barrier_depends();
+ ptr = ACCESS_ONCE(shortcut->next_node);
+ BUG_ON(!assoc_array_ptr_is_node(ptr));
+ }
+ node = assoc_array_ptr_to_node(ptr);
+
+begin_node:
+ kdebug("begin_node");
+ smp_read_barrier_depends();
+ slot = 0;
+ascend_to_node:
+ /* Go through the slots in a node */
+ for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ ptr = ACCESS_ONCE(node->slots[slot]);
+
+ if (assoc_array_ptr_is_meta(ptr) && node->back_pointer)
+ goto descend_to_node;
+
+ if (!keyring_ptr_is_keyring(ptr))
continue;
- if (key_task_permission(make_key_ref(key, possessed),
- cred, KEY_SEARCH) < 0)
+ key = keyring_ptr_to_key(ptr);
+
+ if (sp >= KEYRING_SEARCH_MAX_DEPTH) {
+ if (ctx->flags & KEYRING_SEARCH_DETECT_TOO_DEEP) {
+ ctx->result = ERR_PTR(-ELOOP);
+ return false;
+ }
+ goto not_this_keyring;
+ }
+
+ /* Search a nested keyring */
+ if (!(ctx->flags & KEYRING_SEARCH_NO_CHECK_PERM) &&
+ key_task_permission(make_key_ref(key, ctx->possessed),
+ ctx->cred, KEY_SEARCH) < 0)
continue;
/* stack the current position */
stack[sp].keyring = keyring;
- stack[sp].keylist = keylist;
- stack[sp].kix = kix;
+ stack[sp].node = node;
+ stack[sp].slot = slot;
sp++;
/* begin again with the new keyring */
keyring = key;
- goto descend;
+ goto descend_to_keyring;
}
- /* the keyring we're looking at was disqualified or didn't contain a
- * matching key */
+ /* We've dealt with all the slots in the current node, so now we need
+ * to ascend to the parent and continue processing there.
+ */
+ ptr = ACCESS_ONCE(node->back_pointer);
+ slot = node->parent_slot;
+
+ if (ptr && assoc_array_ptr_is_shortcut(ptr)) {
+ shortcut = assoc_array_ptr_to_shortcut(ptr);
+ smp_read_barrier_depends();
+ ptr = ACCESS_ONCE(shortcut->back_pointer);
+ slot = shortcut->parent_slot;
+ }
+ if (!ptr)
+ goto not_this_keyring;
+ node = assoc_array_ptr_to_node(ptr);
+ smp_read_barrier_depends();
+ slot++;
+
+ /* If we've ascended to the root (zero backpointer), we must have just
+ * finished processing the leftmost branch rather than the root slots -
+ * so there can't be any more keyrings for us to find.
+ */
+ if (node->back_pointer) {
+ kdebug("ascend %d", slot);
+ goto ascend_to_node;
+ }
+
+ /* The keyring we're looking at was disqualified or didn't contain a
+ * matching key.
+ */
not_this_keyring:
- if (sp > 0) {
- /* resume the processing of a keyring higher up in the tree */
- sp--;
- keyring = stack[sp].keyring;
- keylist = stack[sp].keylist;
- kix = stack[sp].kix + 1;
- goto ascend;
+ kdebug("not_this_keyring %d", sp);
+ if (sp <= 0) {
+ kleave(" = false");
+ return false;
}
- key_ref = ERR_PTR(err);
- goto error_2;
+ /* Resume the processing of a keyring higher up in the tree */
+ sp--;
+ keyring = stack[sp].keyring;
+ node = stack[sp].node;
+ slot = stack[sp].slot + 1;
+ kdebug("ascend to %d [%d]", keyring->serial, slot);
+ goto ascend_to_node;
- /* we found a viable match */
+ /* We found a viable match */
found:
- atomic_inc(&key->usage);
- key->last_used_at = now.tv_sec;
- keyring->last_used_at = now.tv_sec;
- while (sp > 0)
- stack[--sp].keyring->last_used_at = now.tv_sec;
+ key = key_ref_to_ptr(ctx->result);
key_check(key);
- key_ref = make_key_ref(key, possessed);
-error_2:
+ if (!(ctx->flags & KEYRING_SEARCH_NO_UPDATE_TIME)) {
+ key->last_used_at = ctx->now.tv_sec;
+ keyring->last_used_at = ctx->now.tv_sec;
+ while (sp > 0)
+ stack[--sp].keyring->last_used_at = ctx->now.tv_sec;
+ }
+ kleave(" = true");
+ return true;
+}
+
+/**
+ * keyring_search_aux - Search a keyring tree for a key matching some criteria
+ * @keyring_ref: A pointer to the keyring with possession indicator.
+ * @ctx: The keyring search context.
+ *
+ * Search the supplied keyring tree for a key that matches the criteria given.
+ * The root keyring and any linked keyrings must grant Search permission to the
+ * caller to be searchable and keys can only be found if they too grant Search
+ * to the caller. The possession flag on the root keyring pointer controls use
+ * of the possessor bits in permissions checking of the entire tree. In
+ * addition, the LSM gets to forbid keyring searches and key matches.
+ *
+ * The search is performed as a breadth-then-depth search up to the prescribed
+ * limit (KEYRING_SEARCH_MAX_DEPTH).
+ *
+ * Keys are matched to the type provided and are then filtered by the match
+ * function, which is given the description to use in any way it sees fit. The
+ * match function may use any attributes of a key that it wishes to to
+ * determine the match. Normally the match function from the key type would be
+ * used.
+ *
+ * RCU can be used to prevent the keyring key lists from disappearing without
+ * the need to take lots of locks.
+ *
+ * Returns a pointer to the found key and increments the key usage count if
+ * successful; -EAGAIN if no matching keys were found, or if expired or revoked
+ * keys were found; -ENOKEY if only negative keys were found; -ENOTDIR if the
+ * specified keyring wasn't a keyring.
+ *
+ * In the case of a successful return, the possession attribute from
+ * @keyring_ref is propagated to the returned key reference.
+ */
+key_ref_t keyring_search_aux(key_ref_t keyring_ref,
+ struct keyring_search_context *ctx)
+{
+ struct key *keyring;
+ long err;
+
+ ctx->iterator = keyring_search_iterator;
+ ctx->possessed = is_key_possessed(keyring_ref);
+ ctx->result = ERR_PTR(-EAGAIN);
+
+ keyring = key_ref_to_ptr(keyring_ref);
+ key_check(keyring);
+
+ if (keyring->type != &key_type_keyring)
+ return ERR_PTR(-ENOTDIR);
+
+ if (!(ctx->flags & KEYRING_SEARCH_NO_CHECK_PERM)) {
+ err = key_task_permission(keyring_ref, ctx->cred, KEY_SEARCH);
+ if (err < 0)
+ return ERR_PTR(err);
+ }
+
+ rcu_read_lock();
+ ctx->now = current_kernel_time();
+ if (search_nested_keyrings(keyring, ctx))
+ __key_get(key_ref_to_ptr(ctx->result));
rcu_read_unlock();
-error:
- return key_ref;
+ return ctx->result;
}
/**
* @description: The name of the keyring we want to find.
*
* As keyring_search_aux() above, but using the current task's credentials and
- * type's default matching function.
+ * type's default matching function and preferred search method.
*/
key_ref_t keyring_search(key_ref_t keyring,
struct key_type *type,
const char *description)
{
- if (!type->match)
+ struct keyring_search_context ctx = {
+ .index_key.type = type,
+ .index_key.description = description,
+ .cred = current_cred(),
+ .match = type->match,
+ .match_data = description,
+ .flags = (type->def_lookup_type |
+ KEYRING_SEARCH_DO_STATE_CHECK),
+ };
+
+ if (!ctx.match)
return ERR_PTR(-ENOKEY);
- return keyring_search_aux(keyring, current->cred,
- type, description, type->match, false);
+ return keyring_search_aux(keyring, &ctx);
}
EXPORT_SYMBOL(keyring_search);
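In-kernel callers are unaffected by the rewrite; a typical lookup still
reads like the fragment below (an illustrative sketch with a hypothetical
description "example"; the caller is assumed to hold a reference on
keyring):

	key_ref_t kref;

	kref = keyring_search(make_key_ref(keyring, true),
			      &key_type_user, "example");
	if (!IS_ERR(kref)) {
		struct key *key = key_ref_to_ptr(kref);

		down_read(&key->sem);
		/* ... use the payload ... */
		up_read(&key->sem);
		key_ref_put(kref);
	}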
/*
- * Search the given keyring only (no recursion).
+ * Search the given keyring for a key that might be updated.
*
* The caller must guarantee that the keyring is a keyring and that the
- * permission is granted to search the keyring as no check is made here.
- *
- * RCU is used to make it unnecessary to lock the keyring key list here.
+ * permission is granted to modify the keyring as no check is made here. The
+ * caller must also hold a lock on the keyring semaphore.
*
* Returns a pointer to the found key with usage count incremented if
- * successful and returns -ENOKEY if not found. Revoked keys and keys not
- * providing the requested permission are skipped over.
+ * successful and returns NULL if not found. Revoked and invalidated keys are
+ * skipped over.
*
* If successful, the possession indicator is propagated from the keyring ref
* to the returned key reference.
*/
-key_ref_t __keyring_search_one(key_ref_t keyring_ref,
- const struct key_type *ktype,
- const char *description,
- key_perm_t perm)
+key_ref_t find_key_to_update(key_ref_t keyring_ref,
+ const struct keyring_index_key *index_key)
{
- struct keyring_list *klist;
- unsigned long possessed;
struct key *keyring, *key;
- int nkeys, loop;
+ const void *object;
keyring = key_ref_to_ptr(keyring_ref);
- possessed = is_key_possessed(keyring_ref);
- rcu_read_lock();
+ kenter("{%d},{%s,%s}",
+ keyring->serial, index_key->type->name, index_key->description);
- klist = rcu_dereference(keyring->payload.subscriptions);
- if (klist) {
- nkeys = klist->nkeys;
- smp_rmb();
- for (loop = 0; loop < nkeys ; loop++) {
- key = rcu_dereference(klist->keys[loop]);
- if (key->type == ktype &&
- (!key->type->match ||
- key->type->match(key, description)) &&
- key_permission(make_key_ref(key, possessed),
- perm) == 0 &&
- !(key->flags & ((1 << KEY_FLAG_INVALIDATED) |
- (1 << KEY_FLAG_REVOKED)))
- )
- goto found;
- }
- }
+ object = assoc_array_find(&keyring->keys, &keyring_assoc_array_ops,
+ index_key);
- rcu_read_unlock();
- return ERR_PTR(-ENOKEY);
+ if (object)
+ goto found;
+
+ kleave(" = NULL");
+ return NULL;
found:
- atomic_inc(&key->usage);
- keyring->last_used_at = key->last_used_at =
- current_kernel_time().tv_sec;
- rcu_read_unlock();
- return make_key_ref(key, possessed);
+ key = keyring_ptr_to_key(object);
+ if (key->flags & ((1 << KEY_FLAG_INVALIDATED) |
+ (1 << KEY_FLAG_REVOKED))) {
+ kleave(" = NULL [x]");
+ return NULL;
+ }
+ __key_get(key);
+ kleave(" = {%d}", key->serial);
+ return make_key_ref(key, is_key_possessed(keyring_ref));
}
/*
return keyring;
}
+static int keyring_detect_cycle_iterator(const void *object,
+ void *iterator_data)
+{
+ struct keyring_search_context *ctx = iterator_data;
+ const struct key *key = keyring_ptr_to_key(object);
+
+ kenter("{%d}", key->serial);
+
+ BUG_ON(key != ctx->match_data);
+ ctx->result = ERR_PTR(-EDEADLK);
+ return 1;
+}
+
/*
 * See if a cycle will be created by inserting acyclic tree B in acyclic
* tree A at the topmost level (ie: as a direct child of A).
*/
static int keyring_detect_cycle(struct key *A, struct key *B)
{
- struct {
- struct keyring_list *keylist;
- int kix;
- } stack[KEYRING_SEARCH_MAX_DEPTH];
-
- struct keyring_list *keylist;
- struct key *subtree, *key;
- int sp, nkeys, kix, ret;
+ struct keyring_search_context ctx = {
+ .index_key = A->index_key,
+ .match_data = A,
+ .iterator = keyring_detect_cycle_iterator,
+ .flags = (KEYRING_SEARCH_LOOKUP_DIRECT |
+ KEYRING_SEARCH_NO_STATE_CHECK |
+ KEYRING_SEARCH_NO_UPDATE_TIME |
+ KEYRING_SEARCH_NO_CHECK_PERM |
+ KEYRING_SEARCH_DETECT_TOO_DEEP),
+ };
rcu_read_lock();
-
- ret = -EDEADLK;
- if (A == B)
- goto cycle_detected;
-
- subtree = B;
- sp = 0;
-
- /* start processing a new keyring */
-descend:
- if (test_bit(KEY_FLAG_REVOKED, &subtree->flags))
- goto not_this_keyring;
-
- keylist = rcu_dereference(subtree->payload.subscriptions);
- if (!keylist)
- goto not_this_keyring;
- kix = 0;
-
-ascend:
- /* iterate through the remaining keys in this keyring */
- nkeys = keylist->nkeys;
- smp_rmb();
- for (; kix < nkeys; kix++) {
- key = rcu_dereference(keylist->keys[kix]);
-
- if (key == A)
- goto cycle_detected;
-
- /* recursively check nested keyrings */
- if (key->type == &key_type_keyring) {
- if (sp >= KEYRING_SEARCH_MAX_DEPTH)
- goto too_deep;
-
- /* stack the current position */
- stack[sp].keylist = keylist;
- stack[sp].kix = kix;
- sp++;
-
- /* begin again with the new keyring */
- subtree = key;
- goto descend;
- }
- }
-
- /* the keyring we're looking at was disqualified or didn't contain a
- * matching key */
-not_this_keyring:
- if (sp > 0) {
- /* resume the checking of a keyring higher up in the tree */
- sp--;
- keylist = stack[sp].keylist;
- kix = stack[sp].kix + 1;
- goto ascend;
- }
-
- ret = 0; /* no cycles detected */
-
-error:
+ search_nested_keyrings(B, &ctx);
rcu_read_unlock();
- return ret;
-
-too_deep:
- ret = -ELOOP;
- goto error;
-
-cycle_detected:
- ret = -EDEADLK;
- goto error;
-}
-
-/*
- * Dispose of a keyring list after the RCU grace period, freeing the unlinked
- * key
- */
-static void keyring_unlink_rcu_disposal(struct rcu_head *rcu)
-{
- struct keyring_list *klist =
- container_of(rcu, struct keyring_list, rcu);
-
- if (klist->delkey != USHRT_MAX)
- key_put(rcu_access_pointer(klist->keys[klist->delkey]));
- kfree(klist);
+ return PTR_ERR(ctx.result) == -EAGAIN ? 0 : PTR_ERR(ctx.result);
}
/*
* Preallocate memory so that a key can be linked into to a keyring.
*/
-int __key_link_begin(struct key *keyring, const struct key_type *type,
- const char *description, unsigned long *_prealloc)
+int __key_link_begin(struct key *keyring,
+ const struct keyring_index_key *index_key,
+ struct assoc_array_edit **_edit)
__acquires(&keyring->sem)
__acquires(&keyring_serialise_link_sem)
{
- struct keyring_list *klist, *nklist;
- unsigned long prealloc;
- unsigned max;
- time_t lowest_lru;
- size_t size;
- int loop, lru, ret;
+ struct assoc_array_edit *edit;
+ int ret;
+
+ kenter("%d,%s,%s,",
+ keyring->serial, index_key->type->name, index_key->description);
- kenter("%d,%s,%s,", key_serial(keyring), type->name, description);
+ BUG_ON(index_key->desc_len == 0);
if (keyring->type != &key_type_keyring)
return -ENOTDIR;
/* serialise link/link calls to prevent parallel calls causing a cycle
* when linking two keyring in opposite orders */
- if (type == &key_type_keyring)
+ if (index_key->type == &key_type_keyring)
down_write(&keyring_serialise_link_sem);
- klist = rcu_dereference_locked_keyring(keyring);
-
- /* see if there's a matching key we can displace */
- lru = -1;
- if (klist && klist->nkeys > 0) {
- lowest_lru = TIME_T_MAX;
- for (loop = klist->nkeys - 1; loop >= 0; loop--) {
- struct key *key = rcu_deref_link_locked(klist, loop,
- keyring);
- if (key->type == type &&
- strcmp(key->description, description) == 0) {
- /* Found a match - we'll replace the link with
- * one to the new key. We record the slot
- * position.
- */
- klist->delkey = loop;
- prealloc = 0;
- goto done;
- }
- if (key->last_used_at < lowest_lru) {
- lowest_lru = key->last_used_at;
- lru = loop;
- }
- }
- }
-
- /* If the keyring is full then do an LRU discard */
- if (klist &&
- klist->nkeys == klist->maxkeys &&
- klist->maxkeys >= MAX_KEYRING_LINKS) {
- kdebug("LRU discard %d\n", lru);
- klist->delkey = lru;
- prealloc = 0;
- goto done;
- }
-
- /* check that we aren't going to overrun the user's quota */
- ret = key_payload_reserve(keyring,
- keyring->datalen + KEYQUOTA_LINK_BYTES);
- if (ret < 0)
+ /* Create an edit script that will insert/replace the key in the
+ * keyring tree.
+ */
+ edit = assoc_array_insert(&keyring->keys,
+ &keyring_assoc_array_ops,
+ index_key,
+ NULL);
+ if (IS_ERR(edit)) {
+ ret = PTR_ERR(edit);
goto error_sem;
+ }
- if (klist && klist->nkeys < klist->maxkeys) {
- /* there's sufficient slack space to append directly */
- klist->delkey = klist->nkeys;
- prealloc = KEY_LINK_FIXQUOTA;
- } else {
- /* grow the key list */
- max = 4;
- if (klist) {
- max += klist->maxkeys;
- if (max > MAX_KEYRING_LINKS)
- max = MAX_KEYRING_LINKS;
- BUG_ON(max <= klist->maxkeys);
- }
-
- size = sizeof(*klist) + sizeof(struct key *) * max;
-
- ret = -ENOMEM;
- nklist = kmalloc(size, GFP_KERNEL);
- if (!nklist)
- goto error_quota;
-
- nklist->maxkeys = max;
- if (klist) {
- memcpy(nklist->keys, klist->keys,
- sizeof(struct key *) * klist->nkeys);
- nklist->delkey = klist->nkeys;
- nklist->nkeys = klist->nkeys + 1;
- klist->delkey = USHRT_MAX;
- } else {
- nklist->nkeys = 1;
- nklist->delkey = 0;
- }
-
- /* add the key into the new space */
- RCU_INIT_POINTER(nklist->keys[nklist->delkey], NULL);
- prealloc = (unsigned long)nklist | KEY_LINK_FIXQUOTA;
+ /* If we're not replacing a link in-place then we're going to need some
+ * extra quota.
+ */
+ if (!edit->dead_leaf) {
+ ret = key_payload_reserve(keyring,
+ keyring->datalen + KEYQUOTA_LINK_BYTES);
+ if (ret < 0)
+ goto error_cancel;
}
-done:
- *_prealloc = prealloc;
+ *_edit = edit;
kleave(" = 0");
return 0;
-error_quota:
- /* undo the quota changes */
- key_payload_reserve(keyring,
- keyring->datalen - KEYQUOTA_LINK_BYTES);
+error_cancel:
+ assoc_array_cancel_edit(edit);
error_sem:
- if (type == &key_type_keyring)
+ if (index_key->type == &key_type_keyring)
up_write(&keyring_serialise_link_sem);
error_krsem:
up_write(&keyring->sem);
* holds at most one link to any given key of a particular type+description
* combination.
*/
-void __key_link(struct key *keyring, struct key *key,
- unsigned long *_prealloc)
+void __key_link(struct key *key, struct assoc_array_edit **_edit)
{
- struct keyring_list *klist, *nklist;
- struct key *discard;
-
- nklist = (struct keyring_list *)(*_prealloc & ~KEY_LINK_FIXQUOTA);
- *_prealloc = 0;
-
- kenter("%d,%d,%p", keyring->serial, key->serial, nklist);
-
- klist = rcu_dereference_locked_keyring(keyring);
-
- atomic_inc(&key->usage);
- keyring->last_used_at = key->last_used_at =
- current_kernel_time().tv_sec;
-
- /* there's a matching key we can displace or an empty slot in a newly
- * allocated list we can fill */
- if (nklist) {
- kdebug("reissue %hu/%hu/%hu",
- nklist->delkey, nklist->nkeys, nklist->maxkeys);
-
- RCU_INIT_POINTER(nklist->keys[nklist->delkey], key);
-
- rcu_assign_pointer(keyring->payload.subscriptions, nklist);
-
- /* dispose of the old keyring list and, if there was one, the
- * displaced key */
- if (klist) {
- kdebug("dispose %hu/%hu/%hu",
- klist->delkey, klist->nkeys, klist->maxkeys);
- call_rcu(&klist->rcu, keyring_unlink_rcu_disposal);
- }
- } else if (klist->delkey < klist->nkeys) {
- kdebug("replace %hu/%hu/%hu",
- klist->delkey, klist->nkeys, klist->maxkeys);
-
- discard = rcu_dereference_protected(
- klist->keys[klist->delkey],
- rwsem_is_locked(&keyring->sem));
- rcu_assign_pointer(klist->keys[klist->delkey], key);
- /* The garbage collector will take care of RCU
- * synchronisation */
- key_put(discard);
- } else {
- /* there's sufficient slack space to append directly */
- kdebug("append %hu/%hu/%hu",
- klist->delkey, klist->nkeys, klist->maxkeys);
-
- RCU_INIT_POINTER(klist->keys[klist->delkey], key);
- smp_wmb();
- klist->nkeys++;
- }
+ __key_get(key);
+ assoc_array_insert_set_object(*_edit, keyring_key_to_ptr(key));
+ assoc_array_apply_edit(*_edit);
+ *_edit = NULL;
}
/*
*
 * Must be called with __key_link_begin() having been called.
*/
-void __key_link_end(struct key *keyring, struct key_type *type,
- unsigned long prealloc)
+void __key_link_end(struct key *keyring,
+ const struct keyring_index_key *index_key,
+ struct assoc_array_edit *edit)
__releases(&keyring->sem)
__releases(&keyring_serialise_link_sem)
{
- BUG_ON(type == NULL);
- BUG_ON(type->name == NULL);
- kenter("%d,%s,%lx", keyring->serial, type->name, prealloc);
+ BUG_ON(index_key->type == NULL);
+ kenter("%d,%s,", keyring->serial, index_key->type->name);
- if (type == &key_type_keyring)
+ if (index_key->type == &key_type_keyring)
up_write(&keyring_serialise_link_sem);
- if (prealloc) {
- if (prealloc & KEY_LINK_FIXQUOTA)
- key_payload_reserve(keyring,
- keyring->datalen -
- KEYQUOTA_LINK_BYTES);
- kfree((struct keyring_list *)(prealloc & ~KEY_LINK_FIXQUOTA));
+ if (edit && !edit->dead_leaf) {
+ key_payload_reserve(keyring,
+ keyring->datalen - KEYQUOTA_LINK_BYTES);
+ assoc_array_cancel_edit(edit);
}
up_write(&keyring->sem);
}
*/
int key_link(struct key *keyring, struct key *key)
{
- unsigned long prealloc;
+ struct assoc_array_edit *edit;
int ret;
+ kenter("{%d,%d}", keyring->serial, atomic_read(&keyring->usage));
+
key_check(keyring);
key_check(key);
- ret = __key_link_begin(keyring, key->type, key->description, &prealloc);
+ if (test_bit(KEY_FLAG_TRUSTED_ONLY, &keyring->flags) &&
+ !test_bit(KEY_FLAG_TRUSTED, &key->flags))
+ return -EPERM;
+
+ ret = __key_link_begin(keyring, &key->index_key, &edit);
if (ret == 0) {
+ kdebug("begun {%d,%d}", keyring->serial, atomic_read(&keyring->usage));
ret = __key_link_check_live_key(keyring, key);
if (ret == 0)
- __key_link(keyring, key, &prealloc);
- __key_link_end(keyring, key->type, prealloc);
+ __key_link(key, &edit);
+ __key_link_end(keyring, &key->index_key, edit);
}
+ kleave(" = %d {%d,%d}", ret, keyring->serial, atomic_read(&keyring->usage));
return ret;
}
EXPORT_SYMBOL(key_link);
*/
int key_unlink(struct key *keyring, struct key *key)
{
- struct keyring_list *klist, *nklist;
- int loop, ret;
+ struct assoc_array_edit *edit;
+ int ret;
key_check(keyring);
key_check(key);
- ret = -ENOTDIR;
if (keyring->type != &key_type_keyring)
- goto error;
+ return -ENOTDIR;
down_write(&keyring->sem);
- klist = rcu_dereference_locked_keyring(keyring);
- if (klist) {
- /* search the keyring for the key */
- for (loop = 0; loop < klist->nkeys; loop++)
- if (rcu_access_pointer(klist->keys[loop]) == key)
- goto key_is_present;
+ edit = assoc_array_delete(&keyring->keys, &keyring_assoc_array_ops,
+ &key->index_key);
+ if (IS_ERR(edit)) {
+ ret = PTR_ERR(edit);
+ goto error;
}
-
- up_write(&keyring->sem);
ret = -ENOENT;
- goto error;
-
-key_is_present:
- /* we need to copy the key list for RCU purposes */
- nklist = kmalloc(sizeof(*klist) +
- sizeof(struct key *) * klist->maxkeys,
- GFP_KERNEL);
- if (!nklist)
- goto nomem;
- nklist->maxkeys = klist->maxkeys;
- nklist->nkeys = klist->nkeys - 1;
-
- if (loop > 0)
- memcpy(&nklist->keys[0],
- &klist->keys[0],
- loop * sizeof(struct key *));
-
- if (loop < nklist->nkeys)
- memcpy(&nklist->keys[loop],
- &klist->keys[loop + 1],
- (nklist->nkeys - loop) * sizeof(struct key *));
-
- /* adjust the user's quota */
- key_payload_reserve(keyring,
- keyring->datalen - KEYQUOTA_LINK_BYTES);
-
- rcu_assign_pointer(keyring->payload.subscriptions, nklist);
-
- up_write(&keyring->sem);
-
- /* schedule for later cleanup */
- klist->delkey = loop;
- call_rcu(&klist->rcu, keyring_unlink_rcu_disposal);
+ if (edit == NULL)
+ goto error;
+ assoc_array_apply_edit(edit);
+ key_payload_reserve(keyring, keyring->datalen - KEYQUOTA_LINK_BYTES);
ret = 0;
error:
- return ret;
-nomem:
- ret = -ENOMEM;
up_write(&keyring->sem);
- goto error;
+ return ret;
}
EXPORT_SYMBOL(key_unlink);
-/*
- * Dispose of a keyring list after the RCU grace period, releasing the keys it
- * links to.
- */
-static void keyring_clear_rcu_disposal(struct rcu_head *rcu)
-{
- struct keyring_list *klist;
- int loop;
-
- klist = container_of(rcu, struct keyring_list, rcu);
-
- for (loop = klist->nkeys - 1; loop >= 0; loop--)
- key_put(rcu_access_pointer(klist->keys[loop]));
-
- kfree(klist);
-}
-
/**
* keyring_clear - Clear a keyring
* @keyring: The keyring to clear.
*/
int keyring_clear(struct key *keyring)
{
- struct keyring_list *klist;
+ struct assoc_array_edit *edit;
int ret;
- ret = -ENOTDIR;
- if (keyring->type == &key_type_keyring) {
- /* detach the pointer block with the locks held */
- down_write(&keyring->sem);
-
- klist = rcu_dereference_locked_keyring(keyring);
- if (klist) {
- /* adjust the quota */
- key_payload_reserve(keyring,
- sizeof(struct keyring_list));
-
- rcu_assign_pointer(keyring->payload.subscriptions,
- NULL);
- }
-
- up_write(&keyring->sem);
+ if (keyring->type != &key_type_keyring)
+ return -ENOTDIR;
- /* free the keys after the locks have been dropped */
- if (klist)
- call_rcu(&klist->rcu, keyring_clear_rcu_disposal);
+ down_write(&keyring->sem);
+ edit = assoc_array_clear(&keyring->keys, &keyring_assoc_array_ops);
+ if (IS_ERR(edit)) {
+ ret = PTR_ERR(edit);
+ } else {
+ if (edit)
+ assoc_array_apply_edit(edit);
+ key_payload_reserve(keyring, 0);
ret = 0;
}
+ up_write(&keyring->sem);
return ret;
}
EXPORT_SYMBOL(keyring_clear);
*/
static void keyring_revoke(struct key *keyring)
{
- struct keyring_list *klist;
+ struct assoc_array_edit *edit;
+
+ edit = assoc_array_clear(&keyring->keys, &keyring_assoc_array_ops);
+ if (!IS_ERR(edit)) {
+ if (edit)
+ assoc_array_apply_edit(edit);
+ key_payload_reserve(keyring, 0);
+ }
+}
+
+static bool keyring_gc_select_iterator(void *object, void *iterator_data)
+{
+ struct key *key = keyring_ptr_to_key(object);
+ time_t *limit = iterator_data;
- klist = rcu_dereference_locked_keyring(keyring);
+ if (key_is_dead(key, *limit))
+ return false;
+ key_get(key);
+ return true;
+}
- /* adjust the quota */
- key_payload_reserve(keyring, 0);
+static int keyring_gc_check_iterator(const void *object, void *iterator_data)
+{
+ const struct key *key = keyring_ptr_to_key(object);
+ time_t *limit = iterator_data;
- if (klist) {
- rcu_assign_pointer(keyring->payload.subscriptions, NULL);
- call_rcu(&klist->rcu, keyring_clear_rcu_disposal);
- }
+ key_check(key);
+ return key_is_dead(key, *limit);
}
/*
- * Collect garbage from the contents of a keyring, replacing the old list with
- * a new one with the pointers all shuffled down.
+ * Garbage collect pointers from a keyring.
*
- * Dead keys are classed as ones that are flagged as being dead or are revoked,
- * expired or negative keys that were revoked or expired before the specified
- * limit.
+ * Not called with any locks held. The keyring's key struct will not be
+ * deallocated under us as only our caller may deallocate it.
*/
void keyring_gc(struct key *keyring, time_t limit)
{
- struct keyring_list *klist, *new;
- struct key *key;
- int loop, keep, max;
-
- kenter("{%x,%s}", key_serial(keyring), keyring->description);
-
- down_write(&keyring->sem);
-
- klist = rcu_dereference_locked_keyring(keyring);
- if (!klist)
- goto no_klist;
-
- /* work out how many subscriptions we're keeping */
- keep = 0;
- for (loop = klist->nkeys - 1; loop >= 0; loop--)
- if (!key_is_dead(rcu_deref_link_locked(klist, loop, keyring),
- limit))
- keep++;
-
- if (keep == klist->nkeys)
- goto just_return;
-
- /* allocate a new keyring payload */
- max = roundup(keep, 4);
- new = kmalloc(sizeof(struct keyring_list) + max * sizeof(struct key *),
- GFP_KERNEL);
- if (!new)
- goto nomem;
- new->maxkeys = max;
- new->nkeys = 0;
- new->delkey = 0;
-
- /* install the live keys
- * - must take care as expired keys may be updated back to life
- */
- keep = 0;
- for (loop = klist->nkeys - 1; loop >= 0; loop--) {
- key = rcu_deref_link_locked(klist, loop, keyring);
- if (!key_is_dead(key, limit)) {
- if (keep >= max)
- goto discard_new;
- RCU_INIT_POINTER(new->keys[keep++], key_get(key));
- }
- }
- new->nkeys = keep;
-
- /* adjust the quota */
- key_payload_reserve(keyring,
- sizeof(struct keyring_list) +
- KEYQUOTA_LINK_BYTES * keep);
+ int result;
- if (keep == 0) {
- rcu_assign_pointer(keyring->payload.subscriptions, NULL);
- kfree(new);
- } else {
- rcu_assign_pointer(keyring->payload.subscriptions, new);
- }
+ kenter("%x{%s}", keyring->serial, keyring->description ?: "");
- up_write(&keyring->sem);
+ if (keyring->flags & ((1 << KEY_FLAG_INVALIDATED) |
+ (1 << KEY_FLAG_REVOKED)))
+ goto dont_gc;
- call_rcu(&klist->rcu, keyring_clear_rcu_disposal);
- kleave(" [yes]");
- return;
-
-discard_new:
- new->nkeys = keep;
- keyring_clear_rcu_disposal(&new->rcu);
- up_write(&keyring->sem);
- kleave(" [discard]");
- return;
-
-just_return:
- up_write(&keyring->sem);
- kleave(" [no dead]");
- return;
+ /* scan the keyring looking for dead keys */
+ rcu_read_lock();
+ result = assoc_array_iterate(&keyring->keys,
+ keyring_gc_check_iterator, &limit);
+ rcu_read_unlock();
+ if (result == true)
+ goto do_gc;
-no_klist:
- up_write(&keyring->sem);
- kleave(" [no_klist]");
+dont_gc:
+ kleave(" [no gc]");
return;
-nomem:
+do_gc:
+ down_write(&keyring->sem);
+ assoc_array_gc(&keyring->keys, &keyring_assoc_array_ops,
+ keyring_gc_select_iterator, &limit);
up_write(&keyring->sem);
- kleave(" [oom]");
+ kleave(" [gc]");
}
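
The two iterators above follow different calling conventions; summarising
from the way they are used here (a sketch, not the canonical kernel-doc):

/*
 * keyring_gc_check_iterator():  run under the RCU read lock by
 *	assoc_array_iterate(), which stops and returns the first non-zero
 *	callback result -- returning 1 for the first dead key ends the
 *	lock-free scan early and cheaply.
 * keyring_gc_select_iterator(): run by assoc_array_gc(), which keeps
 *	exactly those objects for which the callback returns true -- keys
 *	that are not selected are put through the ops' free_object handler
 *	when the excised old tree is destroyed after the RCU grace period,
 *	and the key_get() on each survivor is what keeps it alive across
 *	that teardown.
 */
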
--- /dev/null
+/* General persistent per-UID keyrings register
+ *
+ * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#include <linux/user_namespace.h>
+#include "internal.h"
+
+unsigned persistent_keyring_expiry = 3 * 24 * 3600; /* Expire after 3 days of non-use */
+
+/*
+ * Create the persistent keyring register for the current user namespace.
+ *
+ * Called with the namespace's sem locked for writing.
+ */
+static int key_create_persistent_register(struct user_namespace *ns)
+{
+ struct key *reg = keyring_alloc(".persistent_register",
+ KUIDT_INIT(0), KGIDT_INIT(0),
+ current_cred(),
+ ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ),
+ KEY_ALLOC_NOT_IN_QUOTA, NULL);
+ if (IS_ERR(reg))
+ return PTR_ERR(reg);
+
+ ns->persistent_keyring_register = reg;
+ return 0;
+}
+
+/*
+ * Create the persistent keyring for the specified user.
+ *
+ * Called with the namespace's sem locked for writing.
+ */
+static key_ref_t key_create_persistent(struct user_namespace *ns, kuid_t uid,
+ struct keyring_index_key *index_key)
+{
+ struct key *persistent;
+ key_ref_t reg_ref, persistent_ref;
+
+ if (!ns->persistent_keyring_register) {
+ long err = key_create_persistent_register(ns);
+ if (err < 0)
+ return ERR_PTR(err);
+ } else {
+ reg_ref = make_key_ref(ns->persistent_keyring_register, true);
+ persistent_ref = find_key_to_update(reg_ref, index_key);
+ if (persistent_ref)
+ return persistent_ref;
+ }
+
+ persistent = keyring_alloc(index_key->description,
+ uid, INVALID_GID, current_cred(),
+ ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ),
+ KEY_ALLOC_NOT_IN_QUOTA,
+ ns->persistent_keyring_register);
+ if (IS_ERR(persistent))
+ return ERR_CAST(persistent);
+
+ return make_key_ref(persistent, true);
+}
+
+/*
+ * Get the persistent keyring for a specific UID and link it to the nominated
+ * keyring.
+ */
+static long key_get_persistent(struct user_namespace *ns, kuid_t uid,
+ key_ref_t dest_ref)
+{
+ struct keyring_index_key index_key;
+ struct key *persistent;
+ key_ref_t reg_ref, persistent_ref;
+ char buf[32];
+ long ret;
+
+ /* Look in the register if it exists */
+ index_key.type = &key_type_keyring;
+ index_key.description = buf;
+ index_key.desc_len = sprintf(buf, "_persistent.%u", from_kuid(ns, uid));
+
+ if (ns->persistent_keyring_register) {
+ reg_ref = make_key_ref(ns->persistent_keyring_register, true);
+ down_read(&ns->persistent_keyring_register_sem);
+ persistent_ref = find_key_to_update(reg_ref, &index_key);
+ up_read(&ns->persistent_keyring_register_sem);
+
+ if (persistent_ref)
+ goto found;
+ }
+
+ /* It wasn't in the register, so we'll need to create it. We might
+ * also need to create the register.
+ */
+ down_write(&ns->persistent_keyring_register_sem);
+ persistent_ref = key_create_persistent(ns, uid, &index_key);
+ up_write(&ns->persistent_keyring_register_sem);
+ if (!IS_ERR(persistent_ref))
+ goto found;
+
+ return PTR_ERR(persistent_ref);
+
+found:
+ ret = key_task_permission(persistent_ref, current_cred(), KEY_LINK);
+ if (ret == 0) {
+ persistent = key_ref_to_ptr(persistent_ref);
+ ret = key_link(key_ref_to_ptr(dest_ref), persistent);
+ if (ret == 0) {
+ key_set_timeout(persistent, persistent_keyring_expiry);
+ ret = persistent->serial;
+ }
+ }
+
+ key_ref_put(persistent_ref);
+ return ret;
+}
+
+/*
+ * Get the persistent keyring for a specific UID and link it to the nominated
+ * keyring.
+ */
+long keyctl_get_persistent(uid_t _uid, key_serial_t destid)
+{
+ struct user_namespace *ns = current_user_ns();
+ key_ref_t dest_ref;
+ kuid_t uid;
+ long ret;
+
+ /* -1 indicates the current user */
+ if (_uid == (uid_t)-1) {
+ uid = current_uid();
+ } else {
+ uid = make_kuid(ns, _uid);
+ if (!uid_valid(uid))
+ return -EINVAL;
+
+ /* You can only see your own persistent cache if you're not
+ * sufficiently privileged.
+ */
+ if (!uid_eq(uid, current_uid()) &&
+ !uid_eq(uid, current_euid()) &&
+ !ns_capable(ns, CAP_SETUID))
+ return -EPERM;
+ }
+
+ /* There must be a destination keyring */
+ dest_ref = lookup_user_key(destid, KEY_LOOKUP_CREATE, KEY_WRITE);
+ if (IS_ERR(dest_ref))
+ return PTR_ERR(dest_ref);
+ if (key_ref_to_ptr(dest_ref)->type != &key_type_keyring) {
+ ret = -ENOTDIR;
+ goto out_put_dest;
+ }
+
+ ret = key_get_persistent(ns, uid, dest_ref);
+
+out_put_dest:
+ key_ref_put(dest_ref);
+ return ret;
+}
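
From userspace this is reachable as KEYCTL_GET_PERSISTENT.  A hypothetical
minimal caller, assuming the libkeyutils keyctl_get_persistent() wrapper
(build with -lkeyutils):

#include <sys/types.h>
#include <keyutils.h>
#include <stdio.h>

int main(void)
{
	/* uid -1 means the caller's own UID; the keyring is created on
	 * first use and linked into the session keyring. */
	long id = keyctl_get_persistent((uid_t)-1, KEY_SPEC_SESSION_KEYRING);

	if (id < 0) {
		perror("keyctl_get_persistent");
		return 1;
	}
	printf("persistent keyring serial: %ld\n", id);
	return 0;
}

Each successful call also refreshes the keyring's timeout via
key_set_timeout(), so a persistent keyring only lapses after
persistent_keyring_expiry seconds (three days by default) without use.
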
static int proc_keys_show(struct seq_file *m, void *v)
{
- const struct cred *cred = current_cred();
struct rb_node *_p = v;
struct key *key = rb_entry(_p, struct key, serial_node);
struct timespec now;
char xbuf[12];
int rc;
+ struct keyring_search_context ctx = {
+ .index_key.type = key->type,
+ .index_key.description = key->description,
+ .cred = current_cred(),
+ .match = lookup_user_key_possessed,
+ .match_data = key,
+ .flags = (KEYRING_SEARCH_NO_STATE_CHECK |
+ KEYRING_SEARCH_LOOKUP_DIRECT),
+ };
+
key_ref = make_key_ref(key, 0);
/* determine if the key is possessed by this process (a test we can
* skip if the key does not indicate the possessor can view it
*/
if (key->perm & KEY_POS_VIEW) {
- skey_ref = search_my_process_keyrings(key->type, key,
- lookup_user_key_possessed,
- true, cred);
+ skey_ref = search_my_process_keyrings(&ctx);
if (!IS_ERR(skey_ref)) {
key_ref_put(skey_ref);
key_ref = make_key_ref(key, 1);
* - the caller holds a spinlock, and thus the RCU read lock, making our
* access to __current_cred() safe
*/
- rc = key_task_permission(key_ref, cred, KEY_VIEW);
+ rc = key_task_permission(key_ref, ctx.cred, KEY_VIEW);
if (rc < 0)
return 0;
if (IS_ERR(keyring))
return PTR_ERR(keyring);
} else {
- atomic_inc(&keyring->usage);
+ __key_get(keyring);
}
/* install the keyring */
* In the case of a successful return, the possession attribute is set on the
* returned key reference.
*/
-key_ref_t search_my_process_keyrings(struct key_type *type,
- const void *description,
- key_match_func_t match,
- bool no_state_check,
- const struct cred *cred)
+key_ref_t search_my_process_keyrings(struct keyring_search_context *ctx)
{
key_ref_t key_ref, ret, err;
err = ERR_PTR(-EAGAIN);
/* search the thread keyring first */
- if (cred->thread_keyring) {
+ if (ctx->cred->thread_keyring) {
key_ref = keyring_search_aux(
- make_key_ref(cred->thread_keyring, 1),
- cred, type, description, match, no_state_check);
+ make_key_ref(ctx->cred->thread_keyring, 1), ctx);
if (!IS_ERR(key_ref))
goto found;
}
/* search the process keyring second */
- if (cred->process_keyring) {
+ if (ctx->cred->process_keyring) {
key_ref = keyring_search_aux(
- make_key_ref(cred->process_keyring, 1),
- cred, type, description, match, no_state_check);
+ make_key_ref(ctx->cred->process_keyring, 1), ctx);
if (!IS_ERR(key_ref))
goto found;
}
/* search the session keyring */
- if (cred->session_keyring) {
+ if (ctx->cred->session_keyring) {
rcu_read_lock();
key_ref = keyring_search_aux(
- make_key_ref(rcu_dereference(cred->session_keyring), 1),
- cred, type, description, match, no_state_check);
+ make_key_ref(rcu_dereference(ctx->cred->session_keyring), 1),
+ ctx);
rcu_read_unlock();
if (!IS_ERR(key_ref))
}
}
/* or search the user-session keyring */
- else if (cred->user->session_keyring) {
+ else if (ctx->cred->user->session_keyring) {
key_ref = keyring_search_aux(
- make_key_ref(cred->user->session_keyring, 1),
- cred, type, description, match, no_state_check);
+ make_key_ref(ctx->cred->user->session_keyring, 1),
+ ctx);
if (!IS_ERR(key_ref))
goto found;
*
* Return same as search_my_process_keyrings().
*/
-key_ref_t search_process_keyrings(struct key_type *type,
- const void *description,
- key_match_func_t match,
- const struct cred *cred)
+key_ref_t search_process_keyrings(struct keyring_search_context *ctx)
{
struct request_key_auth *rka;
key_ref_t key_ref, ret = ERR_PTR(-EACCES), err;
might_sleep();
- key_ref = search_my_process_keyrings(type, description, match,
- false, cred);
+ key_ref = search_my_process_keyrings(ctx);
if (!IS_ERR(key_ref))
goto found;
err = key_ref;
* search the keyrings of the process mentioned there
* - we don't permit access to request_key auth keys via this method
*/
- if (cred->request_key_auth &&
- cred == current_cred() &&
- type != &key_type_request_key_auth
+ if (ctx->cred->request_key_auth &&
+ ctx->cred == current_cred() &&
+ ctx->index_key.type != &key_type_request_key_auth
) {
+ const struct cred *cred = ctx->cred;
+
/* defend against the auth key being revoked */
down_read(&cred->request_key_auth->sem);
- if (key_validate(cred->request_key_auth) == 0) {
- rka = cred->request_key_auth->payload.data;
+ if (key_validate(ctx->cred->request_key_auth) == 0) {
+ rka = ctx->cred->request_key_auth->payload.data;
- key_ref = search_process_keyrings(type, description,
- match, rka->cred);
+ ctx->cred = rka->cred;
+ key_ref = search_process_keyrings(ctx);
+ ctx->cred = cred;
up_read(&cred->request_key_auth->sem);
key_ref_t lookup_user_key(key_serial_t id, unsigned long lflags,
key_perm_t perm)
{
+ struct keyring_search_context ctx = {
+ .match = lookup_user_key_possessed,
+ .flags = (KEYRING_SEARCH_NO_STATE_CHECK |
+ KEYRING_SEARCH_LOOKUP_DIRECT),
+ };
struct request_key_auth *rka;
- const struct cred *cred;
struct key *key;
key_ref_t key_ref, skey_ref;
int ret;
try_again:
- cred = get_current_cred();
+ ctx.cred = get_current_cred();
key_ref = ERR_PTR(-ENOKEY);
switch (id) {
case KEY_SPEC_THREAD_KEYRING:
- if (!cred->thread_keyring) {
+ if (!ctx.cred->thread_keyring) {
if (!(lflags & KEY_LOOKUP_CREATE))
goto error;
goto reget_creds;
}
- key = cred->thread_keyring;
- atomic_inc(&key->usage);
+ key = ctx.cred->thread_keyring;
+ __key_get(key);
key_ref = make_key_ref(key, 1);
break;
case KEY_SPEC_PROCESS_KEYRING:
- if (!cred->process_keyring) {
+ if (!ctx.cred->process_keyring) {
if (!(lflags & KEY_LOOKUP_CREATE))
goto error;
goto reget_creds;
}
- key = cred->process_keyring;
- atomic_inc(&key->usage);
+ key = ctx.cred->process_keyring;
+ __key_get(key);
key_ref = make_key_ref(key, 1);
break;
case KEY_SPEC_SESSION_KEYRING:
- if (!cred->session_keyring) {
+ if (!ctx.cred->session_keyring) {
/* always install a session keyring upon access if one
* doesn't exist yet */
ret = install_user_keyrings();
ret = join_session_keyring(NULL);
else
ret = install_session_keyring(
- cred->user->session_keyring);
+ ctx.cred->user->session_keyring);
if (ret < 0)
goto error;
goto reget_creds;
- } else if (cred->session_keyring ==
- cred->user->session_keyring &&
+ } else if (ctx.cred->session_keyring ==
+ ctx.cred->user->session_keyring &&
lflags & KEY_LOOKUP_CREATE) {
ret = join_session_keyring(NULL);
if (ret < 0)
}
rcu_read_lock();
- key = rcu_dereference(cred->session_keyring);
- atomic_inc(&key->usage);
+ key = rcu_dereference(ctx.cred->session_keyring);
+ __key_get(key);
rcu_read_unlock();
key_ref = make_key_ref(key, 1);
break;
case KEY_SPEC_USER_KEYRING:
- if (!cred->user->uid_keyring) {
+ if (!ctx.cred->user->uid_keyring) {
ret = install_user_keyrings();
if (ret < 0)
goto error;
}
- key = cred->user->uid_keyring;
- atomic_inc(&key->usage);
+ key = ctx.cred->user->uid_keyring;
+ __key_get(key);
key_ref = make_key_ref(key, 1);
break;
case KEY_SPEC_USER_SESSION_KEYRING:
- if (!cred->user->session_keyring) {
+ if (!ctx.cred->user->session_keyring) {
ret = install_user_keyrings();
if (ret < 0)
goto error;
}
- key = cred->user->session_keyring;
- atomic_inc(&key->usage);
+ key = ctx.cred->user->session_keyring;
+ __key_get(key);
key_ref = make_key_ref(key, 1);
break;
goto error;
case KEY_SPEC_REQKEY_AUTH_KEY:
- key = cred->request_key_auth;
+ key = ctx.cred->request_key_auth;
if (!key)
goto error;
- atomic_inc(&key->usage);
+ __key_get(key);
key_ref = make_key_ref(key, 1);
break;
case KEY_SPEC_REQUESTOR_KEYRING:
- if (!cred->request_key_auth)
+ if (!ctx.cred->request_key_auth)
goto error;
- down_read(&cred->request_key_auth->sem);
+ down_read(&ctx.cred->request_key_auth->sem);
if (test_bit(KEY_FLAG_REVOKED,
- &cred->request_key_auth->flags)) {
+ &ctx.cred->request_key_auth->flags)) {
key_ref = ERR_PTR(-EKEYREVOKED);
key = NULL;
} else {
- rka = cred->request_key_auth->payload.data;
+ rka = ctx.cred->request_key_auth->payload.data;
key = rka->dest_keyring;
- atomic_inc(&key->usage);
+ __key_get(key);
}
- up_read(&cred->request_key_auth->sem);
+ up_read(&ctx.cred->request_key_auth->sem);
if (!key)
goto error;
key_ref = make_key_ref(key, 1);
key_ref = make_key_ref(key, 0);
/* check to see if we possess the key */
- skey_ref = search_process_keyrings(key->type, key,
- lookup_user_key_possessed,
- cred);
+ ctx.index_key.type = key->type;
+ ctx.index_key.description = key->description;
+ ctx.index_key.desc_len = strlen(key->description);
+ ctx.match_data = key;
+ kdebug("check possessed");
+ skey_ref = search_process_keyrings(&ctx);
+ kdebug("possessed=%p", skey_ref);
if (!IS_ERR(skey_ref)) {
key_put(key);
goto invalid_key;
/* check the permissions */
- ret = key_task_permission(key_ref, cred, perm);
+ ret = key_task_permission(key_ref, ctx.cred, perm);
if (ret < 0)
goto invalid_key;
key->last_used_at = current_kernel_time().tv_sec;
error:
- put_cred(cred);
+ put_cred(ctx.cred);
return key_ref;
invalid_key:
/* if we attempted to install a keyring, then it may have caused new
* creds to be installed */
reget_creds:
- put_cred(cred);
+ put_cred(ctx.cred);
goto try_again;
}
commit_creds(new);
}
+
+/*
+ * Make sure that root's user and user-session keyrings exist.
+ */
+static int __init init_root_keyring(void)
+{
+ return install_user_keyrings();
+}
+
+late_initcall(init_root_keyring);
* May return a key that's already under construction instead if there was a
* race between two thread calling request_key().
*/
-static int construct_alloc_key(struct key_type *type,
- const char *description,
+static int construct_alloc_key(struct keyring_search_context *ctx,
struct key *dest_keyring,
unsigned long flags,
struct key_user *user,
struct key **_key)
{
- const struct cred *cred = current_cred();
- unsigned long prealloc;
+ struct assoc_array_edit *edit;
struct key *key;
key_perm_t perm;
key_ref_t key_ref;
int ret;
- kenter("%s,%s,,,", type->name, description);
+ kenter("%s,%s,,,",
+ ctx->index_key.type->name, ctx->index_key.description);
*_key = NULL;
mutex_lock(&user->cons_lock);
perm = KEY_POS_VIEW | KEY_POS_SEARCH | KEY_POS_LINK | KEY_POS_SETATTR;
perm |= KEY_USR_VIEW;
- if (type->read)
+ if (ctx->index_key.type->read)
perm |= KEY_POS_READ;
- if (type == &key_type_keyring || type->update)
+ if (ctx->index_key.type == &key_type_keyring ||
+ ctx->index_key.type->update)
perm |= KEY_POS_WRITE;
- key = key_alloc(type, description, cred->fsuid, cred->fsgid, cred,
+ key = key_alloc(ctx->index_key.type, ctx->index_key.description,
+ ctx->cred->fsuid, ctx->cred->fsgid, ctx->cred,
perm, flags);
if (IS_ERR(key))
goto alloc_failed;
set_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags);
if (dest_keyring) {
- ret = __key_link_begin(dest_keyring, type, description,
- &prealloc);
+ ret = __key_link_begin(dest_keyring, &ctx->index_key, &edit);
if (ret < 0)
goto link_prealloc_failed;
}
* waited for locks */
mutex_lock(&key_construction_mutex);
- key_ref = search_process_keyrings(type, description, type->match, cred);
+ key_ref = search_process_keyrings(ctx);
if (!IS_ERR(key_ref))
goto key_already_present;
if (dest_keyring)
- __key_link(dest_keyring, key, &prealloc);
+ __key_link(key, &edit);
mutex_unlock(&key_construction_mutex);
if (dest_keyring)
- __key_link_end(dest_keyring, type, prealloc);
+ __key_link_end(dest_keyring, &ctx->index_key, edit);
mutex_unlock(&user->cons_lock);
*_key = key;
kleave(" = 0 [%d]", key_serial(key));
if (dest_keyring) {
ret = __key_link_check_live_key(dest_keyring, key);
if (ret == 0)
- __key_link(dest_keyring, key, &prealloc);
- __key_link_end(dest_keyring, type, prealloc);
+ __key_link(key, &edit);
+ __key_link_end(dest_keyring, &ctx->index_key, edit);
if (ret < 0)
goto link_check_failed;
}
/*
* Commence key construction.
*/
-static struct key *construct_key_and_link(struct key_type *type,
- const char *description,
+static struct key *construct_key_and_link(struct keyring_search_context *ctx,
const char *callout_info,
size_t callout_len,
void *aux,
construct_get_dest_keyring(&dest_keyring);
- ret = construct_alloc_key(type, description, dest_keyring, flags, user,
- &key);
+ ret = construct_alloc_key(ctx, dest_keyring, flags, user, &key);
key_user_put(user);
if (ret == 0) {
struct key *dest_keyring,
unsigned long flags)
{
- const struct cred *cred = current_cred();
+ struct keyring_search_context ctx = {
+ .index_key.type = type,
+ .index_key.description = description,
+ .cred = current_cred(),
+ .match = type->match,
+ .match_data = description,
+ .flags = KEYRING_SEARCH_LOOKUP_DIRECT,
+ };
struct key *key;
key_ref_t key_ref;
int ret;
kenter("%s,%s,%p,%zu,%p,%p,%lx",
- type->name, description, callout_info, callout_len, aux,
- dest_keyring, flags);
+ ctx.index_key.type->name, ctx.index_key.description,
+ callout_info, callout_len, aux, dest_keyring, flags);
/* search all the process keyrings for a key */
- key_ref = search_process_keyrings(type, description, type->match, cred);
+ key_ref = search_process_keyrings(&ctx);
if (!IS_ERR(key_ref)) {
key = key_ref_to_ptr(key_ref);
if (!callout_info)
goto error;
- key = construct_key_and_link(type, description, callout_info,
- callout_len, aux, dest_keyring,
- flags);
+ key = construct_key_and_link(&ctx, callout_info, callout_len,
+ aux, dest_keyring, flags);
}
error:
intr ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
if (ret < 0)
return ret;
- if (test_bit(KEY_FLAG_NEGATIVE, &key->flags))
+ if (test_bit(KEY_FLAG_NEGATIVE, &key->flags)) {
+ smp_rmb();
return key->type_data.reject_error;
+ }
return key_validate(key);
}
EXPORT_SYMBOL(wait_for_key_construction);
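
The new smp_rmb() orders the read of the rejection error after the test of
KEY_FLAG_NEGATIVE.  It is assumed to pair with a write-side barrier at the
point of negative instantiation, roughly:

/* Assumed write side (negative instantiation elsewhere in the keys code):
 *
 *	key->type_data.reject_error = -error;
 *	smp_wmb();
 *	set_bit(KEY_FLAG_NEGATIVE, &key->flags);
 *
 * Read side (above): once the flag is observed set, the smp_rmb()
 * guarantees that the subsequent read of key->type_data.reject_error
 * sees a fully initialised error code rather than stale data.
 */
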
#include <linux/slab.h>
#include <asm/uaccess.h>
#include "internal.h"
+#include <keys/user-type.h>
static int request_key_auth_instantiate(struct key *,
struct key_preparsed_payload *);
return ERR_PTR(ret);
}
-/*
- * See if an authorisation key is associated with a particular key.
- */
-static int key_get_instantiation_authkey_match(const struct key *key,
- const void *_id)
-{
- struct request_key_auth *rka = key->payload.data;
- key_serial_t id = (key_serial_t)(unsigned long) _id;
-
- return rka->target_key->serial == id;
-}
-
/*
* Search the current process's keyrings for the authorisation key for
* instantiation of a key.
*/
struct key *key_get_instantiation_authkey(key_serial_t target_id)
{
- const struct cred *cred = current_cred();
+ char description[16];
+ struct keyring_search_context ctx = {
+ .index_key.type = &key_type_request_key_auth,
+ .index_key.description = description,
+ .cred = current_cred(),
+ .match = user_match,
+ .match_data = description,
+ .flags = KEYRING_SEARCH_LOOKUP_DIRECT,
+ };
struct key *authkey;
key_ref_t authkey_ref;
- authkey_ref = search_process_keyrings(
- &key_type_request_key_auth,
- (void *) (unsigned long) target_id,
- key_get_instantiation_authkey_match,
- cred);
+ sprintf(description, "%x", target_id);
+
+ authkey_ref = search_process_keyrings(&ctx);
if (IS_ERR(authkey_ref)) {
authkey = ERR_CAST(authkey_ref);
.extra1 = (void *) &zero,
.extra2 = (void *) &max,
},
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+ {
+ .procname = "persistent_keyring_expiry",
+ .data = &persistent_keyring_expiry,
+ .maxlen = sizeof(unsigned),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = (void *) &zero,
+ .extra2 = (void *) &max,
+ },
+#endif
{ }
};
* arbitrary blob of data as the payload
*/
struct key_type key_type_user = {
- .name = "user",
- .instantiate = user_instantiate,
- .update = user_update,
- .match = user_match,
- .revoke = user_revoke,
- .destroy = user_destroy,
- .describe = user_describe,
- .read = user_read,
+ .name = "user",
+ .def_lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT,
+ .instantiate = user_instantiate,
+ .update = user_update,
+ .match = user_match,
+ .revoke = user_revoke,
+ .destroy = user_destroy,
+ .describe = user_describe,
+ .read = user_read,
};
EXPORT_SYMBOL_GPL(key_type_user);
*/
struct key_type key_type_logon = {
.name = "logon",
+ .def_lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT,
.instantiate = user_instantiate,
.update = user_update,
.match = user_match,
if (a == NULL)
return;
/* we use GFP_ATOMIC so we won't sleep */
- ab = audit_log_start(current->audit_context, GFP_ATOMIC, AUDIT_AVC);
+ ab = audit_log_start(current->audit_context, GFP_ATOMIC | __GFP_NOWARN,
+ AUDIT_AVC);
if (ab == NULL)
return;
return security_ops->xfrm_policy_delete_security(ctx);
}
-int security_xfrm_state_alloc(struct xfrm_state *x, struct xfrm_user_sec_ctx *sec_ctx)
+int security_xfrm_state_alloc(struct xfrm_state *x,
+ struct xfrm_user_sec_ctx *sec_ctx)
{
- return security_ops->xfrm_state_alloc_security(x, sec_ctx, 0);
+ return security_ops->xfrm_state_alloc(x, sec_ctx);
}
EXPORT_SYMBOL(security_xfrm_state_alloc);
int security_xfrm_state_alloc_acquire(struct xfrm_state *x,
struct xfrm_sec_ctx *polsec, u32 secid)
{
- if (!polsec)
- return 0;
- /*
- * We want the context to be taken from secid which is usually
- * from the sock.
- */
- return security_ops->xfrm_state_alloc_security(x, NULL, secid);
+ return security_ops->xfrm_state_alloc_acquire(x, polsec, secid);
}
int security_xfrm_state_delete(struct xfrm_state *x)
#include <net/ip.h> /* for local_port_range[] */
#include <net/sock.h>
#include <net/tcp.h> /* struct or_callable used in sock_rcv_skb */
+#include <net/inet_connection_sock.h>
#include <net/net_namespace.h>
#include <net/netlabel.h>
#include <linux/uaccess.h>
#include "audit.h"
#include "avc_ss.h"
-#define NUM_SEL_MNT_OPTS 5
-
extern struct security_operations *security_ops;
/* SECMARK reference count */
* This function checks the SECMARK reference counter to see if any SECMARK
* targets are currently configured, if the reference counter is greater than
* zero SECMARK is considered to be enabled. Returns true (1) if SECMARK is
- * enabled, false (0) if SECMARK is disabled.
+ * enabled, false (0) if SECMARK is disabled. If the always_check_network
+ * policy capability is enabled, SECMARK is always considered enabled.
*
*/
static int selinux_secmark_enabled(void)
{
- return (atomic_read(&selinux_secmark_refcount) > 0);
+ return (selinux_policycap_alwaysnetwork || atomic_read(&selinux_secmark_refcount));
+}
+
+/**
+ * selinux_peerlbl_enabled - Check to see if peer labeling is currently enabled
+ *
+ * Description:
+ * This function checks if NetLabel or labeled IPSEC is enabled. Returns true
+ * (1) if any are enabled or false (0) if neither are enabled. If the
+ * always_check_network policy capability is enabled, peer labeling
+ * is always considered enabled.
+ *
+ */
+static int selinux_peerlbl_enabled(void)
+{
+ return (selinux_policycap_alwaysnetwork || netlbl_enabled() || selinux_xfrm_enabled());
}
/*
Opt_defcontext = 3,
Opt_rootcontext = 4,
Opt_labelsupport = 5,
+ Opt_nextmntopt = 6,
};
+#define NUM_SEL_MNT_OPTS (Opt_nextmntopt - 1)
+
static const match_table_t tokens = {
{Opt_context, CONTEXT_STR "%s"},
{Opt_fscontext, FSCONTEXT_STR "%s"},
return rc;
}
+static int selinux_is_sblabel_mnt(struct super_block *sb)
+{
+ struct superblock_security_struct *sbsec = sb->s_security;
+
+ if (sbsec->behavior == SECURITY_FS_USE_XATTR ||
+ sbsec->behavior == SECURITY_FS_USE_TRANS ||
+ sbsec->behavior == SECURITY_FS_USE_TASK)
+ return 1;
+
+	/* Special handling for sysfs. Is genfs but also has setxattr handler */
+ if (strncmp(sb->s_type->name, "sysfs", sizeof("sysfs")) == 0)
+ return 1;
+
+ /*
+ * Special handling for rootfs. Is genfs but supports
+ * setting SELinux context on in-core inodes.
+ */
+ if (strncmp(sb->s_type->name, "rootfs", sizeof("rootfs")) == 0)
+ return 1;
+
+ return 0;
+}
+
static int sb_finish_set_opts(struct super_block *sb)
{
struct superblock_security_struct *sbsec = sb->s_security;
}
}
- sbsec->flags |= (SE_SBINITIALIZED | SE_SBLABELSUPP);
-
if (sbsec->behavior > ARRAY_SIZE(labeling_behaviors))
printk(KERN_ERR "SELinux: initialized (dev %s, type %s), unknown behavior\n",
sb->s_id, sb->s_type->name);
sb->s_id, sb->s_type->name,
labeling_behaviors[sbsec->behavior-1]);
- if (sbsec->behavior == SECURITY_FS_USE_GENFS ||
- sbsec->behavior == SECURITY_FS_USE_MNTPOINT ||
- sbsec->behavior == SECURITY_FS_USE_NONE ||
- sbsec->behavior > ARRAY_SIZE(labeling_behaviors))
- sbsec->flags &= ~SE_SBLABELSUPP;
-
- /* Special handling for sysfs. Is genfs but also has setxattr handler*/
- if (strncmp(sb->s_type->name, "sysfs", sizeof("sysfs")) == 0)
- sbsec->flags |= SE_SBLABELSUPP;
+ sbsec->flags |= SE_SBINITIALIZED;
+ if (selinux_is_sblabel_mnt(sb))
+ sbsec->flags |= SBLABEL_MNT;
/* Initialize the root inode. */
rc = inode_doinit_with_dentry(root_inode, root);
if (!ss_initialized)
return -EINVAL;
+ /* make sure we always check enough bits to cover the mask */
+ BUILD_BUG_ON(SE_MNTMASK >= (1 << NUM_SEL_MNT_OPTS));
+
tmp = sbsec->flags & SE_MNTMASK;
/* count the number of mount options for this sb */
- for (i = 0; i < 8; i++) {
+ for (i = 0; i < NUM_SEL_MNT_OPTS; i++) {
if (tmp & 0x01)
opts->num_mnt_opts++;
tmp >>= 1;
}
/* Check if the Label support flag is set */
- if (sbsec->flags & SE_SBLABELSUPP)
+ if (sbsec->flags & SBLABEL_MNT)
opts->num_mnt_opts++;
opts->mnt_opts = kcalloc(opts->num_mnt_opts, sizeof(char *), GFP_ATOMIC);
opts->mnt_opts[i] = context;
opts->mnt_opts_flags[i++] = ROOTCONTEXT_MNT;
}
- if (sbsec->flags & SE_SBLABELSUPP) {
+ if (sbsec->flags & SBLABEL_MNT) {
opts->mnt_opts[i] = NULL;
- opts->mnt_opts_flags[i++] = SE_SBLABELSUPP;
+ opts->mnt_opts_flags[i++] = SBLABEL_MNT;
}
BUG_ON(i != opts->num_mnt_opts);
for (i = 0; i < num_opts; i++) {
u32 sid;
- if (flags[i] == SE_SBLABELSUPP)
+ if (flags[i] == SBLABEL_MNT)
continue;
rc = security_context_to_sid(mount_options[i],
strlen(mount_options[i]), &sid);
* Determine the labeling behavior to use for this
* filesystem type.
*/
- rc = security_fs_use((sbsec->flags & SE_SBPROC) ?
- "proc" : sb->s_type->name,
- &sbsec->behavior, &sbsec->sid);
+ rc = security_fs_use(sb);
if (rc) {
printk(KERN_WARNING
"%s: security_fs_use(%s) returned %d\n",
case DEFCONTEXT_MNT:
prefix = DEFCONTEXT_STR;
break;
- case SE_SBLABELSUPP:
+ case SBLABEL_MNT:
seq_putc(m, ',');
seq_puts(m, LABELSUPP_STR);
continue;
if (rc)
return rc;
- if (!newsid || !(sbsec->flags & SE_SBLABELSUPP)) {
+ if (!newsid || !(sbsec->flags & SBLABEL_MNT)) {
rc = security_transition_sid(sid, dsec->sid, tclass,
&dentry->d_name, &newsid);
if (rc)
u32 sid;
size_t len;
- if (flags[i] == SE_SBLABELSUPP)
+ if (flags[i] == SBLABEL_MNT)
continue;
len = strlen(mount_options[i]);
rc = security_context_to_sid(mount_options[i], len, &sid);
if ((sbsec->flags & SE_SBINITIALIZED) &&
(sbsec->behavior == SECURITY_FS_USE_MNTPOINT))
newsid = sbsec->mntpoint_sid;
- else if (!newsid || !(sbsec->flags & SE_SBLABELSUPP)) {
+ else if (!newsid || !(sbsec->flags & SBLABEL_MNT)) {
rc = security_transition_sid(sid, dsec->sid,
inode_mode_to_security_class(inode->i_mode),
qstr, &newsid);
isec->initialized = 1;
}
- if (!ss_initialized || !(sbsec->flags & SE_SBLABELSUPP))
+ if (!ss_initialized || !(sbsec->flags & SBLABEL_MNT))
return -EOPNOTSUPP;
if (name)
return selinux_inode_setotherxattr(dentry, name);
sbsec = inode->i_sb->s_security;
- if (!(sbsec->flags & SE_SBLABELSUPP))
+ if (!(sbsec->flags & SBLABEL_MNT))
return -EOPNOTSUPP;
if (!inode_owner_or_capable(inode))
u32 nlbl_sid;
u32 nlbl_type;
- selinux_skb_xfrm_sid(skb, &xfrm_sid);
- selinux_netlbl_skbuff_getsid(skb, family, &nlbl_type, &nlbl_sid);
+ err = selinux_xfrm_skb_sid(skb, &xfrm_sid);
+ if (unlikely(err))
+ return -EACCES;
+ err = selinux_netlbl_skbuff_getsid(skb, family, &nlbl_type, &nlbl_sid);
+ if (unlikely(err))
+ return -EACCES;
err = security_net_peersid_resolve(nlbl_sid, nlbl_type, xfrm_sid, sid);
if (unlikely(err)) {
return 0;
}
+/**
+ * selinux_conn_sid - Determine the child socket label for a connection
+ * @sk_sid: the parent socket's SID
+ * @skb_sid: the packet's SID
+ * @conn_sid: the resulting connection SID
+ *
+ * If @skb_sid is valid then the user:role:type information from @sk_sid is
+ * combined with the MLS information from @skb_sid in order to create
+ * @conn_sid. If @skb_sid is not valid then @conn_sid is simply a copy
+ * of @sk_sid. Returns zero on success, negative values on failure.
+ *
+ */
+static int selinux_conn_sid(u32 sk_sid, u32 skb_sid, u32 *conn_sid)
+{
+ int err = 0;
+
+ if (skb_sid != SECSID_NULL)
+ err = security_sid_mls_copy(sk_sid, skb_sid, conn_sid);
+ else
+ *conn_sid = sk_sid;
+
+ return err;
+}
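
As a concrete illustration (hypothetical contexts; the combination itself
is performed by security_sid_mls_copy()):

/*
 *	sk_sid   -> system_u:system_r:httpd_t:s0
 *	skb_sid  -> system_u:object_r:packet_t:s0:c1,c2
 *	conn_sid -> system_u:system_r:httpd_t:s0:c1,c2
 *
 * i.e. the parent socket's user:role:type carried over, with the MLS
 * range taken from the packet.
 */
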
+
/* socket security operations */
static int socket_sockcreate_sid(const struct task_security_struct *tsec,
return selinux_sock_rcv_skb_compat(sk, skb, family);
secmark_active = selinux_secmark_enabled();
- peerlbl_active = netlbl_enabled() || selinux_xfrm_enabled();
+ peerlbl_active = selinux_peerlbl_enabled();
if (!secmark_active && !peerlbl_active)
return 0;
}
err = avc_has_perm(sk_sid, peer_sid, SECCLASS_PEER,
PEER__RECV, &ad);
- if (err)
+ if (err) {
selinux_netlbl_err(skb, err, 0);
+ return err;
+ }
}
if (secmark_active) {
struct sk_security_struct *sksec = sk->sk_security;
int err;
u16 family = sk->sk_family;
- u32 newsid;
+ u32 connsid;
u32 peersid;
/* handle mapped IPv4 packets arriving via IPv6 sockets */
err = selinux_skb_peerlbl_sid(skb, family, &peersid);
if (err)
return err;
- if (peersid == SECSID_NULL) {
- req->secid = sksec->sid;
- req->peer_secid = SECSID_NULL;
- } else {
- err = security_sid_mls_copy(sksec->sid, peersid, &newsid);
- if (err)
- return err;
- req->secid = newsid;
- req->peer_secid = peersid;
- }
+ err = selinux_conn_sid(sksec->sid, peersid, &connsid);
+ if (err)
+ return err;
+ req->secid = connsid;
+ req->peer_secid = peersid;
return selinux_netlbl_inet_conn_request(req, family);
}
secmark_active = selinux_secmark_enabled();
netlbl_active = netlbl_enabled();
- peerlbl_active = netlbl_active || selinux_xfrm_enabled();
+ peerlbl_active = selinux_peerlbl_enabled();
if (!secmark_active && !peerlbl_active)
return NF_ACCEPT;
static unsigned int selinux_ip_output(struct sk_buff *skb,
u16 family)
{
+ struct sock *sk;
u32 sid;
if (!netlbl_enabled())
/* we do this in the LOCAL_OUT path and not the POST_ROUTING path
* because we want to make sure we apply the necessary labeling
* before IPsec is applied so we can leverage AH protection */
- if (skb->sk) {
- struct sk_security_struct *sksec = skb->sk->sk_security;
+ sk = skb->sk;
+ if (sk) {
+ struct sk_security_struct *sksec;
+
+ if (sk->sk_state == TCP_LISTEN)
+			/* if the socket is in the listening state then this
+			 * packet is a SYN-ACK packet which means it needs to
+			 * be labeled based on the connection/request_sock and
+			 * not the parent socket. unfortunately, we can't
+			 * look up the request_sock yet as it isn't queued on
+ * the parent socket until after the SYN-ACK is sent.
+ * the "solution" is to simply pass the packet as-is
+ * as any IP option based labeling should be copied
+ * from the initial connection request (in the IP
+ * layer). it is far from ideal, but until we get a
+ * security label in the packet itself this is the
+ * best we can do. */
+ return NF_ACCEPT;
+
+ /* standard practice, label using the parent socket */
+ sksec = sk->sk_security;
sid = sksec->sid;
} else
sid = SECINITSID_KERNEL;
* as fast and as clean as possible. */
if (!selinux_policycap_netpeer)
return selinux_ip_postroute_compat(skb, ifindex, family);
+
+ secmark_active = selinux_secmark_enabled();
+ peerlbl_active = selinux_peerlbl_enabled();
+ if (!secmark_active && !peerlbl_active)
+ return NF_ACCEPT;
+
+ sk = skb->sk;
+
#ifdef CONFIG_XFRM
/* If skb->dst->xfrm is non-NULL then the packet is undergoing an IPsec
* packet transformation so allow the packet to pass without any checks
* since we'll have another chance to perform access control checks
 * when the packet is on its final way out.
* NOTE: there appear to be some IPv6 multicast cases where skb->dst
- * is NULL, in this case go ahead and apply access control. */
- if (skb_dst(skb) != NULL && skb_dst(skb)->xfrm != NULL)
+ * is NULL, in this case go ahead and apply access control.
+ * NOTE: if this is a local socket (skb->sk != NULL) that is in the
+ * TCP listening state we cannot wait until the XFRM processing
+ * is done as we will miss out on the SA label if we do;
+ * unfortunately, this means more work, but it is only once per
+ * connection. */
+ if (skb_dst(skb) != NULL && skb_dst(skb)->xfrm != NULL &&
+ !(sk != NULL && sk->sk_state == TCP_LISTEN))
return NF_ACCEPT;
#endif
- secmark_active = selinux_secmark_enabled();
- peerlbl_active = netlbl_enabled() || selinux_xfrm_enabled();
- if (!secmark_active && !peerlbl_active)
- return NF_ACCEPT;
- /* if the packet is being forwarded then get the peer label from the
- * packet itself; otherwise check to see if it is from a local
- * application or the kernel, if from an application get the peer label
- * from the sending socket, otherwise use the kernel's sid */
- sk = skb->sk;
if (sk == NULL) {
+ /* Without an associated socket the packet is either coming
+ * from the kernel or it is being forwarded; check the packet
+ * to determine which and if the packet is being forwarded
+ * query the packet directly to determine the security label. */
if (skb->skb_iif) {
secmark_perm = PACKET__FORWARD_OUT;
if (selinux_skb_peerlbl_sid(skb, family, &peer_sid))
secmark_perm = PACKET__SEND;
peer_sid = SECINITSID_KERNEL;
}
+ } else if (sk->sk_state == TCP_LISTEN) {
+ /* Locally generated packet but the associated socket is in the
+ * listening state which means this is a SYN-ACK packet. In
+ * this particular case the correct security label is assigned
+ * to the connection/request_sock but unfortunately we can't
+ * query the request_sock as it isn't queued on the parent
+ * socket until after the SYN-ACK packet is sent; the only
+ * viable choice is to regenerate the label like we do in
+ * selinux_inet_conn_request(). See also selinux_ip_output()
+ * for similar problems. */
+ u32 skb_sid;
+ struct sk_security_struct *sksec = sk->sk_security;
+ if (selinux_skb_peerlbl_sid(skb, family, &skb_sid))
+ return NF_DROP;
+ /* At this point, if the returned skb peerlbl is SECSID_NULL
+ * and the packet has been through at least one XFRM
+ * transformation then we must be dealing with the "final"
+ * form of labeled IPsec packet; since we've already applied
+ * all of our access controls on this packet we can safely
+ * pass the packet. */
+ if (skb_sid == SECSID_NULL) {
+ switch (family) {
+ case PF_INET:
+ if (IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
+ return NF_ACCEPT;
+ break;
+			case PF_INET6:
+				if (IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
+					return NF_ACCEPT;
+				break;
+			default:
+				return NF_DROP_ERR(-ECONNREFUSED);
+ }
+ }
+ if (selinux_conn_sid(sksec->sid, skb_sid, &peer_sid))
+ return NF_DROP;
+ secmark_perm = PACKET__SEND;
} else {
+ /* Locally generated packet, fetch the security label from the
+ * associated socket. */
struct sk_security_struct *sksec = sk->sk_security;
peer_sid = sksec->sid;
secmark_perm = PACKET__SEND;
/* Check for ptracing, and update the task SID if ok.
Otherwise, leave SID unchanged and fail. */
ptsid = 0;
- task_lock(p);
+ rcu_read_lock();
tracer = ptrace_parent(p);
if (tracer)
ptsid = task_sid(tracer);
- task_unlock(p);
+ rcu_read_unlock();
if (tracer) {
error = avc_has_perm(ptsid, sid, SECCLASS_PROCESS,
.xfrm_policy_clone_security = selinux_xfrm_policy_clone,
.xfrm_policy_free_security = selinux_xfrm_policy_free,
.xfrm_policy_delete_security = selinux_xfrm_policy_delete,
- .xfrm_state_alloc_security = selinux_xfrm_state_alloc,
+ .xfrm_state_alloc = selinux_xfrm_state_alloc,
+ .xfrm_state_alloc_acquire = selinux_xfrm_state_alloc_acquire,
.xfrm_state_free_security = selinux_xfrm_state_free,
.xfrm_state_delete_security = selinux_xfrm_state_delete,
.xfrm_policy_lookup = selinux_xfrm_policy_lookup,
u32 sid; /* SID of file system superblock */
u32 def_sid; /* default SID for labeling */
u32 mntpoint_sid; /* SECURITY_FS_USE_MNTPOINT context for files */
- unsigned int behavior; /* labeling behavior */
- unsigned char flags; /* which mount options were specified */
+ unsigned short behavior; /* labeling behavior */
+ unsigned short flags; /* which mount options were specified */
struct mutex lock;
struct list_head isec_head;
spinlock_t isec_lock;
/* Mask for just the mount related flags */
#define SE_MNTMASK 0x0f
/* Super block security struct flags for mount options */
+/* BE CAREFUL, these need to be the low order bits for selinux_get_mnt_opts */
#define CONTEXT_MNT 0x01
#define FSCONTEXT_MNT 0x02
#define ROOTCONTEXT_MNT 0x04
#define DEFCONTEXT_MNT 0x08
+#define SBLABEL_MNT 0x10
/* Non-mount related flags */
-#define SE_SBINITIALIZED 0x10
-#define SE_SBPROC 0x20
-#define SE_SBLABELSUPP 0x40
+#define SE_SBINITIALIZED 0x0100
+#define SE_SBPROC 0x0200
#define CONTEXT_STR "context="
#define FSCONTEXT_STR "fscontext="
enum {
POLICYDB_CAPABILITY_NETPEER,
POLICYDB_CAPABILITY_OPENPERM,
+ POLICYDB_CAPABILITY_REDHAT1,
+ POLICYDB_CAPABILITY_ALWAYSNETWORK,
__POLICYDB_CAPABILITY_MAX
};
#define POLICYDB_CAPABILITY_MAX (__POLICYDB_CAPABILITY_MAX - 1)
extern int selinux_policycap_netpeer;
extern int selinux_policycap_openperm;
+extern int selinux_policycap_alwaysnetwork;
/*
* type_datum properties
#define SECURITY_FS_USE_NATIVE 7 /* use native label support */
#define SECURITY_FS_USE_MAX 7 /* Highest SECURITY_FS_USE_XXX */
-int security_fs_use(const char *fstype, unsigned int *behavior,
- u32 *sid);
+int security_fs_use(struct super_block *sb);
int security_genfs_sid(const char *fstype, char *name, u16 sclass,
u32 *sid);
#include <net/flow.h>
int selinux_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp,
- struct xfrm_user_sec_ctx *sec_ctx);
+ struct xfrm_user_sec_ctx *uctx);
int selinux_xfrm_policy_clone(struct xfrm_sec_ctx *old_ctx,
struct xfrm_sec_ctx **new_ctxp);
void selinux_xfrm_policy_free(struct xfrm_sec_ctx *ctx);
int selinux_xfrm_policy_delete(struct xfrm_sec_ctx *ctx);
int selinux_xfrm_state_alloc(struct xfrm_state *x,
- struct xfrm_user_sec_ctx *sec_ctx, u32 secid);
+ struct xfrm_user_sec_ctx *uctx);
+int selinux_xfrm_state_alloc_acquire(struct xfrm_state *x,
+ struct xfrm_sec_ctx *polsec, u32 secid);
void selinux_xfrm_state_free(struct xfrm_state *x);
int selinux_xfrm_state_delete(struct xfrm_state *x);
int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir);
int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x,
- struct xfrm_policy *xp, const struct flowi *fl);
-
-/*
- * Extract the security blob from the sock (it's actually on the socket)
- */
-static inline struct inode_security_struct *get_sock_isec(struct sock *sk)
-{
- if (!sk->sk_socket)
- return NULL;
-
- return SOCK_INODE(sk->sk_socket)->i_security;
-}
+ struct xfrm_policy *xp,
+ const struct flowi *fl);
#ifdef CONFIG_SECURITY_NETWORK_XFRM
extern atomic_t selinux_xfrm_refcount;
return (atomic_read(&selinux_xfrm_refcount) > 0);
}
-int selinux_xfrm_sock_rcv_skb(u32 sid, struct sk_buff *skb,
- struct common_audit_data *ad);
-int selinux_xfrm_postroute_last(u32 isec_sid, struct sk_buff *skb,
- struct common_audit_data *ad, u8 proto);
+int selinux_xfrm_sock_rcv_skb(u32 sk_sid, struct sk_buff *skb,
+ struct common_audit_data *ad);
+int selinux_xfrm_postroute_last(u32 sk_sid, struct sk_buff *skb,
+ struct common_audit_data *ad, u8 proto);
int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall);
+int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid);
static inline void selinux_xfrm_notify_policyload(void)
{
return 0;
}
-static inline int selinux_xfrm_sock_rcv_skb(u32 isec_sid, struct sk_buff *skb,
- struct common_audit_data *ad)
+static inline int selinux_xfrm_sock_rcv_skb(u32 sk_sid, struct sk_buff *skb,
+ struct common_audit_data *ad)
{
return 0;
}
-static inline int selinux_xfrm_postroute_last(u32 isec_sid, struct sk_buff *skb,
- struct common_audit_data *ad, u8 proto)
+static inline int selinux_xfrm_postroute_last(u32 sk_sid, struct sk_buff *skb,
+ struct common_audit_data *ad,
+ u8 proto)
{
return 0;
}
-static inline int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall)
+static inline int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid,
+ int ckall)
{
*sid = SECSID_NULL;
return 0;
static inline void selinux_xfrm_notify_policyload(void)
{
}
-#endif
-static inline void selinux_skb_xfrm_sid(struct sk_buff *skb, u32 *sid)
+static inline int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid)
{
- int err = selinux_xfrm_decode_session(skb, sid, 0);
- BUG_ON(err);
+ *sid = SECSID_NULL;
+ return 0;
}
+#endif
#endif /* _SELINUX_XFRM_H_ */
sksec->nlbl_state != NLBL_CONNLABELED)
return 0;
- local_bh_disable();
- bh_lock_sock_nested(sk);
+ lock_sock(sk);
/* connected sockets are allowed to disconnect when the address family
* is set to AF_UNSPEC, if that is what is happening we want to reset
sksec->nlbl_state = NLBL_CONNLABELED;
socket_connect_return:
- bh_unlock_sock(sk);
- local_bh_enable();
+ release_sock(sk);
return rc;
}
break;
default:
BUG();
+ return;
}
/* we need to impose a limit on the growth of the hash table so check
break;
default:
BUG();
+ ret = -EINVAL;
}
if (ret != 0)
goto out;
{ AUDIT_MAKE_EQUIV, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
{ AUDIT_TTY_GET, NETLINK_AUDIT_SOCKET__NLMSG_READ },
{ AUDIT_TTY_SET, NETLINK_AUDIT_SOCKET__NLMSG_TTY_AUDIT },
+ { AUDIT_GET_FEATURE, NETLINK_AUDIT_SOCKET__NLMSG_READ },
+ { AUDIT_SET_FEATURE, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
};
/* Policy capability filenames */
static char *policycap_names[] = {
"network_peer_controls",
- "open_perms"
+ "open_perms",
+ "redhat1",
+ "always_check_network"
};
unsigned int selinux_checkreqprot = CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE;
}
#endif /* CONFIG_NETLABEL */
-int ebitmap_contains(struct ebitmap *e1, struct ebitmap *e2)
+/*
+ * Check to see if all the bits set in e2 are also set in e1. Optionally,
+ * if last_e2bit is non-zero, the highest set bit in e2 cannot exceed
+ * last_e2bit.
+ */
+int ebitmap_contains(struct ebitmap *e1, struct ebitmap *e2, u32 last_e2bit)
{
struct ebitmap_node *n1, *n2;
int i;
n1 = e1->node;
n2 = e2->node;
+
while (n1 && n2 && (n1->startbit <= n2->startbit)) {
if (n1->startbit < n2->startbit) {
n1 = n1->next;
continue;
}
- for (i = 0; i < EBITMAP_UNIT_NUMS; i++) {
+ for (i = EBITMAP_UNIT_NUMS - 1; (i >= 0) && !n2->maps[i]; )
+ i--; /* Skip trailing NULL map entries */
+ if (last_e2bit && (i >= 0)) {
+ u32 lastsetbit = n2->startbit + i * EBITMAP_UNIT_SIZE +
+ __fls(n2->maps[i]);
+ if (lastsetbit > last_e2bit)
+ return 0;
+ }
+
+ while (i >= 0) {
if ((n1->maps[i] & n2->maps[i]) != n2->maps[i])
return 0;
+ i--;
}
n1 = n1->next;
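
The intended semantics are easiest to see on plain bitmasks (a sketch
only; struct ebitmap spreads these bits across the node list walked
above):

/*
 *	e1 = {0, 1, 5}, e2 = {0, 5}                  -> contained, returns 1
 *	e1 = {0, 1, 5}, e2 = {0, 2}                  -> bit 2 missing, returns 0
 *	e1 = {0, 1, 5}, e2 = {0, 5}, last_e2bit = 4  -> returns 0: the highest
 *	                                set bit in e2 (5) exceeds last_e2bit
 */
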
#include <net/netlabel.h>
-#define EBITMAP_UNIT_NUMS ((32 - sizeof(void *) - sizeof(u32)) \
+#ifdef CONFIG_64BIT
+#define EBITMAP_NODE_SIZE 64
+#else
+#define EBITMAP_NODE_SIZE 32
+#endif
+
+#define EBITMAP_UNIT_NUMS ((EBITMAP_NODE_SIZE-sizeof(void *)-sizeof(u32))\
/ sizeof(unsigned long))
#define EBITMAP_UNIT_SIZE BITS_PER_LONG
#define EBITMAP_SIZE (EBITMAP_UNIT_NUMS * EBITMAP_UNIT_SIZE)
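
For concreteness, the resulting node geometry (assuming 8-byte pointers
and longs on 64-bit, 4-byte on 32-bit):

/*
 * 64-bit: EBITMAP_UNIT_NUMS = (64 - 8 - 4) / 8 = 6 units of 64 bits
 *	   -> EBITMAP_SIZE = 384 bits per node, sized to a 64-byte line
 * 32-bit: EBITMAP_UNIT_NUMS = (32 - 4 - 4) / 4 = 6 units of 32 bits
 *	   -> EBITMAP_SIZE = 192 bits per node
 */
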
int ebitmap_cmp(struct ebitmap *e1, struct ebitmap *e2);
int ebitmap_cpy(struct ebitmap *dst, struct ebitmap *src);
-int ebitmap_contains(struct ebitmap *e1, struct ebitmap *e2);
+int ebitmap_contains(struct ebitmap *e1, struct ebitmap *e2, u32 last_e2bit);
int ebitmap_get_bit(struct ebitmap *e, unsigned long bit);
int ebitmap_set_bit(struct ebitmap *e, unsigned long bit, int value);
void ebitmap_destroy(struct ebitmap *e);
int mls_level_isvalid(struct policydb *p, struct mls_level *l)
{
struct level_datum *levdatum;
- struct ebitmap_node *node;
- int i;
if (!l->sens || l->sens > p->p_levels.nprim)
return 0;
if (!levdatum)
return 0;
- ebitmap_for_each_positive_bit(&l->cat, node, i) {
- if (i > p->p_cats.nprim)
- return 0;
- if (!ebitmap_get_bit(&levdatum->level->cat, i)) {
- /*
- * Category may not be associated with
- * sensitivity.
- */
- return 0;
- }
- }
-
- return 1;
+ /*
+	 * Return 1 iff all the bits set in l->cat are also set in
+ * levdatum->level->cat and no bit in l->cat is larger than
+ * p->p_cats.nprim.
+ */
+ return ebitmap_contains(&levdatum->level->cat, &l->cat,
+ p->p_cats.nprim);
}
int mls_range_isvalid(struct policydb *p, struct mls_range *r)
static inline int mls_level_dom(struct mls_level *l1, struct mls_level *l2)
{
return ((l1->sens >= l2->sens) &&
- ebitmap_contains(&l1->cat, &l2->cat));
+ ebitmap_contains(&l1->cat, &l2->cat, 0));
}
#define mls_level_incomp(l1, l2) \
static int range_write(struct policydb *p, void *fp)
{
- size_t nel;
__le32 buf[1];
- int rc;
+ int rc, nel;
struct policy_data pd;
pd.p = p;
int selinux_policycap_netpeer;
int selinux_policycap_openperm;
+int selinux_policycap_alwaysnetwork;
static DEFINE_RWLOCK(policy_rwlock);
POLICYDB_CAPABILITY_NETPEER);
selinux_policycap_openperm = ebitmap_get_bit(&policydb.policycaps,
POLICYDB_CAPABILITY_OPENPERM);
+ selinux_policycap_alwaysnetwork = ebitmap_get_bit(&policydb.policycaps,
+ POLICYDB_CAPABILITY_ALWAYSNETWORK);
}
static int security_preserve_bools(struct policydb *p);
/**
* security_fs_use - Determine how to handle labeling for a filesystem.
- * @fstype: filesystem type
- * @behavior: labeling behavior
- * @sid: SID for filesystem (superblock)
+ * @sb: superblock in question
*/
-int security_fs_use(
- const char *fstype,
- unsigned int *behavior,
- u32 *sid)
+int security_fs_use(struct super_block *sb)
{
int rc = 0;
struct ocontext *c;
+ struct superblock_security_struct *sbsec = sb->s_security;
+ const char *fstype = sb->s_type->name;
read_lock(&policy_rwlock);
}
if (c) {
- *behavior = c->v.behavior;
+ sbsec->behavior = c->v.behavior;
if (!c->sid[0]) {
rc = sidtab_context_to_sid(&sidtab, &c->context[0],
&c->sid[0]);
if (rc)
goto out;
}
- *sid = c->sid[0];
+ sbsec->sid = c->sid[0];
} else {
- rc = security_genfs_sid(fstype, "/", SECCLASS_DIR, sid);
+ rc = security_genfs_sid(fstype, "/", SECCLASS_DIR, &sbsec->sid);
if (rc) {
- *behavior = SECURITY_FS_USE_NONE;
+ sbsec->behavior = SECURITY_FS_USE_NONE;
rc = 0;
} else {
- *behavior = SECURITY_FS_USE_GENFS;
+ sbsec->behavior = SECURITY_FS_USE_GENFS;
}
}
atomic_t selinux_xfrm_refcount = ATOMIC_INIT(0);
/*
- * Returns true if an LSM/SELinux context
+ * Returns true if the context is an LSM/SELinux context.
*/
static inline int selinux_authorizable_ctx(struct xfrm_sec_ctx *ctx)
{
}
/*
- * Returns true if the xfrm contains a security blob for SELinux
+ * Returns true if the xfrm contains a security blob for SELinux.
*/
static inline int selinux_authorizable_xfrm(struct xfrm_state *x)
{
}
/*
- * LSM hook implementation that authorizes that a flow can use
- * a xfrm policy rule.
+ * Allocates an xfrm_sec_ctx and populates it from the supplied
+ * xfrm_user_sec_ctx security context.
*/
-int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir)
+static int selinux_xfrm_alloc_user(struct xfrm_sec_ctx **ctxp,
+ struct xfrm_user_sec_ctx *uctx)
{
int rc;
- u32 sel_sid;
+ const struct task_security_struct *tsec = current_security();
+ struct xfrm_sec_ctx *ctx = NULL;
+ u32 str_len;
- /* Context sid is either set to label or ANY_ASSOC */
- if (ctx) {
- if (!selinux_authorizable_ctx(ctx))
- return -EINVAL;
-
- sel_sid = ctx->ctx_sid;
- } else
- /*
- * All flows should be treated as polmatch'ing an
- * otherwise applicable "non-labeled" policy. This
- * would prevent inadvertent "leaks".
- */
- return 0;
+ if (ctxp == NULL || uctx == NULL ||
+ uctx->ctx_doi != XFRM_SC_DOI_LSM ||
+ uctx->ctx_alg != XFRM_SC_ALG_SELINUX)
+ return -EINVAL;
+
+ str_len = uctx->ctx_len;
+ if (str_len >= PAGE_SIZE)
+ return -ENOMEM;
+
+ ctx = kmalloc(sizeof(*ctx) + str_len + 1, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
- rc = avc_has_perm(fl_secid, sel_sid, SECCLASS_ASSOCIATION,
- ASSOCIATION__POLMATCH,
- NULL);
+ ctx->ctx_doi = XFRM_SC_DOI_LSM;
+ ctx->ctx_alg = XFRM_SC_ALG_SELINUX;
+ ctx->ctx_len = str_len;
+ memcpy(ctx->ctx_str, &uctx[1], str_len);
+ ctx->ctx_str[str_len] = '\0';
+ rc = security_context_to_sid(ctx->ctx_str, str_len, &ctx->ctx_sid);
+ if (rc)
+ goto err;
- if (rc == -EACCES)
- return -ESRCH;
+ rc = avc_has_perm(tsec->sid, ctx->ctx_sid,
+ SECCLASS_ASSOCIATION, ASSOCIATION__SETCONTEXT, NULL);
+ if (rc)
+ goto err;
+ *ctxp = ctx;
+ atomic_inc(&selinux_xfrm_refcount);
+ return 0;
+
+err:
+ kfree(ctx);
return rc;
}
+/*
+ * Free the xfrm_sec_ctx structure.
+ */
+static void selinux_xfrm_free(struct xfrm_sec_ctx *ctx)
+{
+ if (!ctx)
+ return;
+
+ atomic_dec(&selinux_xfrm_refcount);
+ kfree(ctx);
+}
+
+/*
+ * Authorize the deletion of a labeled SA or policy rule.
+ */
+static int selinux_xfrm_delete(struct xfrm_sec_ctx *ctx)
+{
+ const struct task_security_struct *tsec = current_security();
+
+ if (!ctx)
+ return 0;
+
+ return avc_has_perm(tsec->sid, ctx->ctx_sid,
+ SECCLASS_ASSOCIATION, ASSOCIATION__SETCONTEXT,
+ NULL);
+}
+
+/*
+ * LSM hook implementation that authorizes a flow to use an xfrm policy
+ * rule.
+ */
+int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir)
+{
+ int rc;
+
+ /* All flows should be treated as polmatch'ing an otherwise applicable
+ * "non-labeled" policy. This would prevent inadvertent "leaks". */
+ if (!ctx)
+ return 0;
+
+ /* Context sid is either set to label or ANY_ASSOC */
+ if (!selinux_authorizable_ctx(ctx))
+ return -EINVAL;
+
+ rc = avc_has_perm(fl_secid, ctx->ctx_sid,
+ SECCLASS_ASSOCIATION, ASSOCIATION__POLMATCH, NULL);
+ return (rc == -EACCES ? -ESRCH : rc);
+}
+
/*
* LSM hook implementation that authorizes that a state matches
* the given policy, flow combo.
*/
-
-int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x, struct xfrm_policy *xp,
- const struct flowi *fl)
+int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x,
+ struct xfrm_policy *xp,
+ const struct flowi *fl)
{
u32 state_sid;
- int rc;
if (!xp->security)
if (x->security)
if (fl->flowi_secid != state_sid)
return 0;
- rc = avc_has_perm(fl->flowi_secid, state_sid, SECCLASS_ASSOCIATION,
- ASSOCIATION__SENDTO,
- NULL)? 0:1;
-
- /*
- * We don't need a separate SA Vs. policy polmatch check
- * since the SA is now of the same label as the flow and
- * a flow Vs. policy polmatch check had already happened
- * in selinux_xfrm_policy_lookup() above.
- */
-
- return rc;
+	/* We don't need a separate SA vs. policy polmatch check since the SA
+	 * is now of the same label as the flow and a flow vs. policy polmatch
+	 * check had already happened in selinux_xfrm_policy_lookup() above. */
+ return (avc_has_perm(fl->flowi_secid, state_sid,
+ SECCLASS_ASSOCIATION, ASSOCIATION__SENDTO,
+ NULL) ? 0 : 1);
}
-/*
- * LSM hook implementation that checks and/or returns the xfrm sid for the
- * incoming packet.
- */
-
-int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall)
+static u32 selinux_xfrm_skb_sid_egress(struct sk_buff *skb)
{
- struct sec_path *sp;
+ struct dst_entry *dst = skb_dst(skb);
+ struct xfrm_state *x;
- *sid = SECSID_NULL;
+ if (dst == NULL)
+ return SECSID_NULL;
+ x = dst->xfrm;
+ if (x == NULL || !selinux_authorizable_xfrm(x))
+ return SECSID_NULL;
- if (skb == NULL)
- return 0;
+ return x->security->ctx_sid;
+}
+
+static int selinux_xfrm_skb_sid_ingress(struct sk_buff *skb,
+ u32 *sid, int ckall)
+{
+ u32 sid_session = SECSID_NULL;
+ struct sec_path *sp = skb->sp;
- sp = skb->sp;
if (sp) {
- int i, sid_set = 0;
+ int i;
- for (i = sp->len-1; i >= 0; i--) {
+ for (i = sp->len - 1; i >= 0; i--) {
struct xfrm_state *x = sp->xvec[i];
if (selinux_authorizable_xfrm(x)) {
struct xfrm_sec_ctx *ctx = x->security;
- if (!sid_set) {
- *sid = ctx->ctx_sid;
- sid_set = 1;
-
+ if (sid_session == SECSID_NULL) {
+ sid_session = ctx->ctx_sid;
if (!ckall)
- break;
- } else if (*sid != ctx->ctx_sid)
+ goto out;
+ } else if (sid_session != ctx->ctx_sid) {
+ *sid = SECSID_NULL;
return -EINVAL;
+ }
}
}
}
+out:
+ *sid = sid_session;
return 0;
}
/*
- * Security blob allocation for xfrm_policy and xfrm_state
- * CTX does not have a meaningful value on input
+ * LSM hook implementation that checks and/or returns the xfrm sid for the
+ * incoming packet.
*/
-static int selinux_xfrm_sec_ctx_alloc(struct xfrm_sec_ctx **ctxp,
- struct xfrm_user_sec_ctx *uctx, u32 sid)
+int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall)
{
- int rc = 0;
- const struct task_security_struct *tsec = current_security();
- struct xfrm_sec_ctx *ctx = NULL;
- char *ctx_str = NULL;
- u32 str_len;
-
- BUG_ON(uctx && sid);
-
- if (!uctx)
- goto not_from_user;
-
- if (uctx->ctx_alg != XFRM_SC_ALG_SELINUX)
- return -EINVAL;
-
- str_len = uctx->ctx_len;
- if (str_len >= PAGE_SIZE)
- return -ENOMEM;
-
- *ctxp = ctx = kmalloc(sizeof(*ctx) +
- str_len + 1,
- GFP_KERNEL);
-
- if (!ctx)
- return -ENOMEM;
-
- ctx->ctx_doi = uctx->ctx_doi;
- ctx->ctx_len = str_len;
- ctx->ctx_alg = uctx->ctx_alg;
-
- memcpy(ctx->ctx_str,
- uctx+1,
- str_len);
- ctx->ctx_str[str_len] = 0;
- rc = security_context_to_sid(ctx->ctx_str,
- str_len,
- &ctx->ctx_sid);
-
- if (rc)
- goto out;
-
- /*
- * Does the subject have permission to set security context?
- */
- rc = avc_has_perm(tsec->sid, ctx->ctx_sid,
- SECCLASS_ASSOCIATION,
- ASSOCIATION__SETCONTEXT, NULL);
- if (rc)
- goto out;
-
- return rc;
-
-not_from_user:
- rc = security_sid_to_context(sid, &ctx_str, &str_len);
- if (rc)
- goto out;
-
- *ctxp = ctx = kmalloc(sizeof(*ctx) +
- str_len,
- GFP_ATOMIC);
-
- if (!ctx) {
- rc = -ENOMEM;
- goto out;
+ if (skb == NULL) {
+ *sid = SECSID_NULL;
+ return 0;
}
+ return selinux_xfrm_skb_sid_ingress(skb, sid, ckall);
+}
- ctx->ctx_doi = XFRM_SC_DOI_LSM;
- ctx->ctx_alg = XFRM_SC_ALG_SELINUX;
- ctx->ctx_sid = sid;
- ctx->ctx_len = str_len;
- memcpy(ctx->ctx_str,
- ctx_str,
- str_len);
+int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid)
+{
+ int rc;
- goto out2;
+ rc = selinux_xfrm_skb_sid_ingress(skb, sid, 0);
+ if (rc == 0 && *sid == SECSID_NULL)
+ *sid = selinux_xfrm_skb_sid_egress(skb);
-out:
- *ctxp = NULL;
- kfree(ctx);
-out2:
- kfree(ctx_str);
return rc;
}
/*
- * LSM hook implementation that allocs and transfers uctx spec to
- * xfrm_policy.
+ * LSM hook implementation that allocs and transfers uctx spec to xfrm_policy.
*/
int selinux_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp,
struct xfrm_user_sec_ctx *uctx)
{
- int err;
-
- BUG_ON(!uctx);
-
- err = selinux_xfrm_sec_ctx_alloc(ctxp, uctx, 0);
- if (err == 0)
- atomic_inc(&selinux_xfrm_refcount);
-
- return err;
+ return selinux_xfrm_alloc_user(ctxp, uctx);
}
-
/*
- * LSM hook implementation that copies security data structure from old to
- * new for policy cloning.
+ * LSM hook implementation that copies security data structure from old to new
+ * for policy cloning.
*/
int selinux_xfrm_policy_clone(struct xfrm_sec_ctx *old_ctx,
struct xfrm_sec_ctx **new_ctxp)
{
struct xfrm_sec_ctx *new_ctx;
- if (old_ctx) {
- new_ctx = kmalloc(sizeof(*old_ctx) + old_ctx->ctx_len,
- GFP_ATOMIC);
- if (!new_ctx)
- return -ENOMEM;
+ if (!old_ctx)
+ return 0;
+
+ new_ctx = kmemdup(old_ctx, sizeof(*old_ctx) + old_ctx->ctx_len,
+ GFP_ATOMIC);
+ if (!new_ctx)
+ return -ENOMEM;
+ atomic_inc(&selinux_xfrm_refcount);
+ *new_ctxp = new_ctx;
- memcpy(new_ctx, old_ctx, sizeof(*new_ctx));
- memcpy(new_ctx->ctx_str, old_ctx->ctx_str, new_ctx->ctx_len);
- atomic_inc(&selinux_xfrm_refcount);
- *new_ctxp = new_ctx;
- }
return 0;
}
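
kmemdup() above replaces the old kmalloc()+memcpy() pair. A minimal equivalence sketch, for reference:

	/* kmemdup(src, len, gfp) behaves like: */
	void *kmemdup_sketch(const void *src, size_t len, gfp_t gfp)
	{
		void *p = kmalloc(len, gfp);

		if (p)
			memcpy(p, src, len);
		return p;
	}
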
*/
void selinux_xfrm_policy_free(struct xfrm_sec_ctx *ctx)
{
- atomic_dec(&selinux_xfrm_refcount);
- kfree(ctx);
+ selinux_xfrm_free(ctx);
}
/*
*/
int selinux_xfrm_policy_delete(struct xfrm_sec_ctx *ctx)
{
- const struct task_security_struct *tsec = current_security();
-
- if (!ctx)
- return 0;
+ return selinux_xfrm_delete(ctx);
+}
- return avc_has_perm(tsec->sid, ctx->ctx_sid,
- SECCLASS_ASSOCIATION, ASSOCIATION__SETCONTEXT,
- NULL);
+/*
+ * LSM hook implementation that allocates an xfrm_sec_ctx, populates it using
+ * the supplied security context, and assigns it to the xfrm_state.
+ */
+int selinux_xfrm_state_alloc(struct xfrm_state *x,
+ struct xfrm_user_sec_ctx *uctx)
+{
+ return selinux_xfrm_alloc_user(&x->security, uctx);
}
/*
- * LSM hook implementation that allocs and transfers sec_ctx spec to
- * xfrm_state.
+ * LSM hook implementation that allocates an xfrm_sec_ctx and populates it
+ * based on a secid.
*/
-int selinux_xfrm_state_alloc(struct xfrm_state *x, struct xfrm_user_sec_ctx *uctx,
- u32 secid)
+int selinux_xfrm_state_alloc_acquire(struct xfrm_state *x,
+ struct xfrm_sec_ctx *polsec, u32 secid)
{
- int err;
+ int rc;
+ struct xfrm_sec_ctx *ctx;
+ char *ctx_str = NULL;
+ int str_len;
+
+ if (!polsec)
+ return 0;
+
+ if (secid == 0)
+ return -EINVAL;
+
+ rc = security_sid_to_context(secid, &ctx_str, &str_len);
+ if (rc)
+ return rc;
- BUG_ON(!x);
+ ctx = kmalloc(sizeof(*ctx) + str_len, GFP_ATOMIC);
+ if (!ctx) {
+ rc = -ENOMEM;
+ goto out;
+ }
- err = selinux_xfrm_sec_ctx_alloc(&x->security, uctx, secid);
- if (err == 0)
- atomic_inc(&selinux_xfrm_refcount);
- return err;
+ ctx->ctx_doi = XFRM_SC_DOI_LSM;
+ ctx->ctx_alg = XFRM_SC_ALG_SELINUX;
+ ctx->ctx_sid = secid;
+ ctx->ctx_len = str_len;
+ memcpy(ctx->ctx_str, ctx_str, str_len);
+
+ x->security = ctx;
+ atomic_inc(&selinux_xfrm_refcount);
+out:
+ kfree(ctx_str);
+ return rc;
}
/*
*/
void selinux_xfrm_state_free(struct xfrm_state *x)
{
- atomic_dec(&selinux_xfrm_refcount);
- kfree(x->security);
+ selinux_xfrm_free(x->security);
}
- /*
- * LSM hook implementation that authorizes deletion of labeled SAs.
- */
+/*
+ * LSM hook implementation that authorizes deletion of labeled SAs.
+ */
int selinux_xfrm_state_delete(struct xfrm_state *x)
{
- const struct task_security_struct *tsec = current_security();
- struct xfrm_sec_ctx *ctx = x->security;
-
- if (!ctx)
- return 0;
-
- return avc_has_perm(tsec->sid, ctx->ctx_sid,
- SECCLASS_ASSOCIATION, ASSOCIATION__SETCONTEXT,
- NULL);
+ return selinux_xfrm_delete(x->security);
}
/*
* we need to check for unlabelled access since this may not have
 * gone through the IPsec process.
*/
-int selinux_xfrm_sock_rcv_skb(u32 isec_sid, struct sk_buff *skb,
- struct common_audit_data *ad)
+int selinux_xfrm_sock_rcv_skb(u32 sk_sid, struct sk_buff *skb,
+ struct common_audit_data *ad)
{
- int i, rc = 0;
- struct sec_path *sp;
- u32 sel_sid = SECINITSID_UNLABELED;
-
- sp = skb->sp;
+ int i;
+ struct sec_path *sp = skb->sp;
+ u32 peer_sid = SECINITSID_UNLABELED;
if (sp) {
for (i = 0; i < sp->len; i++) {
if (x && selinux_authorizable_xfrm(x)) {
struct xfrm_sec_ctx *ctx = x->security;
- sel_sid = ctx->ctx_sid;
+ peer_sid = ctx->ctx_sid;
break;
}
}
}
- /*
- * This check even when there's no association involved is
- * intended, according to Trent Jaeger, to make sure a
- * process can't engage in non-ipsec communication unless
- * explicitly allowed by policy.
- */
-
- rc = avc_has_perm(isec_sid, sel_sid, SECCLASS_ASSOCIATION,
- ASSOCIATION__RECVFROM, ad);
-
- return rc;
+ /* This check even when there's no association involved is intended,
+ * according to Trent Jaeger, to make sure a process can't engage in
+ * non-IPsec communication unless explicitly allowed by policy. */
+ return avc_has_perm(sk_sid, peer_sid,
+ SECCLASS_ASSOCIATION, ASSOCIATION__RECVFROM, ad);
}
/*
 * If we do have an authorizable security association, then it has already been
* checked in the selinux_xfrm_state_pol_flow_match hook above.
*/
-int selinux_xfrm_postroute_last(u32 isec_sid, struct sk_buff *skb,
- struct common_audit_data *ad, u8 proto)
+int selinux_xfrm_postroute_last(u32 sk_sid, struct sk_buff *skb,
+ struct common_audit_data *ad, u8 proto)
{
struct dst_entry *dst;
- int rc = 0;
-
- dst = skb_dst(skb);
-
- if (dst) {
- struct dst_entry *dst_test;
-
- for (dst_test = dst; dst_test != NULL;
- dst_test = dst_test->child) {
- struct xfrm_state *x = dst_test->xfrm;
-
- if (x && selinux_authorizable_xfrm(x))
- goto out;
- }
- }
switch (proto) {
case IPPROTO_AH:
case IPPROTO_ESP:
case IPPROTO_COMP:
- /*
- * We should have already seen this packet once before
- * it underwent xfrm(s). No need to subject it to the
- * unlabeled check.
- */
- goto out;
+ /* We should have already seen this packet once before it
+ * underwent xfrm(s). No need to subject it to the unlabeled
+ * check. */
+ return 0;
default:
break;
}
- /*
- * This check even when there's no association involved is
- * intended, according to Trent Jaeger, to make sure a
- * process can't engage in non-ipsec communication unless
- * explicitly allowed by policy.
- */
+ dst = skb_dst(skb);
+ if (dst) {
+ struct dst_entry *iter;
- rc = avc_has_perm(isec_sid, SECINITSID_UNLABELED, SECCLASS_ASSOCIATION,
- ASSOCIATION__SENDTO, ad);
-out:
- return rc;
+ for (iter = dst; iter != NULL; iter = iter->child) {
+ struct xfrm_state *x = iter->xfrm;
+
+ if (x && selinux_authorizable_xfrm(x))
+ return 0;
+ }
+ }
+
+ /* This check even when there's no association involved is intended,
+ * according to Trent Jaeger, to make sure a process can't engage in
+ * non-IPsec communication unless explicitly allowed by policy. */
+ return avc_has_perm(sk_sid, SECINITSID_UNLABELED,
+ SECCLASS_ASSOCIATION, ASSOCIATION__SENDTO, ad);
}
#define SMACK_CIPSO_MAXCATNUM 184 /* 23 * 8 */
/*
- * Flag for transmute access
+ * Flags for non-traditional access modes.
+ * Avoiding conflicts with the definitions in fs.h shouldn't be
+ * necessary, but do so anyway.
*/
-#define MAY_TRANSMUTE 64
+#define MAY_TRANSMUTE 0x00001000 /* Controls directory labeling */
+#define MAY_LOCK 0x00002000 /* Locks should be writes, but ... */
+
/*
* Just to make the common cases easier to deal with
*/
#define MAY_NOT 0
/*
- * Number of access types used by Smack (rwxat)
+ * Number of access types used by Smack (rwxatl)
*/
-#define SMK_NUM_ACCESS_TYPE 5
+#define SMK_NUM_ACCESS_TYPE 6
/* SMACK data */
struct smack_audit_data {
*
* Do the object check first because that is more
* likely to differ.
+ *
+ * Allowing write access implies allowing locking.
*/
int smk_access_entry(char *subject_label, char *object_label,
struct list_head *rule_list)
}
}
+ /*
+ * MAY_WRITE implies MAY_LOCK.
+ */
+ if ((may & MAY_WRITE) == MAY_WRITE)
+ may |= MAY_LOCK;
return may;
}
static inline void smack_str_from_perm(char *string, int access)
{
int i = 0;
+
if (access & MAY_READ)
string[i++] = 'r';
if (access & MAY_WRITE)
string[i++] = 'a';
if (access & MAY_TRANSMUTE)
string[i++] = 't';
+ if (access & MAY_LOCK)
+ string[i++] = 'l';
string[i] = '\0';
}
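
A hedged usage sketch for the new 'l' access flag; the buffer sizing assumes the SMK_NUM_ACCESS_TYPE definition above:

	char buf[SMK_NUM_ACCESS_TYPE + 1];	/* "rwxatl" plus NUL */

	/* an explicit lock grant prints as "rl" ... */
	smack_str_from_perm(buf, MAY_READ | MAY_LOCK);

	/* ... and since smk_access_entry() folds MAY_WRITE into MAY_LOCK,
	 * a stored "rw" rule grants locking even though no 'l' was set */
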
/**
smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK);
smk_ad_setfield_u_tsk(&ad, ctp);
- rc = smk_curacc(skp->smk_known, MAY_READWRITE, &ad);
+ rc = smk_curacc(skp->smk_known, mode, &ad);
return rc;
}
* @file: the object
* @cmd: unused
*
- * Returns 0 if current has write access, error code otherwise
+ * Returns 0 if current has lock access, error code otherwise
*/
static int smack_file_lock(struct file *file, unsigned int cmd)
{
smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_PATH);
smk_ad_setfield_u_fs_path(&ad, file->f_path);
- return smk_curacc(file->f_security, MAY_WRITE, &ad);
+ return smk_curacc(file->f_security, MAY_LOCK, &ad);
}
/**
switch (cmd) {
case F_GETLK:
+ break;
case F_SETLK:
case F_SETLKW:
+ smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_PATH);
+ smk_ad_setfield_u_fs_path(&ad, file->f_path);
+ rc = smk_curacc(file->f_security, MAY_LOCK, &ad);
+ break;
case F_SETOWN:
case F_SETSIG:
smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_PATH);
* SMK_LOADLEN: Smack rule length
*/
#define SMK_OACCESS "rwxa"
-#define SMK_ACCESS "rwxat"
+#define SMK_ACCESS "rwxatl"
#define SMK_OACCESSLEN (sizeof(SMK_OACCESS) - 1)
#define SMK_ACCESSLEN (sizeof(SMK_ACCESS) - 1)
#define SMK_OLOADLEN (SMK_LABELLEN + SMK_LABELLEN + SMK_OACCESSLEN)
case 'T':
perm |= MAY_TRANSMUTE;
break;
+ case 'l':
+ case 'L':
+ perm |= MAY_LOCK;
+ break;
default:
return perm;
}
/*
* Minor hack for backward compatibility
*/
- if (count != SMK_OLOADLEN && count != SMK_LOADLEN)
+ if (count < SMK_OLOADLEN || count > SMK_LOADLEN)
return -EINVAL;
} else {
if (count >= PAGE_SIZE) {
seq_putc(s, 'a');
if (srp->smk_access & MAY_TRANSMUTE)
seq_putc(s, 't');
+ if (srp->smk_access & MAY_LOCK)
+ seq_putc(s, 'l');
seq_putc(s, '\n');
}
if (new_rate < 0)
break;
/* make sure we are below the ABDAC clock */
- if (new_rate <= clk_get_rate(dac->pclk)) {
+ if (index < MAX_NUM_RATES &&
+ new_rate <= clk_get_rate(dac->pclk)) {
dac->rates[index] = new_rate / 256;
index++;
}
case SNDRV_PCM_STATE_DISCONNECTED:
err = -EBADFD;
goto _endloop;
+ case SNDRV_PCM_STATE_PAUSED:
+ continue;
}
if (!tout) {
snd_printd("%s write error (DMA or IRQ trouble?)\n",
return;
index = s->packet_index;
+	/* this module generates an empty packet for 'no data' */
syt = calculate_syt(s, cycle);
- if (!(s->flags & CIP_BLOCKING)) {
+ if (!(s->flags & CIP_BLOCKING))
data_blocks = calculate_data_blocks(s);
- } else {
- if (syt != 0xffff) {
- data_blocks = s->syt_interval;
- } else {
- data_blocks = 0;
- syt = 0xffffff;
- }
- }
+ else if (syt != 0xffff)
+ data_blocks = s->syt_interval;
+ else
+ data_blocks = 0;
buffer = s->buffer.packets[index].buffer;
buffer[0] = cpu_to_be32(ACCESS_ONCE(s->source_node_id_field) |
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/mutex.h>
+#include <sound/asound.h>
#include "packets-buffer.h"
/**
if (dice_proc_read_mem(dice, &tx_rx_header, sections[2], 2) < 0)
return;
- quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.tx));
+ quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.tx) / 4);
for (stream = 0; stream < tx_rx_header.number; ++stream) {
if (dice_proc_read_mem(dice, &buf.tx, sections[2] + 2 +
stream * tx_rx_header.size,
if (dice_proc_read_mem(dice, &tx_rx_header, sections[4], 2) < 0)
return;
- quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.rx));
+ quadlets = min_t(u32, tx_rx_header.size, sizeof(buf.rx) / 4);
for (stream = 0; stream < tx_rx_header.number; ++stream) {
if (dice_proc_read_mem(dice, &buf.rx, sections[4] + 2 +
stream * tx_rx_header.size,
config SND_HDA_CODEC_CA0132_DSP
bool "Support new DSP code for CA0132 codec"
- depends on SND_HDA_CODEC_CA0132 && FW_LOADER
+ depends on SND_HDA_CODEC_CA0132
select SND_HDA_DSP_LOADER
+ select FW_LOADER
help
Say Y here to enable the DSP for Creative CA0132 for extended
features like equalizer or echo cancellation.
* in the resume / power-save sequence
*/
hda_keep_power_on(codec);
+ if (codec->pm_down_notified) {
+ codec->pm_down_notified = 0;
+ hda_call_pm_notify(codec->bus, true);
+ }
hda_set_power_state(codec, AC_PWRST_D0);
restore_shutup_pins(codec);
hda_exec_init_verbs(codec);
unsigned int in_reset:1; /* during reset operation */
unsigned int power_keep_link_on:1; /* don't power off HDA link */
unsigned int no_response_fallback:1; /* don't fallback at RIRB error */
- unsigned int avoid_link_reset:1; /* don't reset link at runtime PM */
int primary_dig_out_type; /* primary digital out PCM type */
};
memset(path, 0, sizeof(*path));
}
+/* return a DAC if paired to the given pin by the codec driver */
+static hda_nid_t get_preferred_dac(struct hda_codec *codec, hda_nid_t pin)
+{
+ struct hda_gen_spec *spec = codec->spec;
+ const hda_nid_t *list = spec->preferred_dacs;
+
+ if (!list)
+ return 0;
+ for (; *list; list += 2)
+ if (*list == pin)
+ return list[1];
+ return 0;
+}
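
A hedged sketch of the table this helper walks; a real instance appears in the AD1986A hunk later in this patch, and the NIDs below are purely illustrative:

	/* pairs of { pin NID, DAC NID }, terminated by a single zero;
	 * get_preferred_dac() steps through the list two NIDs at a time */
	static const hda_nid_t example_preferred_pairs[] = {
		0x1a, 0x03,	/* route pin 0x1a to DAC 0x03 */
		0x1d, 0x05,	/* route pin 0x1d to DAC 0x05 */
		0		/* terminator */
	};

	spec->gen.preferred_dacs = example_preferred_pairs;
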
+
/* look for an empty DAC slot */
static hda_nid_t look_for_dac(struct hda_codec *codec, hda_nid_t pin,
bool is_digital)
continue;
}
- dacs[i] = look_for_dac(codec, pin, false);
+ dacs[i] = get_preferred_dac(codec, pin);
+ if (dacs[i]) {
+ if (is_dac_already_used(codec, dacs[i]))
+ badness += bad->shared_primary;
+ }
+
+ if (!dacs[i])
+ dacs[i] = look_for_dac(codec, pin, false);
if (!dacs[i] && !i) {
/* try to steal the DAC of surrounds for the front */
for (j = 1; j < num_outs; j++) {
for (i = 0; i < num_pins; i++) {
hda_nid_t pin = pins[i];
- if (pin == spec->hp_mic_pin) {
- int ret = create_hp_mic_jack_mode(codec, pin);
- if (ret < 0)
- return ret;
+ if (pin == spec->hp_mic_pin)
continue;
- }
if (get_out_jack_num_items(codec, pin) > 1) {
struct snd_kcontrol_new *knew;
char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
val &= ~(AC_PINCTL_VREFEN | PIN_HP);
val |= get_vref_idx(vref_caps, idx) | PIN_IN;
} else
- val = snd_hda_get_default_vref(codec, nid);
+ val = snd_hda_get_default_vref(codec, nid) | PIN_IN;
}
snd_hda_set_pin_ctl_cache(codec, nid, val);
call_hp_automute(codec, NULL);
struct hda_gen_spec *spec = codec->spec;
struct snd_kcontrol_new *knew;
- if (get_out_jack_num_items(codec, pin) <= 1 &&
- get_in_jack_num_items(codec, pin) <= 1)
- return 0; /* no need */
knew = snd_hda_gen_add_kctl(spec, "Headphone Mic Jack Mode",
&hp_mic_jack_mode_enum);
if (!knew)
return 0;
}
+/* return true if either a volume or a mute amp is found for the given
+ * aamix path; the amp has to be either in the mixer node or its direct leaf
+ */
+static bool look_for_mix_leaf_ctls(struct hda_codec *codec, hda_nid_t mix_nid,
+ hda_nid_t pin, unsigned int *mix_val,
+ unsigned int *mute_val)
+{
+ int idx, num_conns;
+ const hda_nid_t *list;
+ hda_nid_t nid;
+
+ idx = snd_hda_get_conn_index(codec, mix_nid, pin, true);
+ if (idx < 0)
+ return false;
+
+ *mix_val = *mute_val = 0;
+ if (nid_has_volume(codec, mix_nid, HDA_INPUT))
+ *mix_val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);
+ if (nid_has_mute(codec, mix_nid, HDA_INPUT))
+ *mute_val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);
+ if (*mix_val && *mute_val)
+ return true;
+
+ /* check leaf node */
+ num_conns = snd_hda_get_conn_list(codec, mix_nid, &list);
+ if (num_conns < idx)
+ return false;
+ nid = list[idx];
+ if (!*mix_val && nid_has_volume(codec, nid, HDA_OUTPUT))
+ *mix_val = HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT);
+ if (!*mute_val && nid_has_mute(codec, nid, HDA_OUTPUT))
+ *mute_val = HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT);
+
+ return *mix_val || *mute_val;
+}
+
/* create input playback/capture controls for the given pin */
static int new_analog_input(struct hda_codec *codec, int input_idx,
hda_nid_t pin, const char *ctlname, int ctlidx,
{
struct hda_gen_spec *spec = codec->spec;
struct nid_path *path;
- unsigned int val;
+ unsigned int mix_val, mute_val;
int err, idx;
- if (!nid_has_volume(codec, mix_nid, HDA_INPUT) &&
- !nid_has_mute(codec, mix_nid, HDA_INPUT))
- return 0; /* no need for analog loopback */
+ if (!look_for_mix_leaf_ctls(codec, mix_nid, pin, &mix_val, &mute_val))
+ return 0;
path = snd_hda_add_new_path(codec, pin, mix_nid, 0);
if (!path)
spec->loopback_paths[input_idx] = snd_hda_get_path_idx(codec, path);
idx = path->idx[path->depth - 1];
- if (nid_has_volume(codec, mix_nid, HDA_INPUT)) {
- val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);
- err = __add_pb_vol_ctrl(spec, HDA_CTL_WIDGET_VOL, ctlname, ctlidx, val);
+ if (mix_val) {
+ err = __add_pb_vol_ctrl(spec, HDA_CTL_WIDGET_VOL, ctlname, ctlidx, mix_val);
if (err < 0)
return err;
- path->ctls[NID_PATH_VOL_CTL] = val;
+ path->ctls[NID_PATH_VOL_CTL] = mix_val;
}
- if (nid_has_mute(codec, mix_nid, HDA_INPUT)) {
- val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);
- err = __add_pb_sw_ctrl(spec, HDA_CTL_WIDGET_MUTE, ctlname, ctlidx, val);
+ if (mute_val) {
+ err = __add_pb_sw_ctrl(spec, HDA_CTL_WIDGET_MUTE, ctlname, ctlidx, mute_val);
if (err < 0)
return err;
- path->ctls[NID_PATH_MUTE_CTL] = val;
+ path->ctls[NID_PATH_MUTE_CTL] = mute_val;
}
path->active = true;
return AC_PWRST_D3;
}
+/* mute all aamix inputs initially; parse up to the first leaves */
+static void mute_all_mixer_nid(struct hda_codec *codec, hda_nid_t mix)
+{
+ int i, nums;
+ const hda_nid_t *conn;
+ bool has_amp;
+
+ nums = snd_hda_get_conn_list(codec, mix, &conn);
+ has_amp = nid_has_mute(codec, mix, HDA_INPUT);
+ for (i = 0; i < nums; i++) {
+ if (has_amp)
+ snd_hda_codec_amp_stereo(codec, mix,
+ HDA_INPUT, i,
+ 0xff, HDA_AMP_MUTE);
+ else if (nid_has_volume(codec, conn[i], HDA_OUTPUT))
+ snd_hda_codec_amp_stereo(codec, conn[i],
+ HDA_OUTPUT, 0,
+ 0xff, HDA_AMP_MUTE);
+ }
+}
/*
* Parse the given BIOS configuration and set up the hda_gen_spec
if (err < 0)
return err;
+ /* create "Headphone Mic Jack Mode" if no input selection is
+	 * available (or the user specifies the add_jack_modes hint)
+ */
+ if (spec->hp_mic_pin &&
+ (spec->auto_mic || spec->input_mux.num_items == 1 ||
+ spec->add_jack_modes)) {
+ err = create_hp_mic_jack_mode(codec, spec->hp_mic_pin);
+ if (err < 0)
+ return err;
+ }
+
if (spec->add_jack_modes) {
if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) {
err = create_out_jack_modes(codec, cfg->line_outs,
}
}
+	/* mute all aamix inputs initially */
+ if (spec->mixer_nid)
+ mute_all_mixer_nid(codec, spec->mixer_nid);
+
dig_only:
parse_digital(codec);
const struct badness_table *main_out_badness;
const struct badness_table *extra_out_badness;
+ /* preferred pin/DAC pairs; an array of paired NIDs */
+ const hda_nid_t *preferred_dacs;
+
/* loopback mixing mode */
bool aamix_mode;
STATESTS_INT_MASK);
azx_stop_chip(chip);
- if (!chip->bus->avoid_link_reset)
- azx_enter_link_reset(chip);
+ azx_enter_link_reset(chip);
azx_clear_irq_pending(chip);
if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)
hda_display_power(false);
* white/black-list for enable_msi
*/
static struct snd_pci_quirk msi_black_list[] = {
+ SND_PCI_QUIRK(0x103c, 0x2191, "HP", 0), /* AMD Hudson */
+ SND_PCI_QUIRK(0x103c, 0x2192, "HP", 0), /* AMD Hudson */
+ SND_PCI_QUIRK(0x103c, 0x21f7, "HP", 0), /* AMD Hudson */
+ SND_PCI_QUIRK(0x103c, 0x21fa, "HP", 0), /* AMD Hudson */
SND_PCI_QUIRK(0x1043, 0x81f2, "ASUS", 0), /* Athlon64 X2 + nvidia */
SND_PCI_QUIRK(0x1043, 0x81f6, "ASUS", 0), /* nvidia */
SND_PCI_QUIRK(0x1043, 0x822d, "ASUS", 0), /* Athlon64 X2 + nvidia MCP55 */
}
dev++;
- complete_all(&chip->probe_wait);
+ if (chip->disabled)
+ complete_all(&chip->probe_wait);
return 0;
out_free:
if ((chip->driver_caps & AZX_DCAPS_PM_RUNTIME) || chip->use_vga_switcheroo)
pm_runtime_put_noidle(&pci->dev);
- return 0;
-
out_free:
- chip->init_failed = 1;
+ if (err < 0)
+ chip->init_failed = 1;
+ complete_all(&chip->probe_wait);
return err;
}
if (!spec->eapd_nid)
return;
+ if (codec->inv_eapd)
+ enabled = !enabled;
snd_hda_codec_update_cache(codec, spec->eapd_nid, 0,
AC_VERB_SET_EAPD_BTLENABLE,
enabled ? 0x02 : 0x00);
{
int err;
struct ad198x_spec *spec;
+ static hda_nid_t preferred_pairs[] = {
+ 0x1a, 0x03,
+ 0x1b, 0x03,
+ 0x1c, 0x04,
+ 0x1d, 0x05,
+ 0x1e, 0x03,
+ 0
+ };
err = alloc_ad_spec(codec);
if (err < 0)
* So, let's disable the shared stream.
*/
spec->gen.multiout.no_share_stream = 1;
+ /* give fixed DAC/pin pairs */
+ spec->gen.preferred_dacs = preferred_pairs;
+
+ /* AD1986A can't manage the dynamic pin on/off smoothly */
+ spec->gen.auto_mute_via_amp = 1;
snd_hda_pick_fixup(codec, ad1986a_fixup_models, ad1986a_fixup_tbl,
ad1986a_fixups);
switch (action) {
case HDA_FIXUP_ACT_PRE_PROBE:
spec->gen.vmaster_mute.hook = ad1884_vmaster_hp_gpio_hook;
+ spec->gen.own_eapd_ctl = 1;
snd_hda_sequence_write_cache(codec, gpio_init_verbs);
break;
case HDA_FIXUP_ACT_PROBE:
SND_PCI_QUIRK(0x1028, 0x0401, "Dell Vostro 1014", CXT5066_DELL_VOSTRO),
SND_PCI_QUIRK(0x1028, 0x0408, "Dell Inspiron One 19T", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x1028, 0x050f, "Dell Inspiron", CXT5066_IDEAPAD),
- SND_PCI_QUIRK(0x1028, 0x0510, "Dell Vostro", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x103c, 0x360b, "HP G60", CXT5066_HP_LAPTOP),
SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_ASUS),
SND_PCI_QUIRK(0x1043, 0x1643, "Asus K52JU", CXT5066_ASUS),
#if IS_ENABLED(CONFIG_THINKPAD_ACPI)
#include <linux/thinkpad_acpi.h>
+#include <acpi/acpi.h>
static int (*led_set_func)(int, bool);
+static acpi_status acpi_check_cb(acpi_handle handle, u32 lvl, void *context,
+ void **rv)
+{
+ bool *found = context;
+ *found = true;
+ return AE_OK;
+}
+
+static bool is_thinkpad(struct hda_codec *codec)
+{
+ bool found = false;
+ if (codec->subsystem_id >> 16 != 0x17aa)
+ return false;
+ if (ACPI_SUCCESS(acpi_get_devices("LEN0068", acpi_check_cb, &found, NULL)) && found)
+ return true;
+ found = false;
+ return ACPI_SUCCESS(acpi_get_devices("IBM0068", acpi_check_cb, &found, NULL)) && found;
+}
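
For context, a hedged note on the enumeration pattern used above:

	/* acpi_get_devices("LEN0068", acpi_check_cb, &found, NULL) invokes
	 * the callback once per ACPI device whose _HID matches, so "found"
	 * ends up true iff at least one such device exists; the vendor
	 * check on subsystem_id 0x17aa (Lenovo) short-circuits the ACPI
	 * walk on non-Lenovo machines. */
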
+
static void update_tpacpi_mute_led(void *private_data, int enabled)
{
struct hda_codec *codec = private_data;
bool removefunc = false;
if (action == HDA_FIXUP_ACT_PROBE) {
+ if (!is_thinkpad(codec))
+ return;
if (!led_set_func)
led_set_func = symbol_request(tpacpi_led_set);
if (!led_set_func) {
SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
{}
static bool hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll);
-static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res)
+static void jack_callback(struct hda_codec *codec, struct hda_jack_tbl *jack)
{
struct hdmi_spec *spec = codec->spec;
+ int pin_idx = pin_nid_to_pin_index(spec, jack->nid);
+ if (pin_idx < 0)
+ return;
+
+ if (hdmi_present_sense(get_pin(spec, pin_idx), 1))
+ snd_hda_jack_report_sync(codec);
+}
+
+static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res)
+{
int tag = res >> AC_UNSOL_RES_TAG_SHIFT;
- int pin_nid;
- int pin_idx;
struct hda_jack_tbl *jack;
int dev_entry = (res & AC_UNSOL_RES_DE) >> AC_UNSOL_RES_DE_SHIFT;
jack = snd_hda_jack_tbl_get_from_tag(codec, tag);
if (!jack)
return;
- pin_nid = jack->nid;
jack->jack_dirty = 1;
_snd_printd(SND_PR_VERBOSE,
"HDMI hot plug event: Codec=%d Pin=%d Device=%d Inactive=%d Presence_Detect=%d ELD_Valid=%d\n",
- codec->addr, pin_nid, dev_entry, !!(res & AC_UNSOL_RES_IA),
+ codec->addr, jack->nid, dev_entry, !!(res & AC_UNSOL_RES_IA),
!!(res & AC_UNSOL_RES_PD), !!(res & AC_UNSOL_RES_ELDV));
- pin_idx = pin_nid_to_pin_index(spec, pin_nid);
- if (pin_idx < 0)
- return;
-
- if (hdmi_present_sense(get_pin(spec, pin_idx), 1))
- snd_hda_jack_report_sync(codec);
+ jack_callback(codec, jack);
}
static void hdmi_non_intrinsic_event(struct hda_codec *codec, unsigned int res)
hda_nid_t pin_nid = per_pin->pin_nid;
hdmi_init_pin(codec, pin_nid);
- snd_hda_jack_detect_enable(codec, pin_nid, pin_nid);
+ snd_hda_jack_detect_enable_callback(codec, pin_nid, pin_nid,
+ codec->jackpoll_interval > 0 ? jack_callback : NULL);
}
return 0;
}
int err;
per_cvt = get_cvt(spec, 0);
- err = snd_hda_create_spdif_out_ctls(codec, per_cvt->cvt_nid,
- per_cvt->cvt_nid);
+ err = snd_hda_create_dig_out_ctls(codec, per_cvt->cvt_nid,
+ per_cvt->cvt_nid,
+ HDA_PCM_TYPE_HDMI);
if (err < 0)
return err;
return simple_hdmi_build_jack(codec, 0);
ALC260_FIXUP_KN1,
ALC260_FIXUP_FSC_S7020,
ALC260_FIXUP_FSC_S7020_JWSE,
+ ALC260_FIXUP_VAIO_PINS,
};
static void alc260_gpio1_automute(struct hda_codec *codec)
.chained = true,
.chain_id = ALC260_FIXUP_FSC_S7020,
},
+ [ALC260_FIXUP_VAIO_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ /* Pin configs are missing completely on some VAIOs */
+ { 0x0f, 0x01211020 },
+ { 0x10, 0x0001003f },
+ { 0x11, 0x411111f0 },
+ { 0x12, 0x01a15930 },
+ { 0x13, 0x411111f0 },
+ { 0x14, 0x411111f0 },
+ { 0x15, 0x411111f0 },
+ { 0x16, 0x411111f0 },
+ { 0x17, 0x411111f0 },
+ { 0x18, 0x411111f0 },
+ { 0x19, 0x411111f0 },
+ { }
+ }
+ },
};
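
The pin values in the fixup above are HDA pin default-configuration words. A minimal decoding sketch for a few of their fields, assuming the standard bit layout from the HDA specification (illustrative only):

	/* e.g. 0x411111f0 decodes to port connectivity 0x1 ("no physical
	 * connection"), which is why such pins are effectively disabled */
	static void decode_defcfg_sketch(unsigned int cfg)
	{
		unsigned int conn  = (cfg >> 30) & 0x3;	/* port connectivity */
		unsigned int dev   = (cfg >> 20) & 0xf;	/* default device */
		unsigned int assoc = (cfg >> 4) & 0xf;	/* default association */
		unsigned int seq   = cfg & 0xf;		/* sequence */

		printk(KERN_DEBUG "conn=%u dev=%u assoc=%u seq=%u\n",
		       conn, dev, assoc, seq);
	}
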
static const struct snd_pci_quirk alc260_fixup_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x008f, "Acer", ALC260_FIXUP_GPIO1),
SND_PCI_QUIRK(0x103c, 0x280a, "HP dc5750", ALC260_FIXUP_HP_DC5750),
SND_PCI_QUIRK(0x103c, 0x30ba, "HP Presario B1900", ALC260_FIXUP_HP_B1900),
+ SND_PCI_QUIRK(0x104d, 0x81bb, "Sony VAIO", ALC260_FIXUP_VAIO_PINS),
+ SND_PCI_QUIRK(0x104d, 0x81e2, "Sony VAIO TX", ALC260_FIXUP_HP_PIN_0F),
SND_PCI_QUIRK(0x10cf, 0x1326, "FSC LifeBook S7020", ALC260_FIXUP_FSC_S7020),
SND_PCI_QUIRK(0x1509, 0x4540, "Favorit 100XS", ALC260_FIXUP_GPIO1),
SND_PCI_QUIRK(0x152d, 0x0729, "Quanta KN1", ALC260_FIXUP_KN1),
ALC889_FIXUP_DAC_ROUTE,
ALC889_FIXUP_MBP_VREF,
ALC889_FIXUP_IMAC91_VREF,
+ ALC889_FIXUP_MBA21_VREF,
ALC882_FIXUP_INV_DMIC,
ALC882_FIXUP_NO_PRIMARY_HP,
+ ALC887_FIXUP_ASUS_BASS,
+ ALC887_FIXUP_BASS_CHMAP,
};
static void alc889_fixup_coef(struct hda_codec *codec,
}
}
-/* Set VREF on speaker pins on imac91 */
-static void alc889_fixup_imac91_vref(struct hda_codec *codec,
- const struct hda_fixup *fix, int action)
+static void alc889_fixup_mac_pins(struct hda_codec *codec,
+ const hda_nid_t *nids, int num_nids)
{
struct alc_spec *spec = codec->spec;
- static hda_nid_t nids[2] = { 0x18, 0x1a };
int i;
- if (action != HDA_FIXUP_ACT_INIT)
- return;
- for (i = 0; i < ARRAY_SIZE(nids); i++) {
+ for (i = 0; i < num_nids; i++) {
unsigned int val;
val = snd_hda_codec_get_pin_target(codec, nids[i]);
val |= AC_PINCTL_VREF_50;
spec->gen.keep_vref_in_automute = 1;
}
+/* Set VREF on speaker pins on imac91 */
+static void alc889_fixup_imac91_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ static hda_nid_t nids[2] = { 0x18, 0x1a };
+
+ if (action == HDA_FIXUP_ACT_INIT)
+ alc889_fixup_mac_pins(codec, nids, ARRAY_SIZE(nids));
+}
+
+/* Set VREF on speaker pins on mba21 */
+static void alc889_fixup_mba21_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ static hda_nid_t nids[2] = { 0x18, 0x19 };
+
+ if (action == HDA_FIXUP_ACT_INIT)
+ alc889_fixup_mac_pins(codec, nids, ARRAY_SIZE(nids));
+}
+
/* Don't take HP output as primary
* Strangely, the speaker output doesn't work on Vaio Z and some Vaio
* all-in-one desktop PCs (for example VGC-LN51JGB) through DAC 0x05
}
}
+static void alc_fixup_bass_chmap(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action);
+
static const struct hda_fixup alc882_fixups[] = {
[ALC882_FIXUP_ABIT_AW9D_MAX] = {
.type = HDA_FIXUP_PINS,
.chained = true,
.chain_id = ALC882_FIXUP_GPIO1,
},
+ [ALC889_FIXUP_MBA21_VREF] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc889_fixup_mba21_vref,
+ .chained = true,
+ .chain_id = ALC889_FIXUP_MBP_VREF,
+ },
[ALC882_FIXUP_INV_DMIC] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_inv_dmic_0x12,
.type = HDA_FIXUP_FUNC,
.v.func = alc882_fixup_no_primary_hp,
},
+ [ALC887_FIXUP_ASUS_BASS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ {0x16, 0x99130130}, /* bass speaker */
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC887_FIXUP_BASS_CHMAP,
+ },
+ [ALC887_FIXUP_BASS_CHMAP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_bass_chmap,
+ },
};
static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x1873, "ASUS W90V", ALC882_FIXUP_ASUS_W90V),
SND_PCI_QUIRK(0x1043, 0x1971, "Asus W2JC", ALC882_FIXUP_ASUS_W2JC),
SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
SND_PCI_QUIRK(0x106b, 0x3000, "iMac", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x3200, "iMac 7,1 Aluminum", ALC882_FIXUP_EAPD),
SND_PCI_QUIRK(0x106b, 0x3400, "MacBookAir 1,1", ALC889_FIXUP_MBP_VREF),
- SND_PCI_QUIRK(0x106b, 0x3500, "MacBookAir 2,1", ALC889_FIXUP_MBP_VREF),
+ SND_PCI_QUIRK(0x106b, 0x3500, "MacBookAir 2,1", ALC889_FIXUP_MBA21_VREF),
SND_PCI_QUIRK(0x106b, 0x3600, "Macbook 3,1", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x3800, "MacbookPro 4,1", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x3e00, "iMac 24 Aluminum", ALC885_FIXUP_MACPRO_GPIO),
alc_write_coef_idx(codec, 0x18, 0x7388);
break;
case 0x10ec0668:
+ alc_write_coef_idx(codec, 0x11, 0x0001);
alc_write_coef_idx(codec, 0x15, 0x0d60);
alc_write_coef_idx(codec, 0xc3, 0x0000);
break;
alc_write_coef_idx(codec, 0x18, 0x7388);
break;
case 0x10ec0668:
+ alc_write_coef_idx(codec, 0x11, 0x0001);
alc_write_coef_idx(codec, 0x15, 0x0d50);
alc_write_coef_idx(codec, 0xc3, 0x0000);
break;
static void alc_update_headset_jack_cb(struct hda_codec *codec, struct hda_jack_tbl *jack)
{
struct alc_spec *spec = codec->spec;
- spec->current_headset_type = ALC_HEADSET_MODE_UNKNOWN;
+ spec->current_headset_type = ALC_HEADSET_TYPE_UNKNOWN;
snd_hda_gen_hp_automute(codec, jack);
}
vref);
}
-static void alc283_chromebook_caps(struct hda_codec *codec)
-{
- snd_hda_override_wcaps(codec, 0x03, 0);
-}
-
static void alc283_fixup_chromebook(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
switch (action) {
case HDA_FIXUP_ACT_PRE_PROBE:
- alc283_chromebook_caps(codec);
+ snd_hda_override_wcaps(codec, 0x03, 0);
/* Disable AA-loopback as it causes white noise */
spec->gen.mixer_nid = 0;
+ break;
+ case HDA_FIXUP_ACT_INIT:
+ /* Enable Line1 input control by verb */
+ val = alc_read_coef_idx(codec, 0x1a);
+ alc_write_coef_idx(codec, 0x1a, val | (1 << 4));
+ break;
+ }
+}
+
+static void alc283_fixup_sense_combo_jack(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct alc_spec *spec = codec->spec;
+ int val;
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
spec->gen.hp_automute_hook = alc283_hp_automute_hook;
break;
case HDA_FIXUP_ACT_INIT:
/* Set to manual mode */
val = alc_read_coef_idx(codec, 0x06);
alc_write_coef_idx(codec, 0x06, val & ~0x000c);
- /* Enable Line1 input control by verb */
- val = alc_read_coef_idx(codec, 0x1a);
- alc_write_coef_idx(codec, 0x1a, val | (1 << 4));
break;
}
}
#if IS_ENABLED(CONFIG_THINKPAD_ACPI)
#include <linux/thinkpad_acpi.h>
+#include <acpi/acpi.h>
static int (*led_set_func)(int, bool);
+static acpi_status acpi_check_cb(acpi_handle handle, u32 lvl, void *context,
+ void **rv)
+{
+ bool *found = context;
+ *found = true;
+ return AE_OK;
+}
+
+static bool is_thinkpad(struct hda_codec *codec)
+{
+ bool found = false;
+ if (codec->subsystem_id >> 16 != 0x17aa)
+ return false;
+ if (ACPI_SUCCESS(acpi_get_devices("LEN0068", acpi_check_cb, &found, NULL)) && found)
+ return true;
+ found = false;
+ return ACPI_SUCCESS(acpi_get_devices("IBM0068", acpi_check_cb, &found, NULL)) && found;
+}
+
static void update_tpacpi_mute_led(void *private_data, int enabled)
{
if (led_set_func)
bool removefunc = false;
if (action == HDA_FIXUP_ACT_PROBE) {
+ if (!is_thinkpad(codec))
+ return;
if (!led_set_func)
led_set_func = symbol_request(tpacpi_led_set);
if (!led_set_func) {
ALC269_FIXUP_ASUS_X101,
ALC271_FIXUP_AMIC_MIC2,
ALC271_FIXUP_HP_GATE_MIC_JACK,
+ ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572,
ALC269_FIXUP_ACER_AC700,
ALC269_FIXUP_LIMIT_INT_MIC_BOOST,
+ ALC269VB_FIXUP_ASUS_ZENBOOK,
ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED,
ALC269VB_FIXUP_ORDISSIMO_EVE2,
ALC283_FIXUP_CHROME_BOOK,
+ ALC283_FIXUP_SENSE_COMBO_JACK,
ALC282_FIXUP_ASUS_TX300,
ALC283_FIXUP_INT_MIC,
ALC290_FIXUP_MONO_SPEAKERS,
[ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
},
[ALC269_FIXUP_DELL1_MIC_NO_PRESENCE] = {
.type = HDA_FIXUP_PINS,
.chained = true,
.chain_id = ALC271_FIXUP_AMIC_MIC2,
},
+ [ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+ .chained = true,
+ .chain_id = ALC271_FIXUP_HP_GATE_MIC_JACK,
+ },
[ALC269_FIXUP_ACER_AC700] = {
.type = HDA_FIXUP_PINS,
.v.pins = (const struct hda_pintbl[]) {
[ALC269_FIXUP_LIMIT_INT_MIC_BOOST] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_limit_int_mic_boost,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ },
+ [ALC269VB_FIXUP_ASUS_ZENBOOK] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+ .chained = true,
+ .chain_id = ALC269VB_FIXUP_DMIC,
},
[ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED] = {
.type = HDA_FIXUP_FUNC,
.type = HDA_FIXUP_FUNC,
.v.func = alc283_fixup_chromebook,
},
+ [ALC283_FIXUP_SENSE_COMBO_JACK] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc283_fixup_sense_combo_jack,
+ .chained = true,
+ .chain_id = ALC283_FIXUP_CHROME_BOOK,
+ },
[ALC282_FIXUP_ASUS_TX300] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc282_fixup_asus_tx300,
[ALC269_FIXUP_THINKPAD_ACPI] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_thinkpad_acpi,
- .chained = true,
- .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
},
[ALC255_FIXUP_DELL1_MIC_NO_PRESENCE] = {
.type = HDA_FIXUP_PINS,
SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
SND_PCI_QUIRK(0x1028, 0x05bd, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05be, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0610, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0629, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS),
+ SND_PCI_QUIRK(0x1028, 0x063e, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x063f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0640, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
SND_PCI_QUIRK(0x103c, 0x1973, "HP Pavilion", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x1983, "HP Pavilion", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x218b, "HP", ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED),
- SND_PCI_QUIRK(0x103c, 0x21ed, "HP Falco Chromebook", ALC283_FIXUP_CHROME_BOOK),
SND_PCI_QUIRK_VENDOR(0x103c, "HP", ALC269_FIXUP_HP_MUTE_LED),
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
- SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_DMIC),
- SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK),
SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x220c, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
- SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", ALC269_FIXUP_THINKPAD_ACPI),
SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
#if 0
{.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
{.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
{.id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, .name = "dell-headset-dock"},
+ {.id = ALC283_FIXUP_CHROME_BOOK, .name = "alc283-chrome"},
+ {.id = ALC283_FIXUP_SENSE_COMBO_JACK, .name = "alc283-sense-combo"},
{}
};
ALC861_FIXUP_AMP_VREF_0F,
ALC861_FIXUP_NO_JACK_DETECT,
ALC861_FIXUP_ASUS_A6RP,
+ ALC660_FIXUP_ASUS_W7J,
};
/* On some laptops, VREF of pin 0x0f is abused for controlling the main amp */
.v.func = alc861_fixup_asus_amp_vref_0f,
.chained = true,
.chain_id = ALC861_FIXUP_NO_JACK_DETECT,
+ },
+ [ALC660_FIXUP_ASUS_W7J] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ /* ASUS W7J needs a magic pin setup on unused NID 0x10
+ * for enabling outputs
+ */
+ {0x10, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24},
+ { }
+ },
}
};
static const struct snd_pci_quirk alc861_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1253, "ASUS W7J", ALC660_FIXUP_ASUS_W7J),
+ SND_PCI_QUIRK(0x1043, 0x1263, "ASUS Z35HL", ALC660_FIXUP_ASUS_W7J),
SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
SND_PCI_QUIRK_VENDOR(0x1043, "ASUS laptop", ALC861_FIXUP_AMP_VREF_0F),
SND_PCI_QUIRK(0x1462, 0x7254, "HP DX2200", ALC861_FIXUP_NO_JACK_DETECT),
};
/* override the 2.1 chmap */
-static void alc662_fixup_bass_chmap(struct hda_codec *codec,
+static void alc_fixup_bass_chmap(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
if (action == HDA_FIXUP_ACT_BUILD) {
ALC668_FIXUP_DELL_MIC_NO_PRESENCE,
ALC668_FIXUP_HEADSET_MODE,
ALC662_FIXUP_BASS_CHMAP,
+ ALC662_FIXUP_BASS_1A,
+ ALC662_FIXUP_BASS_1A_CHMAP,
};
static const struct hda_fixup alc662_fixups[] = {
},
[ALC662_FIXUP_BASS_CHMAP] = {
.type = HDA_FIXUP_FUNC,
- .v.func = alc662_fixup_bass_chmap,
+ .v.func = alc_fixup_bass_chmap,
.chained = true,
.chain_id = ALC662_FIXUP_ASUS_MODE4
},
+ [ALC662_FIXUP_BASS_1A] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ {0x1a, 0x80106111}, /* bass speaker */
+ {}
+ },
+ },
+ [ALC662_FIXUP_BASS_1A_CHMAP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_bass_chmap,
+ .chained = true,
+ .chain_id = ALC662_FIXUP_BASS_1A,
+ },
};
static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0623, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0624, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0625, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0626, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0628, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A_CHMAP),
SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_CHMAP),
SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_BASS_CHMAP),
SND_PCI_QUIRK(0x1043, 0x8469, "ASUS mobo", ALC662_FIXUP_NO_JACK_DETECT),
case 0x10ec0272:
case 0x10ec0663:
case 0x10ec0665:
+ case 0x10ec0668:
set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT);
break;
case 0x10ec0273:
*/
static const struct hda_codec_preset snd_hda_preset_realtek[] = {
{ .id = 0x10ec0221, .name = "ALC221", .patch = patch_alc269 },
+ { .id = 0x10ec0231, .name = "ALC231", .patch = patch_alc269 },
{ .id = 0x10ec0233, .name = "ALC233", .patch = patch_alc269 },
{ .id = 0x10ec0255, .name = "ALC255", .patch = patch_alc269 },
{ .id = 0x10ec0260, .name = "ALC260", .patch = patch_alc260 },
if (action == HDA_FIXUP_ACT_PRE_PROBE) {
spec->mic_mute_led_gpio = 0x08; /* GPIO3 */
- codec->bus->avoid_link_reset = 1;
+		/* resetting the controller clears the GPIO, so keep the link on */
+ codec->bus->power_keep_link_on = 1;
}
}
dma_params = ssc_p->dma_params[dir];
- ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable);
+ ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable);
ssc_writel(ssc_p->ssc->regs, IDR, dma_params->mask->ssc_error);
pr_debug("%s enabled SSC_SR=0x%08x\n",
return 0;
}
+static int atmel_ssc_trigger(struct snd_pcm_substream *substream,
+ int cmd, struct snd_soc_dai *dai)
+{
+ struct atmel_ssc_info *ssc_p = &ssc_info[dai->id];
+ struct atmel_pcm_dma_params *dma_params;
+ int dir;
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ dir = 0;
+ else
+ dir = 1;
+
+ dma_params = ssc_p->dma_params[dir];
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable);
+ break;
+ default:
+ ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable);
+ break;
+ }
+
+ return 0;
+}
#ifdef CONFIG_PM
static int atmel_ssc_suspend(struct snd_soc_dai *cpu_dai)
.startup = atmel_ssc_startup,
.shutdown = atmel_ssc_shutdown,
.prepare = atmel_ssc_prepare,
+ .trigger = atmel_ssc_trigger,
.hw_params = atmel_ssc_hw_params,
.set_fmt = atmel_ssc_set_dai_fmt,
.set_clkdiv = atmel_ssc_set_dai_clkdiv,
goto out;
}
+ snd_soc_card_set_drvdata(card, priv);
+
card->dev = &pdev->dev;
card->owner = THIS_MODULE;
card->dai_link = dai;
dai->stream_name = "WM8731 PCM";
dai->codec_dai_name = "wm8731-hifi";
dai->init = sam9x5_wm8731_init;
- dai->dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF
+ dai->dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF
| SND_SOC_DAIFMT_CBM_CFM;
ret = snd_soc_of_parse_card_name(card, "atmel,model");
/* Private data for AB8500 device-driver */
struct ab8500_codec_drvdata {
- struct regmap *regmap;
-
/* Sidetone */
long *sid_fir_values;
enum sid_state sid_status;
*/
/* Read a register from the audio-bank of AB8500 */
-static int ab8500_codec_read_reg(void *context, unsigned int reg,
- unsigned int *value)
+static unsigned int ab8500_codec_read_reg(struct snd_soc_codec *codec,
+ unsigned int reg)
{
- struct device *dev = context;
int status;
+ unsigned int value = 0;
u8 value8;
- status = abx500_get_register_interruptible(dev, AB8500_AUDIO,
- reg, &value8);
- *value = (unsigned int)value8;
+ status = abx500_get_register_interruptible(codec->dev, AB8500_AUDIO,
+ reg, &value8);
+ if (status < 0) {
+ dev_err(codec->dev,
+ "%s: ERROR: Register (0x%02x:0x%02x) read failed (%d).\n",
+ __func__, (u8)AB8500_AUDIO, (u8)reg, status);
+ } else {
+ dev_dbg(codec->dev,
+ "%s: Read 0x%02x from register 0x%02x:0x%02x\n",
+ __func__, value8, (u8)AB8500_AUDIO, (u8)reg);
+ value = (unsigned int)value8;
+ }
- return status;
+ return value;
}
/* Write to a register in the audio-bank of AB8500 */
-static int ab8500_codec_write_reg(void *context, unsigned int reg,
- unsigned int value)
+static int ab8500_codec_write_reg(struct snd_soc_codec *codec,
+ unsigned int reg, unsigned int value)
{
- struct device *dev = context;
+ int status;
- return abx500_set_register_interruptible(dev, AB8500_AUDIO,
- reg, value);
-}
+ status = abx500_set_register_interruptible(codec->dev, AB8500_AUDIO,
+ reg, value);
+ if (status < 0)
+ dev_err(codec->dev,
+ "%s: ERROR: Register (%02x:%02x) write failed (%d).\n",
+ __func__, (u8)AB8500_AUDIO, (u8)reg, status);
+ else
+ dev_dbg(codec->dev,
+ "%s: Wrote 0x%02x into register %02x:%02x\n",
+ __func__, (u8)value, (u8)AB8500_AUDIO, (u8)reg);
-static const struct regmap_config ab8500_codec_regmap = {
- .reg_read = ab8500_codec_read_reg,
- .reg_write = ab8500_codec_write_reg,
-};
+ return status;
+}
/*
* Controls - DAPM
dev_dbg(dev, "%s: Enter.\n", __func__);
- snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP);
-
/* Setup AB8500 according to board-settings */
pdata = dev_get_platdata(dev->parent);
- codec->control_data = drvdata->regmap;
-
if (np) {
if (!pdata)
pdata = devm_kzalloc(dev,
static struct snd_soc_codec_driver ab8500_codec_driver = {
.probe = ab8500_codec_probe,
+ .read = ab8500_codec_read_reg,
+ .write = ab8500_codec_write_reg,
+ .reg_word_size = sizeof(u8),
.controls = ab8500_ctrls,
.num_controls = ARRAY_SIZE(ab8500_ctrls),
.dapm_widgets = ab8500_dapm_widgets,
drvdata->anc_status = ANC_UNCONFIGURED;
dev_set_drvdata(&pdev->dev, drvdata);
- drvdata->regmap = devm_regmap_init(&pdev->dev, NULL, &pdev->dev,
- &ab8500_codec_regmap);
- if (IS_ERR(drvdata->regmap)) {
- status = PTR_ERR(drvdata->regmap);
- dev_err(&pdev->dev, "%s: Failed to allocate regmap: %d\n",
- __func__, status);
- return status;
- }
-
dev_dbg(&pdev->dev, "%s: Register codec.\n", __func__);
status = snd_soc_register_codec(&pdev->dev, &ab8500_codec_driver,
ab8500_codec_dai,
/* Clear any pending completions */
try_wait_for_completion(&fll->ok);
+ regmap_update_bits(arizona->regmap, fll->base + 1,
+ ARIZONA_FLL1_FREERUN, 0);
regmap_update_bits(arizona->regmap, fll->base + 1,
ARIZONA_FLL1_ENA, ARIZONA_FLL1_ENA);
if (use_sync)
struct arizona *arizona = fll->arizona;
bool change;
+ regmap_update_bits(arizona->regmap, fll->base + 1,
+ ARIZONA_FLL1_FREERUN, ARIZONA_FLL1_FREERUN);
regmap_update_bits_check(arizona->regmap, fll->base + 1,
ARIZONA_FLL1_ENA, 0, &change);
regmap_update_bits(arizona->regmap, fll->base + 0x11,
struct arizona_fll fll[2];
};
+static const struct reg_default wm5110_sysclk_revd_patch[] = {
+ { 0x3093, 0x1001 },
+ { 0x30E3, 0x1301 },
+ { 0x3133, 0x1201 },
+ { 0x3183, 0x1501 },
+ { 0x31D3, 0x1401 },
+};
+
+static int wm5110_sysclk_ev(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct arizona *arizona = dev_get_drvdata(codec->dev->parent);
+ struct regmap *regmap = codec->control_data;
+ const struct reg_default *patch = NULL;
+ int i, patch_size;
+
+ switch (arizona->rev) {
+ case 3:
+ patch = wm5110_sysclk_revd_patch;
+ patch_size = ARRAY_SIZE(wm5110_sysclk_revd_patch);
+ break;
+ default:
+ return 0;
+ }
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (patch)
+ for (i = 0; i < patch_size; i++)
+ regmap_write(regmap, patch[i].reg,
+ patch[i].def);
+ break;
+
+ default:
+ break;
+ }
+
+ return 0;
+}
+
static DECLARE_TLV_DB_SCALE(ana_tlv, 0, 100, 0);
static DECLARE_TLV_DB_SCALE(eq_tlv, -1200, 100, 0);
static DECLARE_TLV_DB_SCALE(digital_tlv, -6400, 50, 0);
ARIZONA_MIXER_CONTROLS("SPKDAT2L", ARIZONA_OUT6LMIX_INPUT_1_SOURCE),
ARIZONA_MIXER_CONTROLS("SPKDAT2R", ARIZONA_OUT6RMIX_INPUT_1_SOURCE),
-SOC_SINGLE("HPOUT1 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_1L,
- ARIZONA_OUT1_OSR_SHIFT, 1, 0),
-SOC_SINGLE("HPOUT2 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_2L,
- ARIZONA_OUT2_OSR_SHIFT, 1, 0),
-SOC_SINGLE("HPOUT3 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_3L,
- ARIZONA_OUT3_OSR_SHIFT, 1, 0),
-SOC_SINGLE("Speaker High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_4L,
- ARIZONA_OUT4_OSR_SHIFT, 1, 0),
-SOC_SINGLE("SPKDAT1 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_5L,
- ARIZONA_OUT5_OSR_SHIFT, 1, 0),
-SOC_SINGLE("SPKDAT2 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_6L,
- ARIZONA_OUT6_OSR_SHIFT, 1, 0),
-
SOC_DOUBLE_R("HPOUT1 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_1L,
ARIZONA_DAC_DIGITAL_VOLUME_1R, ARIZONA_OUT1L_MUTE_SHIFT, 1, 1),
SOC_DOUBLE_R("HPOUT2 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_2L,
ARIZONA_DAC_DIGITAL_VOLUME_6R, ARIZONA_OUT6L_VOL_SHIFT,
0xbf, 0, digital_tlv),
-SOC_DOUBLE_R_RANGE_TLV("HPOUT1 Volume", ARIZONA_OUTPUT_PATH_CONFIG_1L,
- ARIZONA_OUTPUT_PATH_CONFIG_1R,
- ARIZONA_OUT1L_PGA_VOL_SHIFT,
- 0x34, 0x40, 0, ana_tlv),
-SOC_DOUBLE_R_RANGE_TLV("HPOUT2 Volume", ARIZONA_OUTPUT_PATH_CONFIG_2L,
- ARIZONA_OUTPUT_PATH_CONFIG_2R,
- ARIZONA_OUT2L_PGA_VOL_SHIFT,
- 0x34, 0x40, 0, ana_tlv),
-SOC_DOUBLE_R_RANGE_TLV("HPOUT3 Volume", ARIZONA_OUTPUT_PATH_CONFIG_3L,
- ARIZONA_OUTPUT_PATH_CONFIG_3R,
- ARIZONA_OUT3L_PGA_VOL_SHIFT, 0x34, 0x40, 0, ana_tlv),
-
SOC_DOUBLE("SPKDAT1 Switch", ARIZONA_PDM_SPK1_CTRL_1, ARIZONA_SPK1L_MUTE_SHIFT,
ARIZONA_SPK1R_MUTE_SHIFT, 1, 1),
SOC_DOUBLE("SPKDAT2 Switch", ARIZONA_PDM_SPK2_CTRL_1, ARIZONA_SPK2L_MUTE_SHIFT,
static const struct snd_soc_dapm_widget wm5110_dapm_widgets[] = {
SND_SOC_DAPM_SUPPLY("SYSCLK", ARIZONA_SYSTEM_CLOCK_1, ARIZONA_SYSCLK_ENA_SHIFT,
- 0, NULL, 0),
+ 0, wm5110_sysclk_ev, SND_SOC_DAPM_POST_PMU),
SND_SOC_DAPM_SUPPLY("ASYNCCLK", ARIZONA_ASYNC_CLOCK_1,
ARIZONA_ASYNC_CLK_ENA_SHIFT, 0, NULL, 0),
SND_SOC_DAPM_SUPPLY("OPCLK", ARIZONA_OUTPUT_SYSTEM_CLOCK,
{ "AEC Loopback", "HPOUT3L", "OUT3L" },
{ "AEC Loopback", "HPOUT3R", "OUT3R" },
{ "HPOUT3L", NULL, "OUT3L" },
- { "HPOUT3R", NULL, "OUT3L" },
+ { "HPOUT3R", NULL, "OUT3R" },
{ "AEC Loopback", "SPKOUTL", "OUT4L" },
{ "SPKOUTLN", NULL, "OUT4L" },
iface |= 0x0001;
break;
case SND_SOC_DAIFMT_DSP_A:
- iface |= 0x0003;
+ iface |= 0x0013;
break;
case SND_SOC_DAIFMT_DSP_B:
- iface |= 0x0013;
+ iface |= 0x0003;
break;
default:
return -EINVAL;
switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
case SND_SOC_DAIFMT_DSP_B:
- aif1 |= WM8904_AIF_LRCLK_INV;
+ aif1 |= 0x3 | WM8904_AIF_LRCLK_INV;
case SND_SOC_DAIFMT_DSP_A:
aif1 |= 0x3;
break;
snd_soc_update_bits(codec, WM8962_CLOCKING_4,
WM8962_SYSCLK_RATE_MASK, clocking4);
+	/* DSPCLK_DIV can only be generated correctly after SYSCLK is enabled.
+	 * So we provisionally enable it here, then disable it again afterwards
+	 * if the current bias_level hasn't reached SND_SOC_BIAS_ON.
+	 */
+ if (codec->dapm.bias_level != SND_SOC_BIAS_ON)
+ snd_soc_update_bits(codec, WM8962_CLOCKING2,
+ WM8962_SYSCLK_ENA_MASK, WM8962_SYSCLK_ENA);
+
dspclk = snd_soc_read(codec, WM8962_CLOCKING1);
+
+ if (codec->dapm.bias_level != SND_SOC_BIAS_ON)
+ snd_soc_update_bits(codec, WM8962_CLOCKING2,
+ WM8962_SYSCLK_ENA_MASK, 0);
+
if (dspclk < 0) {
dev_err(codec->dev, "Failed to read DSPCLK: %d\n", dspclk);
return;
/* disable POBCTRL, SOFT_ST and BUFDCOPEN */
snd_soc_write(codec, WM8990_ANTIPOP2, 0x0);
+
+ codec->cache_sync = 1;
break;
}
return ret;
/* Wait for the RAM to start, should be near instantaneous */
- count = 0;
- do {
+ for (count = 0; count < 10; ++count) {
ret = regmap_read(dsp->regmap, dsp->base + ADSP2_STATUS1,
&val);
if (ret != 0)
return ret;
- } while (!(val & ADSP2_RAM_RDY) && ++count < 10);
+
+ if (val & ADSP2_RAM_RDY)
+ break;
+
+ msleep(1);
+ }
if (!(val & ADSP2_RAM_RDY)) {
adsp_err(dsp, "Failed to start DSP RAM\n");
print_buf_info(prtd->ram_channel, "i ram_channel");
pr_debug("davinci_pcm: link=%d, status=0x%x\n", link, ch_status);
- if (unlikely(ch_status != DMA_COMPLETE))
+ if (unlikely(ch_status != EDMA_DMA_COMPLETE))
return;
if (snd_pcm_running(substream)) {
break;
}
- dapm->bias_level = level;
-
return 0;
}
return -ENOMEM;
card->dev = &op->dev;
- platform_set_drvdata(op, pdata);
pdata->card = card;
if (ret)
dev_err(&op->dev, "snd_soc_register_card() failed: %d\n", ret);
+ platform_set_drvdata(op, pdata);
+
return ret;
}
SNDRV_PCM_FMTBIT_S24_LE | \
SNDRV_PCM_FMTBIT_S32_LE)
+#define KIRKWOOD_SPDIF_FORMATS \
+ (SNDRV_PCM_FMTBIT_S16_LE | \
+ SNDRV_PCM_FMTBIT_S24_LE)
+
static int kirkwood_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
unsigned int fmt)
{
ctl);
}
- if (dai->id == 0)
- ctl &= ~KIRKWOOD_PLAYCTL_SPDIF_EN; /* i2s */
- else
- ctl &= ~KIRKWOOD_PLAYCTL_I2S_EN; /* spdif */
-
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
/* configure */
ctl = priv->ctl_play;
+ if (dai->id == 0)
+ ctl &= ~KIRKWOOD_PLAYCTL_SPDIF_EN; /* i2s */
+ else
+ ctl &= ~KIRKWOOD_PLAYCTL_I2S_EN; /* spdif */
+
value = ctl & ~KIRKWOOD_PLAYCTL_ENABLE_MASK;
writel(value, priv->io + KIRKWOOD_PLAYCTL);
.channels_max = 2,
.rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |
SNDRV_PCM_RATE_96000,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.capture = {
.channels_min = 1,
.channels_max = 2,
.rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |
SNDRV_PCM_RATE_96000,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
},
.playback = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
.formats = KIRKWOOD_I2S_FORMATS,
},
.capture = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
.formats = KIRKWOOD_I2S_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
.playback = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.capture = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
- .formats = KIRKWOOD_I2S_FORMATS,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
+ .formats = KIRKWOOD_SPDIF_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
},
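A note on the Kirkwood rate changes above: SNDRV_PCM_RATE_CONTINUOUS advertises
that any rate inside [rate_min, rate_max] is acceptable, so the explicit 5512
and 192000 bounds replace the old enumerated mask plus SNDRV_PCM_RATE_KNOT. A
minimal sketch of the equivalent open-time constraint, with a hypothetical
callback name (requires <sound/pcm.h>):

	/* Clamp the rate range at open time, mirroring the DAI bounds above. */
	static int example_startup(struct snd_pcm_substream *substream)
	{
		int err = snd_pcm_hw_constraint_minmax(substream->runtime,
						       SNDRV_PCM_HW_PARAM_RATE,
						       5512, 192000);

		/* The helper returns 1 when it narrowed the interval. */
		return err < 0 ? err : 0;
	}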
SNDRV_PCM_HW_PARAM_CHANNELS, 2, 2);
n810_ext_control(&codec->dapm);
- return clk_enable(sys_clkout2);
+ return clk_prepare_enable(sys_clkout2);
}
static void n810_shutdown(struct snd_pcm_substream *substream)
{
- clk_disable(sys_clkout2);
+ clk_disable_unprepare(sys_clkout2);
}
static int n810_hw_params(struct snd_pcm_substream *substream,
config SND_SOC_RCAR
tristate "R-Car series SRU/SCU/SSIU/SSI support"
select SND_SIMPLE_CARD
+ select REGMAP
help
This option enables R-Car SRU/SCU/SSIU/SSI sound support
return;
}
+ dma_async_issue_pending(dma->chan);
}
-
- dma_async_issue_pending(dma->chan);
}
int rsnd_dma_available(struct rsnd_dma *dma)
struct rsnd_mod *mod,
struct rsnd_dai_stream *io)
{
- struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
- struct device *dev = rsnd_priv_to_dev(priv);
-
- if (!mod) {
- dev_err(dev, "NULL mod\n");
+ if (!mod)
return -EIO;
- }
if (!list_empty(&mod->list)) {
+ struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
+ struct device *dev = rsnd_priv_to_dev(priv);
+
dev_err(dev, "%s%d is not empty\n",
rsnd_mod_name(mod),
rsnd_mod_id(mod));
return 0;
id = rsnd_mod_id(mod);
- if (id < 0 || id > ARRAY_SIZE(routes))
+ if (id < 0 || id >= ARRAY_SIZE(routes))
return -EIO;
/*
break;
case 2:
((u16 *)(&ucontrol->value.bytes.data))[0]
- &= ~params->mask;
+ &= cpu_to_be16(~params->mask);
break;
case 4:
((u32 *)(&ucontrol->value.bytes.data))[0]
- &= ~params->mask;
+ &= cpu_to_be32(~params->mask);
break;
default:
return -EINVAL;
*/
int devm_snd_soc_register_card(struct device *dev, struct snd_soc_card *card)
{
- struct device **ptr;
+ struct snd_soc_card **ptr;
int ret;
ptr = devres_alloc(devm_card_release, sizeof(*ptr), GFP_KERNEL);
ret = snd_soc_register_card(card);
if (ret == 0) {
- *ptr = dev;
+ *ptr = card;
devres_add(dev, ptr);
} else {
devres_free(ptr);
}
}
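The pointer-type fix above matters because whatever is stored in the devres
block is exactly what the matching release callback consumes on driver detach;
that callback presumably looks along these lines (a sketch, not quoted from
this patch):

	static void devm_card_release(struct device *dev, void *res)
	{
		/* res points at the devres data area, i.e. the stored pointer */
		snd_soc_unregister_card(*(struct snd_soc_card **)res);
	}

With the old code, the release path would have handed a struct device pointer
to snd_soc_unregister_card().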
+static void dmaengine_pcm_release_chan(struct dmaengine_pcm *pcm)
+{
+ unsigned int i;
+
+ for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE;
+ i++) {
+ if (!pcm->chan[i])
+ continue;
+ dma_release_channel(pcm->chan[i]);
+ if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX)
+ break;
+ }
+}
+
/**
* snd_dmaengine_pcm_register - Register a dmaengine based PCM device
* @dev: The parent device for the PCM device
const struct snd_dmaengine_pcm_config *config, unsigned int flags)
{
struct dmaengine_pcm *pcm;
+ int ret;
pcm = kzalloc(sizeof(*pcm), GFP_KERNEL);
if (!pcm)
dmaengine_pcm_request_chan_of(pcm, dev);
if (flags & SND_DMAENGINE_PCM_FLAG_NO_RESIDUE)
- return snd_soc_add_platform(dev, &pcm->platform,
+ ret = snd_soc_add_platform(dev, &pcm->platform,
&dmaengine_no_residue_pcm_platform);
else
- return snd_soc_add_platform(dev, &pcm->platform,
+ ret = snd_soc_add_platform(dev, &pcm->platform,
&dmaengine_pcm_platform);
+ if (ret)
+ goto err_free_dma;
+
+ return 0;
+
+err_free_dma:
+ dmaengine_pcm_release_chan(pcm);
+ kfree(pcm);
+ return ret;
}
EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_register);
{
struct snd_soc_platform *platform;
struct dmaengine_pcm *pcm;
- unsigned int i;
platform = snd_soc_lookup_platform(dev);
if (!platform)
pcm = soc_platform_to_pcm(platform);
- for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; i++) {
- if (pcm->chan[i]) {
- dma_release_channel(pcm->chan[i]);
- if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX)
- break;
- }
- }
-
snd_soc_remove_platform(platform);
+ dmaengine_pcm_release_chan(pcm);
kfree(pcm);
}
EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_unregister);
}
}
-static void soc_pcm_init_runtime_hw(struct snd_pcm_hardware *hw,
+static void soc_pcm_init_runtime_hw(struct snd_pcm_runtime *runtime,
struct snd_soc_pcm_stream *codec_stream,
struct snd_soc_pcm_stream *cpu_stream)
{
- hw->rate_min = max(codec_stream->rate_min, cpu_stream->rate_min);
- hw->rate_max = max(codec_stream->rate_max, cpu_stream->rate_max);
+ struct snd_pcm_hardware *hw = &runtime->hw;
+
hw->channels_min = max(codec_stream->channels_min,
cpu_stream->channels_min);
hw->channels_max = min(codec_stream->channels_max,
if (cpu_stream->rates
& (SNDRV_PCM_RATE_KNOT | SNDRV_PCM_RATE_CONTINUOUS))
hw->rates |= codec_stream->rates;
+
+ snd_pcm_limit_hw_rates(runtime);
+
+ hw->rate_min = max(hw->rate_min, cpu_stream->rate_min);
+ hw->rate_min = max(hw->rate_min, codec_stream->rate_min);
+ hw->rate_max = min_not_zero(hw->rate_max, cpu_stream->rate_max);
+ hw->rate_max = min_not_zero(hw->rate_max, codec_stream->rate_max);
}
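Worked through once, since the merge above is easy to misread: min_not_zero()
treats 0 as "no limit". If, say, the CPU DAI reports rate_min 8000 with
rate_max 0 (continuous, unbounded) and the codec reports 5512..48000, the
merged window ends up rate_min 8000 (the larger minimum) and rate_max 48000
(the smaller nonzero maximum).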
/*
/* Check that the codec and cpu DAIs are compatible */
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- soc_pcm_init_runtime_hw(&runtime->hw, &codec_dai_drv->playback,
+ soc_pcm_init_runtime_hw(runtime, &codec_dai_drv->playback,
&cpu_dai_drv->playback);
} else {
- soc_pcm_init_runtime_hw(&runtime->hw, &codec_dai_drv->capture,
+ soc_pcm_init_runtime_hw(runtime, &codec_dai_drv->capture,
&cpu_dai_drv->capture);
}
ret = -EINVAL;
- snd_pcm_limit_hw_rates(runtime);
if (!runtime->hw.rates) {
printk(KERN_ERR "ASoC: %s <-> %s No matching rates\n",
codec_dai->name, cpu_dai->name);
struct snd_soc_platform *platform = rtd->platform;
struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
struct snd_soc_dai *codec_dai = rtd->codec_dai;
- struct snd_soc_codec *codec = rtd->codec;
+ bool playback = substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
/* apply codec digital mute */
- if (!codec->active)
+ if ((playback && codec_dai->playback_active == 1) ||
+ (!playback && codec_dai->capture_active == 1))
snd_soc_dai_digital_mute(codec_dai, 1, substream->stream);
/* free any machine hw params */
unsigned int fmt)
{
struct tegra20_i2s *i2s = snd_soc_dai_get_drvdata(dai);
- unsigned int mask, val;
+ unsigned int mask = 0, val = 0;
switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
case SND_SOC_DAIFMT_NB_NF:
return -EINVAL;
}
- mask = TEGRA20_I2S_CTRL_MASTER_ENABLE;
+ mask |= TEGRA20_I2S_CTRL_MASTER_ENABLE;
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBS_CFS:
- val = TEGRA20_I2S_CTRL_MASTER_ENABLE;
+ val |= TEGRA20_I2S_CTRL_MASTER_ENABLE;
break;
case SND_SOC_DAIFMT_CBM_CFM:
break;
{
struct device *dev = dai->dev;
struct tegra20_spdif *spdif = snd_soc_dai_get_drvdata(dai);
- unsigned int mask, val;
+ unsigned int mask = 0, val = 0;
int ret, spdifclock;
- mask = TEGRA20_SPDIF_CTRL_PACK |
- TEGRA20_SPDIF_CTRL_BIT_MODE_MASK;
+ mask |= TEGRA20_SPDIF_CTRL_PACK |
+ TEGRA20_SPDIF_CTRL_BIT_MODE_MASK;
switch (params_format(params)) {
case SNDRV_PCM_FORMAT_S16_LE:
- val = TEGRA20_SPDIF_CTRL_PACK |
- TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT;
+ val |= TEGRA20_SPDIF_CTRL_PACK |
+ TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT;
break;
default:
return -EINVAL;
unsigned int fmt)
{
struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai);
- unsigned int mask, val;
+ unsigned int mask = 0, val = 0;
switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
case SND_SOC_DAIFMT_NB_NF:
return -EINVAL;
}
- mask = TEGRA30_I2S_CTRL_MASTER_ENABLE;
+ mask |= TEGRA30_I2S_CTRL_MASTER_ENABLE;
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBS_CFS:
- val = TEGRA30_I2S_CTRL_MASTER_ENABLE;
+ val |= TEGRA30_I2S_CTRL_MASTER_ENABLE;
break;
case SND_SOC_DAIFMT_CBM_CFM:
break;
if (usb_pipein(ep->pipe) ||
snd_usb_endpoint_implicit_feedback_sink(ep)) {
+ urb_packs = packs_per_ms;
+ /*
+ * Wireless devices can poll at a max rate of once per 4ms.
+ * For dataintervals less than 5, increase the packet count to
+ * allow the host controller to use bursting to fill in the
+ * gaps.
+ */
+ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_WIRELESS) {
+ int interval = ep->datainterval;
+ while (interval < 5) {
+ urb_packs <<= 1;
+ ++interval;
+ }
+ }
/* make capture URBs <= 1 ms and smaller than a period */
- urb_packs = min(max_packs_per_urb, packs_per_ms);
+ urb_packs = min(max_packs_per_urb, urb_packs);
while (urb_packs > 1 && urb_packs * maxsize >= period_bytes)
urb_packs >>= 1;
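Worked through for the wireless branch above: with ep->datainterval == 1, the
loop doubles urb_packs four times (interval stepping 1 through 4), so the 1 ms
budget of packs_per_ms packets grows 16-fold before the min() against
max_packs_per_urb and the period-size halving loop trim it back down.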
ep->nurbs = MAX_URBS;
return err;
}
- return err;
+ return 0;
}
int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
static enum event_type
process_op(struct event_format *event, struct print_arg *arg, char **tok);
+/*
+ * For __print_symbolic() and __print_flags(), we need to completely
+ * evaluate the first argument, which defines what to print next.
+ */
+static enum event_type
+process_field_arg(struct event_format *event, struct print_arg *arg, char **tok)
+{
+ enum event_type type;
+
+ type = process_arg(event, arg, tok);
+
+	while (type == EVENT_OP)
+		type = process_op(event, arg, tok);
+
+ return type;
+}
+
static enum event_type
process_cond(struct event_format *event, struct print_arg *top, char **tok)
{
goto out_free;
}
- type = process_arg(event, field, &token);
+ type = process_field_arg(event, field, &token);
/* Handle operations in the first argument */
while (type == EVENT_OP)
goto out_free;
}
- type = process_arg(event, field, &token);
+ type = process_field_arg(event, field, &token);
+
if (test_type_token(type, token, EVENT_DELIM, ","))
goto out_free_field;
* is in the bottom half of the 32 bit field.
*/
offset &= 0xffff;
- val = (unsigned long long)(data + offset);
+ val = (unsigned long long)((unsigned long)data + offset);
break;
default: /* not sure what to do there */
return 0;
if (evsel->idx == (int) desc[i].leader_idx) {
evsel->leader = evsel;
/* {anon_group} is a dummy name */
- if (strcmp(desc[i].name, "{anon_group}"))
+ if (strcmp(desc[i].name, "{anon_group}")) {
evsel->group_name = desc[i].name;
+ desc[i].name = NULL;
+ }
evsel->nr_members = desc[i].nr_members;
if (i >= nr_groups || nr > 0) {
ret = 0;
out_free:
- while ((int) --i >= 0)
+ for (i = 0; i < nr_groups; i++)
free(desc[i].name);
free(desc);
/* Override latest entry if it had no specific time coverage */
if (!curr->start) {
comm__override(curr, str, timestamp);
- return 0;
+ } else {
+ new = comm__new(str, timestamp);
+ if (!new)
+ return -ENOMEM;
+ list_add(&new->list, &thread->comm_list);
}
- new = comm__new(str, timestamp);
- if (!new)
- return -ENOMEM;
-
- list_add(&new->list, &thread->comm_list);
thread->comm_set = true;
return 0;
.fi
.SH "SEE ALSO"
.LP
-cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1)
+cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1),
+cpupower\-idle\-set(1)
--- /dev/null
+.TH "CPUPOWER-IDLE-SET" "1" "0.1" "" "cpupower Manual"
+.SH "NAME"
+.LP
+cpupower idle\-set \- Utility to set cpu idle state specific kernel options
+.SH "SYNTAX"
+.LP
+cpupower [ \-c cpulist ] idle\-set [\fIoptions\fP]
+.SH "DESCRIPTION"
+.LP
+The cpupower idle\-set subcommand allows setting kernel options specific to
+cpu idle states, also called cpu sleep states. One example is disabling
+sleep states, which can be handy for power vs. performance tuning.
+.SH "OPTIONS"
+.LP
+.TP
+\fB\-d\fR \fB\-\-disable\fR <STATE_NO>
+Disable the processor sleep state with the given number.
+.TP
+\fB\-e\fR \fB\-\-enable\fR <STATE_NO>
+Enable the processor sleep state with the given number.
+
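+.SH "EXAMPLES"
+.LP
+Disable sleep state number 2 on CPUs 0 and 1. The state number here is
+illustrative; numbering varies by platform and can be listed with the
+cpupower idle\-info command:
+.nf
+cpupower \-c 0,1 idle\-set \-d 2
+.fi
+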
+.SH "REMARKS"
+.LP
+Cpuidle Governors Policy on Disabling Sleep States
+
+.RS 4
+Depending on the cpuidle governor in use, which implements the kernel's
+policy for choosing sleep states, deeper sleep states on this core might
+get disabled as well.
+
+There are two cpuidle governors, ladder and menu. While the ladder
+governor is always available if CONFIG_CPU_IDLE is selected, the
+menu governor additionally requires CONFIG_NO_HZ.
+
+The behavior and the effect of the disable variable depend on the
+implementation of the particular governor. In the ladder governor, for
+example, it is not coherent: if one disables a light state, all deeper
+states are disabled as well. Likewise, if one enables a deep state while a
+lighter state is still disabled, this has no effect.
+.RE
+.LP
+Disabling the Lightest Sleep State may not have any Effect
+
+.RS 4
+If the criteria to enter deeper sleep states are not met and the lightest
+sleep state is chosen when idle, the kernel may still enter this sleep state,
+irrespective of whether it is disabled or not. This is also reflected in
+the usage count of the disabled sleep state when using the cpupower
+idle\-info command.
+.RE
+.LP
+Selecting specific CPU Cores
+
+.RS 4
+By default, processor sleep states of all CPU cores are set. Please refer
+to the cpupower(1) manpage, \-\-cpu option section, for how to disable
+C\-states of specific cores.
+.RE
+.SH "FILES"
+.nf
+\fI/sys/devices/system/cpu/cpu*/cpuidle/state*\fP
+\fI/sys/devices/system/cpu/cpuidle/*\fP
+.fi
+.SH "AUTHORS"
+.nf
+Thomas Renninger <trenn@suse.de>
+.fi
+.SH "SEE ALSO"
+.LP
+cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1),
+cpupower\-idle\-info(1)
#include "helpers/bitmask.h"
static struct option set_opts[] = {
- { .name = "perf-bias", .has_arg = optional_argument, .flag = NULL, .val = 'b'},
- { .name = "sched-mc", .has_arg = optional_argument, .flag = NULL, .val = 'm'},
- { .name = "sched-smt", .has_arg = optional_argument, .flag = NULL, .val = 's'},
+ { .name = "perf-bias", .has_arg = required_argument, .flag = NULL, .val = 'b'},
+ { .name = "sched-mc", .has_arg = required_argument, .flag = NULL, .val = 'm'},
+ { .name = "sched-smt", .has_arg = required_argument, .flag = NULL, .val = 's'},
{ },
};
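The has_arg change above is behavioral, not cosmetic: with optional_argument,
getopt_long(3) only attaches a value written in the glued spelling
(--perf-bias=10); in the conventional space-separated spelling, optarg stays
NULL and the value is left behind as a stray argument. required_argument
accepts both, e.g. (whether the perf-bias knob exists depends on the CPU):

	cpupower set --perf-bias 10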
int sysfs_is_idlestate_disabled(unsigned int cpu,
unsigned int idlestate)
{
- if (sysfs_get_idlestate_count(cpu) < idlestate)
+ if (sysfs_get_idlestate_count(cpu) <= idlestate)
return -1;
if (!sysfs_idlestate_file_exists(cpu, idlestate,
char value[SYSFS_PATH_MAX];
int bytes_written;
- if (sysfs_get_idlestate_count(cpu) < idlestate)
+ if (sysfs_get_idlestate_count(cpu) <= idlestate)
return -1;
if (!sysfs_idlestate_file_exists(cpu, idlestate,
* turbostat -- show CPU frequency and C-state residency
* on modern Intel turbo-capable processors.
*
- * Copyright (c) 2012 Intel Corporation.
+ * Copyright (c) 2013 Intel Corporation.
* Len Brown <len.brown@intel.com>
*
* This program is free software; you can redistribute it and/or modify it
unsigned int do_nhm_cstates;
unsigned int do_snb_cstates;
unsigned int do_c8_c9_c10;
+unsigned int do_slm_cstates;
+unsigned int use_c1_residency_msr;
unsigned int has_aperf;
unsigned int has_epb;
unsigned int units = 1000000000; /* Ghz etc */
#define RAPL_DRAM (1 << 3)
#define RAPL_PKG_PERF_STATUS (1 << 4)
#define RAPL_DRAM_PERF_STATUS (1 << 5)
+#define RAPL_PKG_POWER_INFO (1 << 6)
+#define RAPL_CORE_POLICY (1 << 7)
#define TJMAX_DEFAULT 100
#define MAX(a, b) ((a) > (b) ? (a) : (b))
unsigned long long tsc;
unsigned long long aperf;
unsigned long long mperf;
- unsigned long long c1; /* derived */
+ unsigned long long c1;
unsigned long long extra_msr64;
unsigned long long extra_delta64;
unsigned long long extra_msr32;
outp += sprintf(outp, " MSR 0x%03X", extra_msr_offset64);
if (do_nhm_cstates)
outp += sprintf(outp, " %%c1");
- if (do_nhm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates)
outp += sprintf(outp, " %%c3");
if (do_nhm_cstates)
outp += sprintf(outp, " %%c6");
if (do_snb_cstates)
outp += sprintf(outp, " %%pc2");
- if (do_nhm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates)
outp += sprintf(outp, " %%pc3");
- if (do_nhm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates)
outp += sprintf(outp, " %%pc6");
if (do_snb_cstates)
outp += sprintf(outp, " %%pc7");
if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE))
goto done;
- if (do_nhm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates)
outp += sprintf(outp, " %6.2f", 100.0 * c->c3/t->tsc);
if (do_nhm_cstates)
outp += sprintf(outp, " %6.2f", 100.0 * c->c6/t->tsc);
if (do_snb_cstates)
outp += sprintf(outp, " %6.2f", 100.0 * p->pc2/t->tsc);
- if (do_nhm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates)
outp += sprintf(outp, " %6.2f", 100.0 * p->pc3/t->tsc);
- if (do_nhm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates)
outp += sprintf(outp, " %6.2f", 100.0 * p->pc6/t->tsc);
if (do_snb_cstates)
outp += sprintf(outp, " %6.2f", 100.0 * p->pc7/t->tsc);
}
- /*
- * As counter collection is not atomic,
- * it is possible for mperf's non-halted cycles + idle states
- * to exceed TSC's all cycles: show c1 = 0% in that case.
- */
- if ((old->mperf + core_delta->c3 + core_delta->c6 + core_delta->c7) > old->tsc)
- old->c1 = 0;
- else {
- /* normal case, derive c1 */
- old->c1 = old->tsc - old->mperf - core_delta->c3
+ if (use_c1_residency_msr) {
+ /*
+ * Some models have a dedicated C1 residency MSR,
+ * which should be more accurate than the derivation below.
+ */
+ } else {
+ /*
+ * As counter collection is not atomic,
+ * it is possible for mperf's non-halted cycles + idle states
+ * to exceed TSC's all cycles: show c1 = 0% in that case.
+ */
+ if ((old->mperf + core_delta->c3 + core_delta->c6 + core_delta->c7) > old->tsc)
+ old->c1 = 0;
+ else {
+ /* normal case, derive c1 */
+ old->c1 = old->tsc - old->mperf - core_delta->c3
- core_delta->c6 - core_delta->c7;
+ }
}
if (old->mperf == 0) {
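A quick numeric check of the derivation above, using made-up interval values:
with tsc = 1,000,000 cycles, mperf = 600,000 unhalted cycles, and c3 + c6 + c7
residency deltas totalling 250,000 cycles, the attributed C1 residency is
1,000,000 - 600,000 - 250,000 = 150,000 cycles, i.e. 15% of the interval. Had
the sum exceeded tsc, the clamp above would report c1 = 0 rather than a
negative value.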
if (get_msr(cpu, extra_msr_offset64, &t->extra_msr64))
return -5;
+ if (use_c1_residency_msr) {
+ if (get_msr(cpu, MSR_CORE_C1_RES, &t->c1))
+ return -6;
+ }
+
/* collect core counters only for 1st thread in core */
if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE))
return 0;
- if (do_nhm_cstates) {
+ if (do_nhm_cstates && !do_slm_cstates) {
if (get_msr(cpu, MSR_CORE_C3_RESIDENCY, &c->c3))
return -6;
+ }
+
+ if (do_nhm_cstates) {
if (get_msr(cpu, MSR_CORE_C6_RESIDENCY, &c->c6))
return -7;
}
if (!(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
return 0;
- if (do_nhm_cstates) {
+ if (do_nhm_cstates && !do_slm_cstates) {
if (get_msr(cpu, MSR_PKG_C3_RESIDENCY, &p->pc3))
return -9;
if (get_msr(cpu, MSR_PKG_C6_RESIDENCY, &p->pc6))
ratio, bclk, ratio * bclk);
get_msr(0, MSR_IA32_POWER_CTL, &msr);
- fprintf(stderr, "cpu0: MSR_IA32_POWER_CTL: 0x%08llx (C1E: %sabled)\n",
+ fprintf(stderr, "cpu0: MSR_IA32_POWER_CTL: 0x%08llx (C1E auto-promotion: %sabled)\n",
msr, msr & 0x2 ? "EN" : "DIS");
if (!do_ivt_turbo_ratio_limit)
switch(msr & 0x7) {
case 0:
- fprintf(stderr, "pc0");
+ fprintf(stderr, do_slm_cstates ? "no pkg states" : "pc0");
break;
case 1:
- fprintf(stderr, do_snb_cstates ? "pc2" : "pc0");
+ fprintf(stderr, do_slm_cstates ? "no pkg states" : do_snb_cstates ? "pc2" : "pc0");
break;
case 2:
- fprintf(stderr, do_snb_cstates ? "pc6-noret" : "pc3");
+ fprintf(stderr, do_slm_cstates ? "invalid" : do_snb_cstates ? "pc6-noret" : "pc3");
break;
case 3:
- fprintf(stderr, "pc6");
+ fprintf(stderr, do_slm_cstates ? "invalid" : "pc6");
break;
case 4:
- fprintf(stderr, "pc7");
+ fprintf(stderr, do_slm_cstates ? "pc4" : "pc7");
break;
case 5:
- fprintf(stderr, do_snb_cstates ? "pc7s" : "invalid");
+ fprintf(stderr, do_slm_cstates ? "invalid" : do_snb_cstates ? "pc7s" : "invalid");
+ break;
+ case 6:
+ fprintf(stderr, do_slm_cstates ? "pc6" : "invalid");
break;
case 7:
- fprintf(stderr, "unlimited");
+ fprintf(stderr, do_slm_cstates ? "pc7" : "unlimited");
break;
default:
fprintf(stderr, "invalid");
case 0x3F: /* HSW */
case 0x45: /* HSW */
case 0x46: /* HSW */
+ case 0x37: /* BYT */
+ case 0x4D: /* AVN */
return 1;
case 0x2E: /* Nehalem-EX Xeon - Beckton */
case 0x2F: /* Westmere-EX Xeon - Eagleton */
#define RAPL_POWER_GRANULARITY 0x7FFF /* 15 bit power granularity */
#define RAPL_TIME_GRANULARITY 0x3F /* 6 bit time granularity */
+double get_tdp(unsigned int model)
+{
+ unsigned long long msr;
+
+ if (do_rapl & RAPL_PKG_POWER_INFO)
+ if (!get_msr(0, MSR_PKG_POWER_INFO, &msr))
+ return ((msr >> 0) & RAPL_POWER_GRANULARITY) * rapl_power_units;
+
+ switch (model) {
+ case 0x37:
+ case 0x4D:
+ return 30.0;
+ default:
+ return 135.0;
+ }
+}
+
+
/*
* rapl_probe()
*
- * sets do_rapl
+ * sets do_rapl, rapl_power_units, rapl_energy_units, rapl_time_units
*/
void rapl_probe(unsigned int family, unsigned int model)
{
unsigned long long msr;
+ unsigned int time_unit;
double tdp;
if (!genuine_intel)
case 0x3F: /* HSW */
case 0x45: /* HSW */
case 0x46: /* HSW */
- do_rapl = RAPL_PKG | RAPL_CORES | RAPL_GFX;
+ do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_GFX | RAPL_PKG_POWER_INFO;
break;
case 0x2D:
case 0x3E:
- do_rapl = RAPL_PKG | RAPL_CORES | RAPL_DRAM | RAPL_PKG_PERF_STATUS | RAPL_DRAM_PERF_STATUS;
+ do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_PKG_PERF_STATUS | RAPL_DRAM_PERF_STATUS | RAPL_PKG_POWER_INFO;
+ break;
+ case 0x37: /* BYT */
+ case 0x4D: /* AVN */
+		do_rapl = RAPL_PKG | RAPL_CORES;
break;
default:
return;
return;
rapl_power_units = 1.0 / (1 << (msr & 0xF));
- rapl_energy_units = 1.0 / (1 << (msr >> 8 & 0x1F));
- rapl_time_units = 1.0 / (1 << (msr >> 16 & 0xF));
+ if (model == 0x37)
+ rapl_energy_units = 1.0 * (1 << (msr >> 8 & 0x1F)) / 1000000;
+ else
+ rapl_energy_units = 1.0 / (1 << (msr >> 8 & 0x1F));
- /* get TDP to determine energy counter range */
- if (get_msr(0, MSR_PKG_POWER_INFO, &msr))
- return;
+ time_unit = msr >> 16 & 0xF;
+ if (time_unit == 0)
+ time_unit = 0xA;
- tdp = ((msr >> 0) & RAPL_POWER_GRANULARITY) * rapl_power_units;
+ rapl_time_units = 1.0 / (1 << (time_unit));
- rapl_joule_counter_range = 0xFFFFFFFF * rapl_energy_units / tdp;
+ tdp = get_tdp(model);
+ rapl_joule_counter_range = 0xFFFFFFFF * rapl_energy_units / tdp;
if (verbose)
- fprintf(stderr, "RAPL: %.0f sec. Joule Counter Range\n", rapl_joule_counter_range);
+ fprintf(stderr, "RAPL: %.0f sec. Joule Counter Range, at %.0f Watts\n", rapl_joule_counter_range, tdp);
return;
}
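To make the counter-range estimate concrete, with typical values assumed for
illustration: an energy unit of 2^-16 J per LSB gives the 32-bit counter a
span of 2^32 * 2^-16 = 65536 J, and at the 135 W default TDP that is
65536 / 135, roughly 485 seconds, so sample intervals approaching that figure
risk the counter wrapping between reads.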
{
unsigned long long msr;
int cpu;
- double local_rapl_power_units, local_rapl_energy_units, local_rapl_time_units;
if (!do_rapl)
return 0;
if (get_msr(cpu, MSR_RAPL_POWER_UNIT, &msr))
return -1;
- local_rapl_power_units = 1.0 / (1 << (msr & 0xF));
- local_rapl_energy_units = 1.0 / (1 << (msr >> 8 & 0x1F));
- local_rapl_time_units = 1.0 / (1 << (msr >> 16 & 0xF));
-
- if (local_rapl_power_units != rapl_power_units)
- fprintf(stderr, "cpu%d, ERROR: Power units mis-match\n", cpu);
- if (local_rapl_energy_units != rapl_energy_units)
- fprintf(stderr, "cpu%d, ERROR: Energy units mis-match\n", cpu);
- if (local_rapl_time_units != rapl_time_units)
- fprintf(stderr, "cpu%d, ERROR: Time units mis-match\n", cpu);
-
if (verbose) {
fprintf(stderr, "cpu%d: MSR_RAPL_POWER_UNIT: 0x%08llx "
"(%f Watts, %f Joules, %f sec.)\n", cpu, msr,
- local_rapl_power_units, local_rapl_energy_units, local_rapl_time_units);
+ rapl_power_units, rapl_energy_units, rapl_time_units);
}
- if (do_rapl & RAPL_PKG) {
+ if (do_rapl & RAPL_PKG_POWER_INFO) {
+
if (get_msr(cpu, MSR_PKG_POWER_INFO, &msr))
return -5;
((msr >> 32) & RAPL_POWER_GRANULARITY) * rapl_power_units,
((msr >> 48) & RAPL_TIME_GRANULARITY) * rapl_time_units);
+ }
+ if (do_rapl & RAPL_PKG) {
+
if (get_msr(cpu, MSR_PKG_POWER_LIMIT, &msr))
return -9;
print_power_limit_msr(cpu, msr, "DRAM Limit");
}
- if (do_rapl & RAPL_CORES) {
+ if (do_rapl & RAPL_CORE_POLICY) {
if (verbose) {
if (get_msr(cpu, MSR_PP0_POLICY, &msr))
return -7;
fprintf(stderr, "cpu%d: MSR_PP0_POLICY: %lld\n", cpu, msr & 0xF);
+ }
+ }
+ if (do_rapl & RAPL_CORES) {
+ if (verbose) {
if (get_msr(cpu, MSR_PP0_POWER_LIMIT, &msr))
return -9;
}
+int is_slm(unsigned int family, unsigned int model)
+{
+ if (!genuine_intel)
+ return 0;
+ switch (model) {
+ case 0x37: /* BYT */
+ case 0x4D: /* AVN */
+ return 1;
+ }
+ return 0;
+}
+
+#define SLM_BCLK_FREQS 5
+double slm_freq_table[SLM_BCLK_FREQS] = { 83.3, 100.0, 133.3, 116.7, 80.0};
+
+double slm_bclk(void)
+{
+ unsigned long long msr = 3;
+ unsigned int i;
+ double freq;
+
+ if (get_msr(0, MSR_FSB_FREQ, &msr))
+ fprintf(stderr, "SLM BCLK: unknown\n");
+
+ i = msr & 0xf;
+ if (i >= SLM_BCLK_FREQS) {
+ fprintf(stderr, "SLM BCLK[%d] invalid\n", i);
+		i = 3;	/* fall back to a valid table index */
+ }
+ freq = slm_freq_table[i];
+
+ fprintf(stderr, "SLM BCLK: %.1f Mhz\n", freq);
+
+ return freq;
+}
+
double discover_bclk(unsigned int family, unsigned int model)
{
if (is_snb(family, model))
return 100.00;
+ else if (is_slm(family, model))
+ return slm_bclk();
else
return 133.33;
}
fprintf(stderr, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C)\n",
cpu, msr, target_c_local);
- if (target_c_local < 85 || target_c_local > 120)
+ if (target_c_local < 85 || target_c_local > 127)
goto guess;
tcc_activation_temp = target_c_local;
do_smi = do_nhm_cstates;
do_snb_cstates = is_snb(family, model);
do_c8_c9_c10 = has_c8_c9_c10(family, model);
+ do_slm_cstates = is_slm(family, model);
bclk = discover_bclk(family, model);
do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
cmdline(argc, argv);
if (verbose)
- fprintf(stderr, "turbostat v3.4 April 17, 2013"
+ fprintf(stderr, "turbostat v3.5 April 26, 2013"
" - Len Brown <lenb@kernel.org>\n");
turbostat_init();
CC = $(CROSS_COMPILE)gcc
PTHREAD_LIBS = -lpthread
WARNINGS = -Wall -Wextra
-CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS) -I../include
+CFLAGS = $(WARNINGS) -g -I../include
+LDFLAGS = $(PTHREAD_LIBS)
all: testusb ffs-test
%: %.c
- $(CC) $(CFLAGS) -o $@ $^
+ $(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)
clean:
$(RM) testusb ffs-test
int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len)
{
- return kvm_write_guest_page(kvm, gfn, (const void *) empty_zero_page,
- offset, len);
+ const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
+
+ return kvm_write_guest_page(kvm, gfn, zero_page, offset, len);
}
EXPORT_SYMBOL_GPL(kvm_clear_guest_page);
int r;
struct kvm_vcpu *vcpu, *v;
+ if (id >= KVM_MAX_VCPUS)
+ return -EINVAL;
+
vcpu = kvm_arch_vcpu_create(kvm, id);
if (IS_ERR(vcpu))
return PTR_ERR(vcpu);