18.6 Performance Characteristics Summary
Choosing the right collection type often involves considering the time complexity of common operations. The table below summarizes typical complexities (average or amortized where applicable):
Collection | Access (Index/Key) | Insert (End/Any) | Remove (End/Any) | Iteration Order | Key Notes |
---|---|---|---|---|---|
Vec<T> | O(1) / N/A | O(1) * / O(n) | O(1) / O(n) | Insertion | Contiguous memory, cache-friendly. *Amortized. |
String | N/A (byte slicing only) | O(1) * (push) / N/A | O(1) (pop) / O(n) | UTF-8 bytes | Vec<u8> with a UTF-8 guarantee; appending is amortized O(1). |
HashMap<K, V> | O(1) ** | O(1) ** | O(1) ** | Arbitrary | Requires Hash + Eq keys. **Average case. |
BTreeMap<K, V> | O(log n) | O(log n) | O(log n) | Sorted by key | Requires Ord keys. Point lookups are typically slower than HashMap. |
HashSet<T> | O(1) ** (contains) | O(1) ** | O(1) ** | Arbitrary | Unique elements, hashed. **Average case. |
BTreeSet<T> | O(log n) (contains) | O(log n) | O(log n) | Sorted | Unique elements, ordered. Requires Ord. |
VecDeque<T> | O(1) | O(1) * (ends) / O(n) | O(1) (ends) / O(n) | Insertion | Ring buffer; indexing is O(1). *Amortized at either end. |
LinkedList<T> | O(n) | O(1) *** | O(1) *** | Insertion | Poor cache locality. ***Requires a known node/cursor. |
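To make two of these rows concrete, here is a small illustrative sketch (not a benchmark) contrasting HashMap's arbitrary iteration order with BTreeMap's sorted order, and using VecDeque for cheap pushes and pops at both ends:

```rust
use std::collections::{BTreeMap, HashMap, VecDeque};

fn main() {
    // HashMap: O(1) average-case insert/lookup, arbitrary iteration order.
    let mut scores = HashMap::new();
    scores.insert("carol", 30);
    scores.insert("alice", 10);
    scores.insert("bob", 20);
    println!("{:?}", scores); // order is unspecified and may differ between runs

    // BTreeMap: O(log n) operations, but iteration is always sorted by key.
    let sorted: BTreeMap<_, _> = scores.iter().collect();
    println!("{:?}", sorted); // {"alice": 10, "bob": 20, "carol": 30}

    // VecDeque: amortized O(1) pushes and O(1) pops at either end.
    let mut queue = VecDeque::new();
    queue.push_back("second");
    queue.push_front("first");
    queue.push_back("third");
    assert_eq!(queue.pop_front(), Some("first"));
}
```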
Notes:
* Amortized O(1): The operation is very fast on average, but an occasional call can be O(n) due to internal resizing.
** Average-case O(1): Assumes a good hash function and few collisions; the worst case can be O(n).
*** O(1) only if you already have direct access (e.g., a cursor) to the node or its neighbor involved in the operation; finding the node first is O(n).
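One practical consequence of the amortized and average-case caveats: if you know roughly how many elements you will store, pre-allocating avoids the occasional O(n) resize altogether. A minimal sketch (the count of 1_000 is just an arbitrary example value):

```rust
use std::collections::HashMap;

fn main() {
    // Pre-sizing a Vec means push never has to reallocate,
    // so every push is a true O(1), not merely amortized O(1).
    let mut v: Vec<u64> = Vec::with_capacity(1_000);
    let initial_capacity = v.capacity();
    for i in 0..1_000 {
        v.push(i);
    }
    assert_eq!(v.capacity(), initial_capacity); // no reallocation occurred

    // The same idea applies to HashMap: with_capacity reserves enough
    // space to hold that many entries without growing the table.
    let mut m: HashMap<u64, u64> = HashMap::with_capacity(1_000);
    for i in 0..1_000 {
        m.insert(i, i * i);
    }
    assert_eq!(m.len(), 1_000);
}
```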