
C# Collections Interview Questions and Answers

May 28, 2023 | .NET, C#

Preparing for your next big C# interview? You likely know that one of the essential topics to master is C# collections. In this comprehensive guide, we’ll tackle some of the most frequently asked C# collections interview questions to help you brush up on your knowledge and build your confidence.

Both beginner and experienced developers can benefit from this extensive list of questions and detailed answers, designed to cover core concepts as well as potential real-world applications of collections in C#.

So, without further ado, let’s dive into these collections C# interview questions and get you one step closer to landing your dream job!

In C# Collections, how does the performance of a List compare to a LinkedList when it comes to insertion and deletion operations in the middle of the collection? Explain why.

Answer

When it comes to insertion and deletion operations in the middle of the collection, LinkedList<T> usually has better performance than List<T>. The performance difference arises because of the way both these collections are organized internally:

  • List<T> is based on an underlying array. When inserting or deleting elements in the middle, the elements after the insertion/deletion point have to be moved to maintain a continuous array, resulting in an O(n) time complexity.
  • LinkedList<T>, on the other hand, is a doubly-linked list. Insertion and deletion only require reassigning node pointers. Although finding an element in a linked list is O(n), once the target node has been located, insertion and deletion are O(1).

Therefore, if you need to frequently insert or delete elements in the middle of a collection, LinkedList<T> is more suitable from a performance perspective.
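As a minimal sketch of this difference: LinkedList<T> exposes its nodes directly, so once you hold a LinkedListNode<T>, the insertion itself only rewires pointers.

```csharp
using System;
using System.Collections.Generic;

var linked = new LinkedList<int>(new[] { 1, 2, 4, 5 });

// Finding the node is O(n)...
LinkedListNode<int> node = linked.Find(2);

// ...but once the node is held, insertion is O(1): only pointers change.
linked.AddAfter(node, 3);
Console.WriteLine(string.Join(", ", linked)); // 1, 2, 3, 4, 5

// List<T>.Insert must shift every element after index 2, an O(n) operation.
var list = new List<int> { 1, 2, 4, 5 };
list.Insert(2, 3);
Console.WriteLine(string.Join(", ", list)); // 1, 2, 3, 4, 5
```

In practice, this advantage only pays off when you already hold a node reference; if every insertion starts with a search, the O(n) lookup dominates either way.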

Explain the differences and use cases between IEnumerable, ICollection, and IList interfaces in C# Collections.

Answer

  • IEnumerable: This interface allows you to iterate through a collection without modifying its elements. IEnumerable<T> only exposes the GetEnumerator method, which returns an IEnumerator to traverse the collection’s elements. Any collection that requires enumeration should implement this interface.
  • ICollection: This interface extends IEnumerable<T> and adds basic collection functionality like adding, removing, and checking if an element is present in the collection. It also defines additional properties for the collection, like Count and IsReadOnly. Use ICollection<T> when you need more control over collection manipulation than just iterating.
  • IList: This interface extends ICollection<T> and adds the ability to insert, remove, and update elements at specific indices in the collection. It also provides the ability to retrieve elements by index using the indexer property. Use IList<T> when you need even more control over the collection, such as ordered elements and performing operations by index.

How can you implement a custom IComparer to be used with C# Collection classes that can have configurable sorting behavior during runtime?

Answer

To create a custom IComparer<T>, you need to implement the IComparer<T> interface and define the Compare method. Depending on your requirements, you can make the sorting behavior configurable during runtime by adding properties or methods to control how the comparison is performed.

Here’s an example of a custom (configurable) IComparer<T> for sorting Person instances:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonComparer : IComparer<Person>
{
    public enum SortBy
    {
        FirstName,
        LastName,
    }

    public SortBy SortOrder { get; set; } = SortBy.FirstName;

    public int Compare(Person x, Person y)
    {
        switch (SortOrder)
        {
            case SortBy.FirstName:
                return string.Compare(x.FirstName, y.FirstName, StringComparison.Ordinal);
            case SortBy.LastName:
                return string.Compare(x.LastName, y.LastName, StringComparison.Ordinal);
            default:
                return 0;
        }
    }
}

var people = new List<Person> { ... };
var comparer = new PersonComparer { SortOrder = PersonComparer.SortBy.LastName };
people.Sort(comparer);

In this example, we’ve defined a custom IComparer<Person> which compares Person objects either by FirstName or LastName, depending on a configurable SortOrder property.

What are the advantages of using a System.Collections.Concurrent namespace over the regular System.Collections.Generic namespace for collections, especially in multithreading scenarios?

Answer

The System.Collections.Concurrent namespace provides thread-safe collection classes that are specifically designed for use in multi-threaded or parallel programming scenarios. These collections can be safely accessed and modified by multiple threads concurrently without the need for additional locking or synchronization mechanisms.

Advantages of using System.Collections.Concurrent collections include:

  • Thread-safety: Concurrent collections manage their internal state to avoid race conditions and ensure data consistency between threads, reducing the chance of synchronization issues or data corruption.
  • Performance: Using concurrent collections can result in better performance compared to using traditional collections with manual synchronization. They’re optimized for working with multiple threads and can help reduce contention and lock overhead.
  • Ease of use: Reduces the complexity of manually implementing synchronization mechanisms like locks, monitors, or semaphores when working with shared collections.

Some examples of concurrent collections include ConcurrentBag<T>, ConcurrentQueue<T>, ConcurrentStack<T>, and ConcurrentDictionary<TKey, TValue>.
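As a quick sketch, ConcurrentDictionary<TKey, TValue> lets many threads update the same entry atomically, with no explicit lock:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var counts = new ConcurrentDictionary<string, int>();

// 1,000 parallel iterations increment the same key. AddOrUpdate is atomic
// per key, so no manual locking is required and no updates are lost.
Parallel.For(0, 1000, _ =>
    counts.AddOrUpdate("hits", 1, (key, current) => current + 1));

Console.WriteLine(counts["hits"]); // 1000
```

The same loop against a plain Dictionary<string, int> would be a race condition and could even corrupt the dictionary's internal state.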

Explain the concept of a “bucket” in the context of the Dictionary class in C# Collections and how it affects the performance during hash collisions.

Answer

In the Dictionary<TKey, TValue> implementation, a “bucket” is an internal slot that holds entries. The dictionary uses the hash code of the key to determine which bucket an entry should be stored in. Each bucket can hold one or more entries, conceptually chained together as a linked list.

When two different keys have the same hash code, a hash collision occurs. When a collision happens, both entries will end up being stored in the same bucket but as separate nodes in the linked list. As more collisions occur within a bucket, the linked list within that bucket grows, causing performance issues when accessing, inserting, or deleting elements.

The performance of a Dictionary<TKey, TValue> is highly dependent on a well-distributed hash function for key values to minimize the number of hash collisions.

To keep collision chains short, the dictionary automatically resizes and reorganizes itself once its entry count outgrows its capacity: it increases the number of buckets and redistributes the existing entries among them. This process is called “rehashing.” Rehashing improves subsequent performance by reducing the number of entries per bucket, although the resize itself incurs a temporary O(n) cost.
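One practical takeaway: if the approximate size is known in advance, passing an initial capacity lets the dictionary allocate enough buckets up front and skip the intermediate rehashes. A small sketch:

```csharp
using System;
using System.Collections.Generic;

// Pre-sizing avoids the repeated grow-and-rehash cycles that an
// empty-constructed dictionary would go through while filling up.
var presized = new Dictionary<int, string>(capacity: 100_000);
for (int i = 0; i < 100_000; i++)
    presized[i] = i.ToString();

Console.WriteLine(presized.Count); // 100000
```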


Now that we’ve delved into the backbone of how dictionaries and buckets work in C# collections, it’s time to explore other crucial aspects such as hash codes, custom reference types, and infinite sequences. Continue reading our collections in C# interview questions and answers to enhance your understanding and deepen your proficiency with these concepts.


How does the GetHashCode method affect the behavior of HashSet and Dictionary in C# Collections?

Answer

The GetHashCode method plays a crucial role in hash-based collections like HashSet<T> and Dictionary<TKey, TValue>. These collections rely on the hash code of an object to determine its position or “bucket” within the internal data structure.

A well-implemented GetHashCode method should:

  • Provide a good distribution of hash code values: This is important to minimize the number of hash collisions, which can degrade the performance of the collection.
  • Return stable hash code values: The same object should always return the same hash code, as long as its state is unchanged. If hash codes change over time, it can result in inconsistencies when retrieving, storing, or removing elements from the collection.

When implementing custom classes that will be used as keys or elements in hash-based collections, it is essential to override the GetHashCode method to ensure proper behavior. Whenever GetHashCode is overridden, the Equals method should be overridden as well, so the two stay consistent with each other.
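Here's a minimal sketch (Point is an invented example type) of a class that overrides both methods together so it behaves correctly as a hash-based key:

```csharp
using System;
using System.Collections.Generic;

var a = new Point(1, 2);
var b = new Point(1, 2); // different instance, same state

var set = new HashSet<Point> { a };
Console.WriteLine(set.Contains(b)); // True: value-based Equals/GetHashCode

public class Point
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) => (X, Y) = (x, y);

    // Overridden together: equal points must also produce equal hash codes.
    public override bool Equals(object obj) =>
        obj is Point other && X == other.X && Y == other.Y;

    public override int GetHashCode() => HashCode.Combine(X, Y);
}
```

Without the overrides, Contains(b) would return false, because the default object implementation compares references rather than values.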

What are some potential issues with using custom reference types as dictionary keys in C# Collections, and how would you mitigate them?

Answer

Using custom reference types as dictionary keys in C# Collections can lead to potential issues, which include:

  1. Default implementation: The default GetHashCode and Equals implementations inherited from the object class might not provide the expected behavior for custom reference types, leading to performance issues or incorrect comparisons. Mitigation: Override both GetHashCode and Equals in the custom reference type to provide appropriate and consistent behavior, taking the class’s identifying properties into account.
  2. Mutability: If a key’s state changes after being added to the dictionary, it might cause inconsistencies, as the calculated hash code will differ from the initial hash code, making it difficult to retrieve, update, or remove the associated value. Mitigation: Either make the key objects immutable or ensure that the properties used for calculating the hash code and equality comparison do not change after being used as keys in the dictionary.
  3. Thread-safety: In multi-threaded scenarios, if key objects are mutated while in use, race conditions can corrupt lookups or cause synchronization issues. Mitigation: Prefer immutable key types, or apply proper synchronization when mutable key objects must be shared; for the collection itself, consider the thread-safe types in the System.Collections.Concurrent namespace.
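The mutability pitfall in particular is easy to trip over. A contrived sketch (MutableKey is an invented type whose hash code depends on its state):

```csharp
using System;
using System.Collections.Generic;

var key = new MutableKey { Value = 1 };
var dict = new Dictionary<MutableKey, string> { [key] = "original" };

// Mutating the key after insertion changes its hash code, so the entry
// now lives in the "wrong" bucket and can no longer be located.
key.Value = 2;
Console.WriteLine(dict.ContainsKey(key)); // False: the entry is effectively lost

public class MutableKey
{
    public int Value { get; set; }
    public override bool Equals(object obj) => obj is MutableKey m && m.Value == Value;
    public override int GetHashCode() => Value;
}
```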

Can you create an IEnumerable that generates an infinite sequence of values on-the-fly without causing a memory leak? How would you implement such a collection?

Answer

Yes, you can create an IEnumerable<T> that generates an infinite sequence of values on-the-fly without causing a memory leak. You can achieve this by implementing an iterator method using the yield return statement, which allows you to return each value in the sequence individually rather than creating a collection to hold all the values.

Here’s an example of an infinite IEnumerable<int> that generates increasing integer values starting from a given value:

public static IEnumerable<int> InfiniteGenerator(int startValue)
{
    int currentValue = startValue;
    while (true)
    {
        yield return currentValue;
        currentValue++;
    }
}

// Usage:
foreach (var value in InfiniteGenerator(5))
{
    Console.WriteLine(value); // increasing integers starting from 5
    if (value >= 10)
        break; // without a break condition, this loop would run forever
}

By using yield return, you can generate the next value in the sequence only when it’s requested, avoiding the need to store all values in memory.

Explain the importance of immutability in relation to C# Collection classes, and provide an example of a scenario where an immutable collection would be beneficial.

Answer

Immutability in the context of C# Collection classes means that a collection’s elements and structure cannot be modified once it has been created. Immutable collections provide several benefits, particularly in scenarios that involve:

  1. Thread-safety: Immutable collections can be safely accessed and shared across multiple threads without needing to implement synchronization mechanisms. Since their state can’t be changed, there’s no risk of race conditions or data corruption.
  2. Predictable behavior: By ensuring that a collection does not change after its creation, you can avoid unexpected side-effects or state changes throughout your code. This simplifies debugging and helps maintain the consistency of your application.
  3. Functional programming: Immutable collections are an essential tool for programmers working with functional programming paradigms. They enable you to reason about your code more easily, compose operations in a chain, and provide a more deterministic and stateless approach.

Example: Imagine a multi-threaded application that needs to share a configuration object (list of key-value pairs) across different threads. Using an immutable collection, like ImmutableDictionary<TKey, TValue>, ensures that the configuration remains consistent across threads without the need for locks or other synchronization mechanisms.
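A brief sketch of that scenario, using the ImmutableDictionary type from System.Collections.Immutable (bundled with modern .NET):

```csharp
using System;
using System.Collections.Immutable;

var config = ImmutableDictionary<string, string>.Empty
    .Add("Theme", "Dark")
    .Add("Language", "en-US");

// "Modifying" returns a new instance; the original is untouched, so any
// thread already holding `config` keeps seeing a consistent snapshot.
var updated = config.SetItem("Theme", "Light");

Console.WriteLine(config["Theme"]);  // Dark
Console.WriteLine(updated["Theme"]); // Light
```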

Discuss the performance differences between Array, ArrayList, and List when it comes to adding, deleting, inserting, and looping through elements in C# Collections.

Answer

Array: Arrays are fixed-size, strongly-typed collections. Element access is very efficient (O(1)), and for value types no boxing or unboxing is involved.

  • Adding an element: Not directly supported. Requires manual array resizing, resulting in O(n) complexity.
  • Deleting an element: Not directly supported. Requires shifting elements and manual resizing, resulting in O(n) complexity.
  • Inserting an element: Not directly supported. Requires shifting elements and manual resizing, resulting in O(n) complexity.
  • Looping through elements: Efficient O(n) traversal, thanks to strong typing and contiguous memory, which benefits CPU cache locality.

ArrayList: ArrayList is a dynamic-size collection that stores elements as object types, leading to boxing and unboxing overhead when working with value types.

  • Adding an element: Amortized O(1) complexity.
  • Deleting an element: O(n) complexity, as it requires shifting elements.
  • Inserting an element: O(n) complexity, as it requires shifting elements.
  • Looping through elements: O(n) complexity, with potential additional overhead from boxing and unboxing when accessing elements.

List: List is a dynamic-size and strongly-typed collection, which provides better performance than ArrayList, particularly when dealing with value types.

  • Adding an element: Amortized O(1) complexity.
  • Deleting an element: O(n) complexity, as it requires shifting elements.
  • Inserting an element: O(n) complexity, as it requires shifting elements.
  • Looping through elements: O(n) complexity, with efficient element access due to strong-typing and no need for boxing and unboxing.

In general, for most scenarios, List<T> is preferred over ArrayList due to its strongly-typed nature and better performance with value types. Arrays are best suited for scenarios where the collection size is fixed or known in advance, and memory usage and element access performance are crucial.
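The boxing overhead that separates ArrayList from List<T> is easy to see in code:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// ArrayList stores everything as object: each int is boxed on Add and
// must be unboxed with a cast on access.
var arrayList = new ArrayList();
arrayList.Add(42);                     // boxing allocation
int fromArrayList = (int)arrayList[0]; // unboxing cast

// List<int> stores the ints directly: no boxing, no cast, and the
// compiler catches type errors at compile time.
var list = new List<int> { 42 };
int fromList = list[0];

Console.WriteLine(fromArrayList == fromList); // True
```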


Having discussed the performance differences and various use cases involving Array, ArrayList, and List, let’s continue to venture into more advanced topics, such as SortedDictionary and SortedSet, as well as custom comparer implementations.

In our generic collection in C# interview questions, we’ll examine more sophisticated scenarios and teach you how to make the right choices for your applications.


How does the C# SortedDictionary internally maintain its order, and how would you choose between using it and a SortedSet?

Answer

Internally, SortedDictionary<TKey, TValue> is implemented on top of a self-balancing binary search tree (a red-black tree in the .NET implementation). This tree structure keeps the entries ordered by key and ensures that insertion, deletion, and lookup all have O(log n) time complexity.

When choosing between SortedDictionary<TKey, TValue> and SortedSet<T>, consider the following aspects:

  • Purpose: Use SortedDictionary<TKey, TValue> when you need to associate keys with values and maintain the key-value pairs in a sorted order. Use SortedSet<T> when you need a non-duplicate collection of elements that are always sorted.
  • Lookup: In SortedDictionary<TKey, TValue>, the lookups, insertions, and deletions are based on the key, whereas, in SortedSet<T>, these operations are based on the element itself.
  • Memory: SortedDictionary<TKey, TValue> generally consumes more memory due to storing both keys and values, while SortedSet<T> only stores the elements.

Choose the collection that better fits your requirements based on the need for key-value pairs, operations you will frequently perform, and memory consumption.
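A short sketch contrasting the two:

```csharp
using System;
using System.Collections.Generic;

// SortedSet<T>: unique elements, always enumerated in sorted order.
var set = new SortedSet<int> { 5, 1, 3, 1 }; // the duplicate 1 is ignored
Console.WriteLine(string.Join(", ", set)); // 1, 3, 5

// SortedDictionary<TKey, TValue>: key-value pairs, kept sorted by key.
var dict = new SortedDictionary<string, int> { ["banana"] = 2, ["apple"] = 1 };
Console.WriteLine(string.Join(", ", dict.Keys)); // apple, banana
```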

What are potential issues that can arise when using default comparison methods for complex types in C# Collections, and how would you address them?

Answer

When using default comparison methods for complex types in C# Collections, you can encounter the following issues:

  1. Poor performance: The default comparison method may not take advantage of the complex type’s characteristics, leading to inefficient comparisons or a high number of collisions, which can cause poor performance, particularly in hash-based collections. Solution: Implement custom comparison logic, e.g., by overriding GetHashCode and Equals on your complex type or by providing custom IComparer<T> and IEqualityComparer<T> implementations tailored to its specific properties.
  2. Inconsistency: The default comparison method may not provide a consistent or meaningful comparison between instances of the complex type, leading to unexpected results or unexpected collection behavior. Solution: Implement custom comparison methods that provide meaningful and consistent equality and ordering rules for your complex type.
  3. Incompleteness: The default comparison method may not take all relevant properties of the complex type into account, causing objects that should be considered different to be treated as equal or vice versa. Solution: Implement custom comparison methods that include all properties of the complex type relevant to equality or ordering comparisons.

Discuss the differences between a Stack and a Queue in C# Collections, and provide a real-world example of when each type would be most suitable.

Answer

  • Stack: A Stack is a Last-In-First-Out (LIFO) collection, meaning the most recently added element is the first one to be removed. Main operations in a Stack include Push (to add an element to the top) and Pop (to remove the top element). Real-world example: A Stack is suitable for scenarios like managing the Undo and Redo functionality in a text editor or implementing function call stacks in programming languages.
  • Queue: A Queue is a First-In-First-Out (FIFO) collection, meaning the oldest added element is the first to be removed. Main operations in a Queue include Enqueue (to add an element to the end) and Dequeue (to remove the front element). Real-world example: A Queue is suitable for scenarios like handling requests in a server, where each incoming request is queued and processed in the order they were received, or managing print jobs in a printer queue.

In summary, choose a Stack<T> when you need a LIFO behavior, and choose a Queue<T> when you need a FIFO behavior.
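Those two behaviors in miniature:

```csharp
using System;
using System.Collections.Generic;

// Stack<T>: last in, first out - like an undo history.
var undo = new Stack<string>();
undo.Push("typed 'a'");
undo.Push("typed 'b'");
Console.WriteLine(undo.Pop()); // typed 'b' (most recent action undone first)

// Queue<T>: first in, first out - like a print queue.
var jobs = new Queue<string>();
jobs.Enqueue("report.pdf");
jobs.Enqueue("invoice.pdf");
Console.WriteLine(jobs.Dequeue()); // report.pdf (oldest job printed first)
```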

How can you implement a custom EqualityComparer to be used within a HashSet or Dictionary for value comparison?

Answer

To implement a custom EqualityComparer<T>, you need to create a class that inherits from the EqualityComparer<T> base class and overrides the Equals and GetHashCode methods. The custom methods should provide the appropriate comparison logic for your specific type.

Here’s an example of a custom EqualityComparer<Person>:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonEqualityComparer : EqualityComparer<Person>
{
    public override bool Equals(Person x, Person y)
    {
        if (x == null && y == null)
            return true;
        if (x == null || y == null)
            return false;

        return x.FirstName == y.FirstName && x.LastName == y.LastName;
    }

    public override int GetHashCode(Person obj)
    {
        if (obj == null)
            return 0;

        // HashCode.Combine mixes the fields and is order-sensitive,
        // unlike a plain XOR of the individual hash codes.
        return HashCode.Combine(obj.FirstName, obj.LastName);
    }
}

var peopleSet = new HashSet<Person>(new PersonEqualityComparer());
var peopleDict = new Dictionary<Person, int>(new PersonEqualityComparer());

In this example, we have created a custom EqualityComparer<Person> that compares Person objects based on their FirstName and LastName properties. The custom comparer can be used with HashSet or Dictionary when instantiating the collections.

What are the performance considerations to take into account when choosing to implement a tree-based collection such as SortedSet or SortedDictionary, and how does its performance compare to a hash-based collection like HashSet or Dictionary?

Answer

When choosing to implement a tree-based collection like SortedSet<T> or SortedDictionary<TKey, TValue>, some performance considerations should be taken into account:

  • Lookup, insertion, and removal operations: For tree-based collections, operations like lookup, insertion, and removal have an O(log n) time complexity. In comparison, hash-based collections like HashSet<T> and Dictionary<TKey, TValue> offer, on average, constant-time (O(1)) complexity for these operations. As a result, hash-based collections generally provide faster performance for these operations.
  • Sorting: Tree-based collections maintain a sorted order, which can be a beneficial feature if you require sorting. In contrast, hash-based collections do not guarantee any specific order and require additional sorting steps when needed.
  • Memory consumption: Tree-based collections tend to have higher memory consumption than hash-based collections, as tree nodes require storing additional pointers for maintaining the tree structure.

In summary, prefer tree-based collections like SortedSet<T> or SortedDictionary<TKey, TValue> when maintaining a sorted order is a priority or a requirement. For general-purpose usage or scenarios where fast lookups, insertions, and removals are more important, consider using hash-based collections like HashSet<T> or Dictionary<TKey, TValue>.


We’ve explored several performance considerations and custom implementations when working with tree-based and hash-based collections.

Let’s now shift our focus to understanding ReadOnlyCollection, ImmutableList, and other strategies for optimizing the operation of large collections in our C# collections interview questions for experienced developers.


Explain the difference between ReadOnlyCollection and ImmutableList in C# collections, and when would you use one over the other?

Answer

ReadOnlyCollection<T> and ImmutableList<T> are both used to represent read-only or immutable collections in C#, but they exhibit some differences:

  • ReadOnlyCollection: This is a wrapper around an existing collection that provides a read-only view, preventing modifications to the underlying collection. However, the underlying collection itself can still be changed, which will be visible through the ReadOnlyCollection<T> view. Use case: ReadOnlyCollection<T> is useful when you want to prevent specific portions of your code (e.g., a method or a class consumer) from modifying a collection, while still allowing the original collection to change as needed.
  • ImmutableList: This is an inherently immutable collection that represents an unmodifiable list. Once created, its content cannot change. Any operation that appears to modify the collection (e.g., adding or removing an element) will instead return a new instance with the relevant changes. Use case: ImmutableList<T> is useful when you need a collection that remains completely unchanged when shared across multiple components, ensuring data correctness and simplifying concurrent programming.

In summary, use ReadOnlyCollection<T> when you need a read-only view of an existing mutable collection, and use ImmutableList<T> when you need a truly immutable collection that can be safely shared and remains unchanged throughout its lifetime.
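The "view versus snapshot" distinction can be demonstrated in a few lines:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Collections.ObjectModel;

var source = new List<int> { 1, 2 };

// ReadOnlyCollection<T> is only a view: changes to the underlying list
// remain visible through it.
var view = new ReadOnlyCollection<int>(source);
source.Add(3);
Console.WriteLine(view.Count); // 3

// ImmutableList<T> copies the data at creation: later changes to the
// source do not affect it.
var snapshot = ImmutableList.CreateRange(source);
source.Add(4);
Console.WriteLine(snapshot.Count); // 3
```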

What are some strategies for optimizing the operation of large collections in a C# application, especially in terms of memory consumption and GC pressure?

Answer

When working with large collections in a C# application, consider the following optimization strategies for reducing memory consumption and GC pressure:

  1. Array usage: If the collection size is fixed or known in advance, use arrays instead of dynamic collections like List<T> to save memory overhead caused by resizing.
  2. Capacity Planning: For dynamic collections like List<T>, Dictionary<TKey, TValue>, and others, plan the initial capacity if the approximate collection size is known. This helps to avoid unnecessary resizing operations and minimize memory allocations and deallocations.
  3. Memory pooling: Whenever possible, use memory pooling techniques (e.g., the ArrayPool<T> class or object pools) to reuse collections or objects, reducing the strain on the garbage collector.
  4. Value types: Use value types (structs) instead of reference types to reduce memory allocations and GC overhead. However, be mindful of the associated copying costs for large structs.
  5. Lazy loading: In scenarios where it is not necessary to load all data into memory simultaneously, implement lazy loading to retrieve or compute data on-demand. Use collections like Lazy<T> and yield return to minimize memory usage.
  6. Immutable collections: When dealing with large read-only collections, consider using immutable collection types like ImmutableArray<T> or ImmutableList<T>, which optimize memory sharing for unchanged collections and can reduce GC pressure.
  7. Filtering data: Apply filters to your data before loading it into the collection if you are only interested in a subset of items.
  8. Minimize boxed value types: When using collections that store data as objects (e.g., ArrayList, non-generic collections), prefer using generic, strongly-typed collections to reduce boxing and unboxing overhead, which can lead to additional memory allocations and GC pressure.

By implementing these strategies, you can reduce memory consumption and GC pressure in your application, resulting in improved performance and smoother operation for large collections.
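Two of these strategies, capacity planning and memory pooling, in a short sketch (the sizes are arbitrary):

```csharp
using System;
using System.Buffers;
using System.Collections.Generic;

// Capacity planning: pre-sizing skips the repeated internal resizes a
// growing List<T> would otherwise perform.
var items = new List<int>(capacity: 10_000);
for (int i = 0; i < 10_000; i++)
    items.Add(i);

// Memory pooling: rent a buffer instead of allocating a fresh array,
// easing GC pressure in hot paths.
int[] buffer = ArrayPool<int>.Shared.Rent(4096);
try
{
    // Use the buffer; note it may be larger than requested.
    Console.WriteLine(buffer.Length >= 4096); // True
}
finally
{
    ArrayPool<int>.Shared.Return(buffer);
}
```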

How can you implement a custom collection class that can be used as both a Dictionary and a List while maintaining the performance characteristics of both in C#?

Answer

To implement a custom collection class that acts as both a Dictionary and a List, you can create a class that composes a List<T> for ordered storage and a Dictionary<TKey, TValue> for key-based lookups. This way, you preserve the performance characteristics of both collections in a single class.

Here’s an example of a custom ListAndDictionary<TKey, TValue> class that uses a key selector function to build the dictionary while retaining list functionality:

public class ListAndDictionary<TKey, TValue>
{
    private List<TValue> _list = new List<TValue>();
    private Dictionary<TKey, TValue> _dictionary = new Dictionary<TKey, TValue>();
    private readonly Func<TValue, TKey> _keySelector;

    public ListAndDictionary(Func<TValue, TKey> keySelector)
    {
        _keySelector = keySelector ?? throw new ArgumentNullException(nameof(keySelector));
    }

    public void Add(TValue value)
    {
        TKey key = _keySelector(value);
        _dictionary.Add(key, value);
        _list.Add(value);
    }

    public TValue GetByKey(TKey key) => _dictionary[key];

    public TValue GetByIndex(int index) => _list[index];

    public bool Remove(TKey key)
    {
        if (_dictionary.TryGetValue(key, out TValue value))
        {
            _dictionary.Remove(key);
            _list.Remove(value);
            return true;
        }
        return false;
    }

    public int Count => _list.Count;

    // Implement other methods or properties as needed.
}

By combining both a List<T> and a Dictionary<TKey, TValue> in a single custom class, you can efficiently access elements by key or by index and maintain the performance characteristics of both.

Discuss the differences between Covariance and Contravariance in C# Collections, and provide an example to demonstrate the difference.

Answer

Covariance and contravariance in C# Collections refer to the type relationships when working with generic collection interfaces and their type parameters:

  • Covariance: Allows a more derived type to be used in place of a less derived type. In C#, covariance applies to generic interface and delegate type parameters declared with the out modifier, which may appear only in output positions (return types). For instance, IEnumerable<out T> is covariant in its type parameter. Example: if a method accepts a parameter of type IEnumerable<Animal>, you can pass it an IEnumerable<Cat>, as Cat derives from Animal.
  • Contravariance: Allows a less derived type to be used in place of a more derived type. In C#, contravariance applies to type parameters declared with the in modifier, which may appear only in input positions (method parameters). For example, IComparer<in T> and IEqualityComparer<in T> are contravariant in their type parameters. Example: if a method accepts a parameter of type IComparer<Cat>, you can pass it an IComparer<Animal>, as a comparer that handles any Animal can certainly compare two Cats.

Understanding and utilizing covariance and contravariance can lead to more flexible and reusable code, as well as provide better support for working with collections that store derived types.
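Both variance directions can be shown with BCL types alone, with string standing in for the derived type and object for the base:

```csharp
using System;
using System.Collections.Generic;

// Covariance: IEnumerable<out T> lets a sequence of the more derived
// type (string) stand in for a sequence of the less derived type (object).
IEnumerable<string> strings = new List<string> { "b", "a" };
IEnumerable<object> objects = strings; // legal thanks to covariance
Console.WriteLine(ReferenceEquals(objects, strings)); // True: same instance

// Contravariance: IComparer<in T> lets a comparer of the less derived
// type (object) stand in where a comparer of strings is expected.
IComparer<string> comparer = Comparer<object>.Create(
    (x, y) => string.CompareOrdinal(x?.ToString(), y?.ToString()));

var sorted = new List<string>(strings);
sorted.Sort(comparer);
Console.WriteLine(string.Join(", ", sorted)); // a, b
```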

In C# Collections, how would you go about implementing a custom KeyedCollection and what are the key aspects to consider in terms of performance and functionality?

Answer

To implement a custom KeyedCollection<TKey, TValue>, you need to create a class that inherits from the KeyedCollection<TKey, TValue> class and override the GetKeyForItem method. The GetKeyForItem method should provide a key for a given item based on the item’s properties.

public class Person
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class PersonKeyedCollection : KeyedCollection<string, Person>
{
    protected override string GetKeyForItem(Person item)
    {
        return item.Id;
    }
}

var personCollection = new PersonKeyedCollection();
personCollection.Add(new Person { Id = "1", Name = "John" });
Person person = personCollection["1"];

In this example, we’ve created a custom PersonKeyedCollection that uses the Id property of Person as the key.

When implementing a custom KeyedCollection<TKey, TValue>, consider the following aspects regarding performance and functionality:

  1. Key selection: Choose an appropriate key for the item, considering the unique characteristics of your data. The key should provide a meaningful way to identify and access items.
  2. Key hashing: KeyedCollection<TKey, TValue> internally uses a dictionary for key-based lookups. Ensure a well-distributed hash function for the selected key to minimize collision and maintain good performance.
  3. Duplicate keys: KeyedCollection<TKey, TValue> does not allow duplicate keys. If duplicate keys are added, an ArgumentException will be thrown.
  4. Performance: The KeyedCollection<TKey, TValue> class offers good performance characteristics since it provides O(1) complexity for key-based lookups and maintains the insertion order of elements in the internal list.

However, keep in mind that list operations like inserting or deleting elements in the middle have O(n) complexity.
