High Speed Priority Queue for C# / Why do I have to implement IPriorityQueueNode to enqueue something?
Typical data structures will have you add (or remove, etc.) items like this:
myDataStructure.Add(myValue, priority);
Internally, the Add() method might look something like this:
public void Add(T myValue, double priority)
{
    Node node = new Node() { Value = myValue, Priority = priority };
    AddNode(node);
}
This paradigm is convenient - it allows T to be any class - but slow. Every time you call .Add(), you have to do another allocation. Allocations are fast (on Windows at least; they're pretty slow on Xbox), but not allocating is much faster. (You could solve this with a pool of objects, but it would still be slower than not having to pool at all.)
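To make the contrast concrete, here is a minimal sketch of the node-based alternative (all names here are hypothetical, not the library's actual API): the caller's own object carries the queue bookkeeping, so enqueueing stores it directly with no wrapper allocation.

```csharp
// Hypothetical sketch - names are illustrative, not the library's API.
public class QueueNodeBase
{
    public double Priority { get; internal set; }  // set by the queue
    public int QueueIndex { get; internal set; }   // position in the heap array
}

// User data and queue bookkeeping live on the same object, so an
// Enqueue(node, priority) method never needs to allocate a wrapper Node.
public class MyPathNode : QueueNodeBase
{
    public int X;
    public int Y;
}
```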
Additionally, to implement .Contains(), we either have to look up the value in the data structure (slow) or store a separate Dictionary&lt;T, Node&gt; to map values to nodes.
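For comparison, a queue that accepts arbitrary values typically has to keep that side dictionary itself. A rough sketch (hypothetical names, not the library's code) of what that approach looks like:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the value-based alternative.
public class ValueBasedQueue<T>
{
    private class Node
    {
        public T Value;
        public double Priority;
    }

    // Extra bookkeeping kept only so Contains() can find a value's node.
    private readonly Dictionary<T, Node> _lookup = new Dictionary<T, Node>();

    public bool Contains(T value)
    {
        // Every call pays for hashing the key and probing the table.
        return _lookup.ContainsKey(value);
    }
}
```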
By forcing the values to implement IPriorityQueueNode, we can avoid both of those issues. For example, here is the implementation of .Contains() in HeapPriorityQueue:
public bool Contains(T node)
{
    return (_nodes[node.QueueIndex] == node);
}
That lookup is about 5-10x faster than using a Dictionary&lt;T, Node&gt;. Normally, being 5-10x faster than "practically nothing" would not be a big deal; however, since IPriorityQueue.Contains() is the most-called method in many pathfinding algorithms, it becomes a big deal.
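Putting it together, usage looks roughly like this. This is a sketch: the node base type below stands in for the library's, and the exact class names and signatures may differ in the version you have.

```csharp
// Sketch only - verify names against your version of the library.
// PriorityQueueNode here stands in for the library's node base/interface.
public class PriorityQueueNode
{
    public double Priority { get; internal set; }
    public int QueueIndex { get; internal set; }
}

// Your own type carries the queue bookkeeping alongside its data.
public class PathNode : PriorityQueueNode
{
    public string Name;
}

// Typical usage (names as described in this wiki):
// var queue = new HeapPriorityQueue<PathNode>(1000);
// var node = new PathNode { Name = "start" };
// queue.Enqueue(node, 1.0);
// queue.Contains(node);  // a single array index check - no hashing
```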