Here are some notes I put together on single-source shortest paths

A good introductory example of Dijkstra's algorithm

http://algs4.cs.princeton.edu/44sp/

The problem with Dijkstra's algorithm is that it is inherently sequential: it relies on a heap (priority queue) based implementation, where nodes are removed one at a time in distance order.

An easy-to-understand implementation can be found here

http://algs4.cs.princeton.edu/44sp/DijkstraSP.java.html

The key algorithm is this

    pq = new IndexMinPQ<Double>(G.V());
    pq.insert(s, distTo[s]);
    while (!pq.isEmpty()) {
        int v = pq.delMin();
        for (DirectedEdge e : G.adj(v))
            relax(e);
    }

Every time a min node "v" is popped, it is guaranteed that the distance to this node is already the smallest possible. This is because v has the shortest tentative distance among all nodes not yet in the shortest-path tree, so (with non-negative edge weights) no path through an unvisited node can improve it; v is the next node to join the shortest-path tree. This is a good example of a greedy algorithm.

And then the relax step extends the shortest-path tree toward the remaining nodes.

The worst-case complexity for Dijkstra is O(E log V), since every edge relaxation can trigger a heap operation (a decrease-key, or a removal and re-insertion), each costing O(log V).
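As a concrete illustration, here is a minimal, self-contained Dijkstra sketch in Java. Unlike the algs4 version above, which uses IndexMinPQ with decrease-key, this uses the standard java.util.PriorityQueue with "lazy deletion": a relaxation simply re-inserts the node, and stale heap entries are skipped when popped (so the heap can hold up to E entries, matching the O(E log V) bound). The class and method names are my own, not algs4's.

```java
import java.util.*;

class Dijkstra {
    // Minimal directed weighted edge (hypothetical representation, not from algs4)
    static class Edge {
        int to; double w;
        Edge(int to, double w) { this.to = to; this.w = w; }
    }

    // Single-source shortest distances; assumes non-negative edge weights
    static double[] shortestPaths(List<List<Edge>> adj, int s) {
        int n = adj.size();
        double[] dist = new double[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        // Heap entries are {vertex, tentative distance}, smallest distance first
        PriorityQueue<double[]> pq = new PriorityQueue<>(Comparator.comparingDouble(a -> a[1]));
        pq.add(new double[]{s, 0.0});
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int v = (int) top[0];
            if (top[1] > dist[v]) continue; // stale entry: v was re-inserted with a smaller distance
            for (Edge e : adj.get(v)) {
                if (dist[v] + e.w < dist[e.to]) {
                    dist[e.to] = dist[v] + e.w;
                    pq.add(new double[]{e.to, dist[e.to]}); // re-insert instead of decrease-key
                }
            }
        }
        return dist;
    }
}
```

The lazy-deletion variant trades a slightly larger heap for not needing an indexed priority queue, which the JDK does not provide.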

The interesting question is what happens when this algorithm goes parallel, given the strong dependence on the sequentially updated priority queue (the heap, in this case).

Another approach is the Bellman-Ford algorithm

A good explanation can be found here

http://www.geeksforgeeks.org/dynamic-programming-set-23-bellman-ford-algorithm/

Pseudocode is shown here

    function BellmanFord(list vertices, list edges, vertex source) :: distance[], predecessor[]
        // This implementation takes in a graph, represented as
        // lists of vertices and edges, and fills two arrays
        // (distance and predecessor) with shortest-path
        // (least cost/distance/metric) information

        // Step 1: initialize graph
        for each vertex v in vertices:
            distance[v] := inf      // At the beginning, all vertices have a weight of infinity
            predecessor[v] := null  // And a null predecessor
        distance[source] := 0       // Except for the source, where the weight is zero

        // Step 2: relax edges repeatedly
        for i from 1 to size(vertices) - 1:
            for each edge (u, v) with weight w in edges:
                if distance[u] + w < distance[v]:
                    distance[v] := distance[u] + w
                    predecessor[v] := u

        // Step 3: check for negative-weight cycles
        for each edge (u, v) with weight w in edges:
            if distance[u] + w < distance[v]:
                error "Graph contains a negative-weight cycle"

        return distance[], predecessor[]

It is simpler than Dijkstra: it just relaxes every edge V-1 times. So the time complexity is O(VE), and that is the typical cost, not just the worst case. The intuition is that after the first iteration, every node whose shortest path uses one edge has its final distance; after the second iteration, nodes whose shortest paths use two edges are fixed to their shortest distances, and so on.
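The passes above translate almost directly into code. Here is a minimal Java sketch (the class and method names are my own); instead of raising an error, it returns null when the extra pass detects a negative cycle:

```java
import java.util.*;

class BellmanFord {
    // Edge-list representation: (from, to, weight)
    static class Edge {
        int u, v; double w;
        Edge(int u, int v, double w) { this.u = u; this.v = v; this.w = w; }
    }

    // Returns shortest distances from s, or null if a reachable negative cycle exists
    static double[] shortestPaths(int n, List<Edge> edges, int s) {
        double[] dist = new double[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        // Relax every edge V-1 times; after pass i, shortest paths of at most i edges are final
        for (int i = 0; i < n - 1; i++)
            for (Edge e : edges)
                if (dist[e.u] + e.w < dist[e.v])
                    dist[e.v] = dist[e.u] + e.w;
        // One more pass: any further improvement implies a negative-weight cycle
        for (Edge e : edges)
            if (dist[e.u] + e.w < dist[e.v])
                return null;
        return dist;
    }
}
```

Note that the inner loop over edges has no ordering constraint, which is exactly what makes this formulation friendly to parallel execution.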

The pros are (1) it is good for parallel implementation, since every edge can be relaxed independently within a pass, and (2) it detects negative cycles

The cons are (1) much more work, O(VE) even in the typical case, compared to Dijkstra's O(E log V)

Can we get the best of both worlds?

The answer is yes: there is a third approach, delta-stepping, that combines ideas from Dijkstra and Bellman-Ford

Delta-Stepping: A Parallel Single Source Shortest Path Algorithm, U. Meyer and P. Sanders

http://www.cs.utexas.edu/~pingali/CS395T/2013fa/papers/delta-stepping.pdf
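Roughly, delta-stepping groups nodes into buckets of width delta by tentative distance. The smallest nonempty bucket plays the role of Dijkstra's min node; inside it, light edges (weight at most delta) are relaxed Bellman-Ford style until the bucket stops refilling, and those relaxations are independent, hence parallelizable; heavy edges are relaxed only once, after the bucket settles. A sequential sketch of that bucket structure (my own naming and a simplification of the paper's algorithm, not its full parallel formulation):

```java
import java.util.*;

class DeltaStepping {
    static class Edge {
        int to; double w;
        Edge(int to, double w) { this.to = to; this.w = w; }
    }

    // Assumes non-negative edge weights and delta > 0
    static double[] shortestPaths(List<List<Edge>> adj, int s, double delta) {
        int n = adj.size();
        double[] dist = new double[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        // Bucket i holds nodes with tentative distance in [i*delta, (i+1)*delta)
        TreeMap<Integer, Set<Integer>> buckets = new TreeMap<>();
        dist[s] = 0.0;
        buckets.computeIfAbsent(0, k -> new HashSet<>()).add(s);

        while (!buckets.isEmpty()) {
            int i = buckets.firstKey();
            Set<Integer> settled = new HashSet<>();
            // Phase 1: repeatedly relax light edges (w <= delta); a relaxation may
            // put a node back into bucket i, so loop until the bucket stops refilling
            while (buckets.containsKey(i) && !buckets.get(i).isEmpty()) {
                Set<Integer> current = buckets.remove(i);
                settled.addAll(current);
                for (int u : current)
                    for (Edge e : adj.get(u))
                        if (e.w <= delta) relax(u, e, dist, buckets, delta);
            }
            buckets.remove(i); // drop the bucket if it was left behind empty
            // Phase 2: relax heavy edges (w > delta) once per settled node;
            // these always land in a later bucket, so bucket i's distances are final
            for (int u : settled)
                for (Edge e : adj.get(u))
                    if (e.w > delta) relax(u, e, dist, buckets, delta);
        }
        return dist;
    }

    static void relax(int u, Edge e, double[] dist,
                      TreeMap<Integer, Set<Integer>> buckets, double delta) {
        double nd = dist[u] + e.w;
        if (nd < dist[e.to]) {
            if (dist[e.to] < Double.POSITIVE_INFINITY) {
                Set<Integer> old = buckets.get((int) (dist[e.to] / delta));
                if (old != null) old.remove(e.to); // move the node out of its old bucket
            }
            dist[e.to] = nd;
            buckets.computeIfAbsent((int) (nd / delta), k -> new HashSet<>()).add(e.to);
        }
    }
}
```

The tuning knob is delta: a tiny delta degenerates toward Dijkstra (one node per bucket, little parallelism), a huge delta toward Bellman-Ford (one bucket, lots of redundant re-relaxation).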

TODO: add something on implementation notes, point to an implementation