Jumping Haybales


This problem asks you to traverse a grid from the top-left square to the bottom-right, making moves (each move is a jump downward OR rightward) of at most k squares, while avoiding certain squares (haybales).

The easy answer would be BFS, but with a grid of 4 million squares and k maximized, each square contributes another ~2000 edges, and 4 billion edges is too many.

The fact that we can only move in one direction at a time might be a hint to solve the two dimensions independently, but the haybales mean we can't do that. Instead, let's see how we might solve a single column in linear rather than quadratic time. The quadratic-time solution is the BFS, in which, for large k, we traverse all of each square's edges. Consider, however, what an optimal solution looks like. In a single column, we will never jump shorter than we can: if we can reach 5 squares ahead, there is never an optimal reason to jump only 4. This leads us to the greedy rule:

  in a single column, starting on square i, first check square i+k, then i+k-1, and so on.

This guarantees that we only evaluate each square once. Boom. Linear time.
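As a sketch of this single-column greedy (the boolean `blocked` array, square 0 as the start, and returning `None` for unreachable squares are assumptions for illustration):

```python
def min_jumps_in_column(blocked, k):
    """Greedy 1-D solver. blocked[i] is True if square i holds a haybale,
    k is the maximum jump length, square 0 is the start.
    Returns minimum jump counts per square (None where unreachable)."""
    n = len(blocked)
    dist = [None] * n
    dist[0] = 0
    for i in range(n):
        if dist[i] is None or blocked[i]:
            continue  # can't jump from an unreached or blocked square
        # Jump as far as possible first: scan i+k, i+k-1, ... and stop
        # at the first square already reached from an earlier start.
        for j in range(min(i + k, n - 1), i, -1):
            if dist[j] is not None:
                break  # everything reachable below j was filled earlier
            if not blocked[j]:
                dist[j] = dist[i] + 1
    return dist
```

Each free square is assigned a distance at most once, which is where the linear-time claim comes from (blocked squares inside a scan can be revisited, but they never generate work beyond the skip itself).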

But what about multiple columns? There MIGHT be cases where we DO want to take a shorter jump, because it allows us to do better while jumping sideways. We can generalize our greedy rule a bit:

In a single column, we will never continue jumping within that column from a "non-optimal" square.

This leads to the following algorithm:

  1. for a given square, first check whether it is "optimal" in either dimension
  2. in each dimension where it is optimal, simulate the jumps as described above, filling in all squares we can jump to
  3. iterate over the squares from top to bottom, left to right

We have one last question to answer: how do we know whether a square is "optimal"? Simple. For each row, column, and number of jumps, cache the "optimal" square we've seen so far. This yields three DP arrays (though the last two are analogues of each other):

  1. dp[x][y] = minimum number of jumps to reach (x, y)
  2. colBest[x][j] = furthest y value reachable using j jumps, ending in column x
  3. rowBest[y][j] = furthest x value reachable using j jumps, ending in row y

The same way we demonstrated that the single-dimension algorithm is linear time, we can demonstrate that we will only ever set the number of jumps in an individual cell at most twice: once from each direction (because we only ever jump from the previous "optimal" cells). This yields O(n^2) runtime.

Note: one can eliminate the first DP array altogether and simply store the optimal cells for each column. In either case, the last two arrays are just a dimension swap of each other.
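As a rough sketch of the full O(n^2) idea, here is one way to realize it, using a BFS queue with per-row and per-column frontier pointers in place of the explicit jump-count tables. The grid-of-strings input with `'#'` for haybales is an assumption for illustration; the pointers play the role of the "optimal" caches, so each square is scanned O(1) times per direction:

```python
from collections import deque

def min_jumps(grid, k):
    """Minimum jumps from top-left to bottom-right, or -1 if unreachable.
    grid: list of equal-length strings, '#' marks a haybale.
    k: maximum jump length; each jump moves right or down by 1..k."""
    n, m = len(grid), len(grid[0])
    if grid[0][0] == '#' or grid[n - 1][m - 1] == '#':
        return -1
    dist = [[-1] * m for _ in range(n)]
    dist[0][0] = 0
    row_ptr = [0] * n  # furthest column already scanned in each row
    col_ptr = [0] * m  # furthest row already scanned in each column
    q = deque([(0, 0)])
    while q:
        r, c = q.popleft()
        d = dist[r][c]
        # jump rightward within row r, skipping already-scanned columns
        hi = min(c + k, m - 1)
        for c2 in range(max(row_ptr[r], c) + 1, hi + 1):
            if grid[r][c2] != '#' and dist[r][c2] == -1:
                dist[r][c2] = d + 1
                q.append((r, c2))
        row_ptr[r] = max(row_ptr[r], hi)
        # jump downward within column c, skipping already-scanned rows
        hi = min(r + k, n - 1)
        for r2 in range(max(col_ptr[c], r) + 1, hi + 1):
            if grid[r2][c] != '#' and dist[r2][c] == -1:
                dist[r2][c] = d + 1
                q.append((r2, c))
        col_ptr[c] = max(col_ptr[c], hi)
    return dist[n - 1][m - 1]
```

Because the BFS hands us squares in nondecreasing jump order, any square already behind a row's or column's pointer has its final answer, so skipping it is safe; this is the same argument that bounds the per-cell work at twice.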