Enlarging Enthusiasm
This problem gives us 12 point values, and wants us to add points to each of the 12 values such that
- the total number of points added sums to some x
- we add points to each value once, and the order we add them is from the SMALLEST number of points ADDED to the largest
- each time we add points, the new value is larger than any previous value
The end goal is to determine the number of different final orderings of the 12 values.
The first thing we realize is that we are limited to 11*11! final orderings (about 440 million), which we maybe could iterate over if we were fast enough. The issue is that evaluating each ordering takes another factor of 12, pushing us past 5 billion. These orderings must be determined by the order in which we allocate points (since every value assigned later finishes ahead of every value assigned earlier).
Another thing to realize is that it doesn't matter if we under-allocate points along the way, since any surplus can be dumped on the final assignment without changing the ordering. That means the only way one of the orderings above is impossible is if we run out of points.
That gives us two rules for every time we assign points:
- the number of points assigned must be at least 1 greater than the number assigned in the previous step, or equal, provided the receiving value's initial points are higher (so its total still comes out strictly ahead)
- the points assigned plus the value's existing points must be at least 1 greater than the highest total created by any previous assignment
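As a sanity check, here's a minimal sketch of those two rules as a predicate (all names are illustrative, not from the problem statement):

```python
def valid_step(add, prev_add, initial, prev_initial, best_total):
    """Can we legally assign `add` points to a value that started at `initial`?
    prev_add/prev_initial describe the previous assignment (use prev_add=0 and
    prev_initial=float('-inf') for the first pick); best_total is the highest
    total created so far (initially the largest starting value)."""
    # Rule 1: assignments never decrease; an equal assignment is only allowed
    # when this value's initial points beat the previous recipient's.
    if add < prev_add or (add == prev_add and initial <= prev_initial):
        return False
    # Rule 2: the new total must beat every total created so far.
    return initial + add > best_total
```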
The standard DP solution for this kind of ordering problem reduces it from n! to 2^n * n
The DP state is simply which elements remain to be selected, and we iterate over all available ones and recurse. The issue here is that this requires the legality of each choice to depend only on the set of elements already selected, not on the order or manner in which they were selected, which is not the case here.
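For reference, the generic shape of that subset DP might look like the sketch below (`can_pick` is a hypothetical predicate; this only works when legality depends on the remaining set alone, which is exactly what breaks here):

```python
from functools import lru_cache

def count_orders(n, can_pick):
    """Count valid orders of selecting all n elements, assuming the legality
    of picking element i depends only on the set still remaining."""
    @lru_cache(maxsize=None)
    def go(remaining):
        if remaining == 0:
            return 1
        return sum(go(remaining & ~(1 << i))
                   for i in range(n)
                   if remaining >> i & 1 and can_pick(i, remaining))
    return go((1 << n) - 1)
```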
The common solution to THAT problem is to explode the states to deal with any dependencies, which in this case involves the last element chosen previously, the number of points available, etc. So does that work?
Were we to try the most naive DP solution, we would have
- nodes remaining (2^12)
- value left to be allocated (700)
- value allocated last time (700)
- total sum from last time (700)
- note, from these, we can derive the previous value of the last time we allocated points, which may be necessary to break ties.
This is, of course, far too large: 2^12 * 700 * 700 * 700 ≈ 1.4 trillion states.
Can we think of an optimization, though?
Let's think of the problem a little bit differently. What is the minimum number of points we'd have to award for any ordering? In the ideal case, each "final" points total is exactly one more than the next. In such a case, the total points which must be awarded is sum(initial_leader - initial_i) + n*(n+1)/2... or pretty much, the points for everyone to catch the original leader, plus 1, 2, ..., n more points so each final total lands one above the next. If we did not care about having to assign more points each turn, then either we would have enough points for EVERY ordering, or we wouldn't... but of course, we do care.
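A quick sketch of that minimum (assuming distinct starting values; it matches the 10, 9, 8, 7 example worked below):

```python
def min_points_needed(values):
    """Minimum points for the ideal staircase finish, where each final
    total is exactly one above the next."""
    leader = max(values)
    n = len(values)
    catch_up = sum(leader - v for v in values)  # everyone reaches the leader
    staircase = n * (n + 1) // 2                # then +1, +2, ..., +n on top
    return catch_up + staircase

assert min_points_needed([10, 9, 8, 7]) == 16
```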
We can note, then, that this ideal staircase happens when we assign points in decreasing order of initial points (meaning the amount of points assigned increases each time). If we instead jump to a person with HIGHER initial points, we have to assign at least the same amount of additional points, so the new total clears the previous one by at least the difference in their initial points: MORE than a gap of 1 whenever that difference exceeds 1.
Example:
starting values: 10, 9, 8, 7
In the "optimal" assignment, the final values will be 14,13,12,11, meaning we need 16 points to assign.
If we assign them in order of the final totals we want (11, then 12, 13, 14), we have (each step shows the sorted standings, with "(a+v)" meaning a points added to the value that started at v):
10-9-8-7 11-10-8-7 (2+9) 12-11-10-7 (4+8) 13-12-11-10 (6+7) 14-13-12-11 (4+10)
But wait... that doesn't work, since we added 4 after adding a 6... let's try again:
10-9-8-7 11-10-8-7 (2+9) 12-11-8-7 (2+10) 13-12-11-7 (5+8) 14-13-12-11 (7+7)
There we go. We can easily convince ourselves that the same assignment pattern produces the minimum total for any set of initial values.
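If we don't trust that, a tiny brute force (small n only) that takes the smallest legal assignment at every step confirms it; the greedy choice is safe because raising an earlier assignment can only force later ones higher:

```python
from itertools import permutations

def brute_min_total(values):
    """Minimum total points over all assignment orders, assigning greedily."""
    best_spend = None
    for order in permutations(values):
        spent, prev_add, leader = 0, 0, max(values)
        for v in order:
            # smallest add that is non-decreasing AND takes the overall lead
            # (an equal add is automatically legal: the total strictly leads)
            add = max(prev_add, leader + 1 - v)
            spent += add
            prev_add, leader = add, v + add
        best_spend = spent if best_spend is None else min(best_spend, spent)
    return best_spend

assert brute_min_total((10, 9, 8, 7)) == 16
```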
So what happens when we assign out of order:
10-9-8-7 11-10-9-8 (4+7) 14-11-9-8 (4+10) 15-14-11-8 (6+9) 16-15-14-11 (8+8)
For a total of 22.
We can see that as soon as we chose the lower value, we in essence had to "reset" our baseline. When we return to the previously optimal assignment, we are forced to go to 14 instead of 12, and then 15, 16...etc. Since we have reset, we can calculate exactly how much "extra" it will take to go out of order.
After we have used the 7, if we choose the 10 next, we know there is a gap of 2 in the final ordering vs optimal (we end up with a 14 instead of a 12). This means that the best possible case we can have moving forward must add 2 extra for each number remaining (because we have to get to 14 instead of 12, and then 15 instead of 13....etc). Since there are 3 numbers remaining, we end up with 6 extra above minimum....hence 22 vs 16. Once we account for that "extra" we can ignore it. We will simply assume we're assigning optimally again. It looks like this
10-9-8-7 11-10-9-8 (4+7). Selecting 10 would create a gap of 2. We have 3 numbers remaining, so "reserve" 6 and "adjust back to optimal": 12-11-9-8 (2+10) (with 6 on the side) 13-12-11-8 (4+9) (with 6 on the side) 14-13-12-11 (6+8) (with 6 on the side)
So here we have "assigned" 16 points, plus the 6 on the side, to get 22.
This leads us to a DP solution:
- nodes remaining (2^12)
- previous node selected (12)
- value remaining to be allocated (700)
Then the recurrence relation follows. If the initial value of the candidate node is LESS than that of the previous node, the amount we must account for is simply max_initial - candidate + num_nodes_already_selected + 1. In the case above, when we select 7 first, we've selected no nodes already, so the calculation is 10 - 7 + 0 + 1 = 4.
If the initial value of the candidate node is MORE than that of the previous node, then we know we're going to have to add some extra. The amount of extra is (my_initial_value - previous_node_initial_value - 1) * nodes_remaining, on top of the same calculation as above. So in the example above, when we add the 10, the calculation is (10 - 7 - 1) * 3 + (10 - 10 + 1 + 1) = 6 + 2. The 6 accounts for the "extra" for all remaining nodes, and the 2 for getting from 10 to 12.
One can note that the two cases collapse into a single statement by taking max(my_initial_value - previous_node_initial_value - 1, 0) as the multiplier for the extra term.
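Putting the two cases together, the per-step cost might look like this sketch (names illustrative); it reproduces the 4 + 8 + 4 + 6 = 22 accounting for the order 7, 10, 9, 8 above:

```python
def step_cost(cand, prev, max_initial, num_selected, num_remaining):
    """Points to account for when assigning to `cand` (its initial value).
    prev: initial value of the previous recipient (None for the first pick);
    num_selected: how many values are already done; num_remaining: how many
    are left, INCLUDING cand."""
    base = max_initial - cand + num_selected + 1
    extra = 0 if prev is None else max(cand - prev - 1, 0) * num_remaining
    return base + extra

# Order 7, 10, 9, 8 from the example: 4 + 8 + 4 + 6 = 22
costs, prev = [], None
for k, cand in enumerate([7, 10, 9, 8]):
    costs.append(step_cost(cand, prev, 10, k, 4 - k))
    prev = cand
assert costs == [4, 8, 4, 6]
```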
We sum the results of all affordable recursions and store that in the array.
The runtime is 2^12 * 12 * 700 (size of the array) * 12 (work per entry) ≈ 413 million. This should be fast enough, especially since many states are unreachable.
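Tying it all together, a memoized sketch of the whole solution (under the assumptions above: distinct starting values, and any surplus points can be dumped on the final assignment; the function name and signature are illustrative):

```python
from functools import lru_cache

def count_final_orderings(values, x):
    """Number of distinct final orderings reachable with x points to hand out."""
    n, top = len(values), max(values)

    @lru_cache(maxsize=None)
    def go(mask, prev, left):
        # mask: values not yet assigned; prev: index of the last recipient
        # (-1 before the first pick); left: points still available
        if mask == 0:
            return 1  # surplus is absorbed by the last assignment
        done = n - bin(mask).count("1")   # values already assigned
        remaining = n - done              # values left, including the candidate
        total = 0
        for i in range(n):
            if not (mask >> i) & 1:
                continue
            cost = top - values[i] + done + 1
            if prev >= 0:
                cost += max(values[i] - values[prev] - 1, 0) * remaining
            if cost <= left:
                total += go(mask & ~(1 << i), i, left - cost)
        return total

    return go((1 << n) - 1, -1, x)
```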