To avoid possible copyright-related issues, all problem statements have been paraphrased or summarized.
You are given a 10 by 10 grid and told the sum in each row and in each column. The sum tells you how many cells in that row or column are occupied by a ship. Each ship is aligned to the grid, meaning that for any given cell it either occupies it completely or does not occupy it at all, and all cells occupied by a particular ship are contiguous within a single row or column (as in the game Battleship). Ships are completely contained within the grid. There are four one-cell ships, three two-cell ships, two three-cell ships, and one four-cell ship. Two different ships cannot overlap, nor can two vertically, horizontally, or diagonally adjacent cells be occupied by two different ships.
- How many possible configurations of the ten ships yield the given row and column sums?
- If this number is greater than one, can we force the solution to be unique by further specifying the contents of some cell? We can specify it to be empty, occupied by the interior of a ship, occupied by the leftmost cell of a horizontal ship, occupied by the topmost cell of a vertical ship, occupied by the rightmost cell of a horizontal ship, occupied by the bottommost cell of a vertical ship, or occupied by a one-cell ship. If this is possible, find the “best” cell that allows us to do this, ranking first by lowest row number, then breaking ties by lowest column number, and then by type of contents, in the order listed in the previous sentence. If it is not possible, can we specify the contents of two cells and hence force the solution to be unique? Among multiple pairs of cells, find the best pair, that is, the one whose best cell is the best according to the rules for ranking single cells; ties are broken by the second best cells.
No team solved this problem during the contest. However, we may be reasonably certain that it can only be solved by a recursive backtracking approach: recursively generate all possible combinations of locations for the ten ships by adding one ship at a time, pruning wherever adding a ship in a particular location would exceed the specified number of occupied cells in some row or column, and remembering that ships can’t overlap or be adjacent; whenever we manage to place all ships, increment the count of possible solutions. When trying to specify one or two cells in order to force a unique solution, we simply try all possibilities, starting from the best (i.e., empty cell in first row and first column, then empty cell in first row and second column, and so on), again recursively generating all solutions, until we find something that gives us exactly one solution. Because we didn’t attempt this problem, I can’t say what kind of optimizations are necessary to get AC.
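To make the idea concrete, here is a minimal Python sketch of that backtracking (my own code, with hypothetical helper names, scaled down to tiny grids, and with only the row/column-sum and adjacency pruning described above; a real contest solution would need much stronger pruning):

```python
from collections import Counter

def count_configs(n, ships, row_sums, col_sums):
    """Count ship placements on an n-by-n grid matching the row/column sums.
    Ships of equal length are interchangeable, so we force an ordering on
    their positions to avoid counting the same configuration twice."""
    ships = sorted(ships, reverse=True)
    occ = set()  # cells currently occupied by placed ships

    def cells(r, c, length, horiz):
        return [(r, c + i) if horiz else (r + i, c) for i in range(length)]

    def ok(cs, rows, cols):
        # in bounds, not touching an existing ship (even diagonally),
        # and not exceeding any row or column sum
        for r, c in cs:
            if not (0 <= r < n and 0 <= c < n):
                return False
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (r + dr, c + dc) in occ:
                        return False
        rc, cc = Counter(r for r, _ in cs), Counter(c for _, c in cs)
        return all(rows[r] >= k for r, k in rc.items()) and \
               all(cols[c] >= k for c, k in cc.items())

    def go(i, rows, cols, prev_key):
        if i == len(ships):
            return int(all(v == 0 for v in rows) and all(v == 0 for v in cols))
        total, L = 0, ships[i]
        for r in range(n):
            for c in range(n):
                for horiz in ([True] if L == 1 else [True, False]):
                    key = (r, c, horiz)
                    if i > 0 and ships[i - 1] == L and key <= prev_key:
                        continue  # canonical order for equal-length ships
                    cs = cells(r, c, L, horiz)
                    if not ok(cs, rows, cols):
                        continue
                    occ.update(cs)
                    for rr, cc2 in cs:
                        rows[rr] -= 1
                        cols[cc2] -= 1
                    total += go(i + 1, rows, cols, key)
                    for rr, cc2 in cs:
                        rows[rr] += 1
                        cols[cc2] += 1
                    occ.difference_update(cs)
        return total

    return go(0, list(row_sums), list(col_sums), (-1, -1, False))
```

The uniqueness queries would then wrap this counter: fix a cell’s contents, re-count, and check whether the count is exactly one.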
This problem asks to simulate cuckoo hashing. There are two arrays, of sizes n1 and n2 (n1, n2 ≤ 1000). A non-negative integer value x would be stored at position x mod n1 in the first, and x mod n2 in the second. These two arrays together constitute a hash table. To insert a value x into the hash table, insert it into the first array, at position x mod n1. If there is already some value y in that position, then we have to move y to the second array, storing it in position y mod n2. And if there is already a value in that position, then we move it to the first array, at its position there, and so on—we keep shuffling values back and forth between the two arrays until a value lands in an empty spot.
You are given a sequence of non-negative integers. Output the states of the two arrays after inserting all the values in order. It is guaranteed that each insertion will terminate.
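The insertion procedure itself is only a few lines. Here is a sketch in Python; note that the statement’s exact hash formulas did not survive transcription above, so this assumes the usual choice of x mod (table size):

```python
def cuckoo_insert(x, tables):
    """Insert x into tables = [t1, t2] (None marks an empty slot).
    The position in each table is assumed to be x mod its size."""
    side = 0
    while x is not None:
        t = tables[side]
        p = x % len(t)
        t[p], x = x, t[p]  # store x; whatever was there gets evicted
        side ^= 1          # the evicted value moves to the other table
```

For example, with sizes 3 and 5, inserting 3, then 6, then 9 evicts 3 and then 6 into the second array.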
To see that this is fast enough, first observe that it’s impossible to insert more than n1 + n2 − 1 values, so there will be at most 1999 integers given to insert. Furthermore, since it’s guaranteed that each insertion will terminate, it’s not too hard to see that there’s a linear limit on how many steps each insertion can take. To prove this, consider an undirected graph in which each vertex corresponds to a location in one of the two arrays and each edge corresponds to either an existing value or the value we’re inserting; the edge is incident upon the two vertices corresponding to the positions where the value can be stored (thus, the graph is bipartite). When we insert a value, we trace out a path along the edges, which terminates once it lands on an empty vertex. If an insertion is to terminate, then there can’t be more edges than vertices in the connected component containing the edge that corresponds to the value we’re inserting, because then there would be too few spots for all the values. So there are two cases:
- Case 1: the number of edges is one less than the number of vertices (there are two empty spots). So the connected component is a tree. Because a tree has no cycles, we can never visit an edge twice (i.e., move a value that we’ve already moved). This gives the desired linear time bound.
- Case 2: the number of edges equals the number of vertices (there is one empty spot). There is then exactly one cycle. In this case, observe that if we ever visit a vertex twice, this must be the first vertex on the cycle we visited. When we visit it the second time, we kick it back to its original position which must be outside the cycle since it was the first. After this, we are directed away from the cycle and so cannot enter it again. Thus in this case the number of iterations can’t be more than twice the number of edges.
I’m not sure whether I could’ve come up with this (extremely hand-wavy) proof during the contest, but I think you’re just supposed to intuitively realize the number of iterations is linearly bounded. With at most about 2000 insertions, each taking at most about 2 × 2000 steps, the total number of iterations here can’t exceed about 8 million, so we would expect this to get in under the time limit.
This problem deals with a modified version of the Playfair cipher. To encrypt a message, you first need to pick a secret key. Then you write the letters of the key, one at a time, in a 5 by 5 grid, with each letter occupying one cell. Squares are filled left to right, top to bottom. Duplicate letters in the key are skipped after the first time they’re written down. After this, unused squares in the grid are filled left to right, top to bottom, by the letters of the alphabet not present in the key, in order.
I and J are considered the same letter, so there are only 25 letters in the alphabet, and thus the grid will end up containing a permutation of the alphabet.
A message is encrypted as digraphs, that is, two-letter pairs. For example, to encrypt ECNA, you would break it down as EC NA, encrypt EC and NA separately, and concatenate the results. A special case occurs when the two letters in a pair are identical; in this case, in the original Playfair cipher, you insert the letter X after the first, and then the second actually becomes the first letter of the next pair; for example, HELLO would become HE LX LO and then these three pairs would be encrypted. If you end up with a single letter at the end, add an X to the end. (Evidently, a message such as FOX cannot be encrypted with this method.) In the modified version used in this problem, instead of always inserting the letter X, the first time you insert a letter you use A, the second time you use B, and so on, skipping J, and wrapping around after Z. You also skip a letter if inserting it would give a pair of identical letters; for example, AARDVARK would become AB AR DV AR KC, where we have skipped A because inserting it would have produced the pair AA.
After breaking down the message into letter pairs, we use the grid to encrypt each pair. If the two letters appear in the same row, then we replace each letter by the one immediately to the right of it in the grid, wrapping around if necessary. If they are in the same column, we replace each one by the one immediately below it, again wrapping around if necessary. If they are in neither the same row nor the same column, replace the first letter by the letter that lies in the same row as the first letter and same column as the second, and replace the second letter by the letter that lies in the same row as the second letter and same column as the first. Concatenate all the encrypted pairs to give the ciphertext. You are given a series of key-plaintext pairs. In each case output the corresponding ciphertext.
Simulation. I would say straightforward simulation, but I guess this problem isn’t exactly the most straightforward. Luckily, I wasn’t the one stuck coding this one :P
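As a sketch of the core mechanics—building the grid and encrypting a single pair—something like the following works (the digraph-splitting rules of the modified cipher are omitted; the key used in the usage example is the classic Playfair demonstration key, not from this problem):

```python
def build_grid(key):
    """Row-major 5x5 Playfair grid as a 25-character string (J folded into I)."""
    letters, seen = [], set()
    for ch in key.upper().replace("J", "I"):
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            letters.append(ch)
    for ch in "ABCDEFGHIKLMNOPQRSTUVWXYZ":  # the alphabet without J
        if ch not in seen:
            letters.append(ch)
    return "".join(letters)

def encrypt_pair(grid, a, b):
    ra, ca = divmod(grid.index(a), 5)
    rb, cb = divmod(grid.index(b), 5)
    if ra == rb:   # same row: take the letter to the right, wrapping
        return grid[ra * 5 + (ca + 1) % 5] + grid[rb * 5 + (cb + 1) % 5]
    if ca == cb:   # same column: take the letter below, wrapping
        return grid[(ra + 1) % 5 * 5 + ca] + grid[(rb + 1) % 5 * 5 + cb]
    # rectangle rule: each letter keeps its row, takes the other's column
    return grid[ra * 5 + cb] + grid[rb * 5 + ca]
```

With the classic key PLAYFAIREXAMPLE, the grid begins PLAYF IREXM…, and the pair HI encrypts to BM by the rectangle rule.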
On a web page of dimensions up to 10^9 by 10^9 pixels, up to 10000 disjoint axis-aligned squares are given, each of which has side length 10 pixels. Each square’s upper-left pixel has coordinates that are multiples of 10. Find the smallest set of pixels which both contains all the squares and is orthogonally convex, meaning that if two pixels in the same row belong to the set, then so do all the pixels between them in that row; likewise, if two pixels in the same column belong to the set, then so do all the pixels between them in that column. Output the vertices of this set in clockwise order starting from the topmost one (leftmost in case of a tie).
First, notice that we can replace each 10 by 10 square by its four vertices. For example, the square from (50, 60) to (59, 69) can be replaced by (50, 60), (50, 69), (59, 60), and (59, 69). The orthogonal convex hull of the points thus obtained will be the same as that of the original set of squares, so we’ll just work with these points instead.
To proceed, we basically walk around the hull in clockwise order. We start by identifying the topmost pixel (leftmost in case of a tie). This must be a vertex. To find the next vertex, we locate the topmost point which lies strictly to the right of it (in case of a tie, pick the rightmost). Then we find the topmost point which lies strictly to the right of that, and so on. Eventually we reach a point in the rightmost occupied column. Now we switch gears, and find the rightmost point that lies strictly below our current point (bottommost in case of a tie). Keep doing this until we get to the bottommost occupied row. Then switch gears again, repeatedly moving to the bottommost point that lies strictly to the left of the current point (leftmost in case of a tie). Finally, once we reach the leftmost occupied column, repeatedly move to the leftmost point that lies strictly above the current point. Eventually we reach the starting point, and we are done. Whenever we move from a point to another point that is in both a different row and different column, we have to insert an additional vertex. For example, in the first phase, when we are moving from a point to the topmost point strictly to the right, if the new point is strictly below the current point, then we insert a point which is in the same row as the new point but same column as the current point; the other three cases are analogous.
To efficiently move from one point to the next, all we have to do is precompute, for each of the (up to) 40000 points we have to work with, the topmost point to the right, the rightmost point below, the bottommost point to the left, and the leftmost point above. We can precompute the topmost point to the right of all points by sorting the points by x-coordinate and scanning from right to left, remembering the topmost point we’ve seen so far; the other three cases are analogous.
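Here is a sketch of one of the four precomputations (a hypothetical function name of mine; coordinates follow the page convention where smaller y means higher):

```python
def topmost_to_right(points):
    """For each point, the topmost point strictly to its right
    (rightmost on a tie); None if there is none. Smaller y = higher."""
    pts = sorted(points)                # sort by x (then y)
    ans, best = {}, None
    i = len(pts) - 1
    while i >= 0:
        j = i
        while j >= 0 and pts[j][0] == pts[i][0]:
            j -= 1                      # pts[j+1 .. i] share the same x
        for k in range(j + 1, i + 1):
            ans[pts[k]] = best          # best among strictly larger x only
        for k in range(j + 1, i + 1):   # now fold this x-group into `best`
            p = pts[k]
            if best is None or p[1] < best[1] or (p[1] == best[1] and p[0] > best[0]):
                best = p
        i = j
    return ans
```

Processing each group of equal x together ensures a point never sees another point in its own column as "strictly to the right."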
In an n × m grid (n, m ≤ 25), some squares are occupied by obstacles. The leftmost and rightmost columns do not contain obstacles. There is initially a game piece in each cell of the leftmost column. You need to move them all to the rightmost column, using the minimum number of turns possible. In each turn, you can choose, for each piece, whether to move it or to leave it where it is; if you move a piece then you must move it into a vertically or horizontally adjacent cell that does not contain an obstacle. At the end of a turn, no two pieces can occupy the same square. What is the minimum number of turns necessary?
We didn’t solve this problem, but it seemed to us that we could do it with max flow. However, it’s somewhat unclear whether this approach would even fit in memory, let alone be fast enough.
First, we consider the problem of determining whether it is possible to solve the problem in k turns. We can reduce this to max flow with vertex constraints. There is one vertex for each (cell, turn) combination. For example, if k = 3, then there are four vertices for each cell: the cell at the beginning of the game, the cell after the first move, the cell after the second move, and the cell after the third move. Each vertex has a capacity of one. Each vertex has up to five outgoing edges (which we can take to have capacity one), each of which goes to some vertex corresponding to the next turn, and either the same cell, or a horizontally or vertically adjacent cell, as long as it does not contain an obstacle. There is an edge (which we can again take to have capacity one) from each vertex corresponding to the final state and a cell in the rightmost column to the sink. There is also an edge (again of capacity one) from the source to each vertex corresponding to the initial state and a cell in the leftmost column. It is not hard to see that an augmenting path in this flow network corresponds to the movement of a game piece from the leftmost column in the initial state to the rightmost column in the final state; each edge that is not from the source or to the sink corresponds to the movement of the game piece from a particular cell in a particular turn to an adjacent cell in the next turn (or leaving it in the same cell); the vertex capacity constraint enforces not being able to have two pieces in the same cell in a given turn. If the max flow is n, then we have moved all n pieces, that is, won the game.
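A minimal sketch of this construction, with vertex splitting for the capacity constraints and a plain DFS-based Ford–Fulkerson (all names are mine, not from any official solution):

```python
from collections import defaultdict

def max_flow(cap, adj, s, t):
    """Ford-Fulkerson with DFS; fine here since all capacities are 1."""
    flow = 0
    while True:
        parent, seen, stack, found = {}, {s}, [s], False
        while stack and not found:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen and cap[(u, v)] > 0:
                    seen.add(v)
                    parent[v] = u
                    if v == t:
                        found = True
                        break
                    stack.append(v)
        if not found:
            return flow
        v = t
        while v != s:               # push one unit along the found path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def pieces_delivered(grid, T):
    """Max number of pieces movable from the leftmost to the rightmost
    column of `grid` ('#' = obstacle) in T turns."""
    R, C = len(grid), len(grid[0])
    cap, adj = defaultdict(int), defaultdict(set)

    def add(u, v):
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)               # residual edge (capacity 0 by default)

    for turn in range(T + 1):
        for r in range(R):
            for c in range(C):
                if grid[r][c] == '#':
                    continue
                add((turn, r, c, 'in'), (turn, r, c, 'out'))  # cell capacity 1
                if turn < T:
                    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < R and 0 <= nc < C and grid[nr][nc] != '#':
                            add((turn, r, c, 'out'), (turn + 1, nr, nc, 'in'))
    for r in range(R):
        add('s', (0, r, 0, 'in'))           # pieces start in column 0
        add((T, r, C - 1, 'out'), 't')      # and must end in column C-1
    return max_flow(cap, adj, 's', 't')
```

The answer to the original problem is then the smallest T for which this returns the number of rows.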
Before we proceed, what is the complexity of this approach? Well, there are up to 625 cells, so there are about 625k internal vertices, each with up to about 5 outgoing edges, so on the order of 3000k edges. Furthermore, to enforce the vertex capacity constraints, we’ll have to split each vertex up into an incoming and an outgoing vertex, with an edge between them, so now we’re up to about 4000k edges. Because the graph has unit capacities and the max flow is at most 25, we can use DFS to find each augmenting path, and we have to go through at most 25 augmenting paths before we find the max flow. The complexity of DFS is O(V + E), so the total number of operations is about 25 times the size of the graph.
Now, how do we find the minimum k? Well, we didn’t get that far, but it seems that a reasonable approach would be as follows: first just set k = 0 and create the corresponding flow network. Then, as long as the max flow of n hasn’t been achieved yet because you can’t find a path in the residual network, add the next set of vertices and edges, corresponding to increasing k by allowing an extra move (that is, add 625 more vertices and join the current final state vertices to them, making the vertices just added the new final state vertices; also add edges from the rightmost column cells in the new vertex set to the sink) and check whether we can now find an augmenting path; if not, increase k again, and so on. To efficiently determine whether an augmenting path exists in the new graph, all we have to do is run another DFS starting from reachable vertices in the previous final state cells, rather than doing a huge DFS from the source that will go through the entire graph again. This essentially means that every time we have to increase k once or more, the total time taken for all the DFSing is about the same as that taken to do a single DFS in the graph corresponding to the final value of k before we find another augmenting path. That means the total number of operations is bounded by about 25 times the size of the final graph, where k is the answer (minimum number of turns). I’m not quite sure how large k could be; it could certainly be as large as about 625, in the case that the obstacles force you to take a winding path through the grid. Putting k = 625, we obtain a figure of about 39 million. Is this fast enough? I’m not sure.
Jacob argues that binary search on k would be faster. At some point I’ll probably code up both and see who’s right.
In a directed acyclic graph of up to 200 vertices, we say an edge is redundant if removing the edge does not alter the connectivity of the graph (i.e., if a path existed from one vertex to another in the original graph, a path must also exist when the redundant edge is removed). Find all redundant edges.
A naive approach is to simply try deleting each edge in turn, and then running a DFS from each vertex to determine whether the connectivity has changed. Unfortunately, this approach takes O(VE(V + E)) time, which is O(V^5) in dense graphs, and is too slow for V = 200.
A faster approach is to run Warshall’s algorithm for computing the transitive closure of the graph. Whenever we find that there is a path from i to k and a path from k to j, if there is an edge directly from i to j, that edge must be redundant. This is only true because the graph is acyclic! Deleting that edge can’t alter the connectivity of i to k or of k to j, because i precedes k and k precedes j in topological order, and thus the path from i to k can’t use the edge from i to j, nor can the path from k to j. The running time of this approach is O(V^3).
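In code, the whole thing is short. A sketch, assuming vertices are numbered 0 to V−1:

```python
def redundant_edges(n, edges):
    """Edges (u, v) of a DAG removable without changing reachability."""
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):                  # Warshall's transitive closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    out = []
    for u, v in edges:
        # some intermediate w gives a path u -> w -> v besides the edge itself
        if any(reach[u][w] and reach[w][v]
               for w in range(n) if w != u and w != v):
            out.append((u, v))
    return out
```

For the triangle 0→1→2 with shortcut 0→2, only the shortcut is redundant.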
There is a horizontal line segment (the house) located above the origin on the Cartesian plane, and up to 10 decorations, each of which is also a line segment, and each of which is either reflective or non-reflective. There is a lamp located at the origin, which emits rays at all angles up to 90° on either side of the vertical. Non-reflective decorations absorb all incident light, and reflective decorations act as mirrors that obey the law of reflection (and are reflective on both sides). It is guaranteed that a ray can only be reflected up to 100 times. What fraction of the house is actually illuminated?
I don’t think any teams solved this problem during the contest, so I have no idea how it’s actually done. My suspicion is that it can be done using a brute force simulation approach, in which we simply sample the interval of angles and cast out a ray at each sampled angle, following its course until it either gets absorbed, reaches the house, or escapes to infinity. The fraction of the house illuminated only needs to be accurate to within one part in 10^4 (a percentage rounded to the nearest hundredth). I’m not sure how finely you’d have to sample in order to get the right answer, but of course, if nobody else solves the problem, then the rational strategy is to just keep resubmitting until you get it accepted. The simulation is pretty easy to do; in each iteration you just have to check whether the ray intersects any of the (up to) 11 line segments specified. If it’s the house or one of the non-reflective decorations, or if it doesn’t intersect anything, you’re done; if it’s a mirror, then you have to iterate again, but you only have to do up to 100 iterations for each angle, so that’s at most about 1100 intersection checks per angle. Let’s say you decided to sample 50,000 angles; then you’d have to do a total of about 55 million operations. I suppose it’s plausible that this approach would work, but I can’t say for sure until I see the test data or official solutions.
The other, non-hackish approach would be to compute the illuminated fraction exactly by considering only interesting angles. An interesting angle is an initial angle at which the ray grazes the endpoint of some line segment. It is easy to see that between any two consecutive interesting angles, either the ray does not illuminate the house at all, or the image of the ray on the house varies monotonically and continuously, so that whatever portion of the house lies between the image of the ray at the first angle and the image of the ray at the second angle is completely illuminated. Taking the union of all such intervals would then give the total length illuminated. To do the actual angle sweep, we would first sort a list of all angles at which the ray would graze the endpoint of some segment. We would then consider each sub-interval between a pair of consecutive interesting angles. If, in this interval, the ray hits a mirror, then we have to recursively subdivide the interval by considering all the angles at which the ray could hit the mirror, reflect, and then graze the endpoint of another segment. If in one of those intervals the ray hits a second mirror, then we would have to recursively subdivide again, and so on.
Two questions remain. First of all, how do we compute the initial angle required to hit some target point after reflection from a sequence of mirrors M1, M2, …, Mk? The answer is pretty clever: we reflect the target point across mirror Mk to get a point Tk, then reflect Tk across mirror Mk−1 to get Tk−1, and so on, finally reflecting across M1 to get T1. The ray from the origin toward T1 is then the same ray which would (potentially) reflect from all the mirrors in sequence and finally hit the target. (I imagine anyone who plays pool is familiar with this trick.) The second question is what the worst case running time of this approach is. I have no idea. I don’t have any idea whether this was the intended approach, or the brute force one. When the test data come out, maybe we’ll see. If I’m not too lazy, that is.
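The only geometric primitive the unfolding trick needs is reflecting a point across a mirror’s supporting line. A sketch:

```python
def reflect(p, a, b):
    """Reflect point p across the (infinite) line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # project p onto the line to find the foot of the perpendicular
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy
    return (2 * fx - px, 2 * fy - py)   # p mirrored through the foot
```

Applying this to the target point for each mirror, from the last mirror back to the first, yields the aiming point T1 described above.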
In a 3 × n grid of squares, each square contains an integer. We can place a domino across two adjacent squares and gain a number of points equal to the product of the numbers in those two squares. What is the maximum possible score, if we can place as many dominoes as we want but no two dominoes can overlap?
This problem can be solved using dynamic programming. We will define f(i, t, m, b) as the maximum score obtainable using only dominoes that lie entirely within columns 0 through i, where t specifies whether the top square in column i is covered, and likewise m and b for the middle and bottom squares. The total number of states is then 8n.
Computing the transitions is annoying. If we number the columns starting from zero, our base case is f(−1, 0, 0, 0) = 0; here, no dominoes can be placed and the maximum score is 0. To compute f(i, t, m, b) for i ≥ 0, we consider all possible arrangements of dominoes in column i:
- (t, m, b) = (0, 0, 0): column i is empty. Then f(i, 0, 0, 0) = f(i − 1, ∗, ∗, ∗), where a ∗ means we take the maximum over both values of that argument: we have license to cover as many or as few squares in column i − 1 as we wish, and then we’ll simply not put anything in column i, and hence not gain any additional points.
- (1, 0, 0): only the top square in column i is filled. In this case the domino covering that square must also occupy the top square in column i − 1. The score we get is the value of this domino plus f(i − 1, 0, ∗, ∗), where the 0 indicates that the top square must be initially uncovered in column i − 1, but we don’t care whether the other two squares are.
- (0, 1, 0) or (0, 0, 1): these are analogous to the previous case.
- (1, 1, 0): there are two ways we can cover the top and middle squares in column i while leaving the bottom one empty. Either we can place a vertical domino across these two squares, giving a maximum total score equal to the value of that domino plus f(i, 0, 0, 0) (because once we remove this domino from the board we’ll have an empty column i). Or, we can place a horizontal domino on each of the two squares that also covers the square to its left. The maximum score we could then obtain is the sum of the values of these two dominoes and f(i − 1, 0, 0, ∗) (where we see that the bottom square in the previous column could be either covered or uncovered). The case (0, 1, 1) is analogous.
- (1, 0, 1): this is only obtainable by placing a domino on the top square that also covers the square to its left, and a domino on the bottom square that also covers the square to its left. The maximum score you can get is then the sum of the values of these two dominoes plus f(i − 1, 0, ∗, 0), where we don’t care whether the middle square is covered in the previous column.
- (1, 1, 1): there are three ways to obtain this. We could lay a domino across the top and middle squares, and have the domino covering the bottom square also cover the square to its left. The score here is the sum of the values of the two dominoes plus f(i − 1, ∗, ∗, 0). We could also lay a domino across the middle and bottom squares, and have the domino covering the top square also cover the square to its left. The score here is the sum of the values of the two dominoes plus f(i − 1, 0, ∗, ∗). Finally, we could have three different dominoes in this column, each also covering the square to its left. The score here is the sum of the values of the three dominoes plus f(i − 1, 0, 0, 0).
The final answer is then f(n − 1, ∗, ∗, ∗), since we ultimately don’t care which squares are covered in the last column.
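An equivalent way to organize the same DP (my own reformulation, not necessarily the contest solution’s exact shape) is to fold (t, m, b) into a 3-bit mask and enumerate, for each column, the set of horizontal dominoes entering from the left plus an optional vertical domino:

```python
def max_domino_score(grid):
    """grid: 3 rows x n columns of integers; returns the max total score."""
    n = len(grid[0])
    NEG = float("-inf")
    # dp[mask]: best score when `mask` marks the covered rows of the
    # current column (bit r set = row r covered by some domino)
    dp = [NEG] * 8
    dp[0b000] = 0                            # column 0: leave it empty,
    dp[0b011] = grid[0][0] * grid[1][0]      # or a vertical on top+middle,
    dp[0b110] = grid[1][0] * grid[2][0]      # or a vertical on middle+bottom
    for i in range(1, n):
        ndp = [NEG] * 8
        for H in range(8):                   # rows taking a horizontal domino
            for V in (0b000, 0b011, 0b110):  # vertical domino inside column i
                if H & V:
                    continue
                gain = sum(grid[r][i - 1] * grid[r][i]
                           for r in range(3) if H >> r & 1)
                if V == 0b011:
                    gain += grid[0][i] * grid[1][i]
                elif V == 0b110:
                    gain += grid[1][i] * grid[2][i]
                # rows in H must be uncovered in the previous column
                prev = max(dp[p] for p in range(8) if not (p & H))
                if prev > NEG:
                    ndp[H | V] = max(ndp[H | V], prev + gain)
        dp = ndp
    return max(dp)
```

On a 3 × 2 grid of all ones, three dominoes fit and the answer is 3; negative entries are simply never covered when that would lower the score.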
Consider the regular language (a|ab|bb)*. Place all strings in this language in order of increasing length, where words of the same length are ordered lexicographically. Suppose there are M words per page in the dictionary. What are the first and last words on page number P?
The first word on the Pth page has (one-based) rank M(P − 1) + 1. The last word on the Pth page has rank MP. If we can figure out how to compute the kth-ranked word in the dictionary, we just set k = M(P − 1) + 1 and k = MP to get the desired words. Note that MP is too large to fit in a 64-bit integer, so I suggest using Java’s native bignums.
This problem, like most problems of the type find the kth lexicographically smallest element, is solved by using the ordering criteria to narrow down the characteristics of the sought element from coarsest to finest. That is, first we will determine the length of the sought word, then we will determine the first letter of the word, and so on, because words are ranked first by length, then by first letter, and so on.
To determine the length of the word, we first ask ourselves whether it could be of length 1. To do this, we compute the number of words of length 1. If k is less than or equal to this, then we know that the kth word has length 1. If not, then we compute the number of words of length 2. If k is less than or equal to the sum of the number of words of lengths 1 and 2, then we know that the kth word has length 2. And so on. To do this efficiently, we would first subtract the number of words of length 1 from k, then subtract the number of words of length 2, and so on, until we find n such that the value of k we have left over is less than or equal to the number of words of length n. Then the desired word is the kth word of length n (where, remember, we have subtracted out the word counts for lengths less than n from k).
Next, we find the first character of the word, by computing the number of words of length n that begin with the letter a. If k is less than or equal to this, the word begins with a; otherwise, the word begins with b, and we subtract the number of words beginning with a from k. Then move on to the second letter: compute the number of words of length n which begin with the first letter we determined followed by a, compare this with k, and so on. In this way we successively determine one letter after another, until we have constructed the entire word.
(I take this opportunity to reiterate that this approach is very general, and is almost always the approach used to solve find the kth lexicographically smallest element problems, or generally any kind of problem in which a set of items is ranked according to a succession of criteria and we have to find an item in a particular position of the sequence.)
Finding the number of words of a given length or the number of words of a given length and a given prefix—the computation you need to do over and over again in the preceding procedure—is actually easy once you realize that a word is in the language if and only if it starts with an even number of bs. Thus, if the prefix contains at least one a (preceded by an even number of bs), you have free rein with the rest of the letters and there are 2^r words, where r is the number of remaining letters. If there is no a in the prefix and an even number of bs, then there are 2^(r−1) + 2^(r−3) + ⋯ words (plus one for the all-b word, if its total number of bs is even), where the 2^(r−1) words consist of a followed by whatever, the 2^(r−3) words correspond to bba followed by whatever, and so on. If there is no a in the prefix and an odd number of bs, then there are 2^(r−2) + 2^(r−4) + ⋯ words (again plus one for the all-b word, if its total number of bs is even), where the 2^(r−2) words correspond to ba followed by whatever, the 2^(r−4) words correspond to bbba followed by whatever, and so on.
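Putting the counting formulas and the letter-by-letter narrowing together gives a short sketch (using Python’s arbitrary-precision integers in place of Java’s bignums; note that this version counts the empty word as rank 1):

```python
def count_with_prefix(prefix, n):
    """Words of length n in (a|ab|bb)* starting with `prefix`, using the
    characterization: a word is in the language iff it begins with an
    even number of b's."""
    r = n - len(prefix)
    if r < 0:
        return 0
    lead = 0                      # leading b's of the prefix
    while lead < len(prefix) and prefix[lead] == 'b':
        lead += 1
    if lead < len(prefix):        # prefix contains an 'a'
        return 2 ** r if lead % 2 == 0 else 0
    # prefix is all b's: count words b^j a (anything), j even, plus b^n
    total = 1 if n % 2 == 0 else 0
    j = lead + (lead % 2)         # smallest even j >= lead
    while j <= n - 1:
        total += 2 ** (n - j - 1)
        j += 2
    return total

def kth_word(k):
    """The k-th word (1-based), ordered by length, then lexicographically."""
    n = 0
    while count_with_prefix('', n) < k:   # find the length n
        k -= count_with_prefix('', n)
        n += 1
    w = ''
    while len(w) < n:                     # then fix letters left to right
        ca = count_with_prefix(w + 'a', n)
        if k <= ca:
            w += 'a'
        else:
            k -= ca
            w += 'b'
    return w
```

For instance, the dictionary begins with the empty word, then a, aa, ab, bb, aaa, and so on.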
In the contest, we solved problems B, C, D, F, H, and I, and ran out of time trying to code E. After solving B, C, D, F, H, and I (not in that order), we had about 50 minutes left (?), and figured that E would be quickest to code. However, E turned out to be difficult to debug, and the nature of the algorithm meant we couldn’t use prewritten max flow code from our printed materials. In the end, we couldn’t even get it to work on the sample data.
For E, our team did a binary search on k, reconstructing the graph every single time, and it still passed. But then again the judge was pretty fast in comparison with the computers; on a blank 25 by 25 grid our program took around 3-4 seconds. Besides that, the only implementation difference was that we used BFS for maxflow.
Cool! For E, is there an example where min-cost max-flow on the input graph (just find any short set of paths from left to right) is incorrect? It feels like the overlapping of pieces as they move around can always be avoided, although I don’t know if it’s really true.