On perspective


Reasoning objectively is difficult because we are all biased by our own subjective experiences. There are two ways I can see to address this. The first is to consider others’ subjective experiences in addition to your own. This gives you what many would call a more balanced point of view. You might say that reading about another person’s experiences gave you new perspective on an issue. The second is to attempt to unconsider your own subjective experiences by training yourself to recognize cognitive bias.

I think the conventional wisdom is that you should always try to seek out others’ perspectives, but I question how useful that really is. I’m biased enough already; why would I want to inject even more bias into my thoughts? Emotion is really important in day-to-day life, but I think that it shouldn’t enter the debate on important moral issues facing humanity. When people talk about how their perspectives have been changed, often they mean something like how they visited an impoverished country and were moved by what they saw. Honestly, I don’t care. I already know that poverty causes a great deal of suffering, and, like many others, I would really like to know how it can be reduced. I don’t want to hear your sob story, though.

On the other hand, talking to other people about issues can be rewarding in that they may provide you with facts that you didn’t already know. So that’s the strategy I try to use: talk to other people to learn things you didn’t know, and, at the same time, work by yourself to become less biased. Unfortunately, if there’s one thing I’ve learned, it’s that becoming less biased is really hard. I’m definitely still a very biased person.


Done organic chemistry forever!


I used to love organic chemistry. Especially synthesis problems, which I felt gave me a chance to exercise my creativity and problem-solving skills. When I was competing for a spot on the Canadian IChO team, organic chemistry was probably my strong suit (although it’s also Richard’s strong suit, and in fact he apparently beat me on every section). When I decided to study chemistry at U. of T., I chose to take mostly physical and organic chemistry courses. I avoided biochemistry, because there’s too much memorization, and analytical chemistry, because I thought it would be boring. I couldn’t fit inorganic into my schedule initially, and I ended up just never taking any inorganic courses at all.

I’m sorry to say that I don’t love organic chemistry anymore. In fact, I’m sick of it. I’ve taken CHM249 (organic chemistry II), CHM342 (organic synthesis), and CHM440 (more organic synthesis, with a focus on pharmaceutical agents). Every time I took another organic chemistry course, there were more and more reactions to memorize. The memorization was not too bad in CHM249, but there also weren’t any challenging problems of the sort I used to love solving in high school. In CHM342 the amount of memorization increased significantly, and I had an especially hard time with drawing three-dimensional transition states and predicting stereochemistry. In CHM440 there were a total of 90 named reactions. I was actually scared of synthesis problems, because, with that many reactions, there is simply no way I would be able to see the right disconnections. Luckily, there weren’t any. Suffice it to say that this course confirmed my suspicion that it was the right choice not to go to grad school to study organic synthesis…

Anyway, I had my CHM440 final exam last week, and my CHM347 exam today (biochemistry; I didn’t want to take it but if I took only courses I liked then I wouldn’t be able to graduate). Next term I’m only taking statistical mechanics. This means I’ll never have to look at another curved arrow for the rest of my life. Yippee!

In retrospect, I greatly underestimated the difficulty of fourth year—or perhaps I just bit off more than I could chew. Three of the five courses I took (CHM440, high-energy physics, general relativity I) were cross-listed as grad courses, so I guess it was foolish of me to expect them not to be so hard. There was also a grad student in my (time-dependent) quantum chem course, and at any rate it was very grad-style, with a take-home final (which was way too hard) and a term paper. I was incredibly busy toward the end of the term, trying to keep up my GPA (not that it actually matters, but I guess it’s just an irrational drive of mine).

I’ve noticed something about myself: when I’m busy doing stuff I don’t want to do, I think of all the productive things I could be doing if I had free time, such as developing the PEG Judge, studying quantum field theory, learning Haskell, or maybe learning a new (human) language. Alas, once free time arrives, I basically waste time browsing Reddit and playing games. Anyone else do this?

Now for some more awesome news. We made ACM World Finals! I’ll be representing the University of Toronto in Russia in June, together with my teammates Hao Wei and Saman Samikermani. We weren’t sure whether we were going to make it, since CMU and UMich creamed us at the regionals, and solved more problems than we did. But I guess ACM just stuck to the recent trend of inviting the top three teams from our region. We’ve got a long road ahead of us; other teams are good—really, really good—and we’re all a bit out of practice. I just hope we don’t do too badly!


WKB approximation


I have a confession to make: I’m scared of approximations.

Scandalous, isn’t it? For there are few things in physics that can be solved exactly, and so if we want to ever be able to do any calculations, rather than just sitting around and writing down one PDE after another that nobody will ever solve, we simply can’t do without approximations. What kind of physics major doesn’t like approximations?

And yet, they’ve never come easily to me. In contrast to theorems and exact solutions, which make a lot of sense to me, approximations always confuse me and feel like they’re something I just have to memorize, and I’m not very good at memorizing things. This started in high school, when we were discussing double-slit diffraction patterns. In order to get an expression for the approximate spacing between the bands, you’ve got to argue that because the screen is far away, you have nearly a right triangle, and \sin \theta \approx \tan \theta \approx \theta. In the end, I just memorized the formula and felt dirty about it.

In my second year of college, I took an intro quantum mechanics course. It began with a discussion of the wave-like nature of matter, the photoelectric effect, Compton scattering, bremsstrahlung, hydrogen atom energy levels, all that good stuff. Then we did the Schrödinger equation for a particle in a box, a free particle, and plenty of super annoying problems where you have a potential jump and you have to match coefficients at the discontinuity and compute transmission and reflection coefficients. At the very end of term, we were introduced to the WKB approximation for the first time. Now, the prof for this course is notoriously bad (fortunately, he no longer teaches it), so I could barely understand what was going on; in the end he just derived an approximate formula for the tunneling amplitude through a potential barrier, and said WKB wouldn’t be on the final exam. I was relieved, and hoped I’d never come across it again.

Fast forward to present. I’m taking a course called Applications of Quantum Mechanics, and it’s a thinly veiled physics course about time-dependent QM which happens to be classified as CHM (luckily for me, because I didn’t want to take any more organic courses). Naturally, the WKB approximation shows up. There’s a lengthy discussion in the text about assuming an exponential form, expanding in a power series, and then plugging it into the Schrödinger equation. It was terribly dry, so I ended up just looking at the formula. That’s when it finally made sense to me.

The WKB approximation gives the following expression for the wave function of a stationary scattering state (i.e., one with E > V everywhere) subject to a spatially varying potential V(x):

\displaystyle \psi(x) \approx \frac{A}{\sqrt{p}} e^{i \int p/\hbar \, dx} + \frac{B}{\sqrt{p}} e^{-i \int p /\hbar \, dx}

where A and B are constants, and the momentum function p is defined as you would expect: p(x) = \sqrt{2m(E-V(x))}.

In order to see why this formula makes sense, compare it to the case where V = 0 and we have a free particle. Here we have an exact solution for the momentum eigenstates:

\displaystyle \psi(x) = C e^{ipx/\hbar} + D e^{-ipx/\hbar}

From the free-particle solution we can see that as the wave travels, it picks up a phase shift proportional to its momentum. If it travels a distance dx, then it incurs a phase shift of p/\hbar \, dx.

The WKB approximation is nothing more than the extension of this to a spatially varying potential. Here p is a function of x, so the total phase shift up to a particular point isn’t just px/\hbar, but has to be replaced by an integral, \int p/\hbar \, dx.

There’s a twist, however; in order to conserve probability, the amplitude has to vary spatially. Because we have a stationary state, the probability current has to be constant. Now, the probability current transported by the wave A' e^{ipx/\hbar} is proportional to A'^2 p. For this to remain constant, we must have A' = A/\sqrt{p}.

And really, that’s all there is to it: WKB is what you get when you assume that a free particle propagating through a potential maintains basically the same form; it just doesn’t accumulate a phase shift at a constant rate now since the potential isn’t constant, and its amplitude varies in order to conserve probability (just like how a classical wave’s amplitude decreases when it passes to a denser medium). There wasn’t anything to be scared of!

(In regions where E < V, we instead have exponential decay of the wave function. The formula given above is still correct, but a factor of i from the square root cancels the factor of i already in the exponential, and you get a real argument. The factor of (-1)^{1/4} in the denominator can be absorbed into the constant.)
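Here’s a quick numerical sanity check of the amplitude argument, in case you don’t believe it. This is just a toy example I made up (units with \hbar = m = 1, and an arbitrary slowly varying potential): build the WKB wave function on a grid and compute its probability current; the current should come out essentially constant, which is exactly what the 1/\sqrt{p} factor is for.

import numpy as np

# Toy check: for psi = p**-0.5 * exp(i * integral of p/hbar dx), the probability
# current j = (hbar/m) * Im(conj(psi) * dpsi/dx) should be constant in x.
# Units with hbar = m = 1; the potential is made up, just slowly varying and below E.
hbar = m = 1.0
E = 2.0
x = np.linspace(0.0, 10.0, 5001)
V = 0.3 * np.sin(0.5 * x)
p = np.sqrt(2.0 * m * (E - V))

# integral of p/hbar dx by the trapezoid rule, then the WKB wave function itself
phase = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(x)))) / hbar
psi = p ** -0.5 * np.exp(1j * phase)

j = (hbar / m) * np.imag(np.conj(psi) * np.gradient(psi, x))
print(j.min(), j.max())   # both should be very nearly 1/m = 1.0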


ACM regionals, problems and solutions


To avoid possible copyright-related issues, all problem statements have been paraphrased or summarized.

Problem A

You are given a 10 by 10 grid and told the sum in each row and in each column. The sum tells you how many cells in that row or column are occupied by a ship. Each ship is aligned to the grid, meaning that for any given cell it either occupies it completely or does not occupy it at all, and all cells occupied by a particular ship are contiguous within a single row or column (as in the game Battleship). Ships are completely contained within the grid. There are four one-cell ships, three two-cell ships, two three-cell ships, and one four-cell ship. Two different ships cannot overlap, nor can two vertically, horizontally, or diagonally adjacent cells be occupied by two different ships.

  1. How many possible configurations of the ten ships yield the given row and column sums?
  2. If this number is greater than one, can we force the solution to be unique by further specifying the contents of some cell? We can specify it to be empty, occupied by the interior of a ship, occupied by the leftmost cell of a horizontal ship, occupied by the topmost cell of a vertical ship, occupied by the rightmost cell of a horizontal ship, occupied by the bottommost cell of a vertical ship, or occupied by a one-cell ship. If this is possible, find the “best” cell that allows us to do this, ranking first by lowest row number, then breaking ties by lowest column number, and then by type of contents, in the order listed in the previous sentence. If it is not possible, can we specify the contents of two cells and hence force the solution to be unique? Among multiple pairs of cells, find the best pair, that is, the one whose best cell is the best according to the rules for ranking single cells; ties are broken by the second best cells.

Solution

No team solved this problem during the contest. However, we may be reasonably certain that it can only be solved by a recursive backtracking approach: generate all possible combinations of locations for the ten ships by adding one ship at a time, pruning whenever adding a ship in a particular location would exceed the specified number of occupied cells in some row or column, and remembering that ships can’t overlap or be adjacent; whenever we manage to place all ten ships, increment the count of possible solutions. When trying to specify one or two cells in order to force a unique solution, we simply try all possibilities, starting from the best (i.e., an empty cell in the first row and first column, then an empty cell in the first row and second column, and so on), and again recursively generate all solutions, until we find a specification that gives us exactly one solution. Because we didn’t attempt this problem, I can’t say what kind of optimizations are necessary to get AC.

Problem B

This problem asks us to simulate cuckoo hashing. There are two arrays, of sizes n_1 \leq 1000 and n_2 \leq 1000 (n_1 \neq n_2). A non-negative integer value x would be stored at position x \mod n_1 in the first, and x \mod n_2 in the second. These two arrays together constitute a hash table. To insert a value x into the hash table, insert it into the first array. If there is already some value y in position x \mod n_1, then we have to move y to the second array, storing it in position y \mod n_2. And if there is already a value z in that position, then we move it to the first array, in position z \mod n_1, and so on—we keep shuffling values back and forth between the two arrays until a value lands in an empty spot.

You are given a sequence of non-negative integers. Output the states of the two arrays after inserting all the values in order. It is guaranteed that each insertion will terminate.

Solution

Straightforward simulation.
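In code, the simulation is only a few lines. Here’s an untested Python sketch (ignoring the input/output format, and using my own names for everything):

def cuckoo_insert_all(n1, n2, values):
    # Simulate the insertion procedure described above; returns the two arrays,
    # with None marking empty positions.
    t1 = [None] * n1
    t2 = [None] * n2
    for x in values:
        cur, table = x, 1                  # every insertion starts in the first array
        while cur is not None:
            if table == 1:
                pos = cur % n1
                cur, t1[pos] = t1[pos], cur    # place cur, evicting whatever was there
                table = 2
            else:
                pos = cur % n2
                cur, t2[pos] = t2[pos], cur
                table = 1
    return t1, t2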

To see that this is fast enough, first observe that it’s impossible to insert more than n_1 + n_2 values, so there will be at most 1999 integers given to insert. Furthermore, since it’s guaranteed that each insertion will terminate, it’s not too hard to see that there’s a linear limit on how many steps each insertion can take. To prove this, consider an undirected graph in which each vertex corresponds to a location in one of the two arrays and each edge corresponds to either an existing value or the value we’re inserting; the edge is incident upon the two vertices corresponding to the positions where the value can be stored (thus, the graph is bipartite). When we insert a value, we trace out a path along the edges, which terminates once it lands on an empty vertex. If an insertion is to terminate, then there can’t be more edges than vertices in the connected component of the edge corresponding to the value we’re inserting, because then there would be too few spots for all the values. So there are two cases:

  1. Case 1: the number of edges is one less than the number of vertices (there are two empty spots). So the connected component is a tree. Because a tree has no cycles, we can never visit an edge twice (i.e., move a value that we’ve already moved). This gives the desired linear time bound.
  2. Case 2: the number of edges equals the number of vertices (there is one empty spot). There is then exactly one cycle. In this case, observe that if we ever visit a vertex twice, this must be the first vertex on the cycle we visited. When we visit it the second time, we kick it back to its original position which must be outside the cycle since it was the first. After this, we are directed away from the cycle and so cannot enter it again. Thus in this case the number of iterations can’t be more than twice the number of edges.

I’m not sure whether I could’ve come up with this (extremely hand-wavy) proof during the contest, but I think you’re just supposed to intuitively realize the number of iterations is linearly bounded. The total number of iterations here can’t exceed 2 \times 1999^2, or about 8 million, so we would expect this to get in under the time limit.

Problem C

This problem deals with a modified version of the Playfair cipher. To encrypt a message, you first need to pick a secret key. Then you write the letters of the key, one at a time, in a 5 by 5 grid, with each letter occupying one cell. Squares are filled left to right, top to bottom. Duplicate letters in the key are skipped after the first time they’re written down. After this, unused squares in the grid are filled left to right, top to bottom, by the letters of the alphabet not present in the key, in order. I and J are considered the same letter, so there are only 25 letters in the alphabet, and thus the grid will end up containing a permutation of the alphabet.

A message is encrypted as digraphs, that is, two-letter pairs. For example, to encrypt ECNA, you would break it down as EC NA and then encrypt EC and NA separately, and concatenate the results. A special case occurs when the two letters in a pair are identical; in this case, in the original Playfair cipher, you insert the letter X after the first, and then the second actually becomes the first letter of the next pair; for example, HELLO would become HE LX LO and then these three pairs would be encrypted. If you end up with a single letter at the end, add an X to the end. (Evidently, a message such as FOX cannot be encrypted with this method.) In the modified version used in this problem, instead of always inserting the letter X, the first time you insert a letter you use A, the second time, you use B, and so on, skipping J, and wrapping around after Z. You also skip a letter if inserting it would give a pair of identical letters; for example, AARDVARK becomes AB AR DV AR KC, where we have skipped A.

After breaking down the message into letter pairs, we use the grid to encrypt each pair. If the two letters appear in the same row, then we replace each letter by the one immediately to the right of it in the grid, wrapping around if necessary. If they are in the same column, we replace each one by the one immediately below it, again wrapping around if necessary. If they are in neither the same row nor the same column, replace the first letter by the letter that lies in the same row as the first letter and same column as the second, and replace the second letter by the letter that lies in the same row as the second letter and same column as the first. Concatenate all the encrypted pairs to give the ciphertext. You are given a series of key-plaintext pairs. In each case output the corresponding ciphertext.

Solution

Simulation. I would say straightforward simulation, but I guess this problem isn’t exactly the most straightforward. Luckily, I wasn’t the one stuck coding this one :P
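For what it’s worth, here’s a Python sketch of the two unambiguous pieces: building the 5 by 5 grid from the key, and encrypting a single digraph. The fiddly part, splitting the plaintext into digraphs with the stateful padding letters, is left out, and none of this is tested against real data.

def build_grid(key):
    # Key letters first (J folded into I, duplicates skipped), then the rest of the
    # 25-letter alphabet in order.  For example, 'playfair example' gives the rows
    # PLAYF, IREXM, BCDGH, KNOQS, TUVWZ.
    seen = []
    for ch in key.upper().replace('J', 'I') + 'ABCDEFGHIKLMNOPQRSTUVWXYZ':
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    grid = [seen[5 * r:5 * r + 5] for r in range(5)]
    pos = {ch: (r, c) for r, row in enumerate(grid) for c, ch in enumerate(row)}
    return grid, pos

def encrypt_pair(a, b, grid, pos):
    # Encrypt one digraph (a and b are assumed distinct) using the rules above.
    (ra, ca), (rb, cb) = pos[a], pos[b]
    if ra == rb:                               # same row: take the letter to the right
        return grid[ra][(ca + 1) % 5] + grid[rb][(cb + 1) % 5]
    if ca == cb:                               # same column: take the letter below
        return grid[(ra + 1) % 5][ca] + grid[(rb + 1) % 5][cb]
    return grid[ra][cb] + grid[rb][ca]         # otherwise: the rectangle rule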

Problem D

On a web page of dimensions up to 10^9 by 10^9 pixels, up to 10000 disjoint axis-aligned squares are given, each of which has side length 10 pixels. Each square’s upper-left pixel has coordinates that are multiples of 10. Find the smallest set of pixels which both contains all the squares and is orthogonally convex, meaning that if two pixels in the same row belong to the set, then so do all the pixels between them in that row; likewise, if two pixels in the same column belong to the set, then so do all the pixels between them in that column. Output the vertices of this set in clockwise order starting from the topmost one (leftmost in case of a tie).

Solution

First, notice that we can replace each 10 by 10 square by its four vertices. For example, the square from (50, 60) to (59, 69) can be replaced by (50, 60), (50, 69), (59, 60), and (59, 69). The orthogonal convex hull of the 4n points thus obtained will be the same as that of the original set of squares, so we’ll just work with these 4n points instead.

To proceed, we basically walk around the hull in clockwise order. We start by identifying the topmost pixel (leftmost in case of a tie). This must be a vertex. To find the next vertex, we locate the topmost point which lies strictly to the right of it (in case of a tie, pick the rightmost). Then we find the topmost point which lies strictly to the right of that, and so on. Eventually we reach a point in the rightmost occupied column. Now we switch gears, and find the rightmost point that lies strictly below our current point (bottommost in case of a tie). Keep doing this until we get to the bottommost occupied row. Then switch gears again, repeatedly moving to the bottommost point that lies strictly to the left of the current point (leftmost in case of a tie). Finally, once we reach the leftmost occupied column, repeatedly move to the leftmost point that lies strictly above the current point. Eventually we reach the starting point, and we are done. Whenever we move from a point to another point that is in both a different row and a different column, we have to insert an additional vertex. For example, in the first phase, when we are moving from a point to the topmost point strictly to the right, if the new point is strictly below the current point, then we insert a point which is in the same row as the new point but the same column as the current point; the other three cases are analogous.

To efficiently move from one point to the next, all we have to do is precompute, for each of the (up to) 40000 points we have to work with, the topmost point to the right, the rightmost point below, the bottommost point to the left, and the leftmost point above. We can precompute the topmost point to the right of all points by sorting the points by x-coordinate and scanning from right to left, remembering the topmost point we’ve seen so far; the other three cases are analogous.
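Here’s a sketch of that precomputation for one of the four directions (Python, untested; I’m assuming screen-style coordinates where y grows downward, so “topmost” means smallest y):

def topmost_to_the_right(points):
    # For each point, find the topmost point strictly to its right (rightmost in case
    # of a tie), or None if there is no such point.  points is a list of (x, y) pairs.
    n = len(points)
    order = sorted(range(n), key=lambda i: points[i][0])     # indices sorted by x
    result = [None] * n
    best = None              # best point seen so far among strictly larger x
    i = n - 1
    while i >= 0:
        j = i                # find the block of points sharing this x-coordinate
        while j >= 0 and points[order[j]][0] == points[order[i]][0]:
            j -= 1
        for t in range(j + 1, i + 1):        # same column, so they all see `best`
            result[order[t]] = best
        for t in range(j + 1, i + 1):        # then fold this column into `best`
            x, y = points[order[t]]
            if best is None or y < best[1] or (y == best[1] and x > best[0]):
                best = (x, y)
        i = j
    return result

The other three directions are the same thing with the roles of the coordinates and the scan direction swapped around.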

Problem E

In an n \times n grid (n \leq 25), some squares are occupied by obstacles. The leftmost and rightmost columns do not contain obstacles. There is initially a game piece in each cell of the leftmost column. You need to move them all to the rightmost column, using the minimum number of turns possible. In each turn, you can choose, for each piece, whether to move it or to leave it where it is; if you move a piece then you must move it into a vertically or horizontally adjacent cell that does not contain an obstacle. At the end of a turn, no two pieces can occupy the same square. What is the minimum number of turns necessary?

Solution

We didn’t solve this problem, but it seemed to us that we could do it with max flow. However, it’s somewhat unclear whether this approach would even fit in memory, let alone be fast enough.

First, we consider the problem of determining whether it is possible to solve the problem in k turns. We can reduce this to max flow with vertex constraints. There is one vertex for each (cell, turn) combination. For example, if k = 3, then there are four vertices for each cell: the cell at the beginning of the game, the cell after the first move, the cell after the second move, and the cell after the third move. Each vertex has a capacity of one. Each vertex has up to five outgoing edges (which we can take to have capacity one), each of which goes to some vertex corresponding to the next turn, and either the same cell, or a horizontally or vertically adjacent cell, as long as it does not contain an obstacle. There is an edge (which we can again take to have capacity one) from each vertex corresponding to the final state and a cell in the rightmost column to the sink. There is also an edge (which we can again take to have capacity one) from the source to each vertex corresponding to the initial state that is in the leftmost column. It is not hard to see that an augmenting path in this flow network corresponds to the movement of a game piece from the leftmost column in the initial state to the rightmost column in the final state; each edge that is not from the source or to the sink corresponds to the movement of the game piece from a particular cell in a particular turn to an adjacent cell in the next turn (or leaving it in the same cell); the vertex capacity constraint enforces not being able to have two pieces in the same cell in a given turn. If the max flow is n, then we have moved n pieces, that is, won the game.

Before we proceed, what is the complexity of this approach? Well, there are up to 625 cells, so there are about 625k internal vertices, each with up to about 5 outgoing edges, so 3125k edges. Furthermore, to enforce the vertex capacity constraints, we’ll have to split each vertex up into an incoming and outgoing vertex, with an edge between them, so now we’re up to 3750k edges. Because the graph has unit capacities and the max flow is at most 25, we can use DFS to find each augmenting path, and we have to go through 25 augmenting paths before we find the max flow. The complexity of DFS is E+V = 5000k so the total number of operations is about 125000k.

Now, how do we find the minimum k? Well, we didn’t get that far, but it seems that a reasonable approach would be as follows: first just set k = 0 and create the corresponding flow network. Then, as long as the max flow of n hasn’t been achieved yet because you can’t find a path in the residual network, add the next set of vertices and edges, corresponding to increasing k by allowing an extra move (that is, add n^2 more vertices and join the current final state vertices to them, making the vertices just added the new final state vertices; also add edges from the rightmost column cells in the new vertex set to the sink) and check whether we can now find an augmenting path; if not, increase k again, and so on. To efficiently determine whether an augmenting path exists in the new graph, all we have to do is run another DFS starting from reachable vertices in the previous final state cells, rather than doing a huge DFS from the source that will go through the entire graph again. This essentially means that every time we have to increase k once or more, the total time taken for all the DFSing is about the same as that taken to do a single DFS in the graph corresponding to the final k value before we find another augmenting path. That means the total number of operations is bounded by about 125000m, where m is the answer (minimum number of turns). I’m not quite sure how large m could be; it could certainly be as large as about n^2/2, in the case that the obstacles force you to take a winding path through the grid. Putting n = 25, we obtain a figure of about 39 million. Is this fast enough? I’m not sure.

Jacob argues that binary search on k would be faster. At some point I’ll probably code up both and see who’s right.

Problem F

In a directed acyclic graph of up to 200 vertices, we say an edge is redundant if removing the edge does not alter the connectivity of the graph (i.e., if a path existed from one vertex to another in the original graph, a path must also exist when the redundant edge is removed). Find all redundant edges.

Solution

A naive approach is to simply try deleting each edge in turn, and then running a DFS from each vertex to determine whether the connectivity has changed. Unfortunately, this approach takes O(EV(E+V)) time, which is O(V^5) in dense graphs, and is too slow for V = 200.

A faster approach is to run Warshall’s algorithm for computing the transitive closure of the graph. Whenever we find that there is a path from i to k and a path from k to j, if there is an edge directly from i to j, that edge must be redundant. This is only true because the graph is acyclic! Deleting that edge can’t alter the connectivity of i to k or k to j, because i < k < j in topological order, and thus the path from i to k can't use the edge from i to j, nor can the path from k to j. The running time of this approach is O(V^3).
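A direct Python rendering of this idea (vertices numbered 0 to V-1, edges given as pairs; untested, but it is O(V^3) as claimed):

def redundant_edges(n, edges):
    # reach[u][v] will mean "there is a path of length >= 1 from u to v"
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):                       # Warshall's transitive closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    # an edge (u, v) is redundant iff u reaches v through some intermediate vertex
    return [(u, v) for u, v in edges
            if any(reach[u][w] and reach[w][v] for w in range(n))]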

Problem G

There is a horizontal line segment (the house) located above the origin on the Cartesian plane, and up to 10 decorations, each of which is also a line segment, and each of which is either reflective or non-reflective. There is a lamp located at the origin, which emits rays at all angles from -\theta/2 to \theta/2 to the vertical. Non-reflective decorations absorb all incident light, and reflective decorations act as mirrors that obey the law of reflection (and are reflective on both sides). It is guaranteed that a ray can only be reflected up to 100 times. What fraction of the house is actually illuminated?

Solution

I don’t think any teams solved this problem during the contest, so I have no idea how to actually do it. My suspicion is that it can be done using a brute force simulation approach, in which we simply sample the interval of angles and cast out a ray at each sampled angle, following its course until it either gets absorbed, reaches the house, or escapes to infinity. The fraction of the house illuminated only needs to be accurate to within one part in 10^4 (a percentage rounded to the nearest hundredth). I’m not sure how finely you’d have to sample in order to get the right answer, but of course, if nobody else solves the problem, then the rational strategy is to just keep resubmitting until you get it accepted. The simulation is pretty easy to do; in each iteration you just have to check whether the ray intersects any of the (up to) 11 line segments specified. If it’s the house or one of the non-reflective decorations, or if it doesn’t intersect anything, you’re done; if it’s a mirror, then you have to iterate again, but you only have to do up to 100 iterations for each angle, so that’s 1100 operations. Let’s say you decided to sample 5\times 10^4 angles; then you’d have to do a total of about 55 million operations. I suppose it’s plausible that this approach would work, but I can’t say for sure until I see the test data or official solutions.

The other, non-hackish approach would be to compute the illuminated fraction exactly by considering only interesting angles. An interesting angle is an initial angle at which the ray grazes the endpoint of some line segment. It is easy to see that between any two consecutive interesting angles, either the ray does not illuminate the house at all, or the image of the ray on the house varies monotonically and continuously, so that whatever portion of the house lies between the image of the ray at the first angle and the image of the ray at the second angle is completely illuminated. Taking the union of all such intervals would then give the total length illuminated. To do the actual angle sweep, we would first sort a list of all angles at which the ray would graze the endpoint of some segment. We would then consider each sub-interval between a pair of consecutive interesting angles. If, in this interval, the ray hits a mirror, then we have to recursively subdivide the interval by considering all the angles at which the ray could hit the mirror, reflect, and then graze the endpoint of another segment. If in one of those intervals the ray hits a second mirror, then we would have to recursively subdivide again, and so on.

Two questions remain. First of all, how do we compute the initial angle required to hit some target point after reflection from a sequence of mirrors m_1, m_2, ..., m_k? The answer is pretty clever: we reflect the target point p across mirror m_k to get point p_k, and reflect p_k across mirror m_{k-1} to get point p_{k-1}, and so on, finally reflecting across m_1 to get point p_1. The ray from the origin to p_1 is then the same ray which would (potentially) reflect from all the mirrors in sequence and finally hit p. (I imagine anyone who plays pool is familiar with this trick.) The second question is what the worst case running time of this approach is. I have no idea. I don’t have any idea whether this was the intended approach, or the brute force one. When the test data come out, maybe we’ll see. If I’m not too lazy, that is.
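The reflection trick itself is just a few lines. A Python sketch (my own helper names; note that this only gives the direction to aim in, and you would still have to check that the unfolded ray actually hits each mirror segment along the way):

def reflect_point(p, a, b):
    # Reflect point p across the infinite line through segment endpoints a and b.
    px, py = p; ax, ay = a; bx, by = b
    dx, dy = bx - ax, by - ay
    # foot of the perpendicular from p onto the line
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy
    return (2 * fx - px, 2 * fy - py)

def aim_point(target, mirrors):
    # Point to aim at from the origin so that the ray reflects off mirrors
    # m_1, ..., m_k in that order and then hits `target`.  Each mirror is a pair
    # of endpoints; we reflect across m_k first, then m_{k-1}, and so on.
    p = target
    for a, b in reversed(mirrors):
        p = reflect_point(p, a, b)
    return p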

Problem H

In a 3\times n grid of squares (n \leq 1000), each square contains an integer. We can place a domino across two adjacent squares and gain a number of points equal to the product of the numbers in those two squares. What is the maximum possible score, if we can place as many dominoes as we want but no two dominoes can overlap?

Solution

This problem can be solved using dynamic programming. We will define f(k, o_1, o_2, o_3) as the maximum score obtainable given that all squares covered by dominoes are left of column k, and o_1 specifies whether the top square in column k-1 is covered, likewise for o_2 and o_3 with the middle and bottom squares. The total number of states is then 2^3 n.

Computing the transitions is annoying. If we number the columns starting from zero, our base case is k \leq 0; here, no dominoes can be placed and the maximum score is 0. To compute f(k, o_1, o_2, o_3) for k > 0 , we consider all possible arrangements of dominoes in column k-1:

  • o_1 = o_2 = o_3 = 0: column k-1 is empty. Then f(k, 0, 0, 0) = \max_{o_1, o_2, o_3} f(k-1, o_1, o_2, o_3): we have license to cover as many or as few squares in column k-2 as we wish, and then we’ll simply not put anything in column k-1, and hence not gain any additional points.
  • o_1 = 1; o_2 = o_3 = 0: Only the top square in column k-1 is filled. In this case the domino covering that square must also occupy the top square in column k-2. The score we get is the value of this domino plus \max_{o_2, o_3} f(k-1, 0, o_2, o_3), where the 0 indicates that the top square must be initially uncovered in column k-2, but we don’t care whether the other two squares are.
  • o_1 = 0; o_2 = 1; o_3 = 0 or o_1 = 0; o_2 = 0; o_3 = 1: These are analogous to the previous case.
  • o_1 = o_2 = 1; o_3 = 0: There are two ways we can cover the top and middle squares in column k-1 while leaving the bottom one empty. Either we can place a domino across these two squares, giving a maximum total score equal to the value of that domino plus f(k, 0, 0, 0) (because once we remove this domino from the board we’ll have an empty column k-1). Or, we can place a domino on each that also covers the square to the left. The maximum score we could then obtain is the sum of the values of these two dominoes and \max(f(k-1, 0, 0, 0), f(k-1, 0, 0, 1)) (where we see that the bottom square in the previous column could be either covered or uncovered). The case o_1 = 0; o_2 = o_3 = 1 is analogous.
  • o_1 = 1; o_2 = 0; o_3 = 1: This is only obtainable by placing a domino on the top square that also covers the square to its left, and a domino on the bottom square that also covers the square to its left. The maximum score you can get is then the sum of the values of these two dominoes plus \max(f(k-1, 0, 0, 0), f(k-1, 0, 1, 0)), where we don’t care whether the middle square is covered in the previous column.
  • o_1 = o_2 = o_3 = 1: There are three ways to obtain this. We could lay a domino across the top and middle squares, and have the domino covering the bottom square also cover the square to its left. The score here is the value of the first domino plus f(k, 0, 0, 1). We could also lay a domino across the bottom and middle squares, and have the domino covering the top square also cover the square to its left. The score here is the value of the first domino plus f(k, 1, 0, 0). Finally, we could have three different dominoes in this column, each also covering the square to the left. The score here is the sum of the values of the three dominoes plus f(k-1, 0, 0, 0).

The final answer is then \max_{o_1, o_2, o_3} f(n, o_1, o_2, o_3), since we ultimately don’t care which squares are covered in the last column.
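Putting the transitions above together, here’s an untested Python sketch (rows indexed 0 = top, 1 = middle, 2 = bottom; -inf marks states that can’t actually occur):

from itertools import product

NEG = float('-inf')

def max_domino_score(grid):
    # grid[r][c] for rows r = 0 (top), 1 (middle), 2 (bottom) and columns c = 0..n-1
    n = len(grid[0])
    states = list(product((0, 1), repeat=3))
    f = {s: NEG for s in states}
    f[(0, 0, 0)] = 0                          # k = 0: nothing has been placed
    for k in range(1, n + 1):
        v1, v2, v3 = grid[0][k - 1], grid[1][k - 1], grid[2][k - 1]   # column k-1
        g = {s: NEG for s in states}
        g[(0, 0, 0)] = max(f.values())                    # leave column k-1 empty
        g[(1, 1, 0)] = v1 * v2 + g[(0, 0, 0)]             # vertical domino, top+middle
        g[(0, 1, 1)] = v2 * v3 + g[(0, 0, 0)]             # vertical domino, middle+bottom
        if k >= 2:
            u1, u2, u3 = grid[0][k - 2], grid[1][k - 2], grid[2][k - 2]   # column k-2
            # one square covered, by a horizontal domino reaching into column k-2
            g[(1, 0, 0)] = u1 * v1 + max(f[(0, a, b)] for a in (0, 1) for b in (0, 1))
            g[(0, 1, 0)] = u2 * v2 + max(f[(a, 0, b)] for a in (0, 1) for b in (0, 1))
            g[(0, 0, 1)] = u3 * v3 + max(f[(a, b, 0)] for a in (0, 1) for b in (0, 1))
            # two adjacent squares covered by two horizontal dominoes instead
            g[(1, 1, 0)] = max(g[(1, 1, 0)],
                               u1 * v1 + u2 * v2 + max(f[(0, 0, 0)], f[(0, 0, 1)]))
            g[(0, 1, 1)] = max(g[(0, 1, 1)],
                               u2 * v2 + u3 * v3 + max(f[(0, 0, 0)], f[(1, 0, 0)]))
            # top and bottom covered, middle empty: two horizontal dominoes
            g[(1, 0, 1)] = u1 * v1 + u3 * v3 + max(f[(0, 0, 0)], f[(0, 1, 0)])
            # all three squares covered
            g[(1, 1, 1)] = max(v1 * v2 + g[(0, 0, 1)],
                               v2 * v3 + g[(1, 0, 0)],
                               u1 * v1 + u2 * v2 + u3 * v3 + f[(0, 0, 0)])
        f = g
    return max(f.values())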

Problem I

Consider the regular language (a|ab|bb)*. Place all strings in this language in order of increasing length, where words of the same length are ordered lexicographically. Suppose there are n words per page in the dictionary (n \leq 30). What are the first and last words on page number m (m \leq 10^{18})?

Solution

The first word on the mth page has (one-based) rank n(m-1) + 1. The last word on the mth page has rank nm. If we can figure out how to compute the kth-ranked word in the dictionary, we just set k = n(m-1)+1, nm to get the desired words. Note that 30 \times 10^{18} is too large to fit in a 64-bit integer, so I suggest using Java’s BigInteger.

This problem, like most problems of the “find the kth lexicographically smallest element” type, is solved by using the ordering criteria to narrow down the characteristics of the sought element from coarsest to finest. That is, first we will determine the length of the sought word, then we will determine the first letter of the word, and so on, because words are ranked first by length, then by first letter, and so on.

To determine the length of the word, we first ask ourselves whether it could be of length 1. To do this, we compute the number of words of length 1. If k is less than or equal to this number, then we know that the kth word has length 1. If not, then we compute the number of words of length 2; if k is less than or equal to the total number of words of lengths 1 and 2, then we know that the kth word has length 2. And so on. To do this efficiently, we would first subtract the number of words of length 1 from k, then subtract the number of words of length 2, and so on, until we find l such that the value of k we have left over is less than or equal to the number of words of length l. Then the desired word is the kth word of length l (where, remember, we have already subtracted the word counts for lengths 1, 2, ..., l-1 from k).

Next, we find the first character of the word by computing the number of words of length l that begin with the letter a. If k is less than or equal to this, the word begins with a; otherwise, the word begins with b, and we subtract the number of words beginning with a from k. Then we move on to the second letter: compute the number of words of length l that begin with the first letter we just determined followed by a, compare this with k, and so on. In this way we successively determine one letter after another, until we have constructed the entire word.

(I take this opportunity to reiterate that this approach is very general, and is almost always the approach used to solve “find the kth lexicographically smallest element” problems, or more generally any kind of problem in which a set of items is ranked according to a succession of criteria and we have to find an item in a particular position of the sequence.)

Finding the number of words of a given length or the number of words of a given length and a given prefix—the computation you need to do over and over again in the preceding procedure—is actually easy once you realize that a word is in the language if and only if it starts with an even number of bs. Thus, if the prefix contains at least one a, you have free rein with the rest of the letters and there are 2^r words, where r is the number of remaining letters. If there is no a in the prefix and an even number of bs, then there are 2^{r-1} + 2^{r-3} + ... words, where 2^{r-1} consist of a followed by whatever, 2^{r-3} correspond to bba followed by whatever, and so on. If there is no a in the prefix and an odd number of bs, then there are 2^{r-2} + 2^{r-4} + ... words, where 2^{r-2} correspond to ba followed by whatever, 2^{r-4} correspond to bbba followed by whatever, and so on. (In the last two cases there is also one extra word not covered by these sums: the completion consisting entirely of bs, which is valid exactly when the total number of bs in the word comes out even.)
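Here’s how the whole thing might look in Python, whose integers are arbitrary-precision, so the bignum issue goes away. I’m assuming the dictionary starts at the words of length 1 (i.e., the empty word doesn’t get an entry); the counting function is just the case analysis above, including the all-b completion.

def count_completions(prefix, r):
    # Number of ways to append exactly r letters from {a, b} to `prefix` so that the
    # whole word is in (a|ab|bb)*, i.e. has an even number of leading b's.
    if 'a' in prefix:
        return 2 ** r if prefix.index('a') % 2 == 0 else 0
    lead = len(prefix)                     # the prefix is all b's
    total = 0
    for extra in range(r + 1):             # the completion starts with `extra` more b's
        if (lead + extra) % 2 == 0:
            total += 2 ** (r - extra - 1) if extra < r else 1   # then 'a', then anything
    return total

def kth_word(k):
    l = 1                                  # first, find the length of the word
    while True:
        c = count_completions('', l)
        if k <= c:
            break
        k -= c
        l += 1
    word = ''                              # then fix the letters one at a time
    while len(word) < l:
        r = l - len(word) - 1
        with_a = count_completions(word + 'a', r)
        if k <= with_a:
            word += 'a'
        else:
            k -= with_a
            word += 'b'
    return word

The first and last words on page m are then kth_word(n * (m - 1) + 1) and kth_word(n * m).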


In the contest, we solved problems B, C, D, F, H, and I (not in that order), and ran out of time trying to code E. After finishing those six, we had about 50 minutes left (?), and figured that E would be quickest to code. However, E turned out to be difficult to debug, and the nature of the algorithm meant we couldn’t use the prewritten max flow code from our printed materials. In the end, we couldn’t even get it to work on the sample data.


ACM regionals


Yesterday, I attended the 2013 East Central North America regional ACM programming contest, representing the University of Toronto. (My teammates were Saman Samikermani and Hao Wei.) The scoreboard isn’t available online yet, nor are the problems, but I can say that we placed fourth overall, behind CMU, Michigan, and CMU again. (It’s a bit demoralizing that their B team is better than our A team, isn’t it?)

This was only the second time I’ve attended regionals. The first time was in my first year, when I was at Waterloo. In second year I was going through that phase when I didn’t want to do any contests at all, and last year I declined to be on U of T’s ACM team because I was too busy and had no time to practice. It’s a shame, because that was Jacob’s second time at World Finals, and I think he and I together would’ve been pretty good. Oh well.

Now for the burning question: will we advance? Nobody can say for sure at this point. In general, the first place team is guaranteed to advance from each region, but if there was a team in the region that won a medal at finals last year, then this year they guarantee at least two teams from the region will advance. That means CMU and Michigan are going to advance for sure, since CMU won a medal last year. In the past few years they have actually taken three teams from our region; Jacob says they do this when the third place team isn’t too far behind the second place team, but that’s hardly precise enough to predict anything, and at any rate these rules aren’t written down anywhere. He also says that our chance of advancing is more than 50% but less than 90%. I guess that sounds reasonable.

My friend is going to email me a copy of the problem set later tonight (she kept the paper; I didn’t) and I’ll try to post solution sketches to some of the problems. My team didn’t solve all the problems, though, and official solutions and test data aren’t yet available, so I can’t necessarily promise correct or complete solutions.

On the way back from regionals I taught some of the other U of T people how to play Durak. We also played a few hands of Hearts, in which I was reminded that I’m horribly inattentive and usually fail to stop people from shooting. Fun times.


Dating and non-equilibrium thermodynamics


The analogy between interpersonal relationships and chemical processes is well-established. Just about everyone understands what it means when you say, “we just didn’t have chemistry”. In fact, according to the Online Etymology Dictionary, this usage goes as far back as the 18th century. The analogy can be fleshed out by the light-hearted addition of such concepts as “single displacement reaction” and “activation barrier”. Now, anyone who hasn’t taken a chemistry course is going to give you a funny look because they won’t know what you’re talking about, but I suppose that’s the price you pay for unlocking the “Used Chemistry Terms in Casual Conversation” achievement. (Wait, what do you mean, you don’t keep track?!)

I think I’m a bit odd, though, in actually taking the analogy seriously. A lot of people seem to have a fair amount of faith when it comes to their love lives: they don’t know how or when they’re going to meet their lifelong partners, but they’re sure it’s going to happen. What I believe, though, is a bit different: the system will eventually reach equilibrium, if you allow it to. I don’t postulate that you’re guaranteed to meet that special person, but I do submit that if you meet them and keep in touch for long enough, then you’ll eventually end up together. Furthermore, if you’re with the wrong person, it won’t last long.

Indeed, I like to imagine a spring between every pair of people. This spring doesn’t exist in real space, but in an abstract space in which the distance between two people measures intimacy (greater intimacy means shorter distance). I like to say that every spring has an equilibrium length, which corresponds to how compatible you are, and that eventually, all those springs are going to reach equilibrium. If you’re not very close to someone whom you could be closer to, you’ll feel the desire to get to know them better—that’s how strangers become acquaintances, acquaintances become friends, and friends become… well… more than friends. That’s the restoring force. (Did I just wade into physics territory? It’s worth noting that the model of chemical bonding with a harmonic potential between two atoms is commonly taught in physical chemistry courses.) If you’re too close to someone, it’s going to come to an end eventually—the restoring force pushes you apart. Equilibrium is not always reached right away—for example, you might be dating, then break up and not talk to each other for a few months, but then, after a long period of time, you may be friends again—and that’s where the equilibrium distance might lie.

I also like to imagine a diverse set of protein molecules floating around in a solution. Some pairs will have very high binding constants; once they bind, they’ll stay together, for the most part… perhaps by chance a high-energy molecule will come by and break them apart, but they’ll be together again, sooner or later. A binding that isn’t very strong won’t stay together for long—it may break apart of its own accord, or a single displacement might occur. It’s important to note, of course, that two molecules are not going to end up bound together if they never come into contact. (Just like in real life.)

In a sense, then, what I’m saying is: if it’s meant to be, it’ll happen. Sooner or later. This is the attitude I’ve consistently taken whenever I’ve been plagued by anxiety-inducing hypotheticals. If you mess up, but the other person likes and understands you well enough, it’s recoverable; you’re only separated temporarily while the equilibrium re-establishes itself. (This is true of not only intimate relationships but also friendships.) If you feel like you “missed your chance” to make your move—well, it’s never too late—the most stable configuration awaits you. And if the person you love actually ends up with someone else—that person probably had a higher binding constant with them than you did… but not to worry, someone better for you might still be out there.

The only thing to watch out for here is that you don’t ignore the non-equilibrium regime. (Apparently, non-equilibrium thermodynamics has not been extensively studied, which is why equilibrium thermodynamics is very well understood, and non-equilibrium, not so much. This makes it difficult to study, among other things, biological systems, which are only interesting as long as they stay away from equilibrium… that is, alive.) So if the right person comes along, you don’t have to worry about how things will work out—they will. But it’s still worthwhile to consider what happens before that. A lot of people out there are in non-equilibrium relationships—sure, they might not end up married to their current partners, but many of them are still having a good time. You may as well seize the opportunity to enjoy non-equilibrium interactions if possible—after all, there’s no guarantee that you’ll meet that special somebody within your lifetime.

The disconnect between equilibrium and non-equilibrium thermodynamics is a source of some of what you might consider bad advice going around. For example, “be yourself” is good advice in the equilibrium regime—indeed, it might help equilibrium be established more quickly—but not so much in the non-equilibrium regime: if you just “be yourself”, but that isn’t a person that most people can relate to, you may end up with no non-equilibrium interactions. Nope—if you want to form that short-lived unstable species, you’ve got to pour energy into the system. Unfortunately, in this I have no advice to give. If I figure anything out, I’ll let you know. ;-P


Back to school update


It’s been a while since I posted an update! I suppose that’s because I keep getting more modest and hence less and less often believe that something I came up with will be interesting to a wide audience. (Though, I assure you, I post just as much mildly amusing crap on Facebook as I always have.)

So I moved into my new apartment last month. I love the location—it’s only a few minutes away from campus so I usually don’t have to get up until about half an hour before my first class each day, and I don’t have class before 11 all week. Of course, I still manage to not get enough sleep, by staying up until 2:30 AM every day. Go figure, right? The only unfortunate thing is that internet access wasn’t provided. I signed up on (I believe) Sep. 6, but couldn’t get a technician to come over and set it up until Sep. 27. I was out of town but I forgot to tell my roommates that there were two technicians scheduled to come over, so we missed the second one and he couldn’t come until a week later, that is, Oct. 4. We also discovered that our modem was broken, so we had to order a new one, which didn’t arrive until last night! (To make things worse, I discovered while trying to configure the modem that my laptop’s Ethernet port is broken. Basically, everything that could go wrong, went wrong.)

I’m taking five courses this term: Introduction to High Energy Physics, Relativity Theory I (i.e., intro to general relativity), Applications of Quantum Mechanics, Organic Chemistry of Biological Compounds, and The Synthesis of Modern Pharmaceutical Agents. I originally wasn’t going to take the synthesis course, because I didn’t want to have to memorize a lot of reactions; instead I signed up for a supposedly easy course on polymers. I dropped it like a hot potato, however, when I found out there was a term paper, and the synthesis course was the least intolerable course that remained. Ironically, I found out the very same day that the quantum mechanics course has a term paper, too, but at least I actually like quantum mechanics.

Contrary to expectation, I managed to get hired as a TA this term. I was hoping for an upper-year course on algorithms or data structures, because that’s my favourite area of CS, of course, but I ended up getting CSC108, an introductory course. Apparently they expanded enrollment by 50% since last year. If that weren’t the case, I probably would not have gotten any TA position at all, since grad students have priority. So really, I can’t complain. (It pays really well, too.)

Oh, and one last thing. I’ve accepted a full-time offer with Google in their Mountain View office, starting next September!
