## K&R C

I was recently reading the C99 rationale (because why the fuck not) and I was intrigued by some comments that certain features were retired in C89, so I started wondering what other weird features existed in pre-standard C. Naturally, I decided to buy a copy of the first edition of The C Programming Language, which was published in 1978.

Here are the most interesting things I found in the book:

• The original hello, world program appears in this book on page 6.

In C, the program to print “hello, world” is

```c
main()
{
    printf("hello, world\n");
}
```

Note that return 0; was not necessary.

> Write a program to replace each tab by the three-column sequence >, backspace, -, which prints as ᗒ, and each backspace by the similar sequence ᗕ. This makes tabs and backspaces visible.

… Wait, what? This made no sense to me at first glance. I guess the output device is assumed to literally be a teletypewriter, and backspace moves the carriage to the left and then the next character just gets overlaid on top of the one already printed there. Unbelievable!
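
For fun, here's a sketch of that exercise in modern C (my own, not from the book): each tab becomes the sequence `>`, backspace, `-`, and each backspace becomes `<`, backspace, `-`. On a real teletype the backspace would make the `-` overstrike the `>` or `<`.

```c
#include <string.h>

/* Translate one character into its "visible" form, as the exercise
 * asks; writes at most three bytes plus a terminating NUL into out. */
void visible(int c, char out[4])
{
    if (c == '\t')
        strcpy(out, ">\b-");   /* tab: '>' overstruck with '-' */
    else if (c == '\b')
        strcpy(out, "<\b-");   /* backspace: '<' overstruck with '-' */
    else {
        out[0] = (char) c;     /* everything else passes through */
        out[1] = '\0';
    }
}
```

A driver would just loop over `getchar` and `fputs` each translated chunk to stdout.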

• Function definitions looked like this:
```c
power(x, n)
int x, n;
{
    ...
    return(p);
}
```

The modern style, in which the argument types are given within the parentheses, didn’t exist in K&R C. The K&R style is still permitted even as of C11, but has been obsolete for many years. (N.B.: It has never been valid in C++, as far as I know.) Also note that the return value is enclosed in parentheses. This was not required, so the authors must have preferred this style for some reason (which is not given in the text). Nowadays it’s rare for experienced C programmers to use parentheses when returning a simple expression.
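
For comparison, here's what an ANSI-style definition of the same function might look like; the body is my own guess, since the book's is elided above:

```c
/* ANSI/C99-style definition: parameter types go inside the parens. */
int power(int x, int n)
{
    int p = 1;
    while (n-- > 0)
        p *= x;
    return p;    /* no parentheses around the return expression */
}
```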

• Because of the absence of function prototypes, if, say, you wanted to take the square root of an int, you had to explicitly cast to double, as in: sqrt((double) n) (p. 42). There were no implicit conversions to initialize function parameters; they were just passed as-is. Failing to cast would result in nonsense. (In modern C, of course, it’s undefined behaviour.)
• void didn’t exist in the book (although it did exist at some point before C89; see here, section 2.1). If a function had no useful value to return, you just left off the return type (so it would default to int). return;, with no value, was allowed in any function. The general rule is that it’s okay for a function not to return anything, as long as the calling function doesn’t try to use the return value. A special case of this was main, which is never shown returning a value; instead, section 7.7 (page 154) introduces the exit function and states that its argument’s value is made available to the calling process. So in K&R C it appears you had to call exit to do what is now usually accomplished by returning a value from main (though of course exit is useful for terminating the program when you’re not inside main).
• Naturally, since there was no void, void* didn’t exist, either. Stroustrup’s account (section 2.2) appears to leave it unclear whether C or C++ introduced void* first, although he does say it appeared in both languages at approximately the same time. The original implementation of the C memory allocator, given in section 8.7, returns char*. On page 133 there is a comment:

> The question of the type declaration for alloc is a vexing one for any language that takes its type-checking seriously. In C, the best procedure is to declare that alloc returns a pointer to char, then explicitly coerce the pointer into the desired type with a cast.

(N.B.: void* behaves differently in ANSI C and C++. In C it may be implicitly converted to any other pointer type, so you can directly do int* p = malloc(sizeof(int)). In C++ an explicit cast is required.)
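
A short sketch of the difference (modern C; the function names are mine):

```c
#include <stdlib.h>

/* In C, malloc returns void *, which converts implicitly to any
 * object pointer type -- no cast needed.  (This line is an error
 * in C++.) */
int *make_int(void)
{
    return malloc(sizeof(int));
}

/* K&R's alloc returned char *, so an explicit cast was mandatory;
 * the same style still works with malloc, and is what C++ requires. */
int *make_int_with_cast(void)
{
    return (int *) malloc(sizeof(int));
}
```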

• It appears that stdio.h was the only header that existed. For example, strcpy and strcmp are said to be part of the standard I/O library (section 5.5). Likewise, on page 154 exit is called in a program that only includes stdio.h.
• Although printf existed, the variable arguments library (varargs.h, later stdarg.h) didn’t exist yet. K&R says that printf is … non-portable and must be modified for different environments. (Presumably it peeked directly at the stack to retrieve arguments.)
• The authors seemed to prefer separate declaration and initialization. I quote from page 83:

> In effect, initializations of automatic variables are just shorthand for assignment statements. Which form to prefer is largely a matter of taste. We have generally used explicit assignments, because initializers in declarations are harder to see.

These days, the advice I’ve always heard is that it’s good practice to initialize the variable in its declaration, so that there’s no chance you’ll ever forget to initialize it.

• Automatic arrays could not be initialized (p. 83).
• The address-of operator could not be applied to arrays. In fact, when you really think about it, it’s a bit odd that ANSI C allows it. This reflects a deeper underlying difference: arrays are not lvalues in K&R C. I believe in K&R C lvalues were still thought of as expressions that can occur on the left side of an assignment, and of course arrays do not fall into this category. And of course the address-of operator can only be applied to lvalues (although not to bit fields or register variables). In ANSI C, arrays are lvalues so it is legal to take their addresses; the result is of type pointer to array. The address-of operator also doesn’t seem to be allowed before a function in K&R C, and the decay to function pointer occurs automatically when necessary. This makes sense because functions aren’t lvalues in either K&R C or ANSI C. (They are, however, lvalues in C++.) ANSI C, though, specifically allows functions to occur as the operand of the address-of operator.
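
For what it's worth, here's the ANSI C behaviour in a small sketch (my own):

```c
/* ANSI C: an array is an lvalue, so &a is legal and has type
 * "pointer to array of 3 int" -- not "pointer to int". */
int last_elem(void)
{
    int a[3] = {1, 2, 3};
    int (*pa)[3] = &a;     /* would have been rejected by K&R C */
    return (*pa)[2];       /* dereferencing gives back the array */
}
```
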
• The standard memory allocator was called alloc, not malloc.
• It appears that it was necessary to dereference function pointers before calling them; this is not required in ANSI C.
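
Both spellings are legal in ANSI C; a quick sketch (mine, not from the book):

```c
/* In ANSI C a call through a function pointer may be written either
 * with an explicit dereference (the K&R-era style) or directly. */
static int square(int x) { return x * x; }

int call_both(void)
{
    int (*fp)(int) = square;
    int a = (*fp)(3);   /* explicit dereference, as K&R C required */
    int b = fp(3);      /* ANSI C: the dereference is implicit */
    return a + b;
}
```
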
• Structure assignment wasn’t yet possible, but the text says [these] restrictions will be removed in forthcoming versions. (Section 6.2) (Likewise, you couldn’t pass structures by value.) Indeed, structure assignment is one of the features Stroustrup says existed in pre-standard C despite not appearing in K&R (see here, section 2.1).
• In PDP-11 UNIX, you had to explicitly link in the standard library: cc ... -lS (section 7.1).
• Memory allocated with calloc had to be freed with a function called cfree (p. 157). I guess this is because calloc might have allocated memory from a different pool than alloc, one which is pre-zeroed or something. I don’t know whether such facilities exist on modern systems.
• Amusingly, creat is followed by [sic] (p. 162).
• In those days, a directory in UNIX was a file that contains a list of file names and some indication of where they are located (p. 169). There was no opendir or readdir; you just opened the directory as a file and read a sequence of struct direct objects directly. Example is given on page 172. You can’t do this in modern Unix-like systems, in case you were wondering.
• There was an unused keyword, entry, said to be reserved for future use. No indication is given as to what use that might be. (Appendix A, section 2.3)
• Octal literals were allowed to contain the digits 8 and 9, which had the octal values 10 and 11, respectively, as you might expect. (Appendix A, section 2.4.1)
• All string literals were distinct, even those with exactly the same contents (Appendix A, section 2.5). Note that this guarantee does not exist in ANSI C, nor C++. Also, it seems that modifying string literals was well-defined in K&R C; I didn’t see anything in the book to suggest otherwise. (In both ANSI C and C++ modifying string literals is undefined behaviour, and in C++11 it is not possible without casting away constness anyway.)
• There was no unary + operator (Appendix A, section 7.2). (Note: ANSI C only allows the unary + and - operators to be applied to arithmetic types. In C++, unary + can also be applied to pointers.)
• It appears that unsigned could only be applied to int; there were no unsigned chars, shorts, or longs. Curiously, you could declare a long float; this was equivalent to double. (Appendix A, section 8.2)
• There is no mention of const or volatile; those features did not exist in K&R C. In fact, const was originally introduced in C++ (then known as C With Classes); this C++ feature was the inspiration for const in C, which appeared in C89. (More info here, section 2.3.) volatile, on the other hand, originated in C89. Stroustrup says it was introduced in C++ [to] match ANSI C (The Design and Evolution of C++, p. 128).
• The preprocessing operators # and ## appear to be absent.
• The text notes (Appendix A, section 17) that earlier versions of C had compound assignment operators with the equal sign at the beginning, e.g., x=-1 to decrement x. (Supposedly you had to insert a space between = and - if you wanted to assign -1 to x instead.) It also notes that the equal sign before the initializer in a declaration was not present, so int x 1; would define x and initialize it with the value 1. Thank goodness that even in 1978 the authors had had the good sense to eliminate these constructs… :P
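
The ambiguity is gone in standardized C, where the same characters parse as plain assignment; a tiny sketch (mine):

```c
/* In very early C, "x=-1" meant "subtract 1 from x" (the operator
 * was spelled =-).  In every standardized C it parses as x = (-1). */
int old_ambiguity(void)
{
    int x = 5;
    x=-1;        /* modern parse: assign -1, not decrement */
    return x;
}
```
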
• A reading of the grammar on page 217 suggests that trailing commas in initializers were allowed only at the top level. I have no idea why. Maybe it was just a typo.
• I saved the biggest WTF of all for the end: the compilers of the day apparently allowed you to write something like foo.bar even when foo does not have the type of a structure that contains a member named bar. Likewise the left operand of -> could be any pointer or integer. In both cases, the compiler supposedly looks up the name on the right to figure out which struct type you intended. So foo->bar if foo is an integer would do something like ((Foo*)foo)->bar where Foo is a struct that contains a member named bar, and foo.bar would be like ((Foo*)&foo)->bar. The text doesn’t say how ambiguity is resolved (i.e., if there are multiple structs that have a member with the given name).
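
Spelled out in modern C (my own reconstruction; Foo and bar are hypothetical names), the old behaviour amounted to something like this:

```c
#include <stdint.h>

struct Foo { int pad; int bar; };

/* foo->bar, where foo was just an integer, behaved roughly as if the
 * compiler had inserted the cast below, picking struct Foo because it
 * is a structure containing a member named bar. */
int member_via_int(intptr_t foo)
{
    return ((struct Foo *) foo)->bar;
}
```
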
Posted in Uncategorized | 12 Comments

## I’m now a grownup, apparently

Whoa, it’s been a long time since my last update. (My previous post seems to have been received exceptionally poorly. I guess people don’t like negativity.)

Part of the reason why I update less frequently now is that I spend a lot of time on Quora and it gives me the opportunity to write a lot of things about science that I probably would’ve written here instead. Some people even blog on Quora, but I don’t think it makes sense for me to do so. I have a lot of followers on Quora, but they probably want to see my thoughts on science, and may or may not be interested in my personal life. On the other hand, a lot of people who read my blog aren’t on Quora.

I had planned to update in September, because that’s when I had planned to start work. For anyone who doesn’t know yet, I’m now living in Sunnyvale, California and working full-time as a software engineer at Google! Due to problems with getting a visa, I wasn’t actually able to start until this past Monday—more than four months after my initially planned start date. I guess you could say I had a rather long vacation—longer than I could handle, as I was starting to get quite bored at home.

This is, of course, my first full-time job, and for the first time, I feel that I have to exercise a lot of self-restraint. During my internships, I skipped breakfast, stuffed myself at dinner, drank a lot of pop, and frequently ate unhealthy snacks. I allowed myself to get away with it because I knew that the internship would end in a few months, and then I wouldn’t be tempted as much. Things are a lot different now, because, well, who knows how long I’m going to be here? If I develop unhealthy habits, it’s going to be hard to break them. So I now wake up early enough to catch breakfast at the office, eat reasonable portions at lunch and dinner, limit my consumption of pop, and try to eat at least one piece of fresh fruit per day. (Google doesn’t have as many unhealthy snacks as Facebook; I haven’t yet seen beef jerky at the office.) Now I just have to make myself exercise… urgh, I’ll get to that eventually >_>

You know, when I was a little kid, like all other kids, I hated having a bedtime, and I wondered why adults didn’t have bedtimes, and my mom told me that adults do have bedtimes, and they’re just self-imposed. Well, now I understand. I guess that makes me an adult, eh?

(Oh, and one last note: The girl mentioned in the previous post is no longer my girlfriend. I suppose it’s a little bit embarrassing to excitedly announce that you’re dating someone and then break up shortly afterward, but, oh well, that’s life. I’m not going to retroactively edit the post to hide my embarrassment or anything like that. She’s still an awesome person; just not the right one for me.)


## Done exams

I thought my Real Analysis exam was going to be the hardest one, but it turned out to be probably the easiest exam I have ever written in my entire undergrad. Several questions just asked for definitions or statements of theorems. One question was true/false, with no proof required. Two questions were nontrivial, but actually quite trivial because they appeared on the 2012 exam so I already knew how to solve them. Also, the exam had 39 marks in total, but you only needed 30 marks to get 100%.

Looking over past exams, I would say that there has been a marked decrease in difficulty over time. Not just for this course, mind you—this has been the case for most other courses I’ve looked at too. For example, this was certainly the case for CHM342 and CHM440 (both organic synthesis). PHY489 (high energy physics) didn’t seem to show any variation in difficulty. I’ve never seen a course for which the exams became harder over time. It’s funny how people complain all the time about how hard U. of T. is. Would they have even passed in previous years?? (The counterargument here, perhaps, is that the high school curriculum has also become more and more watered-down over time, so perhaps students in the past were more competent in college and could handle the harder exams. I’m not sure whether this is true.)

Anyway; wow. School is finally over for me… at least until grad school. Not to say that studying wasn’t satisfying… but maybe it will be even more so now that I have the freedom to set my own curriculum.

The next order of business is the ACM-ICPC World Finals, which are going to be held in Russia from June 22 to June 26. This gives me about two months to practice. But sadly, I’m unlikely to make good use of it. I know people can improve a lot in a few months; Tyson (my teammate in first year) is a great example as he got really good really quickly over the summer after high school. Unfortunately, I get distracted by Reddit on a regular basis. Coding algorithm problems just hasn’t been fun for a long time now; I love solving them, but coding is a pain. Also, reading ACM problem statements is a pain. I’m adequately motivated during an actual contest, but when I’m practicing by myself that’s a different matter… sigh. It’s too bad Richard Peng isn’t here to remind me to do problems :P


## Relativistic electrodynamics cheat sheet

I was bored, so I decided to LaTeX up the cheat sheet I brought to my PHY450 (relativistic electrodynamics) exam. It wasn’t actually cheating, of course; we were permitted to bring in a two-sided exam aid sheet. I originally used both sides, but when I typed it up I was able to cram all the relevant formulae onto one sheet. This has not been thoroughly vetted for errors, so use at your own risk. Page numbers are references to lecture notes and will not be useful for anyone not taking the course, but I still think these formulae will be generally useful. Gaussian units are used throughout because that’s what was used in class. (I don’t like Gaussian units but that’s just how it is; see previous post about what I think would be a better unit system—but of course nobody will ever use it.)


## Unit systems in electrodynamics

I learned electrodynamics, like most other undergraduate students of my generation, in SI units. They seemed like the natural choice, because we use SI units for everything else. But then I took PHY450, “Relativistic Electrodynamics”, where we use cgs-Gaussian units. At first I found this unit system strange and unnatural, but then I realized it does have some advantages over the SI system. For example, the E and B fields have the same units, as do the scalar and vector potentials. This is attractive because the scalar and vector potentials together constitute the four-potential, $A^i = (\phi, \mathbf{A})$, and the electric and magnetic fields together form the electromagnetic field tensor,

$\displaystyle F^{ij} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0 \end{pmatrix}$

Still, I find it very strange that cgs units don’t have a separate unit for electric charge or electric current. I think it is unintuitive to measure electric charge in terms of length, mass, and time, because electric charge isn’t defined in that way. We define speed to be distance over time, acceleration to be change in velocity over time, force to be mass times acceleration, pressure to be force over area, energy to be force times distance, power to be energy over time. We don’t define electric charge to be anything times anything or anything over anything; it’s as much a fundamental quantity as any other. For what it’s worth, there are some contexts in which it makes sense not to treat charge as a separate unit. For example, in quantum electrodynamics, the charge of the electron is often expressed in Planck units, where its value is the square root of the fine structure constant. But then again, we also measure distance in units of time sometimes (light years) and time in units of distance sometimes (think $x^0$ in relativity). I think that it makes as much sense to have a separate unit for charge or current as it does to have separate units for distance and time, or mass, momentum, and energy—most of the time it’s better, with some exceptions.

I think that in the nicest possible system of units, then, the equations of electrodynamics would look like this. $k$ denotes the reciprocal of the electric constant $\epsilon_0$, and is $4\pi$ times greater than Coulomb’s constant. The magnetic field and magnetic vector potential are $c$ times their values in SI units. The metric convention is (+,-,-,-).

Maxwell’s equations:
$\displaystyle \nabla \cdot \mathbf{E} = k\rho$
$\displaystyle \nabla \cdot \mathbf{B} = 0$
$\displaystyle \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial(ct)}$
$\displaystyle \nabla \times \mathbf{B} = k\frac{\mathbf{J}}{c} + \frac{\partial\mathbf{E}}{\partial(ct)}$
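
As a quick consistency check (my own, not part of the original list): substituting $k = 1/\epsilon_0$ and $\mathbf{B} = c\,\mathbf{B}_{\mathrm{SI}}$ (with $\mathbf{E}$, $\rho$, $\mathbf{J}$ unchanged) recovers the SI equations; for instance, the last equation becomes

$\displaystyle \nabla \times \mathbf{B}_{\mathrm{SI}} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}$

using $1/(\epsilon_0 c^2) = \mu_0$. Setting $k = 4\pi$ instead gives the Gaussian forms.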

Lorentz force law:
$\displaystyle \mathbf{F} = q\left(\mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B}\right)$

Coulomb’s law:
$\displaystyle \mathbf{F} = \frac{k q_1 q_2}{4\pi r^2} \hat{\mathbf{r}}$
$\displaystyle \mathbf{E} = \frac{k q}{4\pi r^2} \hat{\mathbf{r}} = \iiint \frac{k \rho}{4\pi r^2} \hat{\mathbf{r}} \, dV$

Biot–Savart law:
$\displaystyle \mathbf{B} = \int \frac{k}{4\pi r^2} \frac{I \, d\ell \times \hat{\mathbf{r}}}{c} = \iiint \frac{k}{4\pi r^2} \frac{\mathbf{J} }{c} \times \hat{\mathbf{r}} \, dV$

Scalar and vector potentials:
$\displaystyle \mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial(ct)}$
$\displaystyle \mathbf{B} = \nabla \times \mathbf{A}$
$\displaystyle A^\mu = (V, \mathbf{A})$

Field tensor:
$\displaystyle F^{\mu\nu} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0 \end{pmatrix}$

Potentials in electrostatics and magnetostatics (Coulomb gauge):
$\displaystyle V = \frac{k}{4\pi r}q = \iiint \frac{k}{4\pi r} \rho \, dV$
$\displaystyle \mathbf{A} = \int \frac{k}{4\pi r} \frac{I \, d\ell}{c} = \iiint \frac{k}{4\pi r} \frac{\mathbf{J}}{c} \, dV$

Retarded potentials, Lorenz gauge:
$\displaystyle A^\mu(\mathbf{r}, t) = \iiint \frac{k}{4\pi \|\mathbf{r} - \mathbf{r}'\|} \frac{J^\mu(\mathbf{r}', t - \|\mathbf{r} - \mathbf{r}'\|/c)}{c} \, d^3\mathbf{r}'$
$\displaystyle A^\mu(x^\nu) = \iiiint \frac{k}{4\pi} \frac{J^\mu(x^{\nu\prime})}{c} \, 2\delta((x^\nu - x^{\nu\prime})^2) \, \theta(x^0 - x^{0\prime}) \, d^4 x^{\nu\prime}$

Liénard–Wiechert potentials:
$\displaystyle V = \frac{kq}{4\pi r} \frac{1}{1-\vec{\beta} \cdot \hat{\mathbf{r}}}$
$\displaystyle \mathbf{A} = \frac{kq}{4\pi r} \frac{\vec{\beta}}{1 - \vec{\beta} \cdot \hat{\mathbf{r}}}$
($\mathbf{r}$ is the vector from the retarded position of the charge to the observation point, $r$ is the magnitude of $\mathbf{r}$, and $\vec\beta$ is $1/c$ times the velocity of the charge at the retarded time.)

Poynting vector; Maxwell stress tensor; electromagnetic stress-energy tensor:
$\displaystyle T^{\alpha\beta} = -\frac{1}{k}F^{\alpha \gamma} F^\beta{}_\gamma + \frac{\eta^{\alpha\beta}}{4k} F^{\gamma\delta}F_{\gamma\delta}$
$\displaystyle T^{00} = \frac{1}{2k}(E^2 + B^2)$
$\displaystyle T^{0i} = T^{i 0} = \frac{1}{k} \epsilon_{ijk} E_j B_k = \frac{1}{c} S_i$
$\displaystyle T^{ij} = -\frac{1}{k}\left[E_i E_j + B_i B_j - \frac{1}{2}\delta_{ij} (E^2 + B^2)\right] = -\sigma_{ij}$
$\displaystyle \mathbf{S} = \frac{c}{k} \mathbf{E} \times \mathbf{B}$
Poynting’s theorem and the statement of the conservation of momentum in electrodynamics take their usual form and are the same in all unit systems:
$\displaystyle \nabla \cdot \mathbf{S} = -\mathbf{J} \cdot \mathbf{E} - \frac{\partial}{\partial t} \left[\frac{1}{2k}(E^2 + B^2)\right]$
$\displaystyle \partial_i \sigma_{ij} = f_j + \frac{1}{c^2} \frac{\partial S_j}{\partial t}$

Electromagnetic Lagrangian density:
$\displaystyle \mathcal{L} = -\frac{1}{4k}F^{\alpha\beta}F_{\alpha\beta} - \frac{1}{c}J^\alpha A_\alpha$

## On perspective

Reasoning objectively is difficult because we are all biased by our own subjective experiences. There are two ways I can see to address this. The first is to consider others’ subjective experiences in addition to your own. This gives you what many would call a more balanced point of view. You might say that reading about another person’s experiences gave you new perspective on an issue. The second is to attempt to unconsider your own subjective experiences by training yourself to recognize cognitive bias.

I think the conventional wisdom is that you should always try to seek out others’ perspectives, but I question how useful that really is. I’m biased enough already; why would I want to inject even more bias into my thoughts? Emotion is really important in day-to-day life, but I think that it shouldn’t enter the debate on important moral issues facing humanity. When people talk about how their perspectives have been changed, often they mean something like how they visited an impoverished country and were moved by what they saw. Honestly, I don’t care. I already know that poverty causes a great deal of suffering, and, like many others, I would really like to know how it can be reduced. I don’t want to hear your sob story, though.

On the other hand, talking to other people about issues can be rewarding in that they may provide you with facts that you didn’t already know. So that’s the strategy I try to use: talk to other people to learn things you didn’t know, and, at the same time, work by yourself to become less biased. Unfortunately, if there’s one thing I’ve learned, it’s that becoming less biased is really hard. I’m definitely still a very biased person.


## Done organic chemistry forever!

I used to love organic chemistry. Especially synthesis problems, which I felt gave me a chance to exercise my creativity and problem-solving skills. When I was competing for a spot on the Canadian IChO team, organic chemistry was probably my strong suit (although it’s also Richard’s strong suit, and in fact he apparently beat me on every section). When I decided to study chemistry at U. of T., I decided to take mostly physical and organic chemistry courses. I avoided biochemistry, because there’s too much memorization, and analytical chemistry, because I thought it would be boring. I couldn’t fit inorganic into my schedule initially, and I ended up just not taking any inorganic courses at all.

I’m sorry to say that I don’t love organic chemistry anymore. In fact, I’m sick of it. I’ve taken CHM249 (organic chemistry II), CHM342 (organic synthesis), and CHM440 (more organic synthesis, with a focus on pharmaceutical agents). Every time I took another organic chemistry course, there were more and more reactions to memorize. The memorization was not too bad in CHM249, but there also weren’t any challenging problems of the sort I used to love solving in high school. In CHM342 the amount of memorization increased significantly, and I had an especially hard time with drawing three-dimensional transition states and predicting stereochemistry. In CHM440 there were a total of 90 named reactions. I was actually scared of synthesis problems, because, with that many reactions, there is simply no way I would be able to see the right disconnections. Luckily, there weren’t any. Suffice it to say that this course confirmed my suspicion that it was the right choice not to go to grad school to study organic synthesis…

Anyway, I had my CHM440 final exam last week, and my CHM347 exam today (biochemistry; I didn’t want to take it but if I took only courses I liked then I wouldn’t be able to graduate). Next term I’m only taking statistical mechanics. This means I’ll never have to look at another curved arrow for the rest of my life. Yippee!

In retrospect, I greatly underestimated the difficulty of fourth year—or perhaps I just bit off more than I could chew. Three of the five courses I took (CHM440, high-energy physics, general relativity I) were cross-listed as grad courses, so I guess it was foolish of me to expect them not to be so hard. There was also a grad student in my (time-dependent) quantum chem course, and at any rate it was very grad-style, with a take-home final (which was way too hard) and a term paper. I was incredibly busy toward the end of the term, trying to keep up my GPA (not that it actually matters, but I guess it’s just an irrational drive of mine).

I’ve noticed something about myself: when I’m busy doing stuff I don’t want to do, I think of all the productive things I could be doing if I had free time, such as developing the PEG Judge, studying quantum field theory, learning Haskell, or maybe learning a new (human) language. Alas, once free time arrives, I basically waste time browsing Reddit and playing games. Anyone else do this?

Now for some more awesome news. We made ACM World Finals! I’ll be representing the University of Toronto in Russia in June, together with my teammates Hao Wei and Saman Samikermani. We weren’t sure whether we were going to make it, since CMU and UMich creamed us at the regionals, and solved more problems than we did. But I guess ACM just stuck to the recent trend of inviting the top three teams from our region. We’ve got a long road ahead of us; other teams are good—really, really good—and we’re all a bit out of practice. I just hope we don’t do too badly!
