How does immigrant visa spillover work?


There has been some recent speculation that 52,000 visas could spill over from the family-based quota to the employment-based quota due to Trump’s recent executive order to ban immigrants from entering the US for 60 days (which many people believe will be extended, perhaps until the election or even longer). There have been many arguments about the mechanics of spillover on Trackitt, an immigration forum largely populated by Indian people, who face very long backlogs for employment-based immigrant visas. However, the law is quite clear, so much of the controversy seems unnecessary.

Where did the 52,000 figure come from?

The figure of 52,000 comes from a tweet by the Migration Policy Institute that 26,000 green cards would be blocked by the EO during each month it remains in effect, which means 52,000 over a period of 60 days. The MPI doesn’t explain their methodology, but I think we can draw some inferences.

First, the MPI seems to have gotten the 26,000 figure by taking the total of 315,403 and dividing by 12. This total includes family-based, employment-based, and diversity visas. So we can immediately conclude that the MPI did not estimate that 52,000 family-based immigrants would be blocked; that was their total estimate across all three categories. For the family-based categories specifically, their estimate is that 5,565 immediate relative immigrants and 15,409 family-based preference immigrants would be blocked per month.

Second, the MPI’s figures look very similar to the official State Department figures for immigrant visas issued abroad in 2019. For example, the MPI’s estimate of 3,432 for EB-2 is very close to the State Department figure of 3,497 EB-2 immigrant visas issued in 2019, which is substantially less than the actual annual EB-2 quota of roughly 40,000. From this we can infer that the MPI numbers mean something like “we estimate that 52,000 people who would have received immigrant visas over the next 60 days now will not, as a result of the EO.”

This does not mean that 52,000 visas will go unused! Since the EO does not block green cards for applicants who are already in the US, there is a possibility that some or all of those 52,000 visas can be issued to applicants in the US, instead of the people outside the US who would have received them if not for the EO. (Note: An applicant in the US who receives a green card consumes an immigrant visa, but is not actually physically issued a visa.) The MPI figures simply do not answer the question of whether any immigrant visas will be unused this year.

Will any family-based visas actually go unused this year?

The May 2020 visa bulletin was released on April 24, several days later than expected, since it could not be cleared for release until Trump’s executive order had been finalized. However, astute readers noticed that the date April 6, 2020 was printed at the bottom of the bulletin, suggesting that the cutoff dates had already been calculated by then. Indeed, the movement in cutoff dates was, perhaps, no more than would have been expected without the EO. For example, the F3 and F4 Mexico cutoff dates, which advanced 1 month in the April bulletin, again advanced 1 month in the May bulletin.

Now, in order to use up all the family-based visas for this fiscal year, the State Department must move the cutoff dates far enough ahead so that enough people are able to apply for their visas. Since most family-based applicants live outside the US, and many of them can no longer receive a visa due to the EO, it is necessary to move the cutoff dates more quickly so the visas can instead be used by applicants inside the US. The fact that this does not seem to have happened has made some immigration lawyers concerned. However, until the June 2020 visa bulletin is released, it’s too early to panic—again, the May 2020 cutoff dates may simply not have taken into account the EO yet.

Most family-based immigrants on the waiting list are outside the US. This is because most of them cannot easily obtain a long-term visa, such as a student visa or work visa, that would enable them to live in the US while they wait for an immigrant visa to become available. This fact has led some to speculate that there may simply not be enough of them inside the US to use up the rest of the family-based immigration quota for the current fiscal year. For example, if, hypothetically speaking (I am just pulling numbers out of thin air here), 120,000 of the 226,000 family-based immigrant visas this year had already been issued, and there were 85,000 remaining applicants waiting inside the US, then at least 21,000 visas would go unused. However, I don’t think that this is the case.

Greg Siskind, an immigration lawyer, has made the following comment that most casual readers won’t understand: “They could have moved all the Mexican categories to May 2001 (they’re backed up to the 90s except F2A) so that the 245i folks in the US could file adjustments to fill in the gap from IV processing abroad.” Siskind is referring to INA 245(i), which, in short, allows certain people with priority dates earlier than May 2001 to obtain a green card even if they are currently residing in the US without authorization. In other words, if the State Department is having a hard time issuing all the remaining family-based visas this year, they should advance the Mexico cutoff dates, possibly all the way up to May 2001, to give more people a chance to apply, as there is probably a substantial population of Mexicans inside the US who may be eligible for green cards under INA 245(i). The number of such individuals is impossible to determine accurately, so the State Department cannot in good faith say “well, we tried to issue all the visas available, but we couldn’t do so because of the EO” unless they’ve given all the 245(i) folks an opportunity to apply.

Some immigration lawyers think that the administration is going to try to waste family-based visas on purpose as part of their agenda to curtail immigration. If the June 2020 visa bulletin rolls around and it really does seem that they are not making an effort to advance the dates, then it is likely that immigrant advocacy groups will have some members of Congress ask the State Department why they are not on track to issue all the visas to eligible applicants—and there may be legal challenges if the answer is not satisfactory. So I believe that while some family-based visas may go unused this year, the chance that the number will be significantly affected by the EO is slim (i.e., there is only a slim chance that judicial relief gets held up until after the end of the fiscal year). (We should note that there was already some spillover into the employment-based quota this year, indicating that not all of last year’s family-based visas were used. Nobody seems to know exactly why, so the same might happen this year, EO or no EO.)

If some visas are unused, would they actually spill over?

According to INA 201(d)(1)(B), if the full quota of family-based visas is not used up in this fiscal year, then the unused numbers are added to next year’s employment-based immigration quota. While Trump can suspend issuance of visas pursuant to INA 212(f), he is without power to block the spillover provisions. Even if Trump takes further actions to block issuance of employment-based green cards next year, in an effort to prevent immigrants from benefitting from the spillover, he cannot actually waste any green cards. If the spilled-over numbers, added to the employment-based quota next year, are not used, they will just spill over as family-based quota the following year, and so on indefinitely back and forth between the two categories until either Trump is out of office, or Congress changes the law.

Unused diversity visa numbers, however, do not spill over. They are neither added to next year’s diversity visa quota, nor to any other category.

If visas spill over to employment-based immigrants, how will they be allocated?

INA 201(d) prescribes the worldwide level of employment-based immigrants. INA 203(b) prescribes that the worldwide level shall be allocated as follows: 2/7 to EB-1, 2/7 to EB-2, 2/7 to EB-3, 1/14 to EB-4, and 1/14 to EB-5. Some people have speculated that the spillover visas will all go to the most backlogged people, namely Indian-born EB-2 applicants; this is false. Some have speculated that the spillover visas will all go to EB-1; this is also false. Each preference quota, EB-1 through EB-5, will increase proportionally. This is very clear from the statute; there is no ambiguity. Some have speculated that none of the spillover numbers can be allocated to Indians because the number of available visas for natives of any single country is fixed. This is also false, since the 7% limitation applies to the total worldwide level of preference-based immigrants according to INA 202(a)(2), which, as previously mentioned, is not a fixed number but, according to INA 201, depends on how many visas spilled over from the previous year.
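
To make the proportional scaling concrete, here is a purely hypothetical worked example. The 10,000 spillover figure is invented for illustration; the 140,000 employment-based baseline comes from INA 201(d)(1)(A), and 226,000 is the family-sponsored level used elsewhere in this post (its statutory minimum).

    Unused family-based numbers this year:  10,000 (made up)
    Next year’s EB worldwide level:         140,000 + 10,000 = 150,000
    EB-1, EB-2, EB-3 quotas:                2/7 × 150,000 ≈ 42,857 each
    EB-4, EB-5 quotas:                      1/14 × 150,000 ≈ 10,714 each
    Per-country limit (7% of the family-sponsored and employment-based
    levels combined, taking the family level at its 226,000 minimum):
                                            7% × (226,000 + 150,000) = 26,320
                                            (up from 7% × 366,000 = 25,620)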

Thus, if any numbers spill over, then the State Department will allocate them the same way they would have allocated the usual quota—just with all the numbers scaled up proportionally. Because EB-1 Rest of World is unlikely to use up all the numbers (with or without spillover), the spilled-over visas allocated to EB-1 (namely 2/7 of the total) will mostly go to the most backlogged country of birth (i.e., Indian-born applicants) due to INA 202(a)(5). Another 2/7 would spill over to EB-2, which would also be allocated in the usual way, but if Rest of World does not use up all the EB-2 numbers, then any additional numbers would again be allocated to India. In EB-3, demand from Rest of World exceeds supply, so if current patterns continue, then EB-3 China and EB-3 India would likely only receive 7% of any spilled-over numbers.

Conclusion

It’s not likely that there will be any unused family-based visas to spill over. But if there are any unused family-based visas, they will spill over to employment-based immigrants, and Trump cannot prevent that. If such spillover does occur, then it will be distributed evenly to EB-1, EB-2, and EB-3, and within each preference level, Rest of World will still be entitled to 86% of the spilled-over visas, which will only go to India if there are not enough Rest of World applicants. Thus, spillover could significantly benefit EB-1 India and EB-3 Rest of World, whereas EB-3 India and all three Chinese categories would likely receive little benefit (though EB-5 is another story). The EB-2 Rest of World and EB-2 India situations are harder to predict (although EB-2 India is so backlogged that even if it does end up receiving additional unused numbers from EB-2 Rest of World, the movement will still not be that much).


Armchair lawyering about the recent EO


The Attorney General of New York is making preparations to challenge Trump’s executive order banning most immigrants from entering the US “temporarily” under INA 212(f) (8 USC §1182(f)). I am sure there are various other groups also interested in filing legal challenges.

The order can be found here.

Can the President ban all immigration?

Some commentators suggest that INA 212(f) doesn’t give the President authority to end “all” immigration. Here, “all” has been placed in quotation marks since the EO does not ban all immigration anyway; certain categories, such as spouses of US citizens, are exempted. The idea is that bans based on 212(f) that are too broad may not be allowable.

Yet, the text clearly says the President may “suspend the entry of all aliens or any class of aliens”.

Some argue that, while Trump v. Hawaii established that the President may ban all aliens from particular countries, it would be contrary to the INA for the President to indefinitely end the issuance of visas in entire preference categories. For example, Trump is banning all F4 immigrants (brothers and sisters of US citizens) from entering the US (unless they qualify for an exemption, such as already being in the US); this could be argued to contravene an imagined intent of Congress, in enacting INA 201 and INA 203, of making at least 65,000 F4 immigrant visas available each fiscal year.

This argument is essentially about timing: it is clear from the statute that the President may ban all aliens, but the argument is that this power could not be used to establish a ban of such duration that it would result in the visas allocated by Congress going to waste. I don’t think this argument is convincing. The statute does say the President can suspend entry “for such period as he shall deem necessary”, which is pretty close to explicitly saying there’s no limit on the President’s discretion to set the duration of the suspension.

Is Trump’s use of INA 212(f) an unconstitutional delegation?

Congress is not constitutionally permitted to delegate its legislative power to the executive branch or any other entity, but may enact statutes that enable the executive branch to exercise discretion over policy, provided it supplies an “intelligible principle”. For example, in INA 212(a)(9)(B), Congress banned aliens from entering the United States if they were unlawfully present for more than 180 days, but granted the Attorney General discretion to waive this inadmissibility in certain cases if it would cause extreme hardship to a US citizen or LPR relative. In this case, the executive is permitted to set policy regarding which aliens can receive waivers, but Congress provided the guiding principle that the standard of eligibility for such a waiver must be based on a level of extreme hardship.

A recent case in the United States District Court for the District of Oregon (Case No. 3:19-cv-1743-SI) concerned Trump’s use of INA 212(f) to bar the entry of aliens who could not demonstrate that they would be able to obtain certain types of health insurance coverage. The plaintiffs in this case argued that this use of INA 212(f) involved an unconstitutional delegation. The court agreed and granted a preliminary injunction.

The court’s opinion stated that INA 212(f) is overbroad in delegation, providing no intelligible principle for the President’s exercise of discretion, since the only condition is that the President must find that the entry of aliens “would be detrimental to the interests of the United States”. The court thus found that Congress cannot delegate power to the President in this manner. The defendants pointed out that Supreme Court precedent (Knauff v. Shaughnessy, 338 US 537 (1950)) established that the President’s inherent constitutional power over foreign affairs includes the power to restrict the entry of aliens (so that INA 212(f) confers little or no additional power upon the President but merely reaffirms power that the President already possesses). The district court’s opinion, however, was that such inherent power of the President over immigration does not extend to restricting immigration for domestic policy purposes (in contrast to, e.g., national security or foreign relations concerns).

In my opinion, this type of argument, if it is raised in the cases concerning the present executive order, is not likely to persuade the Supreme Court. I believe the Supreme Court would almost certainly find that the President’s inherent authority to restrict the entry of aliens is not limited only to situations that implicate national security or foreign relations concerns.

Conclusion

Based on what we know about the current Supreme Court, I think it’s overwhelmingly unlikely that they would strike down the recent EO. In fact, I think it’s very likely that they would quickly reverse any stay of the EO issued by a lower court. I think it’s almost irresponsible for the mainstream media to give false hope to immigrants that a court challenge would be likely to succeed. Then again, there are actual legal experts who disagree with me, so take this with a grain of salt.


Why Chinese people should support HR1044/S386


In discourse about HR1044/S386, there is a myth that Chinese people don’t support removing the per-country caps. I can tell you that this is false. I’m Chinese and I support it.

Not only is it the right thing to do, but even for purely selfish reasons, I should still support it. Imagine if the per-country caps had never existed in the first place. There are two possible scenarios. One is that the waiting time for Chinese would be shorter than it is now, and I would already have a green card. That’s obviously a win. What about the other scenario? What if there were more applicants in the alternate universe, so waiting times were 5+ years for everyone, rather than 3–4 years for Chinese?

Well, I would rather live in that world—where everyone waits 5+ years—than in this one, where I’m disadvantaged by longer waiting times. Any factor that puts me at a disadvantage relative to other people threatens my well-being in a competitive globalized economy.

Right now, I have the opportunity to work for many of the top companies in the United States because they can sponsor me for H-1B status. However, there is no guarantee that this will continue to be the case. My bachelor’s degree is not in computer science, and I’ve already received two RFEs (in which USCIS asked my company’s lawyers to provide more evidence that I’m actually eligible for TN and H-1B status). The Trump administration has increased scrutiny on H-1B applications, and determined that junior software engineering positions should not be presumed to be eligible for the H-1B classification. The word on the street is that scrutiny has increased for H-1B applicants who don’t have CS degrees, so I can expect this situation to persist in the near future. No one can predict the Trump administration’s next moves, and the probability that they will change the regulations so that I’m simply not eligible to be an H-1B software engineer anymore—either because I don’t have a CS degree, or because we just can’t prove that my job is complex enough to merit an H-1B—is non-negligible. Of course, if I already had a green card, then any such future changes wouldn’t affect me.

I live in constant fear that I am going to have to leave the United States while others (including hundreds of thousands of my fellow Canadian citizens) continue to work for top US companies, getting the best possible experience, and I fall behind. As I said: this disadvantage would threaten my well-being in a competitive globalized economy. As long as the opportunities available in tech continue to expand, everything will be fine and dandy; but if a recession were to occur, limiting the availability of decent software engineering jobs, I would be in the unenviable position of having to compete with people with much better résumés for that limited set of openings. Chances are that I’d be forced to accept some demoralizing, low-paid position in a company that treats its software engineers like cogs and has frequent layoffs. (Never forget that Dilbert is a minor exaggeration of the reality of working as a software engineer in a company where software engineers can’t command respect.)

Some people think I’m overly anxious. I would say that we software engineers should remember how lucky we are right now. Do you see how other people our age, who are just as intelligent and hard-working as we are, struggle for economic security? One might say, there but for the grace of God go we.

I have often expressed my desire for a world with open borders, but we all know that’s not happening any time soon. For the time being, people who were born in the United States, or otherwise acquired US citizenship at birth or through the naturalization of a parent, have an advantage over me in terms of access to the best software engineering opportunities, which handicaps me in the competition for the sharply limited availability of economic security in a neoliberal globalized economy. The effect of the per-country cap system is also to put me at a disadvantage relative to equally qualified individuals born in Canada or, say, Australia, Hong Kong, Taiwan, or South Korea. For this reason, I must support HR1044/S386, and that would remain true even if I had the animosity against Indian software engineers that many anti-HR1044/S386 people seem to have.


INA 203(g) cancellation and DoL processing times


The Department of Labor is currently taking 4–5 months to process prevailing wage determination requests and 3–4 months to process permanent labor certifications. See here.

A few years ago, prevailing wage determinations only used to take about 6 weeks.

This increase in processing times has created an obscure way for me and other employment-based green card applicants to lose our place in line.

Because of my position in the queue, I’ll become eligible to apply for a green card about 1.5 to 2 years from now. Once I become eligible, I have a 12-month deadline to file the final form (Form I-485) to apply for a green card. Due to INA 203(g) (8 USC §1153(g)), if I fail to file within this deadline, then I go to the back of the line and will probably have to wait another 3 years or so before I can file again.

Those of you who have gone through, or are going through, the employment-based immigration process have probably already figured out why this is dangerous. For others, read on.

At the time of filing Form I-485, I have to have the intent to work for my sponsoring employer—the one that filed all the paperwork with the Department of Labor. Technically, I don’t have to be working for them when the form is filed, but they must be offering me either new or continuing employment in order for the I-485 to be approvable. So what happens if I’ve filed my I-485, it sits in a queue with USCIS for a few months, and then I get fired? The I-485 is effectively abandoned, and I have to find a new employer who has to file the same forms with the Department of Labor as the old employer did. And that’ll take another 7–9 months.

During that 7–9 month period, I won’t be able to file Form I-485. It will only be possible to file it after the DoL has approved all of the new employer’s paperwork. What this means is that while all this is happening, my 12-month clock may run out. Example: if Form I-485 has been pending for 5 months and my current employer fires me, then I have about 7 months left on the clock before USCIS says: “hey, it’s been an entire year since the last time you applied for a green card even though you’ve been at the front of the line the whole time. So we’re sending you to the back of the line. See ya!”

This analysis depends on a rather pessimistic interpretation of INA 203(g). In particular, you might wonder whether the 12-month clock should start over at the point at which I get fired, causing my I-485 to be abandoned. Let’s say this is the case. Then the 7–9 month processing time doesn’t seem as dangerous. But 3–5 months is not a lot of slack. It takes time to find a new job, to gather experience letters from previous co-workers, and for attorneys to draft the required forms. Furthermore, the Department of Labor sometimes randomly decides to audit permanent labor certification applications, adding an extra 3 months, in which case the total processing time becomes 10–12 months. In that case, it’s almost certain that the 12-month clock would run out during the time when paperwork is pending with DoL.
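
Even under this friendlier reading, the arithmetic is tight. Here is a rough, hypothetical timeline; the two months allotted for finding a new job and gathering paperwork is my own guess, while the DoL figures are the ones quoted above.

    Month 0:       fired; the pending I-485 is effectively abandoned
                   (on the friendlier reading, the 12-month clock restarts here)
    Months 0–2:    find a new employer, gather experience letters,
                   attorneys draft the forms
    Months 2–7:    prevailing wage determination pending (4–5 months)
    Months 7–11:   PERM labor certification pending (3–4 months)
    Months 9–11:   earliest point at which a new I-485 could be filed
    Months 12–14:  if the PERM is audited (+3 months), the paperwork is
                   finished only after the 12-month clock has already run out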

There is one safeguard against this possibility. If my I-485 has been pending for at least 180 days before I leave my employer, then the new employer doesn’t need to file any paperwork, so it’s all good. However, if I get fired within the first 180 days, then I’m possibly screwed. In the past, there would probably have been enough time to go through all the paperwork with the new employer. But given current processing times, there may not be.

Processing times are not likely to decrease in the next few years (particularly as I don’t especially trust the Democrats to nominate someone who can actually beat Trump in 2020), and the chances of Congress passing a fix to this situation—say, by extending the deadline in INA 203(g) from 12 months to 24 months—are virtually zero, because this issue is simply too obscure. If there is any legislative fix, it will have to come as part of a comprehensive immigration reform package—and don’t hold your breath for that either.

But hey, it’s not the end of the world. If it takes an extra 3 years to get my green card, at least I’m still working and getting paid during that time. Unless, of course, the Trump administration has made it virtually impossible to get H-1Bs by then. If that’s the case, I’d better hope to be married to an American by then…


On solving problems for a living


Back when I was just an intern, spending my summers chilling out in the Bay Area (and doing work sometimes), I started reading Quora and noticed people debating whether the compensation for software engineering was disproportionate to the difficulty of the job. (I have no idea how long this debate has been going on.) It seemed pretty obvious to me back then that the answer was yes, or, at least, that the actual difficulty of software engineering is much less than people think when they haven’t been exposed to the right opportunities to learn programming. It seemed inevitable that salaries would eventually fall as a result of more people being trained to enter the field.

But I saw a contrary point of view being expressed in the answers sometimes, namely that software engineering is stressful. Some of that, of course, is due to factors that are not universal, such as long hours and on-call rotations. But it was argued that the job is also demanding in other ways, such as the need to constantly learn new technologies and new developments in existing technologies. At the time, I don’t think I had the perspective to be able to understand this.

Now that I’ve been a full-time software engineer for almost three years, I think I’m starting to understand. I’m lucky to be on a team where I’m nearly free from certain stressors that my software engineer friends have to deal with, such as pager duty, daily scrums, and frequent meetings in general. But I also experience job-related stress of a particular kind, and I suspect this experience may be close to universal: it’s that I don’t know what the next problem is that needs to be solved, but I know I’ll have to do everything I can to solve it.

Almost all jobs involve problem-solving to some extent, but occupy different positions on a spectrum. On one end of the spectrum are jobs that involve performing a bounded set of responsibilities in some fixed way, such as working in a call centre where you just follow a script and transfer the caller to a specialist whenever that fails. Those jobs can often be extremely stressful due to the long hours, low pay, and general working conditions, but not due to the nature of the problems to be solved. Many jobs fall somewhere near the middle of the spectrum, consisting mostly of fixed responsibilities with some variability in challenges and some room to invent ideas for how to better perform those responsibilities. I would argue that software engineering stands at the far end of the spectrum, together with other traditionally highly compensated jobs such as that of the doctor or lawyer. For while a lecturer can present the same lecture twice to different audiences and a bus driver can drive along the same route twice on different days, a software engineer should not even write the same ten lines of code twice, or perform the same operations twice—they should be factoring out the code into reusable components and automating the operations.

It follows that if you still have a job as a software engineer, it is only because you are needed in order to deal with a never-ending stream of new problems that are qualitatively different from the ones you have solved before. And that means you have to constantly search for solutions to those problems. You cannot follow a fixed set of steps to try to solve the problems you are faced with, and claim that your responsibilities have been fulfilled. You have to apply your technical and creative skills to their fullest extent.

As I said, this makes software engineering comparable to practising medicine or law. Your job is not to apply some fixed set of techniques, but to treat the patient or to win the case (or perhaps settle it favourably, yeah, yeah, I know), and you go to school for a long time so you can learn the knowledge needed to prepare a solution to the problem from first principles, as it may differ in significant ways from cases that you have dealt with in the past. Being constantly challenged in this way is very likely to be stressful, particularly when you aren’t sure whether the problem even has a solution, or how you should feel about your own personal inability to solve it.

So I think when people argue that software engineers are disproportionately compensated, they should consider that people whose jobs consist of this kind of generalized problem solving within a broad domain do tend to be well-compensated and that this should not be unexpected when you consider the demands of those jobs (and I hardly hear anyone saying doctors are overpaid).

I wonder whether there are highly compensated jobs that are much closer to the other end of the spectrum, where you do more or less the same thing every day but you get paid a lot of money to do it. I suspect that a job of that type would be very desirable and that it could only continue to be well-compensated if it either had artificial barriers to entry or required skills that are difficult to acquire. If jobs in the former category existed in the past, they have probably mostly disappeared due to globalization and other forces eliminating such barriers. But perhaps there are jobs that fall into the second category.


The inverse-square law of magnetism


The observation that opposite charges attract while like charges repel, with a force proportional to the inverse square of distance, motivates the study of electrostatics. Although we often don’t solve problems in electrostatics using Coulomb’s law directly, relying instead on Gauss’s law or on techniques for solving Poisson’s equation for the electric scalar potential, there is no physical information contained in those techniques that isn’t already in Coulomb’s law. Therefore, the entire field of electrostatics, including the introduction of the electric field E itself, may be viewed as an exploration of the consequences of this basic fact about stationary electric charges (and continuous distributions thereof).

At the beginning of the chapter on magnetostatics in Introduction to Electrodynamics, Griffiths notes that parallel currents attract while antiparallel currents repel. You might think that he would go on to write down a law similar to Coulomb’s law that described this force between currents directly. However, this approach is not pursued. Instead, the magnetic field B and the Lorentz force law are introduced right away. In a subsequent section, the Biot–Savart law is used to compute the magnetic field of a long straight wire, which is then immediately used to find the force between two wires.

There is a good reason for this, which I’ll get to at the end. There’s just one small problem, which is that it’s not clear that magnetism actually obeys an inverse-square law, or that the force that wire 1 exerts on wire 2 is equal and opposite to the force that wire 2 exerts on wire 1. Students are also confused by being asked to use the right-hand rule to assign an apparently arbitrary direction to the magnetic field. It may appear that the universe prefers one handedness over the other for some strange reason (which, for what it’s worth, is true for the weak interaction, but not for electromagnetism).

If there were an analogue to Coulomb’s law for magnetostatics, it would become clear that the magnetic force obeys an inverse-square law, obeys Newton’s third law, and does not involve an arbitrary choice of handedness. Obviously I don’t mean the Biot–Savart law; I mean a law that gives the force between currents directly. In fact, we can derive such a law using basic knowledge of vector calculus. Recall that the Biot–Savart law gives the magnetic field at a point due to some given source distribution:
\displaystyle B(x) = \frac{\mu_0}{4\pi} \int \frac{J(x') \times (x - x')}{\|x - x'\|^3} \, \mathrm{d}^3x'
The force this exerts on some other current distribution J(x) is given by applying the Lorentz force law and integrating again over x:
\displaystyle F = \frac{\mu_0}{4\pi} \iint J(x) \times \frac{J(x') \times (x - x')}{\|x - x'\|^3} \, \mathrm{d}^3x' \, \mathrm{d}^3 x
We can simplify this using the vector identity A \times (B \times C) = B(A \cdot C) - C(A \cdot B):
\displaystyle F = \frac{\mu_0}{4\pi} \iint J(x') \left(J(x) \cdot \frac{x - x'}{\|x - x'\|^3}\right) - (J(x) \cdot J(x')) \frac{x - x'}{\|x - x'\|^3} \, \mathrm{d}^3x' \, \mathrm{d}^3 x
To see that the integral over the first term vanishes, we can first exchange the order of integration and pull J(x') out of the inner integral, then observe that (x - x')/\|x - x'\|^3 is the gradient of -1/\|x - x'\|:
\displaystyle \iint J(x') \left(J(x) \cdot \frac{x - x'}{\|x - x'\|^3}\right) \, \mathrm{d}^3 x' \, \mathrm{d}^3 x = \int J(x') \int J(x) \cdot \nabla(-\|x - x'\|^{-1}) \, \mathrm{d}^3 x \, \mathrm{d}^3 x'
This allows us to perform integration by parts on the inner integral:
\displaystyle \int J(x) \cdot \nabla(-\|x - x'\|^{-1}) \, \mathrm{d}^3 x = \\ -\int \|x - x'\|^{-1} J(x) \cdot \mathrm{d}a + \int (\nabla \cdot J(x))\|x - x'\|^{-1} \, \mathrm{d}^3 x
where the surface integral is taken over some sufficiently large surface that completely encloses the current distribution J(x) acted on and therefore vanishes, and the volume integral vanishes because the divergence is zero for a steady current. We conclude that:
\displaystyle F = -\frac{\mu_0}{4\pi} \iint (J(x) \cdot J(x')) \frac{x - x'}{\|x - x'\|^3} \, \mathrm{d}^3x' \, \mathrm{d}^3 x
This now reads very much like Coulomb’s law:
\displaystyle F = \frac{1}{4\pi\epsilon_0} \iint \rho(x) \rho(x') \frac{x - x'}{\|x - x'\|^3} \, \mathrm{d}^3x' \, \mathrm{d}^3 x
Both laws describe inverse-square forces that act along the line joining the two elements; the difference in signs is because like charges repel while like (parallel) currents attract. It is now easy to see that the force exerted on J(x) by J(x') is equal and opposite to the force exerted on J(x') by J(x). The right-hand rule does not appear anywhere. Parallel currents attract and antiparallel currents repel, period.

We can directly use this law to compute the force between two long, straight, parallel current-carrying wires. For such a problem we would use the version of the law expressed in terms of currents, I, rather than current densities, J. (I derived the law using current densities because it’s more general that way; it’s easy to argue that we can go from J to I, but less obvious why the form involving J’s should be true given the form involving I’s.)
\displaystyle F = -\frac{\mu_0 I_1 I_2 }{4\pi} \iint \frac{x - x'}{\|x - x'\|^3} \, \mathrm{d}\ell \cdot \mathrm{d}\ell'
To get the force per unit length, we just compute the inner integral at x = 0. We know by symmetry that the force will be perpendicular to the wires, so we can just throw away the longitudinal component. And the two wires are everywhere parallel, so \mathrm{d}\ell \cdot \mathrm{d}\ell' = \mathrm{d}\ell \, \mathrm{d}\ell'. Writing d for the separation between the wires, it’s then clear that
\displaystyle \|F\|/L = \frac{\mu_0 I_1 I_2 }{4\pi} \int_{-\infty}^\infty \frac{d}{(d^2 + x^2)^{3/2}} \, \mathrm{d}x = \frac{\mu_0 I_1 I_2 }{2\pi d}
This is basically the same calculation that we would do with the Biot–Savart law and the Lorentz force law; it’s simply a bit more direct this way.
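
If you want to check the algebra numerically, here is a small sketch in C (my own, not from Griffiths) that evaluates the inner integral with a simple midpoint rule and compares the result against the closed form \mu_0 I_1 I_2 / (2\pi d):

    /* Hypothetical sanity check: numerically integrate d / (x^2 + d^2)^(3/2)
       and compare the resulting force per unit length with mu0*I1*I2/(2*pi*d). */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double PI  = 3.14159265358979323846;
        const double mu0 = 4e-7 * PI;     /* vacuum permeability (T·m/A) */
        const double I1 = 1.0, I2 = 1.0;  /* currents (A) */
        const double d  = 0.01;           /* wire separation (m) */

        /* Midpoint rule over [-L, L]; the integrand falls off like 1/x^3,
           so a finite cutoff is good enough. The exact value is 2/d. */
        const double L = 1000.0 * d;
        const int    n = 1000000;
        const double h = 2.0 * L / n;
        double integral = 0.0;
        for (int i = 0; i < n; i++) {
            double x = -L + (i + 0.5) * h;
            integral += d / pow(x * x + d * d, 1.5) * h;
        }

        printf("numerical:   %.6e N/m\n", mu0 * I1 * I2 / (4.0 * PI) * integral);
        printf("closed form: %.6e N/m\n", mu0 * I1 * I2 / (2.0 * PI * d));
        return 0;
    }

With these parameters, both printed values should come out to about 2 × 10⁻⁵ N/m.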

Still, if magnetism were introduced in this way, it would be necessary to get from the inverse square law to the field abstraction somehow. With electrostatics this is easy: we just pull out \rho(x) and call everything else E(x). With the inverse-square law for the magnetic force, we can try to do something like this. It’s a little bit trickier because the force also depends on the direction of the current element J(x) (while charge density, of course, doesn’t have a direction). To represent an arbitrary linear function of J(x) that gives the force density f on the current element, we need to use a matrix field rather than a vector field:
\displaystyle F = -\frac{\mu_0}{4\pi} \iint \left[\frac{x - x'}{\|x - x'\|^3} J^t(x')\right] J(x) \, \mathrm{d}^3x' \, \mathrm{d}^3 x = \int B(x)J(x) \, \mathrm{d}^3 x
where the matrix field B evidently satisfies
\displaystyle B(x) = -\frac{\mu_0}{4\pi} \int \frac{x - x'}{\|x - x'\|^3} J^t(x')  \, \mathrm{d}^3x'
The problem is that what we’ve done here is, in some sense, misleading. The local law f = BJ that we might be tempted to read off from this is not really true, in that B(x)J(x)\, \mathrm{d}^3 x is not the real force density on the infinitesimal current element J(x). In performing an integration by parts earlier, we essentially averaged out the term we wanted to get rid of over the entire target distribution. But this destroyed the information about what the true distribution of force is over the target current loop, with observable consequences unless the loop is rigid. In other words, we can only compute the total force on the loop, but not the stress tending to deform the loop. The true Lorentz force law that has the correct local information also has the form f = BJ, but the matrix B is antisymmetric; we also need an antisymmetrization (wedge product) in the equation for B. It’s not clear to me whether it’s possible to get from the inverse-square law to the true Lorentz force law and the true Biot–Savart law, or whether the loss of local information is irreversible. So, as I said, there is a very good reason why we don’t just postulate the inverse-square law and take it as the basis for magnetostatics: it is (probably) incomplete in a way that Coulomb’s law is not. Still, I feel that it contains some useful intuition.
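
For comparison, here is the local law written out explicitly (this is just the standard matrix form of the cross product, not something derived in this post). Writing the usual Lorentz force density f = J \times B as a matrix acting on J,
\displaystyle f = J \times B = \begin{pmatrix} 0 & B_z & -B_y \\ -B_z & 0 & B_x \\ B_y & -B_x & 0 \end{pmatrix} \begin{pmatrix} J_x \\ J_y \\ J_z \end{pmatrix}
The matrix here is antisymmetric (which is what guarantees that the local force is perpendicular to J), whereas the matrix B constructed above out of outer products (x - x') J^t(x') is not antisymmetric in general.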


K&R C


I was recently reading the C99 rationale (because why the fuck not) and I was intrigued by some comments that certain features were retired in C89, so I started wondering what other weird features existed in pre-standard C. Naturally, I decided to buy a copy of the first edition of The C Programming Language, which was published in 1978.

"The C programming Language" by Kernighan and Ritchie

Here are the most interesting things I found in the book:

  • The original hello, world program appears in this book on page 6.

    In C, the program to print “hello, world” is

    main()
    {
         printf("hello, world\n");
    }

    Note that return 0; was not necessary.

  • Exercise 1-8 reads:

    Write a program to replace each tab by the three-column sequence >, backspace, -, which prints as ᗒ, and each backspace by the similar sequence ᗕ. This makes tabs and backspaces visible.

    … Wait, what? This made no sense to me at first glance. I guess the output device is assumed to literally be a teletypewriter, and backspace moves the carriage to the left and then the next character just gets overlaid on top of the one already printed there. Unbelievable!

  • Function definitions looked like this:
    power(x, n)
    int x, n;
    {
         ...
         return(p);
    }

    The modern style, in which the argument types are given within the parentheses, didn’t exist in K&R C. The K&R style is still permitted even as of C11, but has been obsolete for many years. (N.B.: It has never been valid in C++, as far as I know.) Also note that the return value has been enclosed in parentheses. This was not required, so the authors must have preferred this style for some reason (which is not given in the text). Nowadays it’s rare for experienced C programmers to use parentheses when returning a simple expression. (A small compilable comparison of the old and new definition styles appears after this list.)

  • Because of the absence of function prototypes, if, say, you wanted to take the square root of an int, you had to explicitly cast to double, as in: sqrt((double) n) (p. 42). There were no implicit conversions to initialize function parameters; they were just passed as-is. Failing to cast would result in nonsense. (In modern C, of course, it’s undefined behaviour.)
  • void didn’t exist in the book (although it did exist at some point before C89; see here, section 2.1). If a function had no useful value to return, you just left off the return type (so it would default to int). return;, with no value, was allowed in any function. The general rule is that it’s okay for a function not to return anything, as long as the calling function doesn’t try to use the return value. A special case of this was main, which is never shown returning a value; instead, section 7.7 (page 154) introduces the exit function and states that its argument’s value is made available to the calling process. So in K&R C it appears you had to call exit to do what is now usually accomplished by returning a value from main (though of course exit is useful for terminating the program when you’re not inside main).
  • Naturally, since there was no void, void* didn’t exist, either. Stroustrup’s account (section 2.2) appears to leave it unclear whether C or C++ introduced void* first, although he does say it appeared in both languages at approximately the same time. The original implementation of the C memory allocator, given in section 8.7, returns char*. On page 133 there is a comment:

    The question of the type declaration for alloc is a vexing one for any language that takes its type-checking seriously. In C, the best procedure is to declare that alloc returns a pointer to char, then explicitly coerce the pointer into the desired type with a cast.

    (N.B.: void* behaves differently in ANSI C and C++. In C it may be implicitly converted to any other pointer type, so you can directly do int* p = malloc(sizeof(int)). In C++ an explicit cast is required.)

  • It appears that stdio.h was the only header that existed. For example, strcpy and strcmp are said to be part of the standard I/O library (section 5.5). Likewise, on page 154 exit is called in a program that only includes stdio.h.
  • Although printf existed, the variable arguments library (varargs.h, later stdarg.h) didn’t exist yet. K&R says that printf is … non-portable and must be modified for different environments. (Presumably it peeked directly at the stack to retrieve arguments.)
  • The authors seemed to prefer separate declaration and initialization. I quote from page 83:

    In effect, initializations of automatic variables are just shorthand for assignment statements. Which form to prefer is largely a matter of taste. We have generally used explicit assignments, because initializers in declarations are harder to see.

    These days, I’ve always been told it’s good practice to initialize the variable in the declaration, so that there’s no chance you’ll ever forget to initialize it.

  • Automatic arrays could not be initialized (p. 83).
  • The address-of operator could not be applied to arrays. In fact, when you really think about it, it’s a bit odd that ANSI C allows it. This reflects a deeper underlying difference: arrays are not lvalues in K&R C. I believe in K&R C lvalues were still thought of as expressions that can occur on the left side of an assignment, and of course arrays do not fall into this category. And of course the address-of operator can only be applied to lvalues (although not to bit fields or register variables). In ANSI C, arrays are lvalues so it is legal to take their addresses; the result is of type pointer to array. The address-of operator also doesn’t seem to be allowed before a function in K&R C, and the decay to function pointer occurs automatically when necessary. This makes sense because functions aren’t lvalues in either K&R C or ANSI C. (They are, however, lvalues in C++.) ANSI C, though, specifically allows functions to occur as the operand of the address-of operator.
  • The standard memory allocator was called alloc, not malloc.
  • It appears that it was necessary to dereference function pointers before calling them; this is not required in ANSI C.
  • Structure assignment wasn’t yet possible, but the text says [these] restrictions will be removed in forthcoming versions. (Section 6.2) (Likewise, you couldn’t pass structures by value.) Indeed, structure assignment is one of the features Stroustrup says existed in pre-standard C despite not appearing in K&R (see here, section 2.1).
  • In PDP-11 UNIX, you had to explicitly link in the standard library: cc ... -lS (section 7.1)
  • Memory allocated with calloc had to be freed with a function called cfree (p. 157). I guess this is because calloc might have allocated memory from a different pool than alloc, one which is pre-zeroed or something. I don’t know whether such facilities exist on modern systems.
  • Amusingly, creat is followed by [sic] (p. 162)
  • In those days, a directory in UNIX was a file that contains a list of file names and some indication of where they are located (p. 169). There was no opendir or readdir; you just opened the directory as a file and read a sequence of struct direct objects directly. Example is given on page 172. You can’t do this in modern Unix-like systems, in case you were wondering.
  • There was an unused keyword, entry, said to be reserved for future use. No indication is given as to what use that might be. (Appendix A, section 2.3)
  • Octal literals were allowed to contain the digits 8 and 9, which had the octal values 10 and 11, respectively, as you might expect. (Appendix A, section 2.4.1)
  • All string literals were distinct, even those with exactly the same contents (Appendix A, section 2.5). Note that this guarantee does not exist in ANSI C, nor C++. Also, it seems that modifying string literals was well-defined in K&R C; I didn’t see anything in the book to suggest otherwise. (In both ANSI C and C++ modifying string literals is undefined behaviour, and in C++11 it is not possible without casting away constness anyway.)
  • There was no unary + operator (Appendix A, section 7.2). (Note: ANSI C only allows the unary + and - operators to be applied to arithmetic types. In C++, unary + can also be applied to pointers.)
  • It appears that unsigned could only be applied to int; there were no unsigned chars, shorts, or longs. Curiously, you could declare a long float; this was equivalent to double. (Appendix A, section 8.2)
  • There is no mention of const or volatile; those features did not exist in K&R C. In fact, const was originally introduced in C++ (then known as C With Classes); this C++ feature was the inspiration for const in C, which appeared in C89. (More info here, section 2.3.) volatile, on the other hand, originated in C89. Stroustrup says it was introduced in C++ [to] match ANSI C (The Design and Evolution of C++, p. 128)
  • The preprocessing operators # and ## appear to be absent.
  • The text notes (Appendix A, section 17) that earlier versions of C had compound assignment operators with the equal sign at the beginning, e.g., x=-1 to decrement x. (Supposedly you had to insert a space between = and - if you wanted to assign -1 to x instead.) It also notes that the equal sign before the initializer in a declaration was not present, so int x 1; would define x and initialize it with the value 1. Thank goodness that even in 1978 the authors had had the good sense to eliminate these constructs… :P
  • A reading of the grammar on page 217 suggests that trailing commas in initializers were allowed only at the top level. I have no idea why. Maybe it was just a typo.
  • I saved the biggest WTF of all for the end: the compilers of the day apparently allowed you to write something like foo.bar even when foo does not have the type of a structure that contains a member named bar. Likewise the left operand of -> could be any pointer or integer. In both cases, the compiler supposedly looks up the name on the right to figure out which struct type you intended. So foo->bar if foo is an integer would do something like ((Foo*)foo)->bar where Foo is a struct that contains a member named bar, and foo.bar would be like ((Foo*)&foo)->bar. The text doesn’t say how ambiguity is resolved (i.e., if there are multiple structs that have a member with the given name).
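
As promised above, here is a minimal sketch (mine, not from the book) putting the K&R-style definition next to its modern prototype-style equivalent. Both still compile as C11, though the old style is an obsolescent feature.

    /* The same function written in the K&R (old) style and in the
       modern prototype style. */
    #include <stdio.h>

    /* K&R-style definition: parameter types are declared between the
       parameter list and the body. */
    int power_old(x, n)
    int x, n;
    {
        int p = 1;
        while (n-- > 0)
            p *= x;
        return p;
    }

    /* Modern prototype-style definition. */
    int power_new(int x, int n)
    {
        int p = 1;
        while (n-- > 0)
            p *= x;
        return p;
    }

    int main(void)
    {
        printf("%d %d\n", power_old(2, 10), power_new(2, 10));
        return 0;
    }

Depending on the compiler and warning flags, you may get a diagnostic about the old-style definition, but it is still accepted.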