
Saturday, 3 April 2021

Very Able

Henrietta Swan Leavitt
 One of the most important discoveries in the history of astrophysics and cosmology was made by a computer in 1908. 

Wait, what?! A computer doing useful scientific work in 1908? What gives?

In fact, computers have been doing useful scientific work for centuries, although they weren't always referred to as such. The earliest recorded use of the term 'computer' is from 1613, the same year Shakespeare's famous Globe Theatre burned down, and some two hundred years before Charles Babbage began constructing his 'difference engine', the first recognisable mechanical computer*. Of course, this reference, from The Yong Mans Gleanings by English writer Richard Braithwait, isn't dealing with a mechanical or electronic machine which carries out computations; it's talking about an arithmetician: a human.

That we even think of a computer as a machine is really an oddity of our modern perspective, wherein the ubiquity of electronic computers has meant using the term for anything else leads to ambiguity and misunderstanding. It's yet another nail in the coffin of one of the most common fallacies in discourse, rooted in thinking words have intrinsic meaning. They don't, of course. They have usage. Indeed, there's been such a shift in our semantic thinking about computers as to render even the original usage entirely obsolete. Nowadays, we don't talk about humans as computers and, in fact, the term 'human computer' has come to mean a human with such extraordinary skill at arithmetic and computation as to seem, to us mere mortals and dilettantes, nearer to their electronic counterparts than to other humans; a superhuman.

Many of the greatest minds in the history of science have worked as computers, undertaking the work of calculating and comparing for other great scientists. Johannes Kepler, for example, served as a computer for Tycho Brahe although the term hadn't yet been coined. 

This is not, however, a post about computers, though a human computer plays a starring role (pun intended). This is really a post about expertise.

We've talked a little about expertise before in Where do you Draw the Line? There, we looked at how expertise manifests, particularly in the difference in perception between an expert and a novice. Here, I want to look at expertise more closely and, in particular, what expertise actually is.

Henrietta Swan Leavitt was born in Lancaster, Massachusetts in July 1868. She studied at Oberlin College in Ohio and then what later became Radcliffe College at Harvard. Her studies were very broad and included, among other things, philosophy, analytic geometry and calculus. It wasn't until quite late in her college education that she took a course in astronomy, and here she found her calling.

After some bits and pieces of work and some travel (I won't replicate the Wiki here), Leavitt returned to Harvard, where she worked as curator of astronomical photographs, and was assigned the study of Cepheid variable stars in the Magellanic Clouds. A Cepheid variable pulsates radially, changing in diameter, temperature and luminosity with a well-defined and stable period, which is to say it waxes and wanes reliably and predictably. The name comes from the first of its kind observed, a star in the constellation Cepheus discovered in 1784. She identified 1777 such stars in the Small and Large Magellanic Clouds, and detailed her findings in a 1908 paper[1], in which she noted the brightest stars had the longest period of variability**.

She spent the next several years studying a sample of 25 variables in the Small Magellanic Cloud, paying particular attention to the relationship between period and magnitude, and discovered not just a relationship, but a clear correlation. This was detailed in a 1912 paper[2] in which she plotted the maxima and minima of their magnitudes on a graph, with the magnitudes on the y axis (↑) and the logarithm of their periods on the x (→) axis.

So, why is this so important? 

Put simply, this tells us any two stars with identical periods will also have identical intrinsic brightness†. This means that, for any two stars with identical periods but different apparent magnitudes, the difference must, of necessity, be attributable to some other factor. The most obvious such factor is distance. In other words, if two stars have the same period and different apparent magnitudes, the brighter one is closer than the dimmer one. This fact can be used to determine relative distances, which makes Cepheids a perfect candidate for what is now known in the jargon as a 'standard candle'.
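The relative-distance arithmetic here is just the inverse-square law expressed in magnitudes: a difference of 5 magnitudes is a factor of 100 in flux, and flux falls off with the square of distance. A minimal sketch in Python:

```python
def distance_ratio(m_near: float, m_far: float) -> float:
    """Ratio d_far / d_near for two stars of identical intrinsic
    brightness, given their apparent magnitudes.

    Magnitudes are logarithmic: 5 magnitudes = a factor of 100 in
    flux, and flux goes as 1/d**2, so the ratio of distances is
    10 ** ((m_far - m_near) / 5).
    """
    return 10 ** ((m_far - m_near) / 5)

# Two Cepheids with the same period, one appearing 5 magnitudes
# fainter than the other: the fainter one is 10x further away.
print(distance_ratio(10.0, 15.0))  # → 10.0
```

The magnitudes used are arbitrary illustrations; only their difference matters.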

Until this point, the only reliable way of measuring distance to extraterrestrial objects was parallax. This is a procedure anybody can understand via a simple experiment requiring no equipment other than two eyes and a finger.

Hold your finger up at arm's length in front of your face. Close one eye at a time and note how the apparent placement of your finger changes against the background. That's parallax. You can think of the distance between your eyes as a baseline. Given the length of the baseline and the angles between the ends of the baseline and the observed object, simple trigonometry can tell you how far away the object is, by a process known as triangulation, the same method used by surveyors. The longer your baseline, the further you can reliably measure, as greater distance renders the difference in parallax too small to resolve. From Earth, the longest baseline achievable is the diameter of Earth's orbit: the line between Earth's position at one point in its orbit, the vernal equinox, say, and the opposite point, the autumnal equinox. Even with such a baseline, parallax can be reliably measured only up to a few hundred light years. Since our own galaxy is some 100,000 light years in diameter, this means parallax is unreliable not only for anything outside our galaxy, but for most of our galaxy itself. Indeed, the Small Magellanic Cloud Leavitt was studying has a mean distance from Earth of just under 200,000 light years, far too distant to measure by parallax.
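The trigonometry collapses to a one-liner under the small-angle approximation. By convention, the parallax angle is the shift seen over a 1 AU baseline (half the full 2 AU swing of Earth's orbit), and a star showing a parallax of exactly 1 arcsecond sits at 1 parsec. A minimal sketch (Proxima Centauri's parallax is a standard published value, used here purely for illustration):

```python
LY_PER_PARSEC = 3.2616  # light years per parsec

def parallax_distance_ly(parallax_arcsec: float) -> float:
    """Distance in light years from an annual parallax angle.

    The parallax angle is defined over a 1 AU baseline, so distance
    in parsecs is simply 1 / parallax (in arcseconds); we then
    convert parsecs to light years.
    """
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Proxima Centauri's parallax is about 0.768 arcseconds:
print(round(parallax_distance_ly(0.768), 2))  # → 4.25 light years
```

Note how quickly this breaks down: at 200,000 light years, the angle would be a few millionths of an arcsecond, hopelessly beyond resolution.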

So we have a standard candle, and we can tell relative distances, but there's something missing. We need to be able to measure absolute distances in order for this to be really useful, which means we need a scale. This, of course, requires a measurement of the distance to such a variable by other means. Enter Ejnar Hertzsprung, a Danish chemist and astronomer famous for his work with Henry Norris Russell in classifying stars by luminosity, stage of life-cycle and spectral type, resulting in the Hertzsprung-Russell diagram of stellar classification.

A little under two years after the second of these papers, Hertzsprung measured the distance to two Cepheids by parallax[3]. He made a bit of a mess of his calculations, but it didn't matter. Once corrected, we now had an absolute distance to multiple Cepheids and, from these, the distances to all other Cepheids could be extrapolated. For the very first time, we had a means to measure distances in the cosmos up to an enormous 20 million light years; our first standard candle with attendant scale.
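Once the period-luminosity relation is anchored by even a couple of parallax measurements, the chain of inference is mechanical: period gives intrinsic (absolute) magnitude, and the gap between intrinsic and apparent magnitude - the 'distance modulus' - gives distance. A hedged sketch; the coefficients below are an illustrative modern V-band calibration, not Hertzsprung's original (and initially botched) figures:

```python
import math

def cepheid_distance_ly(period_days: float, apparent_mag: float) -> float:
    """Distance to a Cepheid from its period and apparent magnitude.

    The period-luminosity relation gives the absolute magnitude M
    from the period (illustrative calibration below). The distance
    modulus m - M then gives the distance in parsecs:
        d = 10 ** ((m - M + 5) / 5)
    """
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    distance_pc = 10 ** ((apparent_mag - abs_mag + 5) / 5)
    return distance_pc * 3.2616  # parsecs -> light years

# A 10-day Cepheid seen at apparent magnitude 10:
print(f"{cepheid_distance_ly(10.0, 10.0):,.0f} light years")  # ~21,000
```

Every Cepheid whose period we can time thus becomes a milestone marker, which is exactly why calibrating just two of them by parallax unlocked all the rest.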

Modern scientific cosmology is widely regarded as having begun in 1917, when Einstein applied his newly completed general theory of relativity to the universe as a whole[4]. In my view, and given the preceding, this is a mistake. This ability to measure extra-galactic distances is the real foundation of modern cosmology. Indeed these observations, along with Vesto Slipher's measurements of spectral shift in other galaxies, formed the basis of Hubble's work establishing the expansion of the universe. Leavitt's Law led to Hubble's Law. In other words, Leavitt's work underpins modern cosmology in exactly the same way Darwin's work underpins thermodynamics and quantum mechanics (and by the same means, since what Darwin brought to the table was the random variable).

Since then, other standard candles have been discovered, most notably one we've met before: the type Ia supernova. A type Ia supernova is a white dwarf exploding in a binary system. The interesting feature of a type Ia supernova is how it's generated, because the mechanism has consequences. In very simple terms, the white dwarf sucks material off its partner star.

When a star has expended all its fuel, it undergoes one of several processes, dependent chiefly on the mass of the star. A star of low to intermediate mass - below roughly eight times the mass of the sun - will spend its last fuel swelling into a red giant and then, when all its fuel is spent, shed its outer layers and collapse, leaving behind a white dwarf. Because a white dwarf has no fuel to continue fusion, the only thing stopping it from complete collapse is electron degeneracy pressure, a function of the Pauli exclusion principle. This pressure can support a mass of up to about 1.4 times that of the sun - the Chandrasekhar limit - beyond which gravity overcomes it. The mass of such a star will be comparable to that of the sun, but it will occupy a space approximately the size of Earth. It will sit and glow due to residual thermal energy, and will eventually cool, crystallise and become a cold black dwarf.

In some binary systems, however, something interesting happens. Because the white dwarf can suck material off its companion star, the mass of the white dwarf can creep up toward the Chandrasekhar limit. Once it reaches this limit, runaway fusion ignites and it detonates as a supernova. Obviously, this type of supernova always happens at exactly the same mass, which means that all type Ia supernovae explode with exactly the same intrinsic brightness†. Once again, we have something whose intrinsic brightness is a fixed point, meaning that any variation in the apparent magnitude of a type Ia supernova must be a function of distance. This is a really powerful standard candle for two reasons. The first is that they're fairly easy to spot, being incredibly bright. The second is that such a supernova occurs in just about every Milky Way-sized galaxy on average every 200 years. Given the number of galaxies we can observe, this gives us an excellent platform for measuring distances over billions of light years, increasing our observational scale by orders of magnitude.
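The arithmetic is the same distance-modulus calculation as for Cepheids; only the calibration changes, and no period is needed because every such supernova peaks at essentially the same intrinsic brightness. A sketch, assuming the commonly quoted peak absolute magnitude of about -19.3:

```python
def supernova_distance_mly(apparent_mag: float) -> float:
    """Distance in millions of light years to a type Ia supernova,
    from its peak apparent magnitude.

    Every type Ia detonates at essentially the same mass, so each
    peaks at essentially the same absolute magnitude (roughly -19.3,
    the value assumed here). Any variation in apparent magnitude is
    then down to distance alone: d = 10 ** ((m - M + 5) / 5) parsecs.
    """
    ABSOLUTE_MAG = -19.3  # assumed peak absolute magnitude
    distance_pc = 10 ** ((apparent_mag - ABSOLUTE_MAG + 5) / 5)
    return distance_pc * 3.2616 / 1e6  # parsecs -> millions of ly

# A supernova peaking at apparent magnitude 14:
print(round(supernova_distance_mly(14.0)))  # → roughly 150 Mly
```

Push the apparent magnitude to 24, well within reach of large telescopes, and the same formula reaches out to billions of light years.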

What we've really been looking at here, although an interesting example of the progress of modern science, points us to something important about expertise, and it's this:

The central feature of expertise is understanding variables.

OK, so this seems a bit of a stretch, and maybe somebody will accuse me of committing the fallacy of equivocation, since we're talking about very different conceptions of variable here, but this would be premature. In fact, although the term 'variable' has slightly different meanings here, Cepheids are a variable to which both meanings apply. Our interest here is the second of these.

Understanding variables and, in fact, recognising the existence of variables, is the central and most important facet of expertise. This doesn't only apply to scientific understanding, it applies in all areas. 

Take woodwork, for example. Regulars here (of which I am apparently not one, given how long it's been since I last published) will be aware I'm a musician first and foremost. I've played guitar for 98% of my life, and I've carried out most of my own maintenance on my instruments for all of that time, including stripping, cleaning, electronics, etc. A few years ago, I started to get interested in building. I had all the important knowledge about guitars necessary to do it successfully. I had some expertise. I understood most of the variables separating a good, playable guitar from a lump of wood with strings on it. One would have thought all I needed to do was pick up my tools and crack on.

Wrong. First, there's the manual skill in using tools, which is something requiring practice and learning. More importantly, there's a humongous variable (or set of variables) in the materials. 

Look at a piece of wood. You can see the variables right there in the grain. What may not be apparent is how these variables translate to what happens when tool meets material. You may think the wood from a particular species of tree will have consistent density and cut precisely the same every time. However, the density of the grain in a single piece of wood is variable, and such variability has consequences. You might be carefully paring away with a chisel when you meet a hard grain line, causing the chisel to dive down into the grain and dig a chunk out. This is obviously not good for a consistent finish. Some species respond better to cutting than to abrasives, and vice versa. Poplar is very soft, and is prone to fuzzing when cut. Mahogany can have its grain running in different directions, which can be interesting under a chisel, as any direction of cut can cause the tool to dive down. And these examples are just scratching the surface (I know, I know; I can't resist a good pun).

When you watch a real master doing this work, it looks almost trivial. He knows what it feels like in minute detail, and can quite literally read the wood with his chisel. It isn't just science, it's art. It's expertise. 

I'm sure I've talked before about passing a driving test. Most people, when they obtain their licence, think they're the best driver on the road, and ready to enter Le Mans. In fact, the licence is a licence to learn to drive. Driving lessons are designed to give you a measure of expertise, by helping you to understand the key variables. No driving lessons can furnish you with all the understanding of variables you need, however, not least because every other person on the road represents another critical variable. True expertise lies in understanding as much about the variability of the landscape as possible.

What's your field of expertise? What's your experience in talking to others about it? I'm guessing that, like me, you've encountered people who think they have the skinny on your field of expertise, while your expertise tells you they're missing critical information, and this lack hinders their understanding. Such critical information will almost always lie in understanding the variables.

This is why expertise matters, why experience counts and why, if you're able, learning to understand variables will make you very able.

* It was long thought the Antikythera mechanism was a computer. Indeed, the wiki on the mechanism tells us it's 'described as the oldest example of an analogue computer', while simultaneously saying that it's an orrery (a device used to track and predict the orbits of celestial bodies). The latter is, of course, undeniably true, but the former? Is a slide rule a computer? Some might say it is, but I'd argue that any definition so ridiculously broad is useless. To talk about an orrery - an extremely uncomplicated, if slightly complex, device - and even Babbage's difference engine (which is probably considerably closer to the Antikythera device than to the laptop from which this epistle is issued forth) in the same breath is just silly, particularly when we want to talk about definitions. To talk about it in the same context as humans as the 'oldest example of an analogue computer' is either forgetting that humans really are analogue (or are they..?) or forgetting that they're considerably older than the Antikythera device - or both. One might as well call a sundial a wristwatch. All else aside, the wiki on orreries contains precisely zero instances of the word 'computer'. QED, I rest my case. lmao. Gosh, I've missed this place.
**The variability of a star is how much its apparent brightness - or magnitude, in the jargon - changes over time. By apparent brightness, we mean of course its brightness as viewed from Earth, as opposed to its intrinsic (actual) brightness (luminosity). The period of variability is how long the cycle takes from maximum to minimum and back to maximum again.
† I'm being somewhat loose here in my use of 'identical'. Read as 'identical to a first approximation'.
‡ There is more to come, including some information about where I've been and what I've been doing, essentially following Orwell's path and living the life in the name of research. 


[1] '1777 Variables in the Magellanic Clouds' - Leavitt, Henrietta S. - Annals of Harvard College Observatory, 1908
[2] 'Periods of 25 Variable Stars in the Small Magellanic Cloud' - Leavitt, Henrietta S. & Pickering, Edward C. - Harvard College Observatory Circular, 1912
[3] 'On the Spatial Distribution of Variable [Stars] of the δ Cephei Type' - Hertzsprung, E. - Astronomische Nachrichten, 1913
[4] 'Cosmological Considerations on the General Theory of Relativity' - Einstein, A. - 1917
