Dante Aligheri

Here's the question as presented:

> Why do scientists so definitively state an age of a fossil. They never explain how they got to that date. Many dating methods have been shown to be iffy at best, subject to personal bias. How can they differentiate between 1M years and 2M?

The short answer turns on something not much talked about in discussions of scientific epistemology: a single principle. I’ll set aside the incorrect assertion that the ages of fossils are given so definitively, and the silly assertion that the methods are iffy, because this single principle sidesteps both.

So, there’s a principle in science known as ‘consilience’. It’s one of the most important principles in experimental science of any sort, and it works like this.

Imagine a tower. You don’t know how tall it is, but you have some information. There’s a man standing next to the tower, so you can make a ballpark guess. You can see that the tower is about 6 times the height of the man. Taking a vaguely average height of 1.7 metres, you estimate that the tower is approximately 10 metres tall and some change.

Now, there’s somebody on top of the tower, and they drop a ball. You can time the fall, and it takes 1.5 seconds. Well, now you have an actual measurement to work with. We can take $d = \frac{1}{2}gt^2$ and calculate that the tower is in fact a hair over 11 metres.
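The fall-time calculation above is short enough to sketch directly (a toy check, ignoring air resistance and reaction time):

```python
# Height of the tower from the fall time of a dropped ball,
# using d = (1/2) * g * t^2 (ignoring air resistance).
g = 9.81        # gravitational acceleration, m/s^2
t = 1.5         # measured fall time, s

height = 0.5 * g * t**2
print(f"{height:.2f} m")  # ≈ 11.04 m
```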

So, we have two methods and, while they may not agree exactly (the estimate was a bit iffy, you might say), they’re not so incredibly far apart. This is weak consilience, but it’s still consilience, and it increases confidence in the result. But we’re not done yet, because we have other means at our disposal - you have a can-do attitude and never go anywhere without some measuring kit and a pencil. The sun is going down, casting decent shadows. You whip out your trusty metre stick and stand it up. You mark the end of its shadow and measure carefully, finding that the shadow cast by the stick is 100 metres long. So now you measure the length of the shadow cast by the tower, and find that it’s a tiny bit under 1100 metres, which makes the tower approximately 11 metres tall.
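The shadow method is just similar triangles; a minimal sketch with the numbers from the example:

```python
# Similar triangles: the tower's height relates to its shadow
# the same way the metre stick's height relates to its shadow.
stick_height = 1.0       # m
stick_shadow = 100.0     # m, measured
tower_shadow = 1100.0    # m, measured

tower_height = tower_shadow * (stick_height / stick_shadow)
print(f"{tower_height:.1f} m")  # 11.0 m
```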

At this point, you have two measurements that agree and one guesstimate that was close, but not close enough. However, because you now have two consilient methods, you can give a much more robust figure for the height of the man: approximately 1.8 metres. The two genuine measures provide calibration for the estimate.

Well, you could pack up your chalk and go home, but you’re a diligent researcher (you have to be, to walk a couple of kilometres when the initial guesstimate is probably close enough for whatever purposes you might have for your burning need to go around willy-nilly knowing how tall towers are). So you made a point of marking the distance between your location and the tower when you were measuring its shadow, so you know you’re standing 72 metres from the tower. You whip out your ever-present theodolite - because you never know when you might meet an emergency surveying opportunity - and you measure the angle of elevation to the top of the tower from your vantage point to be about 8.7°. You get some chalk dust in the air while you work through your trigonometric functions ($h = d\tan\theta$) and find that the height of the tower is 11 metres.
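A quick check of the trigonometry (note that a 72-metre baseline and an 11-metre tower imply an elevation angle of roughly 8.7°):

```python
import math

# Theodolite method: height = distance * tan(elevation angle).
distance = 72.0    # m, baseline marked off from the tower
angle_deg = 8.7    # measured elevation angle to the top, degrees

height = distance * math.tan(math.radians(angle_deg))
print(f"{height:.1f} m")  # ≈ 11.0 m
```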

So, all our means of measuring converge on a single figure, each to a different degree of accuracy, but they do converge on one value. This is strong consilience, and it gives us enormous confidence.

When we do this in any area of science, we’re never looking at one method of measurement and declaring that that’s the way things are; we look at entirely different ways of measuring things and see how closely the results match across methods resting on different assumptions. When they agree like this, it’s a sign not only that we’re converging on robust measures but, more importantly, that the assumptions underlying each of those methods are good assumptions, because bad assumptions stand out in the numbers like a sore thumb. Each new kind of measurement that delivers the same basic numbers tells us those assumptions are good and warranted.

That is consilience. From a purely statistical perspective, the probability of gaining consilient measures under faulty assumptions is vanishingly low, and each new method that delivers the same results drives that probability down lower.
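That statistical point can be illustrated with a toy simulation. Everything here is an assumption for illustration: faulty methods are modelled as returning an essentially arbitrary value, uniform over a made-up 0–100 range, and "agreement" means landing within ±1 of each other:

```python
import random

# Toy model: if each *faulty* method returns an arbitrary value,
# how often do n independent faulty methods agree by pure chance?
random.seed(0)

def chance_agreement(n_methods, trials=100_000, tol=1.0):
    hits = 0
    for _ in range(trials):
        vals = [random.uniform(0, 100) for _ in range(n_methods)]
        if max(vals) - min(vals) <= 2 * tol:  # all within ±tol of each other
            hits += 1
    return hits / trials

# Each extra independent method drives the chance of accidental
# agreement down sharply.
for n in (2, 3, 4):
    print(n, chance_agreement(n))
```

With these numbers, two faulty methods agree by luck roughly 4% of the time, three about 0.1% of the time, and it only gets worse from there.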

So, that’s the first part. I’ll answer your question completely in a sec.

The second part is to talk about the single most important inclusion in a scientific measurement of any sort. And when I say most important, I mean the one thing without which any measurement is utterly meaningless: error margins.

No scientific measurement is ever presented in a scientific setting without error bars, because measurements without error bars or some other indicator of the margin of uncertainty mean nothing. Here’s an actual data set (actually a meta dataset):

The extended coloured lines are the error margins for the individual data points, and the shaded areas are the error margins for the calibration methods.
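When several consilient measurements each come with error bars, a standard way to combine them is the inverse-variance weighted mean; the better-constrained measurements count for more, and the combined uncertainty is tighter than any single input. The tower numbers below are illustrative, not taken from any dataset:

```python
# Inverse-variance weighted mean of independent measurements,
# each given as (value, 1-sigma error). Values are illustrative.
measurements = [(11.04, 0.20),   # ball drop
                (11.00, 0.50),   # shadow method
                (11.02, 0.30)]   # theodolite

weights = [1 / sigma**2 for _, sigma in measurements]
mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
sigma = sum(weights) ** -0.5     # combined error: smaller than any input
print(f"{mean:.2f} ± {sigma:.2f} m")
```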

And finally, the only assumption underlying the science of radiometric dating is the consistency of the laws of physics over time. This is because the decay laws that underpin radiometric dating techniques were empirically derived - that is, we learned them by testing, not via theory - and thus the only thing we have to assume is that they behaved the same way in the past. Which is not really an assumption, for two reasons. The first is Noether’s Theorem. This is a mathematical theorem, about which one can’t really argue, but it’s deeper than that. What Noether’s Theorem tells us is that every continuous symmetry of the laws of physics corresponds to a conserved quantity: in particular, the laws of physics are invariant under time translation exactly when energy is conserved (conservation of energy being the physical content of the first law of thermodynamics). We observe energy being conserved, so we have direct grounds for time-translation invariance.
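The decay law itself is simple enough to sketch. Assuming the standard exponential form $N = N_0 e^{-\lambda t}$ and the familiar ~5,730-year radiocarbon half-life:

```python
import math

# Age from the empirically derived decay law N = N0 * exp(-lambda * t),
# where lambda = ln(2) / half-life. Example: radiocarbon (14C).
HALF_LIFE_C14 = 5730.0                    # years
DECAY_CONST = math.log(2) / HALF_LIFE_C14  # 1/years

def age_from_fraction(remaining_fraction):
    """Age implied by the fraction of the parent isotope remaining."""
    return -math.log(remaining_fraction) / DECAY_CONST

# A quarter of the original 14C left means exactly two half-lives.
print(f"{age_from_fraction(0.25):.0f} years")  # 11460 years
```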

The second is empirical, and it involves the speed of light. See, all the processes in the universe are deeply and irrevocably tied to this speed. No matter how fast you’re moving, all your measurements of energetic processes rely on this speed being the same no matter what. That has consequences, as Senkichi Awaya learned to his detriment one day in 1945. One of those consequences is that it takes time for light to get anywhere, which means that when we look out into the universe, we’re looking back at what the universe was like in the past. And we can see light moving from place to place in the past, and it was always moving at the same speed.

When we put all of this together with just a couple of other things - like the overlap in date ranges with other dating methods that rely on completely different assumptions from the one assumption we needed for radiometric dating - they all align, with complete consilience between them.

We have dendrochronology that overlaps with 14C. We have ice cores showing atmospheric composition that overlap with dendrochronology and radiometric dating. We have the exquisite time and taxonomic ordering of the progression of features in the fossil record, which overlaps with 14C, dendrochronology and ice-core dates. There is, in fact, a beautiful double line of consilience between the exact carbon content of the atmosphere down to isotopic signature, tree-ring growth consistent with the climate conditions shown in ice cores, and exact parent-to-daughter ratios based on the exact historic carbon content of the atmosphere.
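One simple way to picture consilience between independent dating methods is as interval overlap: each method gives a date range (value ± error), and the methods are consilient if some common date lies inside every range. The numbers below are made up purely for illustration:

```python
# Consilience as interval overlap. Each method: (date, 1-sigma error).
# Values are invented for illustration, in years before present.
ranges = {"14C":      (11_050, 150),
          "dendro":   (11_000, 50),
          "ice core": (11_100, 120)}

lo = max(v - e for v, e in ranges.values())  # highest lower bound
hi = min(v + e for v, e in ranges.values())  # lowest upper bound
verdict = "consilient" if lo <= hi else "discordant"
print(verdict, (lo, hi))  # consilient (10980, 11050)
```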

Then we have the molecular data from genetics, which provides the same set of beautiful nested hierarchies that are provided by EVERY line of evidence.

This is consilience, and it’s among the most important factors in scientific epistemology, yet so poorly understood. You’re picking at the edges of an edifice constructed of many lines of entirely consilient and consistent data with ‘but what about the elbow joint of the lesser spotted weasel frog?’

