
Thursday 29 September 2016

The Map Is Not The Terrain: The Pitfalls of Language

What's in a name?

Rather more than we might think, actually. Words have baggage, and sometimes that baggage can be difficult to shake off.

Here I want to talk about natural language, and how we use it. Increasingly, in my virtual travels, I encounter the degeneration of discussion into semantic quibbling, most of which entirely misses the point of semantics (and I freely confess to having gone down this rabbit-hole myself).

We've previously had some discussion on semantics, and why and when it's important, but there are some things that weren't necessarily made explicit, and I wish to address some of them here, along with touching on some things we haven't previously covered. Specifically, I want to deal with some of those instances in which language can lead us astray, especially in our reliance on conventions. By halfway through what follows, I'm going to try to convince you that words can't be trusted and, by the end, I'm going to try to convince you that they can, as long as care is taken.

Language can be a tricky bugger, especially when dealing with complex languages like English. One of the issues is how it evolves over time. There's an exceptional book by Professor Steve Jones entitled The Language of the Genes, which details the evolutionary history of language, and how it follows genuine evolutionary principles of divergence, selection, drift, etc. Properly, it's stochastic. That's an important term in the sciences, which we discussed in some detail in Has Evolution Been Proven. It means that the future state of a system is dependent on the current state plus one or more random variables. By random, of course, I mean 'statistically independent', not, as some seem to use the term, 'uncaused'.
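
To make 'stochastic' concrete, here's a minimal sketch in Python (a toy illustration of my own, not anything from Jones's book): the next state is just the current state plus a draw from a perfectly well-defined distribution.

```python
import random

def next_state(state, noise=1.0):
    # Future state = current state + a random variable.
    # 'Random' here means statistically independent, drawn from a
    # well-defined distribution -- not 'uncaused'.
    return state + random.gauss(0, noise)

# Two lineages starting from the same state drift apart,
# much as two isolated dialects of one language diverge.
a = b = 0.0
for generation in range(10):
    a, b = next_state(a), next_state(b)
    print(f"generation {generation}: a={a:+.2f}  b={b:+.2f}")
```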

There's a useful example of how language evolves that should be reasonably familiar to all. In 1976, Richard Dawkins' The Selfish Gene was published. In it, he coined a term, 'meme'. He meant it to signify an idea that becomes culturally embedded. It was erected more as a didactic tool than anything, an analogy for biological evolution, in which ideas were subject to divergence, selection and drift. Since then, an entire field of study has arisen to look at the evolution of memes. It's known as memetics.

I'd be willing to place a Hawking-style wager that, were you to ask most internet denizens these days what a meme is, they'd tell you that it's an image, especially one that delivers some sort of message. I first encountered this idea on a particular social networking site. An entire cognitive culture seems to have grown around them, even to the extent that they're treated as discardable simply on the basis of their being images, which had me somewhat flummoxed.

I like to be as clear as I can be, and character limits can sometimes be an issue for clarity, even when ideas are spread over several submissions, so it's a no-brainer to me to do a screen cap of some text I've written, crop it and upload it, so that I can be as clear and logical as possible in natural language. Here's a typical example:

[Image: screen-capped text]
OK, so it's direct, and slightly scornful, but accurate. So what happens when I do that? It gets dismissed on the basis that it's a meme, and no consideration is even given to the content. This is, of course, another example of the genetic fallacy.

The main concern here, though, is simply that the idea of what a meme is has evolved, as memes do, and as language does, even beginning to find some divergence in meaning.

I recently looked at cognitive inertia, our tendency to hang on to ideas we find comfortable for any reason, in Patterns and the Inertia of Ideas, which detailed how ideas can form deep roots and consequently become very resistant to change. Words can be deeply embedded in our psyches, and we each carry with us our own version of what is meant by a given word. A word is just an idea, after all. The most common words will tend to be well-correlated, because it soon gets picked up if we're not using a word in the way others are using it, probably long before one imprints a specific definition. Dictionaries also serve to standardise many of the less common ones, but there are pitfalls, and they fall under a set of fallacies I like to term argumentum ad lexicum, or appeal to the dictionary, itself a subset of the genetic fallacy.

The best known of these is probably the etymological fallacy. This is an appeal to either the etymological root of a word or to a historical definition of a word to dismiss other usages. For obvious reasons, it's also related to the fallacy of equivocation and the argumentum ad antiquitatem (appeal to antiquity or history). As we can see with the example of memes above, this is trivially silly.

The major example of the argumentum ad lexicum is 'here's how such-and-such dictionary defines that word, and it's not how you're using it, therefore your argument is wrong' (or the corollary positive argument, which still commits the same fallacy). This fallacy is a twofer, because it commits the argumentum ad populum and the argumentum ad verecundiam.

I've come to the conclusion that people tend to think of dictionaries as prescriptive, as monolithic authorities on what words mean. If this were actually the case, English would have had no new words or meanings since Dr Samuel Johnson finished his famous tome, in which he defined 'oats' as 'a grain, which in England is generally given to horses, but in Scotland supports the people'. With a bit of luck and a fair wind, I don't have to point out how absurd that conclusion is.

So what's a dictionary, then, if not a prescriptive source regarding what words mean? It's a descriptive source describing usage. A dictionary does nothing more than document how words are generally used, and trace their history to written sources. Thus, accepting or dismissing an argument based purely on a dictionary definition is appealing to popular usage, which is the ad populum. In treating the dictionary as authoritative, the ad verecundiam is committed.

There's another way that words can be tricky, and it occurs most in areas in which natural language is a poor reflection of what it's describing. In Who Put it There? we looked at information in DNA and, among the issues we covered was the principle that some of the terms we use in dealing with it can give an inaccurate picture of what's really going on if one isn't careful about what the terms actually refer to in that context. There are other areas in which this is even more pronounced, and I want to briefly talk about one of them here.

The map is not the terrain.

In some of the previous posts on quantum theory, we've explored the double-slit experiment and wave-particle duality. In many popular treatments, this is described as a quantum entity being, or having properties of, both waves and particles. This serves as a useful shorthand, but doesn't rigorously capture the essence of what's actually going on. Consider:

What are the characteristics of a wave?
Distributed (i.e. not localised).
Displays interference.

What are the characteristics of a particle?
Localised (i.e. not distributed).
Doesn't display interference.

Suppose we run the double-slit experiment. There's a full description of a beautiful experiment conducted by some R&D guys at Hitachi, in which this was done with electrons sent through the apparatus one at a time. Each electron arrives at the screen as a single, localised dot but, after many tens of thousands of them have gone through, one at a time, the dots build up an interference pattern.

So, is it a particle? Well, it's certainly localised on the screen, but it shows interference. Particles don't show interference.

Is it a wave? Well, it shows interference, but it isn't distributed.

So which is it? Is it a particle, a wave, or both?

The answer is, of course, that it's neither: It's an electron!
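
To see how both answers show up in the data, here's a minimal sketch in Python (with toy numbers of my own choosing, not the parameters of the Hitachi apparatus): each simulated electron arrives at exactly one point, yet the distribution those points are drawn from carries the interference.

```python
import math
import random

# Toy numbers, chosen purely for illustration -- these are NOT
# the parameters of the Hitachi experiment.
WAVELENGTH = 1.0    # de Broglie wavelength (arbitrary units)
SLIT_SEP = 5.0      # separation between the two slits
SCREEN_D = 100.0    # distance from slits to detector screen

def detection_probability(x):
    """Relative probability of detecting an electron at screen
    position x: a two-slit interference term (cos^2) under a
    broad envelope, in the far-field approximation."""
    phase = math.pi * SLIT_SEP * x / (WAVELENGTH * SCREEN_D)
    envelope = math.exp(-(x / 40.0) ** 2)
    return math.cos(phase) ** 2 * envelope

def arrival():
    """Each electron arrives at ONE point (particle-like), drawn
    by rejection sampling from the wave-like distribution."""
    while True:
        x = random.uniform(-60.0, 60.0)
        if random.random() < detection_probability(x):
            return x

# Accumulate many single, localised detections...
hits = [arrival() for _ in range(20000)]

# ...and the interference fringes emerge in the statistics.
for left in range(-60, 60, 5):
    count = sum(left <= x < left + 5 for x in hits)
    print(f"{left:+4d} | {'#' * (count // 50)}")
```

Run it and the signature fringes appear, built dot by localised dot.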

Another good example, also from quantum theory, is what Heisenberg's Uncertainty Principle tells us about pairs of conjugate variables, such as the position and momentum of an electron: what we can 'know' about them, and the relationship between them. This is again slightly misleading, because it suggests that the limitation might be a simple matter of experimental cunning. What we're actually talking about when we say 'know', or when we talk about the information that can be extracted from a quantum system, is all of the information about the system, whether we can know it or not. If we extract the information about the position, the momentum doesn't actually exist, and vice versa. Indeed, neither can be said to actually exist until we make an observation.
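
For the record, the trade-off is written into the formalism itself, not into our instruments. In standard textbook terms:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\tilde{\psi}(p) \;=\; \frac{1}{\sqrt{2\pi\hbar}}
  \int_{-\infty}^{\infty} \psi(x)\, e^{-i p x/\hbar}\, \mathrm{d}x .
```

The position and momentum wavefunctions are Fourier transforms of one another, so a state sharply peaked in one is necessarily spread out in the other; no amount of experimental cunning gets around that.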

Tying these two ideas together, what actually happens when we observe a quantum system? Well, when we interact with the entity in one way, we see a behaviour that we associate with waves. When we interact with it another way, we see a behaviour that we associate with particles.

So what about fields? Well, when we interact with the field in certain ways, we see behaviours that we associate with fields. Are fields a rigorous description? Quite probably not, but it seems to work for now. Modelling phenomena in terms of excitations in fields has been amazingly fruitful.

Here's Brian Greene, string theorist and professor of physics at Columbia University, writing in The Elegant Universe:
Quantum mechanics is a conceptual framework for understanding the microscopic properties of the universe. And just as special relativity and general relativity require dramatic changes in our worldview when things are moving very quickly or when they are very massive, quantum mechanics reveals that the universe has equally if not more startling properties when examined on atomic and subatomic distance scales. In 1965, Richard Feynman, one of the greatest practitioners of quantum mechanics, wrote:
"There was a time when the papers said that only twelve men understood the theory of relativity. I do not believe there ever was such a time. There might have been a time when one man did because he was the only guy that caught on, before he wrote his paper. But after people read the paper a lot of people understood the theory of relativity in one way or other, certainly more than twelve. On the other hand, I think I can safely say that nobody understands quantum mechanics"
Although Feynman expressed this view more than three decades ago, it applies equally well today. What he meant is that although the special and general theories of relativity require a drastic revision of previous ways of seeing the world, when one fully accepts the basic principles underlying them, the new and unfamiliar implications for space and time follow directly from careful logical reasoning. If you ponder the descriptions of Einstein's work in the preceding two chapters with adequate intensity, you will - if even for just a moment - recognize the inevitability of the conclusions we have drawn. Quantum mechanics is different. By 1928 or so, many of the mathematical formulas and rules of quantum mechanics had been put in place and, ever since, it has been used to make the most precise and successful numerical predictions in the history of science. But in a real sense those who use quantum mechanics find themselves following rules and formulas laid down by the "founding fathers" of the theory - calculational procedures that are straightforward to carry out - without any real understanding of why the procedures work or what they really mean. Unlike relativity, few if any people ever grasp quantum mechanics at a "soulful" level.
What are we to make of this? Does it mean that on a microscopic level the universe operates in ways so obscure and unfamiliar that the human mind, evolved over eons to cope with phenomena on familiar everyday scales, is unable to fully grasp "what really goes on"? Or, might it be that through historical accident physicists have constructed an extremely awkward formulation of quantum mechanics that, although quantitatively successful, obfuscates the true nature of reality? No one knows. Maybe some time in the future some clever person will see clear to a new formulation that will fully reveal the "whys" and the "whats" of quantum mechanics. And then again, maybe not. The only thing we know with certainty is that quantum mechanics absolutely and unequivocally shows us that a number of basic concepts essential to our understanding of the familiar everyday world fail to have any meaning when our focus narrows to the microscopic realm. As a result, we must significantly modify both our language and our reasoning when attempting to understand and explain the universe on atomic and subatomic scales.
Similarly, as discussed in Who Put it There?, we do the same thing in evolutionary theory. We know that DNA is not actually a code. Our treatment of it is a code, because that's an extremely good way of looking at it to further understanding. It's extremely fruitful, because it displays many of the features that we associate with codes. Have another meme:

[Image: meme]
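
The code analogy is easy to make concrete. Here's a minimal sketch using a handful of entries from the standard codon table (the codon-to-amino-acid mapping is real, well-documented biology; the little translate helper is just my illustration):

```python
# A few entries of the standard genetic 'code' -- the mapping
# from mRNA codons to amino acids that makes code-talk so
# fruitful, even though the molecule itself is just chemistry.
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the start signal
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA string three bases at a time, halting at a
    stop codon -- exactly the behaviour we'd expect of a code."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("AUGUUUGGUUAA"))  # -> Met-Phe-Gly
```
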
So it looks an awful lot like words are extremely plastic. How the holy shit are we supposed to trust them?

Luckily, there's an escape. It deals with an area of thought central to what we mean when we say a thing, yet one that is often dismissed. Yes, it's semantics. As we've already covered this topic at some length, I won't belabour it here, but I want to deal with one more thing, and it's the most important thing about semantics. It encompasses all that we've been talking about here, and goes directly to what semantic discussion should be about. It doesn't actually matter what this or that dictionary says about a word, or even if you invent a brand new one just for the purpose, or repurpose an existing word and build a new usage. What actually matters about semantics is clear communication. Thus, whether you and I agree on a particular definition is irrelevant. Semantic arguments should only ever revolve around ensuring that both arguer and arguee understand what's meant by a given term as it is being used.

In other words, it doesn't matter whether I understand 'random' to mean 'statistically independent' and you understand it to mean 'uncaused', it only matters that you know what I mean by it, and I know what you mean. The easy way to resolve such discrepancies is to discuss the definitions and come to some consensus that you're both happy doesn't misrepresent either view, so that communication is achieved.

As a corollary to this, it's also important that, if somebody asks you to define a term that you're using, it isn't remotely sufficient to say 'go look it up'. That will only tell them how some other people are using the term. If I ask for a definition, it isn't because I don't have one, it's because I want to understand what you associate with a given term. The dictionary can't tell me what you mean, which is what we need to ascertain in order to communicate effectively.

Always remember that words and the entities and phenomena they describe are distinct and different things, and that the words are merely an aid to understanding; a metaphor; a cipher.

The map is not the terrain.
