
Tuesday 30 April 2019

Can You Hear Me?

I had an interesting interaction today that I thought might prove instructive. The precise details of the interaction aren't really important, but suffice it to say that I came across an instance of something we've looked at before - most notably in That's Racist! - namely arguments that arise from a failure to clarify or even acknowledge a critical distinction, that between terms whose vernacular usage differs from a more technical usage. 

I know that many of my regular readers, being active in those corners of the internet where atheists and believers gather to do battle, have come across instances of this failure of distinction, not least in the ever-ubiquitous 'just a theory' canard erected against evolution*.

In this particular instance, a Facebook friend and talented writer posted that somebody had 'unfriended' them after stating that "White people can face prejudice, but not systematic racism"†.

As a minor aside, the formulation of this statement implicitly carries the distinction between technical and vernacular usage by casting it as prejudice versus systematic racism.

I responded by pointing out that much of the issue lies in a failure to note the distinction, and that this is especially a problem when communicating by text alone which, as we've discussed before, renders our communication only about 10% effective. Somebody requested a citation for this latter assertion, as they felt it to be ableist.

This came as something of a surprise to me but, as I know full well that I can't speak authoritatively about the lived experience of somebody who faces issues that I don't, I asked for clarification, being very careful to cast the request in terms of my own education. I know from long experience that there are things that aren't immediately apparent to me in the things that people say (I have a piece in the works about a phenomenon known as 'dog-whistling' - saying something that looks innocent on the surface but contains a message meant for 'those with ears to hear') and I always want to learn more, so that I can be a more effective ally.


As it turned out, the discussion degenerated from there. It was clear from the response that my meaning hadn't been grasped: my interlocutor thought I was saying that non-verbal communication was universally better, and that those with various issues, such as a lack of facility with non-verbal cues, were being excluded from this assessment. That brings us to the meat of what I want to discuss here: a rendering of this notion from an information-theoretic perspective.

We've talked before about Claude Shannon, most notably in this post about information and DNA.

Shannon was a communications engineer, and his work was all about quantifying information to drive improvements in transmission, reception and noise-reduction technology. In other words, he was concerned with signal fidelity. In this light, Shannon defined information as the resolution of uncertainty. His definition is given in the header image at the top of the post, although I'd argue that 'reduction' is a better term than 'resolution'. Under this definition, a message carries maximal information where the uncertainty it resolves is maximal§.
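To make that definition concrete, Shannon quantified the average information per symbol as entropy. A minimal sketch in Python (the probability distributions here are illustrative, not drawn from the post):

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain, so each toss resolves a full bit.
fair = shannon_entropy([0.5, 0.5])
print(fair)  # 1.0

# A heavily biased coin tells us little we didn't already expect,
# so each toss carries far less information.
biased = shannon_entropy([0.99, 0.01])
print(biased)  # roughly 0.08 bits
```

The fair coin's higher entropy is exactly the sense in which maximal uncertainty resolved means maximal information received.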


The easiest way I can think of to represent this is to lean on what I know best: music technology. This might turn out to be problematic, not least because some will grok this more easily than others, owing to the vast improvements made in technology over the past few decades. Those of us who grew up before the advent of the iPod will recall hearing our favourite songs on tinny little monophonic radios, but younger people may not immediately grasp it, so let's start at the beginning.


Here's a typical graph showing the frequency response of a piece of audio equipment - a hi-fi speaker, in this case.

On the \(x\) axis, we have the frequency in Hz**. On the \(y\) axis, we have amplitude in dB††, a measure of sound pressure level (SPL) at a given distance, usually one metre. As with temperature, sound pressure is a measure of energy density, or the motion of particles. 
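Since the decibel scale does a lot of work here, it may help to see how it's computed: dB SPL is 20 times the base-10 logarithm of the ratio of a measured pressure to a reference pressure of 20 µPa, roughly the threshold of human hearing. A quick sketch, with illustrative pressure values:

```python
import math

REF_PRESSURE_PA = 20e-6  # 20 micropascals, the standard reference for dB SPL

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to 20 uPa."""
    return 20 * math.log10(pressure_pa / REF_PRESSURE_PA)

# 0.02 Pa is a thousand times the reference pressure...
print(spl_db(0.02))  # 60.0 dB, roughly conversational speech

# ...and doubling the pressure adds about 6 dB.
print(spl_db(0.04))  # about 66 dB
```

This is why the graph's \(y\) axis is logarithmic: each fixed step in dB corresponds to a fixed ratio of pressures, which matches how hearing perceives loudness.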


The full range of human hearing, under idealised listening conditions, is from around 12 Hz to around 28 kHz (kilohertz = 1,000 Hz), though it's commonly accepted that under normal conditions it runs from around 20 Hz to 20 kHz. This diminishes as we age, most quickly in the upper ranges, as these correlate to the smallest of the stereocilia, tiny hairs that sit in the cochlea. We lose these hairs through damage and ageing, and the smallest cilia, corresponding to the highest frequencies, are lost first. We understand this because of the quantised nature of waves, which tells us that only those waves that can complete a half or full cycle within the boundaries can contribute. As always, I'll link to some further reading on waves at the bottom.
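The 'half or full cycle within the boundaries' point can be sketched numerically: for a resonator of a fixed length, only those frequencies whose wavelengths fit the boundary conditions can form standing waves, \(f_n = nv/2L\). The lengths and wave speed below are illustrative (sound in air), not a model of the cochlea:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def standing_wave_frequencies(length_m, n_modes):
    """Allowed standing-wave frequencies for a resonator of a given
    length, with matching boundary conditions at both ends: f_n = n*v/(2L)."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, n_modes + 1)]

# A 25 cm resonator supports only a discrete set of frequencies...
print(standing_wave_frequencies(0.25, 3))  # [686.0, 1372.0, 2058.0]

# ...and halving the length doubles every allowed frequency --
# analogous to smaller stereocilia responding to higher pitches.
print(standing_wave_frequencies(0.125, 3))
```

The shorter the resonator, the higher its fundamental; lose the shortest resonators and you lose the top of the frequency range first.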

Generally speaking, adult humans can hear from around 20 Hz to around 15-17 kHz, though this can be impacted fairly severely by routine exposure to high sound pressure levels. This loss in the higher frequencies is why sound can often seem muffled to older humans.

So, there's the groundwork.


What that graph represents then, in this context, is information. Those whose hearing has been impacted by the loss of stereocilia (or other issues) get less information from the sound, because they're not getting the information in those higher frequencies corresponding to the last downward stroke to the right of the graph.

Note that this is an analogy, albeit one that's directly concerned with the same thing, namely the reception of transmitted information. In the language of Shannon, that information has been lost between transmission and reception.


We can now think of all our tools for communication as fitting in that graph.

Some of it is body language; we recognise that people who fold their arms and scowl are being defensive, for example (not in all cases, but generally).

Some is tone and inflection; we recognise that, when the end of a sentence rises, it usually indicates an interrogative.

Some of it is the words we use. Even this can be problematic, not least because of our inherent tendency to be somewhat creative with our language, meaning that we lean on synonyms for colour when very few pairs of words are actually directly synonymous.

In my statement that our communication is only about 10% effective (the research‡ suggests the figure may be as low as 7%), what I'm saying is that the research shows that text alone can carry only about 10% of the total communication, corresponding to reception of only a tiny portion of the information transmitted.
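The figures usually attributed to Mehrabian and Ferris split the communication of feeling into roughly 7% words, 38% tone, and 55% body language (a split whose robustness, as noted in the endnote below, is contested). Taking the split at face value purely for illustration, the fraction surviving a text-only channel is easy to work out:

```python
# Channel shares attributed to Mehrabian & Ferris (1967) -- contested
# figures, used here only to illustrate the arithmetic.
channels = {"words": 0.07, "tone": 0.38, "body_language": 0.55}

# Text alone carries only the 'words' channel; everything else is lost
# between transmission and reception.
received = channels["words"]
lost = sum(share for name, share in channels.items() if name != "words")

print(f"received: {received:.0%}, lost: {lost:.0%}")  # received: 7%, lost: 93%
```

Whatever the true percentages, the structure of the argument is the same: strip out tone and body language and most of the transmitted signal never arrives.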

What often makes this considerably worse is something that we've discussed before in other contexts; passive listening.

Passive listening - as opposed to active listening - is when we pick up only on key words in what's being said and immediately start composing our response in our heads (or, when we're communicating by written text alone, responding immediately based only on key words - I suppose 'passive reading' is a better term in that context). When we do this, we're not reduced only to 10% of what's being communicated; we're reduced to only that fraction corresponding to the key terms we're reacting to. This is a massive problem in all sorts of spheres, and it's one of the things that those working in customer service training, for example, spend a lot of effort trying to address. When a customer service agent has taken ten calls about a single complaint, they hear the key words related to that complaint in their eleventh call and automatically go to composing their response on the assumption that it's exactly the same complaint. And, of course, that's in an environment in which we still have tone and inflection to rely on to convey information.

When we get triggered by what we think somebody has said, we're responding not to what they've said, but to the triggers. That's what happened here.

Hopefully, this post has both clarified that and been informative regarding how we think about our online interactions, which are massively curtailed by this loss of information. All of the information that we transmit is still being transmitted, of course. When we type our text, our body language is still in play, and our tone and inflection are also there, at least in our heads while we're transmitting, but this information is not being received.



Edited to add: It turns out that the numbers from the Mehrabian study aren't the most robust, and the study has been picked apart in terms of its methodology. However, this doesn't materially affect the underlying point, which is that, when we communicate by text alone, our efficacy is reduced because not all the information is being received.

_______________________________________



Further reading:

The Certainty of Uncertainty - Quantum mechanics and the quantisation of waves

Did You See That?!! - Quantum mechanics and the observer effect
Give Us a Wave! - Waves in classical and quantum mechanics, including sound waves
Paradox! A Game For All The Family. - A treatment of some well-known paradoxes, including a more complete explanation of wave/particle duality


______________________________________

*For the uninitiated, the argument that evolution is 'just a theory' fails to note that a theory is, in scientific parlance, an integrated explanatory framework encompassing all the facts, observations, laws and hypotheses pertaining to a given area of interest, whereas in the vernacular it's a guess.

† The pedant in me wants to note that 'systematic' isn't the proper term to use here, as that refers to being methodical or following a set plan. The correct term would be 'systemic', meaning inherent in a system. The author's intent is crystal clear, though, so I'll pretend to keep the pedant silent by relegating him to a footnote.

‡ Mehrabian and Ferris 1967

§ It's worth noting that Shannon's isn't the only definition, nor the only formulation in information theory. Properly, Shannon's work deals only with signal integrity, so would be better thought of as a theory of communication, rather than information. Indeed, Shannon's original paper, submitted to the Bell System Technical Journal in 1948, was entitled 'A Mathematical Theory of Communication'. There's another formulation of information theory concerned with information storage, devised by Andrey Kolmogorov. I discuss the distinction between them in some detail in the post linked above about information in DNA.

** Hertz - named after Heinrich Hertz, the German physicist who first conclusively proved Maxwell's wave theory of electromagnetism. We now know, of course, that electromagnetism isn't waves, and nor is it particles, it's something else that has some of the characteristic behaviours that we associate with waves or particles dependent on how we observe it.

†† decibel - One tenth of a bel, named after Alexander Graham Bell, stemming from its use as a measurement of telephony power by Bell Systems.
