No, not the Clarke quotation, though this applies equally there. Look at the one that says 'look really carefully at this sentence'.

Did you spot it? There's some specialist notation in that sentence. Anybody who doesn't recognise the specialist notation will, in all probability, just glaze over mentally until they get to a bit that doesn't contain any specialist notation and they can understand what's going on without tying themselves in mental knots.

So how about you? Did any of the specialist notation in that sentence cause you any problems?

If you've gotten this far and are still reading, the probability that you encountered any issues with it is pretty low, and that's because you're in on the secret.

That's right. There was no hidden equation, nothing particularly different about that sentence from anything that's followed it, because this is the notation we use when we want to communicate with each other, in this case to express notions unambiguously in plain English.

We've talked a lot about language here in this little corner of the virtual universe, because language shapes our thought in ways many and diverse, so it's a prime topic here where we think about how we think about things. What we haven't done a whole lot of is to talk about representations of language, and there's an awful lot of mystery in our minds when we experience technical jargon, entirely commensurate with Clarke's Third Law. Here, I want to look at some specifics that can seem arcane, or mysterious. My hope is to show that the notations that mathematicians, geometers, physicists, philosophers, chemists and musicians employ are anything but a mystery, and can be easily understood by anybody with a little information and practice, in precisely the same way that you've understood everything I've said so far using these arcane squiggles we call the alphabet.

Because some of what follows is ground I've covered before, and I'm really only going to provide a taster here, I'll provide a list of relevant prior posts at the bottom. This post will be the foundation of some future posts, so some of what follows is going to be a little rudimentary.

I know that some people, when they encounter a mathematical equation in a bit of text, do a bit of a brain-flip and skip straight over it, almost like they haven't seen it. How do I know this? I used to be one of them. I still am, really, despite having spent decades reading literature liberally interspersed with them, but I've managed to demystify notation to the degree that I think I can explain it to a six-year-old, as Einstein enjoined us to be able to do.

It can be hard, when we see arcane symbols, to even contemplate trying to work out what's going on. We might think - as I've done in the past - that 'this is why physics is hard', or some such. And yet here we are, most of the way through a preamble to a post about notation, having employed many arcane squiggles on the way to arriving at this point and, I assume, largely without any glazing of eyes. My aim here is to extend notions that we're all entirely familiar with and, in the process, take all the mystery away and bring all kinds of notation into the arena of the mundane, where it belongs.

So, let's start with mathematics, and something really simple.

\(1+2=3\)

There, that wasn't too hard, was it? In fact, it was really easy, despite the fact that we've now introduced arcane squiggles onto the page that we haven't seen earlier in the post! We take two quantities, add them together as indicated by the arcane symbol '+', and out pops the sum on the other side of the equator (=). The equator, like the Earth's equator, splits the equation into two equal parts (in fact, the Earth's equator doesn't do this; because of disparity in mass distribution, there's slightly more on the bottom: it's all a bit pear-shaped). You can think of the equator as being like the fulcrum on a set of scales. The two sides have to balance.

We can even change things a bit so that we can map the equation to the real world, like this.

It's really easy with an example like this to see how equations work, with one side of the equator precisely balanced by the other.

Let's try another.

\(a+b=c\)

OK, so this looks a bit odd at first, because we're combining letters and mathematical operations. This is the beginning of where the wheels come off for many, because surely this is mathematics, and these are alphabetical? Whiskey Tango Foxtrot?!!

In fact, using the squiggles that we think of as letters is no different than using the squiggles we think of as numbers. The really different one of these examples is the apples, because there we're using quantities of things in the real world. In the other examples, we're using a 'cipher' (strictly, the image of an apple is a cipher as well, but we can think of them as real apples for our purpose here).

A cipher, loosely speaking, is where one thing is represented by another, usually for the transmission and reception of information. All the squiggles we use as letters and numbers - in any language - are ciphers, as are the words and equations we construct with them in written form. In the case of letters and words, they're ciphers of sounds, or 'phonemes'. In the case of numbers, they're ciphers of quantities. They're not the *ding an sich*, as Kant would have said*. I often use the phrase 'the map is not the terrain', which expresses the same notion. The equation we began with is the map, and the apples are the terrain.

While the letters \(a\), \(b\) and \(c\) we've chosen here have a direct mapping to \(1\), \(2\) and \(3\), being the first three characters of the alphabet and the first three positive integers respectively, there are three major motivations for using non-numerical characters in an equation even where there's no such one-to-one mapping.

One reason for using a letter rather than a number is that a letter can serve as a placeholder for a number in situations where using a number is problematic. For example, if we want to show the relationships between quantities that are variable, then using an absolute number will render an equation that's too specific. If we use letters as placeholders, we can vary with abandon and the relationships will still hold, which means that they can be generalised to rules or axioms. We can introduce a new term into the discussion here, the capital Greek letter sigma (\(\Sigma\)), which is the standard mathematical symbol for 'sum', and this gives us:

\(a+b=\Sigma\)

This is a rule, or 'axiom'. It tells us that, if you perform the operation 'addition' (+) on two numbers, the output is the sum.
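If you've done any programming, this placeholder idea will feel very familiar: variables in code are exactly the same trick. Here's a quick sketch in Python (not part of the original lesson, just an illustration):

```python
# The rule a + b = sum, written as a function. The letters are
# placeholders, so the relationship holds for any two numbers.
def add(a, b):
    return a + b

print(add(1, 2))    # the apples example: 1 + 2 = 3
print(add(10, 32))  # the same rule, different quantities: 42
```

The function body never needs to know which numbers it's given; that's precisely what makes the rule general.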

OK, time to complicate things a tiny bit to make later complications a bit simpler. In fact, what we're really going to introduce is a simplification but, because it comes in the form of new notation, it will look like a complication.

First, we need two new terms; base and exponent.

\(a^2\)

In this example, \(a\) is the base, and the superscript \(2\) is the exponent. The exponent tells us how many times the base should be multiplied by itself. If, for example, \(a=2\), then \(a^2=4\). If \(a=6\), \(a^2=36\)^{†}. This is known as squaring the base. If the exponent is \(3\), then you multiply the base by itself, and then you multiply the result by the base again. So, if \(a=2\), \(a^3=8\). If \(a=6\), \(a^3=216\).
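For the programmers among us, exponents map straight onto Python's `**` operator, which makes for an easy way to check the arithmetic above:

```python
# Exponents in Python: base ** exponent.
for a in (2, 6):
    print(a, "squared is", a ** 2, "and cubed is", a ** 3)
# 2 squared is 4 and cubed is 8
# 6 squared is 36 and cubed is 216
```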

An exponent of \(2\) is known as squaring, because a square with sides of length \(a\) has an area of \(a \times a\). An exponent of \(3\) is known as cubing for similar reasons, as a cube with sides of length \(a\) has a volume of \(a \times a \times a\)^{‡}. This last example should start to show the utility of exponents in notation, as we've managed to cram five characters into just two. When we get to big numbers, this can really pay dividends. We can represent one million in only three characters instead of nine. Compare \(1,000,000\) to \(10^6\). These are two representations of the same number.

Indeed, that little snippet with powers of 10 contains the key to our whole enterprise here. We have three representations of the same number. The difference in character count between 'one million' and '\(1,000,000\)' isn't huge, only two characters, but this soon mounts up, and the difference with an exponent is a further six characters. 'Ten million' in 'natural language' has the same character count as the first example, the plain numeric representation has one more character, at ten, while the exponential representation has the same number of characters as before (\(10^7\)). Each time you add a zero to the plain numeric representation, you add one to the exponent. Clearly, this is really useful for keeping the chalk budget down in mathematics departments (and instances of carpal tunnel syndrome as a bonus).

So, in powers of ten, the power is the number of zeros after the \(1\). This only holds true for positive exponents, though. With negative exponents, the opposite is true, and the power indicates the number of zeros *before* the \(1\), so that \(10^{-6}=0.000001\).
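Python happens to use the same exponent idea in its own output, which makes for a nice sanity check; a small sketch, illustrative only:

```python
# A power of ten is a 1 followed by that many zeros, so the exponent
# counts the zeros; negative exponents count the other way.
print(10 ** 6)         # 1000000
print(f"{10 ** 6:,}")  # 1,000,000  (the longer, comma-separated form)
print(10 ** -6)        # 1e-06, Python's own exponent shorthand for 0.000001
```

Note that Python prints the tiny number using its 'e' notation, which is just \(10^{-6}\) with the superscript moved onto the line.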

Another good motivation for using letters as placeholders is that they can be used to represent previous results which themselves might have incorporated reams of calculations. In physics, for example, there are quantities known as 'constants'. An example we've encountered a fair bit in these pages is the speed of light \(c\). The value of this constant is 186,000 miles per second, or \(300,000,000\) (\(3 \times 10^8\)) metres per second (\(m\cdot s^{-1}\)) (it's actually slightly less than that, but we'll keep it simple). As with all examples, the letter represents a numerical value.

Before I move on to other types of notation, a quick word about order of operations, because this is a sticking point for many. There's a useful mnemonic for order of operations (actually, several), known as BODMAS. This stands for brackets (parentheses), orders (exponents), division, multiplication, addition and subtraction. The reason for this is fairly simple, namely that carrying out operations in a different order yields different results. Consider solving for \(x\):

\(5+4\times3=x\)

If we start at the left and proceed to the right, we get \(5+4=9\), then \(9\times3=27\). According to BODMAS, we should do the multiplication prior to any addition, so we get \(4\times3=12\), then \(12+5=17\), a pretty big discrepancy. By adhering to this convention, then, we can be clear about what to do and when in order to get a consistent result.
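Programming languages bake this convention in. Python, for one, follows the same precedence rules, as this little sketch shows:

```python
# Multiplication binds tighter than addition, so no brackets are
# needed for the BODMAS reading; brackets force the other reading.
print(5 + 4 * 3)    # 17, not 27
print((5 + 4) * 3)  # 27, the naive left-to-right result
```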

Let's move on, then, as we don't want to write *Principia Mathematica* in a blog post.

I had wanted here to interject with an exposition of the notation used in formal logic, propositional calculus. In the event, I'm going to hold it over for a supplementary post, as there's some ground I really want to cover here, and I'm conscious of how long this already is.

I do, however, want to tie together some of what's gone before with what's to come, so it's time to get a bit graphic.

What a graph adds to our toolkit is the notion of *direction*.

We have two directions or 'axes' for our graph, the horizontal or \(x\) direction and the vertical or \(y\) direction. Let's dive straight in and pick a point on the graph.

For simplicity, let's start with a point on the \(x\) axis 5 units from the origin. We can label this point \(x=5\).

The origin is the bottom left corner, in this case, but in fact this is an arbitrary decision on my part for simplicity. We should think of the origin only as the point on the graph where the value for all axes is zero, regardless of where on the 'plane' it's defined as being. In reality, you can think of the graph as extending to infinity on both axes, and any choice of origin arbitrary, but that's a complication we don't need for the moment. The units themselves aren't important for the moment. They could even be apples.

So, we've picked a point, but there's a problem. We have two directions to work with, but we've only provided information on one of them. In fact, we've defined not a point, but a line. Here it is, the line \(x=5\).

It's easy to see that there's some uncertainty here. Our point could be anywhere on that line. Clearly, we need more information, specifically about what's going on on the \(y\) axis. Without it, we can't even tell how far away the point is from the origin, because every point on that line is a different distance away from any other. What's the point of specifying a position if you can't even tell how far away it is?

Here's the graph again. We can see the point we want to define, and we know that one number - 5 - represents the position on the \(x\) axis, so do we have any information about the \(y\) axis? Why, yes! The value on the \(y\) axis is 0. So we just need some way to arrange this information. We can say, for example, (\(x=5,y=0\)). This even suggests some notation we can use, given the lessons we've learned above. We can simply say (\(5,0\)).

It's worth noting at this point that, by convention, the \(x\) axis is always represented first in such 'Cartesian' coordinate systems^{§}.

OK, so let's pick another point.

This time, we know exactly what to do. We can confidently label this point (\(0,4\)), because we're in on the secret. We know how this notation works. Clarke's third law stands unviolated, and no magic has been performed, because this technology is not, to anybody who's read thus far and understood, sufficiently advanced as to be indistinguishable therefrom.

Now we can do something interesting. We can define multiple points. Let's do that.

Nothing new has happened here. We have exactly the same two points we already had, but we've placed them in the same graph. Using what we've already learned, we know how to label them. We can call this 'plot' (\(5,0\)),(\(0,4\)). We can even add them together to come up with a third point. A quick flurry of chalk and we get this:

\((5,0)+(0,4)=(5,4)\)
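If you like, you can check that arithmetic with a few lines of Python; the points go in as tuples, and we add them part by part (`add_points` is just a hypothetical helper name, not standard notation):

```python
# Adding two points componentwise, as in (5, 0) + (0, 4) = (5, 4).
# Note: + on Python tuples concatenates them, so we add the parts.
def add_points(p, q):
    return (p[0] + q[0], p[1] + q[1])

print(add_points((5, 0), (0, 4)))  # (5, 4)
```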

Let's add that point to our graph, then. Now we have a graph with three points on it, (5,0), (0,4) and (5,4). Can this take us anywhere interesting?

Well, one thing we can do is to connect them to form a triangle, like this. I'm going to skip a few steps here and add some notation to the graph. First, we're going to label the lines \(a, b\) and \(c\), and that little square in the corner where sides \(a\) and \(b\) meet.

That square tells us that the angle (theta, \(\theta\)) where those two lines meet is 90°, also known as a right-angle.

There's a really simple rule, generally attributed to Pythagoras, that tells us how to find the length of one side of a right-angled triangle given knowledge of the length of the other two sides. It tells us that the square of the hypotenuse of a right-angled triangle is equal to the sum of the squares on the other two sides. The hypotenuse is the side opposite the right-angle, and is always the longest side. In our case, the hypotenuse is labelled \(c\).

So what does this rule actually mean? Let's move our triangle away from the origin so that we have a little more room to work with. This will have no effect on our example, because the sides of the triangle will still be the same lengths. We know from our earlier example that the area of a square is the length of the side multiplied by itself - squaring - and we already have notation for that, so we can say that \(a^2+b^2=c^2\). We can represent that on the graph by drawing a square on each of the sides, like this.

Pythagoras' rule, which was almost certainly never uttered by Pythagoras (and was known long before him, in fact), simply tells us that the area of square \(a\) plus the area of square \(b\) is exactly the same as the area of square \(c\).

Side \(a\) is four units long, and \(4\times4=16\). Side \(b\) is five units long, and \(5\times5=25\). Add them together, \(16+25=41\), and we now know that the area of square \(c\) is 41 square units. All we need to do now is to find the square root of 41 (\(\sqrt{41}\)). The square root of a number is the number that, when you square it, gives the number you started with. I won't detail finding a square root here, as it will take us too far afield. We'll just give it, having worked it out on a calculator. The length of side \(c\) is 6.4 units (this is an approximation that gets more accurate as you add decimal places).
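Python's `math` module will happily stand in for the calculator here; a quick sketch:

```python
import math

# Pythagoras' rule: a**2 + b**2 == c**2, so c is the square root
# of the sum of the squares of the other two sides.
a, b = 4, 5
c = math.sqrt(a ** 2 + b ** 2)
print(round(c, 1))       # 6.4, with more decimal places available on demand
print(math.hypot(a, b))  # the same calculation, built in
```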

Let's leave that there, then, and see what else we can do with graphs before we move on to a completely different kind of notation.

One thing we can do with a graph is to show change over time.

Suppose I wanted to keep track of how much coffee I'm drinking each day? Of course, if this were what I was really doing, I'd be in need of a much bigger graph or, better yet, a logarithmic graph, but we'll keep it simple.

Here we can see a simple plot of cups of coffee by day. Three cups on day one, six cups on day two, five cups on day three, etc. It's really easy to see how this works, so I won't labour the point any more. The takeaway point here is that we can represent time as well as space.

In fact, we can represent just about anything using these methods. I'll be covering some of this in supplementary posts, especially curves and vectors, which are of particular interest in physics. For now, though, we'll leave that there and move on to a completely different kind of notation.

Here's another kind of graph. You've probably seen this before, if you've spent any time with music. You almost certainly didn't think of it as a graph, but that's what it is.

Our \(x\) axis, in this case, is time, while the \(y\) axis is pitch.

We can look at what the graph is telling us via a couple of mnemonics. The first thing to note is that, without additional notation, what the staff (that's what this graph is called; also known as a 'stave') shows us is only natural notes, or the white keys on a piano keyboard. On the treble staff, denoted by the 'clef' (this is a G clef, and tells us where in the range of notes we are), the spaces spell the word 'face'. The lines can be easily remembered by the phrase 'every good boy deserves favour'. As you can see here, the bass or 'F' staff is shifted down one space. They meet in the middle at middle C, which is the C nearest the centre of a piano keyboard.

To get to the non-natural notes, we need some additional notation. We call these 'accidentals', and one of the bits of notation we use will not be unfamiliar to social media denizens. It's what you know as the 'hashtag', known to musicians for centuries as the 'sharp' (\(\sharp\)). The other is a little different, looking a bit like a lopsided 'b' (\(\flat\)), and is known as a 'flat'. Where you see a \(\sharp\), the note should be shifted upwards one semitone, usually to the nearest black key. Where you see a \(\flat\), the note should be shifted down a semitone, usually to the nearest black key.
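If you think of notes as numbers, the accidentals become simple arithmetic. Here's a sketch using MIDI-style note numbers (an assumption for illustration; in the MIDI convention, middle C is 60 and each step of one is a semitone):

```python
# Accidentals as semitone shifts, in MIDI-style note numbers
# (middle C = 60, one unit = one semitone).
MIDDLE_C = 60

def sharp(note):
    return note + 1  # shift up one semitone

def flat(note):
    return note - 1  # shift down one semitone

print(sharp(MIDDLE_C))  # 61, i.e. C sharp
print(flat(MIDDLE_C))   # 59, i.e. B (C flat lands on the same key)
```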

Let's stick them in the graph so we can see what they look like.

Some bits of convention, to begin with.

First, you'll never see accidentals presented like this together except in an explanation of what they are. Flats and sharps never appear together in a 'key signature'. You may encounter individual flats and sharps almost anywhere but, by convention, where the key signature is composed of flats, all the occasional accidentals you encounter in a piece will be written as flats. The same is true of sharps. You can think of the accidentals as being relativistic, in a sense. They're generally written relative to the key signature and the note^{††}.

Second, the order of the flats is as above, left to right, and the key is given by the number of flats. On the far left, the first note to have a \(\flat\) on it is B. Any given key is defined by the previous accidental. In the case of the key signature 'F', there's a peculiarity that you need to be aware of to understand the notation. Let's look at it:

So, I just told you that the key is denoted by the previous accidental, and here we are with only one accidental, so there's no previous flat to read off. In this case, the 'previous' accidental is a sharp, and the first sharp to appear in the sharps order is F.

In the case of the key signature being given only in sharps, the name of the key is given by the note above the last sharp appearing. With only one sharp in the key signature, we know that the note that's always sharpened is F, and that the key is, therefore, G. Where there are no accidentals in the key signature, the key is C, and only the white keys are played.

What you should easily be able to tell from the examples given is that there's a pattern to the way that key signatures are laid out on the staff. No matter how many sharps or flats there are, a quick glance will tell you, because the pattern is easy to see. Once you become familiar with how the notation works, these patterns become easy to spot.

In addition to the key signature, there will usually also be a time signature at the beginning. Unlike the key signature, which appears on every line of the staff, the time signature appears only at the beginning unless there's a change to time signature during the piece. It's given like this.

Here we have, in fact, two time signatures, which never happens in the real world, but it highlights another peculiarity of convention in music notation. The second of these, \(C\), is a conventional way to represent 'common time', which would normally be denoted by the first, \(\dfrac{4}{4}\), which is the standard notation for a time signature. The top of the two numbers is the number of beats per bar, while the bottom number tells you what length of beat the upper number refers to. In this instance, it's telling us that there are four beats of length 4 in every bar. But what does 'length 4' mean?

Here we can see the standard set of notes, with how they would be represented in a time signature. All these are based on common time, \(\dfrac{4}{4}\). In common time, there are four crotchets per bar, two minims, eight quavers, etc.

The last of those notes, the breve, is an archaic character pretty much no longer in use, and only appears here as an explanation for why the longest note has a name that indicates that it's half of something. It's a throwback to before most of our modern conventions existed.

So, now we know that a time signature of \(\dfrac{4}{4}\) tells us that there are four crotchets per bar. A time signature of \(\dfrac{12}{8}\) tells us that there are twelve quavers per bar, and so on.
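We can capture this relationship in a few lines of code; the `describe` helper here is hypothetical, just to show the pattern of 'bottom number picks the note length, top number counts them':

```python
from fractions import Fraction

# Note lengths as the bottom number of a time signature:
# 1 = semibreve (whole note), 2 = minim, 4 = crotchet, 8 = quaver.
NOTE_NAMES = {1: "semibreve", 2: "minim", 4: "crotchet", 8: "quaver"}

def describe(top, bottom):
    total = Fraction(top, bottom)  # length of the bar in semibreves
    return f"{top} {NOTE_NAMES[bottom]}s per bar, {total} semibreve(s) in total"

print(describe(4, 4))   # common time: 4 crotchets per bar
print(describe(12, 8))  # 12 quavers per bar
```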

This is also a clue about how the notes work. Play such and such a note for such and such a length of time, etc.

As with much of what we've discussed here, there are patterns inherent in musical notation, in the key signatures, in the way that bars are laid out, but also in something that we haven't yet looked at. This is our final stop before we attempt to tie all of the above together.

On the left is the chord of C. One might think, in the beginning, that somebody reading this will look at each note individually and work out what the chord is but, of course, that would make sight-reading music very difficult, especially if the tempo is high (if the music is at such a pace that you get through the bars quickly). In fact, once you become familiar with the way that notes cluster in chords, all you see is the shape of the chord - the pattern. It's all about the intervals between the notes. We can see that the chord follows a simple pattern, and we can even number the notes. The root note will be one (or eight), and the pattern is the same for any basic major chord: 1, 3, 5 and 8. For a minor chord, the third is flattened, so the C minor chord is 1, flattened 3, 5 and 8.
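Counted in semitones, the pattern is easy to put into code; a sketch (the interval numbers are the standard semitone counts for major and minor triads, and the MIDI-style numbering with middle C at 60 is again an assumed convention):

```python
# The chord pattern as semitone intervals from the root:
# major = root, major third, fifth, octave; minor flattens the third.
MAJOR = (0, 4, 7, 12)
MINOR = (0, 3, 7, 12)

def chord(root, pattern):
    return [root + interval for interval in pattern]

print(chord(60, MAJOR))  # C major: [60, 64, 67, 72]
print(chord(60, MINOR))  # C minor: [60, 63, 67, 72]
```

The same two patterns transpose to any root, which is precisely why the eye learns to see the shape rather than the individual notes.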

So, as we can see, musical notation is just another kind of graph. To make this explicit, here's a piece of music that most people should be familiar with. It was famously set by Mozart as a series of variations on the theme, but here it's presented in one of its most basic forms, so that we can read it easily.

On top, there's a conventional graphic representation. This snippet here only covers two octaves of the seven-and-a-bit on the full piano keyboard and, as you can see, a conventional graph representing this would be huge. This shows the beauty of musical notation, because it's extremely compact. We can cover four octaves easily in this compact form and, with the help of a little additional notation, we can cover the keyboard's full range.

And, just so you can hear the result, here it is, played on piano.

It's worth noting that the upper graphic representation will seem very familiar to some of my readers. Anybody who's spent any time using a digital recording environment that processes MIDI (Musical Instrument Digital Interface) will have seen such a representation, because this is very similar to the MIDI editing environment.

So, hopefully, what we should be taking away from all of this is that arcane symbols are only arcane until you're in on the secret, and the secrets are not really very secret, and anybody can learn to use any of these notational forms. They aren't trivial to learn, but neither are they so difficult that we should be afraid of them.

We use, in our standard English script, 26 letters, 10 numbers, various mathematical operators, and any number of symbols that, until you've learned to use them and, more importantly, practised with them, might seem mysterious, even magical. The forms of notation we've explored here are no different.

When a physicist sees \(\hbar\) in an equation, he doesn't see the symbol, he sees the quantity, a constant, the reduced Planck constant. This is a number, specifically \(1.055 \times 10^{-34} \thinspace J \cdot s\) (given in joule-seconds - units of energy multiplied by time). Nothing mysterious there once you know what's going on.
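Programming languages have their own shorthand for exactly this kind of number, with the exponent written after an 'e' instead of as a superscript; a quick sketch:

```python
# Scientific notation in code: the reduced Planck constant,
# approximated to three decimal places, in joule-seconds.
hbar = 1.055e-34
print(f"{hbar:.3e}")  # 1.055e-34
```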

But don't forget, symbolism's a load of bollocks.

\* *Ding an sich* translates as 'thing in itself'. Kant was drawing a distinction between phenomena - things as we apprehend them - and noumena - things as they are in themselves, independent of our experience of them. It's a distinction concerning ontology - the real nature of things. There's a huge amount of controversy surrounding these concepts among philosophers concerned with metaphysics but, since metaphysics is to thought what air guitar is to music, we can safely ignore it, simply noting that, in the context of this discussion, the distinction between the thing and the representation of the thing has utility.

^{†}In a recent discussion about pet peeves in language usage, the term 'exponential' came up. It appears to be being used increasingly to simply mean 'large'. This is not the proper meaning, though. When we talk about exponential growth, we're talking about growth by some multiplicative factor. My own pet peeve of the discussion was the word 'superlative', which seems to be being employed as a superlative. Monkeys in shoes, eh (hat tip to Geoff Rogers)?

^{‡}This result can be generalised to any number of dimensions \(n\). The volume of any 'cube' of \(n\) dimensions is the side length \(a\) multiplied by itself \(n\) times, expressed \(a^n\). A cube in four dimensions is known as a 'tesseract'. This is difficult to visualise, but we can, in a manner of speaking, visualise its shadow in 3 dimensions. I've found it helpful to think of squares and lines as special cases of cubes, so a square is a cube in two dimensions, and a line is a cube in one dimension. For more on tesseracts and dimensions, there's a wonderful exposition by Carl Sagan at a Youtube near you.

^{§}Named after René Descartes, mathematician and philosopher. Descartes cast his coordinate system in one dimension only, but his work was later expanded to be applicable to two, three, ... \(n\) dimensions (see previous footnote).

^{††}There are times when mixtures of accidentals are written, including naturals (\(\natural\)), no matter what the key signature, but that's really advanced stuff, in the realm of quantum vector analysis by comparison (this is a topic we'll cover in the supplementary material).

‡‡ Footnote to author's note: First, I did a lot of additional work on this subject that never quite made it to publication, and that material is waiting in the wings for future posts. Second, this is, I feel, one of my best offerings (the first time I made all the diagrams and illustrations myself) and yet, since it was first published, it hasn't seen a lot of attention. I think I made a poor decision when I selected the final title only minutes before I published. I had a title, and I liked it, but the particular title I chose was too good to be wasted on this topic, to which it is only obliquely related. I'm the sort of writer who hangs everything on the title, so I went with what I thought was a clever idea at the time, but now realise was just a bit silly, so I've changed it for a second time. I have a lot more material on this topic almost ready to go, and I'll finish it if it looks like this one gets any better traction than before. Apologies to the few who've read this and opened it expecting something new. I hate to disappoint, but I have a fair bit in the pipeline again, and I'm seeking my mojo to get moving again.

The Map is not the Terrain Some of the pitfalls of natural language.

What's in a Name? Differences in usage between vernacular and technical arenas.

That's Racist! More on terms of art versus vernacular usage.

