
Wednesday 13 April 2016

Who, What, When, Where, How, Why..? The Art of Philosophy.

In previous posts, I've made some pretty bold assertions, especially concerning the nature of philosophy and the way it's taught. I've asserted - correctly, I think - that philosophy is the art of asking the right kind of question. Here, I intend to support that view, and to elucidate the pros and cons of the way philosophy is currently taught. In order to do that, I want to spend a little time focussing on questions. Why are we here? Why is there something rather than nothing? How many angels can dance on the head of a pin? Why did this pencil fall to the floor when I let go of it? Are these even the right kinds of question to ask?

This post will deal with some of the foundations of epistemology and, more germane to our purposes, with what are suspected to be the limits of knowledge.

First, some definitions, because it's important that we know what we're talking about. Note that, as always, the definitions I give here are my own (which isn't to suggest that they aren't consistent with what you'll find in a dictionary, only that, in attempting to be rigorous, I employ specific definitions for clarity).

Let's start with that term at the foot of it all, knowledge:

What does it actually mean to know something?

I remember an incident from when I was a youngster, involving a family friend. My family were en route to visit other family friends, and we expected to be met by our friend along the way, but he didn't show up. We waited as long as was reasonable, and then we started to make some phone calls (this was in the '70s, before mobile phones and the internet). After some calls, we discovered that he'd died during the night from an epileptic fit. We were devastated. He was one of the most beautiful people I've ever known, and he was gone.

We arrived at our destination, a church in Salford run by an Anglican vicar and his family, and they seemed unsurprised that we'd arrived without Phil. The vicar's wife, who always worried me somewhat as a child (but who was fairly instrumental in my atheism), informed us that she'd had a dream, and that she was already aware that Phil had died.

Did she know this? I remember thinking so at the time. However, in reality, she could only have intuited this, at best. Properly, this could only constitute knowledge once it was confirmed.

A fairly common definition of knowledge is 'justified true belief'. I've never been a fan of this, mostly because of the inclusion of the word 'belief'. It's a massively problematic word, covering such a broad range of concepts - trust, confidence, faith, etc - that its utility is hugely undermined, not least because, for the vast majority of the concepts covered by the term, there's a better, more robust term describing exactly that concept, without ambiguity and with no danger of equivocation. For this reason, I eschew the term entirely, only employing it in settings such as this, where I'm explaining why I have little use for it. In my lexicon, if it's justified and true, it isn't a belief, it's knowledge. Knowledge is that which is demonstrably true.

The popular view of science has it that our theories are what constitute knowledge, but this view is problematic for several reasons. The first is that science is primarily an inductive discipline, which means that, as I said in an earlier post, it generates conclusions that have a degree of probability of being correct. Our confidence in the correctness of the theory increases with every observation that fails to falsify it, but we always have to be aware that it generally takes only a single observation to falsify a theory. The obvious go-to example to elucidate this is Newtonian mechanics, which was having its postulates validated up the wazoo for more than two centuries, despite the fact that, as we now know, it was wrong. There were clues fairly early on that something was amiss with it, not least because it was known that there was a discrepancy in Mercury's orbit, confirmed in 1859 by Urbain Le Verrier, that Newton's equations couldn't account for. It was a tiny discrepancy, amounting to only 43 arc-seconds per century (Newton predicted a precession of 5,557 seconds of arc per century, while observation indicated that it was actually 5,600 arc-seconds), but it was there, and Newton failed to predict it. Many hypotheses were erected to explain it, such as the existence of an undetected planet, known as Vulcan (live long and prosper), or the presence of a massive amount of dust between the Sun and Mercury. Neither was observed, but people still clung to Newton because it worked. The first real test of general relativity was that it predicted this discrepancy exactly, with no fudging. There were other problems but, rather than deal with them here, I refer you to my earlier post dealing with the history of relativity.
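
To make the bookkeeping explicit, the anomaly is simply the difference between the observed precession and the Newtonian prediction:

\begin{equation} \underbrace{5600''}_{\text{observed}} - \underbrace{5557''}_{\text{Newtonian}} = 43'' \text{ of arc per century} \end{equation}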

So, while each supporting observation increases our confidence that our theory is correct (or, at least, on the right track), that confidence can never be complete, for the simple reason that complete confidence could only come about when every possible observation has been made and no falsifying observations are possible. This was famously formalised by David Hume as the 'problem of induction', and it's fundamental to scientific epistemology. 

This raises the question, then: if our validated theories don't constitute knowledge, where is the knowledge to be found in science? In the first of the posts I linked to above, I talk a bit about a principle that's very closely related to the problem of induction, formalised by Karl Popper, namely falsification. Popper thought he'd solved the problem of induction. What he actually did was to provide a rational basis for accepting hypotheses arrived at inductively, which Hume asserted wasn't possible. He didn't fully solve the problem, though, and it still pervades science today. It stands as a caveat against over-confidence, and means that all scientific statements in a certain class must be viewed as having an unspoken 'if our model is correct' appended to them. What Popper did achieve in his work was to bring proof into science and, with it, real knowledge. Going back to our earlier example, we can't say with supreme confidence that general relativity is correct (in fact, we know that it's incomplete, at the very least), but we can say with absolute confidence that Newtonian mechanics is not correct. It's been falsified. This is real knowledge, and it's essentially how science progresses. We construct models that describe phenomena with ever greater accuracy and precision, but we're always aware that our models could, in fact, be wrong and, by showing ideas to be wrong, knowledge increases.
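
The asymmetry between confirmation and refutation can be stated in elementary propositional logic (a minimal sketch: T stands for a theory, O for an observation it predicts):

\begin{align}
(T \rightarrow O) \wedge \neg O &\;\Rightarrow\; \neg T && \text{(modus tollens: deductively valid)} \\
(T \rightarrow O) \wedge O &\;\not\Rightarrow\; T && \text{(affirming the consequent: invalid)}
\end{align}

A failed prediction refutes the theory outright, while a successful prediction merely leaves it standing, which is why falsification delivers certainty where confirmation can only ever deliver increased confidence.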


There's a famous Latin phrase, formulated by Descartes, often mistaken for an absolute ontological statement. You're all familiar with it, I'm sure. It runs cogito ergo sum. Many have taken this to be an absolute statement regarding the nature of reality, but that's certainly not how Descartes intended it. He'd spent the best part of twenty years thinking about what it might mean to know something, and how one would go about formulating a primary axiom upon which knowledge could be constructed. After much effort and thought, he distilled it down to his famous statement: I think, therefore I am.

On the face of it, this looks a lot like an ontological statement, not least because it contains the words 'I am'. However, this would be an inadequate assessment. What Descartes was doing in that statement was to lay the foundations for knowledge: If this statement is not true, there can be no knowledge. It's an assumption, but a necessary one.

The astute among you may have picked up on something in the above, namely my insistence that Descartes' statement is not an ontological one. Why would this be a problem?

Popper, along with falsification, introduced another, closely related principle, as a solution to the problem of demarcation, which deals with what is required in order for a statement to be classified as a scientific one. This principle is known as falsifiability. He asserted that, because our theories are necessarily abstract, they could only be tested by implication; they couldn't be tested directly. As stated above, no number of supporting observations can logically be taken as confirming the correctness of a hypothesis, but a falsifying observation is absolute. With this in mind, Popper took falsifiability as his primary criterion for determining whether or not a statement is a scientific one. In other words, if there is no method by which a statement can be shown to be false, it isn't science, and has no place in science. Christopher Hitchens famously said 'anything that explains everything explains nothing', which encapsulates this principle neatly.

Ontology deals with the ultimate, fundamental nature of things, including what exists. Since we are limited to what we can observe in terms of garnering knowledge, all ontological statements are unfalsifiable. There is no observation we can make, even in principle, that can determine that what we observe is, in any ontological sense, real. Immanuel Kant drew a distinction between phenomena and noumena (a term I shall use for convenience) to highlight the possible difference between what we observe and what's actually real. There are many thought constructs in which what we observe is not real, such as the Matrix, brain-in-a-vat, thought in the mind of god, etc. These constructs are inherently unfalsifiable, and have no place in science. Science, to be effective, must remain ontology-free. Science deals only with phenomena, and leaves noumena to those who have nothing better to do than ascertaining the colour of the lint in their umbilici.

That's not to say that you'll never find any ontological statements from scientists, because you certainly will. Scientists are given to speculating about the true nature of things as much as the next man and, in the case of theoretical physicists, for example, considerably more than the next man. The various interpretations of quantum mechanics, for example, are ontological. They amount to no more than speculation concerning why we observe what we observe. While individual scientists may prefer one of these interpretations over the others, all are empirically equivalent, and there's no experiment we can currently devise that will allow us to falsify any of them (and it's entirely probable that this will always be the case). All of these interpretations can be criticised with the injunction to 'shut up and calculate'.

I hope this goes some way toward explaining my objections to Tegmark's 'mathematical universe' idea, as discussed in the comments section of my last post.

For my part, I can think of no good reason to suppose that phenomena and noumena are different things, but I can't absolutely rule it out. I proceed on the basis that what I observe is real, but remain open to the possibility that I can be fooled, not least because I'm painfully aware that the universe is under no obligation to pander to my puny intuitions.

So, why do I quote Descartes and other philosophers here? Those who know me well could be forgiven for thinking that I've taken leave of my senses, not least because I'm known to rail against quoting others as a substitute for discourse. Is it because, being some of the great minds in history, I'm deferring to their authority? Or maybe it's simply that mentioning their names gives my own writings an air of authority they wouldn't otherwise possess? It certainly appears that this motivation can be found in some writings. The correct answer is 'none of the above'. The real reason that I cite these thinkers is simply that they expressed in an elegant and pithy manner what I wanted to say. 

There are those who think that I have little use for philosophy and philosophers, but this couldn't be further from the truth. What I have little use for is a certain class of pseudo-philosopher, one who thinks that his umbilicus is a source of information about the world. They can be found not only in internet chatrooms and forums the world over, but also in academia, and this is a bit of a problem. Alfred North Whitehead famously said 'The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.' In proper context, Whitehead was talking about where our systems of thought find their roots, but there's also a danger lurking therein. Whitehead touched on it in the preceding paragraph, by pointing out that, regardless of the fact that one can find express authority for any of the main positions of modern philosophy in one of the great thinkers of the past, nothing should be taken as resting on authority. There's a good reason for this, namely that appeals to authority are fallacious. What sets these luminaries apart and gives them their place in history is not what they thought, but how they thought. We teach what they thought not because they were right, but because the thought processes themselves are valuable.

Unfortunately, due to the pedagogical approach to the teaching of philosophy in the modern world, it's all too easy to come away with the impression that the key content is what these people thought. 

Ultimately, philosophy is a didactic tool, whose utility lies in teaching one how to think properly. At bottom, the means by which it achieves this is in teaching us how to formulate the right kinds of question. What's the right kind of question? That depends on the nature of the area of enquiry, but it will generally be the kind of question that admits of some means of testing it, either by observation and evidence, or by checking for soundness and consistency. If such a test is not possible, then the question is epistemologically worthless, as is any conclusion drawn from it.

To bring this back full circle, let's take a look again at those questions at the head of the post, and see what kinds of question they are:

1. Why are we here?

This is seen as one of the thorniest questions in philosophy, but there's a fallacy lurking in there, hidden behind that word 'why'. As an advocate of scientific thinking, I feel it incumbent upon me to advise that such questions are invalid in science, because science doesn't do 'why' questions. Now, I can hear the clamouring objections already, even before I hit submit, so let's address them.

Science is concerned with mechanism. When a scientist asks 'why did that thing behave like that?' what he's actually asking is 'what is the mechanism behind that behaviour?' This is a 'how' question, albeit one phrased as a 'why'.

'Why' presupposes purpose, or teleology, and this alone should make you pause at the question 'why are we here?' because there's a glaring question being begged in it, namely that there even is a why. There's no rational or logical foundation to assume that there's a reason that we're here. Science has gone a long way toward explaining the how of it, and is making strides in those areas in which our understanding is less than complete. 

This is the wrong kind of question.

2. Why is there something rather than nothing?

Another perennial, and a favourite of armchair and pub philosophers the world over. There are two approaches to this question. I'll set aside the 'why' problem, not least because it's already been addressed, so we'll simply move on to rephrasing this correctly as a 'how' question and see what we get. The first approach is one that I dealt with in an earlier post, in which I talked about Heisenberg's Uncertainty Principle, which governs what values can obtain for certain pairs of parameters known as 'conjugate variables'. The critical equation is this one:


\begin{equation} \Delta p \Delta x \geq \hbar/2 \end{equation}


We can think of spacetime as being a field, and the conjugate variables pertaining to fields, as governed by this equation, are 'value' (x) and 'rate of change' (p). The result of this is that 'nothing', while being an attainable value for both parameters, cannot persist, because it would violate this principle on the grounds that both value and rate of change would be zero, meaning that we could know both with arbitrary precision. Thus, Heisenberg's Uncertainty Principle ensures that there must be something.
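
To spell that out symbolically (a minimal sketch; the rigorous treatment uses the field value and its conjugate momentum, but the shape of the argument is the same): a persistent 'nothing' would mean both quantities were exactly zero for all time, and hence both known with zero uncertainty, giving

\begin{equation} \Delta p \Delta x = 0 < \hbar/2 \end{equation}

in direct violation of the inequality above. The state 'nothing, and staying nothing' simply isn't available.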

The other approach also leans on some esoteric physics, namely the 'zero-sum universe'. There's an idea that's been around for a while that the positive energy of the matter and dark matter is exactly mirrored by the negative energy in gravity, which means that the net energy of the universe is actually nil. In that context, then, a better question would be 'why is there nothing rather than something?' (although it still contains the 'why').
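
Schematically, and with the caveat that this 'zero-energy universe' bookkeeping is a heuristic rather than a rigorously defined global quantity in general relativity, the idea is just:

\begin{equation} E_{\text{total}} = \underbrace{E_{\text{matter}} + E_{\text{dark matter}}}_{>\,0} + \underbrace{E_{\text{gravity}}}_{<\,0} \approx 0 \end{equation}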

Ultimately, and while it may not be the most satisfactory response, we should probably fall back on some form of anthropic reasoning here, and state that if there were nothing rather than something, we wouldn't be here to ask such silly questions.

This is the wrong kind of question.

3. How many angels can dance on the head of a pin?

This is, at least, not a why question, but it still has some of the same fallacies lurking within it. The most obvious are the questions being begged, namely that angels exist, that they go around dancing on the heads of pins or that, even given the alleged magical nature of such entities, they'd be constrained in such a manner.

This is the wrong kind of question.

4. Why did this pencil fall to the floor when I let go of it?

There's the why again, but we're aware of the problem now, so we can gloss over it and go straight to the mechanism. Our best current model tells us that it falls to the floor because spacetime is warped toward the nearest large centre of mass, the Earth.

This is the right kind of question.

I won't go into detail about this, because I intend to give a fairly comprehensive treatment of general relativity in a near-future post.

In conclusion, philosophy is concerned with ensuring that we're asking the right kinds of question, because only the right kinds of question can lead to genuine knowledge. As soon as you allow philosophers to tell you what to think, you're doing it wrong.
