Colin Swatridge
FOOLOSOPHY? THINK AGAIN, SOPHIE
Ten reasons for not taking Philosophy too seriously
Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.
ISBN-13: 978-3-8382-1788-8 © ibidem-Verlag, Stuttgart 2023. All rights reserved. No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the publisher. Any person who does any unauthorized act in relation to this publication may be liable to criminal prosecution and civil claims for damages.
Printed in the EU
For Alice, standing in for Sophie, who has the last word.
Contents

The Argument
Reason 1: It still doesn’t know what it is
Reason 2: It is unsure about who is a philosopher
Reason 3: It claims to apply universally
Reason 4: It divides the world in two
Reason 5: It makes an idol of knowledge
Reason 6: It has an unrealistic view of truth
Reason 7: It is confused about mind
Reason 8: It adds nothing to the golden rule
Reason 9: It has no special tools
    Reason and reasoning
    Intuitive thinking
    Analogies and thought-experiments
    Informal logic
Reason 10: It is beset by doubts of its own
Conclusion
Sophie’s Response
Index
The Argument

I declined to play cards and was therefore requested to discourse on philosophy; after which no one spoke to me at all – a result which I did not regret.
Fyodor Dostoevsky, Poor Folk, 1845
Philosophy has been around for something like 2,500 years – or so we’re told. Plato, Aristotle, Augustine, Aquinas, Descartes, Kant, Russell, Wittgenstein: these and others were ‘great’ Philosophers. That’s how we think of them. The subject ‘Philosophy’ is taught in all the ‘great’ universities, world-wide, and you probably regard it as a subject for those with sharp minds confronting the ‘big questions’. But is it a subject at all? The writer of Psalm 14 wrote: ‘The fool has said in his heart there is no God’; I shall argue that there is no goddess Wisdom for the ‘Philosopher’ to seek out; that it’s foolish to make a subject of it, and to give it a capital P. We all philosophize at some time in our lives: we do so as children when we ask ‘Do numbers go on for ever without stopping?’, or ‘Why do we eat lamb, but not puppy?’; and we do so as adults when we cast about for answers. Plato, Aristotle, and company philosophized, asking deep questions, eager to understand the world they lived in. They weren’t fools – on the contrary – though the answers they came up with aren’t ones we can make much use of now. Thinkers in the ancient world, and indeed, into modern times, believed in gods, or God – everybody did; one couldn’t be wise and be an atheist. Their (public) thinking had to take God into account if they wanted to live, so there was very little difference between philosophy and theology – the study of God and (in practice, mostly the Christian) religion. Thinkers right up until the late 1800s who were later thought of as Philosophers with a capital P were theologians, too. Many of them were Theologians with a capital T. I shall argue that there are still echoes of theology in a lot of present-day Philosophy. (One such is the so-called ‘mind-body problem’: it has religious roots and taxes Philosophers and mental health services to this day.)
Who am I?

More particularly, who am I to call Philosophy foolish? I studied Theology when I was a believer (and was expected to be a believer), and Philosophy when I ceased to be a believer. Philosophy, I thought, would be more open-minded. Both were a disappointment. I have taught a number of subjects at the university level – but not Theology or Philosophy. I’m an outsider who has read his fill of Philosophy books, academic and popular; and much of what I read there I’m prepared to call foolish. There are those who profess Philosophy who’ve expressed doubts about the project; but they don’t pin their theses to the doors of their Philosophy Departments. The doubters haven’t resigned their posts, and they still call themselves ‘Philosophers’. Why would they saw through the branch on which they’re sitting quite comfortably? I should add here that I don’t call Philosophers foolish: they argue in wonderfully artful ways, and pick holes in each other’s work, finding all manner of faux pas, fallacies, and category mistakes in it – and they’ll probably do the same with this short book. I don’t question their motives or their professionalism, just their loyalty to a lost cause. And who are you (whom I’m calling ‘Sophie’ because she was the first person to read the book, and because she has the last word)? I’m imagining that you’ve embarked on the study of Philosophy, or you plan to; or that you’re a more than usually thoughtful ‘general reader’ who has always supposed Philosophy to be a rather rarefied subject pursued by donnish intellectuals. You may believe it to be a worthwhile, even necessary subject that has contributed immensely to our understanding of the world and of ourselves in it. Of course, we should all think, and the more we think reflectively the better; but such thinking shouldn’t put us in the rather small box that Philosophy has become, in which Philosophers mainly talk among themselves.
I’ll not base my argument on those works in which Philosophers write for each other; I shall focus in the main on those thinkers who address an intelligent, but non-specialist readership. I shall argue that Philosophy as an academic subject – at any rate, in the ‘West’ – is not one that should be paid as much respect as it thinks it deserves; indeed, I shall give you ten reasons why you should revise whatever
high opinion you might have entertained of the subject. Should those who call themselves Philosophers read this book, I shall not expect them to say: ‘He’s right’. I would be happy enough if you, whom I know to philosophize, were to say: ‘He’s not wrong’. Note: this is not an academic book written for academics. It is written for outsiders like myself, or for only-just-insiders. It is, I hope, the final version of a string of versions, the last of which was provided with notes and references to all the scores of books listed in a bibliography. It was judged that its intended readers would find this apparatus unnecessary and off-putting – even you, Sophie, might not have got beyond the Introduction. So, this is a more straight-to-the-point, less cluttered version. If you find this one too hard-going, you’ve found reason number eleven for not taking up Philosophy. I do quote a number of Philosophers in this book, and I do refer to the sources of the quotations, but there are no bibliographical details given. Further information about those sources can easily be found online. (By the way, I hope all those of you who are not called Sophie will not feel excluded. This book is for you, too.)
Reason 1: It still doesn’t know what it is

It occurred to Wayne that for some time now he was always arriving or departing. He was never anywhere you could actually call a place. He wasn’t here and wasn’t there. It was like a problem in philosophy.
Don DeLillo, Libra, 1988
‘What is Philosophy?’

You must have asked yourself at some stage, Sophie, ‘What is philosophy?’ The question is often the title of an introduction to Philosophy, in these words, or words to this effect; or it’s the title of the first chapter. When an astrophysicist or an anthropologist asks the question ‘What is astrophysics?’, or ‘What is anthropology?’, it’s to explain the subject’s content to newcomers – what it covers – not because these scientists themselves aren’t sure what they’re doing. When a historian like E. H. Carr asks What is History? at book-length, it’s to raise awareness of the historian’s role in interpreting facts as much as collecting them; it isn’t to explain or justify the existence of history as a subject. For many Philosophers, to raise the question ‘What is Philosophy?’ is to do Philosophy. Indeed, Nigel Warburton, in his Philosophy: The Basics, asks what he calls this ‘notoriously difficult question’, and answers it thus: ‘philosophy is what philosophers do’. This answer gives Philosophy permission to be anything at all; and a subject that’s anything runs the risk of being nothing. Art runs a similar risk. The question ‘What is Philosophy?’ is one about the subject raised within the subject, one of the very ‘problems of Philosophy’ that Philosophers wrestle with. It’s the difference between Philosophy and virtually all other subjects:

[It’s] one of the things that distinguishes philosophy from science. You don’t just have disagreement about particular issues; you have disagreement about the nature of the subject.
Tim Crane, in New British Philosophy (Eds. Julian Baggini and Jeremy Stangroom), 2002, p. 116
It’s one of the things that distinguishes Philosophy from pretty much every other subject under the sun. The difficulty has a lot to do with the way in which the subject has evolved: once it was a catch-all term for the search for knowledge of any kind. Then, for centuries, knowledge was what the Church said it was – and new knowledge had to be consistent with what (it said) was in the Bible. Over time, knowledge was acquired in other ways than by pondering, chin in one hand, pen in the other; it was got by observing, testing, measuring, calculating.

In most intellectual disciplines, assertions are supposed to be backed by evidence. Mathematicians have proofs, biochemists have experiments, historians have documents. You cannot just say whatever you happen to believe. Is philosophy an exception?
Timothy Williamson, The Philosophy of Philosophy, 2007, p. 208
It does, sometimes, seem that Philosophy is, indeed, an exception – and Williamson’s title gives it the look of a dog chasing its tail. Philosophers make assertions all the time (they call them propositions), and all too often, they aren’t based on evidence of any kind. Reclining on a couch at one of Plato’s dinner-parties, debating with elite Athenians, might have been enough to raise interesting questions, even to come up with good-guesswork answers; sitting by a warm stove and thinking about how little he knew might have been enough for René Descartes to overcome doubts about his own existence – but even he, lying in bed, had to watch a spider spinning a web to be the pioneer of an amalgam of geometry and algebra. Thinking only gets one so far. Rafael Ferber makes this bold claim:

Philosophy is not harmless: sometimes it hurts. It drags us out of the security of our prejudices and takes us to where we no longer feel at home. It is almost as if we were transported to another planet.
Rafael Ferber, Key Concepts in Philosophy: An Introduction, 2015, p. 16
To over-state the case for the subject – to say, as he goes on to say, that the subject matter of Philosophy is ‘the world and everything in it’ – is surely to
raise eyebrows, and the question whether a subject as wide-ranging as this can really be a ‘subject’ at all. Philosopher Wilfrid Sellars suggested that the aim of Philosophy is ‘to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term’. Can something so baggy-sounding be a ‘discipline’? What is it that Philosophers have thought about? We can place the questions – what I have called, above, the ‘big’ questions, what Bertrand Russell in his 1912 Problems of Philosophy calls the ‘ultimate questions’ – in three categories:

1. What is there? What is the essence of things? What is real, as opposed to what merely appears to be real? What is it to be, to have being? Why is there something and not nothing? What is the self? What becomes of the self at death? What is mind, and do other people have minds? Questions of this sort are the stuff of metaphysics.

2. What do we know? How do we come to know it? How can we be sure that we know it? What justifies our being sure? What is a belief? How do we know that what we believe is true? Questions of this sort are the stuff of epistemology (or theory of knowledge).

3. How should we behave? What is it to be good, and to do good? How should we behave towards others, and how should they behave towards us? When is an act right or wrong, and how can we tell the difference? Questions of this sort are the stuff of ethics (or moral philosophy).
They are questions that you don’t have to be a professional Philosopher to ask, or to answer. We all ask them – if not out loud, then in unspoken thinking – from time to time. You may well have to be a Philosopher, though, to want to find answers that command agreement, because they’re well argued; they’re coherent; they’re consistent – they persuade. You very likely have to be a Philosopher to regard the questions as problems to be solved. The really significant problem may be, though, that some of them have already been solved, or are well on the way to being solved, by scientists, and that those that haven’t lie beyond the power of scientists to solve because they’re
questions that have no ‘ultimate’ answers. Two and a half thousand years of serious philosophizing appear to have demonstrated that Philosophers don’t have the answers, either – not ones that satisfy more than a minority of Philosophers in one generation, anyway. The answers may be elusive, but that doesn’t put an end to the search. There are still plenty of Philosophers nostalgic for philosophy as they suppose it used to be.

I am trying to stage a comeback for the old idea of truth and wisdom but now wearing a postmodern hat.
John D. Caputo, Truth: Philosophy in Transit, 2013, p. 52
Descartes spoke of the ‘creation of eternal truths’. I’m taking up this programme but without the help of God.
Alain Badiou, Second Manifesto for Philosophy, 2011, p. 129
Philosophy always was about the search for ‘eternal truths’, and I shall hope to demonstrate that this is still what gives the subject its supposed edge over lately arriving subjects. It still looks for foundations on which answers to the questions ‘what is there?’ and ‘what do we know?’ and ‘how should we behave?’ can be built.
What it ‘ought’ to be

Many Philosophers are prepared to scale down their ambitions for the subject. They take it as their aim to be clear and to make clear. What’s more, they consider it their mission to examine, and to make clear, the claims made by thinkers in disciplines other than their own.

It seems that no matter what subject matter we’re investigating, or how we’re investigating it, we can always step back, try to identify presuppositions that inform our investigation, and think about whether they’re the best ones (…) stepping back can take us from doing physics, or medicine, to doing philosophy of physics or medicine.
Dave Ward, in Matthew Chrisman et al., Philosophy for Everyone, 2013, p. 5
You might think, as I do, that those who practise physics and medicine probably think about what they do already, and that they’re better equipped than Philosophers to do it. Philosophers seem to be at their most zealous when they step back to spot fallacies and confusions in the thinking of other Philosophers. Clear thinking is all, and clarity might be enough; but what is as clear as gin to one Philosopher may be as muddy as craft cider to another. One can talk and write utter nonsense clearly. Language is the sea in which Philosophers have to swim: we speak different languages, and we use our own language in different ways. It may be language that gives Anglo-American Philosophers, swimming in cold, analytic waters, the impression that French and German Philosophers, who lay their towels down beside the Mediterranean, don’t express themselves clearly. It may be language, indeed, that sows doubt about whether they are doing Philosophy at all, or doing it properly. French thinker Jacques Derrida, for instance, was at first refused an honorary doctorate by the University of Cambridge because he didn’t meet ‘the accepted standards of clarity’. It does take a certain doggedness on an English-speaker’s part to root out the meaning in much of what Foucault, Levinas, Deleuze and others have to say. We can’t always blame the translator. Sometimes it seems that all that there is to Philosophy is definitions, and the trading of definitions – of knowledge, for example, of truth, beauty, and goodness. Perhaps there is little point in discussing what is or is not true if one doesn’t first try to define what is meant (in context, or for all time) by ‘truth’. Perhaps it’s not surprising that there are almost as many definitions of the key terms that Philosophers use as there are Philosophers.
In what we blithely call a ‘post-modern’ world (a compound adjective itself defined in a wealth of ways), the question ‘What is Philosophy?’ is given a variety of answers across university departments of Philosophy, and this is because there’s always been disagreement about whether it’s an ‘arts’ subject, or one that’s scientific, even mathematical, in its methods. Bertrand Russell was inspired to devise laws of truth as hard as Boyle’s gas laws, Newton’s laws of motion, and Maxwell’s laws of electromagnetism. Teachers and practitioners of formal logic continue to treat Philosophy as a cousin of
mathematics, where laws are as unchanging as the rules of chess. (I shall say more about logic – though not more than my outsider status qualifies me to do – under Reason 9.) As the thinkers of the Middle Ages in the schools of theology – those we call Schoolmen, or Scholastic Philosophers – treated Philosophy as the maidservant of Theology, so teachers and practitioners of metaphysics may be said to treat Philosophy as a maidservant of the sciences, if not as a science itself. Here’s one who does:

Metaphysics, at bottom, is about the fundamental structure of reality (…) the ultimate goal is insight into this structure itself – insight into what the world is like, at the most fundamental level. Discerning ‘structure’ means discerning patterns. It means figuring out the right categories for describing the world. It means ‘carving reality at the joints’, to paraphrase Plato. It means inquiring into how the world fundamentally is, as opposed to how we ordinarily speak or think of it.
Theodore Sider, Writing the Book of the World, 2011, p. 1
Philosophers who see the subject as a science (harking back to when science was called ‘natural philosophy’) are called ‘naturalists’. Some of the contributors to a book with the title The Armchair or the Laboratory? think of themselves in this way. It’s far from clear, though, what they would do in the laboratory that they don’t do now in their armchairs. Do scientists at their laboratory benches ‘carve reality at the joints’? How do these ‘naturalists’ think they might do it? The laboratory that they refer to would seem to be the language laboratory, and the ‘structure’ they seek to examine, the reality they seek to carve, is language and our use of it to frame concepts. The objective is:

to clarify the content and structure of our existing conceptual frameworks so that our philosophical perplexities fade away.
Marie McGinn, in The Armchair or the Laboratory? (Ed. Matthew C. Haug), 2013, p. 12
Hope continues to triumph over two and a half thousand years of experience.
The poet Matthew Arnold wrote in 1865 that: ‘Most of what now passes with us for religion and philosophy will be replaced by poetry’. Lithuanian-born French Philosopher Emmanuel Levinas considered Philosophy at its best to be a meditation on the works of Shakespeare; and there are those Philosophers who find as much philosophy in novels as they do in Philosophy. Most universities locate the subject in a faculty of Arts, or Humanities, or Arts and Humanities, in partnership sometimes with Theology, or Religious Studies. Oxford has one of the biggest departments of Philosophy in the world (it has more than a hundred and fifty teachers on the staff, more than fifty of whom are professors or associate professors), in the ‘Humanities Division’; it’s so big, in fact, that it’s a faculty in its own right, established in 2001.

Philosophy as it is taught

I said above that the ‘what is Philosophy?’ question is answered variously across universities. Whatever its status or size, though, most departments of Philosophy in the ‘West’, and in the English-speaking world, offer a common suite of undergraduate courses. These are:
Metaphysics
Epistemology (or Theory of Knowledge)
Ethics (and often: Applied Ethics, e.g. Bioethics, Environmental Ethics)
History of Philosophy (often divided into Ancient, and Modern Philosophy)
Philosophy of Mind (or variants on Philosophy and Cognitive Science)
Philosophy of Science/Social Science/Mathematics/Language
Logic (or Philosophical Logic, Symbolic Logic)
Political Philosophy
Aesthetics (or Philosophy of Art/Film/Literature/Music).
Perhaps this list of standard course-offerings is the clearest answer to the question ‘What is Philosophy?’ And then there are, to be sure, many courses on offer with novel names, particularly in the newer British, and many
American universities. York, for example, offers courses in ‘Imagination’, ‘Effective Altruism’, and ‘Lies, Bullshit, Perversions, and Propaganda’; Nottingham’s offer includes courses in ‘God and Money’, and ‘Philosophy and Sex’. Dartmouth College, an Ivy League university in New Hampshire, includes ‘Friends, Lovers, Comrades: Ethical Issues of Special Relationships’, ‘Love and Respect’, and ‘The Real, the True, and the Vaguely’ (sic) among its course-offerings; and Rutgers, the state university of New Jersey, offers courses in ‘Philosophical Ideas in Science Fiction’, ‘African, Latin American, and Native American Philosophy’, and ‘Philosophies of Death and Dying’. It would seem that, beyond the standard curriculum, Philosophy is returning to its roots: philosophy in the ancient world was about anything that could be known or guessed at; once again, it seems, if the word ‘philosophy’ can be attached to Love, or Sex, or Race, or Imagination, it counts as an issue in Philosophy with a capital P. Virginia State University even offers a course under the hold-all title: ‘The Meaning of Life’. (This might be rather a short course: Tartaglia has it that:

[Philosophy] gives people pleasure, understanding, and for its active participants, something worthwhile to do (…) and perhaps most importantly, it allows us to understand the meaninglessness of life.
James Tartaglia, Philosophy in a Meaningless Life, 2016, p. 180)
What, one might ask, does the member of a Philosophy department bring to courses under these novel titles that thinkers in other departments – thinkers in general – couldn’t bring? Is today’s Philosopher the polymath that a member of the Royal Society in the 1660s could just about claim to be? Is Philosophy having to make itself more accessible to students for whom metaphysics and logic are a turn-off? The Edinburgh University Department of Philosophy, alongside courses in ‘The Philosophy of Well-Being’, and ‘The Philosophy of Time-Travel’, runs options in ‘Philosophy: Fun and Games’, and ‘Puzzles and Paradoxes’. These options reflect a clear popular trend. A number of books have been published that – if they don’t make fools of their readers – seem to be intended to puzzle them. Or, perhaps, the authors have been fooled, themselves, in their quest for wisdom, and so fall back on entertainment:
Do Llamas Fall in Love? 33 Perplexing Philosophy Puzzles; The Pig That Wants to Be Eaten: And Ninety-Nine Other Thought-Experiments; Is Your Neighbour a Zombie? Compelling Philosophical Puzzles that Challenge Your Beliefs; and A Cabinet of Philosophical Curiosities: A Collection of Puzzles, Oddities, Riddles and Dilemmas. Titles of this sort give the impression that philosophy is both brain-teasing and fun. As they do so, they risk making it look as if philosophy is material for a pub quiz. Philosophy on Tap: Pint-Sized Puzzles for the Pub Philosopher does this quite explicitly. And so does Plato and a Platypus Walk into a Bar: Understanding Philosophy Through Jokes. The joint authors of this last-named book include snatches of dialogue; one brief exchange between Dimitri and Tasso is as follows:

Dimitri: This clarifies everything we’ve been talking about.
Tasso: In what way?
Dimitri: What you call ‘philosophy’ I call a ‘joke’.
Daniel Klein and Thomas Cathcart, 2016, p. 137
The jokes are good ones, it must be said. And Dimitri has a point: perhaps we should rather be amused by Philosophers’ problems and puzzles than ponder them, po-faced. Frenchman Roger-Pol Droit seems to think so when he suggests (in 101 Experiments in Everyday Philosophy) that ‘if you want to try laughing at an idea, you should seek the company of philosophers… the best way of showing your respect for an idea is to laugh at it.’ Warburton’s ‘philosophy is what philosophers do’ is circular. Indeed, he points out that ‘work of art’ and ‘member of the art world’ have been defined in terms of each other. In much the same way, when Philosophers think, they call what they do Philosophy; when the rest of us think, we merely philosophize. The last exchange in Plato and a Platypus (p. 177) has Dimitri scoffing: ‘So, we’ve spent the whole afternoon discussing philosophy and you don’t even know what philosophy is?’ I confess I don’t know what it is myself (that is, where the subject begins and ends), either before or after reading the
above books, or scores of other books by today’s Philosophers. Many of the books are themselves confessional:

In no other age has there been such a deep and radical questioning about what philosophy is, what it can be, what, if anything, it contributes to human culture.
Richard J. Bernstein, Beyond Objectivism and Relativism, 1983, p. 181
Frank Ramsey said of his friend Ludwig Wittgenstein: ‘The conclusion of the greatest living philosopher is that there is no such subject as philosophy’. Universities all over the world seem to think there is such a subject; but its practitioners seem to share with Theologians much more than a taste for puzzles and paradoxes.
Reason 2: It is unsure about who is a philosopher

‘Poets [in ancient Greece and Rome] depicted themselves as shepherds singing to other shepherds; philosophers depicted themselves engaged in long conversations, stretching out over several days.’
Stephen Greenblatt, The Swerve: How the World Became Modern, 2011
Greek lovers of wisdom

It’s likely you think Philosophy is the oldest of all subjects – the oldest and maybe the wisest. Thinking – even philosophizing – goes back as far as language; and the writings that we have that might be called ‘philosophical’ go back more than two and a half thousand years. But Philosophy, as an academic subject, independent of all others, really only goes back as far as the last decades of the 1800s. Of course, there’s nothing wrong with being a young subject: neuroscience is much younger. My point is that for most of its life, philosophy was just a rather grand word for thinking – and it still is. But why is only a small group of thinkers called ‘Philosophers’ by those who persist in calling themselves Philosophers in the 21st Century? We don’t give thinking a capital T and make a separate academic subject of it. I shan’t write a history of Philosophy here; there are plenty of those already. I just want to outline how it came to be a ‘subject’ – how it got its capital P – and to question why only a more or less agreed corps of thinkers are thought of as Philosophers, deemed exceptionally qualified by other Philosophers to philosophize. Pythagoras is best known for the theorem concerning right-angled triangles that’s attributed to him. He lived between about 550 and 500 BCE. Half a millennium later, Cicero called him a friend (philos) of knowledge, of learning, of wisdom (sophos), and Pythagoras might well have called himself this – we simply don’t know; but his ‘knowledge’ was very likely that of the shaman, not that of the scientist in any sense of that word. He appears to have been the head of a school of mystics who, only in the century after his death, said interesting things about the power of numbers. Look up philosophia in a dictionary of classical Greek, and you find it means a ‘love of knowledge; a
fondness for studious pursuits’ that raised the lover of learning above unthinking citizens and slaves. The Greek thinkers who came after Pythagoras, who wrote, or who (like Socrates) were written about, were religious men. They believed in the gods, and they believed they had a ‘soul’ that would survive them in an after-life – but these beliefs were of a token kind for teachers and writers like Plato and Aristotle. Stories about what Zeus, in particular, got up to were losing their grip on serious thinkers. Those who preceded Socrates raised questions about the basic physical nature of reality – was it water; was it air? – or they asked metaphysical questions – do things stay the same, or are they subject to constant change? Socrates (according to Plato’s account) tested Athenian youth with questions of a deeper sort: what do we mean by ‘courage’? what is it to be ‘good’? He professed not to have any answers to his own questions; his aim was to get his friends to think beyond the glib answers they gave without really thinking:

I am called wise, because my hearers imagine that I possess the wisdom that’s lacking in them. I go about the world inquiring into the wisdom of others, whether citizen or stranger, who appears to be wise. Then I show him that he is not wise.
Socrates, Apology
There is something provocative about his wandering the streets of Athens interrogating strangers in order to point out the defects in their reasoning – but there is no doubting his determination to get to the bottom of what was meant by words commonly bandied about as if there was a shared understanding of what ‘virtue’ was, and ‘beauty’, and ‘truth’. There was no higher education in Athens, so men (always men) like Socrates took it upon themselves to school young men (and only men) in what it took to be a good citizen of the city. These teachers were called ‘sophists’; and because there were charlatans among them who taught for cash in hand, and were not conspicuously good citizens themselves, they were not well thought of. Even the more respectable among them were inclined to argue as if in a boxing-match whose end was the knock-down argument that left one’s opponent speechless. And when the metaphor wasn’t boxing, or fencing, or athletics, it
was warfare. There was less wisdom about it all than ambition to be king of the castle of word-smiths. Plato (who knew Socrates to be a serious thinker, and who was a serious thinker himself) founded what has been called the first university: the Academy. It was a university, of course, only in the literal sense that it concerned itself with what was then the universe of knowledge. (It was a very small universe.) All knowledge, all thinking, all learning and pursuit of learning, was philosophia. It wasn’t a subject on the curriculum; it was the curriculum. The purpose of the Academy was the training of young (men’s) minds, just as the athletics stadium was for the training of young (men’s) bodies. Indeed, philosophia for the Greeks was undertaken for the good of the soul, for the health, the physical and mental ‘wholeness’, of the good citizen. Aristotle was attached to the Academy for twenty years or so, and he went on to set up a school of his own: the Lyceum. It was he who divided up the universe of knowledge into a number of branches, or disciplines. Among these disciplines were subjects that Philosophers tease themselves with even now: the nature of being; the relation of the mind (the psyche) to the body; the study of logic; and the working out of principles in ethics and politics. He laid the basis for biology and physics, too, not always helpfully; and he had original things to say about literary and theatre criticism. It was Plato who had the greater influence in the early centuries of the Christian Church; but it was Aristotle who, for better or worse, was the model for thinkers into the Middle Ages. There seemed to be little else to say once Aristotle’s writings were made available to medieval thinkers. He seemed to have said it all.
Of course, plenty of people had philosophized: Herodotus was an open-minded thinker and lover of learning if ever there was one – but he’s credited with being ‘the father of history’ for his study of the conflicts between the Greeks and the Barbarians. So, he is not thought of as a ‘Philosopher’. Hippocrates seems to have given serious thought to the origins of disease – he rejected the commonly accepted idea that they were supernatural. This should have qualified him as an amateur of philosophia; but he has come down to us as
‘the father of medicine’, not as a Philosopher. The writer of the book that we call The Wisdom of Solomon had philosophized when he wrote: Wisdom’s brilliance never fades; those who seek her out, and love her, have no difficulty finding her, since she is there, ready and waiting, making herself known. The Bible: The Wisdom of Solomon 6: 12, 13.
Jesus philosophized, and who will say that his thoughts on ethics aren’t as profound as Aristotle’s? Paul ‘the Apostle’ philosophized – he even took Greek thinkers in Athens to task for worshipping an ‘Unknown God’; and so did John, the otherwise anonymous author of the fourth gospel – but nobody called the author of The Wisdom of Solomon, nobody called Jesus, Paul, John, and numerous other early Christian thinkers ‘philosophers’ in their lifetimes, and nobody does now. Why is that? They were committed to the study of the Bible, and the God of the Bible, and to spreading what they understood to be the good news of how God redeemed a sinful world. But they were devoted to God’s Word (Theou Logos); so, they were theologians. Clement, Justin, Irenaeus, Tertullian, Origen, Cyprian – they were all theologians whose names are familiar only to present-day theologians; yet they all philosophized.
Theologians and believing thinkers

So, what makes a Philosopher of Augustine, the 5th Century CE bishop and theologian whose master-work was The City of God? Indeed, he has been called the first of the great Christian Philosophers. To be sure, he was a keen reader of Plato’s dialogues, and he had things to say about the so-called ‘problem of evil’ and ‘free will’. In his youth he’d believed that we are subject to the play of good and evil forces in our lives. He’d believed in the powers of darkness as a reality separate from the reality of a good God. Later, he renounced this belief in favour of our freedom to choose the good and overcome our essential sinfulness – in fact he declared it a heresy to believe otherwise. He named and persecuted as heresies a lot of doctrines that he disagreed with; and he had just as poor an opinion of women (he made an exception of his Christian mother) as his Greek forebears had. But he philosophized, not least in anticipating Descartes’ ‘I think therefore I am’.
Augustine’s version was: ‘If I’m in error, I must exist’. He did exist, and he confessed to his errors in his pioneering autobiography that we know as The Confessions of St Augustine. He was a bishop who thought, and left us his thoughts in writing. And there were men (and almost certainly women) who went on philosophizing, in spite of the fall of Rome and the long ‘dark ages’. Few names – or few writings – have come down to us of those who did; but it’s absurd to suppose that thinking died, just because there were no universities and no Philosophy departments. What thinking was going on, at least in public, was theological. It assumed a steady faith; it wasn’t necessary to be as inquisitive as the Greeks. Indeed, it was best not to be too inquisitive. The 11th Century CE Archbishop of Canterbury, Anselm, was a theologian who wrote about truth – God’s truth; but he turned to reason, too, as a buttress to faith. He is most famous for his ‘rational’ argument for the existence of God: We have in our minds a concept of utter greatness, and we call it God; there is nothing greater. A God who exists in reality is greater than a God who exists in the mind only. Therefore, God must exist in reality. Anselm, Proslogion, c. 1077-78
Reasoning about ‘being’ (or ‘ontology’) is at the heart of metaphysics, and this was one of the ‘three philosophies’, along with ‘natural philosophy’ and ‘moral philosophy’, that passed for advanced study at monastic schools: those of Paris, and shortly afterwards, Oxford. The scholar who graduated in the seven liberal arts and the three philosophies was received by his teachers as a ‘Doctor of Philosophy’. He was a philosophizing all-rounder – not the specialist with a PhD of today. Thomas Aquinas was another theologian who has qualified as a Philosopher because he reasoned. He lived and wrote in the 1200s, chiefly in Paris. He rejected Anselm’s argument for the existence of God: for him, it was plain that the world is as it is thanks to a chain of causes and effects; and the First Cause of all could only be God. This was the argument of one who believed
already, of course, but the terms he used were those used by Aristotle. It was in Oxford that a certain tension between faith and reason began to emerge, and a vertical line was drawn between them.
Faith | Reason
It wasn’t that faith and reason were hostile towards each other, exactly, but each had to be given its rightful place: for Aquinas, for instance, the Trinity, the Incarnation of Christ, our being born sinful – these were articles of faith; the existence of God, and his being omnipotent, omniscient, and omnipresent were matters about which one might reason. But if faith and reason did come into conflict with each other, it was faith that always came out on top. All these so-called ‘schoolmen’, in the ‘schools’ of Paris and Oxford, were theologians first, and might have balked at being called ‘philosophers’ (if they recognized any difference at all between being one or the other), in part because they had to be careful not to stray too far into worldly matters that didn’t contribute to salvation. Above all, their reasoning mustn’t contradict what was written in the Bible and what the Church taught. So, thinkers like Duns Scotus and William of Ockham would have agreed with Peter Damian when he called philosophy ‘the maid-servant of theology’. When these schoolmen philosophized, they did so knowing what happened to heretics. Were it not for the medieval schoolmen there would have been a very long dark age between Augustine (or perhaps the 3rd Century mystic, Plotinus, the last of the Greek thinkers to be a ‘Philosopher’, before Theology dominated the scene) and René Descartes (1596-1650). The logic-chopping schoolmen bridged the gap, until the Renaissance and the Reformation gave rise to a generation of very notable thinkers. Luther, Melanchthon, Thomas More, John Colet, Erasmus, Calvin, all philosophized – More, a lawyer, and Lord Chancellor of England, wrote Utopia (1516), a philosophical treatise if ever there was one; and he had to be pretty philosophical, imprisoned in the Tower of London for over a year, before his execution – but none of them is a ‘Philosopher’ according to today’s definition of who counts as one and who
doesn’t. They’re all placed on the ‘faith’ side of the vertical line that Philosophers still draw.

‘Natural philosophers’

Francis Bacon, another Lord Chancellor of England, published his New Method in 1620. In this, he advocated a new way of understanding the world: the schoolmen had followed Aristotle in making a general statement (like the classic ‘all men are mortal’), and then passing from a particular case (‘Socrates is a man’) to a particular conclusion (‘Socrates is mortal’). This is a deductive way of arguing: from the general to the particular. Bacon’s ‘new method’ consisted in turning this procedure on its head: first look at particular instances (this man died…that man died; all the men – and women – who ever lived died in the end), and only then safely conclude that ‘all men (and women) are mortal’. He recommended that thinkers be doers; that experiment take the place of speculation. This inductive method of reasoning overturned centuries of dogma. It was the portal to what we understand by science. Indeed, he proposed that a college be set up for the study of ‘the new philosophy of promoting knowledge of the natural world’. His proposal wasn’t taken up at the time, but it’s thought to have led to the founding of the ‘Royal Society of London for Improving Natural Knowledge’ in 1663.

It’s Descartes, though, who’s regarded as the father of modern Philosophy. He was something of a scientist himself, in fact; his first book (Le Monde, 1630) made the claim that there was no essential difference between the stuff of this world and the stuff of the universe. It was a bold claim, and one that he felt obliged to suppress when Galileo had to back-track on his own claim that the Earth orbits the Sun – a claim that seemed to contradict biblical teaching. Descartes’ reasoning was what qualified him to be called a ‘philosopher’ – and it was reasoning that was informed by the precision that he found in mathematics.
He thought we could describe the world in the same exact terms used in physics and maths, without recourse to our senses (that could deceive us) or to the theological speculations of the schoolmen (who recycled old dogmas). His big idea was that nature is uniform: it consists of a single extended substance. His big mistake, from a secular point of view, was to accept the Church dogma that mind, or soul, is a ‘substance’ of a different,
thinking, spiritual, non-material sort. (I’ll say more about this under Reason 7.) Descartes might have been a ‘philosopher’, in the sense that any thinking person is, but God was central to his thinking. He joined the band of thinkers who regarded the existence of God as a given. God was the one Truth who could stand as guarantor for any ‘truth’ that we might achieve by human reason. Descartes was of his time, as we all are, and as the ‘philosophers’ who came after him were: Baruch Spinoza, John Locke, Gottfried Leibniz, George Berkeley, Joseph Butler, David Hume, and company. Descartes was a soldier and a gentleman, rejoicing in a private income; Spinoza was a lens polisher who partly lived on the charity of others; John Locke was physician and private secretary to the Earl of Shaftesbury, and a public servant; Leibniz was a mathematician, courtier, and diplomat; Berkeley was a classicist and bishop of the Church in Ireland; Butler was the Anglican Bishop of Durham; and Hume was a librarian, essayist, and historian (the author of a six-volume History of England). They were thinkers and doers, who did ‘philosophy’ only in the sense that they had ideas about the world and our lives in it, and they passed them down in reasoned, written arguments. They are called Philosophers now, when the ‘natural philosophers’ who graced the Royal Society are not. Christopher Wren, Robert Boyle, John Wilkins, Robert Moray, Robert Hooke, John Evelyn, Isaac Newton – they were all ‘philosophers’ who met together to discuss their findings and to publish their journal, the Philosophical Transactions. But they’re not called Philosophers now. Their compatriots would have called them ‘philosophers’, just as the thinkers of a later generation in France – Voltaire, Rousseau, Buffon, Diderot, d’Alembert and others – were called philosophes.
These men of the ‘Enlightenment’ thought (and did) in ways recognizably ‘scientific’ (the term had been in use for some time to mean a method, or institution, or person concerned with the workings of nature). Science challenged thinkers who thought in theological ways. Descartes had discounted what we could learn from our senses; Locke accepted that reason gives us ideas about geometry and arithmetic, but held that we learn most things by seeing, hearing, smelling, and touching them; and Hume rejected the
idea that our knowledge of the world needed a religious basis. We know Isaac Newton as a physicist, but he valued more highly what he wrote in theology than what he wrote in his Mathematical Principles of Natural Philosophy. All these thinkers were believers as well (though Hume had his doubts), whilst, at the same time, they lived in what we call The Age of Reason.

Today’s Philosophers are inclined to think of the subject as having mothered all the specialist subjects that swell the prospectus of the modern university, in the same sort of way in which we like to think that Westminster was the mother of parliaments. They still like to think that it’s ‘home’ to these subjects: that it brings them all together in one all-embracing quest for knowledge – a tree that continues to send out branches as if to persuade itself that it remains as fertile as ever: the philosophy of religion, the philosophy of science, the philosophy of history, the philosophy of education, the philosophy of just about anything you can think of, as if theologians, scientists, historians, educationists can’t think for themselves.

Another way of looking at the history of Philosophy as a subject is to see her as an aged mother, long past child-bearing, whose sons, and later her grandsons and great-grandsons, were obliged to divide up the family land until all Mother had left was a small parcel so worked out that nothing more could be grown on it. Theology, physics, history, economics, sociology, psychology, linguistics – one by one, these subjects fenced and farmed their own plots, when it was obvious to adventurous thinkers that nothing would get done sitting in an armchair.
‘Knowledge natural and moral’

The first thinker recognizable as a Professor of Philosophy appears to have been Christian Wolff, at the University of Halle, in Prussia. He was appointed in 1706, to teach mathematics and natural philosophy – so he was as much of a generalist as it was still possible to be. His ambition was to place theology and natural philosophy on the same evidence base as mathematics. He wrote and lectured on cosmology, physics, logic, psychology (such as it was then), and ethics – it was all ‘philosophy’.
Francis Hutcheson was appointed Professor of Moral Philosophy at Glasgow in 1730. It was he who coined the maxim associated with the later Jeremy Bentham, that the best action is the one that produces the greatest happiness for the greatest number. He was succeeded at Glasgow by Adam Smith, whom we think of as the father of economics – ‘moral philosophy’ was, in truth, a ragbag of subjects that included ethics, political economy, and psychology. They were all ‘philosophy’ in an age when subject-boundaries were less clearly drawn than they are now. In Samuel Johnson’s famous Dictionary of 1755, Philosophy was defined as: ‘Knowledge natural or moral’, and ‘The course of sciences read in the schools’. Even Immanuel Kant, recognized as one of the ‘Great Philosophers’, was a generalist. He wrote on many subjects of a ‘scientific’ sort, but in his capacity as Professor of Logic and Metaphysics at Königsberg, East Prussia, from 1770 onwards, he lectured on physics, mathematics, economics, and physical geography. It was only later in life that he confined himself to what has come to be the province of the ‘Philosopher’. There was a prominent place for God in his thinking; but God was unknowable, so beyond the reach of Philosophy as Kant and Kantians understood it. He made his mark by applying practical reason to the big questions: What is there? What is it to know something? And how should we behave? With Kant, we can begin to talk about Philosophy, with a capital P, as a subject distinct from other subjects. ‘Natural philosophers’ might assist in answering the first of the three questions; but it was surely what Kant called ‘practical reasoning’ that would raise belief to the status of what might count as knowledge. But wasn’t it the ‘natural philosopher’, the ‘man of science’ – Galvani, Volta, Lavoisier, Faraday, Lyell – who was truly ‘practical’?
And didn’t Mary Somerville and Caroline Herschel, the first women honorary members of the Royal Astronomical Society, and didn’t Ada Lovelace, mathematician, and arguably the first writer of a computer algorithm – didn’t these outstanding women do science? They couldn’t be called ‘men of science’, of course – so Professor of Moral Philosophy at Cambridge, William Whewell, came up with the word ‘scientist’ in 1834 to include men and women who sought ‘scientific’
answers to the question: What is there? Scientists and Philosophers would henceforward go their separate ways.

Philosophy a ‘subject’

Until 1878, professors of Philosophy – and of other subjects – at Oxford and Cambridge were clergymen, signed up to the Thirty-Nine Articles of the Church of England. T. H. Green was the first professor of Philosophy at Oxford who wasn’t a clergyman; and Henry Sidgwick was the first at Cambridge, in 1883. It was time to mark out the territory that Philosophy would occupy, and to decide who the ‘great’ philosophers were who had made contributions to the content of the subject. ‘English Literature’ was a new subject, too: who had been the ‘great’ writers? Matthew Arnold and F. R. Leavis between them decided who was in the ‘great tradition’ of heavyweights from Chaucer onwards (unsurprisingly, they were mostly men). New subjects had to work hard to establish themselves; and Philosophy was no exception. Socrates, Plato, Aristotle and other thinking Greeks qualified, of course; and so did Augustine and the medieval schoolmen who reasoned about their faith. Francis Bacon could be called a Philosopher at a push, but not Galileo, Boyle, or Newton, men who could now be called ‘scientists’; not Lyell the geologist, not Darwin the biologist, not Maxwell, Hertz, Boltzmann, Poincaré, Einstein, physicists and mathematicians – not even Freud, the psychoanalyst, whose ‘talking cure’, with its emphasis on conversation, earned him the nickname ‘the new Socrates’. Descartes, Locke, Hume, Spinoza, Leibniz – definitely; they philosophized in a quite systematic way, even though Philosophy wasn’t their day job. Bentham, a lawyer and social reformer, qualified likewise, and so did John Stuart Mill, a servant at the India Office and, briefly, a Member of Parliament. Friedrich Nietzsche, one of the most influential thinkers of the 19th Century, was briefly a professor of Greek and Roman philology, before his retirement on health grounds.
But no present-day Philosopher would deny Nietzsche membership of the club. Was Samuel Taylor Coleridge a Philosopher, as well as a poet and literary critic? Were Auguste Comte, mathematician and social-political theorist; Søren Kierkegaard, a prolific author and social critic; Herbert Spencer, a writer
living on a legacy – were these Philosophers? They might be squeezed in – minor philosophers, as one might speak of minor poets. What about Edmund Husserl, Martin Heidegger, William James? Were they Philosophers, or Psychologists? Was it necessary to be a professional philosopher to be a Philosopher? Were Bertrand Russell and Alfred Whitehead really mathematicians, joint authors as they were of the opaque Principia Mathematica? Russell did a great many other things than profess philosophy, but nobody now – in spite of his political activism and his Nobel Prize for Literature (not Philosophy) – would disagree that, first and foremost, he is one of the UK’s ‘great’ 20th-Century Philosophers. It was when he read these words by an obscure German mathematician that he saw the promised land that Philosophy-as-a-subject might occupy: To discover truths is the task of all sciences; it falls to logic to discern the laws of truth (…) Rules for asserting, thinking, judging, inferring follow from the laws of truth. Gottlob Frege, Contributions to Philosophy, 1918-19
Frege hadn’t been a professional Philosopher, but his confidence in what you could do with logic convinced Russell that this is where the future of Philosophy as an academic subject was to be found. Logic – analysis of the language that we use when we make claims – would separate Philosophy from Psychology and all other social and physical sciences. It would put an end to the speculations of Philosophers like G. W. F. Hegel, Green, F. H. Bradley and others, who still talked airily about the ‘Absolute’. Philosophy would no longer be the maid-servant of theology; it would monitor the sciences, making sure language was used methodically, consistently, unambiguously. Philosophy had lost an empire of thinkers, but it had found a role. Between them, Frege and Russell set Philosophy in the English-speaking world, at least, on a path that it would tread for more or less the whole of the 20th Century. It was a rugged, tortuous path to nowhere – but it set Philosophers apart from philosophizers once and for all. Thinkers from Socrates to Hegel and Co. had been Philosophers, to be sure, but Philosophy as an academic discipline, with its own journals, and conferences, and departmental budgets, would earn the subject the respect it craved.
This pleased a Philosopher like Geoffrey Warnock; but he was sorry that: There lingers a certain sense of the old, kind days of amateurism, the days, as it were, when anyone could join in, could have his own say, and could expect to be listened to. Geoffrey Warnock, English Philosophy Since 1900, 1969, p. 119
Even in the 1960s, a less-than-confident Oxford Philosopher could still seek to guard the bastion of Philosophy, jealously, against outsiders who might undermine his professional self-esteem. Another, more recent, British Philosopher echoed Warnock’s keep-off-the-grass warning to philosophizers like me: It is very rare for anyone who hasn’t studied philosophy long and hard and who has an exceptional talent to make a real contribution to the growth of academic philosophy. Julian Baggini, Making Sense: Philosophy Behind the Headlines, 2002, p. 13
Should those who philosophize outside the walls of the academy be prosecuted for trespassing on private land, when philosophy with a small p is such an old pursuit to which hordes of thinkers have contributed, and when academic Philosophy, with a big P, is so very young, and – still – so very unsure of what exactly it is? The next four or five reasons for thinking again about Philosophy could have been bunched under the overall reason: It is still doing Theology. But each is a reason significant enough to stand on its own.
Reason 3: It claims to apply universally

It is an Absolute we are all after, a statement of the whole scheme – the issue, the progress through time – and the return – making unchangeable eternity. D. H. Lawrence, Letter to Lady Ottoline Morrell, 1915
‘To all the nations’

Christians supposedly worship the same God as Muslims and Jews, though from a different perspective, and under a different name. All three religions are monotheistic, and all three are ‘world’ religions. Of the three religions, Christianity is most committed to spreading its message to that world. If it is to be true to its calling, it must mean something to the whole world – not just to a part of it. A religion that admits that there are other gods than the one that it worships can’t be universal. Only a religion that worships one god – one god that is the only god there is – can claim, meaningfully, to be a world religion. Philosophy, too, is monotheistic: its goddess is Wisdom. Philosophy, too, must have the potential to mean something to the whole world, not just to a part of it.

I’m not sure how much of the early (or any) part of the Bible you know, Sophie, but you may have gathered that the early ‘Israelites’ believed in many gods. They had their gods, and other tribes – most of them hostile – had theirs. The Moabites worshipped Chemosh, the Ammonites worshipped Molech, the Amorites worshipped Marduk (among others), and so on. It took a long time for the Israelites to think of their own god, Yahweh, or Jehovah, as the only god worth worshipping. Thus, the writer of The Book of Judges could represent Yahweh as saying: “I am the Lord your God; you shall not fear the gods of the Amorites”. They had to be warned, repeatedly, by the movers and shakers among them that Yahweh favoured them over their neighbours and that He would look after them if they did as they were told, and worshipped Him only. Even when, much later than whenever it was that Moses led the Israelites out of slavery in Egypt, the writer (or one of the writers) of Exodus has him declare: “Now I know that the Lord is greater than all gods”, it was still believed that there were other gods. Even Solomon, known for his
‘wisdom’, built his temple to honour the God who was “great above all gods”, not because He was the only one. One whole millennium later, He was the only one. Judaism was a monotheistic religion; and Jesus the Jew was able to say to his followers: Whoever holds out to the end will be saved, and the good news of the kingdom of God will be proclaimed throughout the Earth, as a witness to all the nations. And then the end will come. The Gospel According to Matthew 24: 14
At first, his missionaries Paul and Barnabas preached only to the Jews; but when the Jews (at least, those who lived in Antioch) wouldn’t listen, Paul decided it was time to spread the word elsewhere, to Jews and non-Jews alike. Christianity was to be for everybody, circumcised or not, partial to pork or not. This was the assumption of all the missionaries, and elders, and Christian writers who came after Paul – and when the Roman Emperor Constantine converted to Christianity in 312 CE (whether or not he saw what he claimed was a flaming cross in the sky), it was the assumption of the Roman Empire. Constantine recognized that if he was to unite the empire, it must worship one god, and share the same beliefs about him. And that god had to be the one who had given his son, his Bible, and his Church to the world – and anybody who disagreed, or who dissented from any of the doctrines agreed at the Council of Nicea in 325 CE, was declared a heretic. That would prove to be, over the centuries, quite a lot of people. Of course, Christendom was at first confined to the land-mass we now call Europe; but the presiding belief was that God was Lord of all; that he cared for every one of his human creatures equally, knowing even the number of the hairs on their heads, and that he was pained even when a single sparrow died. The Crusades were a dismal attempt to push the boundaries of Christendom back to where it had begun; but, if we stretch a point, the desire of the most saintly of the crusaders was that all humankind should hear the ‘good news’. And this was what motivated Christian missionaries in the 1800s: though their motives and their message were tainted by the presumption of white supremacy, the best of them put themselves at risk of
disease and worse because they believed that the kingdom would come only when everyone had had a chance to hear the good news, and either accept it or reject it. (Truth to tell, as an adolescent, I believed this myself.) There have been pistols-at-dawn disagreements about whether or not Jesus really was the son of God; the ‘Holy Spirit’ really is the third person of the three-in-one deity; the bread and wine of the mass really are the body and blood of Christ; and whether or not we can expect Christ to make a second appearance, and so on – but, whatever your denomination, indeed whatever your religion, if you are a believer, you believe that God is still God, no matter what name he goes under. Three believers, the first a Muslim educationist, the second a Christian molecular biologist, and the third a researcher in artificial intelligence, are convinced that there is one God and that He is God of the entire universe: The prophet Muhammad is linked closely with Jesus as carrying on the same message of God’s unity and the unity of the human race. Zaki Badawi, in God for the 21st Century (Ed. Russell Stannard), 2000, p. 73 There is only one reality and that reality is the work of the Creator, operating through the natural laws that we attempt to fathom. Martinez Hewlett, in ibid., p. 180
God’s promise to creation is universal – this is the biblical tradition. It is not our place to exclude people from the community. Anne Foerst, in ibid., p. 139
Non-believers observe that Judaism, Islam, Hinduism, Buddhism, Sikhism have a strong hold on those born in, or converted to, these religions; they observe how hard the Church finds it to present a united front; and how little room astrophysics appears to leave to any sort of god ‘out there’. These non-believers may doubt whether the appeal of any one of the ‘world’ religions could ever have been, or could ever be, as truly universal as it is in its dreams.
The search for secure foundations

Philosophers down the ages have assumed that their propositions, like the doctrines of the Theologians, apply everywhere and at all times. The Greeks didn’t only speak for the Greeks. When Aristotle drew up his ‘laws of thought’ he did so on the assumption that everyone thought in this way – or that they damn-well ought to. The medieval schoolmen didn’t only speak for Benedictine or Franciscan monks. When they distinguished between the whole person, the being of Socrates, for example (his ‘substance’), and his obstinacy and snub nose (his ‘attributes’), they sought to establish the fundamental nature of all beings in the world – and, by extension, all things in the world. When Descartes thought of his mind as of a different substance from his body, he assumed that he was speaking for all minds and all bodies (the minds and bodies of humans, that is: animals he supposed were machines; they didn’t have minds at all). When Locke wrote that all our ideas are the product of experience, he supposed that we are all, without exception, born a blank slate. When Kant claimed that we couldn’t know things-in-themselves, he meant that none of us could. And when Jean-Paul Sartre committed himself to freedom as an end in itself, not as the means to some other end, he felt obliged to will the freedom of everyone else. Anyone who looks for answers to the (metaphysical) question ‘what is there?’ must assume that what there is will go on being there, everywhere, and all the time. If other people have minds, all other people have minds, whether they use them or not. If there is an infinite number of possible worlds, as some metaphysicians would have us believe, then those possible worlds exist for all of us.
In what he says (below) about possible worlds, Kim uses the word ‘qualia’: this is a word for what we experience consciously, for example, the sensation we have when we taste a lemon, or smell a rat: It is not the case that all phenomena of the world are physical phenomena: nor is it the case that physical facts determine all the facts. There is a possible world that is like this world in all respects, except for the fact that in that world qualia are distributed differently. Jaegwon Kim, in The Future for Philosophy (Ed. Brian Leiter), 2004, p.143
One might wonder how Kim can possibly know this: that there is even a ‘possible world’ in which beings have sensations of a different sort from ours, yet is otherwise a replica of ours. Is this something we should all know? Shouldn’t Kim be writing science fiction? Anyone who looks for answers to the (epistemic) question ‘what do we know?’ does so on the assumption that what we say we can know, can be known by anyone, everywhere, and all the time. If we know that Thursday follows Wednesday, we know it by definition – and everybody who speaks English knows it. If we know as a result of repeated testing that hot air rises, we can accept it as a fact, or put the claim to the test for ourselves and know it to be ‘true’. As this Philosopher has it: It seems that we as humans are – sometimes – driven by a pure desire to know, and to know not just anything but things which have a bearing on the fundamental nature of the world itself and of our human condition. Anthony O’Hear, Philosophy in the New Century, 2001, p. 140
Anyone who looks for answers to the (ethical) question ‘how should we behave?’ supposes that, in an ideal world, we all behave in the same moral way. If we wouldn’t care to be tortured ourselves, we infer that nobody else would, and that therefore it’s wrong, everywhere, and all the time. If a promise is made it should be kept – that’s what a promise means; and the idea behind promising carries this meaning whatever language it may be spoken in. This is the position expressed here: Morality is objective in the sense of being correct independently of whether anyone thinks so (…) Moral requirements are inescapable because they are not of our own making (…) Everyone has a reason to regard genocide as evil, because it is true that genocide is evil. Russ Shafer-Landau, Ethical Theory: An Anthology, 2007, pp. 179, 181
What all Philosophers – metaphysicians, epistemologists, and ethicists alike – have been looking for are foundations for theory in each of these subdisciplines. Their hope is that Philosophy will be a stronghold, safe from sabotage by mere opinion. They want ultimate answers to their questions –
no ifs, no buts – that will do universal service. To give up the quest for sophos, ‘wisdom’, that word that’s built into the very name that Philosophers adopt for themselves, would be like Theologians giving up belief in God. Indeed, it’s belief in God that fuelled belief in the goddess Wisdom in the first place. None of the three Philosophers quoted above is a Theologian as far as I know – they may not even believe in God, or not expressly; but there is something religious about their downrightness. Could it be that, if – as Kim says – it’s not physics that gives us the facts about ‘phenomena’, it is something immaterial; something ‘spiritual’; something divine? Is there any human who could enlighten us as to O’Hear’s fundamental nature of the world, if we even supposed that there is such a thing? If it is not we – people – who make inescapable moral requirements, as Shafer-Landau believes, who on Earth could it be? Some Philosophers – and, again, I shall quote three of them – make their debt to Theology quite explicit. They know perfectly well where their universalism comes from. It was Sir Michael Dummett’s view, for example, that there could be no objective truth if God didn’t exist to guarantee human knowledge: When there is an answer that we do not know, we may say that God knows it. He knows it because, for every true proposition, He knows that it is true. Michael Dummett, Thought and Reality, 2006, p. 107
This is Theology pure and simple, and what’s more, it’s a proposition that begs the question. How did Dummett know that God knows the answer that we don’t know? He ‘knew’ it because he believed it, because the Church has always taught that God knows everything. The Polish Philosopher, Leszek Kołakowski, similarly depends on the existence of a God for a path to the truth. He doesn’t use the name ‘God’, though: he prefers the term used by certain Philosophers of the 1800s – the ‘Absolute’ – to demonstrate that they were doing Philosophy, not Theology. At least, they thought they were:
Once the quest for Truth and Reality with capitals is accepted as a structural part both of culture and of the human mind, it becomes clear that it cannot be satisfied with anything less than the Absolute. Leszek Kołakowski, Metaphysical Horror, 2001, p. 35
That word ‘absolute’, with or without its capital letter, is another of those words, like ‘foundation’, ‘fundamental’, ‘ultimate’, and ‘universal’, that have come down to Philosophy from Theology. My third religious-minded Philosopher doesn’t use capital letters, but he does present us with a stark choice: I think we need a new idea of truth (and consequently of religion to which a lot of my work is dedicated). (…) Religious concerns are close to our heart – or else we are robots. It’s as serious as that – religion or robots! John D. Caputo, Truth: Philosophy in Transit, 2013, pp. 34, 53
Friedrich Nietzsche’s pronouncement that ‘God is dead’ (in The Gay Science, 1882) was just one sign of a shift among professional philosophizers away from biblical theism. Most Philosophers nowadays don’t do God, or call Him the Absolute. They’ve found other foundations on which to build the sort of certainty – valid everywhere, and all the time – that they look for. There were those whose chosen foundation was Plato’s Ideas (or ideals of Beauty, Truth, Virtue, and so on); others banked on Aristotle’s ‘laws of thought’; others still on Descartes’ ‘I think therefore I am’, or Kant’s ‘categorical imperative’. Kant wanted a certainty about morals that didn’t depend on a belief in God, not because he didn’t believe in God, but because he didn’t want nonbelievers to have an excuse to be immoral. He was convinced (or he convinced himself) that we all have an inbuilt sense of duty: we just know the difference between right and wrong. Nevertheless, he thought it best (in 1785) to cast his conviction (not in the biblical term ‘Commandment’, but) in this ‘categorical imperative’: ‘Act only on that principle which you could at the same time will to become a universal law’. (Being a realist, though, he knew there’d be plenty of people who’d not act on any such principle – so he needed God to be there to reward those who did). According to one modern Philosopher, Kant’s
system of ethics is one of the most beautiful creations that the human mind has ever devised. Roger Scruton, Modern Philosophy: An Introductory Survey, 1996, p. 284
G. W. F. Hegel wasn’t a fan of Kant. For Hegel it was more realistic to think of reason as the guiding principle than a sense of duty. The long process of human history had a meaning: it was an evolution from despotism to individual freedom – a state in which history would no longer govern us; we would govern history. By listening to reason, we would be set free. Hegel believed that this evolution was inevitable, since it was directed by an overarching ‘Mind’ (or ‘Spirit’; the German Geist can be translated either way – or, indeed, as ‘ghost’, as in der Heilige Geist, the Holy Ghost). He has been said to view history as having a three-stage pattern, where one position (the thesis) is held for a time; this proves to be untenable (the antithesis); and, eventually, a stable, compromise position is reached (the synthesis). History is a ‘dialectical’ process, where one position is negated, and then is negated again in its turn:

Thesis → Antithesis → Synthesis
This ‘dialectic’ was taken up by Marx and the Marxists. Their three stages, played out in economic history, were: Feudalism → Capitalism → Communism. Marxist thinkers – and all Philosophers in Soviet Russia and its satellite states were Marxists, willingly or unwillingly – accepted as an article of faith that communism would carry the day everywhere, and for all time. Communism was the Marxist equivalent of the ‘Kingdom of God’. The Frenchman Auguste Comte (the first thinker to call himself a ‘sociologist’) was another man for whom (modern) history was a three-stage process. According to his ‘Positive Philosophy’ of 1853, the stages were: the Religious, the Metaphysical, and the Scientific. The Religious stage (corresponding to the medieval period) was necessary, but outmoded; the Metaphysical stage
was when science had its modest beginnings in the 1600s, and unseen forces were still little understood; and the Scientific (or ‘Positive’) stage was marked by an understanding of natural phenomena that could be measured mathematically. Comte saw himself as a pioneer of this last stage – a believer, along with many contemporaries, in the progress of humankind – all mankind and all womankind – from lower to higher things.
‘Reduced ambitions’

I’ve mentioned already (under Reason 2) Frege’s faith in logic: his reconfiguring of language so as to make its propositions as certain as mathematics. Russell and others championed the project: indeed, ‘linguistic analysis’ came to be what counted as Philosophy in the English-speaking world – even, it might be said, the Western world. It seemed as though a new dawn was breaking for a discipline that had nothing to do with any other science, least of all Psychology. Undoubtedly, some ‘analysts’ went too far: the members of the so-called Vienna Circle, in the 1920s, thought they could divide propositions into three classes (a lot of things come in threes):

1. There are analytic propositions: these are true inasmuch as we determine the workings of our language, e.g. ‘Snow is white if, and only if, snow is white’; and ‘Sir Walter Scott is the author of The Waverley Novels’.
2. Then there are synthetic propositions: these are true inasmuch as they have been established by repeated scientific testing, e.g. ‘Gene mutations cause cancer by speeding up the division of cells, or by inhibiting normal cell processes’.
3. All other propositions are literally meaningless: they simply express a feeling, e.g. ‘Everything is art, and everybody is an artist’. It is true neither by definition, nor by discovery – therefore, it is not true.
Philosophy would concern itself only with the first class: propositions that are true or false by virtue of the words that are used. Something like three dozen thinkers, in the 1920s and ‘30s, drew up what they called the ‘Verification Principle’: a proposition is only meaningful if it obeys agreed norms (i.e., it’s analytic), or it can be proved empirically (i.e., it’s synthetic). Only propositions
of these two kinds can be ‘verified’. All other propositions are merely matters of opinion; claims that we either agree with, or don’t. This principle was soon discredited because the principle itself could not be verified in either of these two ways. Besides, we’ve come to accept that it’s far from meaningless (for some people) to say ‘God so loved the world that He sent his only son to save us from sin’, or (for the same, or other people) to say ‘We have far too few Leonardo da Vincis, and far too many Picassos’. We post-modernists, or post-post-modernists, have accepted that meaning and truth come in many different shapes and sizes. The Vienna Circle and its verification principle have been called the last inhabitants of the Garden of Eden: true believers in the universal mission of the Philosophy that germinated in ancient Greece, grew in the European bosom of Christendom, and spread, ultimately, to the rest of the world; reason would overcome superstition, and then the end would come, just as J. Christ said it would. Few now talk about anything ‘ultimate’, or ‘fundamental’; indeed, it may be that Anglo-American Philosophers busy themselves with ‘analysis’ because wishful talk of the spread of Western Philosophy to the East and South smacks of cultural imperialism. If Christianity isn’t about the redemption of the whole world by Christ’s death, what is it about? Likewise, if Philosophy isn’t about answers to the three ‘big questions’ (metaphysical, epistemological, ethical) – answers, what’s more, that everybody, everywhere, all the time would accept – what is it about? Interestingly, French Philosophers who gave us ‘difference’, ‘deconstruction’, and the end of ‘grand narratives’ (like Hegel’s and Comte’s), may still be attached to what is universal about Philosophy. Derrida gave up on the search for objective truth; and Foucault turned his back on the possibility of universal values. Yet the old attachment lingers on.
This was one of the exchanges in a conversation between Paul Hegarty and Jean Baudrillard: JB: Everyone’s trying to find the universal, some universal values which can mop up everything and mediatize it. PH: The universal still works in France.
JB: For us, it’s our heritage. The only thing is it doesn’t have any value any more. It is not rated at the global level. It’s not worth anything on the Stock Exchange. Paul Hegarty, Jean Baudrillard: Live Theory, 2004, p. 145
(‘Mediatize’? Did Baudrillard mean ‘publicize’, or ‘normalize’, perhaps?) Philosophers know that, since the ‘death of God’, there can’t be answers to the ‘big questions’ that would satisfy everybody. Indeed, is there any point asking the questions if we rule out ‘universal’ answers in advance? German Philosophers, French Philosophers, British Philosophers, and American Philosophers have never put forward propositions that they thought applied only in Germany, France, the UK, or the USA – or in Europe, or the West. Does Western Philosophy cut any ice in Chile or China? Does it speak to the experience of the Cambodians or the Congolese? It’s unlikely that it has, or could, or will. The hope that it might is the legacy of a Theology that motivated the missionaries. Many of today’s Philosophers would quietly assent to what Nagel wrote in 1986: Philosophy cannot take refuge in reduced ambitions. It is after eternal and non-local truth, even though we know that is not what we are going to get. Thomas Nagel, The View From Nowhere, 1986, p. 10
Like Baudrillard and Nagel, they want to have their cake, and to eat it, too. They want to believe that the propositions of Philosophy – what Philosophers say and write when they make their claims – have universal application; yet they know they’ll be disappointed. Other Philosophers would soon prick their ballooning hopes if they pretended otherwise. Findings of ‘universal’ significance are far more likely to be published by scientists working on the European X-Ray Free-Electron Laser (XFEL), or at the European Organization for Nuclear Research (CERN), or with the Intergovernmental Panel on Climate Change (IPCC), than by Philosophers. And the authors of the Universal Declaration of Human Rights of 1948 were politicians and lawyers – thinkers, to be sure – not Philosophers. Those authors knew that, though the rights they listed
might come into conflict with actual moral practices in many countries (the Indian caste system, for example, that defies the first, most basic, article of the Declaration: ‘All human beings are born free and equal in dignity and rights’), they must set standards that, at least, aspired to be universal. Heidegger, John Dewey, Ludwig Wittgenstein, John Searle and others have doubted whether Philosophy can be said to have ‘foundations’ on which to build ‘universal’ propositions. Rorty made the point succinctly: [T]he notion of philosophy as having foundations is as mistaken as that of knowledge having foundations. Richard Rorty, Philosophy and the Mirror of Nature, 1980, p. 264
A house must have foundations if it is to be ‘foundational’ and, by extension, universal. Without ‘foundations’, Philosophy is a folly, and the Philosopher is the man in the parable who built his house on sand. Does it matter that Philosophy doesn’t speak to everyone, from Seoul to Seattle? Is that a reason for not taking it up? It’s not a reason by itself, no – of course it isn’t. We don’t expect audiences in Manila or Montevideo to respond in the same way to New Orleans jazz or to Mahler symphonies as we may do in London or Chicago. Music isn’t always the international language it’s often said to be. But Philosophy is the love of Wisdom – not wisdom as defined on one continent, and in a different way on another, but a Wisdom that everyone, everywhere capable of reasoning might acquire. Bauman was a distinguished sociologist and philosopher, both in his native Poland and in his adopted England: Modernity once deemed itself universal. It now thinks of itself as global (…); universality was a feather in philosophers’ caps. Globality exiles the philosophers, naked, back into the wilderness, from which universality promised to emancipate them. Zygmunt Bauman, Life in Fragments: Essays in Postmodern Morality, 1995, p. 24
What is not universal is relative; and Philosophers, as we shall see, don’t get on well with the relatives.
Reason 4: It divides the world in two

There were two Macaulays, a rational Macaulay who was generally wrong, and a romantic Macaulay who was almost invariably right (…) As a philosopher he had only two thoughts; and neither of them is true. G. K. Chesterton, The Victorian Age in Literature, 1946
Sets of opposites

Did people in ancient times think of the world as one, first of all, and then only later divide it in two? Or did they think of it as divided in two from the start? Which came first: monism or binarism? It seems the ancient Egyptians thought that men and gods were made of one substance; it was only later, along with the Sumerians, the Greeks, and – it’s said – the Māoris, that they thought of them as binary opposites, as Earth and Sky. The evolution of this view, from monism to binarism, seems to have taken place right across the ancient world. When we cut things in two, we have a ‘dichotomy’. Philosophy takes its cue from Theology when it dichotomizes, and in doing so, both are guilty of over-simplifying things. But, then, it seems to be a quite basic instinct to divide by two. We’re either male or female, after all (though, little by little, we’re coming round to the idea that, like much else, gender is a matter of degree); and we’re either cat people or dog people – or so we’re told. It was Foucault’s opinion that: A binary structure runs through society. There are two groups, two categories of individuals, or two armies, and they are opposed to each other. (…) [The] relationship that exists between the two groups that constitute the social body and shapes the State is in fact one of war, of permanent warfare. Michel Foucault, Society Must Be Defended, 2001, pp. 51, 88
If this sounds like fantasy fiction, it must be allowed that the friend-or-foe, fight-or-flight pairings are instincts that we shan’t civilize away. You may not have known much about the early history of the Israelites, Sophie: why would you? You’ve not listened to as many sermons, and lessons
from the lectern as I had before I was twenty-one. But you might have heard or read the first verses of Genesis, in Religious Studies at school, at some stage. One of the writers of that book drew a vertical line between one created thing and another: In the beginning God created the heaven and the earth. (…) And God said, ‘Let there be light’; and there was light. And God saw the light, that it was good: and God divided the light from the darkness. And God called the light Day, and the darkness he called Night. And there was evening and there was morning, one day. Genesis 1:1-5
In these first five verses of the first book of the Bible, covering the first twenty-four hours of the imagined life of the world, we are presented with four sets of opposites:

heaven – earth
light – darkness
day – night
evening – morning
On day two, God was minded to divide the waters above the ‘firmament’ of Heaven from the waters under the firmament. Further sets of opposites follow soon afterwards: And God said, ‘Let the waters under the heavens be gathered together unto one place, and let dry land appear’; and it was so. And God called the dry land Earth; and the gathering together of the waters he called Seas: and God saw that it was good (…) And God said, ‘Let there be lights in the firmament of the heaven to divide the day from the night’ (…) and God made the two great lights; the greater light to rule the day and the lesser light to rule the night. Genesis 1:9-16
What God in his infinite wisdom was supposed to have begun, ‘man’ was only too happy to continue. One or both of them distinguished between the plant kingdom and the animal kingdom; between God’s friends and God’s enemies; between
the hallowed seventh day and the other days of the week; between the seed of Eve and the seed of the serpent; between the good Abel and the wicked Cain; between the keeper of sheep and the tiller of the soil; between the raven and the dove; and so, on and on. Jews and Christians have made a habit of dichotomizing. It’s as if they can’t tread a path for long before it forks. Here are some of the most obvious of the dichotomies they’ve traded in:

God – Satan
Righteous – Wicked
Law – Prophets
Clean – Unclean
Pure – Impure
Spirit – Flesh
Soul – Body
Leavened Bread – Unleavened Bread
Circumcised – Uncircumcised
Sacred – Profane
Jew – Gentile
Old Testament – New Testament
Temple – Church
Priest – People
Bread – Wine
Transcendent God – Immanent God
Orthodoxy – Heresy
Clergy – Laity
Faith – Reason
Spiritual – Temporal
In the early part of the Bible, God was close to Adam and Eve: the couple are supposed to have heard the voice of God as He walked in the garden in the cool of the day; God called Adam asking: “Where are you?” (Gen. 3: 8,9). God was immanent, which means: in and of this world. In the later books of the Old Testament, and certainly in the New, He was a more remote figure, transcendent: above and beyond the world. The gulf between God and humankind could be bridged to some extent by the rituals laid down in the
third, fourth, and fifth books of the Bible (Leviticus, Numbers, and Deuteronomy). The animals to be sacrificed, and the parts of animals; the animals or the parts of animals that could be eaten, or couldn’t be eaten; when and how people should wash themselves and their clothes; what could and couldn’t be done on the Sabbath day – all these things were set down in detail as the means by which God’s chosen people should demonstrate their obedience to His will. By observing the fine differences between what was said to be clean, and what unclean, the Israelites made themselves separate from tribes that bowed down before Chemosh, Marduk and the rest. The first Jewish Christians were unsure which of these rituals they ought to maintain – whether or not, for example, they should circumcise their sons; Gentile Christians were free to invent rituals of their own. There was no longer any need to sacrifice animals; Christ had been the ultimate sacrifice. All that was needed was a ritual that honoured that sacrifice in a re-enactment of the Last Supper; and the bridge that spanned the gulf between a transcendent God and his creatures was collective prayer. It wasn’t long, though, before the Church drew a hard, vertical line between what should be believed, and what shouldn’t. Was Christ a man or was he God? Was he of the same substance as God, or only of like substance? Was the Holy Spirit the equal of God and Christ, the third member of a three-in-one God? Once Christianity was the state religion after Constantine, there had to be one set of answers to questions like these if the Church, and the state with it, wasn’t to break in pieces. But, of course, it did break in pieces – the Church and the state. The Bishop (now Patriarch) of Constantinople was given a status equal to the Bishop of Rome in 451 CE. The Church in the Eastern Empire worshipped in Greek, whereas the Church in the Western Empire worshipped in Latin.
The two Churches grew apart in respect of what they believed, and how they expressed their beliefs – but it wasn’t until 1054 that the split was complete. So, there was Orthodox (those whose beliefs were the right ones) and there was Catholic (whose beliefs were universal). Less than a half millennium later there was Catholic, and there was Protestant; and, in very little time at all, there was Evangelical (or Lutheran) Protestantism, and there was
Reformed (or Calvinist) Protestantism. There was justification by works, and there was justification by faith; there were seven deadly sins, and there were seven cardinal virtues. In Britain, there was the Anglican Church, and there was Nonconformity – which itself broke into ever tinier pieces. Of dichotomies, it seems, there was no end. The old dichotomy between Greek and Latin is preserved in the very name given to the study of religious matters: at Oxford it’s Theology, and at Cambridge it’s Divinity. Two English bishops discovered the rather fundamental distinction between theory and practice – between being a Theologian and being a Bishop. John Robinson (in the 1960s) poured scorn on the idea that God was ‘up there’, or ‘out there’; and those of simple faith poured scorn on John Robinson. David Jenkins (in the 1980s) doubted whether Jesus actually ‘rose from the dead’ on Easter Sunday morning; and the faithful in the pews doubted whether David Jenkins was fit to be a bishop.
A well-established habit

Of course, we all divide experience in two (when we don’t divide it in three), and decisions are often binary. We’ve scarcely got up in the morning before we’re asked (in hotel or guesthouse): “Tea or coffee, sir/madam?”; “full English, or continental breakfast?” – when we hope we’d be given hot chocolate if we asked for it, and a plate of toast and marmalade. Life isn’t all either/or. There’s nothing wrong with dividing experience in two when it’s convenient; what’s open to question is that Philosophers hold on to hoary old dichotomies inherited from the ancients as if good sense (and institutional Philosophy) depended on it. Here are some of the distinctions they’ve made, beginning with the one that divided Philosophy from Theology in the first place (I’ll explain certain of them later on, in context; but one or two of them call for explanation straight away):

Faith – Reason
Mind – Matter
Fact – Value
Free Will – Determinism
Knowledge – Belief
Rationalism – Empiricism
Subjectivity – Objectivity
Appearance – Reality
Universals – Particulars
A priori – A posteriori
Deontology – Consequentialism
Realism – Idealism
True – False
Is (Indicative) – Ought (Imperative)
Egoism – Altruism
Necessity – Contingency
Deduction – Induction
Absolutism – Relativism
Analytic – Synthetic
Sense – Reference
Essence – Existence
We probably couldn’t do without some of these dichotomies: if we don’t know what it means to be subjective, for instance, we’re unlikely to try to be objective when we need to be. On the other hand, we are seldom totally subjective, or totally objective; we’re more likely to be more or less subjective most of the time. It’s useful to talk about a claim’s being true or false, even though we know it’s likely to be more or less one or the other, depending on the circumstances. When we’re talking about things, we can probably come close to being objective, and making claims that are true; but when we’re talking about anything to do with people, we’re more likely to be subjective, being people ourselves. Most claims made by people that concern other people are matters of degree. They’re on a continuum:

Subjective ←––––––––––––––––––––––––––––––→ Objective

What of the dichotomy half-way down the table: a priori, and a posteriori? These Latin terms have plagued Philosophy for centuries. A priori means ‘from before’, and a posteriori ‘from afterwards’. An a priori proposition needs no evidence to support it – we just know it to be true; an a
posteriori proposition does depend on experience to back it up. This does seem to be a genuine dichotomy. Socrates/Plato, as we’ve seen, claimed that what we know from before, dimly, can be recaptured in the course of reasoned debate: we view beautiful things, so we know what it is to be beautiful, but what Beauty is in itself is something we can know (if it can be known at all) only by philosophizing. Again, as we saw, Anselm supposed that we have an idea of absolute greatness than which there can be nothing greater; the idea needed no evidence to support it – as long as we accept that we do have such an idea, and that it’s ‘greater’ to be real than to be imaginary. Locke’s view was that we’re born without any ideas at all: we’re a blank slate, written on by all the experiences that we have, giving us all we know. We see objects, hear them, and feel them, and that’s how we know them. Berkeley disagreed: there were no ideas in objects; ideas had to come from somewhere else – and being the good bishop that he was, he supposed that they must come from God. We had them from before, a priori. (Berkeley seems to have been the first thinker to have used the term in this sense, in 1710). This difference between Locke and Berkeley explains another of the above dichotomies. Locke was a Realist, Berkeley an Idealist. If you’re a realist, you believe that what exists does so independently of us – clocks, cuckoos and cuckoo-clocks exist whether there’s anybody around to see or hear them. You’re an idealist (and nowadays rather unusual) if you believe that these things are ideas in our minds, given us by God a priori.
Those who accept the distinction between what we ‘just know’ a priori, and what we learn by experience, a posteriori, might agree that: We seem to have intuitively clear instances of a priori knowledge of the principles of logic, arithmetic, geometry, probability, of the principles of colour incompatibility and implication, of some definitions, perhaps of some truths of philosophy itself. Paul Boghossian and Christopher Peacocke, New Essays on the A Priori, 2000, p. 8
That 2 + 2 = 4 is something we ‘just know’ is the usual piece of knowledge that we’re supposed to have a priori. If I just knew it, though, why did I have
to be taught it? Those who defend the idea that we have a priori knowledge are careful to point out that it’s not innate; it’s rather that we just know the equation to be true without having to put it to the test. All the same, when Hartry Field in the above book claims to know a priori that:

Spinsters are not married;
If London is larger than Paris, then Paris is not larger than London;
The chess bishop moves diagonally;
One ought to have some concern for the welfare of other people –

we might begin to suspect that we’re back to the distinction made between analytic and synthetic knowledge. Two of Field’s four statements are no more and no less than definitions. I had to be shown that one pencil and one pencil made two pencils, and that another two pencils made four altogether – I had to be told the names we give to the quantities of things. In the same way I had to be told that an unmarried woman is (or used to be) called a ‘spinster’; and I had to be told, and probably shown, what the permitted moves were of each of the different chess pieces. I probably had to be told that London is larger than Paris (if it is in fact); but once I’d been told this, or found it out for myself, I didn’t need to check whether or not Paris is larger than London. The one fact follows from the other – indeed the two facts are one and the same. Did I know a priori that I ought to have some concern for the welfare of other people? How could I know, if I wasn’t told more than once that my older brother’s toy-soldiers weren’t mine, and that I shouldn’t kick my sister under the table? Don’t all children have to be told to take other people’s interests into account? Do we know ‘right’ from ‘wrong’ from day one? Unless we believe in original sin, or that immorality is in the genes, we have to believe that new-born babies are innocent of a priori knowledge of any kind. We can do without the a priori and the a posteriori: they’re misleading bits of Latin that only Philosophers have a use for.
But in dispensing with one dichotomy, is it helpful to introduce another – or to re-name an old one?
Rather than the a priori being the category of God-given unchallengeable truths, it becomes the category of things to which we happen to be most attached, or at least, most attached at a particular time and place in the history of thought. Simon Blackburn, What do we really know? The Big Questions of Philosophy, 2009, p. 56
Does this introduce two substitute categories: those things to which we are ‘attached’, and those we’re not?
‘Tiresome relics’

In doing without the a priori, we can certainly do without Berkeley’s bizarre notion that we only see and hear things because God does. We may not gain many ideas from objects by themselves – pencils don’t announce how many of them there are – but there are people happy to give us those ideas, mistaken or not. And if we can dispense with the a priori/a posteriori, and Realism/Idealism dichotomies, we can probably do without the Rationalism/Empiricism dichotomy as well. The rationalist has it that we have knowledge that doesn’t come to us through our senses: knowledge of cause and effect, for instance. We get this knowledge by reasoning. But do babies manifest powers of reasoning when they suck at the breast or the bottle? Does it prove babies understand causation? Or is the instinct – a sensory instinct, surely – proof that the baby is alive and, like any living being, wants to stay alive? Babies learn by experience: they’re empiricists, and they go on being empiricists. What does the Necessity/Contingency dichotomy amount to? If a proposition is true of necessity, it can’t not be true without defying logic, usage, or common sense. That a ‘tricycle has three wheels’ is necessarily true, inasmuch as ‘tricycle’ is the word we’ve given (in English) to a mode of pedal transport having three wheels. It couldn’t be otherwise unless we chose new names for cycles having one, two, or three wheels. I’d prefer to say that the sentence is true by definition – or, better still, that it’s a fact by definition. A sentence that’s contingently true is one that doesn’t have to have been as it
is: ‘this tricycle is blue’ is contingently true (if the tricycle is, in fact, blue) because it could have been almost any other colour under the sun. It appears, then, that the Necessity/Contingency dichotomy does survive a purge of dichotomies. On the other hand, when we say (in English) ‘this tricycle is blue’ we have to have defined what we call blue, and not turquoise bordering on green, or red bordering on purple – so even this sentence is a fact by definition to some extent. It doesn’t do to place the two sorts of fact on either side of a vertical line as if they were binary opposites. There is something in the distinction between Appearance and Reality, of course. Another of Field’s supposedly a priori propositions is that nothing can be both red and green all over. How soon babies recognize the difference between a red brick and a green one I am not enough of a child-development expert to know; but I imagine that once an older sibling or adult has attached the names ‘red’ and ‘green’ to the two bricks (possibly several times), they seem able to perceive the difference and to pick out the ‘right’ one when asked. But we learn later that an object can appear to be one colour in artificial light and another colour in daylight – and, come to that, in various intensities of artificial light and daylight at different times of the day. A scarf that appears to be one colour (all over) under strip-lighting in a changing cubicle might appear quite different at noon out on the pavement – and the colour might well change by degrees as the scarf is taken out of the cubicle into, and through, and out of the shop. The object itself hasn’t changed, only the light falling on it. In short, Appearance and Reality aren’t dichotomous; they’re continuous. That might go for most, if not all, of the dichotomies listed above (and several others that Philosophers routinely invent and use).
We’ve become used to the fact, after all, that sexuality comes in rainbow colours; and we no longer divide the study of psychology into ‘normal’ and ‘abnormal’ – we talk about a continuum, or spectrum, of mental health/ill-health. Having different words for the same thing tempts us to think the two things themselves are different. Morals and ethics (like divinity and theology) come to us from Latin and Greek respectively: they mean the same, but many Philosophers have found ways of distinguishing between them. Here’s one who does:
Ethics is about the kind of person one is, about the manner and character of one’s life and activity. (…) morality is the obligations and duties, the constraints and parameters that apply in one’s relationship with others (…) it is an ethical matter what colour you paint the front door of your house, but it is not a moral matter unless the colour is so offensive that it upsets others.
A. C. Grayling, The God Argument, 2013, p. 185
Yet if you look up ‘Ethics’ in the Oxford Companion to Philosophy, you’re told to ‘see Moral Philosophy’. How a physician behaves towards his patients is plainly a moral matter, yet we talk about ‘medical ethics’, as we do about ‘professional ethics’ in general. This isn’t terribly important. But the use of the words ‘brain’ and ‘mind’, or ‘brain-states’ and ‘mental states’, does imply that one word denotes one thing, and the other something entirely different. Some of the implications of this particular dichotomy will turn up under Reason 7. There is Eastern Philosophy and there is Western Philosophy; and in the West, there is ‘Anglo-Saxon Philosophy’ and ‘Continental Philosophy’. Just as, to a non-theist, it sometimes seems there’s little difference between those who call themselves Christians and those who don’t, when it comes to how they behave when they’re not in church, so the response made by the Philosopher Bernard Williams to a point made by the Philosopher Bryan Magee seems to reinforce the case that I tried to make under Reason 1, that there is no difference between philosophy and (critical) thinking:
Magee: Twenty years ago, in the new dawn of linguistic philosophy, people were inclined to think that by the use of new techniques, the fundamental problems of philosophy would be solved in, say, twenty years. Well, those twenty years have now passed, and the fundamental problems of philosophy are still with us.
Williams: The dichotomy between philosophy and everything else cannot ultimately be made.
Bryan Magee, Men of Ideas, BBC2, 1978
Several decades later, the ‘problems’ are still with us. If we can’t distinguish philosophy from everything else, we certainly can’t distinguish it from
thinking. In spite of what looks like a death-blow to Philosophy, Williams went on professing it for twenty-five more years, at the universities of Cambridge and Oxford, and the University of California, Berkeley. To have rubbed out the line between philosophy and all other actual and potential subjects was, apparently, no problem at all. No doubt he managed to profess Philosophy convincingly enough, all the while taking care to distinguish it from ‘everything else’.
Reason 5: It makes an idol of knowledge
For me, as for most novelists, every genuine imaginative event begins down there, with the facts, with the specific, and not with the philosophical, the ideological, or the abstract.
Philip Roth, The Facts: A Novelist’s Autobiography, 1988
Knowing and believing
When I claim that Philosophers make an idol of knowledge, I mean that they have been inclined to put Knowledge on a high shelf, out of reach of mortal thinkers. Is this because God forbade Adam and Eve to eat the fruit of a particular tree in the middle of the Garden of Eden? There seem to be two versions of the story in Genesis: in one, the name of the tree is the ‘tree of Life’ and in the other it’s the ‘tree of knowledge of good and evil’. It’s the fruit of this second one that God says the pair mustn’t eat, on pain of death – and it’s an apple from this second one that the devil, in the shape of the snake, tempts Eve to pick and eat. The snake tells her: “You won’t die, because God knows that on the day you eat the apple, your eyes will be opened, and you’ll be like gods knowing good and evil.” When God realized the couple had eaten the apple, he is overheard saying: “So, these humans have become like us, knowing good and evil; and if they reach up and eat the fruit of the tree of life as well, they’ll live for ever”. Therefore, the Lord expelled them from the Garden of Eden.
Genesis 3: 4, 5, 22
One tree or two trees, one God or many gods (and note that ‘us’, that plural), the plain message is that eating the apple of the tree of knowledge made one god-like. Yet God himself seems not to have been all-knowing at this early stage in Jewish history: when Cain killed Abel, God had to ask the killer “Where’s Abel, your brother?”; and he had the Israelites wander in the desert for ‘forty years’ before letting them occupy the Promised Land to see whether
or not they’d obey him. Apparently, he didn’t know beforehand that, for much of the time, they wouldn’t. Adam ‘knew’ Eve (Gen. 4:1: the Hebrew yada is ‘to know’, in the conventional sense); Cain ‘knew’ his wife; Hannah, whose husband had ‘known’ her late in life, praised the Lord as ‘a God of knowledge’ because he’d granted her wish for a child. Knowledge was the basis of a contract of the most intimate kind between man and wife; and it was the basis for the contract – or covenant – between a man and his servant, and between God and humankind. For a human to know God’s name was enough to ‘know’ God himself, and, so as not to presume on God’s sometimes less-than-tender mercies, his name wasn’t written out in full in the Hebrew books. The writer of Psalm 9 declared that to know God was to trust him. The writer of Psalm 139, though, wasn’t so sure: he knew God knew him – when he sat down, when he got up, where he went – but the writer couldn’t know God in the same way. Knowledge of this sort was too awesome, too high for a man to reach. Only the ‘chosen’ would know him: Solomon, son of David, would know him; and Jeremiah – God had known Jeremiah before he was born. He knew he’d be faithful; and he knew he’d remind the unfaithful to honour God’s contract with his people. By the end of the Old Testament period, ‘Knowledge’ and ‘Understanding’ were spoken of in the same breath as ‘Wisdom’ – the all-but-goddess-like Sophia. She was present in the very breath of God at the creation of the world, the forerunner of the Logos (or ‘Word’) and the Holy Spirit of the New Testament. The mission of John the Baptist was to bring ‘knowledge of salvation’ to the (Jewish) people; and it was to be Paul’s mission to reveal ‘all the treasures of wisdom and knowledge’ of God to Romans, and Galatians, Ephesians, Philippians, and Colossians – anyone who’d listen. You may be wondering, Sophie, what all this has to do with Philosophy.
I believe Philosophers inherited this biblical meaning of ‘knowing’, where there’s an intimate and mystical oneness with Wisdom. And bear in mind that early ‘philosophers’, early thinkers, were theologians, too. But I’ll try not to over-state the case.
It came to be accepted in the Church that knowledge of God could be acquired by reflecting on the wonders of the created world; the life and teachings of Jesus; the stories in the Bible; and the Church’s interpretation of the Bible. Believers attached their own weight to each of these means of knowing God: Paul’s view was that unbelievers had no excuse for their unbelief – God himself might be invisible, but the world he’d made was plain for all to see. This is what came to be called ‘natural knowledge’. Augustine, Aquinas, Luther, and Calvin were all spokesmen for natural knowledge, but none of them thought it was enough. You could reason your way to an understanding of God’s powers, to be sure; but faith was needed to penetrate to the mysteries of the Trinity, the Incarnation, and the Resurrection. There was always that knowledge that was ‘hidden’ from ordinary churchgoers: the knowledge of the celibate, Latin-reading priest, dispensing the bread (and less often the wine) from behind the altar rail, in something like the way the Jewish ‘Covenant’ was shut away in its ‘Ark’ in the Holy of Holies. For the Lutheran Kant in the 1700s, God didn’t reveal all of himself in his works, or in his Word; only faith gave puny man any purchase on God the Absolute. For the American Calvinist Charles Hodge in the 1800s, we’re all born with an understanding of a Being on whom we depend. At the very heart of Protestantism is the doctrine of the priesthood of all believers – everyone has access to the Bible and the teachings of the Church, so has no need of a priest. Belief feeds on prior knowledge, said Hodge. First know, then believe. For the Calvinist Karl Barth in the 1900s – in this respect a Kantian – faith was what begat knowledge. It wasn’t enough to gaze on nature, or to read the stories in the Bible; one had to make the leap of faith before one could know, and believe. And T. F.
Torrance, a disciple of Barth in the Church of Scotland, was convinced that God revealed himself only to his ‘son’, and that the son revealed himself only to those who believed in him. First believe, then know. Some of this thinking about Knowledge as ‘hidden’, or, at least, as something beyond easy reach, passed into Philosophy. It’s a commonplace that
knowledge is power; the question for Philosophers is whether or not they have the power to acquire it. And it wasn’t just a question for thinkers brought up as Jews or Christians. The Greeks had their version of the forbidden-fruit story in the myth of Prometheus, the titan who defied Zeus by stealing a brand of the holy fire. Socrates professed not to know anything; Plato supposed Knowledge to be like Truth, Beauty, Honour: Ideas that mankind saw only as shadows on the wall of a cave. The religious thinkers of the Middle Ages ‘knew’ what they should ‘believe’, and what they could work out for themselves by applying Aristotle’s logic. Descartes knew he existed inasmuch as he had his doubts, and he supposed that God must exist for much the same reason. As thinkers saw what scientists were beginning to achieve, so they wanted not to have to depend on God for their knowledge. But there always seemed to be something that got in the way between the subject who wanted to know, and the object to be known. It was the old problem of the difference between appearance and reality: a subject only perceived an object as it appeared, not as it was for real. All we had were percepts – our sensations and perceptions of objects. Philosophers since Descartes have given these various names:
Descartes called them ideas
Locke called them ideas of sensations
Berkeley called them sensible qualities
Hume called them impressions
Kant called them representations
J. S. Mill called them sensations
Moore and Russell called them sense-data
A. J. Ayer called them sense-contents
David Lewis called them qualia.
They were what we perceived, not the objects themselves – those we could never know; and they have been the agents of scepticism into modern times. It was what the scientists were discovering that made a sceptic of Hume. One
could be certain about the conclusion to come to in a deductive syllogism such as this one:
All men who are not slaves are free.
Anastasius is not a slave.
Therefore, Anastasius is a free man.
But deductive syllogisms were for schoolmen; now the members of the Royal Society used inductive methods – Boyle, for example, established that there is an inverse relationship between the pressure applied to a gas and its volume, the space it occupies. Hume, though, worried that Boyle was measuring an ‘impression’ of the pressure, and an impression of the volume. How could Boyle be sure that the one – perhaps fleeting – impression was directly related to the other? How could one be sure that the heat of the sun was directly related to the burning of dry heather, never mind that the one was the cause of the other? This was an extreme sort of scepticism (which might have been justified in the very early days of empirical science), but it convinced Kant that we could never know ‘things in themselves’.
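Boyle’s inverse relationship can be put in a line of modern algebra (a present-day formulation, not Boyle’s own notation): at a fixed temperature, the product of pressure and volume stays constant.

```latex
% Boyle's law (modern statement): for a fixed amount of gas at
% constant temperature, pressure is inversely proportional to
% volume, so their product is constant.
P\,V = k
\qquad\Longrightarrow\qquad
P_1 V_1 = P_2 V_2
% Halve the volume (V_2 = V_1/2) and the pressure doubles
% (P_2 = 2P_1).
```

It was precisely this sort of regularity – established by repeated measurement, not by deduction – that Hume worried might be no more than one fleeting ‘impression’ tracking another.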
Knowing for certain
Mathematics, though, did seem to deliver certain knowledge. One can know that if the three sides of a triangle are of equal length, the internal angles must also be of equal value. It didn’t rely on repeated experiment – on Bacon’s and Boyle’s inductive method – to find this out. Nothing got in the way between subject and object in mathematics; in fact, one didn’t even need an object. All the subject needed to do was think. This is not to say that it’s a priori knowledge; mathematics is a complex of facts, facts by definition, that have to be shown to be facts to those not in the know – but they aren’t impressions that have to be tested repeatedly once they’ve been shown. This seemed to be a model for Philosophy: hence the revival of interest in logic and the analysis of propositions. Hegel thought all genuine knowledge is a priori, and there were plenty of Philosophers who agreed with him. But doubt set in:
In connection with mathematics the one-sidedness of the Greek genius appears: it reasoned deductively from what appeared self-evident, not inductively from what had been observed. Its amazing successes in the employment of this method misled not only the ancient world, but the greater part of the modern world also.
Bertrand Russell, History of Western Philosophy, 1946/1961, p. 57
It misled Russell for a while. There are those Philosophers, nevertheless, who argue that we can have certain knowledge: that we can know for certain that a glass can’t be full of water and empty at the same time. To claim the one is to deny the truth of the other. Then there are those Philosophers who argue that certainty is bought at too high a price:
For example, you might know that you hold just one out of a million coupons in a fair lottery, which will have one winner. You may inductively infer, with very high probability, 0.999999, that you will lose, as 999,999 of the million coupons will lose. But surely you do not know you will lose.
Robert Audi, Epistemology: A Contemporary Introduction to the Theory of Knowledge, 2011, p. 191
This sets the bar for certain knowledge pretty high, and might make sceptics of us all. It’s been said that if one isn’t sceptical, one is dogmatic. If I said: “I know Descartes was a womanizer; I’m absolutely certain about it”, I’d sound pretty dogmatic. But if I say: “I’m certain that if I jump in the Thames, I’ll get wet”, is there any circumstance at all in which we’d allow anything to count against my being right? Must there always be something wrong with being certain? Must we leave certainty to believers in God? There must surely be a difference of degree between the extreme scepticism of Hume and the extreme dogmatism of the ‘infallible’ Pope Pius IX. Philosophers broadly agree that knowledge starts with understanding: once you understand something, you can believe it or not; and you have to believe something before you can know it. So said Torrance: first believe, then know. Not so, said Hodge: first know, then believe. Let’s imagine a clever, but
sensitive student, as Peter Cave does: she attends an interview for a place on a university Philosophy course:
Before the interview, she is familiar with the basic philosophers, including Plato (…) yet when she is asked whether Plato wrote The Republic, she gets tongue-tied. She lacks belief and certainty about even Plato’s existence (…) During the interview she seemed to lack belief; yet do we want to say that she lacked knowledge?
Peter Cave, How to Outwit Aristotle, 2011/12, p. 58
(Incidentally, Cave called his imaginary student Sophie: fictional female students of Philosophy are generally called Sophie). By ‘belief’ in this case, Cave seems to mean ‘confidence’; and it doesn’t sound a very plausible story. Still, surely it is plausible to have to believe that nothing can move faster than the speed of light before one can know it.
‘Justified true belief’
The question is: by what alchemy is the base metal of belief converted into the gold of knowing? It’s generally said that to be known, a belief must be justified by evidence of some kind. It achieves the status of knowledge when it turns out to be ‘true’. The key terms are belief, truth, and justification – and it has been the conventional definition of knowledge that it’s ‘justified true belief’. We can express it like this (with apologies, Sophie, if, as I suspect, you’re allergic to equations and pretend-equations in prose text; in general, I am myself):
B(elief) + J(ustification) + T(ruth) = K(nowledge)
It’s not enough to justify one belief by reference to another: we can’t justify belief in the existence of God by reference to the Bible, since we’d then have to justify belief in the Bible as having a special authority; and that belief would rest on another, in its turn. We’d be caught in a regress, or circle – unless we were ‘foundationalists’ who claim that there are basic beliefs that need no justification. Imagine two married brothers (married, that is, to two separate women), Alan and Brian, who aren’t talking to each other. Brian
quarrelled aggressively with Alan’s wife and wouldn’t apologize. Alan believes Brian to be at fault, but he believes it’s his responsibility, all the same, to attempt a reconciliation: (a) because he’s the older brother, and (b) because their mother wishes the brothers to make peace with each other before she dies. This second ‘belief’ is enough for Alan: it’s basic; it’s foundational; it needs no further justification. One who’s sceptical about ‘basic’ beliefs might prefer to say that all Alan’s beliefs together (that Brian was at fault; that he should have apologized; that it was his, Alan’s, job to be the peace-maker, and so on) justified his acting in the case. You’d think that it was enough to be justified in believing something, and that that belief was ‘true’, for you to know it; but this seems not to be the case:
Most contemporary philosophers assume that justified true belief is necessary, but not sufficient, for knowledge. But only trained philosophers ever talk about ‘justified true belief’. A justified true belief will get you to Larissa just as well as knowledge will.
John Turri, Knowledge and the Norm of Assertion, 2016, p. 29
(Larissa is in Greece, in case that’s where you want to go). Turri asks why knowledge is rated higher than justified true belief, then says: ‘I do not find this question gripping, but others have’. Those others had accepted the formula as it stood until Edmund Gettier published a short paper in 1963. In it, he suggested that a belief might be justified, and might turn out to be ‘true’, by mere chance or coincidence – and a belief that’s true by luck hardly amounts to knowledge. So, to pursue the above story, Alan writes a conciliatory letter, by first-class post, to Brian. At the same time, Brian’s wife persuades her husband to write (by second-class post) to Alan’s wife to apologize for quarrelling with her. When Alan’s wife receives Brian’s letter, Alan believes that it’s his letter that prompted it. Alan’s belief that he had made peace was justified, and it was ‘true’ that peace had been made; but it was Brian’s wife, not Alan’s letter, that had prompted the apology – therefore, strictly speaking, Alan couldn’t know that he’d been the peace-maker. (Philosophers, as you can see, are very strict with each other when it comes to having one hundred per cent knowledge of something. Ninety-nine per cent won’t do).
So, the time-honoured formula ‘knowledge is justified true belief’ no longer quite satisfies in a post-Gettier world. There still seems to be an x factor in the equation (which I’m turning round):
K(nowledge) = J(ustified) + T(rue) + B(elief) + x
There still seems to be something elusive preventing us from having rock-solid knowledge. The sceptic will say that we simply won’t find that x – not in this life – and will call a dogmatist anyone who says we can. (Audi, for instance, says absolute knowledge is ‘not appropriate to the human condition’). Nobody wants to be called a dogmatist:
In respect of almost any matter, the possibility of certain imagined sequences of experience makes quite a convincing case that one ought not, on pain of dogmatism, [to] have the attitude of absolute certainty. (…) Being certain involves being dogmatic (…) so there is always something wrong with being certain.
Peter Unger, in Arguing about Knowledge (Eds. Ram Neta and Duncan Pritchard), 2009, pp. 475, 478
G. E. Moore was no sceptic: he claimed that he knew he had clothes on, and wasn’t absolutely naked. This made him a dogmatist as far as Unger was concerned, because Moore hadn’t allowed anything at all to count against his claim. There might still be that x factor to dent Moore’s self-confident certainty. In the same book, Kvanvig says:
An adequate theory of knowledge must contain an account of the nature of knowledge that is, at a minimum, counter-example free. (…) True belief based on good reasons is not knowledge.
Jonathan Kvanvig, in ibid., pp. 40, 50
Moore had pretty good reasons for believing that he was wearing clothes: what counter-example could possibly have counted against him? One of his students (like the boy in The Emperor’s New Clothes) calling out: “But he isn’t wearing anything at all!”? Unger and Kvanvig are sceptics, like Hume; but where Hume might have been justified in his scepticism at the dawn of
science, extreme scepticism can’t be justified now. Surely, it’s not dogmatism, when we’ve just dressed to go out, to claim that we know we’re fully clothed – it isn’t just an impression. The problem seems to be that sceptical Philosophers – those who make an idol of knowledge – are trapped in yet another dichotomy, another either/or: scepticism or dogmatism. Of course there’s a difference between something’s being certain and its being doubtful, but it’s a difference of degree. They’re on a continuum:
←– Certain – Probable – Plausible – Possible – Doubtful –→
Philosophers may have to accept that they have invested too much in this word ‘knowledge’; it is only a word, after all. To say that we know most things with only more or less certainty isn’t the same as saying we can know nothing for certain. Why is ‘to know’ any different from ‘to learn’? When we learn, we do it over time: we don’t really believe we can learn a foreign language in a matter of days or weeks. Learning, knowing, thinking – none of these verbs denotes a once-and-for-all, perfectible activity. When we ask what it is that motivates us to learn, what best promotes learning, what might inhibit learning, we mainly leave it to psychologists to give us answers. Why should knowing be any different? French and German Philosophers had been impressed by the work of Edmund Husserl, a turn-of-the-20th-century German thinker whose work gave rise to the theory of knowledge that came to be called phenomenology. This lays emphasis on the way we have conscious, subjective experience of objects (or phenomena). We rid ourselves of any presuppositions we may have about existence, to fasten on the essence of things. (And here we have another of those dichotomies: we can see a rainbow, and therefore be sure of its existence, but we have to understand what brings it about to know its essence. But can we ever know the essence of a person? Can we be certain about our own essence, even if we admit there is such a thing?) Jean-Paul Sartre made much of the distinction between a person’s existence and their essence in an
existentialism that focused on the existence of human beings in the world, and the choices they make that give them their essence. This sort of thinking fed into hermeneutics – another portentous term. Like much else in Philosophy, it sprang from theology, and from interpretation of the Bible, finding a home in continental Philosophy. It hoped to apply as universally as all its predecessors:
Hermeneutics can be raised to the level of a universal philosophy which acknowledges that when we use language, we are already interpreting the world, not literally as if it possessed a single transparent meaning, but figuratively, in terms of allegory, symbol, metaphor, myth, and analogy.
Richard Kearney, Modern Movements in European Philosophy, 1994, p. 98
It gave rise to Derrida’s programme of ‘deconstruction’ of texts, whereby there’s no one reading – no one meaning out there – privileged above all others. In short, there is no absolute knowledge. And, by extension, there’s no absolute theory of knowledge, nor any need for one. Only epistemologists have any need of epistemology.
Doing without a theory
It wasn’t a German or a Frenchman who thought to put epistemology to the sword, though; it was an American, and he used unadorned American language:
What was epistemology? A bad answer to a bad question – a question as bad as “What is the good?” Knowledge, like goodness, is a good thing. So, it was thought in both cases, that by having a theory of this good thing we might be able to acquire more of it. Neither project panned out.
Richard Rorty, in Rorty and his Critics (Ed. Robert B. Brandom), 2000, p. 240
We don’t need to doubt that we can know things in the 21st Century in the way Hume did in the 18th; and we don’t need a theory of how we know things. Philosophers will just have to accept that they will not be the ones to know what it is to know. Scientists have given us reinforced concrete knowledge because they have at their disposal what Philosophers don’t:
observational techniques and measuring instruments. We put this knowledge to the test daily when we switch on the computer, type in our password, click on this and that key, and press ‘Enter’ – and when it doesn’t work, we’re being metaphysical when we blame a gremlin, and metaphorical when we blame a virus. We know neither explanation is (literally) ‘true’. There are truths that don’t need to be supported by empirical evidence (without their being innate or a priori) – for example that, in England, you have to be 18 before you can drink alcohol in a pub; but there are lots of truths that empiricists of one sort or another have observed, and measured, and found to be ‘true’, such as that:
Hot air rises
Deciduous trees lose their leaves in the autumn
Double-glazed windows reduce external noise and heat loss
Gross economic inequality correlates with social problems
Fresh water is necessary to life on this planet
Early-years attachment is necessary to a child’s mental health
Anti-Semitism has deep roots in European history
We know these claims are true, not because we know them ‘from before’, or because we’ve put them to the test ourselves, but because we accept that they’re supported by evidence and are thus well-established facts. Hume himself would have been persuaded by repeated demonstration that hot air rises, without worrying himself about causation. It’s as pointless to pretend not to know things that we do know as to pretend to know things we don’t, or can’t. Of course, there are plenty of Philosophers who accept that science gives us knowledge without ifs or buts, or an x factor; even that (etymologically) science is knowledge. They agree that we know what we know because we sense and perceive things in the public world and communicate our sensations and perceptions in a public language. We may be deluded sometimes – standard optical illusions may deceive us – but science regulates
our individual experiences, and illusions are shown for what they are. We may not understand how we are conscious of something’s being red; but we have daily visual experience of red things, and each one of those experiences confirms the learnt association between ripe rhubarb, radishes, and redness. Is there anything else worth knowing about redness than that when we experience a red thing it is, indeed, red? Sceptics who question the truths of science have been accused of wasting their breath; but Philosophers are not unused to wasting their breath. They do so when they teach a course in Epistemology, where ‘problems’ are raised about the nature of Knowledge and our access to it. Those problems arose as Philosophy sought to distinguish itself first from theology and its faith-based ‘knowledge’, and then from science and its evidence-based knowledge. An advertisement for ‘The Google App’ blazoned this sentence across a full page of a broadsheet weekend magazine:
Knowledge is not always within reach
This seems to be the position that Philosophy has got itself into. Knowledge is either within reach or not, as if it’s all or nothing – and Philosophers have tended to think it’s not. We have historical, physical-science, economic, linguistic, biological knowledge – factual knowledge – about which we can be certain, up to a point. Beyond that ever-shifting point there is disagreement and research yet to be done. Politics, the arts, religion, human relations: these matters are best left to experience and judgment. If there’s a problem, it isn’t whether or not we can know; it’s what we know, and what our factual basis is for knowing it – and that isn’t a philosophical problem, even with a small p. I don’t, and shan’t, claim that all Philosophers – whether or not they call themselves Epistemologists – gaze up at Knowledge on a pedestal, wishing they could reach up to it, dismally aware that it may simply be too high for them.
The fact, though, that the ‘theory of knowledge’ continues to engage
Philosophers, and their students, does seem to witness to a persistent faith in the possibility of acquiring knowledge about knowledge – knowledge of a non-religious, non-scientific kind – that it may be given to Philosophy to rise to, if only it can find the right words. Words are pretty much all they’ve got; and the words that Philosophers use are bound to be (for the most part) the words that we all use. ‘Knowledge’ is a word that is used by some in the common-or-garden way we all use it, whilst others have given it a rather specialized, rather set-apart meaning. ‘Truth’ is another word that causes problems.
Reason 6: It has an unrealistic view of truth

It is the spirit of the age to believe that any fact, no matter how suspect, is superior to any imaginative exercise, no matter how true.
Gore Vidal, ‘French Letters: Theories of the New Novel’, in Encounter, 1967
‘The supreme truth’

Truth – what truth is, how we know something to be true or not – is at the heart of thinking, so we can be sure that truth has been an issue for everyone, everywhere, from the very beginning of conscious thought. We’ve seen that it stands in a close relationship with justified belief in a (Philosophers’) definition of knowledge. Aristotle put the case like this (Sophie, you may need to read this more than once):

We first define what truth and falsity are. To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.
Aristotle, in Wolfgang Künne, Conceptions of Truth, 2003, p. 95
To be, or not to be: there are just the two options. There’s no middle way, so this ‘law of thought’ of Aristotle’s is called the ‘law of excluded middle’. It may be that we have Aristotle to thank for some at least of all those dichotomies we met under Reason 4. And it may be that we have to thank Aristotle for Philosophy’s being, so often, a zero-sum game: there’s a winner, who wins all, and a loser, who loses all. The truth/falsehood dichotomy, though, was deep-rooted in the thinking of the Old Testament writers and prophets, even if it was an unspoken assumption before it was a proposition. When Moses, or whoever it was, devised the famous Ten Commandments, he didn’t allow for a middle way between committing adultery and not committing adultery. When the prophets ‘heard the word of the Lord’, they knew they had to speak that word to the people, without equivocation. There was no middle way between obedience and disobedience. Ezekiel, Isaiah, Jeremiah, Elijah, Amos, Micah,
Hosea, all told the ‘truth’ fearlessly as they saw it and felt it; and Judaeo-Christianity has always agreed that the ‘truth’ was what they told. Paul was a New Testament prophet, and he wasn’t one to mince his words either. He wrote to the Christians in Galatia: ‘Have I become your enemy because I tell you the truth?’; and he wrote to the Christians in Corinth: ‘Every word we ever addressed to you bore the mark of truth.’ But it was John among New Testament writers who used the word ‘truth’ with most theological conviction. He used the words ‘true’ and ‘truth’ many more times than the other three gospel-writers put together, most memorably in these words: ‘I am the way, the truth, and the life’. If Jesus hadn’t actually used these words, John was certain enough of their ‘truth’ to put them in his mouth. Augustine had a conversion experience rather like Paul’s, and he saw life, and the after-life, in similar black-and-white terms. He wrote, as if to God: ‘It is said by your Son, who is Truth itself: whoever says to his brother “You fool” is in danger of hell fire’. (Let me repeat here what I wrote in the introduction to this book: that I am not calling Philosophers fools.) Augustine could be pretty confident of a place in heaven after all he did to denounce heretics. He was made a saint, anyway. The Church rarely makes saints of those beset by doubts. The medieval schoolmen, the reformers of the 1500s, and the counter-reformation Catholic Church, all took Truth to have a capital T, to the extent that the Vatican Council in 1870 declared that the Pope was infallible in matters of faith and morals. ‘Truth’ was what the Church said it was; and what the Church said, from the papal throne, was ‘true’. Protestants could be dogmatic, too: the Calvinist Karl Barth began work on his Church Dogmatics in 1932. In it he wrote that God’s answer to the human question, ‘like any other science, establishes the most certain truth ever known’.
And the Lutheran Jürgen Moltmann could write: ‘God became man so that dehumanized men might become true men’. I could quote many ‘true men’ who have turned the water of belief into the wine of truth; but I won’t. I’ll just quote one Church of England clergyman and Oxford professor of both Theology and Philosophy:
What is wholly reliable (even inerrant) in Scripture are those truths which are important for our salvation – truths about Jesus’s death and resurrection and about the kingdom of God.
Keith Ward, The Philosopher and the Gospels, 2011, p. 32
Voltaire famously honoured those who sought truth, but despised those who thought they’d found it – and that included the Catholic Church and its pretended monopoly of The Truth. Unlike Theologians, Philosophers on the whole don’t claim to have found it – but many do still seem to think there is such a thing. Some even hope they might find it one fine day – or, at least, they might find what Russell calls ‘successive approximations’ to it. Here’s one who agrees with Russell that Philosophy’s objective is to secure:

a better grasp of some fragment of philosophical truth (…) I thread my way unsteadily along the tortuous mountain path which is supposed to lead, in the long distance, to the City of Eternal Truth.
Paul Grice, in Richard E. Grandy and Richard Warner (Eds.), Philosophical Grounds of Rationality, 1986, pp. 62, 63
What is a ‘philosophical truth’, one might ask; and is it truer, or wiser, than other kinds of truth? And is this ‘city of eternal truth’ the same one as Augustine’s City of God, or John Bunyan’s Celestial City, Christian’s goal in The Pilgrim’s Progress? Could it be that Grice is still doing Theology? Grice is not alone: it has been an article of faith among Philosophers all the way along that there’s a truth ‘out there’, and that it’s their job to capture it in well-chosen words. It’s only a minority of Philosophers – Dummett and Kołakowski, for example, whom we met back in Reason 3 – who still associate truth with God without embarrassment. All the same, Philosophers talk about ‘mind-independent truth’, and by this they mean truth that doesn’t depend on a human mind to grasp it. But is there any other sort of mind than a human mind (once we set aside Descartes and allow for degrees of mindfulness in animals) – one that can grasp a truth expressed in human language? You can only believe in the truth ‘out there’ if, like Berkeley and Dummett, you’re an idealist who believes in a God out there, who can know the truth that we can’t.
The truth about truth

Does it make sense, though, to think that ‘truth’ lies on one side of a vertical line, and ‘falsehood’ lies on the other? It’s a dichotomy that gives us the classic puzzle found in Philosophy puzzle-books like those listed under Reason 1. You’ll have heard of the Cretan who said ‘all Cretans are liars’, I dare say – and if you haven’t yet, Sophie, I’m sure you will. Haack teased her readers with sentence S: ‘This sentence is false’:

Suppose S is true; then what it says is the case; so it is false. Suppose that S is false; then what it says is not the case, so it is true. So, S is true, if and only if, S is false.
Susan Haack, Philosophy of Logics, 1978, p. 135
Is it a silly puzzle, or a clever one? Wittgenstein tried to avoid paradoxes of this sort by devising ‘well-formed’ sentences. These were ones whose syntax meant that they were either one hundred per cent true (T) or one hundred per cent false (F). He drew up ‘truth tables’ to handle a sentence like: ‘If it’s raining (R) and I don’t have an umbrella (U), I’ll get wet (W)’. In a truth table, it would look like this:

R  U  W
T  T  T
T  F  F
F  T  F
F  F  F
So, given that if it’s raining and I don’t have an umbrella, I’ll get wet, it follows that if it’s raining and I do have an umbrella, I shan’t get wet; if it’s not raining and I don’t have an umbrella, I shan’t get wet; and if it’s not raining and I do have an umbrella, I still shan’t get wet. Is that clear? And does the truth table
help to make it clear? Probably not: you could probably work out the permutations without the aid of a truth table. (In fairness, it should be pointed out that most of Wittgenstein’s tables were more complicated than this.) In mathematics, there are two truth values: true and false; there are always four prime numbers between ten and twenty, no more and no less. Quine was a mathematician and he prized what he called two-valued logic – true/false logic.

Either a statement or its negation is true, and not both.
W. V. O. Quine, Quiddities: An Intermittently Philosophical Dictionary, 1987, p. 143
Truth is one thing, [justified] belief another. We can gain clarity and enjoy the sweet simplicity of two-valued logic by heeding the distinction.
W. V. O. Quine, Pursuit of Truth, 1992, p. 94
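For what it’s worth, the permutations in the umbrella example can be ground out quite mechanically. Here is a minimal sketch in Python – my own illustration, not Wittgenstein’s notation – which reads W as the conjunction of R and U, where U stands for ‘I don’t have an umbrella’:

```python
from itertools import product

def truth_table():
    """Rows (R, U, W) for the sentence 'If it's raining (R) and I
    don't have an umbrella (U), I'll get wet (W)', treating W as
    true exactly when both R and U are true."""
    return [(r, u, r and u) for r, u in product([True, False], repeat=2)]

# Print the table in the T/F style used above.
print("R U W")
for row in truth_table():
    print(" ".join("T" if value else "F" for value in row))
```

Two-valued logic is baked into the sketch: every cell is either True or False, with no middle value admitted – exactly the ‘sweet simplicity’ Quine prized.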
It makes life easier for the Philosopher who does logic to agree with Aristotle that a statement (or sentence, or proposition) is either true or false. The fact, of course, that ‘the law of excluded middle’ makes logic clear, sweet, and simple, doesn’t alter the fact that many of the things we say may seem to be absolutely ‘true’ to some, but only partly ‘true’ to others. It may depend on how much water there is in the glass. What do we mean by ‘truth’, anyway? Mathematicians or not, Philosophers have mostly accepted one of four ‘classic’ theories of truth:

the correspondence theory has it that claims are true if they correspond to states of affairs in the world – if, as it were, they give us an accurate map of reality, e.g. ‘Puddles form when heavy rain falls on a firm but uneven surface’.

the coherence theory, whereby the truth of a proposition depends on its making sense in context with other propositions known to be true, e.g. ‘The ball ricocheted off his arm past the goal-keeper; so the goal had to be disallowed. It would’ve been OK if it’d been his head’.
the pragmatic theory evaluates a claim according to whether or not it is useful in practice – what’s true is, essentially, what works or has ‘cash-value’, e.g. ‘I find that my belief in God helps me through rough patches in life’.

the deflationary theory ‘deflates’ the balloon of ‘truth’; it does away with the ‘what is truth?’ question altogether. ‘It’s true’ adds nothing to a proposition, e.g. ‘[It’s true] Snow is white if, and only if, snow is white’.

Perhaps we should be wary of theory of any kind. T. S. Eliot is reported to have said: “One must have theories, but one need not believe them”. But Eliot was a poet; surely a Philosopher will have more respect for theory. Not this one, though. (He refers to a theory of naming):

It really is a nice theory. The only defect I think it has is probably common to all philosophical theories. It’s wrong.
Saul Kripke, Naming and Necessity, 1980, p. 64
He might have been wiser to say: it’s misguided; or, better still, I don’t agree with it, or with any other philosophical theory – or even perhaps any theory at all that isn’t open to proof or disproof. Whether the correspondence theory is ‘wrong’ or not, we mostly do expect statements about the world to correspond to the world as we see it – it’s simple common sense; and we do attach most value to a statement that has some practical use, even if we don’t necessarily think this makes it ‘true’. The coherence theory is really only ‘right’ in the particular circumstances of this or that particular context: a shout of “Deuce!” has a truth-value in tennis, but it’s meaningless in football. It does seem best to deflate the word. Why should it have one meaning, all the time, for everybody – a universal meaning? Like ‘knowledge’, ‘truth’ is just a word whose meaning a (good, big) dictionary will try to capture by giving several ways in which we use it. Paul Horwich is a deflationist: [The] attempt to discern the essence of truth – to analyse that special quality which all truths supposedly have in common – is just a pseudo-problem based on syntactic over-generalization. Paul Horwich, Truth, (2nd Edn.), 1998, p. 5
I could hardly agree more, and that goes for you, too, Sophie, I dare say. Yet later, in the same book, Horwich won’t let Aristotle go: ‘every proposition is true or not – i.e. true or false’ (he writes on page 77). He even supposes (on page 129) that of all the propositions that exist, ‘presumably half are true’. Isn’t that rather odd: to presume that half of all propositions in the world are true, and half are false? Exclude the middle and you land in a muddle. Though I prefer not to divide the world in two, I introduced the terms ‘facts by definition’ and ‘facts by discovery’ under Reason 4. It seems safer to talk about ‘facts’ than ‘truths’. A fact, after all, is a claim that has been put beyond dispute – it concerns something ‘made’, something ‘done’. So, when we say that an American president can serve for no more than two four-year terms, we’re stating a fact by definition – it’s there in the American constitution. This sort of fact corresponds to the analytic truth of the Vienna Circle (and, loosely, to the coherence theory of truth). When we say that because iron oxidizes easily it rarely occurs in nature, we’re stating a fact by discovery. This sort of fact corresponds to the Vienna Circle’s synthetic truth (and to the correspondence theory). What those Philosophers, Moritz Schlick, Rudolf Carnap and others (many of them mathematicians and scientists), overlooked were statements that might be called ‘truths’ but that can’t be called facts – like this one:

The human world was irrevocably changed by Borromini, Bach and Braque, even if many people are unable to notice the fact.
Roger Scruton, Modern Philosophy: An Introduction and Survey, 1996, p. 347
They don’t ‘notice’ the fact, of course, because it isn’t a fact at all. Someone else might just as well have claimed that Palladio, Prokofiev and Picasso changed the world, or Gaudí, Glazunov and Goya. Inasmuch as few would agree with any of these claims, they can’t be facts. A fact is ‘out there’ (because we put it there) in a way a truth about Scruton’s ‘human world’ can never be. Perhaps Scruton’s ‘fact’ was a truth in a pragmatic sense: it worked for him.
Facts can be changed, of course – and to this extent, the vertical line between facts by definition and facts by discovery is less fixed than first appears. Did the ancients define geometry into existence, or did they discover it? It’s probably safer to say that they gave names to two- and three-dimensional objects and that, having done that, they discovered patterns in the shapes that they’d come up with. It was a simple fact by definition that the capital city of Nigeria was Lagos; on 12 December 1991, Abuja replaced it as the country’s capital, in a simple act of re-definition. The existence of Pluto was confirmed on 18 February 1930, and it was thought of as the ninth planet in the Solar System; in 2006 it was re-classified as a dwarf planet. Following a discovery and a re-definition Pluto was a planet no longer. A fact (of both kinds) is a fact or it is not; it is not open to a pragmatist to say ‘it’s a fact for me, even if it isn’t for you’. We may allow there to be alternative truths; but there are only alternative facts in countries with authoritarian governments. Scruton could have said: ‘That Borromini, Bach and Braque changed the human world is true for me’ – but he wouldn’t have done, because that would have made him a relativist. What is relativism? It’s the denial of the absolute. A fact is absolutely ‘true’, for as long as it’s agreed to be a fact. It’s absolutely true that metal is a better conductor of heat than wood; but it’s only relatively true that music is an international language. Facts cluster at the absolute end of the True/False, Absolute/Relative continuum – but claims about the ‘human world’ are strung out along it.

TRUE ←-----------------------------------------------------------------------------------→ FALSE
ABSOLUTELY TRUE                                                           RELATIVELY TRUE
If you’re not an absolutist in matters of opinion concerning the human world (art, politics, religion, literature, history – and, yes, philosophy, and to some extent, the social sciences), you have to be a relativist – even if only relatively so.
Truth and trust

Nietzsche had written that we all see the world from our own perspective; there is no one perspective that can be called the ‘true’ one. ‘Continental’ Philosophers, the likes of Derrida, Foucault, Deleuze, Baudrillard, Habermas:

became excited about Nietzsche mainly because he philosophized in opposition to the traditional ideal of absolute philosophical truth.
Gary Gutting, French Philosophy in the Twentieth Century, 2001, p. 255
In short, these and other Philosophers denied, collectively, that the truth is ‘out there’. Philosophers can’t hope to make a word-map of it: it isn’t there to be either discovered or defined. Understandably, those Philosophers who hold on grimly to the ideal of a universal Wisdom dislike this sort of talk – it threatens to undermine the whole enterprise. They associate it with postmodernism, pragmatism, existentialism, subjectivism – a host of ‘barbarous programmes’ that conspire in ‘the trashing of truth’. These explosive words are taken from one of many books that have as their title ‘Truth’: We need a positive response on behalf of truth – an aggressive or, at least, an inventive defence which addresses candidly the doubts of a world in denial (…) There are two ways forward: first, by turning back to tradition; secondly – because a worthwhile chase flatters the quarry – by hounding subjectivism and relativism until the truth is run to earth. Felipe Fernandez-Armesto, Truth: A History and a Guide for the Perplexed, 1997, pp. 207, 229
Fernandez-Armesto’s ‘tradition’ is Roman Catholic; Daniel Dennett’s is atheistic. Dennett was interviewed by Guardian journalist Carole Cadwalladr – and he proved to be just as downright as any true believer: D.D.: I think what the postmodernists did was truly evil. They are responsible for the intellectual fad that made it respectable to be cynical about truth and facts (…)
C. C.: My understanding of postmodernism – and you’re a very prominent atheist – is that, in the absence of a single meta-narrative, which is God, you had competing narratives.
D.D.: Yes, and one’s true and the others are false. One of those narratives is the truth and the others aren’t; it’s as simple as that.
The Guardian, 12 February 2017
(I am reminded of Oscar Wilde’s dictum: ‘The truth is rarely pure and never simple’.) I don’t know whether journalist Matthew D’Ancona believes in God or not, but he writes as if he does. He doesn’t profess to be a Philosopher, but he philosophizes, and no less forcefully than Fernandez-Armesto and Dennett. In a book with the title Post-Truth: The New War on Truth and How to Fight Back (2017), he scorns ‘the infectious spread of pernicious relativism’ and declares: ‘The truth is out there – if only we demand it’. To whom would we issue our demand, I wonder. D’Ancona claims we live in a ‘post-truth era’, and he’s not unique in this; but what can he possibly mean? We can make some sense of modernism – as a late 19th Century break from the conventions of the past; a movement influenced by the psychology of Freud, marked by a passion for experiment in the arts – so we can make some sense of a late 20th Century post-modernist era. But what sense can we make of a ‘truth era’? When was that, and when did it come to an end? Philosophers answer those who defend relativism by saying: “You tell us there’s no such thing as truth; truth is relative – there’s one truth for me and another for you; well, you’re warning us not to believe what you say, so we won’t. Cretans are liars and relativists are liars, too.” When Philosophers quarrel, it’s often facts they disagree about, not truths, and certainly not The Truth with a capital T. Truth matters, of course it does; I accepted under Reason 5 that we can be certain about a lot of things – things that are facts by definition or discovery or both. Here are a few more; the authors think they prove the point that ‘relativism is false’ and that just about anyone who isn’t a psychotic will agree:

Walls are solid; fire burns; knives cut; jumping off a cliff will cause serious injury; it hurts more to be hit with a rock than with a violet; rain is wet; heat
cooks food; aeroplanes fly because engineers designed them according to various physical laws (not magic); there is no little gremlin inside a radio set; and so on.
Ophelia Benson and Jeremy Stangroom, Why Truth Matters, 2006, p. 41
Are all walls solid? Are stud walls and cavity walls as solid as the stone walls of a Norman castle, or the Berlin Wall of 1961-89? Aren’t walls more or less solid – relatively solid? Do all knives cut what we may want them to cut? Does a butter knife cut a rare steak; will a paper knife cut through a tightly knotted shoelace? Doesn’t it depend on how steep and how high the cliff is whether or not you’ll injure yourself seriously if you jump off it? Am I quibbling? It’s a fact – and therefore true – that to be hit by a flower of any colour is less damaging than to be hit by a rock; and it’s a fact that rain is wet. But these facts don’t falsify relativism: they simply show that we have to be careful to distinguish between truths that are facts, and truths that aren’t. Claims like ‘Dali was a greater painter than Chagall’, and ‘prison works as a deterrent to law-breaking’ are propositions that some will call true and others won’t. We had better stick to calling them claims, or truth-claims. I mentioned Theodore Sider as a Philosopher whose aim it was to describe the basic structure of the world: to ‘carve it perfectly at the joints’. Australian Philosopher and cognitive scientist David Chalmers aimed even higher, perhaps, in a book that his publisher advertised as: ‘A highly ambitious and original approach to philosophy, based on the idea that reasoning from a limited class of basic truths yields all truths about the world’. All the truths about the whole world? And all for just £25? This was a book that a borrower like me was intrigued enough to get the local public library to buy. How did Chalmers go about running all these truths to earth? He invented the ‘Cosmoscope’ – a fanciful device, he admitted, but one so powerful that nothing in the world could escape its gaze, starting with (oh, dear) all the a priori truths he could muster, together with all the facts by discovery made by researchers in every conceivable academic field.
A reasonably intelligent subject could use a Cosmoscope to answer many questions: who was Jack the Ripper; will there be a third world war; is there life on other planets? To uncover truth-claims like those about Dali and Chagall and the effectiveness of prison as a deterrent to law-breaking, Chalmers conjured up:

an extended Cosmoscope that delivers intentional truths to us, allowing us to know what every individual believes, desires, intends, and so on. Given this sort of extended Cosmoscope, there would be no obstacle to determining who is whose friend, who has what sort of money, what the laws of society are, and so on.
David J. Chalmers, Constructing the World, 2012, pp. 118, 279
George Orwell’s Thought Police would have loved to get their hands on this device. Could Chalmers have brought theorizing about truth into disrepute once and for all? I might have thought so if I’d not read a book that surveyed ‘new research on truth’ by eighteen of the ‘most promising young researchers working on the subject’. One of these researchers made this claim:

I offer what I believe to be the right view about the conception of truth (…) while in a friendly spirit making most theories of truth out to be at least partly right.
Douglas Patterson, in Cory D. Wright and Nikolaj J. L. L. Pedersen (Eds.), New Waves in Truth, 2010, p. 13
Remember what Eliot and Kripke had to say about theories? Is the word ‘right’ another word for ‘true’? And is there likely to be only one ‘right view’ about truth? (And isn’t ‘right view’ an oxymoron, in any case?). The seventeen other young researchers had views of their own. ‘True’ is a word that appears to have come from Old English (trīewe) through Old Frisian (triuwe), to mean steadfast, reliable, trustworthy. It’s cognate with ‘truce’ (Old English trēow): an agreement to suspend hostilities; and with ‘tryst’: an agreement to meet. Both ‘truce’ and ‘tryst’ have to do with keeping
a promise – indeed, ‘tryst’ would appear to be a variant of ‘trust’. Whether or not ‘truth’ and ‘trust’ have a common ancestor, their meanings do – and meaning is in the use. We take our father’s claim that “if you drop that glass it’ll break” to be true (though we may need to test the claim on pain of being sent to bed early), because we trust him; and we take our maths teacher’s claim that numbers go on for ever and ever to be true, because we trust her. Both are authority figures whom we trust (more or less) until we know the claims to be true for ourselves. God (The One, Allah, Yahweh) is still the ultimate authority figure for many – but not for non-believers and not for most Philosophers. They can only appeal to people who are in some sort of authority: those in positions of power, now or then (Abraham Lincoln, Winston Churchill, Chairman Mao); experts in their fields (Albert Einstein, James Watson, Kurt Gödel); poets, essayists, playwrights in the ‘great tradition’ of world literature (Goethe, Shakespeare, Tolstoy). Philosophers generally appeal to other philosophizers: often the almighty Greeks, or Hume, or Kant, or Nietzsche. A. N. Whitehead said the whole of Philosophy was a ‘series of footnotes to Plato’. What we take to be true will depend on whom we take to be an authority – whom we choose to trust. First, it’ll be our parents, then our teachers, then adults who seem to know what they’re talking about – the people we listen to and read, who are possessed of relevant qualifications and experience, and who have a reputation for honesty and consistency. We take in what they say; we take a look at their evidence base; we check whether others of similar standing come to the same conclusion; and we’re persuaded or not that what they say is true. What else can we do? And this process is what we all have to go through to establish the ‘truth’, in history, sociology, biology, and – yes – in the physical sciences, too.
Diderot wrote in 1757: ‘What is the truth? The conformity of our judgment with that of others’. What we take to be the truth will depend on who ‘those others’ are. Here’s the same thought expressed a little less concisely: When someone tells you something, your mind constructs a meaning for the utterance (which if all goes well, matches the meaning they had in mind). If
this meaning jibes [or chimes] with something already in your understanding, you’ll judge the utterance true (…) If you assume the speaker is sincere, then you’ll just add the meaning to your understanding of the world.
Ray Jackendoff, A User’s Guide to Thought and Meaning, 2012, pp. 198, 189
Jackendoff goes on to observe that: ‘It isn’t always easy to decide whether you can trust the speaker’. Quite so, and for that reason it’s best not to ‘assume’ anything. Most of the time, of course, we don’t have to view everything from behind dark glasses. We trust our parents up to a point; we trust our teachers and other adults – experts or not – up to a point. The point at which we can be confident that they’re telling the ‘absolute truth’ is when they give us facts that are beyond dispute.
Reason 7: It is confused about mind

I am a man of substance, of flesh and bone, fibre and liquids – and I might even be said to possess a mind.
Ralph Waldo Ellison, Invisible Man, 1952
Breath, wind, spirit

Mind, brain, soul, spirit, self, psyche, consciousness, id/ego/super-ego – we have all these words for what may or may not be the same thing, if it’s a ‘thing’ at all; if it’s an ‘it’ at all. They all seem to try to capture something essential about us; something that makes us who we are; something non-material – something that’s other than our physical body. It was thought in the past, if it isn’t now, that whatever it was had either to have been ‘breathed’ into humankind from the start, or it was innate in each individual – a potential at birth that developed as we grew. The 1200 BCE Egyptian Book of the Dead was a book of spells whose purpose was to assist a dead person on the journey through the underworld into the after-life. In it there are references to the spirit of the deceased. The belief seems to have been that one’s spirit, pictured as a bird with a human head, flew from the body as it lay asleep, dreaming, to take a long break in the world of spirits. The ‘spirit’ seems to have meant something rather different for the Hebrews: there was something breezy, something breathy, about the ‘spirit that moved on the face of the waters’, in Genesis 1:2. The Hebrew word that we romanize as ruah means: wind, breath, spirit. It is the ‘breath of life’ that animated those chosen to enter Noah’s ark; and it is what revived ‘dem bones, dem bones, dem dry bones’ (of the ‘negro spiritual’, and) of Ezekiel’s vision of the Valley of the Bones. God instructed the prophet to call on the wind to breathe on the bones and so bring them back to life (Ezekiel 37:9). It was the ruah of God that ‘came upon’ each of the prophets: it fortified Job, for instance; ‘the spirit of God is in my nostrils’, he said – and he seems to have meant that it was as physically real as his nostrils.
The Hebrews weren’t alone in this belief in the physical airiness, or breathiness, of what animated them – indeed, that word that I’ve now used twice comes from the Latin anima, breath, which itself comes from the Greek anemos, wind; and the primary meaning of the Greek psuchē, our ‘psyche’, is breath. In the course of time, these terms were given an extended use to refer to what animated, in-spired, en-souled us – and they stood in, eventually, for the ‘soul’ or the ‘mind’ itself. It was the writer of Psalm 51 who first used the words ‘holy spirit’: ‘take not your holy spirit from me’ he pleaded; but the gospel-writers still needed to give this a physical form. The spirit that descended on Jesus at the Jordan ‘came as a dove’ (Mark 1:10) – perhaps the same dove that Noah released, centuries before, to see if the flood had abated. It’s the Greek word pneuma, meaning wind, air, that we translate as the ‘Holy Spirit’, or ‘Holy Ghost’ in the English New Testament (‘spirit’ being the Latin, and ‘ghost’ being the Germanic words for the same entity). The ‘Holy Spirit’ that filled the apostles at Pentecost, and the house where they were sitting, came as ‘the rushing of a mighty wind’ (Acts 2:1-4). It was this Spirit, now in the shape of forked tongues of fire, that made them, all of a sudden, polyglots. The words ‘Spirit’ and ‘Ghost’ are used more or less interchangeably in older versions of the New Testament; and it’s the ‘Holy Ghost’ that churchgoers say they believe in when they recite the Apostles’ Creed (and as I used to do myself, though I didn’t believe in ghosts on weekdays). In modern versions of the Bible, ‘Spirit’ is the preferred term, presumably because those who’d deny that they believe in ghosts don’t have as much trouble believing in spirits. Christians can’t help but believe in ‘spirit’, in what is ‘spiritual’, and not material: how otherwise can they explain how a non-material God interacts with a very material world?
What theologian nowadays will say that God the Father has a body, in the same way that God the Son, for thirty-three years or so, had a body? He has to be ‘spiritual’, and he has to live in a fifth dimension – a ‘spiritual dimension’ – if he is to live anywhere at all. Such an ether must make considerable demands on the imagination of a theist, never mind of an atheist.
Rowan Williams, former Archbishop of Canterbury, comforted the faithful in Christchurch, New Zealand, in the year following the 2011 earthquake, in these words:

The spirit of Jesus is the spirit that constantly renews in us the ability to pray with integrity and conviction; to pray to God intimately, as to a parent, to say Abba, Father; that’s what the spirit does.

Is this use of the word ‘spirit’, and its offshoot ‘spiritual’, more than a figure of speech? Do archbishops, and bishops, and priests, and deacons, when they use the words, really believe that there is a fifth dimension of which we have no tangible evidence whatsoever – of which we can’t, by definition, have any evidence whatsoever? To be sure, there are those who aren’t Christians at all, who don’t consider themselves to be religious, who would admit to having a ‘soul’ – and they’d very likely suppose it to have a different function, and a different location, from their brain. Of course, we speak of being compassionate as ‘having a heart’, and we say without thinking, as we show pity for the bereaved, that ‘our hearts go out to them’. But, in this instance, surely, the heart really is no more than a figure of speech. I don’t imagine believers still think of ‘spirit’ as having the consistency of wind, or breath – a movement of the air. I don’t imagine they think of it as a fifth force, either, to add to the four physical forces that we do have evidence of. What do they think of when they talk about having a ‘soul’? Plato thought of it as the ‘life-force’ – and to some extent, what life is, essentially, remains a mystery. Galen’s 2nd Century idea that the heart pumps a ‘vital spirit’ to the brain, where it’s admixed with pneuma, or air, within the folds of the brain, lasted right through to the 19th Century, when Darwin thought of thought as a ‘secretion of the brain’.
Why should anyone doubt that they had a ‘soul’ when the Church told them they had, and before the study of what makes us what we are was anything more than guesswork? How the human mind works could only be understood by analogy with whatever technology was available at the time:
For Leonardo da Vinci, the mind was like a clock, or other mechanical device.
↓
For Charles Babbage and Ada Lovelace, it worked in something like the way in which their Analytical Engine, or calculating machine, worked, in the 1830s–40s.
↓
For Melvil Dewey, it was a highly organized library, whose books were classified according to his Dewey Decimal System of 1876.
↓
For Freud, it was natural, as steam-power was introduced, to think of the mind as a hydraulic pumping system, harnessing multiple flows of energy.
↓
For Charles Sherrington, a neurophysiologist writing in 1942, it had something of the character of an ‘enchanted loom’, with millions of flashing shuttles.

It was progress of a sort. In the middle of the 20th Century, the Bell Telephone Exchange was a suggestive analogy. Then it was the turn of the mainframe computer. Now, if we’re not neuroscientists, we may think of the mind as an intranet, or – to the extent that it interacts with other minds – as an internet terminal. Artificial intelligence is premised on the idea that the workings of the mind might somehow be represented in one or another form of robotic technology. Most of us have no choice but to think of the mind in a metaphorical way:
It is virtually impossible to think or talk about the mind in any serious way without conceptualizing it metaphorically (…) We have no single, consistent, univocal set of non-metaphorical concepts for mental operations and ideas. Independent of these metaphors, we have no conception of how the mind works.

George Lakoff & Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, 1999, pp. 235, 248
The wonder is that some Philosophers (any Philosophers) still have a use for the word ‘soul’. Roger Scruton, whose Modern Philosophy I have already quoted more than once, talks about our capacities for thinking, feeling, intending, and for being conscious of our mental states – in other words, our ‘mind, passion, will and self’ – as being all wrapped up in the word ‘soul’ (p. 209). It’s thought of by this Philosopher as something set apart from the body, and as the seat of our moral sense. Alain de Botton struggles to give a name to that part of us that isn’t precisely:

Intelligence or emotion, not character or personality, but another, more abstract entity loosely connected with all of those and yet differentiated from them by an additional ethical and transcendent dimension – and to which we may as well refer, following Christian terminology, as the soul.

Alain de Botton, Religion for Atheists, 2012, p. 113
What is an atheist to make of this? What is an open-minded Philosopher to make of it? Can it still make sense to think of our intelligence as residing in one part of our body and our ethics in another? Philosophy really ought to have nothing more to do with ‘soul’, ‘spirit’, and ‘spiritual’ than it does with ‘sin’, ‘heaven’, and ‘the holy’. We do use metaphor, and perhaps we have to when we don’t know enough to be literal – when we’re still confronted by mysteries. Perhaps the Philosophy that I’ve read on the subject was written before we knew as much about the mind/brain as we do now (having said which, of course, we are still a long way from knowing as much about it as we do our physical body).
It was Descartes who drew a vertical line between mind and body that persists, for some, even now:

I certainly possess a body, with which I am closely conjoined; nevertheless, because, on the one hand, I have a distinct idea of myself as a thinking thing, and, on the other, I possess a distinct idea of a body, insofar as this is only an unthinking thing, it is certain that my mind is entirely and truly distinct from my body, and may exist without it.

René Descartes, Meditations, No. 6, para. 9
(Just in case we still accept that it’s enough to say ‘I think, therefore I am’, Giulia Enders, in her book Gut, points out that in light of recent gut research it would be more accurate to say: ‘I feel, then I think, therefore I am’). Descartes’ idea was entirely in line with what Paul and other Christian thinkers taught: that the mind was separate from, and ‘higher’ than, the body, the one immaterial and the other grossly material. It was Descartes who gave us what looked like the most unassailable of dichotomies – dualism – and in so doing set us, perhaps, the most taxing of all Philosophical ‘problems’. And it’s not only a Philosophical problem: when someone appears to overcome a physical ailment by a sheer act of will or prayerfulness, we talk of the power of ‘mind over matter’ (when we don’t call it a miracle). We talk about physical illness, and mental illness, and we refer patients to two separate health services, as if our nervous system was in a quite separate compartment of our body from our immune system. Of course, this is neither surprising, nor is it blameworthy – and what we do is at last catching up with what we know.
The end of dualism

Some Philosophers are catching up, and some aren’t. Few now would want to be called a dualist, but there are equally few prepared to commit themselves to the physicalist view that the mind and brain are one and the same, and that our thoughts are physical processes. E. J. Lowe was keen to prove the
‘inadequacy of physicalism’ (by ‘reify’ in the following quotation he means to ‘make a material thing of’):

[I am] unashamedly committed to a dualism of self and body, though emphatically not one along traditional ‘Cartesian’ lines (…) I would prefer to speak of the self-body problem, for I do not wish to reify the ‘mind’ as an entity on a par with the body.

E. J. Lowe, Subjects of Experience, 1996, pp. ix, 1
He refuses to reduce mental to biochemical processes, calling such ‘reductivism’ a dogmatic prejudice. Grayling doesn’t want to identify the mind with the brain, either – to reduce one to the other:

Here is a truth that neuropsychology appears entirely to overlook: minds are not brains (…) minds are the result of social interaction between brains (…) A single mind is, accordingly, the result of interaction between many brains, and this is not something that shows up on an fMRI scan.

A. C. Grayling, The Challenge of Things: Thinking Through Troubled Times, 2015, p. 121
There is a lot in this, of course: nobody wants to discount the effects of lived experience on our thinking; to do that really would make robots of us. Grayling doesn’t call neuroscience into question; but he does want to ensure that what he calls ‘neurophilosophy’ has a part to play, without being very clear about what Philosophy (without the ‘neuro’) would bring to the table. There is something troubling about ‘reducing’ mind to brain; and it’s still more troubling to think of reducing it to body. It’s like reducing love to lust. It may be, though, that the mind/brain ‘problem’ arises because we have two words: we think of the brain as a ‘thing’ – something that we can see on a slab, and poke about in if we’re so minded. The mind isn’t a thing in the same way, or, at least, we don’t think of it as something tangible. But, surely, this is only because we’ve inherited ‘spirit’ language (the primary meaning of the word esprit in French is ‘mind’), and the idea that mind is somehow above matter. To accept that mind is matter is to admit, after all, that there’s nothing in the world that isn’t stuff. It’s an idea that dies hard. It does for Joseph Margolis:
It is difficult to justify ascribing such mental conditions as thoughts or emotions or reasons or hallucinations to mere physical bodies, or, indeed, to a congeries of sub-atomic particles.

Joseph Margolis, Introduction to Philosophical Problems, 2006, p. 203

What bothers many is that if we do ascribe thoughts, emotions, and so on to sub-atomic particles, or electrochemical impulses – in short, if we accept physicalism – we lose, or we seem to lose, what Lowe meant by ‘self’, and what Kim means by ‘subjectivity’:

If subjectivity should turn out to be accommodatable in a physical world, we would like to know exactly how the accommodation works – or exactly how subjectivity arises in a purely physical system.

Jaegwon Kim, in Brian Leiter (ed.), The Future for Philosophy, 2004, p. 151
But do we lose these aspects of mind? If we accept that our self, and our subjective world-view, can be accounted for by our memory – the sum of our ‘lived experience’ – and if, as we know, long-term memory is stored in multiple regions throughout the nervous system, it shouldn’t be difficult to locate ‘mind’ not only in the brain, but in the rest of the body, too. John Searle has done a lot of thinking about this, and he rejects dualism pretty firmly: he’s quite prepared to accommodate mind in a physical system:

We know for a fact that all of our conscious states are caused by brain processes. This proposition is not up for grabs (…) consciousness is a biological phenomenon like any other.

So far, so physicalist (or materialist); but he adds a reservation:

This does not prevent consciousness from being a higher-level feature of the brain in the same way that digestion is a higher-level feature of the stomach. In short, the way to reply to materialism is to point out that it ignores the real existence of consciousness.

John Searle, Mind, Language, and Society: Philosophy in the Real World, 1999, pp. 51, 52
Is digestion best described as a ‘higher-level feature of the stomach’? Isn’t digestion of food what the stomach does? Can we really separate what the stomach does from what it is? If consciousness is biological, as Searle says it is, in what sense is it ‘higher’ than the biological brain? Again, does it make any sense to separate what the nervous system does from what it is? Why this passion for dichotomies? Is it rooted in our unwillingness to give up the fantasy that there is something of us that will survive our physical death?
Mind the gap

‘Consciousness’ would seem to be a word that we have to add to ‘spirit’, ‘soul’, ‘mind’, ‘self’, and ‘subjectivity’ as ways in which Philosophers can hold on to a place in our make-up for the very special, very human, ability to think – and not just to think, but to think in that special breathy, windy way that is thinking ‘philosophically’. Nineteenth-Century theologians had to adjust themselves to the advances in scientific knowledge: this knowledge left fewer and fewer gaps in the cosmos for the sort of God it had been possible to believe in – a God up there, seated ‘above’ the created world in a ‘heaven’ into which Christ had ‘ascended’. Because we love alliteration in English, he was called (by sceptics) ‘the God of the gaps’. ‘Consciousness’ is the last refuge – the last gap – into which a belief in an immaterial mind can be slotted. It’s fair to say that most Philosophers are as mystified by what consciousness is as the rest of us:

After almost three hundred years of tinkering with the brain we still haven’t come up with a satisfactory explanation of consciousness. Nor can we satisfactorily define consciousness.

Richard M. Restak, Mind: The Big Questions, 2012, p. 50
Perhaps, like ‘knowledge’, ‘consciousness’ is just a word: it’s only been around since the 1600s, and it’s generally taken to signify ‘awareness’ of one’s own thoughts and feelings. But that just substitutes one word for another. Look in a thesaurus, and you’ll find several more synonyms for ‘consciousness’ – including ‘mind’. Because we don’t really understand very much about what consciousness is, we can still make room in the idea for
reasoned thinking and free will – for what spares us from being only as mindful as dolphins, and bonobo chimpanzees, or Caputo’s robots. Daniel Dennett has done as much as any Philosopher to ‘explain’ consciousness, but even he admits to trading one metaphor for another, just as thinkers and doers over the centuries have come up with one analogy for the mind after another. Philosophers can go on thinking about thinking, as Descartes did; but this is all they can really do, in the comfort of their armchairs, or sitting at their desks writing – to date (according to Chalmers – he of the ‘Cosmoscope’) – twenty thousand papers on the subject. Some of these papers ponder yet another dichotomy: there are those Philosophers who believe that a thinker’s thoughts depend on factors external to the mind of the thinker – these are called ‘externalists’. ‘Internalists’, as you’ll have guessed, are those who believe that thoughts are located within the skin and skull of the thinker. There are even ‘social externalists’, ‘reductive externalists’, ‘vehicle internalists’, ‘phenomenal internalists’, and more. To read theories about where our thoughts are located is rather like eavesdropping on the debate between those theologians who held that Christ was of the same nature as God, and those who said he was of like nature (there is literally one iota to tell the two words apart, in Greek); or on the dispute, in Jonathan Swift’s Gulliver’s Travels, between those who thought it best to crack a boiled egg at the bulbous end, the ‘Big-Endians’, and those who favoured cracking it at the more pointed end, the ‘Little-Endians’.
There really oughtn’t to be any difficulty about accepting that consciousness – like mind, like memory – is ‘located’ throughout the nervous system; indeed, that consciousness is the nervous system; that, just as the ‘knee-bone’s connected to the thigh-bone’, and the ‘thigh-bone’s connected to the hip-bone’, so the nervous system’s connected to the limbic system, and the limbic system’s connected to the digestive system, and so on. The Philosophers, and philosophizers, who object to ‘reducing’ the mind to the body often do so because they’re reluctant to believe that evolution – the action of apparently chance mutations – can have given rise to the marvel of consciousness. In this respect, they are like the religious believers who objected to the claim, made by evolutionary biologist Richard Dawkins, that
natural selection over aeons of time was responsible for the wonder that is the human eye. Such a claim, they protested, is just as outlandish as to claim that, over countless millennia, a team of (even bonobo) chimpanzees could type out the Bible. Most scientists would happily join most Philosophers in marvelling at the human brain: that it is made from the same mitochondria-filled cell material as the liver and the kidneys, yet gives rise to thought. There is no need to disparage evolution to do justice to the mind. William James was a physiologist, psychologist, and philosopher. Had he been French, he would have been called an intellectual. Being an American, he was a pragmatist, and is called the father of American psychology – indeed, he’s reputed to have said that the first lecture in psychology that he ever heard was the first that he ever gave. He was a no-nonsense sort of thinker:

Consciousness is the name of a nonentity, and has no right to a place among first principles. Those who still cling to it are clinging to a mere echo, the faint rumour left behind by the disappearing ‘soul’ upon the air of philosophy.

William James, in Bertrand Russell, History of Western Philosophy, 1961, p. 767
Consciousness may be only a word, but it’s a useful word. We wouldn’t want to be without it, whereas, unless we do theology, we can do without the word ‘soul’, and its cousins ‘spirit’ and ‘spiritual’, with all their primitive associations with breath, wind, and air. The word ‘mind’ is useful too, as long as we use it in the same sort of metaphorical way we use the word ‘heart’. I’m sure William James wouldn’t have minded; and I’m sure Henry Marsh wouldn’t mind. He was a neurosurgeon at St George’s University Hospital in South London. He said, on BBC Radio 3, in 2015:

It is a fact beyond doubt that thought is a physical process. Many people find that hard to accept, or understand. We’re all innate Cartesian dualists (…) It’s absolutely clear from neuroscience and practical neurosurgery that we are our brains and that everything we think and feel is an electrochemical process.

In his book Do No Harm, published the previous year, he had written: ‘As a practical brain surgeon I have always found the philosophy of the so-called “Mind-Brain Problem” confusing and ultimately a waste of time.’
Many Philosophers do identify the mind with the brain now, and are happy to call themselves physicalists. But there are many for whom the relationship between the mind and the brain remains a problem – for whom trying to solve it isn’t a waste of time. They fear that to reduce thinking to biochemistry is to risk reducing Philosophy to a spectator sport: the neuroscientists and the brain surgeons score the goals whilst Philosophers are the men (they’re generally men) who, when the broadcast game is over, supply ‘reaction’. You will have gathered by now, Sophie, that this is what I believe Philosophy has come to.
Reason 8: It adds nothing to the golden rule

Fichte replied, ‘We create the world not out of our imagination, but out of our sense of duty. We need the world so that we may have the greatest possible number of opportunities to do our duty. That is what justifies philosophy, and German philosophy in particular’.

Penelope Fitzgerald, The Blue Flower, 1995
Divine command

Here we come to the last of the ‘big questions’: How should we behave? It was the moral example of Jesus of Nazareth that was the last element of Christianity that I held on to when I ceased to be a Christian. I continued to buy into the notion that he was the example par excellence of ‘good’ conduct. It was only over time that I came to think that turning the other cheek, and selling all your property and giving the money to the poor – not to mention making a blood sacrifice of yourself – set the bar impossibly, even absurdly, high. Besides, was it ‘moral’ to encourage his followers to abandon wife, brothers, parents, children ‘for the sake of the kingdom of God’ (Luke 18: 29)? What about that command to ‘honour your father and your mother, so that your days may be long in the land’? ‘Do unto others as you would have them do unto you’; ‘treat others as you would want them to treat you’. These or similar words give us the ‘golden rule’. Why golden? Perhaps it came from the dismal fact that if you had the gold, you made the rules. The term in English is said to have been coined by a couple of Anglican clergymen, in 1604; and we know it now as the gold standard of behaviour across the major and minor religions and moral codes all over the world. (There’s also a ‘silver rule’, in the negative: ‘Don’t do to others what you wouldn’t want them to do to you’.) In its usual form, it’s a self-centred rule: ‘you’ are the criterion of what’s acceptable; but it can be expressed so as to centre it on the other: ‘treat others as they would have you treat them’.
The rule can be found in a text dating back to the Egyptian Middle Kingdom (2040–1650 BCE); in the Babylonian Code of Hammurabi, of c. 1754 BCE; in the teachings of Confucius (551–479 BCE), and of the Buddha (6th–4th Century BCE). In its earliest form, it might have sanctioned revenge: ‘what someone does to you, do the same to them’; or, as the writer of Exodus puts it: ‘give eye for eye, tooth for tooth’ (Ex. 21: 24). It was the writer of the next book – Leviticus – who gave us the first outing of the more forgiving: ‘you shall love your neighbour as yourself’ (Lev. 19: 18). It’s worth observing that the words ‘command’ and ‘commandment’ occur in the Old Testament a total of 213 times, and in the New Testament 103 times. Behaving yourself is still seen in the New, as in the Old, as obeying orders. The words ‘behave’ and ‘behaviour’ do occur in the New Testament, but ‘moral’ and ‘ethical’ don’t. No ethical theory is offered, only top-down commands. All but one of the Ten Commandments take a negative form: ‘You shall not…’; and the exception – ‘Honour your father and mother…’ – makes it quite clear that you’ll be in trouble if you don’t. The Sermon on the Mount, to be sure, was a radical departure from Jewish precedent; but Jesus was still in command mode when a Jewish lawyer asked him which he thought was the ‘greatest commandment in the Law’:

Love the Lord your God with all your heart, with all your soul, with all your mind. This is the greatest commandment. It comes first. The second is like it: Love your neighbour as yourself. Everything in the Law and prophets hangs on these two commandments.

Matthew 22: 35-40
They’re positive commands, but they’re still commands. In John’s Gospel (13: 34), Jesus gives his disciples what he calls a ‘new commandment’: ‘love one another as I have loved you’. It isn’t a new commandment at all, of course: it’s a re-statement of the old contract, except that now the contract is with the Son, not the Father. The problem for the first Christians was to decide how many of the regulations in the Jewish Law they should continue to obey. They were still
circumcised Jews: should they still decline to eat shellfish? Should they leave off sowing and pruning their vines, and let the grapes rot, every seventh year? Should they heap their sins on to a luckless goat, and burn it on an altar, dung and all? And should they only offer to God bread to which no yeast had been added to raise the dough? How many of these regulations in the first five books of the Bible were moral laws, and how many merely ceremonial? It was a conundrum. There are lots of instructions in the letters of Paul, for example, as to how Christians should behave (some of them, frankly, bizarre: such as that a woman should either be veiled when she prays, or be shaven-headed; 1 Cor. 11: 6), but there’s no coherent code. For Plato, wisdom lay in knowing the ‘Good’, the ‘form’ of Goodness, and living according to reason (whatever that might be). For Aristotle, ‘virtue’ was living so as to realize our given nature: his was a practical wisdom; a ‘good’ person was one who acted purposefully. His ethics were of a piece with his politics. Cicero wove these threads together in his dictum: ‘True law is right reason in agreement with nature’. It was a law that held for all nations: ‘There will not be one law in Rome and another in Athens’. This was all very well if people in Rome and Athens – and Jerusalem – could agree about what was ‘right’ and ‘reasonable’, and if their ‘nature’ was the same in all three places. Building on Greek foundations, Cicero did name four ‘cardinal virtues’: Courage, Temperance, Justice, and Wisdom. A fourth-century monk identified a number of ‘cardinal sins’, which another monk, John Cassian, imported into the West in the following century; then, in the century after that (these were slow-moving as well as dark ages), Pope Gregory revised the list and presented to the Christian world what he called the ‘Seven Deadly Sins’ – seven always was a rather significant number.
They were Pride, Greed, Lust, Envy, Gluttony, Wrath, and Sloth, still all negative (‘you shall not’) examples of bad behaviour. What about good behaviour? Theologians in the Middle Ages (the ‘Schoolmen’) were keen on matching patterns (as well as on dichotomies), so, as there were seven deadly sins, there had to be seven heavenly virtues. Paul had named three, and Cicero four: so the seven positive examples of Christian behaviour were Paul’s Faith, Hope, and Love, and Cicero’s Courage, Temperance, Justice, and Wisdom. These vices and
virtues could be cast as characters in the morality plays that trundled from one market-place to another in the 1500s, to add to the reminders of the fiery consequences of sin painted on church walls. Cicero’s appeal to Nature wasn’t forgotten, though: Aquinas might well have had the catalogue of vices and virtues in mind, when he asserted that people knew the difference between good and evil, right and wrong. It was also clear to him that we ought to treat others as we would have them treat us: it was what we had to do to conform to our God-given nature. It was what came to be thought of as ‘natural law’ – law written in the book of nature. Natural law was conjoined with Cicero’s ‘right reason’, so that nature and reason came to be authorities almost as compelling as God and the Bible. The problem was to determine what it was natural and reasonable to do in particular situations. The pre-Reformation Roman Catholic Church hadn’t been as interested in patience, temperance, and the other virtues as it might have been: it had paid more attention to pigs’ bones said to be those of this or that saint, or chunks of wood said to be from the cross; selling pardons to pay for the building of St Peter’s, Rome; and counting Hail Marys. Luther denounced abominations of this sort like Paul on a mission, urging people to read the Bible for themselves in their own language. They didn’t need a priest to tell them how to behave. That other great reformer, John Calvin, though, was more inclined to talk in terms of Old Testament commandments. He had rather little time for Luther’s ‘priesthood of all believers’: it wasn’t for individuals in the pew to interpret the Bible; that was the job of the Church – his church, in Geneva, reading his commentaries on the Bible, mindful of the code of behaviour set out in his very own Institutes of the Christian Religion, of 1536. The word ‘conscience’ doesn’t appear in the Old Testament, but it does occur in the New more than 30 times.
Paul, in particular, uses the word a lot. It meant something to both Augustine and Aquinas, so it found a home in Catholic theology. Cardinal Newman, in 1870, even thought of the conscience as evidence for our knowledge of God. Believing thinkers face a dilemma, though: when is it right to obey a command in the Bible, or the teaching of the Church, and when to listen to one’s conscience, when the one
might seem to be in conflict with the other? Protestants who value individual autonomy, as Bishop Butler did in the 18th Century, might look to their conscience as the voice of God, and therefore as the ultimate moral guide.
What is and what ought to be

Just as I had taken the moral content of Christianity to be its legacy of most value, so I supposed that, of the three ‘big questions’ of Philosophy, the how-should-we-behave question was the biggest. The more I read, the more I was happy to leave it to scientists of one sort or another to answer the first two questions: to show us what there is in the world, and how we come to know anything at all. Morals, or ethics, did seem to be the rightful province of philosophy: indeed, it seemed to be all that was left for the practical philosopher to think about. And it seems others are of the same mind:

Philosophers who don’t write ethics are failing in their duty, one often hears, and the first duty of the philosopher is to think about ethics; to add a chapter on ethics to each of his or her books.

Jacques Derrida, The Gift of Death, 1995, p. 67
Derrida was annoyed by all the ‘moralists and good consciences’ who, he said, preach about our moral responsibility, daily and weekly, in the press and broadcast media. He’s right that Philosophers have added to the tomes of sermons by forgotten divines many shelves of books on moral theory. Hume was among the first ‘modern’ thinkers to take issue with those sermons. More than 250 years previously, he had said something very similar to what Derrida said. Like Derrida, it was those who moralized who were the butt of his displeasure. (I have re-worded this classic statement slightly):

In every system of morals I’ve met with so far, I’ve noticed that the author – having reasoned in the customary way about God and human affairs – all of a sudden, instead of the usual is or is not propositions, makes statements about what ought or ought not to be. The change is subtle, but of enormous consequence.

David Hume, A Treatise of Human Nature, 1739
By ‘the author’ Hume means the clergyman who wrote at his study desk, or who preached from the pulpit. What Hume objected to was this clergyman’s presumption that he could switch from observing this or that behaviour to saying what was right or wrong about it, as if the one (the ‘ought’) followed logically from the other (the ‘is’). In respect of the 18th-Century clergyman in his three-decker pulpit, Hume was probably right to object – though that clergyman might have protested that he spoke on the authority of the Bible: that was what justified him in declaring what ought to be. A number of Philosophers have agreed with Hume that there’s a logical gap between ‘is’ and ‘ought’, and that there may be no bridge across it. R. M. Hare, for instance, in The Language of Morals, of 1952, wrote that you couldn’t infer an imperative sentence from two indicative sentences:

These boxes need to be taken to the station.
This is one of the boxes.
Take this box to the station.

But, surely, context is all. If the third sentence had been uttered by the lord of the manor to his manservant, there would have been no need to appeal to logic – though it would have been friendlier to add ‘please’. Any parent bridges the gap when she shouts up the stairs: “It’s 8.00 o’clock! You’ll be late for school! (You really ought to) Get a move on!”. Nobody upstairs is likely to shout back: “That doesn’t follow!” (or “Non sequitur!” if the smart-aleck upstairs is learning Latin). Hume didn’t believe we could work out what was moral by reasoning about it, any more than he believed we could reason that all birds can fly, just because the ones that we happen to have seen can fly. When we approve of what someone does, we call the act ‘good’, and we call that someone good, too. A good act tends to produce pleasure, and a bad one pain – and that, says Hume, is really all there is to morals. Our morals are the products of our feelings. Kant couldn’t have agreed less.
He shared Hume’s distaste for clerical dogmatism; but he didn’t want moral truths to depend on our feelings. He didn’t want them to depend on God, either, or on arguments for the existence of God. We met his ‘categorical imperative’ under Reason 3: ‘Act on
that principle which, in so doing, you would want made a universal law’. Anyone, whether they lived in Athens, Rome, Jerusalem, or elsewhere, could see that this stood to reason. It was our plain moral duty to set an example to others by the way we behaved. It was a duty that Kant was quite hard-line about: there could be no excuse for telling a lie, for not paying a debt, for breaking a promise, or for committing suicide. There could be no exceptions. When we didn’t do our duty, God was there to pass sentence. Put more simply, Kant’s imperative amounts to no more than: ‘Behave as you’d expect others to behave’ – in which form it bears a striking resemblance to the golden rule. Jeremy Bentham was a younger contemporary of Kant’s. He, too, looked for a universal basis for morals, but his basis would be as godless as the University of London that he co-founded. He adopted the principle of the Scotsmen Hume and Hutcheson: what is ‘good’ is what gives most pleasure, and least pain; or, more usefully, what gives pleasure to the most people, and pain to the fewest. He called it the principle of ‘utility’, or, more plainly, the ‘Greatest Happiness Principle’.

Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do (…) They govern us in all we do, in all we say, in all we think.

Jeremy Bentham, Introduction to the Principles of Morals and Legislation, 1789
He might have expressed the principle in this way: ‘Behave so that you maximize the happiness of others as well as your own’ – in which case he would have come pretty close to re-stating the golden rule. These two thinkers, Kant and Bentham, gave us what we might call the two ‘classic’ ethical theories: the deontological and the consequentialist.

Deontological ethics (from the Greek déon, that which is binding) is rule-based; it tells us what it’s our duty to do; what we are bound to do. Generally,
104 it’s religious believers – brought up on divine commandments – who think of ethics in this way. But there are those secular Philosophers who will claim that ‘we all know that we mustn’t pull someone’s finger-nails out to make them confess’, as if this was a priori knowledge. For the deontologist, an act is right or wrong in itself. Consequentialist ethics (of which Bentham’s utilitarianism is one example) have an eye to what will follow from an act – whether the consequences will give pleasure, or inflict pain, or more of one than the other. In its utilitarian form, it’s a rather calculating process: ‘how many people will benefit from this, and how many will be hurt?’ But in common-sense use, it’s what we might consider when we ask a boy who’s pulled a girl’s pony-tail: “what if everybody did that?” For the consequentialist, an act is right or wrong depending on what it might lead to. The problem is, it isn’t always easy to predict what the consequences of an act might be. Besides, if we have to calculate before we act precisely how many people might be made happier, and how many less happy, consequentialism turns out to be just as hard-line as deontology. And what is ‘happiness’, anyway? It’s no easier to be sure what it means to be happy than what it means to be good. Still, in practice, we’re all utilitarians when it comes down to it: we may not be able to foresee the consequences of stamping on the neighbour’s daffodils when we’re four; but we know what they’re likely to be when we’re fourteen. Henry Sidgwick (whom I introduced in Reason 2) was the first Professor of Moral Philosophy at Cambridge who wasn’t a clergyman, in 1883. He was a utilitarian who, like Bentham, wanted a non-religious basis for ethics – one that was reasonable and universalizable. 
He recognized the intuitive pull of egoism (seeking one’s own pleasure); but, also, that:

The good of any one individual is of no more importance, from the point of view (if I may say so) of the Universe, than the good of any other.
Henry Sidgwick, The Methods of Ethics, 1874
105 How to secure the good of that ‘other’, when happiness isn’t the only good? On what rational basis could it be shown that you, Sophie, have as much of a duty to that other as you have to yourself? Sidgwick’s answer was that our intuition, or common sense, tells us that we have to take the interests of others into account. But he recognized himself that this was a rather inadequate answer: was common sense really all that ‘common’? And was common sense just another (rather weaker) form of words than Kant’s ‘duty’? Common or not, the answer impressed one of his students, G. E. Moore: he defended our common-sense attachment to familiar intuitions (or intuitionism) – and the golden rule was certainly one of these. I referred in Reason 6 to the ‘verification principle’ devised by the Vienna Circle: truth was analytic if it was true by definition; or it was synthetic if it was shown to be true empirically. A. J. Ayer spent a year in Vienna after graduation, and was fired by the notion that all propositions that aren’t either analytic or synthetic are factually meaningless. They are mere expressions of feeling: Thus, if I say to someone, ‘You acted wrongly in stealing that money,’ I am not stating anything more than if I had said, ‘You stole that money.’ In adding that this action is wrong I am not making any further statement about it. I am simply evincing my moral disapproval of it. It is as if I had said, ‘You stole that money,’ in a peculiar tone of horror (…) the function of the relevant ethical word is purely ‘emotive’. A. J. Ayer Language, Truth and Logic, 1936
Thus, we can add emotivism to intuitionism, deontology, and consequentialism. Ayer was right that the statement ‘stealing is wrong’ is factually meaningless – unless by wrong we mean illegal; but it is not emotionally meaningless, particularly to the owner of what has been stolen. As an atheist, Ayer was keen to rule out propositions like ‘prayer is good for the soul’, or ‘heaven awaits the pure in heart’; but in dismissing religious feelings, he dismissed feelings of all sorts. Babies and bath-water come to mind.
From covenant to contract

The idea underlying the golden rule is that the two parties to it, the ‘self’ and the ‘other’, might change places – it’s reciprocal. Both parties are motivated to honour the rule. It’s an idea that has given rise to ethical theory based on the contract. (Or, perhaps, it’s a revival of the Old Testament ‘covenant’ relationship between Yahweh and his chosen people). A lot of the ways in which we treat others and they treat us have to do with a contract, written, or spoken, or merely understood: we promise to leave a rented flat as we found it; we agree to sell a bike at a price acceptable to ourselves and the buyer; we undertake to abide by club rules in return for the benefits of membership. Jean-Jacques Rousseau based The Social Contract of 1762 on the idea; and John Rawls took it up again in 1971. He posed the basic moral question: why should we take any account of the interests of other people, especially when it might not be to our advantage to do so? He had read what Locke, Hume, Rousseau, Kant, Bentham, and J.S. Mill had written on contract theory, and he wanted to see whether he could put the idea on a firmer footing. Duty, conscience, utility, Rousseau’s ‘general will’, intuition – these might motivate; but they didn’t necessarily secure justice. He imagined what he called an ‘original position’ – a sort of Garden of Eden state of innocence – in which everyone is equal:

It is understood as a purely hypothetical situation characterized so as to lead to a certain conception of justice. Among the essential features of this situation is that no one knows his place in society, his class position or social status, nor does anyone know his fortune in the distribution of natural assets and abilities, his intelligence, strength and the like. I shall even assume that the parties do not know their conceptions of the good or their special psychological propensities. The principles of justice are chosen behind a veil of ignorance.
John Rawls, A Theory of Justice, 1971, p. 12
Rather like William Golding, picturing the boys in his novel Lord of the Flies, sitting in a circle on their island – before things went badly wrong – deciding
107 on the rules by which they’d live, Rawls pictures people discussing how goods ought to be distributed, and he decides that they’d want equal shares of rights and duties. They’d want any inequalities in the way wealth and authority were allocated to be to the benefit of all, and in particular to those who might end up the ‘least advantaged’. If people didn’t know the ends in advance, they’d accept the means that ensured fairness. Justice meant fairness – not perfect equality, but a distribution of goods that allowed greater benefits to be enjoyed by some, only if this inequality improved the lot of the many in the long run. Rawls later acknowledged that this benign vision of a well-ordered society was unrealistic: it didn’t take into account the fact that, over time, free citizens adopt different, reasonable religious, political, philosophical, and moral opinions. So, he revised his theory: Ideally, citizens are to think of themselves as if they were legislators and ask themselves what statutes, supported by what reasons satisfying the criterion of reciprocity, they would think it most reasonable to enact. John Rawls, Political Liberalism (3rd Edn.) 2005, p. 444
This sort of citizens’ council still assumes a ‘well-ordered’ society, though. There’s still a trace of Kant in it; but mostly it’s the golden rule re-worked for the 21st Century. So, to deontology, consequentialism, emotivism, we can add contractualism. Sophie, if you’re allergic to game theory, you can skip the next paragraph in brackets. I think it’s relevant, though, to the ethics of contractualism in practice, where Rawls’s vision may seem rather other-worldly. [To the extent that any of these theories can be tested, contractualism has been put to the test in a computer-based ‘prisoner’s dilemma’ game. There are two players, A and B: each is a prisoner in a separate cell. There isn’t enough evidence to convict them on the major charge, but there is on a lesser charge. Each is given the chance to betray the other (i.e. to defect) or to remain silent (i.e. to co-operate with the other). If both defect, they stay in prison for two years; if one betrays the other and the other (unknowingly) co-operates, the betrayer is freed and the other gets three years; if both co-operate by staying silent, they both get one year on the lesser charge.

                               B stays silent (co-operates)    B betrays A (defects)
A stays silent (co-operates)   Each serves 1 year              A serves 3 years; B is freed
A betrays B (defects)          A is freed; B serves 3 years    Each serves 2 years.
Which strategy should a player adopt: defect, co-operate, copy what the other does, or do the opposite? All these strategies (and others that I might not have thought of) were put through their paces on numerous computers numerous times, to see which strategy worked best for both A and B. The optimum strategy turned out to be the one called ‘Tit for Tat’: The analysis of data from these tournaments reveals four properties which tend to make a decision rule successful: avoidance of unnecessary conflict by co-operating as long as the other player does; a preparedness to be provoked in the face of an uncalled-for defection by the other; forgiveness after responding to a provocation; and clarity of behaviour so that the other player can adapt to your pattern of action. Robert Axelrod, The Evolution of Co-operation, 1990, p. 20
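The tournament Axelrod describes can be sketched in a few lines of code. What follows is a toy illustration of my own, not Axelrod’s actual tournament program: the strategy names and the 100-round length are my assumptions, and the pay-offs are the sentence lengths from the table above (so lower totals are better).

```python
# A toy iterated prisoner's dilemma, illustrating 'Tit for Tat'.
# Pay-offs are years served by (me, other) for each pair of moves;
# 'C' = co-operate (stay silent), 'D' = defect (betray the other).
PAYOFF = {
    ('C', 'C'): (1, 1),   # both stay silent: one year each
    ('C', 'D'): (3, 0),   # I stay silent, the other betrays me
    ('D', 'C'): (0, 3),   # I betray a silent partner: I go free
    ('D', 'D'): (2, 2),   # both betray: two years each
}

def always_defect(my_moves, their_moves):
    return 'D'

def always_cooperate(my_moves, their_moves):
    return 'C'

def tit_for_tat(my_moves, their_moves):
    # Be 'nice' (never defect first); thereafter copy the other's last move.
    return their_moves[-1] if their_moves else 'C'

def play(strategy_a, strategy_b, rounds=100):
    """Total years served by A and B over repeated rounds (lower is better)."""
    moves_a, moves_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        served_a, served_b = PAYOFF[(a, b)]
        years_a += served_a
        years_b += served_b
        moves_a.append(a)
        moves_b.append(b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))    # settles into co-operation: (100, 100)
print(play(tit_for_tat, always_defect))  # provoked after round one: (201, 198)
```

Head-to-head, a habitual defector can just edge out Tit for Tat; across a whole round-robin of strategies, though, Tit for Tat’s totals come out best – which is the result Axelrod reports.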
This may sound more like an Old Testament ‘cheek for cheek’, than a New Testament ‘turn the other cheek’, strategy; but being ‘nice’, that is, not being the first to defect, turned out to be the strategy with the highest score. Co-operation is another word for reciprocity – or the golden rule. ‘Tit for Tat’ may be what Moral Philosophers and philosophizers will have to settle for.] You can understand why Philosophers should be looking for a theory of ethics that applies in Athens, Rome, and Jerusalem (and Seoul and Seattle): if it doesn’t apply everywhere, if it’s relative to here but not to there, it can’t be
109 said to be very useful or relevant. No Philosopher shouts down relativism quite like a Moral Philosopher. Here’s one who ‘doubts the truth of moral relativism’: Moral relativism is the view that there are no universal moral standards, no standards of ‘right’ and ‘wrong’ that apply across all times and all cultures. Instead, moral relativists think that the truth value of the claim ‘Torturing innocent people is wrong’ is relative. Julia Driver, Ethics: The Fundamentals, 2006, p. 5
Such a moral Philosopher is a moral ‘realist’: one who believes that there are moral facts, or norms, or values that are objective; they are ‘out there’, like Truth. Driver takes torture as her example – presumably not merely of innocent people, but of rogues, as well. Other moral philosophers cite murder, robbery, genocide, corruption, paedophilia, rape, as examples of what is objectively ‘wrong’ – and wrong everywhere and all the time. And you and I, Sophie, would agree that each and every one of these behaviours is not just legally wrong – it’s morally wrong, too. It’s easy to make the case for moral realism, though, when one chooses extremes of depravity like these; it’s a less persuasive case when we consider the moral dilemmas we face in the course of a working day. Is it a moral fact that it’s wrong for a man to make a habit of filing for divorce? Can we make a moral norm of placing a limit on the number of children that a family may have? Do we consider carefulness with money to be an objective value? Here’s a moral philosopher who doesn’t think we do, or that we can: There are no objective values (…) If there were such things as objective values, they’d be ‘queer’ entities utterly unlike anything else in the universe (…) Our morals have an appearance of objectivity because they descend to us from a religious context. J. L. Mackie, Ethics: Inventing Right and Wrong, 1990, pp. 15, 38, 45
Of course, there are laws everywhere against murder, robbery, genocide and so on. These are moral facts, of a sort, by definition, and there isn’t a country in the world where they’d be given legal sanction. But there are laws that
110 obtain in certain countries but not in others: gay and lesbian marriage is permitted in England, for example, but not (at least, at present) in most Eastern European, and Middle-Eastern countries; the law forbids capital punishment in member countries of the European Union, but sanctions it in certain states of the United States of America, China, Iran, and other countries that I’m sure you could name. It’s legal to pay for sex in Germany, but not in France; abortion is legal in much of Europe, but not in Poland and Malta. As Mackie recognizes in his sub-title, these and all other laws, like the Ten Commandments, were ‘invented’. They’re ‘objective’ only for as long as they’re on the statute books. A failed suicide bid was a criminal offence in the United Kingdom until 1961; capital punishment was legal until 1965; male homosexuality was legalized only in 1967. Governments change objective laws sometimes before, but more often after, public opinion changes – intersubjective opinion.
A return to virtue

Impatient with theory, perhaps, and despairing of fundamental moral principles, many Philosophers have looked again at Aristotle’s ‘virtues’: these were the ‘golden mean’ between extremes of behaviour. Thus, the golden mean between recklessness and cowardice was courage; that between severity and leniency was justice. Courage, justice, moderation, and prudence were his ‘cardinal virtues’. These were the virtues that Aristotle approved of in the Athenian male and the Athenian soldier, in particular. Modern virtue ethics is about proposing what virtues, what virtuous acts, add up to a virtuous life. How are we to acquire the sort of virtues that are generally proposed: generosity, truthfulness, friendliness, and so on?

Virtue ethics is not a theory which tells us what to do; we neither have nor should want any such thing. Rather, it guides us by improving the practical reasoning with which we act. It directs us as we are wondering what to do, towards emulating people who are braver, more generous and generally better than we are.
Julia Annas, in (Ed.) Russ Shafer-Landau, Ethical Theory, 2007, p. 744
Annas likens the process to learning to be a good concert pianist: the apprentice watches and copies what the professional (perhaps virtuoso) pianist does, and so learns to be almost as good, as good, or better still. Playing the piano to concert standard is a skill, though, or set of skills. Is it a virtue? Indeed, what is a ‘virtue’? Is ‘virtue’ just another word for what’s ‘good’, or ‘right’? At first, we will watch what our parents do; then what other children do; what our teachers, aunts, colleagues, bosses, politicians do. They may set us a ‘good’ example, and they may not. Acts that are deemed to be ‘virtuous’, ‘good’, ‘acceptable’, ‘right’ in one time and place are likely to be called vicious, bad, unacceptable, wrong, in others. Virtue ethics can’t escape the brute fact of relativism: what is a courageous, just, prudent, moderate act in one situation may not be in another; it’ll be open to interpretation. You are a large-minded person, Sophie, partly because that’s who you are, but partly because that’s how you were brought up to be. I approve of monogamy because my parents were monogamous. We tend to approve of behaviour with which we are familiar. If behaviour obeyed the rules of logic, we could devise universal moral principles to our hearts’ content. Just about the only ‘rule’ that Harry Gensler found he could apply to ethics was the consistency rule – and he was a Philosopher who believed that there are objective, formal a priori, truths about how we ought to live. He came to this conclusion:

The Golden Rule is close to being a global principle – a norm common to all people of all times (…) If you had to give one sentence to express what morality is about, you couldn’t do better than the Golden Rule.
Harry J. Gensler, Formal Ethics, 1996, p. 106
Decades later, nobody seems to have done better. Philosophers might, of course, protest that they’d need more than one sentence to express what morality is about: they’d need whole books. But does it take whole books (or even Derrida’s added chapters) to apply the golden rule to this or that moral issue? Have we noticed how many whole books about ethics have made a difference to the way we behave? And are
Philosophers uniquely qualified to write them? Early on in one (short) book on the subject, Blackburn made this telling observation:

A single photograph may have done more to halt the Vietnam War than all the writings of moral philosophers of the time put together.
Simon Blackburn, Being Good: A Short Introduction to Ethics, 2001, p. 5
He was referring to the famous and shocking photograph by Nick Ut of terrified children running from a napalm attack on Trang Bang, in June 1972. In 2019, American financier Stephen Schwarzman made a gift of 150 million pounds to the University of Oxford for the establishment of a centre for the humanities and for the study of ethics in particular. You might have thought that at least some of the 150 Philosophers in the Oxford Faculty of Philosophy might have been putting their minds to the study of ethics at rather less expense. It’s worth asking, though – if those Philosophers were lucky enough to be drafted in to the new Schwarzman Centre – what bag of specialist tools they would bring to the job.
Reason 9: It has no special tools

It is the man, and not his instruments, which is the most important. There can be no substitute for my experience and intuited knowledge.
Timothy Mo, An Insular Possession, 1986
You’re a thinker, Sophie, I know – you wouldn’t be reading this book if you weren’t. You may not always think in a systematic, objective sort of way, any more than I do, or most people do, or Philosophers do when they’re off-duty. When they write or lecture, though, Philosophers like to think that they do think systematically and objectively. They like to think that there is such a thing as thinking philosophically. My own view is that this is a simple tautology: thinking thinkingly. Philosophers will say that when they write or speak philosophically, they are reasoning. This, they suppose, is a cut above mere thinking, just as they suppose judgment to be a cut above opinion. I shall look first at what we mean by reason; then I shall consider what is meant by intuition. This is what Philosophers resort to when they suspect they know something without being able to justify it on reasonable grounds – when insight trumps a laboured argument. When we don’t fully understand something, or we struggle to explain something in literal terms, we may reach for an analogy: we relate what may be obscure to something more familiar – perhaps more tangible. When the analogy is developed, it may in a Philosopher’s hands amount to a thought experiment – an allegory almost, where each part of the narrative explanation corresponds to the idea to be explained. They use the word ‘experiment’, perhaps, with the laboratory in mind that some Philosophers aspire to work in. And then there is logic. This is a pattern of reasoning beloved of Philosophers with a taste for maths. In formal logic, words are replaced by symbols devised for the purpose. Beginners may suspect that the purpose is to cow those of us defeated by back-to-front Es (∃) and upside-down As (∀). Of these four tools
that Philosophers use, formal logic is the one with the hardest edge. I shall (briefly) discuss the whys and wherefores only of informal logic. To some extent, the four tools can be used interchangeably: a thought experiment may draw on an intuition, just as a pair of pliers might bang in a tack (if not a nail).
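For what it’s worth, the symbols are less forbidding than they look. Here, as a standard textbook illustration (not tied to any one Philosopher), is the stock syllogism about Socrates in predicate-logic notation – the upside-down A (∀) reads ‘for all’, and the back-to-front E (∃) reads ‘there exists’:

```latex
% All men are mortal; Socrates is a man; therefore Socrates is mortal.
\forall x\,(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)) \\  % for all x, if x is a man, then x is mortal
\mathit{Man}(\mathit{Socrates}) \\                              % Socrates is a man
\therefore\ \mathit{Mortal}(\mathit{Socrates})                  % so, Socrates is mortal
% And \exists in use: \exists x\,\mathit{Man}(x) -- 'there is at least one man'.
```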
Reason and reasoning

If death is final and there’s nothing beyond it, the wicked will die happy because they’ll escape punishment. This is unthinkable in a world ordered by a good god. This is how Plato reasoned. Happiness is an ultimate, self-sufficient good; happiness, therefore, must be what all action is aimed at. This is how Aristotle reasoned. Death means nothing to us since it involves the loss of sensation; therefore, we needn’t fear that we’ll suffer a painful death. This is how Epicurus reasoned. The Greeks valued reasoning highly; and so did the Romans, Cicero and Marcus Aurelius. What were they doing when they reasoned? They were laying out a reason, or reasons, that they thought would support a conclusion. They were laying out the implications of one or more claims, or premises. In a word, they were arguing. I mentioned under Reason 2 that the medieval schoolmen tried to reconcile Greek reasoning with their faith, putting themselves in some danger when they did so. The 9th Century John the Scot was actually an Irishman who lived in France. It was lucky for him that he did: had he lived elsewhere, and not been on the payroll of King Charles the Bald, he might have been barbecued for his risky reasoning. Even certain of the propositions of the sainted Aquinas were condemned after his death, by Church leaders in Paris and Oxford, in 1277. Religion was a hot-tempered topic in the Church courts and on the battlefield until 1648, at least, when the Peace of Westphalia put an end to the so-called hundred-year wars of religion. That Peace is supposed to have ushered in the ‘Age of Reason’. There’s certainly a lot more reason than faith about Pascal’s famous ‘wager’:
115 Let us weigh up the gain and loss involved in calling heads that God exists. Let us assess the two cases: if you win, you win everything; if you lose, you lose nothing. Do not hesitate then: wager that he does exist. Blaise Pascal, Pensées, 1670 The appeal to reason was something of a reaction to the religious ‘enthusiasm’ of the Puritans and Quakers and other protestant sects of the 1600s. They played up the ‘mystery’ of faith. Science, with its mahogany and brass models of the planets orbiting the sun, seemed to leave less and less room for mystery. God’s creation appeared to work according to laws that ‘natural philosophers’ were beginning to make sense of. John Locke caught the tide in his treatise of 1695: The Reasonableness of Christianity; and the Irish free-thinker John Toland went a step further in the following year with his Christianity Not Mysterious. In this, he denied that there could be anything in Christianity that offended reason. The French revolutionaries placed the Goddess of Reason on the altar of Notre Dame Cathedral in Paris – but this goddess was no statue: Sophie (yes, her name was Sophie), the wife of the leading light of the Cult of Reason, Antoine-François Momoro, played the part, dressed, it is said, ‘provocatively’. Robespierre wasn’t amused; and Napoleon banned all cults in 1801. But Kant ensured that ‘critical reasoning’ would be the hallmark of the new philosophy: no longer would Philosophy with a capital P listen to appeals to faith, divine authority, and revelation. That’s enough about the long-ago: what do we mean by ‘reason’ now? Is Reason without an article in front of it the same as a reason, as in ‘give me a reason for believing one word that you say’; or the ten provocatively dressed reasons that I give in this book for not placing Philosophy on an altar? We talk about a child reaching the ‘age of reason’: when is that? Is it at the age of 10, when a child in the UK reaches the age of criminal responsibility? 
Chambers Dictionary gives ‘premise’ as one meaning of the word, identifying it with a reason, or the reason or reasons. A second definition is: ‘the mind’s power of drawing conclusions and determining right and truth’. This is what Philosophers hope to be doing when they think, and what they expect the
116 rest of us to do when we philosophize. Philosophers might well have welcomed what Kahneman had to say about what he thought of as two systems of thinking:
System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps.
Daniel Kahneman, Thinking, Fast and Slow, 2011, pp. 20, 21
Kahneman doesn’t talk down System 1 thinking in the least: after all, it’s how we think when we ‘detect that one object is more distant than another’; when we ‘complete the phrase “bread and . . .”’; when we ‘make a “disgusted” face when shown a horrible picture’; and when we ‘understand simple sentences’. It’s how we think most of the time. We form quick opinions; we react spontaneously to danger; we make a decision when the situation demands it – System 1 thinking is only automatic because it’s partly instinctive and partly the product of repeated experience. There’s nothing inadequate about it. All the same, System 2 thinking is likely to be more reliable: we resort to it when System 1 thinking doesn’t fit the bill. We don’t do this without making an effort, though: according to Kahneman, most of us find cognitive activity ‘mildly unpleasant, and avoid it as much as possible.’ The Economist, in its review of the book, hailed Kahneman as having ‘shown that we are not the paragons of reason we assume ourselves to be.’ Most Philosophers, and perhaps most of the rest of us, like to think of Philosophy as pure System 2 thinking – that reflective, deliberative, logical, contemplative, introspective thinking is what Philosophy essentially is.
It is certainly safe to say that one cannot even begin to master the expanse of philosophical thought without learning how to use the tools of reason.
Julian Baggini and Peter S. Fosl, The Philosopher’s Toolkit (2nd Edn.), 2019, p. 3
You’ll have noticed the tautology (‘philosophical thought’), Sophie, I’m sure. The adjective ‘philosophical’ has a very positive ring to it; ‘rhetorical’ doesn’t. Rhetoric is the art of persuasion, and is associated with tricky speech-making designed to sway an audience. It was a respected discipline in the early universities, but to use the word ‘rhetoric’ of a spoken or written text nowadays is a put-down (we sneer when we speak of ‘empty’ rhetoric):

If postmodernism has busily eroded public belief in reason, evidence, logic and argument for the past forty years, as it has, then all too often it is the case that rhetoric is all that’s in play. And behold, it wins, even though the other side has the better case.
Ophelia Benson and Jeremy Stangroom, Why Truth Matters, 2006, p. 172
Rhetoric is the very opposite of reason. Rhetoric makes appeal to the emotions; it’s manipulative; it’s like a film-score that induces a cinema audience to tense with fear of what’s to come, or to weep with empathic sorrow. Reason, on the other hand, is calm, dispassionate, even transcendent:

In reasoning about practical matters, we are able to distance ourselves from our own point of view and take on, instead, a wider perspective, ultimately even the point of view of the universe.
Peter Singer, How Are We to Live: Ethics in an Age of Self-Interest, 1997, p. 272
It’s a nice idea; but are we really able to do that? To think like gods? Are there really ‘tools of reason’ that enable us to do that – or, more modestly, to think like Philosophers? That word ‘ultimately’ is a giveaway, lending force to belief in The Truth with a capital T. Isn’t there something rhetorical about this – something that looks like wishful thinking? There are some Philosophers, though, who question whether there’s anything special about reasoning, and certainly any reason to give it a capital R.
There is no such thing as Reason (as it was understood by the Enlightenment, at least), but there are good reasons and bad reasons.
John D. Caputo, Truth: Philosophy in Transit, 2013, p. 10
Surely, this is all we can say: or, perhaps, to avoid an over-simple dichotomy, we’d better say there are degrees of good reasons, less good reasons, less bad reasons, and bad reasons. They’ll be on a continuum like much else. Philosophers do what the rest of us do when we employ System 2 thinking: they don’t reason in some specialist, elite way; they give good and less good – sometimes even rather bad – reasons for coming to this or that conclusion. Reasoning, like philosophizing, is just another word for thinking – critical thinking.
Intuitive thinking

We talk about feminine intuition, by which (I think) we mean that women have a rather special, rather direct sort of insight that men lack. Perhaps you’re better qualified, Sophie, to judge the ‘truth’ of this than I am. Do we (or do men) credit women with intuition, because they bag reason for themselves? To intuit is to cut to the core of things, keenly and quickly. God and his three spheres of angels (and we often thought of angels as winged women, maid-servants to the Archangel Gabriel; or at least male theologians did) were supposed to know things immediately – seeing and knowing were one and the same. Intuition seems to have been first used, to mean ‘looking into’, in 1497; the word wasn’t used in its modern sense, to mean an immediate insight, until 1762. Sidgwick used the term intuitionism, in 1874, to mean a sort of refined common sense. Intuition gives us a direct, realist experience of the world, not merely of objects in the world, but of what is good – morally good – for the individual and for the world. Sidgwick resurrected Kant’s categorical imperative, as it were, but he toned it down. We just know that this or that is the right thing to do; but the knowledge might be vague and inconsistent; intuitions don’t give us a rational basis for conduct – but then Sidgwick didn’t place much confidence in reason where moral conduct was concerned. Moore and Russell broadly agreed with Sidgwick, though Russell
119 did wonder how a conflict between competing intuitions might be resolved: Ian’s, for example, that it was right to appropriate a new umbrella left on a train seat, and his girl-friend Jenny’s that it wasn’t. This is a fair point, surely: what’s the difference between an intuition and a simple opinion, or even prejudice? And how do we know when – if ever – we can trust our intuitions? Do we trust our intuitions in the same way we trust our instincts? Is an intuition just an instinct with a dash of thought attached? Henri Bergson wrote (in Creative Evolution, 1907): There are things that intelligence alone is able to seek, but which, by itself, it will never find. These things instinct alone could find; but it will never seek them. Henri-Louis Bergson, in Gary Gutting, French Philosophy in the Twentieth Century, 2001, p.72
It’s our Intelligence that looks for knowledge – that’s its goal; but instinct won’t find it. This is where intuition comes in, says Gary Gutting: instinct needs intuition to guide it towards intelligence, and so to knowledge: Instinct → INTUITION → Intelligence → Knowledge We’re born with instincts with which we sense the world, but we’re also born (metaphorically) wearing a pair of glasses through which we intuit, or perceive it. Kahneman’s System 1 is thus divided into two rapid-fire stages. You can see that Philosophers might rely on an intuition to guide them in their choice of a promising line of enquiry. In this sense, an intuition to the Philosopher is what a ‘hunch’ is to the scientist. We used to talk about the ‘scientific method’, as if there was just the one that had the precision of a computer algorithm. Scientists themselves testify that intuition, inspired guesswork (not to mention a stroke of luck) plays a part; but intuition is only the beginning of the working hypothesis; and that’s only the beginning of the work. In the course of that work, evidence will accumulate to support the hypothesis, or not. Can Philosophers claim to unearth evidence to support their intuitions? Or do their intuitions leak into their premises, and so into their conclusions? Is much of Philosophy a tissue of raw intuitions that (even
assuming the application of intelligence) never really amount to knowledge? And does it go in circles? There is a practical necessity to rely intuitively on what is intuitively held-to-be-true. Jürgen Habermas in Robert E. Brandom (Ed.), Rorty and his Critics, 2000, p. 49
Is this really all a Philosopher can do: appeal to intuition to confirm an intuition? Can Philosophy (a subject that, along with Theology, is an almost evidence-free zone) employ only System 1 thinking? Can it be that, in spite of the intelligence compacted in all those books and lectures and scholarly papers, they’re all just intuitions? You didn’t think this was all there is to Philosophy, Sophie, I’m sure. It may be all there is to any thinking, though, that isn’t evidence-based. After all, the mathematician Kurt Gödel didn’t despise intuitive thinking even in maths – and even arithmetic. Kahneman’s System 1 and System 2 thinking seems to reinforce the distinction that Philosophers have made between intuition and reason. In a book perhaps less celebrated than Kahneman’s, two cognitive scientists prefer to talk about reasons rather than reason: they rebut the idea, which Philosophers have taken for granted, that reason is a superior power of the human mind. All we do when we ‘reason’, they say, is give reasons to support our intuitions. Much recent thinking about thinking (for instance Daniel Kahneman’s Thinking, Fast and Slow) revolves around a contrast between intuition and reasoning, as if the two were quite different forms of inference. (…) Reasoning, we will argue, is a form of intuitive inference. (…) Reasoning rarely questions reasoners’ intuitions, making it very unlikely that it would correct any misguided intuitions they might have. People are biased to find reasons that support their point of view because this is how they can justify their actions and convince others to share their beliefs. Hugo Mercier and Dan Sperber, The Enigma of Reason: A New Theory of Human Understanding, 2017, pp. 7, 90, 218, 331 These writers argue that we have intuitions, then we find reasons to support
those intuitions – but those reasons, too, are intuitions, or intuitive
inferences. These inferences are intended to back up the initial intuitions, and persuade ourselves and others of their ‘truth’. So, System 1 thinking is intuitive, but then System 2 thinking is no less intuitive – and that System 2 thinking isn’t reasoning in the second of the two dictionary definitions; it’s reasons in the first. The giving of reasons appears to have evolved in conversation with others. When individuals ‘reason’ on their own, they fall prey to what psychologists call ‘confirmation bias’: they choose reasons that confirm their initial intuition. I find this account of our thinking process compelling, and not only because it avoids Kahneman’s dichotomy: might both first and second thoughts be more or less fast or slow? This writer didn’t need to be compelled: I’ve spent a lot of life picking over arguments, surrounded by philosophers and their books and articles (…) Was I ever really persuaded by an argument or did I just have thoughts and find good reasons for them afterwards? James Garvey, The Persuaders: The Hidden Industry that Wants to Change Your Mind, 2016
We might guess at how Garvey answered his own question. Intuitions, insights, thoughts, ideas – we all have these; and we have intuitions, insights, thoughts, and ideas to reinforce those that occurred to us first. We call them reasons, and we call the process reasoning. Perhaps the most we can hope for is that the older and wiser we get, the more experience we have, the more we can trust our intuitions. The question is: are a Philosopher’s intuitions (is a Philosopher’s experience) likely to be different from everybody else’s in ways that matter? Are they likely to be more reliable? If Philosophers answer yes to these questions, can we be sure that the reasons they give (“we studied the subject at university”; “we’ve read lots of the relevant books and papers”; “we’ve written books and papers ourselves, all peer-reviewed”; “we’ve engaged in rational debate with colleagues down the years”) aren’t confirmation-biased intuitions? If they were to answer no – admitting that when they reason they, too, infer intuitively – they would surrender one more of their ‘special’ tools. In the art of judgment, Philosophers have no more authority than historians, magistrates, and others in the habit of thinking.
Analogies and thought experiments

The simplest sort of comparison is the simile: my four-year-old sister was to sleep in a bedroom whose curtains were bright red; the evening sun shone through them, lending the room a bright red glow. As my sister entered the room, her eyes widened, and she said: “It’ll be like going to sleep in a jam tart”. The likeness was the colour; the comparison was delightful, as she was, but it couldn’t have been pushed any further. In a thought experiment, the thinker does push the comparison further, so that each element of what’s to be explained has a counterpart in the parallel case. We think of ancient Athens as the cradle of democracy; Plato likened it to a ship whose captain had lost control to the crew. What happens? The sailors are quarrelling with each other about the steering – everyone is of opinion that he has a right to steer, though he has never learned the art of navigation (…) they mutiny and take possession of the ship and make free with the stores; thus, eating and drinking, they proceed on their voyage in such fashion as may be expected. Plato, Republic (Ed. Richard Livingstone, 1940)
Plato didn’t think highly of Athenian democracy, and he chose an analogy for it that would damn it. Another of his analogies compares the Idea of the Good with the Sun. We are like prisoners shackled in a cave, seeing only the shadows of puppets thrown by a fire behind us on the cave wall. If we were to be freed, and led up to the mouth of the cave, we’d be dazzled by the pure light of day and want to return to the familiar semi-darkness of the unreal. His analogy illustrated his theory of Ideas, or Forms: in this life we could know only the shadows of Goodness and other perfections. It inspired Augustine’s City of God, and centuries of Christian images of heaven. Descartes wasn’t sure that he could believe what his senses told him. He was sure that it couldn’t be a good God who was planting doubts in his mind. It must, therefore, be an evil demon: This demon employed all his energy to deceive me. I consider that the heavens, the earth, colours, figures, sounds and all other external things are
nothing but illusions and dreams that the demon is using to set traps for me. I shall consider that I have no hands, no eyes, no flesh, no blood, nor any senses, in spite of falsely believing that I possess all these things. René Descartes, Meditations on First Philosophy, 1, 1641
One thing he could not doubt was that he was thinking, hence his ‘I think, therefore I am’ – an aphorism as well-known as Hamlet’s ‘To be or not to be’. Descartes’ meditation can certainly claim to be a thought experiment. Descartes and Isaac Newton both compared the world to a time-piece. William Paley made a thought experiment of the comparison. He challenged his readers, in 1802, to consider the workings of a watch. They’d be amazed at the ingenuity shown by the watchmaker, by the interaction of all the parts, and the fine decoration on the face and the case. How much more amazing, he asked, was the fine workmanship displayed in the workings of the world: didn’t nature far surpass art? As the watch had been designed by a watchmaker, surely the world must have been designed by a world-maker. Thus was born the argument from ‘intelligent design’ for the existence of God. Hume had already demolished the argument in his posthumously published Dialogues Concerning Natural Religion, in 1779: was the world perfect – hadn’t the Lisbon earthquake of 1755 killed between 10,000 and 100,000 people? How, then, could the designer of the world be called good? What’s more, we can observe a watchmaker at his delicate work: has anyone observed the world-maker designing a machine infinitely more complex? That’s the problem with any analogy, or thought experiment: it can only be taken so far. Ronald Reagan compared the government (before he was the government) to a big baby: ‘an alimentary canal with a big appetite at one end and no responsibility at the other’. It amuses, but government in general can’t stand further comparison with a baby of any size. There’s a limit to what we can safely infer from a thought experiment. I made mention of Rawls’s vision of people in an original position of equality, under Reason 8: it was a large-minded, liberal vision of justice as fairness.
It didn’t play well with neoliberal, market-minded Robert Nozick (in Anarchy, State, and Utopia, 1974): it’s all very well, he said, to imagine goods being divided up fairly in a Garden of Eden. That was then: now, in the ‘real world’, goods are distributed very
unevenly; whether this is fair or not depends on how this inequality came about. Where’s the justice in asking those who’d acquired their goods fairly to give them up? You couldn’t try to bring about equality in the here and now by force. Besides, he said, it isn’t a government’s business to allocate goods; government’s about protecting people from each other, and that’s all it’s about. So spoke one set of intuitions to another. Another famous thought experiment is John Searle’s ‘Chinese Room’ of 1980. There were those in the artificial intelligence community who were claiming that: The computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. John Searle, ‘Minds, Brains and Programs’, in Behavioral and Brain Sciences, Vol. 3
Searle thought otherwise: he imagined himself sitting in a locked room receiving a series of documents posted through his letter-box: (1) is a ‘script’ in Chinese, which he doesn’t understand – it’s just squiggles; (2) is a ‘story’, again in Chinese, together with a set of rules in English for ‘translating’ the Chinese squiggles into letters of the English alphabet; and (3) is instructions in English on how to match this third document with the first two. He’s instructed to return to sender certain Chinese symbols having certain shapes in response to certain other shapes given him in the third document. Without his knowing it, what he’s doing is ‘answering’ a set of ‘questions’, in spite of the fact that he doesn’t understand either the questions or the answers. (A fourth document arrives through the letter-box, but I’ll pass over that: the story’s quite complicated enough as it is.) As long as he obeys all instructions to the letter, nobody outside the room would be able to tell from the correctness of all his answers that he can’t tell Chinese from Urdu. His mind has been ‘programmed’, but he can’t be said, literally, to understand what he’s doing. Ah, but what if the process were speeded up? After all, we normally understand things in our own language much faster than Searle in his Chinese room story.
If a speeded-up version of Searle’s preposterous story could come true, and we met a person who seemed to converse intelligently in Chinese but was really deploying millions of memorized rules in fractions of a second, it is not clear that we would deny that he understood Chinese. Steven Pinker, How the Mind Works, 1997, p. 95
So spoke one set of intuitions to another. Pinker was no fan of the view that the mind is a computer, but he was prepared to call thinking a kind of computation. I’m with Searle (to the extent that I can keep up with him), and not merely so as to avoid the 0/1 dichotomy of the computer program. One of the ‘problems’ that beset Philosophers has to do with identity: how far can something, or someone, change and yet remain the same thing, or the same person? The classic case is the ship in which Theseus left Crete where he had slain the Minotaur. The ship was taken on an annual thanksgiving voyage, and, as each of its timbers rotted, it was replaced by a new one. Was it still the same ship when each and every one of the timbers had been replaced, or an entirely new ship? Each of the cells in our bodies is replaced every seven years (only the female ova and the irises in our eyes last a lifetime, apparently); can it be said, then, that we are the same person at forty that we were at thirty? One Philosopher, confronting this knotty problem, asked us to imagine exchanging each one of our cells, one after the other, with the Swedish film actress Greta Garbo. (Perhaps I should explain to a feminist like yourself, Sophie – and like myself – that in the 1920s and ’30s, she would not have been referred to by the male noun ‘actor’). When could it have been said that we had become Greta Garbo, and no longer the person we were at the beginning of the process? Another of the ‘problems’ (already mentioned under Reason 7) is the relation of the mind to the ‘world’. An internalist believes that the brain is body-bound, and that our thinking is contained within our physical skull; an externalist believes, on the contrary, that our thinking extends beyond our body to include objects and events in the outside world. Externalists have appealed to a thought experiment in which Otto has to write down names, addresses, and other facts if he is to have any hope of remembering them:
Consider the well-trodden example of Otto. Imagine that Otto has a mild form of Alzheimer’s and he always carries a notebook with him (…) Otto’s notebook plays the same functional role in Otto’s mental life as neural memory does for Otto’s healthy counterpart, Inga. Mark Sprevak and Jesper Kallestrup (Eds.), New Waves in the Philosophy of Mind, 2014, p. 79
So, we either affirm that there is a distinction between ‘mind’ and ‘world’, or Otto persuades us that there isn’t. One intuition runs counter to another. Even a large-scale map of the smallest village can’t be that village; it can only ever represent it, and lose an awful lot of detail in the process. A poorly designed map might misrepresent the place. If it does, and we can’t find the bus-stop whose symbol the map misplaces, or the footpath that we plan to take, we can fault the map in precise ways. We can measure, and check, and perhaps produce a better map. How can Philosophers map even one small parcel of their chosen territory when all they have is language with all its disputed meanings? Philosophers have teased themselves with all manner of ‘prisoner’s dilemma’-style scenarios, brains in vats, possible or ‘alternate’ (by which they mean alternative) worlds, and a ‘Twin Earth’ on which H₂O is XYZ. They’ve imagined far-fetched situations in which, for example, you control the switch that will change railway points; an express train is coming; there’s a well-thought-of government minister crossing one line in his ministerial car, and several men working on the other. Do you throw the switch to save the minister and his life-enhancing policies, or do you sacrifice the one to save the many? A rather more credible scenario is one where you’re in command of a detention facility, holding an Islamist prisoner privy to plans for a terrorist attack due to take place in twenty-four hours. Do you have him tortured to get him to reveal the time and place of the planned atrocity, or do you rule out torture on principle and so, potentially, condemn large numbers of civilians to a gory death?
‘Ethical’ dilemmas of this sort share the weakness of many thought experiments: they peel away the sort of details that complicate a ‘real-world’ story. The most important thing about the way such cases are invented in discussions is the assumption that only two courses are on offer. Elizabeth Anscombe, in De Mesel, The Later Wittgenstein and Moral Philosophy, 2018, p. 162
They set up formulaic problems that are supposed to have a single ‘best’ answer and a QED at the end. They’re really no more than parlour-games. If the thought experiments in the literature are as near as Philosophers get to experimenting, their mimicry of science is another god that fails. And when Philosophers don’t find the precision they’re looking for in science, they turn to mathematics.
Informal logic

Now, I know you might think of logic as verbal maths and be put off it already for that reason, Sophie, but you shouldn’t find what follows above your head. (You can skip this section and go on to Reason 10, though, if you must). Logic really starts with Aristotle. His ‘classical logic’ took the form of the syllogism: this can be arranged in a number of ways; one common form is as follows:

All kings are monarchs.
Charles the Bald is a king.
So, Charles the Bald is a monarch.

Aristotle’s aim was to state what was known, so as to deduce from it an inescapable conclusion – one that was necessarily true. Only such a syllogism was ‘valid’. Aristotle and his syllogisms were still at the heart of the subject when Macmillan published Logic, by W. Stanley Jevons, in 1876. Jevons applied Aristotle’s rules to decide whether a syllogism was valid or not: ‘The great logician Aristotle more than two thousand years ago discovered these
rules’, Jevons wrote, as if they were there in the subsoil, like fossils, waiting to be dug up. Deductive argument generally begins with a generalization. I’m not a particular fan of Haydn’s music, and I’m pretty sure you’re not, Sophie, but I’ll use him as an example, all the same. Here’s a generalization:

Any symphony by Haydn is worth listening to.

This would be followed by a more specific claim:

This symphony has ‘Haydn’ written all over it.

The conclusion would then relate this second claim to the first:

So, this symphony will be worth listening to.

The problem with deductive argument is that it can’t come out of the blue: it has to have been based on some prior set of observations; we don’t generally argue from a generalization to specific cases – we argue in the other direction, making a series of specific claims first, and generalizing at the end. In order to make the first claim, one would have to have argued inductively first – and be familiar with a lot of Haydn symphonies (including, presumably, No. 22, nicknamed The Philosopher). So, a complete argument might look something like this:

From Symphony No. 1 onwards – including the best known, like No. 45, nicknamed the Farewell Symphony, and No. 94, the Surprise, and those less often played – they all have a playful, melodic, richly textured quality. Haydn was always experimenting. Any symphony by Haydn is worth listening to.

Descartes hoped to take mathematics as his model of all thinking, from certain premises to certain conclusions. Spinoza shaped his Ethics on Euclid’s theorems in geometry; and Leibniz, co-founder with Isaac Newton of the calculus, dreamed of replacing thought with calculation. But with the arrival of empirical science in the late 1600s, and the death of Leibniz in 1716, interest in logic waned.
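The ‘inescapability’ that Jevons admired in the syllogism can be pictured, in modern terms, as set membership: if the set of kings really is contained in the set of monarchs, no individual king can fail to be a monarch. Here is a minimal sketch of my own (the sets and names below are invented for illustration; they come from no book quoted here):

```python
# Aristotle's syllogism recast as set containment - an illustrative
# sketch only; the sets below are invented, not drawn from any source.

monarchs = {"Charles the Bald", "Elizabeth I", "Kamehameha I"}
kings = {"Charles the Bald", "Kamehameha I"}

# Premise 1: all kings are monarchs (the set of kings is a subset
# of the set of monarchs).
assert kings <= monarchs

# Premise 2: Charles the Bald is a king.
assert "Charles the Bald" in kings

# Conclusion: given the premises, this CANNOT fail to hold.
assert "Charles the Bald" in monarchs
print("valid: the premises leave no room for the conclusion to be false")
```

Whenever the two assertions standing for the premises pass, the third must pass too; that, and only that, is what ‘validity’ guarantees.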
Just three years after the Jevons book was published, Gottlob Frege put logic on a completely new footing. He was impatient with the vagueness of ordinary language – words like ‘all’, ‘some’, ‘any’, ‘and’, ‘or’; he found some statements even in maths too maddeningly imprecise. His laws of logic stipulated how we ought to think; it was for the psychologist to make sense of how we do think. In trying to make propositions precise, though, he and his disciple Russell squeezed the juice out of them. A. J. Ayer deplored Frege’s ‘logicism’: One of the effects has been not so much to subordinate maths to logic, which is what Frege and Russell wanted, but to subordinate logic to maths. And in recent years, since mathematical logic has become more and more mathematical, it has had less and less to do with philosophy in general (…) it’s very odd how little the two are connected. A. J. Ayer, in Bryan Magee (Ed.), The Great Philosophers, 1988, pp. 308, 309
And this was the 1960s and ‘70s Wykeham Professor of Logic at Oxford talking. Logic is about thinking in a consistent and methodical way, which, as I’ve said, any philosophizer should want to do. But we shouldn’t confuse clear thinking with the formal logic taught in many Philosophy departments. Once upon a time, logic was a required course for medical students; now it’s only students of Philosophy who may have to wrestle with it. Deduction does come into its own in maths, but we didn’t evolve to think mathematically; survival didn’t depend on our doing so. It’s no wonder, then, that so many students of logic find the subject so unfriendly. More than that, though, they have every right to think it’s irrelevant. ‘Cogency’ in the following quotation is another word for validity: The business of logic is the systematic evaluation of arguments for internal cogency. And the kind of internal cogency that will especially concern us is deductive validity. Peter Smith, An Introduction to Formal Logic, 2003, p. 1
We don’t think in a deductively valid way any more than we’d willingly wear a straitjacket. We generally attach more importance to sense than we do to validity, whereas all that matters to a logical argument is that the inference is valid – deductively valid. Thus, this argument of Smith’s is valid:

Either Jack is a philosopher or he is a physicist.
Jack isn’t both a philosopher and a fool.
He is not a physicist.
So, Jack is not a fool.
Ibid. p. 115
Jack is a philosopher, as we might have been told in the first place. But at least the premises make sense. This ‘argument’ is valid, too: Jack is married; Jack is single; so today is Tuesday. Ibid. p. 44
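The verdict that such arguments are valid can be reached by brute force: write out every assignment of true/false to the basic propositions, and check whether any assignment makes all the premises true and the conclusion false. A sketch of my own (not Smith’s), reading ‘Jack is single’ as the denial of ‘Jack is married’:

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Classically valid iff no truth-assignment makes every premise
    true while making the conclusion false."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# 'Jack is married; Jack is single; so today is Tuesday':
# M = 'Jack is married', not M = 'Jack is single', T = 'today is Tuesday'.
# No row makes both premises true, so no row can be a counterexample.
print(valid([lambda M, T: M, lambda M, T: not M],
            lambda M, T: T, 2))  # True

# Smith's philosopher/physicist/fool argument, for comparison:
# P = philosopher, Q = physicist, F = fool.
print(valid([lambda P, Q, F: P or Q,
             lambda P, Q, F: not (P and F),
             lambda P, Q, F: not Q],
            lambda P, Q, F: not F, 3))  # True
```

Notice that the married/single argument passes the test only ‘vacuously’: since no row makes both premises true, the check succeeds by default.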
And here, the premises don’t make sense: they can’t both be true. But in classical logic this doesn’t matter: there can be no situation in which the premises are both true and the conclusion false – so the argument counts as valid. Perhaps Jack was a foolosopher. Most of us are inclined to take seriously only those arguments whose premises make sense. We prefer a ‘sound’ argument whose premises are ‘true’ to one that is merely valid – indeed, we might wonder whether an unsound argument is an argument, properly speaking, at all. We do apply some rules when we think, ‘discovered’ or not. I mentioned Aristotle’s three so-called ‘laws of thought’ under Reason 6. Nobody could object to the first two: the law of identity (whatever is, is); and the law of contradiction (nothing can both be and not be) – though if someone were to ask you: “Are you happy?” you might well want to answer: “Well, yes and no”. Still, most of us will accept that it’s illogical to contradict ourselves. It’s the third law, the law of excluded middle, that’s the problem: put simply, it’s that every proposition is either true or false. Or, perhaps, the real problem is that it’s assumed we still have to apply the law if we’re to think straight. It commits us to a binary (two-valued) view of the world. There are, of course, propositions that are incontestably true:
Average temperatures in France are lower in the winter than in the summer.
Iron is a more effective conductor of heat than either wool or wood.

And there are propositions that are incontestably false:

There is mounting evidence that the Earth was created in the year 4004 BCE.
When it’s midday in London, it’s 10.00 in the morning in Moscow.

But, then, there are scores of propositions that are neither true nor false:

Manet’s paintings are more unconventional than Monet’s.
The migration of peoples from south to north is both inevitable and desirable.

Many Philosophers regard propositions in the third category as ‘borderline cases’; they are ‘vague’ and therefore of no use to the writers of books about logic. Hodges admits that: Life seems full of half-truths, grey areas, borderline cases, but Logic stands with sword uplifted to divide the world cleanly into the True and the False. But an honest thinker must ask himself whether this clean and absolute division is perhaps no more than a verbal delusion. Wilfrid Hodges, Logic, 1977, p. 32
He asks himself the question, and he admits that it may be a verbal delusion, yet he proceeds for another 290 pages as if his sword really did cut cleanly between true and false propositions. Those who value classical logic won’t abandon it just because much of the language we use is vague. Horwich, for instance, holds on for dear life to the law of excluded middle because ‘classical logic is attractively simple and familiar’ (Truth, 1998, p. 78). Is this really a good enough reason for pretending that logic has anything to do with the way we think – with Philosophy – or even with the way we ought to think? Must we divide the world in two, must every proposition be true or false, to satisfy the mathematically-minded?
Those who chafe in the straitjacket of true/false, two-valued, logic try to loosen the laces by introducing three-valued, many-valued, even fuzzy, logic. Two-valued logic finds it difficult to handle sentences containing words like ‘must’ and ‘might’; sentences of an ‘if…then’ kind; and those in which there is some emotional content, or some inconsistency, or other subtlety. So, logicians have had to invent new symbols to capture these subtleties. But there’s a trade-off: either you keep things simple and precise; or you create bafflingly complicated strings of symbols in a forlorn effort to represent natural language. Either way, logic forfeits a connection with how we actually think and express ourselves. Once upon a time, Philosophers expected to be able to refer their problems to a court of appeal: logic would preside in wig and gown and deliver a guilty, or not guilty, verdict. Even Frege gave up the notion that there are two sorts of propositions: and Kurt Gödel, in his Incompleteness Theorem of 1931, showed that it was impossible to prove the consistency of a formal system, like arithmetic or logic, within that system. Put simply: Under fairly general assumptions one cannot demonstrate the proof of a theory from within that theory. Volker Halbach, The Logic Manual, 2010, p. 4
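What a three-valued logic looks like can be sketched quite briefly. In Kleene’s ‘strong’ three-valued logic, a proposition may be true, false, or undetermined, and the law of excluded middle fails. A sketch of my own (using Python’s None for ‘undetermined’; not taken from any of the books quoted here):

```python
# Kleene's strong three-valued logic: values are True, False, or None
# (None standing for 'neither true nor false'). Illustrative sketch only.

def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False          # a false conjunct settles the matter
    if a is None or b is None:
        return None           # otherwise undeterminedness spreads
    return True

def k_or(a, b):
    if a is True or b is True:
        return True           # a true disjunct settles the matter
    if a is None or b is None:
        return None
    return False

# A 'borderline' proposition, neither true nor false:
P = None  # e.g. 'Manet's paintings are more unconventional than Monet's'

# The law of excluded middle fails: 'P or not-P' is itself undetermined.
print(k_or(P, k_not(P)))  # None
```

In two-valued logic ‘P or not-P’ comes out true whatever P is; here, a borderline P leaves it undetermined, which is exactly the loosening of the laces such logicians are after.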
Few Philosophers now think we can resolve our dilemmas by clicking on ‘Symbols’ at the end of the Insert bar of the latest edition of Microsoft Word. Those who do are the schoolmen of our time. Reasoning in Philosophy, it seems, is no different from reasoning in history, sociology, and economics; we all reason when we give reasons for our intuitions. Philosophers’ intuitions, it seems, are no different from the hunch or ‘gut’ feelings of a sort that other experienced thinkers have – with similar guts and, perhaps, experiences, too. The analogies and thought experiments that Philosophers deploy, it seems, are as imperfect a representation of the world as those that the rest of us use when we’re in a rhetorical mood. And logic turns out to be a blind alley to all but mathematicians and computer
programmers. It doesn’t represent the way we think, and there’s rather little evidence of its regulating the way we think. [Philosophy] needs an active methodology, an assured or at least authoritative set of tools (…) we keep tripping on the fact that there is no such authoritative set of tools, but only suspicion: militant scepticism about whether any such set of tools could exist. Havi Carel and David Gamez, What Philosophy Is: Contemporary Philosophy in Action, 2004, p. xvii
If Philosophers do have a toolkit, it seems to contain much the same sort of common-or-garden tools – some of them multi-purpose, some of them redundant – that the rest of us reach for when we think.
Reason 10: It is beset by doubts of its own

Three-fourths of philosophy and literature is the talk of people trying to convince themselves that they really like the cage they were tricked into entering. Gary Snyder, Earth House Hold, 1969
An end of foundations

The vast majority of Philosophers have been men, and men are inclined to be competitive. Plato was the head of one of the rival schools of thinkers who competed with each other for students from the wealthiest families in Athens. But perhaps it was Socrates who had set the tone: he was a ‘gadfly’, a piece of grit in the city’s oyster, known for his combative style of arguing. His young wife, Xanthippe, was branded a shrew by one of his students for her similarly abrasive manner – Socrates boasted that it was because of this manner that he’d married her in the first place. Free-thinking women have fallen under Xanthippe’s curse ever since. Early-Church heresy-hunters and medieval schoolmen sharpened their words on the shields of those they disagreed with, and so it has gone on. Cambridge classicist and television presenter Mary Beard has said of the ‘great thinker’ that ‘he’s tall, dark, slightly angular – and male’. Bernard Williams was one such, it seems: his first wife, the politician Shirley Williams, judged his ‘capacity for pretty sharp putting down of people he thought were stupid’ quite unacceptable. ‘He can eviscerate somebody’, she said. Beard adds that trying to win an argument against Williams ‘must have been much the same as trying to score a goal against Socrates’. Simone de Beauvoir didn’t call herself a Philosopher, perhaps, because her lifelong partner, J-P. Sartre, did, and she came second to his first in the highly competitive agrégation exam in Philosophy. Another Simone, Simone Weil, another Philosophy agrégée, appears to have given as good as she got, though she was not universally admired for her outspokenness. Perhaps it’s no wonder that fewer than 20 per cent of the members of Philosophy departments in the USA are women; that of the 59 senior members of the Oxford Faculty of
Philosophy, just 13 (22 per cent) are women; that only 21 out of 248 (8 per cent) contributors to the 1995 edition of the Oxford Companion to Philosophy were women; and that women are generally in the minority among contributors to collections of Philosophical essays. Perhaps women have too many real problems to worry about without taxing themselves with those that occupy male Philosophers. Philosophy has been as much a male preserve, at least until recently, as Theology and the priesthood have been. British Philosopher Mary Midgley, who it seems could be quite combative herself, was sorry that Philosophy had always been about winning and losing. ‘We are not forced to assume that all stinging is valuable’, she said, ‘merely because Socrates had a sting’. It isn’t just in the English-speaking world that Philosophers have a reputation for stinging: École Normale Supérieure students are animated to this day by debates among students and faculty often culminating in the rapier-swift demolishing of an opponent’s position. David Mikics, Who Was Jacques Derrida? An Intellectual Biography, 2009, p. 24
Badiou agrees: he refers, perhaps a little acrimoniously, to: The brutal and acrimonious corporations of Philosophers (…) this milieu where hostile indifference towards one’s colleagues is the rule. Alain Badiou, The Adventure of French Philosophy (Ed. Bruno Bosteels), 2012, p. 67
Could it be that Philosophers have a reputation for combativeness not merely because they’re mostly men, but because they’re in search of The Truth, and their self-respect depends on their finding it, and they don’t take kindly to being told they haven’t? That’s an over-simple way of putting it, of course, but when there’s doubt about what Philosophy is, and about what tools, if any, they have for their exclusive use, there is bound to be a want of self-confidence, and a certain petulant defensiveness. Other disciplines, after all, achieve success in conspicuous ways, whereas Philosophers have so little to show for their efforts. Philosophers don’t win Nobel, Pulitzer, Rhône-Poulenc Prizes, or Fields Medals: there are no ‘breakthroughs’ in Philosophy, no
discoveries, no new evidence of any kind. So, it’s no wonder Philosophers are given to squabbling among themselves. And two and a half thousand years is a long time in which to fail in the attempt to resolve problems that are problems only to Philosophers. If, as I suggested under Reason 2, Philosophy as a systematic subject has only been around since the early-to-mid 1800s, well, that’s still a long time. Here are some of those problems as represented in the titles listed in just one bibliography:

What’s the meaning of ‘This’?
The conscious mind: in search of a fundamental theory
Does conceivability entail possibility?
The aprioricity of logic
Inexhaustibility: a non-exhaustive treatment
Is knowledge justified true belief?
Why Humeans are out of their minds
Vagueness and the mind of God
Troubles on moral Twin Earth: moral queerness revived
Probabilities of conditionals and conditional probabilities
The meaning of ‘meaning’.

(I didn’t mean to type ‘humans’: Humeans are disciples of David Hume). A salient problem, as we’ve seen, is the question whether other people have minds, or whether we can prove that they do. Child-development psychologists appear to have found that children have accepted it as a fact, by the age of four, that other people do have minds. Those Philosophers who still have their doubts assume in practice that the readers of their books and papers on the subject must have minds. Another metaphysical problem concerns ontology – what it is to be. This ontologist begins a book on the subject on a pessimistic note: Frankly, I find some of the positions you’ll find in later chapters utterly ludicrous, and have had to grit my teeth and force myself to neutrally state the theories as well as possible.
He goes on to acknowledge that: Whilst ontology is one of the oldest areas of study, it has not as yet been lucky enough to develop a widely-agreed-upon methodology. Nikk Effingham, An Introduction to Ontology, 2013, pp. ix, 19
How many other areas of study trust to luck to find an agreed way of proceeding? A Philosopher may object that physics can tell us all about the atoms and sub-atomic particles, quarks and bosons and so forth that make up matter, but it can’t tell us about the intrinsic nature of reality. A physicist might well object in her turn that if there’s any such thing, it’ll be physics that’ll nail it, not metaphysics. Problems in the ‘theory of knowledge’, it seems, are just as far from being resolved, if this Philosopher speaks for Philosophers other than himself: Our everyday talk of ‘belief’, ‘knowledge’, and ‘desire’ has a complexity which has so far eluded the attempts of philosophers to provide it with a clear theoretical foundation. E. J. Lowe, An Introduction to the Philosophy of Mind, 2000, p. 297
And the same goes for ethics. There are many Moral Philosophers who hope that a firm basis will be found for an objective view of moral truth: that relativism will be hunted to extinction. Alasdair MacIntyre is said (in the Encyclopaedia Britannica) to have been ‘one of the great moral thinkers of the late 20th and early 21st Centuries’. He recognized that religion no longer gives us a shared foundation for moral talk and action. He was writing here in 1985, but no Philosopher who’s written since has found a convincing way of countering his claim: Up to the present in everyday discourse the habit of speaking of moral judgements as true or false persists; but the question of what it is that [makes] a particular moral judgement true or false has come to lack any clear answer. [M]oral judgements are linguistic survivals from the practice of classical theism which have lost the context provided by that practice. Alasdair MacIntyre, After Virtue: A Study in Moral Theory (2nd Edn.), 1985, p. 60
In short, some Philosophers go on hoping to find out the fundamental features of what ‘exists’, of what ‘knowledge’ is, and of what moral code we should live by, but others doubt that there are principles ‘out there’, or foundations ‘down there’, to be discovered. New arguments, new elegant arrangements of words, new analyses of the ways in which we use language won’t cut it, because – after two and a half thousand years of looking – surely, we have to conclude that they aren’t there to be discovered, whether in the laboratory or in the armchair – or the subsoil.
Other directions

Of course, people speak and write in hundreds of different languages; and the English language has changed in subtle ways over time. We live in global times now, and much of the world is speaking, or learning to speak, (perhaps American) English. Could it be that the ‘problems of Philosophy’ will be resolved, or dissolved, as Philosophers identify the problems in a shared language? Paul Grice admits to a ‘fantasy’: that many of the problems have, in fact, been solved – some of them many times. The real problem has been that they’ve been framed in confusingly various terms and idioms: Now this fantasy may lack foundation in fact; but to believe it and to be wrong may well lead to good philosophy and, seemingly, can do no harm; whereas to reject it and to be wrong in rejecting it might well involve one in philosophical disaster. Paul Grice, in Richard E. Grandy and Richard Warner, Philosophical Grounds of Rationality, 1986, p. 67
Pascal’s wager is unlikely to have convinced many doubters to believe in spite of all the ‘good’ that belief brought with it; likewise, Grice’s wager is unlikely to persuade many that if only Philosophers would agree, in a common tongue, about what the problems are, they’d stave off disaster. What is this disaster that Grice warns us about? It seems to be that if Philosophy keeps posing the same old problems it won’t live; and if it doesn’t come up with new problems, it’ll die. So, he says:
those who still look to philosophy for their bread and butter should pray that the supply of new problems never dries up. Paul Grice, in ibid., p. 106
If luck won’t work, will prayer? What might be a disaster for Philosophy and for Philosophers, of course, need not be a disaster for the rest of us who merely think. With more than enough bread-and-butter problems to occupy us, pleading for new ones is simply perverse. I’ve made more than one reference to the efforts of Philosophers to give the subject the look of a ‘science’; the Anglo-American emphasis on analysis of language has represented one of these efforts. Philosophers from a scientific or mathematical background, like Saul Kripke and W. V. O. Quine, have seen in logic the means to place the subject on a ‘scientific’ footing. Do scientists make use of logic, though? Do they take much notice of Philosophy, and the findings of ‘philosophers of science’? Theoretical physicist and cosmologist Stephen Hawking didn’t: in a speech at the Google Zeitgeist Conference, in 2011, he said “philosophy is dead” because philosophers haven’t kept up with developments in science, and with developments in physics in particular. It was science, he said, that now bore “the torch of discovery in our quest for knowledge”. You’d think a death-announcement of this sort would bring down the curtain on any ambition to make the subject more rigorously scientific – but it probably hasn’t. Philosophers with a background in ‘humanities’ subjects – and that’s most of them, I’d imagine – place Philosophy firmly alongside History, Literature, Linguistics, Religious and Cultural Studies. Much of Philosophy, indeed, is History. For every book that does Philosophy there are ten or more histories of the subject, to add to the tens of thousands of books about this or that Philosopher, dead or alive – of course, mostly very dead. Before Socrates, Philosophy could hardly be distinguished from poetry; and I quoted poet and critic Matthew Arnold, back in Reason 1, predicting that: ‘Most of what now passes with us for religion and philosophy will be replaced by poetry’.
There may be neither rhyme nor reason in much Philosophy; so perhaps it could be said to be nearer to the novel and the play. American
Philosophers Dewey, Rorty, and Caputo have noted that novels by the likes of Nabokov, Orwell, Naipaul, Rushdie, and Ishiguro, who’ve lived and written at the crossings of cultures, have been more successful at ‘carving reality at the joints’ than many a Philosopher. Among French Philosophers, J.-P. Sartre wrote both novels and plays full of philosophizing. So did Badiou and Deleuze – indeed, Badiou points out that he and Deleuze, and Foucault, and Jacques Lacan wanted above all to be writers (his emphasis) – not specifically writers of Philosophy (with that capital P). Among British Philosophers, Simon Critchley is a published novelist. His Memory Theatre has a narrator who’s a philosopher called ‘Simon Critchley’; but there’s as much fiction in this debut as fact. Alain De Botton is another Philosopher who took a holiday from Philosophy in fiction, with The Course of Love. In this, he interleaves non-fiction advice with accounts of the ups and downs in the marriage of an ‘emblematic’ couple. Colin McGinn is yet another Philosopher-novelist, with The Space Trap (1992) and Bad Patches (2012) to his name. Gary Gutting was an American Philosopher with a European focus: Philosophies are like novels, not alternative absolutes among which we must choose the ‘right one’, but different perspectival visions (…) all of which have their relative values and uses. Gary Gutting, French Philosophy in the Twentieth Century, 2001, p. 386
When a Philosopher talks about ‘philosophies’ in the plural, and identifies them with perspectives and visions that have their ‘uses’, Philosophy has come a long way from being the same thing in Rome as in Athens – or from trying to be. If philosophies are like novels, we could do worse than read novels; at least they divert as much as they may gently instruct. I’ve not referred so far in this short book to aesthetics. This could be said to be the fourth corner of the subject, to add to metaphysics, epistemology, and ethics – but rather few Philosophers have written about it, beyond devoting a chapter to it in yet another Introduction to Philosophy. (There are almost as many of these as there are histories). Perhaps the old adage ‘Beauty is in the eye of the beholder’ puts off those who look for judgments that we might all make. Few would echo Scruton’s judgment that ‘the human world was
irrevocably changed by Bach, Borromini and Braque’, and few would want to suggest an alternative composer-architect-painter threesome who changed the world. Perhaps, too, there’s a suspicion that what those who do write about aesthetics are really doing is trying to impose their own tastes on the rest of us. And, what’s more, their judgments might have something a touch ‘spiritual’ about them: The art of the past will continue to point to a sense that there is in the universe a power which works towards the good and gives the universe and our existence a meaning beyond that provided by science. Anthony O’Hear, Philosophy in the New Century, 2001, p. 123
And De Botton, who criticizes Auguste Comte for calling his grand narrative a ‘religion’, defines good art in his Religion for Atheists. He calls it: ‘the sensuous presentation of those ideas which matter most to the proper functioning of our souls’. His use of the word ‘proper’ has as much of the sermon about it as the word ‘soul’. Aesthetics is, perhaps, for artists and art-critics to argue about. The rest of us can be left to like, or dislike, what we see and hear, whether or not we can explain why. Much of the time, Philosophers write books for other Philosophers: problem-solving in expensive books and obscure journals is their ‘bread and butter’. Their career prospects depend on their productivity, and on being quoted in the expensive books and papers written by their peers. Some, wishing to address a wider public, write books that ‘popularize’ the subject; others turn to fiction; and still others to issues of the day. De Botton, for example, lists among his published books: The Art of Travel (2002); The Pleasures and the Sorrows of Work (2009); and How to Think More About Sex (2012). He’s called a Philosopher in the publisher’s blurbs – but are these books really ‘Philosophy’? Are they studied in departments of Philosophy? Perhaps they are in certain of the courses referred to under Reason 1. He did publish The Consolations of Philosophy, in 2000, and in this he uses selected thinkers as hooks on which to hang the advice of a kindly agony-uncle. Julian Baggini tried something similar in The Virtues of the Table: How to Eat and Think (2014): perhaps he had said, in all the books that he’s published since the turn
of the millennium (twenty-five or more), all that he has to say on the subject of Philosophy. A.C. Grayling is another UK-based popularizer. He said of what De Botton writes that ‘it’s not philosophy. It’s cream-puff stuff’. Having written books titled The Meaning of Things (2001); The Reason of Things (2002); and The Mystery of Things (2004), Grayling wrote The Heart of Things: Applying Philosophy to the 21st Century (2005). This is ‘philosophical’ only in the sense in which we say: “He failed the exam, but he’s being very philosophical about it”. It contains essays on Romance, Cheating, Nudity, Cowardice, Fences, Conversation – all of them high-minded, if moralizing. Then, since there were still things to say about things, he wrote The Challenge of Things (2015). Thirty-six books have appeared under his name since the mid-‘90s – five of them in 2007 alone. It may not be ‘cream-puff stuff’, but can it be good for us? Can it be good for Philosophy? Is it Philosophy at all, or is it journalism – doubtless very competent, readable journalism? The same question might be asked of Colin McGinn’s Sport: A Philosopher’s Manual (2008), and the Slovenian public intellectual Slavoj Žižek’s Event: Philosophy in Transit (2014), in which he writes about psychoanalysis, film and fiction, politics and economics, all in the pursuit of what an Event (with a capital E) might be. A more suitable title might have been Philosophy in Flight. Peter Singer is an interesting case: an Australian, he taught in Melbourne before moving to Princeton. He nailed one of his colours to the mast in his Animal Liberation, in 1975: ever since, his opposition to the way we treat animals has been a prominent motif in his writings. He adopted the term ‘speciesism’, to compare our treatment of animals today with the way we treated slaves up until 1865 – treatment we’d now call ‘racism’, or something worse.
He has campaigned to defend the right to abortion, to help those in extreme poverty in the world; and he has got himself into trouble over the years by arguing for voluntary euthanasia. More than one conference has been cancelled because religious opponents refused him a platform. In One World: The Ethics of Globalization (2002), he wrote as a committed environmentalist. Singer is a Moral Philosopher; all his books are based on thoughtful principles, and we wouldn’t want to be without them. You’d
approve of every one of them, Sophie, I know. But are Singer’s principles Philosophical principles? They’re all good liberal principles; indeed, they’re all good (small p) philosophical principles – but One World could have been written by Bill McKibben, George Monbiot, Al Gore, or David Wallace-Wells, all of whom have written morally mindful books on the environment. None of these is called a Philosopher.
A loss of confidence

Perhaps Philosophy ought to be what it is in Singer’s hands; but could all his concerns be contained in one academic subject, in one university department? Perhaps they could; in fact, they should, and all students, young and old, should be required to study at least a sub-set of them. It’s all to the good that Philosophers like Singer should be public intellectuals; but why do only a small minority of public intellectuals call themselves Philosophers? Why do thinkers who write things like this call themselves Philosophers? Philosophical questions are often in desperate need of clarification, and they tend to lose any aura of mystery upon careful inspection. Gordon Baker, in Richard E. Grandy and Richard Warner (Eds.), Philosophical Grounds of Rationality, 1986, p. 309
The failure of philosophy to provide what religion could no longer furnish was an important cause of philosophy losing its central cultural role and becoming a marginal, narrowly academic subject. Alasdair MacIntyre, After Virtue: A Study in Moral Theory, 1985, p. 24
In today’s world of professionalized philosophy, the most brilliant solution of a puzzle can get its author a very long way indeed; the temptations and pressures are there to write on puzzles, for other professional philosophers, and let civilization take its course. Edward Craig, Philosophy: A Very Short Introduction, 2002, p. 116
Many questions that I would regard as philosophical are investigated outside philosophy departments and many questions that I would regard as not particularly philosophical are investigated within. Philip Pettit, in Brian Leiter (Ed.), The Future for Philosophy, 2004, p. 311
Philosophical beliefs are much less distinctive in nature than many philosophers like to think (…) In general, the postulation by philosophers of a special cognitive capacity exclusive to philosophical or quasi-philosophical thinking looks like a scam. Timothy Williamson, The Philosophy of Philosophy, 2007, pp. 133, 136
Each of these thinkers called himself, and was called, a Philosopher, in spite of his misgivings. They must have thought there was some special brand of thinking that united them. What might this glue now be that holds all the bits and pieces of Philosophy together to justify its being called a ‘discipline’, when it has no objectives, no methodology, no evidence-base of its own to distinguish it from other intellectual pursuits? Do Philosophers in their role as public intellectuals contribute to public policy? If they did, that would be something. Mary Warnock, a Moral Philosopher, chaired a committee of enquiry into special education in the 1970s, because of her background in education rather than because she had published works on existentialism. And it was as a public intellectual (and, to be sure, as a writer on ethics) that she was appointed to government committees looking into environmental pollution, human fertilization and embryology, and animal experimentation, in the 1980s. Margaret Thatcher declared: Choice is the essence of ethics. The economic results [of the Western way of life] are better because the moral philosophy is superior. Charles Moore, Not for Turning, 2013, p. 348
But she turned to economists and business leaders for advice – Alan Walters, Douglas Hague, Gordon Pepper, John Sparrow – not Moral Philosophers. Was there a sage Philosopher on the Scientific Advisory Group for Emergencies (SAGE), for example, to which the UK government turned during the COVID-19 pandemic? Would he or she have had anything useful to say?
Where’s all this leading us?

Whereas a minority of Philosophers have tried to make the subject more ‘scientific’, others have preferred to see it as a ‘humanities’ subject. Those Philosophers who remain faithful to the ‘problems’ that have been their bread and butter find themselves writing papers on ever more obscure aspects of those problems, having titles like those that I listed above (What’s the meaning of ‘This’? The meaning of ‘meaning’, and so on). Those who acknowledge that Philosophy might understand the world much as the writers of fiction understand it – from a certain, subjective angle – may test their visions as novelists, or as critics. Those less inclined to stretch Philosophy to this subjective extent philosophize in non-fiction that stretches the subject so as to embrace journalism, in books and periodicals written and published for the ‘general reader’. I would argue that these trends represent a loss of confidence in the subject: doubts that weaken the fences that bordered the subject in its heyday. A Philosophy without borders is a subject without a determinate identity: it’s philosophizing, and that’s something you do, Sophie; it’s something I do; it’s something all thinkers do without imagining they’re doing ‘Philosophy’.
Conclusion

I have tried to show that philosophy, the quest for knowledge, the love of wisdom, became Philosophy, an academic discipline, only after long years in which it was indistinguishable from Theology, and later ran in parallel with it. Faith and Reason were sometimes happily married; sometimes one was dominant; and sometimes – and certainly in modern times – they have slept in separate beds. I have argued that, in a number of ways, and despite everything, the relationship is still a close one. Philosophy as an institution has separated itself from Theology in most universities, though in some they have been reconciled, perhaps, for economic reasons. It is recognizably a ‘humanities’ subject – not maths, not science, not social science – alongside History, Literature, and Cultural Studies. It wears all the badges of membership of the academic community: it has its own specialist journals, its conferences, its PhD programmes. It enjoys prestige as (generally speaking) a post-school subject, whose subject matter is accorded respect by the uninitiated. A Professor of Philosophy is regarded with some awe, even though that professor may not be any clearer than the rest of us about what Philosophy is, or might be. The closeness to Theology, though, is still evident in the way it thinks: there’s still that assumption that its propositions apply universally, or that they ought to; it’s still inclined to divide ideas, approaches, the world, in two; it views knowledge as hard won, if it can be won at all, and truth as an entity that’s ‘out there’ ready to be pinned down; it’s reluctant to accept that everything that exists is physical – that thinking is as physical a process as walking; and it tends to believe, or to hope, that there are moral facts to which we might all assent, wherever and whenever we live. I hope to have shown that Philosophy, like Theology, has sought foundations for its beliefs: ultimate bases for propositions that hold in all possible worlds.
They’ve wanted absolutes because what’s only relative is too slippery, too subjective, too fashionable. They’ve shouted relativism down, suspecting that it permits where it only describes. They’ve pointed to torture, genocide, abuse of innocents, and cruel and unusual punishments to make their moral case,
when none of us – on good days – would approve of these abominations. I’ve said they have looked for absolutes, foundations, ultimate principles, not that they do, because many Philosophers now settle for less. They recognize that it’s not ‘wrong’ to let women choose whether or not to bring an unviable foetus to full term, or to campaign for doctor-assisted dying in defined cases, any more than it’s wrong to prefer Bowie to Bach. It’s not ‘wrong’ to legalize the recreational use of cannabis, or to grant citizenship to economic migrants, any more than it’s wrong to favour the adaptation of a novel for the cinema over the hardback original.

There might have been less ‘sting’ in the arguments of Philosophers if they had preferred facts over truth. Facts by definition, and facts by discovery, aren’t antagonists like truth and falsehood are. If they’d traded in facts, Philosophers might have succeeded in being more like scientists, except – again as I hope to have shown – they don’t have the tools to establish facts in Philosophy, both because there can’t be any, and because the tools they’ve used aren’t up to the job. Reason doesn’t establish facts; intuition and thought-experiments don’t establish facts; and logic does little more than demonstrate the ‘truth’ of truisms. Since it can’t trade in facts, Philosophy can only trade in beliefs, and these – like all other claims that aren’t facts – are matters of degree, and infinite petty dispute.

If Theologians and Philosophers hadn’t given currency to the idea that there was Truth, out there, to be revealed or discovered, and believed in, we might have been spared talk of our own post-modern time being called a ‘post-truth’ age. I’ve said that philosophy is an elevated word for thinking: I should, perhaps, have said that it’s equivalent to critical thinking. This involves weighing the judgment of informed others to mitigate confirmation bias; questioning intuitive claims; choosing the claim, or set of claims, that’s supported by the best empirical evidence; coming to an informed judgment; and putting it to the test. Critical thinking is no more a subject than philosophy is: arguing effectively, by defining terms, considering counter-claims, marshalling facts, and coming to safe conclusions, is what one hopes to do when one thinks in and beyond the borders of any subject.
A case could certainly be made for a course in the history of ideas, just as a case has been made for rebranding Theology as Religious Studies, where the phenomenon under review is religion, rather than God or gods. Bryan Magee, when he wrote The Great Philosophers, in 1987, wasn’t a Philosopher: he was Honorary Research Fellow, and later Visiting Professor, in the History of Ideas, at King’s College, London (though his book only contained material on Philosophers as conventionally defined). A course in the history of ideas would certainly contain as many social and physical scientists, historians, fiction-writers, essayists, and political reformers as theologians and Philosophers. The choice of thinkers would vary interestingly from teacher to teacher. It would be a multidisciplinary subject in a world in which there is too much compartmentalization, at all levels of education beyond the primary phase. It would include many more thinkers who never thought of themselves as Philosophers with a capital P than it would those who’ve professed Philosophy, and been paid to do so.

More and more Philosophers think outside the box; acknowledging as they do so that the conventional box of Philosophical tricks is too small for them, they take their theories into the boxes of other subjects. But what new thinking can a professor of social and political theory, for example, bring to sociology and political science that thinkers in these boxes can’t devise for themselves? They have well-established bodies of knowledge, and methods of enquiry of their own. Of course, Philosophy is changing, as I hope to have shown, impressionistically, in Reason 1: as issues to philosophize about evolve and diversify, so Philosophers will train their sights on them alongside their colleagues in other departments.
But if thinkers who call themselves Philosophers have neither subject content nor methods of thinking that mark them out as specialists, what confidence can they have that they have anything of value to contribute? And what value can their colleagues place on their judgments? Historian and philosopher Yuval Noah Harari wrote this during the coronavirus pandemic of 2020: When the present crisis is over, I don’t expect we will see a significant increase in the budgets of philosophy departments. (…) Governments anyhow aren’t
very good at philosophy. It isn’t their domain. (…) It is up to individuals to do better philosophy. Yuval Noah Harari, in The Guardian, 25 April 2020
Philosophy with a small p is everyone’s ‘domain’; but I couldn’t agree more that it is up to the individual to think – and to think again. This is not an end-of-philosophy book, and it’s certainly not an end-of-philosophizing book. If it were, it might have borne those words in its title. It’s an end-of-being-fooled-by-Philosophy book that hopes to have questioned whether Wisdom should have been thought of as a holy grail once the ‘holy’ had gone the way of ‘heaven’.
Sophie’s Response

Sophie (her nom de plume) read through the above argument. She was nineteen at the time, before she embarked on the study of Philosophy at university. Naturally, I was curious to know what she thought of it. I didn’t write it in the hope that she would change her mind, and study something more useful; I simply hoped that she would be enabled to think critically about what she was about to do, as an antidote to the sort of pre-course reading that she was recommended to do. Sophie chose to write down her thoughts as follows:

Foolosophy: I read, I thought, I considered. First of all, I feel a need to say that I agree with nearly everything you say. I think the only place we differ is in our conclusions. You do not think Philosophy is a subject in its own right, worth studying, and I do. Perhaps this is because I am young and naïve and still see Philosophy as a way out from the uncritical thinking that I feel I’m surrounded by. I grew up in small towns where people didn’t even consider the subjects that Philosophy discusses. Where there is a dichotomy of thought, and no thought at all, Philosophy for me, at the moment, is choosing thought. You, on the other hand, have lived a whole life and haven’t had to confront thoughtless, empty people, for a long time. You’ve seen what Philosophy has to offer
for enough time that it no longer offers you anything you’re searching for, and now all you can see are its flaws. I think I understand that; I’m sure it’ll happen to me some day, too.

Your book is well written and I can hardly imagine all the research and organization it took to put together all your thoughts and citations. I’d like to thank you for this as, despite my interest in the subject, I find committing to, and following through with, reading books on academic subjects with high-level word-choices to be quite taxing. I do want to know about these things, but don’t quite have the motivation to find out about them. Your book was very useful for me as my emotional ties to you, and my determination to show my appreciation for being addressed in your book by telling you what I thought of it, prompted me to read through till the end.

I agree with your first point most of all. As you may know, I’ve never been a religious person and I, too, find the obsession with questions about God to be very off-putting in Philosophy. I understand why they’re so tied together when you consider history, but personally, I don’t think the existence of God is something that all
humans intrinsically question any more. The advance of scientific theories for why and how we all exist means that many people, like myself, grow up thinking that religion is somewhat archaic. Though I respect people’s right to follow a religion, I don’t want to have to study theology as part of my Philosophy degree. They certainly do sleep in separate beds now and will probably end up getting a divorce one day.

I also agree that Philosophy doesn’t quite know what it is, but I don’t necessarily think this is a bad thing. Perhaps it shouldn’t call itself an independent academic subject and submit itself to standardized testing, and instead simply be discussed and thought about. Having said this, I don’t know whether I would ever have been introduced to Philosophy if it wasn’t in university prospectuses. Maybe it should be taught in secondary school as the history of Philosophy, or something. I don’t know; I don’t have the answer to this.

Which brings me on to my next point, which is that Philosophy tries to be too definite. I really liked your raising the issue of simplifying complicated things in dichotomies. I often think that ‘Great Thinkers’ get a bit
arrogant and think that they can come to conclusions about everything, when there are so many factors they’re not considering. To say that thought experiments provide evidence for Philosophical theories is ridiculous. Plato’s world of Ideas is completely baseless and yet he taught it as though it was fact. I agree that facts would be extremely useful in Philosophy. Most people’s theories are just so personal and hold up no more in the real world than Freud’s theory of penis envy. There is no issue, in my opinion, where theorising for the sake of it is concerned, or in explaining the world in a way that makes sense to you, and then sharing that with others. The issue is when Philosophers think they are objectively correct and that others are wrong. Opinion and personal experience are a big part of Philosophy. We shouldn’t try to make it a science like we did to psychology, because it isn’t one.

I completely agree with you that Philosophy is historically sexist; it attempts to be universal when really nothing is; and it tries to be objective about things that are so opinion-based, such as morality. Philosophy is flawed because the human beings who participate in it are flawed. Every subject is flawed, but none more than ones, like Philosophy, which are the product of our own minds and can’t be pinned down. It’s easy to say that Philosophy is useless because of all this, but I don’t believe that it is.

Thank you for presenting all these topics to me in your usual way. I will now start my studies with more educated opinions and perhaps more scepticism about Philosophy with a capital P than I would otherwise have done. Foolosophy opened my eyes to several arguments within Philosophy that I’d never considered; but it still left me with some hope and desire to study it. I don’t know if that’s what you intended, but I see it as a good thing.

Sophie
Index

A Aesthetics, 16, 140, 141 Analogy, 87, 88, 94, 113, 122, 123 Annas, Julia, 110, 111 Anscombe, Elizabeth, 127 Anselm, Saint, 24, 51 Appearance, 50, 54, 60 A priori/A posteriori, 50-54, 61, 68, 81, 104, 111 Aquinas, Thomas, 24, 59, 100, 114 Aristotle, 7, 21-23, 25, 26, 30, 36, 39, 60, 71, 75, 99, 110, 114, 127, 130 Arnold, Matthew, 16, 30 Audi, Robert, 62, 65 Augustine, 23-25, 30, 59, 72, 73, 100, 122 Axelrod, Robert, 108 Ayer, A. J., 60, 105, 129 B Bacon, Francis, 26, 30, 61 Badawi, Zaki, 35 Badiou, Alain, 13, 135, 140 Baggini, Julian, 10, 32, 117, 141 Baker, Gordon, 143 Barth, Karl, 59, 72 Baudrillard, Jean, 42, 43, 79 Bauman, Zygmunt, 44
Benson, Ophelia, 81, 117 Bentham, Jeremy, 29, 32, 105, 106, 108 Bergson, Henri-Louis, 119 Berkeley, George, 27, 51, 53, 60, 73 Blackburn, Simon, 53, 112 Boghossian, Paul, 51 Bradley, F. H., 31 Butler, Joseph, 27, 101 C Calvin, John, 25, 49, 59, 100 Caputo, John D., 13, 39, 94, 118, 140 Carel, Havi, 133 Carnap, Rudolf, 77 Cassian, John, 99 Cathcart, Thomas, 18 Cave, Peter, 63 Chalmers, David J., 81, 82, 94 Cicero, 20, 99, 100, 114 Coherence theory, 75-77 Common sense, 53, 76, 105, 118 Comte, Auguste, 30, 40-42, 141 Conscience, 100, 101, 106 Consciousness, 85, 92-94 Consequentialism, 104, 105, 107 Constantine, 34, 48 Continental Philosophy, 55, 67, 79 Contingency, 50, 53, 54
Contractualism, 107 Correspondence theory, 75-77 Craig, Edward, 143 Crane, Tim, 10 Critchley, Simon, 140 Critical thinking, 118, 147 D Damian, Peter, 25 D’Ancona, Matthew, 80 Darwin, Charles, 87 Da Vinci, Leonardo, 88 Dawkins, Richard, 94 De Beauvoir, Simone, 134 De Botton, Alain, 89, 140-142 Deductive argument, 26, 61, 62, 128-130 Deflationary theory, 76 Deleuze, Gilles, 14, 79, 140 Dennett, Daniel, 79, 80, 94 Deontological ethics, 103 Derrida, Jacques, 14, 42, 67, 79, 101, 135 Descartes, René, 11, 13, 23, 25-27, 36, 60, 90, 122, 123, 128 Dewey, John, 140 Diderot, Denis, 27, 83 Driver, Julia, 109 Dualism, 90-92 Dummett, Michael, 38, 73 Duns Scotus, 25
E Effingham, Nikk, 137 Eliot, T. S., 76 Emotivism, 105, 107 Empiricism, 50, 53 Epicurus, 114 Epistemology, 12, 16, 62, 67, 69 Ethics, 12, 16, 23, 29, 40, 54, 55, 89, 99, 101, 103, 104, 107-112, 117, 128, 137, 142, 144 Existentialism, 67, 79, 144 F Facts, 10, 36, 38, 57, 61, 68, 77-81, 84, 109, 146, 147 Faith, 24-26, 30, 47, 49, 59, 69, 73, 99, 114, 115, 146 Fernandez-Armesto, Felipe, 79, 80 Field, Hartry, 52, 54 Foerst, Anne, 35 Fosl, Peter S., 14 Foucault, Michel, 14, 42, 45, 79, 140 Frege, Gottlob, 31, 41, 129, 132 Freud, Sigmund, 30, 80, 88 G Gamez, David, 133 Garvey, James, 121 Genesis, 46, 57, 85
Gensler, Harry J., 111 Gettier, Edmund, 64, 65 Gödel, Kurt, 83, 120, 132 Grayling, A. C., 55, 91, 142 Greek philosophy, 20-25, 30, 36, 45, 60, 62, 114 Green, T. H., 30, 31 Grice, Paul, 73, 138, 139 Gutting, Gary, 79, 119, 140 H Haack, Susan, 74 Habermas, Jürgen, 79, 120 Halbach, Volker, 132 Harari, Yuval Noah, 148, 149 Hare, R. M., 102 Hawking, Stephen, 139 Hegarty, Paul, 42, 43 Hegel, G. W. F., 30, 40, 42, 61 Heidegger, Martin, 31, 44 Hermeneutics, 67 Hewlett, Martinez, 35 Hodge, Charles, 59, 131 Hodges, Wilfrid, 100 Horwich, Paul, 76, 77, 131 Hume, David, 27, 28, 30, 60-62, 65, 68, 101-103, 106, 123 Husserl, Edmund, 31, 66 Hutcheson, Francis, 29, 103 IJ Idealism, 50, 51, 53, 73
Identity, 125, 130 Inductive argument, 26, 61, 62, 128 Intuitionism, 105, 113, 114, 118-121, 124-126, 132, 147 Jackendoff, Ray, 84 James, William, 31, 95 Jenkins, David, 49 Jesus, 23, 34, 35, 49, 59, 72, 73, 87, 97, 98 John, 23, 72, 98 John, the Scot, 114 Johnson, Mark, 89 Justification, 49, 63, 64 K Kahneman, Daniel, 116, 119-121 Kallestrup, Jesper, 126 Kant, Immanuel, 29, 36, 39, 40, 59, 60, 102, 103, 106, 115 Kearney, Richard, 67 Kierkegaard, Søren, 30 Kim, Jaegwon, 36-38, 92 Klein, Daniel, 18 Knowledge, 11, 14, 16, 20-22, 28, 29, 38, 44, 49, 51-53, 57-70, 93, 100, 118-120, 137, 138 Kołakowski, Leszek, 38, 39, 73 Kripke, Saul, 76, 139 Kvanvig, Jonathan, 65
L Lacan, Jacques, 140 Lakoff, George, 89 Language, 14-16, 31, 37, 41, 67, 68, 73, 91, 102, 125, 126, 129, 131, 132, 138, 139 Leibniz, Gottfried, 27, 128 Levinas, Emmanuel, 14, 16 Lewis, David, 60 Linguistic analysis, 41 Locke, John, 27, 30, 36, 51, 60, 106, 115 Logic, 14-17, 26, 31, 51, 53, 60, 61, 74, 75, 102, 113-114, 116, 127-132, 139, 147 Lowe, E. J., 90-92, 137
M McGinn, Colin, 140, 142 McGinn, Marie, 15 MacIntyre, Alasdair, 137, 143 Mackie, J. L., 109, 110 Magee, Bryan, 55, 129, 148 Margolis, Joseph, 91, 92 Marsh, Henry, 95 Marxism, 40 Matthew, 34, 98 Mercier, Hugo, 120 Metaphysics, 12, 16, 17, 24, 137 Midgley, Mary, 135 Mikis, David, 135 Mill, J. S., 30, 50, 106 Mind, 7, 8, 16, 22, 24, 26, 36, 40, 55, 73, 85-96, 125, 126, 136 Moltmann, Jürgen, 72 Moore, G. E., 60, 65, 105, 119 Moral philosophy, 12, 24, 29, 144 More, Thomas, 25
N Nagel, Thomas, 43 Natural law, 50 Necessity, 53, 54 Newman, J. H., 100 Newton, Isaac, 14, 27, 28, 30, 123, 128 Nietzsche, Friedrich, 30, 39, 79 Nozick, Robert, 123
OPQ Objectivity, 50, 109 Ockham, William of, 25 O’Hear, Anthony, 27, 37, 38, 158 Ontology, 24, 136, 137 Oxford Philosophy, 16, 24, 25, 30, 32, 72, 129, 134 Paley, William, 123 Pascal, Blaise, 114, 115, 138 Patterson, Douglas, 82 Paul, Saint, 23, 34, 58, 59, 72, 90, 99, 100 Peacocke, Christopher, 51 Pettit, Philip, 144 Phenomenology, 66
Philosophy courses, 16, 17 Physicalism, 90-92 Pinker, Steven, 125 Plato, 7, 11, 15, 21-23, 30, 39, 51, 60, 83, 87, 99, 114, 122, 134 Plotinus, 25 Postmodernism, 42, 80, 117 Pragmatic theory, 76, 77 Puzzles, 17-19, 143 Pythagoras, 20 Quine, W. V. O., 75, 139 R Ramsey, Frank, 19 Rawls, John, 106, 107, 123 Realism, 50, 53, 109 Reason, 25, 27, 29, 40, 42, 59, 99, 100, 102, 114-118, 120, 121, 132, 146, 147 Relativism, 19, 50, 78-81, 109, 111, 137, 146 Restak, Richard M., 93 Rhetoric, 117, 132 Rorty, Richard, 44, 67, 140 Rousseau, J-J., 27, 106 Royal Society, 17, 26, 27, 61 Russell, Bertrand, 7, 12, 14, 31, 41, 60, 62, 73, 118, 129 S Sartre, Jean-Paul, 36, 66, 134, 140
Scepticism, 60-62, 65, 66, 133 Schlick, Moritz, 77 Schoolmen, 15, 25, 26, 30, 36, 61, 72, 99, 132, 134 Scruton, Roger, 40, 77, 78, 89, 140 Searle, John, 44, 92, 93, 124, 125 Seven Deadly Sins, 49, 99 Shafer-Landau, Russ, 37, 38, 110 Sider, Theodore, 15, 81 Sidgwick, Henry, 30, 104, 105, 118 Singer, Peter, 117, 142, 143 Smith, Peter, 129, 130 Socrates, 21, 22, 30, 36, 51, 60, 134, 135, 139 Spencer, Herbert, 30 Sperber, Dan, 120 Spinoza, Baruch, 27, 30, 128 Spirit, 35, 38, 40, 47, 48, 58, 85-87, 89, 91, 93, 95, 141 Stangroom, Jeremy, 10, 81, 117 Subjectivity, 50, 66, 92, 93, 110, 116, 145, 146 TUV Tartaglia, James, 17 Toland, John, 115 Torrance, T. F., 59, 62 Truth, 13, 14, 24, 27, 31, 38, 39, 42, 43, 53, 63, 68, 69, 71-84, 102,
105, 109, 111, 115, 117, 121, 131, 135, 137, 147 Truth tables, 74, 75 Turri, John, 64 Unger, Peter, 65 Utilitarianism, 104 Validity, 129, 130 Verification Principle, 41, 42, 105 Vienna Circle, 41, 42, 77, 105 Virtue Ethics, 110, 111 Voltaire, 27, 73 W-Z Warburton, Nigel, 10, 18 Ward, David, 13 Ward, Keith, 73 Warnock, Geoffrey, 32 Warnock, Mary, 144 Weil, Simone, 134 Whewell, William, 29 Whitehead, Alfred, 31, 83 William of Ockham, 25 Williams, Bernard, 55, 56, 134 Williams, Rowan, 87 Williams, Shirley, 134 Williamson, Timothy, 9, 144 Wisdom, 7, 17, 20-23, 33, 34, 38, 46, 58, 79, 99, 146, 149 Wisdom of Solomon, 23 Wittgenstein, Ludwig, 7, 19, 44, 74, 75
Women in Philosophy, 24, 29, 118, 134, 135 Xanthippe, 134 Žižek, Slavoj, 142
ibidem.eu