
Introduction

When planning this book, in which I want to refute the idea that there can be processes in machines comparable to mental processes in humans, i.e. that machines can be human-like, I first thought to take typical examples of computer functions allegedly comparable to human mental functions, compare the two, and then decide whether it makes sense to regard computer functions as really human-like. But I soon realized that my task would be comparable to Sisyphus's. With the latest computer accomplishment popping up on a monthly if not weekly basis, I would have to start pushing the rock uphill anew every month or week with no end in sight. The book would have to be rewritten at least once a year. Machines with human-like mental functions have now been predicted for many decades (since the advent of electronic computers), and I am afraid that this will go on for many decades more unless the scientific community realizes that there are qualitative differences between human mental functions and functions running on computers, differences which make quantitative comparisons (as with, for example, intelligence) meaningless. So the book will be about the a priori grounds on which computers cannot be human-like.

When, long ago, I read Weizenbaum’s comment that to compare the socialization of a machine with that of a human is a sign of madness, I immediately felt that this was the ultimate comment possible and did not expect Weizenbaum to explain why (which he did not bother to do anyway). But since the insight into this madness has not befallen the scientific community nearly half a century later, that “Why” must finally be delivered.


A comparison between the socialization of a human and that of a computer is not seen as mad in the AI community. This is so not because obvious similarities could be shown to exist between the two (there aren’t any; there is just a faint analogy in that both are somehow affected by their environment) but simply because this way of speaking (and thinking) has become quite common among AI researchers. Take a typical phrase like “culture is poured into artificial brains” (Collins, 2018, p. 173). You can pour culture into computers as little as you can pour the theory of relativity into a teacup. Any nonsense can be put forward in a grammatically correct form and, if done routinely, it may no longer be felt as nonsense. The above phrase about how culture can manage to get into a machine reminds one, in its nonsensicality, of Francis Picabia’s1 “The head is round in order to allow the thoughts to change direction”. On closer inspection, much of AI speak appears to be inspired by Dadaism. The only difference between AI speak and Dada speak is that Dada is funny and the Dadaists consciously invented nonsensicalities in order to provoke and entertain, while AI speak is by no means funny and AI researchers are convinced that they talk reasonably. So when reading the AI literature, we are confronted with all kinds of absurdities: machines being socialized, having human feelings, motives, consciousness and religion, and westernized human-like desires; culture being poured or fed into machines; artificial slaves with human-like bodies; or self-replicating intelligent machines. I think that large parts of the AI literature may be seen as lessons in absurdity, presented as rational, scientifically based predictions about our digital future.

The idea of human-like computers is based, quite like its 18th-century predecessor, L’homme machine (Man a Machine) by de la Mettrie (1921), on a simple mechanistic-materialistic belief that physics abandoned long ago. As a consequence of this simplistic man-a-machine view, the whole debate about human mental functions in computers suffers, as mentioned, from a deep ignorance about the nature of human mental processes, about the human psyche. In addition, the engineering perspective of the field largely underestimates our deep ignorance about the biological, largely neural, basis (the biological substrate) of those processes. So I will deal with that substrate in some detail and then try to paint a realistic picture of the daunting complexity of human mental functions, in accord with the simple motto “if you want to mimic something you should know what it is.”

That picture will constitute the largest part of the book. My view of the man-a-machine matter is not just that the idea is a somehow problematic one, but that it is an absurd, i.e. a ridiculously nonsensical one. This view, identical to the one Weizenbaum took half a century ago, is today held by a small minority facing a huge majority of believers in human-like computers, both among the wider public and among scientists. To convince the reader to join a small minority against a seemingly overwhelming opposition is no easy task. As a kind of psychological support in this task let me, before coming to the actual topic of the book (the absurdity of the man-a-machine notion), present some examples from intellectual history where a large majority had it wrong for decades, centuries or even millennia. And let me also point to some peculiar characteristics of the debate which must make us doubt that we are dealing with a scientific one in the first place. In terms of argumentation, the field must be seen as one following entirely its own rules.

1 Francis Picabia (1879–1953), painter, Dadaist, surrealist.
