
Sunless Light & Wordless Logic

by Boyd R. Collins

Jaron Lanier, a founding father of virtual reality, recently wrote an insightful book called You Are Not a Gadget, in which he describes the transition currently taking place in Silicon Valley culture. “Since implementation speaks louder than words, ideas can be spread in the designs of software. If you believe the distinction between the roles of people and computers are starting to dissolve, you might express that—as some friends of mine at Microsoft once did—by designing features for a word processor that are supposed to know what you want, such as when you want to start an outline within your document.” (Lanier, 2011, loc. 564). Software has become so successful at supporting human objectives that many are now convinced it should formulate our objectives for us.

Because of this, software design often seems based on an invisible ideology. The effectiveness of its hidden agenda comes from its pretense to neutrality and the fact that it is now the default means for accomplishing most of our work. Its unstated assumptions can therefore be smuggled in under the cover of “practicality.” On the surface, it appears to enable the most efficient way of interacting with a process to achieve a defined goal. But if we pay careful attention, we can see that it incorporates assumptions about the nature of our tasks that we thoughtlessly accept.

We have been conditioned to expect software suggestions, automatically implemented, that promise to improve our work. An outline will be started for our document whether we intended one or not, so we either play along and assume that Word knows our document better than we do, or we learn to manipulate the software’s expectations. This default behavior accustoms users to accepting such suggestions without consideration, while covertly embedding an assumption of machine superiority into our own mental software. As we become ever more thoughtless, the work that incorporates these suggestions seemingly improves, and we receive the rewards of that “higher quality.”

In addition, our software conditions us to believe that the most significant human achievement is to enable incremental improvements in existing processes rather than re-thinking them from the ground up. Gradually, we are losing the ability to rethink what already exists, weakened as we are through increasing dependence on software that trivializes our thought processes. The easier our lives become, the more the inner freedom necessary to break out of our continually reinforced limitations wanes.

Better than the real thing?

As another example, consider how social media platforms such as Facebook mold their users. As media critic Siva Vaidhyanathan argues in his book Antisocial Media (Vaidhyanathan, 2018), Facebook is consciously designed to generate addictive behavior. The platform’s algorithms constantly push users into extremes of identity performance. Users gain friends and likes by manufacturing popular tribal identities in which the tribe is defined ever more narrowly. For instance, by embracing greater and greater ideological extremes in an alt-right grouping, one enhances one’s popularity ratings. It is an instance of what Guy Debord named “The Society of the Spectacle”—in this case, manufacturing a consumable identity that partakes in the spectacle. Facebook is designed to promote caricatures of our human potential, illusions we are conditioned to accept as if they were real persons. It is a stage for imitating value commitments and fulfilling life activities, one that frees us from actually having to live the values espoused in the virtual medium.

On a global level, the lords of the computing clouds are rapidly integrating humanity into a transhuman matrix. By pretending that machines can be conscious persons, they hope to enforce new goals enabled by artificial intelligence. Software is now being built with the assumption that it knows what we want better than we do ourselves, as AI proponents such as Yuval Harari incessantly proclaim. Rather than formulating the best path to achieve human-directed purposes, AI-generated algorithms are beginning to override the purposes of the mass of humanity with those of the few in dominating positions of corporate power. And most of us will gladly surrender to the software’s expectations of us because the rewards will come mainly to those who are the fastest to submit. Eventually we may lose the capacity to perform anything but what our software makes easy.

Turing’s Test for Humanity

Alan Turing, often considered the father of computer science, proposed a test of machine intelligence now known as the “Turing Test.” Its point is to determine whether a computer can generate conversational responses indistinguishable from those a human being might make. If an objective evaluator cannot distinguish the machine from the human, the machine is said to have passed the test. However, as Jaron Lanier suggested, “… the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?” (Lanier, 2011, loc. 635). In many ways, our software is training us to imperceptibly degrade our idea of what constitutes a conscious person. The capacities of software belong to a different order from what lives in the human soul, no matter how clever its speech-imitation algorithms might be.

Science without understanding?

In Silicon Valley, some even believe that machine learning can replace scientific understanding. In a recent article in Singularity Hub, Amar Vutha asks, “Could Machine Learning Mean the End of Understanding in Science?” (Vutha, 2018). Citing the example of AlphaZero, a machine-learning-based program that taught itself chess in about a day and then beat the world’s leading chess-playing programs, he speculates on the potential of machine learning to replace the need for scientific understanding. This rapidly evolving technique “… allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.” (LeCun et al., 2015). It has been used with spectacular success in the areas of speech and visual object recognition, drug discovery, and many other domains. Instead of understanding physical phenomena through mathematical theories, deep learning “… discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer.” (Ibid.). In other words, through the use of statistical methods based on large data sets, predictions can be made about how phenomena will behave—without the need for a scientific theory that actually explains their behavior.
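The mechanism LeCun and his co-authors describe can be made concrete in a toy sketch. The network below is not from the article or the Nature paper; the layer sizes, seed, and learning rate are my own arbitrary illustrative choices. It shows the two moves the quotation names: a forward pass in which each layer computes a new representation of its input, and a backward pass in which the error signal adjusts the internal parameters of each layer—prediction without any theory of the underlying phenomenon.

```python
import math
import random

random.seed(0)

# A minimal two-layer network trained by backpropagation on XOR.
H = 4     # hidden units (arbitrary choice)
LR = 1.0  # learning rate (arbitrary choice)

w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # each layer computes a representation of the previous layer's output
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def mean_squared_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_error = mean_squared_error()

for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # backward pass: propagate the error signal layer by layer
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= LR * dy * h[j]
            w1[j][0] -= LR * dh * x[0]
            w1[j][1] -= LR * dh * x[1]
            b1[j] -= LR * dh
        b2 -= LR * dy

final_error = mean_squared_error()
```

After training, the prediction error has fallen, yet nothing in the final weights constitutes an explanation of XOR that a human could read off; the “knowledge” is only a table of adjusted numbers, which is the author’s point in miniature.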

Making successful predictions has long been considered a primary goal of science. But throughout the scientific revolution, understanding the phenomena in question through mathematical modeling has been considered the most reliable means of achieving this goal. If backpropagation algorithms can deliver predictions as accurate as those based on mathematical models, what are the implications for the future of science? Many computer scientists now recognize that deep learning can provide accurate predictions even though the statistical machinery behind them cannot be reduced to a humanly conceivable theory of how the universe works. If this acceptance becomes widespread, science as the search for understanding might soon be considered a relic of biology-based computing. At that point, AI would become the Delphic oracle which dispenses indisputable knowledge on the basis of faith in statistical algorithms, compelling us bio-computers to accept the pronouncements of the data-driven gods. While this is likely to be an outlier fantasy for the time being, it epitomizes the worshipful attitude which is being cultivated toward Artificial Intelligence, while the potential of our own thought processes is neglected.

Ease or striving

One possible definition of the human is embodied in the words of the poet Rainer Maria Rilke,

People have (with the help of convention) found the solution of everything in ease and the easiest side of easy; but it is clear that we must hold to the difficult; everything living holds to it, everything in Nature grows and defends itself according to its own character and is an individual in its own right, strives to be so at any cost and against all opposition.

(Rilke, 2012, loc. 462)

By directly facing our human challenges without reaching immediately for a quick technical solution, we build new forces within ourselves. These are our untapped potentials, the human powers that technology can only imitate.

For most of us, the temptation to accept the AI vision will be overwhelming due to our desire for a life without labor, filled with all the pleasures of virtual reality, perhaps with an electronic form of clairvoyance thrown in as a bonus. But in the discovery of the difficult mission which has been given to us, a mission that implies a level of human dignity which few today are willing to embrace, we may forge a vision of humanity that will permanently awaken us to “the difference of man and the difference it makes” as Mortimer Adler put it so well.

The antidote to dataism

The code programmers write is executed by the binary logic gates that form the basis of modern computing, and it inherits the limits of that non-conscious logic. As Gopi Krishna Vijaya demonstrates in his paper Technology and the Laws of Thought, “This conversion of all logical statements into algebraic form is thus seen to remove everything that could not be quantified or mechanized and retain only that which could. It is not an extension, as Boole believed, but a reduction, or a filtration.” (Vijaya, 2015). Cyber-filtered reality represents a reduction of human faculties, not an updated version of our bio-computers. The detailed analysis behind this statement is laid out clearly in Vijaya’s paper.
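Boole’s “conversion of all logical statements into algebraic form” can be seen concretely in a few lines of Python. This sketch is illustrative only, not drawn from Vijaya’s paper: truth values become 0 and 1, the logical connectives become arithmetic, and every gate in a processor computes some composition of these operations.

```python
# Boolean logic reduced to algebra over {0, 1}:
# TRUE and FALSE become 1 and 0, connectives become arithmetic.

def NOT(x):    return 1 - x
def AND(x, y): return x * y
def OR(x, y):  return x + y - x * y
def XOR(x, y): return x + y - 2 * x * y

# Within this arithmetic, reasoning is exact and mechanical:
# De Morgan's law, for example, holds as an algebraic identity.
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
```

Everything a digital computer “thinks” is some composition of this arithmetic, which is why Vijaya calls the conversion a filtration: whatever in human thought cannot be expressed as operations over 0 and 1 is removed before the machine ever begins.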

A primary limitation of artificial intelligence is that it is not capable of “living thinking”, which Rudolf Steiner, the great Austrian epistemologist, described as follows:

In the world of living things, everything develops from within … Things that grow and wane develop from within, and so it is also in the case of living thinking.

Steiner, Materialism and the Task of Anthroposophy, Lecture 10, 1921

In other words, thinking evolves within us through the entire process of life, not through a logic which is often merely the anatomy of dead thoughts. The reason we can so easily believe that human thinking is just an inferior version of what goes on in integrated circuits is that we no longer experience the life within our own thinking. What happens in computers is truly lifeless, the result of precisely defined algorithms incapable of inner evolution. Since our thought has for the most part become a library of dead abstractions, turning it into something that can be manipulated by supercomputers is easily done. But the human mind which has awakened to itself is capable of far more than manipulating congealed forms from the past.

Will we be tricked into giving away this power? Steiner expresses the life of thinking as follows:

… One who really penetrates to the life within thinking will reach the insight that to experience existence merely in feeling or in will cannot in any way be compared with the inner richness, the inwardly-at-rest yet at the same time alive experience, of the life within thinking, and no longer will he say that the other could be ranked above this. It is just because of this richness, because of this inner fullness of living experience, that its reflection in the ordinary life of soul appears lifeless and abstract.

Steiner, The Philosophy of Spiritual Activity: The Factors of Life

The main message of this passage is that our ordinary experience of thought as lifeless and abstract is due precisely to the richness of its actual inner reality.

These observations demonstrate why thought seems so easy to computerize. Our thinking for the most part has become a series of abstractions ordered according to the rules of logic (if we’re lucky). But the logical operations of the human mind can never be carried out with the speed and accuracy of a digital computer. To the extent that we remain within the “Dead Zone” of dried-out thought-corpses, we might as well merge with our avatars, those perfectly uploaded cloud versions of ourselves promised by the transhumanists. If we accept the definition of the human mind as a poorly designed biologically-based computer, then the sooner we do this, the sooner we can get free from the stupidity and evil inherent in the inferior devices referred to as “human beings.”

But thinking can be the root of a spirituality deeper than either feeling or will can provide.

No other human soul-activity is so easily underestimated as thinking. Will and feeling warm the human soul even when experienced only in recollection. Thinking all too easily leaves the soul cold in recollection; the soul-life then appears to have dried out. But this is only the strong shadow cast by its warm luminous reality, which dives down into the phenomena of the world. This diving down is done by a power that flows within the thinking activity itself, the power of spiritual love.

Steiner, The Philosophy of Spiritual Activity: The Factors of Life

It is precisely this living thinking that would be submerged beneath the presumed superiority of artificial intelligence. But how can we experience the zest of life by emulating an embalmed thought-corpse?

Yet this is all that artificial intelligence can offer us. In Steiner’s analysis, our knowledge is built from two elements: perception and thinking. Perception provides us with the “Given,” the elements of the real. Our minds then operate on the “Given” through active thought, in what Steiner calls “living thinking.” The interactions of these two human capacities lead to new knowledge and the forms of artistic imagination. Translated into the computer, perception and thought are reduced to data and algorithms. In his book Humanity’s Last Stand, Nicanor Perlas (2018) presents an incisive analysis of how artificial intelligence fails to substitute for the living capacities of the human mind: “This Living Thinking or deep creativity of real humans has the power to learn from the future. Sophisticated AI cannot do this. It is the power that enables humans to make really new beginnings in freedom and love. In AI, on the other hand, the human being is trapped in the infinity of the ‘Given’ and the finished ‘thought’, the algorithm.” (Perlas, 2018, loc. 1442). Artificial intelligence is an end-product of a long history of intellectual exploration and creativity. While it could provide professional packaging for the finished artifacts of that history, those artifacts would remain icons of a life that had fled its makers.

The modus operandi of evil

Artificial intelligence proposes an enticing deal to humanity: instead of the hard labor of developing our spiritual capacities, we can have the technological equivalent of those powers right now. In place of living thought which provides access to spiritual realities, AI will give us the illusion of intuition through the combination of big data and advanced algorithms. This is the modus operandi of evil—to subvert the higher human capacities waiting to be born by providing a distorted substitute requiring no inner effort. AI lures us into fantastic worlds of power and pleasure and even tosses in spiritual visions for a nominal charge. All we need to do to live in the world of magic is slide our credit cards.

Data and algorithms can be randomly re-ordered into billions of new combinations at subsecond speeds, but they remain completely lifeless. Even the fastest supercomputer will always lack the essential element—the creative spirit in each of us which is called to expand beyond the limits of the “Given.” The Singularity prophesied by Ray Kurzweil (see Kurzweil, 2006) is based on a fundamental delusion—that the world we inhabit as physical human beings is simply a more primitive version of what artificial intelligence will soon make possible. The truth is the opposite—artificial intelligence offers only a tiny sliver of the colors that flow through the human rainbow.

The alternative to the Singularity is Steiner’s “living thinking” which is activated when we refuse to surrender to the habits instilled by software. Life begins with the discovery of the “I”, not the little ego which is so easily hacked as our mental traffic flows through nodes in the computing cloud. This small ego can be manipulated such that, “… given enough biometric data and enough computing power, it might be possible to hack love, hate, boredom and joy…” (Harari, 2017). Our higher self cannot be exploited in this way because it emerges from a core of identity untouched by the outer world. Instead of depending on computers to order our feelings and thoughts, we can embrace the heaven above the heavens, about which Plato wrote,

There abides the very being itself with which true knowledge is concerned; the colorless, formless, intangible essence, visible only to mind, the pilot of the soul. The divine intelligence, being nurtured upon mind and pure knowledge, and the intelligence of every soul which is capable of receiving the food proper to it, rejoices at beholding reality, and once more gazing upon truth, is replenished and made glad, until the revolution of the worlds brings her round again to the same place. In the revolution she beholds justice, and temperance, and knowledge absolute, not in the form of generation or of relation, which men call existence, but of knowledge absolute in existence absolute; and beholding the other true existences in like manner, and feasting upon them, she passes down into the interior of the heavens and returns home.

Plato, 2011, loc. 16368

Boyd R. Collins (boydcster@gmail.com) is a web application architect for a large multi-national corporation. He has been developing software for companies large and small for over 20 years. As an environmental activist, he contributed articles to the influential Dark Mountain Project (http://dark-mountain.net) in the U.K., “… a network of writers, artists and thinkers who have stopped believing the stories our civilization tells itself.” Feeling the need for a spiritual foundation for his ecological efforts, he began an intensive study of the works of Rudolf Steiner. In these works, he discovered an inspiring vision of spiritual evolution that provides a solid foundation for hope. He currently resides near Dallas, Texas with his family.

Works Cited

Harari, Y. N. (2017, 05 03). The Mozart in the Machine. From Bloomberg View: www.bloomberg.com/view/articles/2017-05-03/the-mozart-in-the-machine

Kurzweil, R. (2006). The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books.

Lanier, J. (2011). You Are Not a Gadget: A Manifesto. New York: Vintage Books.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444. From doi.org/10.1038/nature14539

Perlas, N. (2018). Humanity’s Last Stand. Forest Row, England: Temple Lodge.

Plato. (2011). The Complete Works of Plato, trans. Benjamin Jowett.

Steiner, R. (1918). The Philosophy of Spiritual Activity: The Factors of Life.

Steiner, R. (1921, 04 29). Materialism and the Task of Anthroposophy: Lecture 10.

Vaidhyanathan, S. (2018). Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. New York: Oxford University Press.

Vijaya, G. K. (2015). Technology and the Laws of Thought. From www.anthroposophy.org/articles

Vutha, A. (2018, 08 10). Could Machine Learning Mean the End of Understanding in Science? From Singularity Hub: singularityhub.com/2018/08/10/could-machine-learning-mean-the-end-of-understanding-in-science/
