Are we ready for the digital era?
PARTNERS' & EXTERNALS' PERSPECTIVE
Yannick Meneceur, Head of the Digital Development Unit, Council of Europe
The speed at which our society and our lifestyles are being transformed by digital technologies is unprecedented. Artificial intelligence ("AI") is certainly one of the main drivers of this transformation, at the heart of a growing number of services that already populate our daily lives. The term "AI", whose content has evolved substantially since it was coined in 1955, has enjoyed a revival since the early 2010s and now refers to the various machine learning algorithms (such as deep learning), whose results have proved particularly spectacular not only for image and sound recognition but also for natural language processing. For several decades, public decision-makers have been promoting the development of computer science and digital technologies, convinced by the promise of improving our lives, as well as by the prospect of ever more dizzying economic profits, now counted in the billions of euros. In the race to develop "AI" in particular, international and regional regulators such as the Council of Europe, the European Commission, the OECD and UNESCO are intervening in their respective fields of competence to try to channel this flurry of initiatives. For all of these regulators, the idea is to use the law both as a technique for supervising the use of these new tools, in order to prevent the significant risks they pose to a number of the most fundamental rights and values, and as an instrument for stimulating and developing the market. On reading the texts already drafted or being drafted, the future governance of this technology looks rather balanced between respect for human rights, economic objectives and ethical requirements, with intergovernmental organisations that seem to agree on the need to set up mechanisms for verifying "AI" before it is put into service or placed on the market.
As it stands, one could be satisfied with the progress made in such a short time, remembering that it took decades for other industrial fields, such as pharmaceuticals, to reach this kind of maturity. However, the capacity of some of these new legal instruments to effectively prevent violations of fundamental rights and create genuinely "trustworthy AI" remains questionable for some. And if we ask the more general question of our preparedness for the challenges of the digital era, we must admit that the current discourses echo one another with much the same critical arguments, creating a form of consensus that questions only the risks to the rights and freedoms of individuals, without ever questioning the relevance of the project of society that underlies these new technologies. A number of major challenges thus seem to have been significantly underestimated in recent years, and at least three series of actions should be taken to really prepare us for this new era: deconstructing the consensus on the neutrality of digital technologies and artificial intelligence (1); objectifying the capacities of artificial intelligence systems (2); evaluating the environmental sustainability of the digital society model (3).
1. Deconstructing the consensus on the neutrality of digital technologies and artificial intelligence
Today, it seems quite intuitive to believe that technology is neither good nor bad in itself, and that we should focus only on its uses. However, this view ignores the extremely close link between human societies and the technical system made up of all their artefacts: each major discovery has substantially reshaped our environment, sometimes extending its effects over several centuries. Thus the scope of the invention of printing went beyond the mere mechanization of the reproduction of works: the Reformation, the Enlightenment and access to knowledge in general were all events linked to this invention. The advent of industrial processes in the 19th century likewise profoundly recomposed relationships between individuals, as well as our living spaces and modes of governance. Pursuing this continuous dynamic of progress, we are now said to be in the midst of a fourth industrial revolution, at the meeting point between "the world of the physical, the digital, the biological and the innovation", whose tools already go beyond the mere sophistication of existing means and feed the hopes of the transhumanists. This is why the originality of the system composed by the interactions between humans and digital technologies requires an effort of deciphering, in order to grasp both its composition and its governmentality. The transformation we are currently undergoing, with the smallest corners of our lives translated into data for algorithmic processing, is not simply an optimization of our modes of operation. This transition is in fact leading us towards a whole new model of society, which certainly brings technical progress, but also its share of disenchantment, control and even totalitarianism.
And this is not just because of the way we use these tools, but because of the structure woven by the tangle of computer and statistical mechanisms that are supposed to be able to better appreciate, in all circumstances, an ever-increasing number of situations. Very concretely, the functioning of a company like Amazon already shows how the presence of algorithmic foremen transforms, by its very nature, working relationships. Fortunately, as we saw with contact-tracing applications during the health crisis, fundamental rights still protect us from many abuses. But the exercise of power over individuals, the biopolitics theorized by Michel Foucault, is today complemented by discreet mechanisms of algorithmic decision-making, increasingly autonomous, whose creation has no democratic basis and which would even sideline the political sphere itself.
2. Objectifying the capacities of algorithmic systems
But the rhetoric of our digital era, especially about "AI", also fails to address other equally important issues: the structural difficulties in the operation of algorithmic systems, which are constantly being fixed, patched and adapted. Many of us picture these machines in an abstract way, ordered and stored in the clean rooms of data centres, whereas we should rather retain the image of a very artisanal steam engine, cobbled together by craftsmen with many pipes and patches. The accumulation of these challenges is such that we should, again, look at the big picture rather than considering these issues in isolation. This would allow us to assess whether these devices are mature enough to leave the laboratory and operate in the real world, especially for decision-making functions in areas as sensitive as public services or health. To take just one example of these long-identified difficulties, consider the approximations of many "AI" systems due to spurious correlations or misinterpretations of causality. A model revealing, for example, that asthmatics have a lower risk than the rest of the population of developing serious lung disease should not lead us to conclude that they have developed some form of protection against pulmonary complications, but rather that, because of their fragility, they consult specialists more quickly. If this reasoning seems simple common sense, how can we be sure that, in the vast intricacy of the links discovered by a machine with thousands or millions of parameters, such confusions do not arise? Taken together, these difficulties amount to a very serious technical problem, one that should lead us to question the very project of generalizing machine learning as a viable solution for ever more diverse categories of applications, and even to reconsider the claim that ever more sophisticated learning methods will bring us closer to general artificial intelligence.
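The confounding at work in the asthma example can be made concrete with a few lines of simulation. The sketch below is purely illustrative (every probability is invented for the demonstration): asthmatics receive fast specialist care more often, and it is the speed of care, not asthma itself, that lowers the rate of complications. A naive comparison of rates nevertheless makes asthma look protective; stratifying by the confounder dissolves the apparent effect.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Generate synthetic patients: (asthma, fast_care, complication)."""
    rows = []
    for _ in range(n):
        asthma = random.random() < 0.1
        # Confounder: asthmatics consult specialists faster (illustrative assumption)
        fast_care = random.random() < (0.9 if asthma else 0.3)
        # True risk depends only on speed of care, never on asthma itself
        p_complication = 0.05 if fast_care else 0.20
        rows.append((asthma, fast_care, random.random() < p_complication))
    return rows

def rate(rows, pred):
    """Complication rate among patients matching the predicate."""
    selected = [comp for asthma, fast, comp in rows if pred(asthma, fast)]
    return sum(selected) / len(selected)

rows = simulate()

# Naive comparison: asthmatics appear "protected"
naive_asthma = rate(rows, lambda a, f: a)          # ~0.065
naive_other = rate(rows, lambda a, f: not a)       # ~0.155

# Stratified by the confounder, the apparent protection vanishes
strat_asthma_fast = rate(rows, lambda a, f: a and f)
strat_other_fast = rate(rows, lambda a, f: (not a) and f)
```

A model trained naively on such data would confidently learn that asthma lowers risk, which is exactly the kind of confusion the text describes, multiplied across thousands of learned parameters.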
As a critical reflex, what is most often invoked is a very artificial balance between hoped-for benefits and risks attributed to mere misuse of the technology. In other words, for lack of technological culture, and wrapped in the certainty that algorithms are neutral, we may be entertaining illusions forged by the marketing of "AI", while wrongly leaving any effort to measure technical problems to technicians alone.
3. Assessing the environmental sustainability of the digital society model
Deconstructing the consensus on the neutrality of technologies and objectifying the capacities of algorithmic systems are therefore two prerequisites that are all too often ignored in the construction of new forms of governance for the digital era, out of concern for not losing rank in the technology race. However, it is to be feared that another reality will impose itself with great brutality on all the competing blocs: the limits of our planet's resources, which have been largely overexploited for decades. Once the available rare earths are exhausted, how will we continue to produce the physical materials that support digital technologies? The semiconductor shortage resulting from the disruption of the pandemic is a clear warning signal of our current dependence and the fragility that results from it. Kate Crawford, a professor at New York University and a researcher at Microsoft, has illustrated in her Atlas of AI the profound impact of the development of this technology on our planet and the power issues that accompany it. Crawford began by physically visiting the places where lithium, essential for the batteries of mobile devices and electric cars, is extracted. The findings are overwhelming, and recall the consequences of the 19th-century gold rush, when vast areas, notably in the western United States, were rendered barren to enrich cities and individuals that remain prosperous today. The parallel with the current logic of the digital industry (massive extraction of minerals to build hardware, massive extraction of data to run algorithms, concentration of the wealth produced with very little "trickle-down", indifference to the damage caused) lets us perceive a model of pure and simple plundering that is absolutely not sustainable over time.
The author thus invites us to an obvious awareness: we are exhausting considerable quantities of the earth's materials to serve what amounts to a few seconds of geological time. Crawford thereby demonstrates with great acuity the very short-term vision of current public policies, all the more flagrant if one links her conclusions to those of the IPCC on the evolution of our climate. Our ever-increasing dependence on digital technology therefore has an exorbitant cost that is difficult to sustain. In response to these questions, or to those concerning the enormous energy consumption of data centres and blockchains, new technical solutions are often put forward that are supposed to balance the carbon footprint, but whose long-term effectiveness remains to be proven. Embracing the technical approach of engineer-entrepreneurs alone will not be enough to constitute a viable project for society, and it seems urgent to add a political dimension that asks what kind of world we want to live in. Is "all-digital" necessarily the right model? Should other projects not be put into competition and debated? Can the European continent, so rich in values, not contribute to raising awareness through ambitious public policies for the environment, instead of giving in to a harmful competition with other continents?