// Who Speaks?
// Artificial Intelligence, Language, and Democracy
import language;
var who = "artificial_intelligence" || "human";
var democracy = who.speaks(language);
democracy.exec();
A Voice in the Shadow
Jack Poulson in conversation with Justine Corrijn, Katie Pelikan, Taya Reshetnik and Elinor Salomon
Unboxing
Evelyn Austin in conversation with Esther van der Heijden, Pablo Perez, Sophie Czich and Tuana İnhan
Ezekiel Dixon-Román in conversation with Natalia Śliwińska, Marcin Liminowicz, Dario Di Paolantonio, Lance Laoyan and Jenny Konrad
Fenna Hup Deputy Director of Education
Royal Academy of Art, The Hague
In your hands lies the result of the special collaboration between the Royal Academy of Art (KABK) and the Analysis and Research Department (DAO) of the Dutch House of Representatives. In 2020, students of the Non Linear Narrative master’s programme, Camberwell College of Arts and Goldsmiths in London investigated the effects of artificial intelligence on our freedom of speech and our right to vote.
Freedom of speech and the right to vote are enshrined in the constitution, and language is of crucial importance to both. The rapid pace of digitisation, as well as technologies such as artificial intelligence and the artificial language they produce, has a major impact on the democratic process and on the fundamental rights of citizens. The Greek agora, where the democratic process once took shape through the spoken word, has made way for a worldwide network of countless media and users where language and images can fall prey to deepfakes and fake news. Technological developments have an impact on privacy rights, freedom rights, equality rights and the right to vote, and therefore on democracy as a whole. Who Speaks? is important and very topical, especially considering increasing polarisation. The students’ projects touch on topics such as robot journalism, speech recognition software, crowd monitoring and management, upload filters and fraud prevention platforms. The collaboration has led to the launch of the website and digital exhibition: whospeaks.eu.
We are also proud to present this publication, which provides a deeper understanding of the project. Finally, we would like to thank the Analysis and Research Department for their confidence in entering into this challenging project with us.
Fenna Hup
Adjunct-Directeur
Onderwijs
Koninklijke Academie van Beeldende Kunsten, Den Haag
Voor u ligt een resultaat van de bijzondere samenwerking tussen de Koninklijke Academie van Beeldende Kunsten (KABK) en de Dienst Analyse en Onderzoek (DAO) van de Tweede Kamer der StatenGeneraal. Studenten van de master Non Linear Narrative, Camberwell College of Arts en Goldsmiths in London bogen zich in 2020 over de vraag welke effecten kunstmatige intelligentie heeft op onze vrijheid van meningsuiting en op ons stemrecht.
Zowel vrijheid van meningsuiting als stemrecht zijn vastgelegd in de grondwet en voor beiden geldt dat taal van cruciaal belang is. Het hoge tempo van toenemende digitalisering en technologieen als kunstmatige intelligentie en daaruit voortkomende kunstmatige taal oefenen grote invloed uit op het democratisch proces en op grondrechten van burgers. De Griekse agora, waar het democratisch proces ooit gestalte kreeg door middel van gesproken woord, heeft plaats gemaakt voor een wereldwijd netwerk van ontelbare media en gebruikers waar taal en beeld ten prooi kunnen vallen aan deep fake en fake news. De technologische ontwikkelingen hebben impact op privacyrechten, vrijheidsrechten, gelijkheidsrechten en stemrecht van de burgers en daardoor op de democratie als geheel. Who Speaks? is, juist nu polarisatie toeneemt, belangrijk en zeer actueel. De projecten, raken thema’s als robotjournalistiek, spraakherkenningssoftware, crowd-monitoring en -management, upload filters en fraudepreventieplatformen. De samenwerking heeft geleid tot de lancering van de website en digitale expositie: whospeaks.eu.
Trots presenteren we tevens de voor u liggende publicatie, die een verdieping geeft van het samenwerkingsproject. Tot slot willen wij de Dienst Analyse en Onderzoek bedanken voor het vertrouwen in het samen met ons aangaan van dit uitdagende project.
Ramon Amaro, Sheena Calvert and Niels Schrader Queer Computing Consortium and project initiators
A thriving democracy is based on informed debate and involves a wide range of language-based interactions. In fact, the term parliament itself derives from the French word parler (to talk / speak). Deliberation and debate in both public and private spaces are at the core of both democratic processes and personal liberties. Voting is based on language, whether on the physical ballot, during the election campaign or by formulating voting and election laws. Language permits ideas to circulate freely, and is part of the very fabric of political processes.
Language itself, in the broadest terms, is a technology, and with the rise of self-automating text and voice-based systems, debating platforms powered by artificial intelligence, and language-based interfaces, how language is used as an arm of democracy is changing. Both the expansion and the abuse of free speech are in full swing as these technologies swiftly proliferate and gather momentum. This project asks what role human and non-human languages should play in shaping democracy as these technologies continue to develop.
Who Speaks? tackles major issues including politics as a rhetorical gesture, law as an act of speech and aesthetics as a language of imagination. It aims to bring together academic researchers, non-profit activists, and public services such as Bits of Freedom and the Analysis and Research Department (DAO) of the Dutch Parliament, for joint knowledge exchange
and discussion on AI-driven democracy, language and decision-making. DAO provides substantive support to parliamentary investigations and gives advice to the Dutch Members of Parliament on information relevant to the parliamentary process.
The project collaboration is organised in the form of a semester-long education programme in partnership with DAO that investigates artificial intelligence and its influence on democracy by means of language. The initiative addresses the very particular concerns of this individual stakeholder. Starting with the Dutch Constitution, students’ responses include exploring the evolution of, and changes to, freedom of speech, the ethics of the Dutch digital fraud prevention system SyRI, the implications of implementing language-based technologies in administrative processes, the effects of artificial intelligence (AI) on employment, and information overload around parliamentary news, amongst others.
Who Speaks? is part of a series of initiatives organised by the Queer Computing Consortium (QCC), co-founded by Ramon Amaro, Sheena Calvert and Niels Schrader, in collaboration with the Non Linear Narrative Master’s programme of the Royal Academy of Art, The Hague, UCL (University College London) History of Art Department and Camberwell College of Arts (CCW / University of the Arts London).
Ramon Amaro, Sheena Calvert and Niels Schrader Queer Computing Consortium en initiatiefnemers
Een bloeiende democratie is gebaseerd op een goed geïnformeerd debat en omvat een breed scala aan op taal gebaseerde interacties. De term parlement komt eigenlijk van het Franse woord parler (spreken). Overleg en debat in de openbare en private ruimte vormen de kern van zowel democratische processen als van persoonlijke vrijheden. Stemmen is gebaseerd op taal, hetzij op het fysieke stembiljet, tijdens de verkiezingscampagne of door het formuleren van stem- en kieswetten. Taal brengt ideeën tot uiting en maakt zodoende deel uit van het DNA van politieke processen. In de breedste zin van het woord is taal zelf ook een technologie, en de opkomst van geautomatiseerde op tekst en spraak gebaseerde systemen, debatplatforms aangedreven door nieuwe technologieën als kunstmatige intelligentie en op taal gebaseerde interfaces, veranderen hoe taal wordt gebruikt als een kracht van de democratie. Zowel de uitbreiding als het misbruik van de vrijheid van meningsuiting is in volle gang omdat deze technologieën snel groeien en momentum verzamelen. Aangezien deze technologieën zich zullen blijven ontwikkelen, kijkt dit project naar wat de rol van menselijke en niet-menselijke talen bij het vormgeven van democratie zou moeten zijn.
Who Speaks? behandelt belangrijke vraagstukken zoals politiek als retorisch gebaar, recht als een daad van spreken en esthetiek als een taal van de verbeelding. Het brengt academische onderzoekers, non-profit activisten en ambtenaren bijeen zoals Bits of Freedom
en de Dienst Analyse en Onderzoek (DAO) van de Tweede Kamer der Staten-Generaal ten behoeve van kennisuitwisseling en discussie over AI-gedreven democratie, taal en besluitvorming. De DAO ondersteunt de parlementaire onderzoeken inhoudelijk en adviseert Nederlandse Kamerleden over voor het parlementaire proces relevante informatie.
De samenwerking is opgezet in de vorm van een onderwijsproject van één semester dat in overleg met de DAO kunstmatige intelligentie en haar invloeden op de democratie door middel van taal onderzoekt. Het initiatief komt tegemoet aan de zorgen van deze stakeholder. Beginnend met de Nederlandse grondwet, gaan studenten onder meer in op de evolutie van en veranderingen in de vrijheid van meningsuiting, de ethiek van het Nederlandse systeem voor digitale fraudepreventie SyRI, de implicaties van het implementeren van op taal gebaseerde technologieën in administratieve processen, de effecten van kunstmatige intelligentie (AI) op de werkgelegenheid en de overdaad aan informatie rond bijvoorbeeld parlementair nieuws.
Who Speaks? maakt deel uit van een reeks initiatieven georganiseerd door het Queer Computing Consortium (QCC), mede opgericht door Ramon Amaro, Sheena Calvert en Niels Schrader, in samenwerking met de masteropleiding Non Linear Narrative van de Koninklijke Academie van Beeldende Kunsten, Den Haag, History of Art Department van UCL (University College London), en Camberwell College of Arts (CCW / University of the Arts London).
Screenshots from the online student exhibition at whospeaks.eu.
Schermafbeeldingen van de online studentententoonstelling op whospeaks.eu.
Jeroen Kerseboom Head of the Analysis and Research Department (DAO) at the Dutch Parliament
Democracy depends on language. Discussion, information exchange and debate in public and private spaces are at the heart of democratic processes and our personal freedoms. Even during elections, everything – including voting – is based on free speech. Language expresses ideas and is part of the DNA of political processes. In the meantime, technologies such as artificial intelligence and language-based interfaces are slowly entering the democratic space.
How does far-reaching digitisation affect our democracy and the language we speak in the democratic process? How do self-automating text and speech systems, debate platforms powered by artificial intelligence, and language-based interfaces influence the way we engage in democratic dialogue? And who actually conducts the conversation? Is it still a human, or is it artificial intelligence?
In the context of the influence of far-reaching digitisation on democracy, the Analysis and Research Department (DAO) and students of the Non Linear Narrative master’s programme at the Royal Academy of Art in The Hague (KABK) have designed a semester-long programme in which students research how artificial intelligence influences democracy through language.
This collaboration was mutually beneficial, but due to the coronavirus crisis it did not go as planned. The kick-off at the House of Representatives could still take place physically. There, students were able to get acquainted with the work of the Analysis and Research Department. But the planned symposium on human and non-human languages in shaping democracy, with experts from technology, philosophy, policy, politics, journalism and creative practice, had to be postponed. From mid-March, all lessons, as well as the interim presentations, took place online.
Due to the different approaches taken by the Non Linear Narrative students, this project produced a number of surprising results. In the project Travel Log: A Piece of Parliamentary Information, for example, Esther van der Heijden focuses on fake news and disinformation and visualises how information from the DAO reaches the public via the intermediary journalist.
With the projects on SyRI, the tax authorities’ digital fraud detection system, Natalia Śliwińska and Pablo Perez expose how artificial intelligence that detects fraud can clash with Article 10 of the Dutch Constitution (right to privacy) and Article 16 (criminal offences). And with her project about digital billboards in public space, Sophie Czich shows how digitisation and artificial intelligence enter our daily lives.
It is precisely this progressive digitisation, and its influence on and interaction with public values, that motivated the House of Representatives to set up the temporary committee Digital Future. Across the House, the need is also felt to get a better grip on developments in digitisation that have such a major impact on our social life and democracy. Researching, visualising and, quite literally, putting this influence into words is what makes this project of the DAO and the three academic institutions so valuable.
Jeroen Kerseboom
Hoofd Dienst Analyse en Onderzoek (DAO) bij de Tweede Kamer der Staten-Generaal
Democratie is afhankelijk van taal. Discussie, informatieuitwisseling en debat in openbare en niet-openbare ruimtes vormen de kern van de democratische processen en onze persoonlijke vrijheden. Ook tijdens verkiezingen is alles – inclusief stemmen –gebaseerd op vrijheid van meningsuiting. Taal drukt ideeën uit en maakt deel uit van het DNA van politieke processen. Ondertussen komen technologieën zoals kunstmatige intelligentie en op taal gebaseerde interfaces langzaam maar zeker de democratische ruimte binnen.
Maar welke invloed heeft de verregaande digitalisering op onze democratie en de taal die we spreken in het democratische proces?
Hoe beïnvloeden zelf-automatiserende tekst- en spraaksystemen, debatplatforms die worden aangedreven door kunstmatige intelligentie en op taal gebaseerde interfaces de manier waarop wij met elkaar het democratisch gesprek aangaan? En wie gaat eigenlijk dat gesprek aan? Is het nog een mens of is het artificiële intelligentie?
Over de invloed van de verregaande digitalisering op de democratie, hebben de Dienst Analyse en Onderzoek en de studenten van de master Non Linear Narrative aan de KABK in Den Haag een semesterlang studieprogramma vormgegeven waarin studenten onderzoek doen naar hoe kunstmatige intelligentie de democratie beïnvloedt door middel van taal.
Deze samenwerking was wederzijds voordelig maar liep door de coronacrisis wel anders dan gepland. De kick-off bij de Tweede
Kamer heeft nog fysiek plaatsgevonden. Daar konden studenten kennismaken met het werk van de Dienst Analyse en Onderzoek. Maar het geplande symposium over menselijke en niet-menselijke talen bij het vormgeven van democratie met experts uit de technologie, filosofie, beleid, politiek, journalistiek en creatieve praktijk moest helaas uitgesteld worden. En alle lessen vonden vanaf half maart online plaats, evenals de tussentijdse presentaties. Door de andere benaderingswijze van het onderwerp door studenten Non Linear Narrative, leverde dit project een aantal verrassende resultaten op. Zo heeft Esther van der Heijden in beeld gebracht hoe informatie van de DAO via de intermediaire journalist bij het publiek terecht komt in haar project Travel Log: A Piece of Parliamentary Information, waarbij zij zich vooral zorgen maakt over nepnieuws en disinformatie.
Met de projecten over SyRI, het digitale fraude opsporingssysteem van de belastingdienst, leggen Natalia Śliwińska en Pablo Perez bloot hoe artificiële intelligentie die fraude opspoort kan botsen met artikel 10 van de Nederlandse Grondwet (recht op privacy) en artikel 16 (strafbare feiten). En Sophie Czich laat met haar project over digitale billboards in de publieke ruimte zien hoe digitalisering en artificiële intelligentie ons dagelijks leven binnenkomt.
Juist die voortschrijdende digitalisering en de invloed en interactie met publieke waarden, waren reden voor de Tweede Kamer voor de oprichting van de tijdelijke commissie Digitale Toekomst. Kamerbreed wordt namelijk ook de noodzaak gevoeld om meer grip te krijgen op ontwikkelingen in de digitalisering die zulke grote invloed heeft op ons maatschappelijk leven en de democratie. Het onderzoeken, in beeld brengen, letterlijk, en in taal vatten van deze invloed, maakt dit project van de DAO en drie academische instellingen zo waardevol.
Sheena Calvert and Niels Schrader
The Old Democratic Space
According to philosopher and sociologist Jürgen Habermas, every healthy democracy requires an open, non-violent sphere for public debate. Here, the term sphere refers to a space or environment within which democracy takes place, that is, where it is enacted or enabled. The significance of the term space as Habermas intends it is not limited to physical space but includes conceptual space (a framework). Such a space can only be called democratic if everyone is equally involved in the decision-making processes, and if each member of the space has an equal voice in any debates. Alongside physical space and conceptual space, we have added linguistic space, and considered the primary role of language in forming and enabling any democracy. Physical, conceptual and linguistic spaces need to work together to allow a healthy democracy to flourish. In placing language at the centre of democratic processes, we return first to the notion of the public square and to the power of speech in public space as the cornerstone of a democracy. In this space, anyone can take the podium, address the town square, step onto the soapbox or (increasingly in our current times) log into their digital equivalents, to air their points of view and consider those of others.
Whether in the public square or online, between citizens or in parliament, knowledge and understanding are passed on by language, circulated by language and challenged by language, meaning language organises and makes possible a democracy. Language permits ideas to circulate, freely and — within the current technological context — increasingly fast, and continues to be part of the very DNA of political processes. Our project begins here, with the observation that language and voice matter within a democracy, and that who is speaking and / or being heard is of the utmost significance. Full freedom of speech and expression is constituted by what John Stuart Mill called “the marketplace of ideas” (1859)[1]. This is a place where more speech (both spoken and written), and not less, creates a democracy. Through the process of rigorous and free debate, all viewpoints are put forward, and jointly we achieve a healthy democracy, without censure. However, the means by which such rigorous debate would have been undertaken in Mill’s time have been increasingly challenged and disrupted in our own. That is to say, debate and its desired result — democracy — have moved from being performed solely by human agents, using speech, writing, print and analogue forms of dissemination, to modes of expression which are increasingly mediated by technology. In particular, such debate has moved towards those technologies which harness language, or, more properly, replications of it. So, what are the potential political ramifications of the migration of human language to speech and text-based technologies? This is the question with which the presented research concerns itself.
As Habermas reminds us:
Language in this understanding of communicative action presupposes language as the medium for reaching understanding, in the course of which participants, through relating to a world, reciprocally raise validity claims that can be accepted or contested…[2] These must be understood not as a tool to achieve what one wishes nor as a conveyer of preapproved cultural values nor as a reflection of personal expression, but rather as a medium of uncurtailed communication whereby speakers and hearers, out of the context of their
preinterpreted lifeworld, refer simultaneously to things in the objective, social, and subjective worlds in order to negotiate common definitions of the situation.[3]
In other words, according to Habermas, we should not use language as a communicative action simply to argue for what we already believe in (taking as a given what we set out to prove), but to open up a space for negotiation, and to work towards creating common understandings. This echoes Mill’s emphasis on rational debate as part of the democratic process. However, our claim is that the new digital public sphere has increasingly become an entertainment sphere that perpetuates a condition in which whoever shouts the loudest eventually wins the argument, moving away from rational debate as Mill and Habermas understand it.
The New Democratic Space
Democratic space has always been filled with three main types of interactions and agents: consumers interacting with industry, industry interacting with consumers, and governments regulating the market. However, in more recent times, the interactions between consumers and industry have moved into a digital environment, leaving behind governments and their regulatory interference. The new digital democratic space is largely focused on profit-making and is maintained and dominated by a few global technology companies. Moreover, the digital democratic space is not available to everyone. And that means not everyone has a voice. Participation requires both technology and access to it, which cannot be guaranteed to all.
As the Cambridge Analytica scandal shows, the manipulation of facts and intervention in political systems and processes are well documented. Increasingly, automated conversational agents intervene in personal political decision-making or determine what is perceivable to the user. It is clear that new perspectives on these matters will not come from big business, since these companies profit directly from such inventions and have no incentive to protect democratic processes from the negative effects of these technologies. There are some areas of resistance to such dominance. The current technological situation is based on weak (also known as narrow) artificial intelligence, which is task-specific and unable to generate speech without programmers initiating the process. The development of strong AI, which would possess the kind of general, flexible, multi-level intelligence of human beings, is far off. So far, there is no agency on the part of machines, but such technologies possess widespread distribution and many voices, and as such we can say that the “marketplace” thrives, despite its shortcomings. The promise that speech might be liberated from such constraints and gain a degree of agency independent of programmers is similarly a long-term prospect (although gathering momentum). The kind of agency associated with speaking subjects (human beings), taking account of their utterances while participating in democratic processes, is still very far away.
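To make the contrast concrete, the sketch below is a minimal, hypothetical example of what "narrow", task-specific language generation typically amounts to: a fixed template that turns structured data into sentences, and that produces nothing at all until a programmer explicitly invokes it. The data, names and function here are invented for illustration only and do not describe any particular system discussed in this project.

// A minimal sketch of "narrow" AI-style text generation: one fixed task,
// no agency of its own. The template only runs when a programmer calls it.
interface ElectionResult {
  party: string;
  seats: number;
  change: number; // seats gained (+) or lost (-) since the previous election
}

function reportSeats(result: ElectionResult): string {
  const direction =
    result.change > 0 ? `up ${result.change}` :
    result.change < 0 ? `down ${Math.abs(result.change)}` :
    "unchanged";
  return `${result.party} won ${result.seats} seats (${direction} from the previous election).`;
}

// Nothing is generated until a human initiates the process:
const results: ElectionResult[] = [
  { party: "Party A", seats: 34, change: 1 },
  { party: "Party B", seats: 20, change: -3 },
];
for (const r of results) {
  console.log(reportSeats(r));
}

Everything outside this single template (which topics exist, which facts are reported, what counts as newsworthy) remains a human decision, which is precisely the gap between such narrow systems and the strong AI described above.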
Mill’s argument that more free speech will continue to propel a democratic discourse may also fall apart considering the speed at which modern technologies operate. The increased velocity of such technologies takes away the opportunity to reflect upon or assess whether it is a human or a non-human agent speaking. While a thriving democracy involves the expression of a wide range of opinions and perspectives, non-human language is increasingly hegemonic, speaking the language of industry, including its clichés. Interrupting the flow of human dialogue in turn creates dissonance and disorientation across multiple platforms, leading not to understanding but to confusion. What kind of truth, and what kind of democracy, emerges from these exchanges?
To counter these developments and protect democracy, we have
taken the position that we need to provide our public institutions (and the public) with the means of understanding what is taking place with respect to these new digital spaces, and to offer them the ability to evaluate the distorting effects of new technologies. Public institutions such as parliament have the mandate to regulate and critically inspect the effects of these technologies but often lack the understanding necessary to challenge them. And this is where the present research aims to make a contribution.
Our claim is that we need to understand the current terminology and produce a fresh vocabulary, since the old terms are failing. If we do not act now to take on a mediating role and promote understanding, the fragile balance of power will shift towards the markets – ultimately demolishing the space for public debate which Mill and others so fiercely defended as being at the core of democracy. Academia needs to play a role in this urgent work, and the Who Speaks? project collaboration is an excellent example of this. Firstly, we need to see how technology, aided by industry, is intervening in democratic processes. Secondly, as a collaboration between three institutions of higher education, we take seriously the notion that academia is a place where freedom of speech is to be preserved, where we undertake rational debate, create literacy and hold industry to account.
Facilitating and recalibrating the balance of power in a democratic space requires knowledge and understanding, not just for some but for all. It requires technological and digital literacy on the part of all participants and a revisitation of the triadic space of public, industry and governmental involvement in democratic processes. With such debates being primarily articulated by means of language (human and non-human), we conclude that only a shared vocabulary understood by all disputants will allow for a space of discourse, leading to a democracy which thrives when it moves from non-digital to digital space, or from public square to digital platform.
We are entering a situation in which a full exchange of knowledge and understanding no longer takes place, since we become mired in opaque technological interfaces and / or systems which operate silently in the background of our interactions, manipulating facts, data and truth. When the process of democratic debate is transferred to Twitter, TikTok, or other artificial language systems, we as citizens lose our say. With this in mind, together with the next generation of designers we have created a glossary of words, since we believe true democratic space is the meeting point between the individual, the private sector and public service. This is a toolkit, a critical vocabulary for action and increased understanding. We believe that we need to find collective strategies and shared languages that empower our public institutions and future generations by reinforcing the relationships between language and democracy and assessing how technology distorts them. Only then can they become the custodians / guardians of our future democracy.
[1] John Stuart Mill, On Liberty (London: John W. Parker and Son, 1859).
[2] Jürgen Habermas, The Theory of Communicative Action, Vol. 1, transl. from German by T. McCarthy (Boston, MA: Beacon Press, 1984), p. 95.
[3] Ibid. p. 99.
Woman speaker at the Speakers’ Corner, Hyde Park, London, 1921.
Vrouwelijke spreker bij de Speakers’ Corner, Hyde Park, London, 1921.
Sheena Calvert en Niels Schrader
De Oude Democratische Ruimte
Volgens filosoof en socioloog Jürgen Habermas heeft elke gezonde democratie een open, geweldloze sfeer nodig voor het publieke debat. De term sfeer verwijst hier naar een ruimte of omgeving waarin democratie plaatsvindt; waarin het wordt uitgevoerd of mogelijk wordt gemaakt. De betekenis van de term ruimte, zoals Habermas bedoelt, is niet beperkt tot een fysieke ruimte, maar omvat een conceptuele ruimte (een raamwerk). Zo’n ruimte kan alleen democratisch worden genoemd als iedereen in gelijke mate betrokken is bij de besluitvormingsprocessen en als elk lid van de ruimte een gelijke stem heeft in elk debat.
Naast fysieke ruimte en conceptuele ruimte, hebben we taalkundige ruimte geplaatst om de primaire rol van taal bij het vormen en mogelijk maken van democratie weer te geven. Fysieke, conceptuele en taalkundige ruimtes moeten samenwerken om een gezonde democratie tot bloei te laten komen. Door taal een centrale plek in democratische processen te geven, keren we terug naar de idee van het openbare plein en naar de macht van spreken in de openbare ruimte als de hoeksteen van een democratie. In deze ruimte kan iedereen het podium betreden, het stadsplein toespreken, op de zeepkist stappen of (steeds meer in onze huidige tijd) inloggen op hun digitale equivalenten om hun standpunten naar voren brengen en die van anderen te overwegen.
Of het nu op een plein is of online, tussen burgers of in het parlement, kennis en begrip worden doorgegeven, verspreid en uitgedaagd door middel van taal, wat betekent dat taal een democratie organiseert en mogelijk maakt. Door middel van taal circuleren ideeën
vrij en – binnen de huidige technologische context – steeds sneller, en maakt daarom nog steeds deel uit van het DNA van politieke processen. Ons project begint hier, met de observatie dat taal en spreken ertoe doen binnen een democratie, en dat wie er spreekt en / of gehoord wordt van het grootste belang is.
Volledige vrijheid van meningsuiting en expressie wordt gevormd door wat John Stuart Mill “de marktplaats der ideeën” (1859) noemde.[1] Dit is een plek waar meer spraak (zowel gesproken als geschreven), en niet minder, een democratie creëert. Door het proces van rigoureus en vrij debat worden alle standpunten naar voren gebracht en samen bereiken we een gezonde democratie, zonder censuur. Echter, de manieren waarop en middelen waarmee een dergelijk rigoureus debat in Mills tijd zou zijn gevoerd, worden in onze eigen tijd in toenemende mate uitgedaagd en ontwricht. Dat wil zeggen, het debat en het gewenste resultaat – democratie – worden niet meer enkel door mensen uitgevoerd door middel van spraak, schrift, gedrukte teksten en analoge vormen van distributie, maar in toenemende mate worden verschillende vormen van expressie gemedieerd door technologie. Vooral is een dergelijk debat verschoven naar technologieën die taal gebruiken, of, beter gezegd, replicaties ervan. Dus, wat zijn de mogelijke politieke gevolgen van de verschuiving van menselijke taal naar spraak- en op tekstgebaseerde technologieën? Dit is de vraag waarmee dit onderzoek zich bezighoudt.
Zoals Habermas stelt:
Language in this understanding of communicative action presupposes language as the medium for reaching understanding, in the course for which participants, through relating to a world, reciprocally raise validity claims that can be accepted or contested…[2] These must be understood not as a tool to achieve what one wishes nor as a conveyer of preapproved cultural values nor
as a reflection of personal expression, but rather as a medium of uncurtailed communication whereby speakers and hearers, out of the context of their preinterpreted lifeworld, refer simultaneously to things in the objective, social, and subjective worlds in order to negotiate common definitions of the situation.[3]
Met andere woorden, volgens Habermas moeten we taal niet gebruiken als communicatieve actie, simpelweg om te pleiten voor waar we al in geloven (en dat wat we willen bewijzen als een gegeven nemen), maar om ruimte te creëren voor onderhandeling en toe te werken naar het creëren van gemeenschappelijke inzichten. Dit weerspiegelt Mills nadruk op rationeel debat als onderdeel van het democratische proces. Onze claim is echter dat de nieuwe digitale publieke sfeer in toenemende mate een entertainment-sfeer is geworden die een situatie in stand houdt waarin degene die het hardst schreeuwt uiteindelijk het argument wint, en hiermee verwijdert raakt van het rationele debat zoals Mill en Habermas dat bedoelen.
De democratische ruimte is altijd gevuld geweest met drie belangrijke soorten interacties en actoren: consumenten die interactie hebben met de industrie, de industrie die interactie heeft met consumenten en overheden die de markt reguleren. Tegenwoordig zijn de interacties tussen consumenten en de industrie echter verplaatst naar een digitale omgeving, waarbij regeringen en hun regelgevende inmenging achterwege wordt gelaten. De nieuwe digitale democratische ruimte is grotendeels gericht op het maken van winst en wordt onderhouden en gedomineerd door een paar wereldwijde technologiebedrijven. Bovendien is de digitale democratische ruimte niet voor iedereen beschikbaar. En dat betekent dat niet iedereen een stem heeft. Deelname vereist zowel technologie als toegang ertoe, wat niet voor iedereen kan worden gegarandeerd.
Zoals het Cambridge Analytica-schandaal aantoont, is de manipulatie van feiten en het ingrijpen in politieke systemen en processen goed gedocumenteerd. Steeds vaker oefenen geautomatiseerde gesprekspartners invloed uit op persoonlijke politieke besluitvorming of bepalen ze wat waarneembaar is voor de gebruiker. Nieuwe perspectieven hierop zullen niet van de grote bedrijven komen, aangezien zij direct profiteren van deze ontwikkelingen en geen baat hebben bij het beschermen van democratische processen tegen de negatieve effecten van deze technologieën.
Op enkele gebieden is er weerstand tegen een dergelijke dominantie.
De huidige technologische situatie is gebaseerd op zwakke (of smalle) kunstmatige intelligentie die taakspecifiek is en geen spraak kan genereren zonder dat programmeurs het proces starten. De ontwikkeling van sterke AI, die het soort algemene, flexibele, multilevel intelligentie van mensen bezit, is ver weg. Tot dusverre is er geen agency van de kant van machines, maar dergelijke technologieën hebben een brede verspreiding en veel stemmen, en als zodanig kunnen we zeggen dat de “markt” gedijt, ondanks zijn tekortkomingen.
De belofte dat spraak kan worden bevrijd van dergelijke beperkingen en een zekere mate van keuzevrijheid zal krijgen, onafhankelijk van programmeurs, is een evenzo langetermijndoel (hoewel het momentum wint). Het soort agency dat wordt geassocieerd met sprekende subjecten (mensen) en die rekening houden met hun uitingen terwijl ze deelnemen aan democratische processen, is nog ver weg.
Mills argument dat meer vrijheid van meningsuiting een democratisch discours zal blijven voortstuwen, houdt ook geen stand gezien de snelheid waarmee moderne technologieën werken. De toegenomen snelheid van dergelijke technologieën neemt de mogelijkheid weg om na te denken of te beoordelen of het een menselijke of niet-menselijke actor is die spreekt. Terwijl een bloeiende democratie een breed scala aan meningen en perspectieven voorstaat, wordt niet-menselijke taal steeds hegemonischer en spreekt het de taal van de industrie, inclusief de clichés ervan, wat niet leidt tot begrip
maar tot verwarring. Wat voor waarheid en wat voor democratie komt uit deze uitwisselingen voort?
Om deze ontwikkelingen tegen te gaan en de democratie te beschermen, moeten we onze openbare instellingen (en het publiek) voorzien van de middelen om te begrijpen en te evalueren wat er gebeurt met betrekking tot deze nieuwe digitale ruimtes en de verstorende effecten van nieuwe technologieën. Openbare instellingen zoals het parlement hebben het mandaat om de effecten van deze technologieën te reguleren en kritisch te inspecteren, maar missen vaak het nodige begrip om ze uit te dagen. En dit is waar het huidige onderzoek een bijdrage aan wil leveren.
Onze claim is dat we de huidige terminologie moeten begrijpen en een nieuw vocabulaire moeten produceren omdat de oude termen niet meer werken. Als we nu niet handelen door een bemiddelende rol op ons nemen en begrip te bevorderen, zal het fragiele machtsevenwicht verschuiven naar de markt – en uiteindelijk de ruimte voor openbaar debat die Mill en anderen zo fel verdedigden als zijnde de kern van de democratie vernietigen. De academische wereld moet een rol spelen in dit urgente werk en het samenwerkingsproject Who Speaks? is daar een uitstekend voorbeeld van. Ten eerste moeten we kijken hoe technologie, geholpen door de industrie, ingrijpt in democratische processen. Ten tweede plaatsen we, als samenwerking tussen drie instellingen voor hoger onderwijs, het idee voorop dat de academische wereld een plek is waar de vrijheid van meningsuiting moet worden behouden, waar we een rationeel debat voeren, geletterdheid creëren en de industrie ter verantwoording roepen.
Om het machtsevenwicht in een democratische ruimte te faciliteren en opnieuw vorm te geven, zijn kennis en begrip vereist; niet alleen voor sommigen, maar voor iedereen. Het vereist technologische en digitale geletterdheid van de kant van alle
deelnemers en een herziening van de driehoeksrelatie van publieke-, industriële- en overheidsbetrokkenheid bij democratische processen. Aangezien dergelijke debatten voornamelijk worden gearticuleerd door middel van taal (menselijk en niet-menselijk), concluderen we dat alleen een gedeeld vocabulaire dat door alle partijen wordt begrepen, ruimte biedt voor discours, en kan leiden tot een democratie die gedijt wanneer ze zich verplaatst van niet-digitale naar digitale ruimte, of van openbaar plein naar digitaal platform. We belanden in een situatie waarin er geen volledige uitwisseling van kennis en begrip plaatsvindt omdat we verstrikt raken in ondoorzichtige technologische interfaces en / of systemen die stilletjes opereren op de achtergrond van onze interacties en waarbij feiten, gegevens en waarheid worden gemanipuleerd. Wanneer het proces van democratisch debat wordt overgedragen aan Twitter, Tick-Tock of andere kunstmatige taalsystemen, verliezen wij als burgers onze zeggenschap. Omdat we geloven dat echte democratische ruimte het ontmoetingspunt is tussen het individu, de privésector en openbare diensten, hebben we samen met de volgende generatie ontwerpers een woordenlijst gemaakt. Dit is een toolkit, een kritisch vocabulaire voor actie en meer begrip. Wij zijn van mening dat we collectieve strategieën en gedeelde talen moeten vinden die onze openbare instellingen en toekomstige generaties kracht bij zetten door de relaties tussen taal en democratie te versterken en inzicht te krijgen in hoe technologie deze verstoort. Alleen dan kunnen zij de bewaarders / hoeders van onze toekomstige democratie worden.
[1] John Stuart Mill, On Liberty (London: John W. Parker and Son, 1859).
[2] Jürgen Habermas, The Theory of Communicative Action, Vol. 1, transl. from German by T. McCarthy (Boston, MA: Beacon Press, 1984), p. 95.
[3] Ibid. p. 99.
Mijke van der Drift Tutor Media Theory Lab
Statements about freedom of speech in contemporary politics aim to reinforce existing positions of power in a rather unsettling way. After all, how can one be against freedom of speech, and why would one be – especially after decades of activist work highlighting that marginalised voices go unheard, without impact, and unconsidered? Whether a speech act lands in a world of sense, meaning and impact seems to rest on a variety of factors. Let me first discuss the obvious ones, before I turn to some less obvious considerations. Speech has sense when one speaks about the world in a way that is already recognised, and this often comes together with holding positions in the world that are relatively uncontested. So, when one speaks as a white, straight, middle-class man who affirms the power of institutions over people, it sounds sensible, because that is what already makes sense in the world as it is. In contrast, we see Kick Out Zwarte Piet protesters speak from a different perspective. It took years of campaigning for the general public in the Netherlands to recognise what is a well-known social fact abroad: blackface is racist. The speech act of Kick Out Zwarte Piet protesters is not often included or defended when the right-wing fringe discusses free speech. This should evoke little surprise. Freedom of speech on the right means something different from the speech of people who have less social power. Freedom of speech (from here on referring to right-wing discourses) claims the freedom to commit speech acts without being bogged down by accountability, and especially not accountability to those one is not concerned with. It is interesting that freedom of speech has emerged as one of the fault lines in the recent culture wars, which shows how far the interests, knowledge, and ethics emerging from previously subjugated peoples have become mainstream. To demand access to speech acts without needing to be accountable for them marks a nostalgic demand for uncontested power and social privilege. To claim a loss of freedom to speak because one has to be accountable contrasts starkly with a right to be heard because one’s voice has been marginalised and one requires redress.
If speech is to be about something more than mere acts, it is certainly about accountability. Accountability for speech, for expression, and thus also for the media through which these can find form, has to do with a certain form of truthfulness.[1] Such truthfulness is not to be confused with a single and pure Truth that is shared by all in nostalgic visions of worlds in which each and all thought the same. Dan-el Padilla Peralta, the scholar of Classics, remarks that classicists often imagine themselves as liberal precisely because they dwell in such white spaces that their opinions and acts are rarely contested.[2] What we can learn from this is that it is easy to be tolerant when one is in power and has hegemony, as in classics departments, or philosophy departments for that matter – but the KABK and the Dutch Parliament are likewise not spaces where diversity of social positionalities is a striking feature of those who are most at home and take up space.
Truthfulness is about the attempt to do justice to the variety of people, living under forms of pressure but also privilege, who are affected by speech acts. Such truthfulness can only exist when there is collaboration and exchange between perspectives, and interest and knowledge shared between people, groups, and the perspectives within them. Imposing rules, opinions, or policy for that matter to secure existing hierarchies has little to do with free speech, but quite a lot with hiding the effects of speech acts under bravado, disinterest, or indifference. In some senses, such freedom-of-speech acts are there to intentionally harm social groups that do not have the means to dismiss faulty statements, prejudice, or duress.
When we look at AI as a framework through which both erased social knowledges and the speech acts that intend to erase and harm traverse, we find that AI is new ground for old contestations. As Safiya Noble explicates, search terms convey a wealth of knowledge about duress.[3] The internet, as much as it has been providing platforms on which to find voice, has equally been a platform in which duress got aggregated. Ezekiel Dixon-Román indeed underlines how prejudice and harmful categorisations are imposed using networking tools.[4] Such tools can include automatically written articles, but also programmes that are used for surveillance, as Lance Laoyan, Jenny Konrad, Natalia Śliwińska, Dario Di Paolantonio, and Marcin Liminowicz put forward in their text. Apart from AI in a direct sense, more recently Winifred Poster explicated how the online service industry rests on racist framings that determine social worth and attention.[5] These framings lean on social forces rather than being primarily created by AI. A speech act that is opposed by social structures is often a scream into the wind. It goes nowhere and is blown back into one’s face, occasionally followed by harmful repercussions. That is the difficult reality of trying to speak against duress versus freedom-of-speech advocates who prefer to avoid accountability: from a perspective of interrupted speech, accountability is inevitable; from a perspective of privilege, accountability is negligible. What role does cultural studies offer in these debates? Firstly, it makes worlds visible that are otherwise ignored, unaccounted for, or dismissed. Secondly, art and design have the power to convey complex information in an accessible way. This matters for enhancing the meaning that is hidden in worlds that are rarely sensed in the media. Design can support the sense, meaning, and impact of ignored voices. However, design not only liberates but can also be used to black-box features of software and networks. Sophie Czich, Esther van der Heijden, Pablo Perez, and Tuana İnhan write about how programming features are hidden under user interfaces and need to be brought to the surface. Here, parliamentary action is crucial in understanding how AI works and in keeping space for intervening in its operation through public scrutiny. Such interventions can be seen as part of the protection of free speech, because members of the public are protected from being boxed into projected categories or having their data extracted without their consent.
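As a hypothetical illustration of the black-boxing mentioned above (invented here for clarity, and not drawn from any system the students examined), the sketch below hides a weighted risk score and its threshold behind an interface that only ever shows a neutral label, so the person affected never sees which inputs drove the decision.

// Hypothetical sketch of black-boxing: the interface shows only a label,
// while the weighted scoring logic and its threshold stay hidden from view.
interface Applicant {
  latePayments: number;
  addressChanges: number;
  flaggedNeighbourhood: boolean; // an opaque, contestable input
}

// Hidden layer: weights and threshold are never surfaced to the user.
function riskScore(a: Applicant): number {
  return a.latePayments * 2 + a.addressChanges * 1.5 + (a.flaggedNeighbourhood ? 3 : 0);
}

// Visible layer: the user interface reduces the whole calculation to one phrase.
function uiLabel(a: Applicant): string {
  return riskScore(a) >= 5 ? "selected for review" : "no action";
}

console.log(uiLabel({ latePayments: 1, addressChanges: 2, flaggedNeighbourhood: true }));
// Prints "selected for review" without revealing why.

Bringing such features to the surface, in the sense argued for here, would mean exposing exactly these inputs, weights and thresholds to public scrutiny.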
Currently, there is a reliance on whistleblowers to draw attention to the undesirable workings of Big Tech. Elinor Salomon, Justine Corrijn, Taya Reshetnik, and Katie Pelikan discuss this issue with Jack Poulson and highlight alternatives to this dependency on whistleblowing. Such alternatives require attention from parliaments and outsiders to how tech is used, what it is used for, and who benefits from that usage. These considerations highlight the need for a broader understanding of technology in parliament, not only to protect who speaks, but also to ensure that people retain the freedom to listen, be accountable for their actions, and remain democratic in their functioning.
[1] Bernard Williams, Truth and Truthfulness (Princeton and Oxford: Princeton University Press, 2002).
[2] Rachel Poser, ‘He Wants to Save Classics From Whiteness. Can the Field Survive?’, New York Times Magazine (New York, 2 February 2021).
[3] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).
[4] Ezekiel Dixon-Román, ‘Algo-Ritmo: More-than-Human Performative Acts and the Racializing Assemblages of Algorithmic Architectures’, Cultural Studies? Critical Methodologies, 16.5 (2016), pp. 482–490.
[5] Winifred R. Poster, ‘Racialized Surveillance in the Digital Service Economy’, Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, ed. by Ruha Benjamin (Durham and London: Duke University Press, 2019), pp. 133–169.
Mijke van der Drift
Docent Media Theory Lab
In de hedendaagse politiek zijn uitspraken over de vrijheid van meningsuiting bedoeld om bestaande machtsposities op een nogal verontrustende manier te versterken. Want hoe kan men tegen de vrijheid van meningsuiting zijn, en waarom zou je dat zijn –vooral na decennia van activisme waarbij wordt benadrukt dat gemarginaliseerde stemmen niet worden gehoord en geen impact hebben.
Of een taalhandeling betekenis verkrijgt of impact heeft, lijkt af te hangen van een verscheidenheid aan factoren. Ik zal eerst de voor de hand liggende redenen bespreken, voordat ik inga op enkele minder voor de hand liggende redenen hiervoor. Een taalhandeling heeft betekenis wanneer er iets over de wereld wordt gezegd op een manier die reeds wordt erkend, en dit valt vaak samen met posities die relatief onomstreden zijn. Dus als een blanke, heteroseksuele man uit de middenklasse die de macht van instellingen over mensen bevestigt iets zegt, klinkt het logisch want dat is wat al logisch is in de wereld. In tegenstelling, zien we Kick Out Zwarte Piet-demonstranten vanuit een ander perspectief spreken. Het kostte jaren van campagne voeren om het grote publiek in Nederland te laten inzien wat in het buitenland een bekend feit is: blackface is racistisch. De taalhandeling van Kick Out Zwarte Piet-demonstranten wordt vaak niet meegenomen of verdedigd wanneer politiek rechts spreekt over de vrijheid van meningsuiting. Verrassend is dit niet. Vanuit rechts gezien, betekent vrijheid van meningsuiting iets anders dan de uitingen van mensen met minder sociale macht. Vrijheid van meningsuiting (vanaf hier verwijzend naar rechts discours) gaat over de
vrijheid om taalhandelingen te kunnen verrichten zonder ter verantwoording te worden geroepen, en al helemaal niet om deze af te leggen aan degenen om wie men zich niet bekommert. Het is interessant om te zien dat de vrijheid van meningsuiting een van de breuklijnen is geworden in de recente cultuuroorlogen, wat aantoont hoezeer de interesses, kennis en ethiek van voorheen ondergeschikte groepen mensen mainstream zijn geworden. Toegang tot het recht van spreken eisen zonder daarvoor verantwoording te hoeven afleggen, wijst op een nostalgische vraag naar onbetwiste macht en sociale privileges. Een verlies van de vrijheid om te spreken claimen omdat men verantwoording moet afleggen, staat in schril contrast met het recht om gehoord te worden omdat iemands stem is gemarginaliseerd en dit moet worden rechtgezet.
Als spraak over meer dan een handeling gaat, dan gaat het zeker om verantwoording. Verantwoording voor een uitspraak, een expressie, en dus ook voor de media waarin deze vorm krijgt, heeft te maken met een bepaalde mate van betrouwbaarheid.[1] Een dergelijke betrouwbare waarachtigheid moet niet worden verward met een enkele en pure waarheid die door iedereen wordt gedeeld in nostalgische visioenen over werelden waarin iedereen hetzelfde denkt. Dan-El Peralta, hoogleraar Klassieke Oudheid, merkt op dat classicisten zichzelf vaak als liberaal zien omdat ze zich in zulke witte ruimtes omgeven dat hun meningen en daden zelden worden betwist.[2] Wat we hieruit kunnen opmaken is dat het makkelijk is om tolerant te zijn wanneer je aan de macht bent en hegemonie hebt, zoals klassieke of filosofische afdelingen op universiteiten – maar ook de KABK en de Tweede Kamer zijn geen ruimtes waar diversiteit van sociale positionaliteiten een kenmerk is van degenen die er het meest thuis zijn en de meeste ruimte innemen.
Betrouwbaarheid in meningsuiting gaat over de poging recht te doen aan de verscheidenheid van mensen die leven onder vormen van onderdrukking – maar ook voorrechten – en die worden
benadeeld door taalhandelingen. Dergelijke waarachtigheid kan alleen bestaan als er sprake is van samenwerking en uitwisseling tussen mensen, groepen en perspectieven. Het opleggen van regels, meningen of beleid om de bestaande hiërarchieën veilig te stellen, heeft weinig te maken met de vrijheid van meningsuiting en des te meer met het verbergen van de effecten van taalgebruik onder bravoure, desinteresse of onverschilligheid. In sommige opzichten zijn de daden van vrijheid van meningsuiting bedoeld om opzettelijk schade toe te brengen aan sociale groepen die geen mogelijkheden hebben om foutieve uitspraken, vooroordelen of dwang tegen te spreken.
Als we AI zien als een raamwerk waar onderbelichte sociale vormen van kennis elkaar kruizen, maar ook de taalhandelingen die bedoeld zijn om te wissen en schade toe te brengen, zien we dat AI een nieuwe basis vormt voor oude controverses. Zoals Safiya Noble uitlegt, bieden zoektermen ons een schat aan kennis over dwang.[3] Het internet is, hoewel het ook een platform heeft gegeven aan diegene die een stem zochten, evenzeer een platform waarop dwang wordt geaggregeerd. Ezekiel Dixon-Román onderstreept hoe vooroordelen en schadelijke categorisaties worden opgelegd met behulp van netwerktools.[4] Dergelijke middelen kunnen automatisch geschreven artikelen zijn, maar ook programma's die voor surveillance worden gebruikt, zoals Lance Laoyan, Jenny Konrad, Natalia Śliwińska, Dario Di Paolantonio en Marcin Liminowicz in hun teksten aantonen. Afgezien van AI in directe zin, legde Winnifred Poster recentelijk uit hoe de online serviceindustrie is gebaseerd op racistische kaders die de sociale waarde en aandacht bepalen.[5] Deze kaders vertrouwen op sociale krachten in plaats van dat ze primair worden gecreëerd door AI.
Een taalhandeling die wordt tegengewerkt door sociale structuren is vaak een schreeuw in de wind. Het gaat nergens heen en wordt weer in het gezicht geblazen, zo nu en dan gevolgd door schadelijke gevolgen. Dat is de moeilijke realiteit
van het proberen tegen dwang in te gaan versus voorstanders van vrijheid van meningsuiting die liever geen verantwoording afleggen: vanuit het perspectief van onderbroken spraak is verantwoording onvermijdelijk, vanuit het perspectief van privilege is verantwoording te verwaarlozen. Welke rol spelen culturele studies in dit debat? Ten eerste maakt het werelden zichtbaar die anders worden genegeerd, verloren gaan of afgewezen worden. Ten tweede hebben kunst en design de kracht om complexe informatie op een toegankelijke manier over te brengen. Dit is belangrijk om de betekenis van werelden te versterken die verborgen blijven en zelden in de media worden waargenomen. Design kan het gevoel, de betekenis en impact van genegeerde stemmen ondersteunen. Design bevrijdt echter niet alleen maar kan ook worden gebruikt voor als black box voor software en netwerken. Sophie Czich, Esther van der Heijden, Pablo Perez en Tuana I · nhan schrijven over hoe programmeerfuncties die verborgen zijn onder gebruikersinterfaces ontsloten zouden moeten worden. Hier is parlementaire actie cruciaal om te begrijpen hoe AI werkt en om ruimte te bieden aan interventie door middel van openbaar toezicht. Dergelijke interventies kunnen worden gezien als onderdeel van de bescherming van de vrijheid van meningsuiting omdat mensen in de maatschappij worden beschermd tegen categorisatie en het extraheren van gegevens zonder hun toestemming. Momenteel wordt een beroep gedaan op klokkenluiders om de ongewenste werking van Big Tech onder de aandacht te brengen. Elinor Salomon, Justine Corrijn, Taya Reshetnik en Katie Pelikan bespreken dit probleem met Jack Poulson en wijzen op alternatieven voor deze afhankelijkheid van klokkenluiders. Alternatieven vereisen aandacht van parlementen en buitenstaanders voor de manier waarop technologie wordt gebruikt, waarvoor het wordt gebruikt en wie er baat heeft. Deze overwegingen onderstrepen de noodzaak van een breder begrip van technologie in het parlement, niet alleen diegene die spreken te beschermen maar ook om ervoor te zorgen
dat mensen de vrijheid behouden om te luisteren, verantwoording afleggen voor hun daden en democratisch blijven in hun functioneren.
[1] Bernard Williams, Truth and Truthfulness (Princeton and Oxford: Princeton University Press, 2002).
[2] Rachel Poser, ‘He Wants to Save Classics From Whiteness. Can the Field Survive?’, New York Times Magazine (New York, 2 February 2021).
[3] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).
[4] Ezekiel Dixon-Román, ‘Algo-Ritmo: More-than-Human Performative Acts and the Racializing Assemblages of Algorithmic Architectures’, Cultural Studies? Critical Methodologies, 16.5 (2016), pp. 482–490.
[5] Winifred R. Poster, ‘Racialized Surveillance in the Digital Service Economy’, Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, ed. by Ruha Benjamin (Durham and London: Duke University Press, 2019), pp. 133–169.
Kick Out Zwarte Piet (KOZP) protest in The Hague, 16 November 2019.
Kick Out Zwarte Piet (KOZP) protest in Den Haag op 16 november 2019.
This glossary offers markers in the discussion about Artificial Intelligence, Ethics, States, Democracy, Computation, and Technology. It should serve less as a comprehensive explanation of terms one might encounter, and more as a set of pointers to areas of interest in the contemporary discussions around AI. The terms are therefore markers for pressures around race, gender, technology, and the social systems that incorporate their workings. AI does not emerge from a pristine space, but rather in the midst of the messy political landscape that surrounds us day to day. AI is thus not a miracle cure that will save us from our faults and mistakes, but rather a confrontation with the pressures, injustices, and possibilities that have been embedded in the social environment for centuries. The terms in this glossary therefore serve as dots that need connecting. How these connections are made is not only a task for the institution of democracy, but also for critical participants, protestors, and liberators.
Provided in this glossary alongside the terminology is the anatomy of an icon representing each term, consisting of unique glyphs from the Fira Code typeface. Fira Code contains ligatures for common programming multi-character combinations, allowing faster reading and understanding for the human eye. While the building blocks of these icons are symbols made to communicate with machines, the final products are connotational visual codes for humans, allowing a message to be interpreted as efficiently as possible. These icons represent the tensions and relations between the ways machines and humans interpret language.
Anatomies of the glossary icons, assembled from Fira Code glyphs and annotated with component labels such as ‘filter optics’, ‘opposing vector information’, ‘collective overview data’, ‘imperialist crown’, ‘world wide web’, ‘EU flag’, ‘input prompt’, ‘steering wheel’ and ‘eye of surveillance’.
Project Debater is the first AI system that can debate with humans on complex topics. It was first presented at Think 2019 by Intelligence Squared US and IBM.
Project Debater is het eerste AI-systeem dat met mensen kan debatteren over complexe onderwerpen. Het werd voor het eerst gepresenteerd op Think 2019 door Intelligence Squared US en IBM.
Justine Corrijn, Katie Pelikan, Taya Reshetnik and Elinor Salomon
Student participants in conversation with Jack Poulson, Tech Inquiry, Toronto
Thanks to the internet, our words can now travel faster and reach more people. If the internet era obliterated our sense of scale, the tech industry has completely changed everything we used to know about free speech. The sprawling world of the internet is inevitably arranged, controlled, and monitored by a handful of private companies; the scale of their networks is one of the main reasons for this. Consequently, the interests of these companies prevail over the interests of the individual. What was once dreamed up as a place of connection and access now comes with the baggage of censorship and privacy violations. How can we make sure that the individual voice comes through and is not overshadowed by the dominant voice of the corporation? And who is in charge of holding the tech industry accountable?
The tech industry relies on ideas that underline the internet’s utopian intent: terms like global network, interconnectedness, freedom of information and freedom of speech. Brought to scale, however, these terms become less about the individual user. They start to represent an entity that is powerful, complex and ubiquitous. To an outsider, it seems that the tech industry is run by big companies, funded by big corporations, and managed by big organisations. These entities might be seen as a multifaceted system rather than as individual companies, severely limiting the singular voice. The complexity and sheer scale can be daunting.
We spoke with Jack Poulson, founder of the non-profit organisation Tech Inquiry and a former senior research scientist at Google,
to untangle the web of big tech companies, non-profits (NPs), government organisations (GOs) and non-government organisations (NGOs). Since his public resignation from Google as a reaction to the corporation’s development of Dragonfly, one of Poulson’s goals is to provide independent support for other whistleblowers. Scale and language are key to his practice. He intentionally keeps his organisation small, and emphasizes that there is no clear line between good and bad when it comes to Big Tech.
Although the internet age is fairly young, powerful organisations have always had a major influence on freedom of speech. Even if the scale of their networks was never as large as that of today’s tech organisations, technologies invented throughout human history have both enabled and restricted individual expression. One significant example is the invention of the printing press by Johannes Gutenberg. By making books accessible to a bigger audience, it increased literacy and allowed more voices to be heard. Unfortunately, it also triggered mass censorship by the Catholic Church.
In her video A Hidden History of Technologies, Justine Corrijn investigates the history of freedom of expression in the Netherlands, and how the development of different communication technologies over time has influenced the definition of freedom of expression. The questions explored in this historical retelling demonstrate how we can become aware of the different processes that enforce Big Tech’s power while simultaneously shaping the interactions between these corporations. We discussed these systems of power with Poulson, whose research involves combing through massive amounts of public-record reports and mapping out relations between organisations. Hours of tiresome work pay off when he finds unexpected connections and conflicts of interest. He comments: “When you look through boring things, you’ll find a lot of public information you would expect to be secret.”
Despite the vast amount of online information that is accessible to the public, tech companies rely on layers of complexity to mask their intentions. Both the neutral helpfulness of home tech (such as smart speakers) and impossibly long user agreements allow companies to extract information from users that could potentially be profitable. In her work Accent Intelligence, Katie Pelikan investigates the dangerous implications of smart speakers collecting information from user voices with non-native accents. Voice-activated systems such as Amazon Alexa or Google Home, for example, include machine learning that identifies elements of a user’s voice that provide data on language and ethnic origin. National security creates a market demand for pinning users to a geographical origin, and tech companies are eager to comply. Now, the most pressing concern is how to maintain individual protection while still engaging with the internet. Policy changes often touch upon the issues of monitoring and protecting individuals, including end-user licences and distrust towards applications. But tracking these policy changes requires an exhausting level of awareness. There are no communication channels that promise full data security, and it can be paralysing for individuals trying to discern what information is safe to share. In her project I Can Know You in a Blink of an Eye, Elinor Salomon investigates the use of a patient’s medical data and how an individual is positioned in the healthcare system as a set of relations that cannot be narrowed down to the data it produces. One of the main challenges in protecting individuals is the obscurity of how data is collected and anonymized. Regulation also comes too late, with protection protocols being installed in reaction to technological novelties. It can seem an impossible task to reconcile the individual freedom that the internet provides with the censorship and monitoring of its giant guardians. When talking about Tech Inquiry, Poulson reflects that he no longer thinks non-profits are the best method for keeping tech companies accountable. Instead of trying to be “the space” for whistleblower protection, he suggests more back-channel approaches and less non-profit thinking. He contends that the least technical solutions for protecting individuals are both the most effective and the least attractive to the tech industry – simply not keeping a copy of user data might be one of the safest ways to handle sensitive material, or areas where the blurring borders of privacy have not yet been defined. According to Jack Poulson, when private companies start playing a bigger role in handling private information, conflicts of interest are unavoidable. In order to protect democracy, it is important to take into consideration what kind of solutions are being offered by independent thinkers on accountability policies. Furthermore, support for small-scale non-profit organisations is needed. This can provide a transparent and fair assessment of the complex relationship between the individual and the makers of the technologies. Only then can we lift up the voices that need to be heard.
Esther van der Heijden, Pablo Perez, Sophie Czich and Tuana İnhan
Student participants in conversation with Evelyn Austin, Bits of Freedom, Amsterdam
Our everyday lives are interwoven with the digital realm, in which sophisticated technologies and information processes are masked by smooth, flat, and user-friendly interfaces. This renders technologies invisible and incomprehensible, a condition often associated with the frequently used term black box. Generally, a black box refers to complex systems or devices that take in great amounts of data and draw conclusions without ever disclosing their internal workings. A black box is obscure by nature, which allows companies to avoid accountability for the output of their data processing. In addition to pervading many aspects of individual life, these structures are progressively becoming part of the tools used by public institutions and multinational corporations.
Governments, for instance, are drawn to specific software that drastically raises the efficiency of data processing. Yet public services do not have the knowledge and expertise to create those systems, so they find themselves largely dependent on private companies for this task. As private companies want to protect themselves against competitors, they usually do not disclose the design of their algorithms or the systems’ inner workings. Public organisations are thus unable to commission experts who could grasp the black boxes’ complexity, and therefore cannot access and review the functioning of these systems. This lack of transparency and accountability turns black boxes into undemocratic spaces.
The case of SyRI is a striking example. SyRI is an automated risk assessment software developed by Dutch authorities to detect welfare
fraud. Although The Hague District Court ruled against its use for being in breach of the European Convention on Human Rights, the actual functioning of the algorithm was never revealed in court. In addition to the racial and class biases found in the software itself, SyRI is a revealing example of a government using available data to start investigations, rather than the other way around. Because individuals in precarious situations need social welfare the most, public authorities hold a larger quantity of data on them. Consequently, they are more likely to end up in these algorithms than other segments of the population. When statistics are confused with predictions of the future, we merely reproduce the – biased – status quo.
Evelyn Austin, the director of the Dutch organisation Bits of Freedom, focuses on issues around digital rights, online privacy and freedom. Austin highlights that one of the current problems faced by Dutch democracy is the fundamental lack of expert knowledge in government. Despite recent scandals that have shown how deeply bias is entangled in technology, such as those explored above, ICT experts are scarce on the electoral lists of 2021. Moreover, they are usually placed low on these lists, making it unlikely that they will reach parliament. This shows a lack of priority and a disregard of the systemic issues embedded in technology, particularly in the field of Information and Communication Technologies and Artificial Intelligence.
If the black-boxing of systems is a risk for the exercise of democracy in the public realm, it is also increasingly problematic for the freedom and protection of users and consumers. The private sector, with big tech companies in the lead, has developed an economic model based on data extraction and exploitation, in which collected user data is used to target, predict and influence online behaviour. In doing so, private companies hardly address what is at stake in these processes, such as informed user consent, non-invasive user interfaces, or freedom of information. These platforms’ economic logic and strategy are supported by the design of the tools themselves. For example, users consent by default to the exploitation of their private data without the proper ability and understanding to fully agree. Even though the option to opt out seems to be available, the interface and design choices make it almost unreachable.
Going further, the same approach of smoothed and simplified design is also used for smart objects and technology development in cities – commonly referred to as “the smart city”. One noticeable difference remains: although online consent is blurred and manipulated, these platforms are still opt-in, meaning that whoever uses them does so more or less consciously. The Internet of Things and the smart city are, on the contrary, opt-out systems. The city-dweller is in the street by default, whether there is an extra layer of technology or not. The invisibilisation of technology therefore has to go a step further: it must be made as discreet as possible so that people do not start questioning it. For instance, a new network of CCTV cameras would likely be noticed and publicly discussed, but the cameras embedded in smart objects such as automatic doorbells are not. They are so discreet that their presence goes unnoticed, as does their bypassing of regulations.
Facing the biases and issues generated by black boxes, there is an urgent need for more accountability from both tech companies and the government. Open-source systems could offer – if not a way out – at least the possibility to question, control and hold accountable the systems currently at work. However, structural biases do not depend on technologies but on values and norms pushed by a political agenda. Thus, although open source is a helpful tool, it is not a solution on its own. The examination of technologies’ current uses reveals that, whether for citizens’ or consumers’ rights, black-box algorithms are not fostering democratic practices. The opacity of black boxes makes it impossible to scrutinise their algorithms, or even to consent to their functioning. Ultimately, transparency is key to ensuring a democratic use of technology, and a first step could be the development of public expertise, together with regulatory systems for the private sector.
Natalia Śliwińska, Marcin Liminowicz, Dario Di Paolantonio, Lance Laoyan and Jenny Konrad
Student participants in conversation with Ezekiel Dixon-Román, University of Pennsylvania
The Crime Anticipation System (CAS) and Systeem Risico Indicatie (SyRI), both implemented in the Netherlands, are two examples of technologies created to solve crime-related problems. Although the discrimination produced by the CAS system has been broadly acknowledged, it is still in use. SyRI, on the other hand, was proven to racially profile and target certain groups of citizens. Without the necessary understanding of how such technologies work and operate, taking them as reliable tools can eventually do more harm than good to our society.
As the scale and scope of computation and automated governance gradually take on more dominant roles in our society, more clarity about the workings of algorithms is needed. In our conversation with Ezekiel Dixon-Román, the problem of the supposed neutrality of technology emerged as a distinct issue. Dixon-Román is director of the Master of Science in Social Policy and Practice at the University of Pennsylvania. As a specialist in quantitative research, he draws on this expertise to question the legitimacy of current machine learning algorithms. He looks critically at how algorithms are positioned and shaped, and how they gradually take on more decisive roles in our society.
Technologies emerge from socio-cultural relations, and their functionality reflects problematic social dynamics; the data used in computation is neither objective nor neutral. It is not possible to solve social problems with technology alone. When this is not acknowledged, technology can lead to a significant increase in marginalisation: it has a disproportionate negative impact on disadvantaged groups and amplifies social and economic inequalities.
Quantification and measurement have always been part of governmentality. This concept, coined by the French philosopher Michel Foucault, refers to the sets of practices put in place by modern states to govern a population. The bureaucratic apparatus has always taken on the role of gathering knowledge and information about societal subjects, yet nowadays this process can become opaque through automation and computation. Machine learning algorithms organise and manage categories such as race, gender, sexuality, ability and citizenship of individuals. As a stock of samples, the population is made up of “dividuals”, as Gilles Deleuze points out: endlessly divisible and reducible to data representations.
Algorithms are fed data in order to provide possible future scenarios, which often do not correspond with the complexity of the real world. Their training typically draws on historical and usually biased data, arranged along parameters that inherit socio-cultural structures and inequalities.
Projecting history upon the present, the algorithm finds what it expects to find rather than what is objectively there. It then produces speculative projections about what might potentially occur, without relying on empirical facts. More than predicting reality, it shapes it in constant reiteration.
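This reiterative dynamic can be made concrete with a small simulation. The sketch below is a deliberately simplified illustration with invented numbers, not a reconstruction of CAS, SyRI or any real system: two districts have the same underlying offence rate, but patrols are allocated in proportion to previously recorded incidents, so the district that enters the loop with more historical records keeps accumulating more of them.

```python
import random

random.seed(0)

# Illustrative assumption: two districts with the SAME true offence rate,
# but district A enters the system with more historically recorded incidents.
true_rate = 0.05                                     # offences per resident per round
recorded = {"district_A": 120, "district_B": 40}     # biased historical records
population = 1000
total_patrols = 20

for _ in range(10):
    total_records = sum(recorded.values())
    for district in recorded:
        # Patrols are allocated in proportion to past records
        # (the "projection of history upon the present").
        patrols = round(total_patrols * recorded[district] / total_records)
        # Offences occur at the same true rate everywhere...
        offences = sum(random.random() < true_rate for _ in range(population))
        # ...but only a fraction is observed, and observation grows with patrols.
        detection_rate = min(1.0, 0.1 + 0.04 * patrols)
        recorded[district] += round(offences * detection_rate)

print(recorded)  # district_A ends up with far more records, despite identical behaviour
```

The point is not the figures but the loop: because each round’s output becomes the next round’s input, the system confirms its own expectation.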
Algorithms are thus increasingly controlling and shaping citizens by arranging them in structural categories. They are able to determine who gets access to social and child welfare, loans or scholarships. Algorithms define risk levels in predictive policing, presupposing potential threats and deciding whether stricter surveillance over neighbourhoods is necessary. Dixon-Román emphasises how the problem lies in a mechanism driven by self-causation rather than actual prediction. This is also known as pre-emptive logic, which takes into account not only what is measured but how it is measured.
There is an ongoing controversy over which categories of data were used by the Dutch police in their Crime Anticipation System software. Operating on pre-emptive logic, it is just another example of inequality by data, targeting more vulnerable citizens without changing or addressing the root causes of the underlying social issues. Researchers in the Netherlands have found that pre-existing ideas and biases concerning suspects and crimes are reproduced and amplified in the information and system of CAS. To develop such a system means to design it on the basis of particular parameters, including social, political and cultural values. It requires defining, through assumptions, what one is trying to capture. And it has to differentiate which attributes are, and which are not, of interest as features in the algorithm to capture the subject in question.
This is why we always need to question the interests, the function and the context in which certain technologies emerge, what assumptions they make and what aims they pursue. If fair treatment of citizens is a priority, the practices of governance and their technological apparatuses that might exacerbate inequalities have to be challenged. However, this does not mean we have to go back to pre-modern times: as Ezekiel Dixon-Román tells us, “the train has already left the station”. Instead of entirely rejecting technology, we have to reshape our pre-existing knowledge of computation so that we work with it rather than for it, rethinking the way we think about computation and how we handle data in a democratic system.
There is a potential in the use of algorithms that can be opened up only after realising how deeply automated processes are intertwined with socio-cultural presuppositions. As Dixon-Román concludes, “the logic is not simply the algorithm, the logic is an assemblage of the techno-social system”. To go beyond the flaws of present technologies, we do not need better technology or more data. We cannot change the way computation operates without transforming socio-political relations more broadly.
Justine Corrijn Royal Academy of Art (KABK)
HD video, 5:33 min
In her work A Hidden History of Technologies, Justine Corrijn investigates how the development of communication technologies has influenced the definition, implementation and understanding of freedom of expression. The video concludes in the present with a reflection on the legal debate about the newly adopted Article 17 of the European Directive on Copyright and its enforcement by means of automated upload filters. Corrijn asks whether filtering technologies could be the solution to online copyright infringement, and at what expense this would be achieved.
In het werk A Hidden History of Technologies onderzoekt Justine Corrijn hoe de ontwikkeling van communicatietechnologieën de definitie, toepassing en het begrip van vrijheid van meningsuiting heeft beïnvloed. De video eindigt in het heden met een reflectie op het juridische debat over het recent aangenomen artikel 17 van de Europese richtlijn inzake auteursrecht en de handhaving daarvan door middel van geautomatiseerde uploadfilters. Corrijn vraagt zich af of filtertechnologieën de oplossing kunnen zijn tegen online inbreuk op auteursrecht, en tegen welke prijs?
Film still from ‘A Hidden History of Technologies’ displaying Article 7 of the Dutch Constitution in the top left corner and a quote from UN Secretary General Antonio Guterres in the bottom right corner.
Filmstills uit ‘A Hidden History of Technologies’ met artikel 7 van de Nederlandse Grondwet in de linkerbovenhoek en een citaat van VN-secretaris-generaal Antonio Guterres in de rechter benedenhoek.
Sophie Czich Royal Academy of Art (KABK)
HD video, 4:26 min
In Are You Looking?, Sophie Czich addresses the monetisation of public space through smart technologies. In particular, she looks at the data collection policies of the multinational advertising corporation JCDecaux and their privacy implications and infringements. According to Czich, these multi-sensory technologies not only violate basic human rights of individual citizens but also reinforce existing bias and discrimination.
In Are You Looking? kijkt Sophie Czich naar het te gelde maken van de openbare ruimte via smart technologies. In het bijzonder kijkt ze naar het beleid van multinationaal reclamebedrijf
JCDecaux met betrekking tot het verzamelen van gegevens en hun privacy-implicaties en inbreuken. Volgens Czich schenden deze multisensoriële technologieën niet alleen de fundamentele rechten van de mens, maar werken ze ook bestaande vooroordelen en discriminatie in de hand.
Film stills from ‘Are You Looking?’ zooming out from a digital billboard to an entire cityscape.
Filmstills uit ‘Are You Looking?’ uitzoomend van een digitale mupi naar een steeds groter stadsgebied.
Esther van der Heijden Royal Academy of Art (KABK)
Travel Log: A Piece of Parliamentary Information by Esther van der Heijden raises concerns about the increasing dis- and misinformation in the diffusion of parliamentary information. She questions the roles and responsibilities involved in searching, analysing and presenting this information. Her multi-screen website consists of four perspectives, each representing its own layer of knowledge production. Travel Log: A Piece of Parliamentary Information concludes with a plea for critical digital literacy as an educational response to fake news and online hate speech.
Met Travel Log: A Piece of Parliamentary Information uit Esther van der Heijden haar zorgen met betrekking tot de toenemende des- en misinformatie in de verspreiding van parlementaire informatie. Ze bevraagt rollen en verantwoordelijkheden bij het zoeken, analyseren en presenteren van deze informatie. Haar multi-screen website bestaat uit vier perspectieven die elk één laag kennisproductie vertegenwoordigen. Travel Log: A Piece of Parliamentary Information eindigt met een pleidooi voor verbeterde en kritische digitale geletterdheid als educatieve reactie op nep-nieuws en haatdragend taalgebruik online.
Four different information perspectives distributed through specific channels from the Dutch Parliament before reaching the public.
Vier verschillende perspectieven op informatie verspreid via specifieke kanalen van de Tweede Kamer voordat deze het publiek bereiken.
Tuana İnhan
Royal Academy of Art (KABK)
Tuana İnhan’s website Failed Prototypes of Democracy uses animated models of democracy to illustrate the unavoidable, systemic failure of a machine-driven political apparatus. She reflects on the importance of counselling when constituting a framework of shared legal values, joint cultural identities and collective decision making. As such, İnhan’s work refers to Jürgen Habermas’ statement that every healthy democracy requires an open, non-violent sphere for public debate.
Tuana İnhans website Failed Prototypes of Democracy gebruikt geanimeerde democratische modellen om het onvermijdelijke, systemische falen van een door machines gestuurd politiek apparaat te illustreren. Het werk gaat in op het belang van counseling bij het vormen van een kader van gedeelde juridische waarden, gezamenlijke culturele identiteiten en collectieve besluitvorming. Hiermee verwijst İnhans werk naar het statement van Jürgen Habermas waarin hij stelt dat een open, vreedzame ruimte voor publiek debat een vereiste is voor een gezonde democratie.
Diagrams visualising concepts of fluid, direct and representative democracy as an exploded view.
Ploftekeningen van prototypes voor de fluïde, directe en representatieve democratie.
Callum John Camberwell College of Arts (CCW / UAL)
HD video, 4:29 min
In his work I AM A FLAG, Callum John appropriates the flag as a universally recognised symbol, developing a new set of flag designs using a GAN (Generative Adversarial Network) which detects and eradicates anomalous data. The GAN was trained on a dataset of 193 national flags belonging to member states of the United Nations, allowing John to generate unique images composed of the data of those flags. Featuring a monologue narrated from the perspective of the GAN, I AM A FLAG applies language as a framework to draw parallels between processes of AI-driven discrimination and the exclusion of non-citizens from the nation-state. In a world where citizenship and national identity are increasingly regulated by algorithms and data usage, John’s work questions the dangers and limitations of AI-mediated migration processes, as well as the compromised ideals of freedom and equality upon which democracy is based.
In zijn werk I AM A FLAG gebruikt Callum John het universeel erkend symbool van de vlag om een reeks nieuwe vlagontwerpen te ontwikkelen met behulp van een GAN (Generative Adversarial Network) dat afwijkende gegevens detecteert en verwijdert. De GAN is getraind op een dataset van 193 vlaggen van lidstaten van de Verenigde Naties, waardoor John unieke afbeeldingen kan genereren die de gegevens van die vlaggen bevatten. Door middel van een monoloog verteld vanuit het perspectief van de GAN, gebruikt I AM A FLAG taal als raamwerk om parallellen te trekken tussen processen van AI-gedreven discriminatie en de uitsluiting van vreemdelingen van de natiestaat. In een wereld waar burgerschap en nationale identiteit in toenemende mate worden gereguleerd door algoritmen en datagebruik, stelt het werk van John de gevaren en beperkingen van AI-gemedieerde migratieprocessen ter discussie, evenals de gecompromitteerde idealen van vrijheid en gelijkheid waarop democratie is gebaseerd.
Film still from ‘I AM A FLAG’.
Jenny Konrad Royal Academy of Art (KABK)
In her videos, Democratizing Algorithmic Relevance, Jenny Konrad investigates how algorithms are used on Instagram to deliver targeted advertisements. She highlights inherent algorithmic bias related to race and gender assumptions. Konrad advocates for improved and democratic control of social media adverts in the future.
In haar video’s, Democratizing Algorithmic Relevance, onderzoekt Jenny Konrad hoe algoritmen op Instagram worden gebruikt om gerichte advertenties te leveren. Ze belicht hierbij de inherente algoritmische vooroordelen met betrekking tot ras en gender. Konrad pleit voor een betere en democratische controle van advertenties op sociale media in de toekomst.
Screenshots of Konrad’s smart phone documentary showing chapters about the distribution of advertisements via algorithms and their influence on gender inequality.
Schermafbeeldingen van Konrads smartphone documentaire met hoofdstukken over de verspreiding van advertenties via algoritmen en hun invloed op genderongelijkheid.
Lance Laoyan Royal Academy of Art (KABK)
HD video, 7:31 min
Transhumanist Rights by Lance Laoyan questions boundaries between humans and cyborgs, and whether constitutional rights could be assigned to cyborgs in the future. He presents fictional cases of medical and military body enhancements by means of brain computer interfaces (BCI) set in the year 2030. With his video work Laoyan challenges the definition of what constitutes a person and raises ethical questions about the development and implementation of artificial intelligence technologies.
Transhumanist Rights van Lance Laoyan bevraagt de grenzen tussen mensen en cyborgs en of in de toekomst constitutionele rechten aan cyborgs kunnen worden toegewezen. Het werk toont fictieve casussen van medische en militaire lichamelijke ingrepen door middel van hersencomputerinterfaces (BCI) in het jaar 2030. Met zijn videowerk daagt Laoyan de definitie uit van wat een persoon is en stelt hij ethische vragen over de ontwikkeling en implementatie van kunstmatige intelligentie.
Film stills from ‘Transhumanist Rights’ showing a fictional brain computer interface chip and a bionic eye engineered for surveillance.
Filmstills uit ‘Transhumanist Rights’ met een fictief hersenimplantaat en een bionisch oog ontwikkeld voor surveillance.
Marcin Liminowicz Royal Academy of Art (KABK)
Notes on Crowd Control is an interactive photo archive that shows Marcin Liminowicz reenacting the movement of crowds in public space. He explains step by step how crowd-analysis software operates, using computer vision techniques to detect deviant behaviour of individuals. His research highlights that the machine-learning algorithms detecting mass motion were often trained on the movement of prison inmates and crowds of sports spectators.
Notes on Crowd Control is een interactief fotoarchief waarmee Marcin Liminowicz de bewegingen van mensen in de openbare ruimte toont. Stap voor stap legt hij uit hoe crowd-analyse software werkt met behulp van computertechnieken die afwijkend gedrag van individuen kunnen detecteren. Zijn onderzoek benadrukt dat de algoritmen die massabeweging detecteren vaak werden getraind door middel van de bewegingen van gevangenen en sportfans.
Marcin Liminowicz reenacting crowd movements in public space.
Marcin Liminowicz simuleert de bewegingen van een menigte in de openbare ruimte.
Felix Meermann Royal Academy of Art (KABK)
HD video, 4:47 min
Felix Meermann’s video Glass Humans takes an in-depth look at smart home technologies and how they affect users’ privacy. Two factors in particular concern Meermann: the unobtrusive collection of data and the prediction of customer behaviour based on datasets from the past. His research shows that the “human of glass” is no longer science fiction, but a viable prospect in the near future.
Felix Meermanns videowerk Glass Humans werpt een kritische blik op smart home-technologieën en het effect dat ze hebben op de privacy van de gebruiker. Meermanns zorgen zijn gebaseerd op twee factoren: het onopvallend verzamelen van gegevens en het voorspellen van klantgedrag op basis van datasets uit het verleden. Uit zijn onderzoek blijkt dat de “mens van glas” geen sciencefiction meer is, maar al in de nabije toekomst levensvatbaar is.
Film stills from ‘Glass Humans’ in which the home and its occupants become transparent to the creators of smart home products.
Filmstills uit ‘Glass Humans’ waarin het huis en zijn bewoners steeds transparanter worden voor de makers van smart home-producten.
Dario Di Paolantonio Royal Academy of Art (KABK)
HD video, 9:33 min
Dario Di Paolantonio’s video It’s Not Our Duty addresses the topic of deep fakes and how they can harm democracy and public debate.
GANs (Generative Adversarial Networks) are machine learning systems that enable the creation of artificially generated images, and are composed of two deep networks, namely the “generator” and the “discriminator”. Di Paolantonio’s video stages a fictional dialogue between the generator and discriminator, thereby revealing the exchange process during machine-driven image creation.
Dario Di Paolantonio’s video It’s Not Our Duty gaat over deep fakes en hoe deze de democratie en het publieke debat kunnen schaden.
GAN’s (Generative Adversarial Networks) zijn systemen voor machinaal leren waarmee kunstmatig gegenereerde afbeeldingen kunnen worden gemaakt en die bestaan uit twee deep learning netwerken, namelijk de “generator” en de “discriminator”. De video van Di Paolantonio bestaat uit een fictieve dialoog tussen de generator en de discriminator en toont daarmee het proces van uitwisseling bij het maken van door machines aangestuurde beelden.
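The exchange between the two networks can be sketched in code. The following is a minimal, self-contained illustration of an adversarial training loop on toy one-dimensional data, assuming PyTorch is available; it is not the setup behind It’s Not Our Duty or I AM A FLAG, only a schematic of the generator–discriminator dynamic the video dramatises.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian stand in for the images
# a full GAN would be trained on.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Discriminator: learn to label real samples 1 and generated samples 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Generator: produce samples the discriminator classifies as real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples drift towards the "real" distribution (~4.0).
print(generator(torch.randn(5, latent_dim)).detach().squeeze())
```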
Film still from ‘It’s Not Our Duty’ showing a fictional dialogue between the two networks of a GAN.
Filmstill uit ‘It’s Not Our Duty’ met een fictieve dialoog tussen twee GAN-netwerken.
Katie Pelikan Baselj Royal Academy of Art (KABK)
Accent Intelligence is a collection of audio recordings in which Katie Pelikan Baselj explores the impact of machine learning on the everyday use of voice recognition technology. In six chapters, Pelikan Baselj takes a closer look at governmental and industry-related implementations and reveals the sources of inherent bias in voice recognition. She explains the business models behind voice-controlled home automation devices such as Apple’s Siri, Amazon’s Alexa and Google Home. Ultimately, Pelikan Baselj advocates for a culture of dynamic “accent chameleons” that represent a fuller and more accurate soundscape of the human voice.
Accent Intelligence is een verzameling audio-opnamen waarin Katie Pelikan Baselj de impact van machinaal leren op het dagelijkse gebruik van spraakherkenningstechnologie onderzoekt. In zes hoofdstukken gaat Pelikan Baselj dieper in op overheids- en branchegerelateerde toepassingen en onthult ze de vooroordelen die aan de basis staan van stemherkenning. Ook zet ze de bedrijfsmodellen uiteen die inherent zijn aan spraakgestuurde apparaten zoals Siri van Apple, Alexa van Amazon en Google Home. Ten slotte pleit Pelikan Baselj voor een cultuur van dynamische “accentkameleons” die een volledigere en nauwkeurigere soundscape van de menselijke stem vertegenwoordigen.
Patent drawings from language processing methods ranging from computers to ant communication.
Octrooitekeningen van spraakherkenningstechnieken zoals gebruikt bij computers en de communicatie tussen mieren.
Pablo Perez Royal Academy of Art (KABK)
Pablo Perez’s Comprehending Complexity is an interactive diagram created to better navigate data about the automated risk assessment software System Risk Indication (SyRI) developed by the Dutch authorities to detect welfare fraud. In February 2020, the District Court of The Hague stated that SyRI violates the rights set out in the European Convention on Human Rights (ECHR). This visualisation by Perez includes a list of actors involved in the court ruling and a glossary of technical terms.
Pablo Perez’ Comprehending Complexity is een interactief diagram dat is gemaakt om beter te kunnen navigeren door de gegevens over de geautomatiseerde risicobeoordelingssoftware System Risk Indication (SyRI), die door de Nederlandse autoriteiten is ontwikkeld om uitkeringsfraude op te sporen. In februari 2020 heeft de rechtbank in Den Haag verklaard dat SyRI de rechten die zijn vastgelegd in het Europees Verdrag voor de Rechten van de Mens (EVRM) schendt. De visualisatie van Perez bevat een lijst van actoren die betrokken zijn bij de uitspraak van de rechtbank en een verklarende woordenlijst van technische termen.
Flow chart overview showing the workings of risk profiling system SyRI. The black text derives from court documents, the blue text from additional analysis.
Een stroomdiagram over de werking van risicoprofileringssysteem SyRI. De zwarte tekst komt voort uit juridische documenten, de blauwe tekst uit aanvullende analyse.
Taya Reshetnik Royal Academy of Art (KABK)
History by GPT2 is an interactive webspace by Taya Reshetnik that presents the risks of automated journalism and how algorithms convert data into marketable news stories. One of the most advanced text generators is GPT-2, which was trained at the research laboratory OpenAI, co-founded by Elon Musk, using texts linked from the discussion website Reddit. By revising and quantifying the underlying source data used to train GPT-2, Reshetnik aims to highlight that fake news sites often contribute to the algorithmic bias of data output.
History by GPT2 is een interactieve website van Taya Reshetnik die de risico’s van geautomatiseerde journalistiek en de wijze waarop algoritmen gegevens omzetten in verkoopbare nieuwsverhalen toont. Een van de meest geavanceerde tekstgeneratoren is GPT-2, die werd ontwikkeld in het onderzoekslaboratorium OpenAI, medeopgericht door Elon Musk, met behulp van teksten waarnaar op discussieplatform Reddit wordt verwezen. Door de onderliggende brongegevens die worden gebruikt om GPT-2 te trainen aan te passen en te kwantificeren, benadrukt Reshetnik dat nepnieuwssites vaak bijdragen aan de algoritmische vooroordelen van data-output.
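To give a sense of how low the threshold for such machine-written text has become, the snippet below loads the publicly released GPT-2 weights through the Hugging Face transformers library and continues a headline-style prompt. The tooling and the prompt are illustrative assumptions on our part; they are not part of Reshetnik’s project.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load the publicly released (small) GPT-2 model as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# A headline-style prompt; the model continues it with plausible-sounding prose,
# regardless of whether the continuation is factually true.
prompt = "BREAKING: Parliament announces new rules for artificial intelligence"
for output in generator(prompt, max_length=60, num_return_sequences=2, do_sample=True):
    print(output["generated_text"], "\n---")
```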
Screenshot from ‘History by GPT2’ showing how online information on specific topics originated from data scraped from unverified sources.
Schermafbeelding van ‘History by GPT2’ die laat zien hoe online informatie over specifieke onderwerpen afkomstig is van gegevens uit niet-geverifieerde bronnen.
Elinor Salomon Royal Academy of Art (KABK)
I Can Know You in a Blink of an Eye by Elinor Salomon maps the impact of AI on healthcare and on the privacy of patient data. The website presents three theories that link AI technologies in the public healthcare system to the individual patient. In doing so, Salomon shows that machine learning technologies have become actively involved not only in patient engagement and administration, but also in medical diagnosis and treatment recommendation.
I Can Know You in a Blink of an Eye van Elinor Salomon brengt de impact van AI op de gezondheidszorg en privacy van patiëntgegevens in kaart. De website presenteert drie theorieën die AI-technologieën van de gezondheidszorg koppelen aan de individuele patiënt. Met dit werk laat Salomon zien dat machine learning-technologieën niet alleen actief betrokken zijn bij de administratie van patiënten, maar ook bij medische diagnose en advies met betrekking tot de behandeling.
Screenshot from ‘I Can Know You in a Blink of an Eye’ including notes, interview snippets, article excerpts and web references.
Schermafbeelding van ‘I Can Know You in a Blink of an Eye’ met notities, interviewfragmenten, uittreksels van krantenartikelen en weblinks.
Natalia Śliwińska Royal Academy of Art (KABK)
HD video, 7:00 min
Natalia Śliwińska’s video SyRI illustrates the impact of AI on the digital welfare state by looking into the workings of the Dutch government’s digital fraud prevention system SyRI. Systeem Risico Indicatie (SyRI) is a risk calculation model that was developed by the Ministry of Social Affairs and Employment to predict the likelihood of benefits and tax fraud in low-income neighbourhoods. In February 2020 the District Court of The Hague ruled that the use of SyRI software is unlawful, as privacy rights outweigh governmental measures against tax fraud.
Natalia Śliwińska’s video SyRI illustreert de impact van AI op de digitale verzorgingsstaat door te kijken naar de werking van het Nederlandse fraudepreventiesysteem SyRI. Systeem Risico Indicatie (SyRI) is een risicoberekeningsmodel dat is ontwikkeld door het Ministerie van Sociale Zaken en Werkgelegenheid om de waarschijnlijkheid van fraude met uitkeringen en belasting te voorspellen in wijken met lage inkomens. In februari 2020 oordeelde de rechtbank in Den Haag dat het gebruik van SyRI-software onrechtmatig is omdat privacyrechten zwaarder wegen dan overheidsmaatregelen tegen belastingfraude.
Film stills from ‘SyRI’ with the Rotterdam neighbourhoods Hillesluis and Bloemhof in the background.
Filmstills uit ‘SyRI’ met de Rotterdamse wijken Hillesluis en Bloemhof op de achtergrond.
Lauren Alexander 1983, ZA / NL
Designer, artist, researcher, co-founder of Foundland Collective, tutor at the MA Non Linear Narrative, and supervisor of the Who Speaks? collaboration.
Ontwerper, kunstenaar, onderzoeker, medeoprichter van Foundland Collective, docent aan de MA Non Linear Narrative en supervisor van Who Speaks?.
Ramon Amaro 1976, US
Engineer, researcher, writer, lecturer in Art and Visual Culture of the Global South at UCL, History of Art Department, and initiator of the Who Speaks? collaboration.
Ingenieur, onderzoeker, schrijver, docent Art and Visual Culture of the Global South aan de afdeling kunstgeschiedenis van UCL en initiatiefnemer van Who Speaks?.
Katie Pelikan Baselj 1995, US
Multi-media design researcher, visual storyteller, and student participant of the Who Speaks? collaboration.
Multimedia design onderzoeker, visuele verteller en studentdeelnemer aan Who Speaks?.
Sheena Calvert 1959, GB
Philosopher, artist, designer and educator at Camberwell College of Arts/Central Saint Martins (University of the Arts London), the Royal College of Art, and initiator of the Who Speaks? collaboration.
Filosoof, kunstenaar, ontwerper en docent aan Camberwell College of Arts/Central Saint Martins (University of the Arts London) en Royal College of Art, en initiatiefnemer van Who Speaks?.
Justine Corrijn 1996, BE
Multi-disciplinary designer with a fascination for archival research, and student participant of the Who Speaks? collaboration.
Multidisciplinaire ontwerper met een fascinatie voor archiefonderzoek en studentdeelnemer aan Who Speaks?
Sophie Czich 1993, FR
Critical designer, visual researcher, and student participant of the Who Speaks? collaboration.
Kritische ontwerper, visueel onderzoeker en student-deelnemer aan Who Speaks?
Mijke van der Drift
NL
Philosopher working on technology and ethics, tutor at Royal College of Art, London and MA Non-Linear Narrative, and supervisor of the Who Speaks? collaboration.
Filosoof op het gebied van technologie en ethiek, docent aan de Royal College of Art, Londen en de MA Non-Linear Narrative, en supervisor van Who Speaks?.
Niels Donker
NL
Advisor at the Analysis and Research Department (DAO) at the Dutch Parliament and research coordinator at the temporary committee on the Digital Future of the Dutch Parliament.
Adviseur bij de Dienst Analyse en Onderzoek en onderzoekcoördinator van de tijdelijke commissie Digitale Toekomst, beide bij de Tweede Kamer der Staten-Generaal.
Esther van der Heijden 1993, NL
Multi-media artist, visual researcher, writer and student participant of the Who Speaks? collaboration.
Multimediakunstenaar, visueel onderzoeker, schrijver en student-deelnemer aan Who Speaks?
Tuana İnhan 1994, CY / TR
Multi-disciplinary designer, visual researcher with a fascination for eco-politics and community building, and student
participant of the Who Speaks? collaboration.
Multidisciplinaire ontwerper en visueel onderzoeker met een fascinatie voor eco-politiek en community building, en studentdeelnemer aan Who Speaks?.
Callum John 1999, GB
Graphic designer, artist, filmmaker, and student participant of the Who Speaks? collaboration.
Grafisch ontwerper, kunstenaar, filmmaker en student-deelnemer aan Who Speaks?.
Jeroen Kerseboom NL
Head of the Analysis and Research Department (DAO) at the Dutch Parliament, involved in the Who Speaks? collaboration because he likes unconventional research approaches.
Hoofd van de Dienst Analyse en Onderzoek (DAO) van de Tweede Kamer der Staten-Generaal en betrokken bij Who Speaks? omdat hij van onconventionele onderzoeksbenaderingen houdt.
Jenny Konrad 1991, DE
Visual translator of information, multi-media researcher and student participant of the Who Speaks? collaboration.
Visuele vertaler van informatie, multimedia onderzoeker en studentdeelnemer aan Who Speaks?
Vanessa Lambrecht NL
Experienced interim manager with broad experience in legal, organisational and quality management issues, operating at the intersection of governance and education. For Who Speaks?, she was producer and advisor on the collaboration process.
Ervaren interim manager met ruime ervaring in juridisch, organisatorisch en kwaliteitsmanagement en opererend op het snijvlak van bestuur en onderwijs. Voor Who Speaks? was ze producent en adviseur van het samenwerkingsproces.
Lance Laoyan 1996, PH
Multi-disciplinary designer, sound artist, environmentalist and student participant of the Who Speaks? collaboration.
Multidisciplinaire ontwerper, geluidskunstenaar, milieuactivist en student-deelnemer aan Who Speaks?
Marcin Liminowicz 1992, PL
Designer, lens-based artist, member of Krzak Collective, and student participant of the Who Speaks? collaboration.
Ontwerper, lens-based kunstenaar, lid van Krzak Collective en student-deelnemer aan Who Speaks?.
Lizzie Malcolm 1987, GB
Designer and developer, co-founder of Rectangle, tutor at the MA Non Linear Narrative, and supervisor of the Who Speaks? collaboration.
Ontwerper en ontwikkelaar, medeoprichter van Rectangle, docent aan de MA Non Linear Narrative en supervisor van Who Speaks?.
Felix Meermann 1989, DE
Designer, film maker, and student participant of the Who Speaks? collaboration.
Ontwerper, filmmaker en studentdeelnemer aan Who Speaks?.
Dario Di Paolantonio 1992, IT
Video artist, philosopher, writer, and student participant of the Who Speaks? collaboration.
Videokunstenaar, filosoof, schrijver en student-deelnemer aan Who Speaks?.
Pablo Perez 1992, CH / ES
Multi-disciplinary designer, visual researcher, and student participant of the Who Speaks? collaboration.
Multidisciplinaire ontwerper, visueel onderzoeker en studentdeelnemer aan Who Speaks?.
Dan Powers 1983, US
Designer and developer, co-founder of Rectangle, tutor at the MA Non Linear Narrative, and supervisor of the Who Speaks? collaboration.
Ontwerper en ontwikkelaar, medeoprichter van Rectangle, docent aan de MA Non Linear Narrative en supervisor van Who Speaks?.
Taya Reshetnik 1992, RU
Graphic designer, artist, visual researcher and storyteller, and student participant of the Who Speaks? collaboration.
Grafisch ontwerper, kunstenaar, visueel onderzoeker, verhalenverteller en student-deelnemer aan Who Speaks?.
Elinor Salomon 1987, IL / PT
Artist, designer, researcher, and student participant of the Who Speaks? collaboration.
Kunstenaar, ontwerper, onderzoeker en student-deelnemer aan Who Speaks?
Niels Schrader 1977, VE / DE
Designer, founder of Mind Design, co-head at the MA Non Linear Narrative, and both initiator and supervisor of the Who Speaks? collaboration.
Ontwerper, oprichter van Mind Design, co-hoofd van de MA Non Linear Narrative en zowel initiatiefnemer als supervisor van Who Speaks?
Natalia Śliwińska 1993, PL
Multi-disciplinary designer, visual anthropologist, and student participant of the Who Speaks? collaboration.
Multidisciplinaire ontwerper, visueel antropoloog en studentdeelnemer aan Who Speaks?.
*** Article 1
All persons in the Netherlands shall be treated equally in equal circumstances. Discrimination on the grounds of religion, belief, political opinion, race or sex or on any other grounds whatsoever shall not be permitted.
*** Article 2
1. Dutch nationality shall be regulated by Act of Parliament.
2. The admission and expulsion of aliens shall be regulated by Act of Parliament.
3. Extradition may take place only pursuant to a treaty. Further regulations concerning extradition shall be laid down by Act of Parliament.
4. Everyone shall have the right to leave the country, except in the cases laid down by Act of Parliament.
*** Article 3
All Dutch nationals shall be equally eligible for appointment to public service.
*** Article 4
Every Dutch national shall have an equal right to elect the members of the general representative bodies and to stand for election as a member of those bodies, subject to the limitations and exceptions prescribed by Act of Parliament.
*** Article 5
Everyone shall have the right to submit petitions in writing to the competent authorities.
*** Article 6
1. Everyone shall have the right to profess freely his religion or belief, either individually or in community with others, without prejudice to his responsibility under the law.
2. Rules concerning the exercise of this right other than in buildings and enclosed places may be laid down by Act of Parliament for the protection of health, in the interest of traffic and to combat or prevent disorders.
*** Article 7
1. No one shall require prior permission to publish thoughts or opinions through the press, without prejudice to the responsibility of every person under the law.
2. Rules concerning radio and television shall be laid down by Act of Parliament. There shall be no prior supervision of the content of a radio or television broadcast.
3. No one shall be required to submit thoughts or opinions for prior approval in order to disseminate them by means other than those mentioned in the preceding paragraphs, without prejudice to the responsibility of every person under the law. The holding of performances open to persons younger than sixteen years of age may be regulated by Act of Parliament in order to protect good morals.
4. The preceding paragraphs do not apply to commercial advertising.
*** Article 8
The right of association shall be recognised. This right may be restricted by Act of Parliament in the interest of public order.
*** Article 9
1. The right of assembly and demonstration shall be recognised, without prejudice to the responsibility of everyone under the law.
2. Rules to protect health, in the interest of traffic and to combat or prevent disorders may be laid down by Act of Parliament.
*** Article 10
1. Everyone shall have the right to respect for his privacy, without prejudice to restrictions laid down by or pursuant to Act of Parliament.
2. Rules to protect privacy shall be laid down by Act of Parliament in connection with the recording and dissemination of personal data.
3. Rules concerning the rights of persons to be informed of data recorded concerning them and of the use that is made thereof, and to have such data corrected shall be laid down by Act of Parliament.
*** Article 11
Everyone shall have the right to inviolability of his person, without prejudice to restrictions laid down by or pursuant to Act of Parliament.
*** Article 12
1. Entry into a home against the will of the occupant shall be permitted only in the cases laid down by or pursuant to Act of Parliament, by those designated for the purpose by or pursuant to Act of Parliament.
2. Prior identification and notice of purpose shall be required in order to enter a home under the preceding paragraph, subject to the exceptions prescribed by Act of Parliament.
3. A written report of the entry shall be issued to the occupant as soon as possible. If the entry was made in the interests of state security or criminal proceedings, the issue of the report may be postponed under rules to be laid down by Act of Parliament. A report need not be issued in cases, to be determined by Act of Parliament, where such issue would never be in the interests of state security.
*** Article 13
1. The privacy of correspondence shall not be violated except in the cases laid down by Act of Parliament, by order of the courts.
2. The privacy of the telephone and telegraph shall not be violated except, in the cases laid down by Act of Parliament, by or with the authorisation of those designated for the purpose by Act of Parliament.
*** Article 14
1. Expropriation may take place only in the public interest and on prior assurance of full compensation, in accordance with regulations laid down by or pursuant to Act of Parliament.
2. Prior assurance of full compensation shall not be required if in an emergency immediate expropriation is called for.
3. In the cases laid down by or pursuant to Act of Parliament there shall be a right to full or partial compensation if in the public interest the competent authority destroys property or renders it unusable or restricts the exercise of the owner’s rights to it.
*** Article 15
1. Other than in the cases laid down by or pursuant to Act of Parliament, no one may be deprived of his liberty.
2. Anyone who has been deprived of his liberty other than by order of a court may request a court to order his release. In such a case he shall be heard by the court within a period to be laid down by Act of Parliament. The court shall order his immediate release if it considers the deprivation of liberty to be unlawful.
3. The trial of a person who has been deprived of his liberty pending trial shall take place within a reasonable period.
4. A person who has been lawfully deprived of his liberty may be restricted in the exercise of fundamental rights in so far as the exercise of such rights is not compatible with the deprivation of liberty.
*** Article 16
No offence shall be punishable unless it was an offence under the law at the time it was committed.
*** Article 17
No one may be prevented against his will from being heard by the courts to which he is entitled to apply under the law.
*** Article 18
1. Everyone may be legally represented in legal and administrative proceedings.
2. Rules concerning the granting of legal aid to persons of limited means shall be laid down by Act of Parliament.
*** Article 19
1. It shall be the concern of the authorities to promote the provision of sufficient employment.
2. Rules concerning the legal status and protection of working persons and concerning co-determination shall be laid down by Act of Parliament.
3. The right of every Dutch national to a free choice of work shall be recognised, without prejudice to the restrictions laid down by or pursuant to Act of Parliament.
*** Article 20
1. It shall be the concern of the authorities to secure the means of subsistence of the population and to achieve the distribution of wealth.
2. Rules concerning entitlement to social security shall be laid down by Act of Parliament.
3. Dutch nationals resident in the Netherlands who are unable to provide for themselves shall have a right, to be regulated by Act of Parliament, to aid from the authorities.
*** Article 21
It shall be the concern of the authorities to keep the country habitable and to protect and improve the environment.
*** Article 22
1. The authorities shall take steps to promote the health of the population.
2. It shall be the concern of the authorities to provide sufficient living accommodation.
3. The authorities shall promote social and cultural development and leisure activities.
*** Article 23
1. Education shall be the constant concern of the Government.
2. All persons shall be free to provide education, without prejudice to the authorities’ right of supervision and, with regard to forms of education designated by law, their right to examine the competence and moral integrity of teachers, to be regulated by Act of Parliament.
3. Education provided by public authorities shall be regulated by Act of Parliament, paying due respect to everyone’s religion or belief.
4. The authorities shall ensure that primary education is provided in a sufficient number of public-authority schools in every municipality and in each of the public bodies referred to in Article 132a. Deviations from this provision may be permitted under rules to be established by Act of Parliament on condition that there is opportunity to receive the said form of education, whether in a public-authority school or otherwise.
5. The standards required of schools financed either in part or in full from public funds shall be regulated by Act of Parliament, with due regard, in the case of private schools, to the freedom to provide education according to religious or other belief.
6. The requirements for primary education shall be such that the standards both of private schools fully financed from public funds and of public-authority schools are fully guaranteed. The relevant provisions shall respect in particular the freedom of private schools to choose their teaching aids and to appoint teachers as they see fit.
7. Private primary schools that satisfy the conditions laid down by Act of Parliament shall be financed from public funds according to the same standards as public-authority schools. The conditions under which private secondary education and pre-university education shall receive contributions from public funds shall be laid down by Act of Parliament.
8. The Government shall submit annual reports on the state of education to the States General.
Artificial Intelligence, Language, and Democracy
A project collaboration between the Non Linear Narrative Master’s programme of the Royal Academy of Art, The Hague, Goldsmiths (University of London) and the University of the Arts London, together with the Analysis and Research Department (DAO) of the Dutch Parliament.
whospeaks.eu
Student participants
Katie Pelikan Baselj, Justine Corrijn, Sophie Czich, Esther van der Heijden, Tuana İnhan, Dario Di Paolantonio, Jenny Konrad, Lance Laoyan, Marcin Liminowicz, Felix Meermann, Pablo Perez, Taya Reshetnik, Elinor Salomon and Natalia Śliwińska (MA Non Linear Narrative, Royal Academy of Art, The Hague); and Callum John (BA Graphic Design, Camberwell College of Arts, London)
Initiative
Queer Computing Consortium (QCC): Ramon Amaro (University College London, formerly Goldsmiths), Sheena Calvert (University of the Arts London) and Niels Schrader (Royal Academy of Art, The Hague)
Project supervision
Lauren Alexander, Mijke van der Drift, Lizzie Malcolm, Dan Powers and Niels Schrader
Guests Q&A sessions
Evelyn Austin, Jack Poulson and Ezekiel Dixon-Román
Production
Ingrid Grünwald, Lizzy Kok and Vanessa Lambrecht
Translation
Irene de Craen
Photography
Roel Backaert, except: Archive Farms Inc. – Alamy Stock Photo (p. 41), Charles M. Vella – Alamy Stock Photo (p. 57), Visually Attractive for IBM (p. 71), and Nationaal Archief – Collectie Spaarnestad (p. 160)
Design
Pablo Perez and Taya Reshetnik
Development
Rectangle, Glasgow
PUBLICATION
Main editors
Lauren Alexander and Niels Schrader
Text contributions
Ramon Amaro, Mijke van der Drift, Sheena Calvert, Jeroen Kerseboom, Niels Schrader and student participants
Copy editing
Maria Dzodan
Design
Jungeun Lee and Marika Seo
Printing
Robstolk®, Amsterdam
Binding
Boekbinderij Patist, Den Dolder
Typeface
Hercules and Fira Code
Paper
Edixion 190 gsm and 90 gsm
ISBN
978-90-72600-57-8
Print run
300
Special thanks to Dolly van Belle-Schaafsma, Jack Clarke, Niels Donker, Fenna Hup, Jeroen Kerseboom, Bianca Meilof, Marieke Schoenmakers, Angelina Tsitoura and Erik Viskil
Disclaimer
All rights reserved. Kindly contact the Royal Academy of Art, The Hague regarding any form of use or reproduction of photographs and any material in this publication. Although every effort has been made to find the copyright holders of all the illustrations used, this proved impossible in some cases. Interested parties are requested to contact the publisher.
The project is generously supported by the Dutch Parliament, the Royal Academy of Art, The Hague and the Knowledge Exchange Impact Fund, Camberwell, Chelsea and Wimbledon Colleges (CCW), University of the Arts London.
© 2021
Royal Academy of Art (KABK)
Master Non Linear Narrative
Prinsessegracht 4
2514 AN The Hague
The Netherlands
The investiture ceremony on March 30, 1814, with Willem Frederik, Prince of Orange and Nassau, the later King William I.
Het beëdigen van de Grondwet op 30 maart 1814 door Willem Frederik, prins van Oranje en Nassau, de latere koning Willem I.
// Who Speaks? was a semester-long educational programme, held in 2020 in partnership with the Analysis and Research Department (DAO) of the Dutch Parliament, which investigated artificial intelligence and its influence on democracy through language.
// The project collaboration was initiated by the Non Linear Narrative Master’s programme of the Royal Academy of Art (KABK), The Hague, Goldsmiths (University of London) and Camberwell College of Arts (CCW / University of the Arts London).
// Who Speaks? was een educatief programma van een semester dat plaatsvond in 2020 in samenwerking met de Dienst Analyse en Onderzoek (DAO) van de Tweede Kamer der Staten-Generaal. Hierin werden de gevolgen onderzocht van kunstmatige intelligentie op de democratie door middel van taal.
// Het project is een samenwerking van de masteropleiding Non Linear Narrative van de Koninklijke Academie van Beeldende Kunsten (KABK) in Den Haag, Goldsmiths (University of London) en Camberwell College of Arts (CCW / University of the Arts London).
// ISBN 978-90-72600-57-8