2024 COMMEMORATIVE EDITION
REBUILDING TRUST: A DAVOS DIALOGUE
EXPONENTIAL TECHNOLOGIES: AI AND HUMAN FLOURISHING | THE FUTURE OF EDUCATION: LEARNING BEYOND PASTICHE | AI AND GOVERNANCE: MAKING AI WORK FOR ALL
How might you work with others to help unlock impact at scale? Tackling the interconnected challenges of climate change and social inequality requires systemic change. By joining forces with other like-minded organizations, we can achieve together what no one organization could achieve alone. Join us on the journey to accelerating progress toward the SDGs. #EYRipples
© 2023 EYGM Limited. All Rights Reserved. ED None.
MASTHEAD
CEO & PUBLISHER: ANA C. ROLD
EDITOR-IN-CHIEF: SHANE SZARKOWSKI
ART DIRECTOR: MARC GARFIELD
EDITORS: JEREMY FUGLEBERG, MELISSA METOS
CORRESPONDENTS: ELIA PRETO MARTINI, NIKOLA MIKOVIC
MULTIMEDIA MANAGER: WHITNEY DEVRIES
OPERATIONS COORDINATOR: BEKI ADAMS
BOOK REVIEWER: JOSHUA HUMINSKI
PHOTOGRAPHER: MARCELLUS MCINTOSH
EDITORIAL ADVISORY BOARD: ANDREW M. BEATO, FUMBI CHIMA, KERSTIN COATES, DANTE A. DISPARTE, SIR IAN FORBES, LISA GABLE, GREG LEBEDEV, ANITA MCBRIDE
CONTRIBUTING AUTHORS: NIKOS ACUÑA, HENRY ANUMUDU, KATHERINE BLANCHARD, DANTE DISPARTE, MANJULA DISSANAYAKE, LISA GABLE, THOMAS GARRETT, NICOLE GOLDIN, IKE IKEME, SRUJANA KADDEVARMUTH, CHRISTOPHER KARWACKI, WENDY KOPP, TINA MANAGHAN, ALEXANDER NICHOLAS, THOMAS PLANT, ANGELA REDDING, NOAH SOBE, SHANE SZARKOWSKI, JOSEPH TOSCANO, MARIO VASILESCU
Copyright © Diplomatic Courier™ and Medauras Global Publishing 2024. All rights reserved under International and Pan-American Copyright Conventions. Published in the United States by Medauras Global and Diplomatic Courier.

LEGAL NOTICE. No part of this publication may be reproduced in any form—except brief excerpts for the purpose of review—without written consent from the publisher and authors. Every effort has been made to ensure the accuracy of information in this publication; however, the authors, Diplomatic Courier, and Medauras Global make no warranties, express or implied, with regard to the information and disclaim all liability for any loss, damages, errors, or omissions.

EDITORIAL. The content represents the views of its authors and does not reflect those of the editors and the publishers.

CITATIONS & SOURCES. All articles in this special edition have already been or will be published on the Diplomatic Courier website, including relevant hyperlink citations.

PERMISSIONS. This publication cannot be reproduced without the permission of the authors and the publisher. For permissions please email info@medauras.com with your written request.

ARTWORK. Cover design via BigStock Photos. Artwork and design by Marc Garfield for Diplomatic Courier.
DAVOS DIALOGUE 2024 | 3
DAVOS DIALOGUE | JANUARY 2024
Welcome
Shane Szarkowski Editor-in-Chief
With AI, Will Humanity Flourish or Falter?
One of the most enduring headlines of 2023 was the rapid and widespread adoption of AI. From generative AI and its likely impacts on the knowledge economy to concerns over AI bias, AI-empowered mis- and disinformation, privacy, and more, AI has been a major force of disruption this year, and every sector has had to grapple with what AI could mean for its future.

At Diplomatic Courier, we spent a great deal of 2023 pondering the question of what impact AI will have on human flourishing—both in the near future and a decade or more from now. To really dig deep on the question, we turned to our inaugural cohort of World in 2050 (W2050) Senior Fellows, designing collective intelligence gatherings to consider how AI will impact human flourishing through the lens of three of the five W2050 megatrends: "Exponential Technologies Radically Reshaping the World," "Education and Work Grapple with the Next Great Rebalancing," and "Societal, Governance Institutions Under Pressure."

This also coincides with the launch of a new, redesigned look for our special editions. This iteration of our annual "Davos Dialogue" special edition is very different from what we've published before. Herein, we provide you with short and actionable reports drawn from the findings of the W2050 Senior Fellows, functionally dividing this special edition into three chapters. Each chapter is populated by commentaries from Diplomatic Courier's network of experts. We believe this new format will enhance the impact of our network's thought leadership while making it more accessible to global publics than ever. And Davos is a great place for this launch if we want to maximize the impact of these insights.

As always, feel free to reach out with any feedback or if you'd like to be involved with what we do. We hope you enjoy!
ADVERTISEMENT
United Nations University
Knowledge to Transform the World
INSIGHTS
A monthly digest from the UNU research network
SUBSCRIBE NOW: unu.edu/insights
Photo: Nicholas Doherty/Unsplash
Contents
PART I: EXPONENTIAL TECHNOLOGIES AND HUMAN FLOURISHING

10 | COLLECTIVE INTELLIGENCE REPORT: Regulating AI: Guardrails Without Going Off the Rails. By: Dr. Shane Szarkowski with contributions by Ana C. Rold, Lisa Gable, Joseph Toscano, Mario Vasilescu, and Nikos Acuña
14 | Universal Intelligence for Human Flourishing. By: Nikos Acuña
16 | Deus Ex Machina: Exploring AI's Impact on Human Potential. By: Dante Disparte
18 | Engineering Generative AI to Ease Information Overload. By: Thomas Plant
20 | AI is Where Demographic, Digital Dividend Potential Collide. By: Nicole Goldin
22 | Commercializing and Monetizing Generative AI. By: Srujana Kaddevarmuth

PART II: AI AND THE FUTURE OF EDUCATION

26 | COLLECTIVE INTELLIGENCE REPORT: AI for Education, Educating for AI. By: Dr. Shane Szarkowski with contributions by Katherine Blanchard, Henry Anumudu, Manjula Dissanayake, and Dr. Noah W. Sobe
30 | With AI, Teachers Can Finally Revolutionize Education. By: Wendy Kopp
32 | Learning Beyond Pastiche. By: Noah Sobe
34 | Academic Intelligence: The Evolution of Education Through AI. By: Ike Ikeme
36 | AI Can Empower Individuals with Learning Disabilities. By: Lisa Gable

PART III: GOVERNING AI

40 | COLLECTIVE INTELLIGENCE REPORT: AI Could Be Salvation or Doom of Our Institutions. By: Dr. Shane Szarkowski with contributions by Christopher Karwacki, Lisa Gable, Thomas Garrett, and Dr. Tina Managhan
44 | Preserving Institutional Democracy in the Age of AI. By: Thomas Garrett
46 | Making Generative AI Work for the Global South. By: Alexander Nicholas
48 | Unplugging Democracy with AI and the Threat of Surveillance. By: Joseph Toscano
50 | Spirituality is Essential to Ensuring AI Helps Humanity Flourish. By: Angela Redding
ADVERTISEMENT
IMF Publications… Keeping readers in touch with global economic issues
Visit Bookstore.imf.org | INTERNATIONAL MONETARY FUND
COLLECTIVE INTELLIGENCE REPORT
Regulating AI: Guardrails Without Going Off the Rails
By: Dr. Shane Szarkowski with contributions by Ana C. Rold, Lisa Gable, Joseph Toscano, Mario Vasilescu, and Nikos Acuña
This report was compiled from a collective intelligence gathering of World in 2050’s (W2050) Senior Fellows (Exponential Technology Committee). The meeting took place under the Chatham House Rule, so specific ideas will not be directly attributed to any specific Fellow.
Image via Adobe Stock.
On the morning of October 30, 2023, the Biden administration released an executive order laying out guiding regulatory principles and priorities for the development and use of AI. On the afternoon of the same day, members of W2050's Senior Fellows committee on exponential technology met to discuss the executive order. Fellows also discussed what best-practice regulation of AI could look like in 2024 and beyond. While the discussion began with the Biden administration's executive order, thinking on longer-term regulatory best practice was global.
DESPITE PUSHBACK FROM INNOVATING COMPANIES, THERE IS A GREAT DEAL OF PRECEDENT, FROM FINANCES FOR THE SAKE OF TAXATION TO DISCLOSURE ON SAFETY PROTOCOLS ACROSS VARIOUS INDUSTRIES.
Where the Executive Order Falls Short
The Biden administration's executive order calls for companies developing the largest, most capable AI models to disclose a wide swath of information to the federal government. Fellows agreed that while the intent of the executive order appears to be good, and while it will help advance the legal side of regulating and de-risking the development of AI, what the executive order calls for is infeasible for two main reasons. First, the level of granularity the executive order demands in companies' disclosures is technically unrealistic and would, if enforced, have a chilling effect on innovation. Second, many of the larger companies developing the most powerful AI models will be resistant to disclosing information, even when it would be for the good of society.

The first problem is a matter of regulators asking for more than is necessary, which would create an unnecessary burden on innovators. The second problem speaks to the level of resistance large companies innovating in AI will bring to bear against any calls for disclosure of their development practices. This second problem is exacerbated by powerful lobbying practices—which decry government interference and infringement on free will—that will make building actual regulatory law off this executive order very tricky.

However, the executive order itself had many positive aspects, according to the Fellows. Some of the programs suggested in the executive order are promising, and the order is beginning to tackle the legal ramifications of what healthy regulation can look like. Perhaps more importantly, it gives us a basis for thinking about what is possible and what can work.
Toward Healthy Regulatory Ecosystems

A regulatory ecosystem is not about specific regulations for specific problems. Instead, it is a set of guiding principles which not only informs how particular innovations will be regulated, but helps innovators understand what will and will not be acceptable behavior as they develop new technologies.

A big part of successful regulation must be about disclosure, and despite pushback from innovating companies there is a great deal of precedent, from finances for the sake of taxation to disclosure on safety protocols across various industries. It is possible for companies to disclose where their training data comes from, how training is carried out, and what the intent of the AI innovation is. This can be done in a way that is vague enough that the algorithm cannot be reverse engineered—thus protecting the intellectual property of the innovators—while still giving regulators enough information
to operate effectively. This will create extra work both for regulators and for companies, but it can be done without harming the ability to innovate, and it is worthwhile in the name of both safety and national security.

These same points also suggest what other digital platforms could do if they chose to. The failure of regulators to prompt this sort of disclosure and the failure of platforms to choose to be transparent can both be instructive in figuring out a way forward. The same problems that came about with poor social media regulation—the proliferation of mis- and disinformation, misuse of user data, and non-transparent engineering of platforms to influence human behavior—will still exist with AI but will be supercharged. So, in better regulating AI we should also address pre-existing issues in order to create a truly healthy regulatory ecosystem.
Priorities for Healthy Regulatory Ecosystems

Data Disclosure & Laws on Data Crime: Companies should not be required to disclose everything they're doing, but disclosures need to be enough to check for data crimes. Unfortunately, we haven't yet fully agreed on legislation about what data crimes look like, which makes that a necessary first step in creating truly effective disclosure guidance.

Independent Regulators: We need regulatory agencies which are wholly independent of either governments or private enterprises, though these agencies must be funded at least in part by governments. Independent experts are needed to regulate effectively, and creating independence from political systems helps avoid regulatory overreach. Government funding helps avoid regulatory capture—when regulatory agencies become dominated by the interests they regulate rather than the public interest.
WE CAN REWARD MEANINGFUL AND RELIABLE INFORMATION VIA CONCEPTS LIKE TRANSPARENT VERIFICATION OF PROVENANCE (FOR AI-GENERATED CONTENT) AND PROOF OF EFFORT FOR CONTENT CREATORS.

Data Sovereignty: The sacrifice of privacy must be addressed—we need transparency about where our data goes and how it is being used by algorithms. This should be part of data disclosure. We also need to define and protect fundamental rights to our data—to delete, amend, and access our data at will. And we need to resist AI-empowered automated decision making based on data capture, as algorithmic risk assessment has proven prone to bias.

Digital Education: Media literacy and digital literacy should be just as core to education systems as mathematics and language. This can protect against mis- and disinformation, but also help us protect our data and be better prepared for evolving labor market needs.

Protecting the Information Commons: The information commons incentivizes speed and noise—those who can garner the most attention the most quickly profit. We can reward meaningful and reliable information via concepts like transparent verification of provenance (for AI-generated content) and proof of effort for content creators. In this way we can slow down the proliferation of content and ensure there is transparency about where it comes from, so consumers can make informed decisions about the information they consume.
Image via BigStock Photos.
Universal Intelligence for Human Flourishing
By Nikos Acuña
In a time marked by profound challenges with exponential technologies—from disrupting education reform and harmonizing self with society to AI governance and alignment—Artificial General Intelligence (AGI) may hold the promise of solving all of the challenges posed by these megatrends at once.

Success lies in the ability to re-imagine the web into a singular data ecosystem that makes data meaningful for every user node. In aggregate this will create a harmonized system that can rebalance the self with society, deliver hyper-efficiencies in global resource allocation, and resolve political and cross-cultural misalignment.

It starts with addressing the problem of binary thinking embedded in everything built in the Western world. This is a central motif behind transistors, logic gates, and the political systems that permeate our splintered reality. While this divisive approach has proven effective in fostering competition and innovation through system opposition, it fails to address a more fundamental reality: totality. This totality is a universal intelligence that can be attained through a paradigm shift, uniting binary approaches, and harmonizing data in the spirit of the Dewey Decimal System. How? By adding relational data to keywords.

UNIVERSAL INTELLIGENCE CAN BRIDGE THE GLOBAL SKILLS AND CREDENTIALING GAP, TRANSCENDING TRADITIONAL EDUCATION SYSTEMS AND FOSTERING A DIRECT RELATIONSHIP BETWEEN SKILLS, KNOWLEDGE, AND EMPLOYMENT OPPORTUNITIES.

A dial, a relational data object, can convey a user's interpretation of keywords and content, providing numerous benefits when added to keywords. First, it enables true AI personalization, deepening its coherence and adaptability in real time. Second, it democratizes and harmonizes economic systems by infusing transactions and data ownership with layers of interconnected values.

The result is a global-scale neural network that serves the collective whole while optimizing the needs of its individual nodes.

This has compounding effects at the intersection of education, well-being, and government. Universal intelligence can bridge the global skills and credentialing gap, transcending traditional education systems and fostering a direct relationship between skills, knowledge, and employment opportunities. Additionally, it can facilitate personalized health and wellness plans, preemptive health care strategies, and encourage individuals to align their inner life with society as a whole. Finally, the platform can serve as a collective democratic tool by empowering voices, bridging geopolitical trust gaps, and reinforcing weakened alliances and rights regimes.

This promises a collective universal intelligence that re-imagines AGI with human potential fully realized and in harmony with technological advancement, moving towards a world where humanity can flourish.

*****
About the author: Nikos Acuña is a technologist, entrepreneur, AI researcher, and award-winning author. He is the co-founder of Dialin and Lyrical AI, and is a global keynote speaker on technology's impact on humanity.
Image via Adobe Stock.
Deus Ex Machina: Exploring AI's Impact on Human Potential
By Dante A. Disparte
While many exponential technologies have vied for global attention—whether from investors, policymakers, regulators, or customers—none has enjoyed as rapid an ascent into public consciousness as AI. Like all novel technologies, there are both risks and opportunities. On one hand, AI's proponents argue that AI, like the internet before it, can take human potential beyond intellectual points of diminishing returns. Meanwhile, opponents argue that AI, especially when it reaches a state of general intelligence (AGI), raises the specter of an extinction-level event for everything that requires math, computational logic, or encryption (which by today's standards is virtually everything). The result would be serfs at the hands of super-intelligent machines.

Somewhere between the glorified modernization of Clippy, Microsoft's paperclip-shaped virtual assistant from 1996, and the end-of-days scenes of the "Terminator" movies lies a pragmatic middle ground. Indeed, more than 1,000 concerned scientists and technologists joined ranks in a rare sign of unanimity calling for responsible innovation and guardrails for AI. The world got a flavor of what these governance guardrails might look like with the controversial near ouster of Sam Altman by OpenAI's board at the end of last year. This event, for which scant details have been made public, shows how governance still very much depends on individual leaders—instead of the specific types of collective defense advocated by concerned technologists in their global appeals. This case was notable since OpenAI is the maker of ChatGPT, arguably the world's most successful technology launch—if measured on an axis of user growth and the limited marketing funds spent by the firm.

This raises some important societal questions. Namely, will generally available AI genuinely, meaningfully, and constructively augment human potential? Or will it trigger a deleterious slide in intellectual curiosity, independence, and human discovery?

WILL GENERALLY AVAILABLE AI GENUINELY, MEANINGFULLY, AND CONSTRUCTIVELY AUGMENT HUMAN POTENTIAL? OR, WILL IT TRIGGER A DELETERIOUS SLIDE IN INTELLECTUAL CURIOSITY, INDEPENDENCE, AND HUMAN DISCOVERY?

The early breakthrough of calculators was received by hardcore pencil-and-paper mathematicians as a scourge, yet today calculators play a vital role as mathematical building blocks of an ambitious human learning journey. In the hands of thoughtful curators, AI is like a scientific calculator for complex, intellectual abstractions and the occasional platitudinous ponderosity occupying our minds. Whether AI sets humans free or binds us is still to be determined, especially since the world has only witnessed the first population-scale versions of this impressive technology.

*****
About the author: Dante A. Disparte serves as the Chief Strategy Officer & Head of Global Policy for Circle. He is a life member of the Council on Foreign Relations and serves on the World Economic Forum's Digital Currency Governance Consortium. He is also a member of Diplomatic Courier's editorial advisory board.
Image via BigStock Photos.
Engineering Generative AI to Ease Information Overload
By Thomas Plant
Staring down the lineup of critical elections in 2024—in the United States, Taiwan, Russia, India, and the United Kingdom, among others—ensuring that voters are not manipulated is a cornerstone of civic resilience. Here, large language models (LLMs) like ChatGPT find themselves in the hot seat. Despite the focus on concerns about adversaries exploiting LLMs for targeted disinformation, a more substantial impact is their role in accelerating information overload. LLMs are flooding the internet with excessive, repetitive, and often irrelevant content—driven by their widespread use for automated content production across various industries.

The typical reliance on influencers, journalists, and close circles to sift through the noise, coupled with the surge of LLM output, deepens our reliance on LLM-generated content. This dependency creates ripe conditions for manipulation: ill-intentioned sources can omit critical details to sway opinions and perspectives, using information overload to hide the gaps. Under these conditions, it is difficult to sort through the chaos and boil it down to essential information only. That takes work. Information overload threatens to sap society's attention spans before we have proper time to fully grasp a situation.

However, there is an eye to this hurricane. In addition to generating text, LLMs can summarize texts that are clear and unambiguous. Of course, the internet is not clear and unambiguous—causing LLMs to generate falsehoods or fabrications. Yet there exists potential to engineer LLMs that summarize complex issues and cut back on information overload. These advanced logical models, part of the journey toward "artificial general intelligence," would use logical weights to assess the factual accuracy, relevance, and importance of information—not word association, the current basis of LLMs.
ADVANCED LOGICAL MODELS, PART OF THE JOURNEY TOWARD "ARTIFICIAL GENERAL INTELLIGENCE," WOULD USE LOGICAL WEIGHTS TO ASSESS THE FACTUAL ACCURACY, RELEVANCE, AND IMPORTANCE OF INFORMATION—NOT WORD ASSOCIATION, THE CURRENT BASIS OF LLMS.

Such a model is a tall task. Flaws in its design might amplify false information due to the inherent trust people place in technology. These potential risks, however, pale in comparison to the predictable consequences of succumbing to information overload. With intelligent and ethical design, such a tool could foster populations that are more resilient to mis/disinformation and better equipped to make informed political decisions at the polls.

*****
About the author: Thomas Plant is an analyst at Valens Global and supports the organization's work on domestic extremism. He is also an incoming Fulbright research scholar to Estonia and the co-founder of William & Mary's DisinfoLab, the nation's first undergraduate disinformation research lab.
Photo by Lucas Lenzi via Unsplash.
AI Can Maximize the Demographic, Digital Dividend
By Dr. Nicole Goldin
Few would contest that AI is a significant disrupter of the future of work, prompting cheers for the potential boost to productivity and economic growth and fears of job displacement and the exacerbation of inequality. Some estimate labor productivity gains and spending could drive $7 trillion in global GDP growth. Yet, as we have seen in the past through industrial revolutions and the onset of information and communications technology and digital transformation, the ability to mitigate or manage risk while maximizing rewards relies to a large extent on striking the right balance in developing and deploying both technology and human capital. In the past, a failure to invest in or create access to either has undermined economic potential and widened gaps. Failure to close digital skill gaps could cost G20 countries alone nearly $12 trillion in GDP.

It is also important to consider the interdependence of other macro trends with advancing AI, including the link between digital and demographic dividends to growth. In Africa, for example, the AI market is expected to expand rapidly, by 20% a year. At the same time, 70% of the continent's population is under age 30, and many point to this reservoir of youth consumers and workers—up to 12 million per year—as the potential to drive innovation and growth. Digital skills on the continent, however, are lacking, with scores between 1.8 and 5 on the Digital Skills Gap Index, well below the global average of 6. To take advantage of this clear opportunity, the public and private sectors need to invest in infrastructure, education, and training, including in the specific technical skills necessary for AI jobs and complementarity in work: machine learning, cloud computing, data science, and cybersecurity.
70% OF THE CONTINENT'S POPULATION [AFRICA] IS UNDER THE AGE OF 30, AND MANY POINT TO THIS RESERVOIR OF YOUTH CONSUMERS AND WORKERS—UP TO 12 MILLION PER YEAR—AS THE POTENTIAL TO DRIVE INNOVATION AND GROWTH.

While automation and AI may put some jobs at risk, history shows that new technology creates new functions requiring technology-human interfaces, driving employment and often raising wages for higher-skill work. As many as 60% of jobs today did not exist in 1940.

Promising interventions include setting national digital skills and infrastructure policies; integrating digital education into secondary and technical and vocational education and training curricula, as well as standalone modular "bootcamp" style training; and workplace-based up- or re-skilling initiatives and experiential, applied learning experiences. Alongside programs and policy, increased research and evaluation on what works to promote equitable and inclusive access to human and technological capital for positive AI adoption shouldn't be overlooked.

*****
About the author: Dr. Nicole Goldin is a non-resident Senior Fellow with the Atlantic Council. She is on X @nicolegoldin.
Image via Adobe Stock.
Commercializing and Monetizing Generative AI
By Srujana Kaddevarmuth
There has been rapid adoption of generative AI in the past 11 months, and this trend is expected to grow in the coming years. Many companies are betting their worth on large language models (LLMs): Google with Bard (LaMDA); Meta with Llama; Microsoft and OpenAI with GPT; Amazon and Anthropic with Claude. As competition gets fierce, the economics of LLM applications will become a deciding factor in the success of AI enterprises.

Developing and deploying the software infrastructure needed to support GenAI deployments—including frameworks, libraries, and APIs—is a meticulous task that requires continuous updates and management. GenAI models, especially those involving deep learning, require significant computational power. Training and hyperparameter tuning such models often demand high-performance graphics processing units (GPUs), which can be expensive. Deployments, on the other hand, rely on distributed computing to handle computational demands. Effectively managing these distributed systems adds another layer of complexity regarding coordination, data synchronization, and network communication.

Large enterprises with deeper investments will focus on the commercialization of generative AI applications while staying within the regulatory guardrails—a challenging but imperative task. Enterprises across industries operating at humongous scale, with terabytes of data, will start focusing on productizing generative AI applications through a centralized Generative AI Center of Excellence (CoE). The CoE can aid this process of enabling productivity and monetization.

With the rapid adoption of generative AI across various industries, computing will become scarce. Securing GPUs at scale will need significant investment from the enterprise. These investments can be reduced to a fraction if these LLMs are centrally trained, hyperparameter tuned, and made available for downstream teams through robust built-in pipelines and platforms. Technology enterprises will soon shift their focus from research to productization and monetization of generative AI.

THERE WILL ALSO BE A SIGNIFICANT EMPHASIS ON ALGORITHMIC SAFETY AS NEW REGULATIONS AROUND RESPONSIBLE AI, EXPLAINABILITY, AND INTERPRETABILITY ARE ROLLED OUT—ADDING GUARDRAILS AROUND MONETIZING THESE SOLUTIONS IN A SOCIALLY RESPONSIBLE MANNER, MAKING THE ECOSYSTEM MORE COMPLEX.

There will also be a significant emphasis on algorithmic safety as new regulations around responsible AI, explainability, and interpretability are rolled out—adding guardrails around monetizing these solutions in a socially responsible manner and making the ecosystem more complex. Monetizing generative AI technology within safety guardrails will be a massive focus for all players in the coming years. Through centralized productization of GenAI capabilities, enterprises will grow optimistic about enhancing revenue, fostering innovation, and enabling technology accessibility globally.

*****
About the author: Srujana Kaddevarmuth is a Senior Director at Walmart, where she leads its Data & Machine Learning Center of Excellence.
ADVERTISEMENT
The new genius is a collaborative genius
At Northwestern University’s Roberta Buffett Institute for Global Affairs, we believe that relationships among individuals and institutions—globally and locally—are what generate new knowledge that sparks solutions to global challenges. Learn more at buffett.northwestern.edu.
AI for Education, Educating for AI
By: Dr. Shane Szarkowski with contributions by Katherine Blanchard, Henry Anumudu, Manjula Dissanayake, and Dr. Noah W. Sobe
This report was compiled from a collective intelligence gathering of World in 2050’s (W2050) Senior Fellows (Education & Work Committee). The meeting took place under the Chatham House Rule, so ideas will not be directly attributed to any specific Fellow.
COLLECTIVE INTELLIGENCE REPORT
Image via Adobe Stock.
EdTech has been a hot-button topic for some time now, and the focus on generative AI in 2023 has only amplified interest in EdTech. Proponents see a plethora of ways that AI can improve education outcomes. AI's potential for transforming education is real, but so are the dangers—so we must move forward with care and intentionality. It is with an eye toward care and intentionality that members of W2050's Senior Fellows committee on education and work met to discuss how AI could—and how it should—impact education in 2024 and beyond.
ENSURING AI ALLEVIATES INEQUALITIES RATHER THAN EXACERBATING OLD ONES AND CREATING NEW ONES WILL REQUIRE CARE AND INTENTION IN HOW WE INNOVATE AND HOW WE USE THOSE INNOVATIONS.
AI Means the Education Transformation is Here, Ready or Not

AI is bringing about a new reckoning with performativity in education. From rethinking how to measure student learning at a time when ChatGPT can write papers to ideas about how AI can take on some teaching tasks, there's a lot to digest. But the fundamental underlying issues we think about with AI in the classroom are the same issues education stakeholders have been grappling with for years. Learning by memorization has been outdated for years thanks to search engines. What labor markets need from education has shifted. Our understanding of how to support teachers has evolved. What students themselves want and need from education has also been at the heart of thinking about the education transformation for several years now.

In short, the education transformation has been underway for years, but AI is poised to accelerate it at speeds we are not prepared for. It is incumbent on us to ensure that transformation is undertaken with intentionality and understanding of how AI can disrupt our education systems, so we get the disruptions that will be productive and healthy for future generations.

AI Solutions Can Exacerbate, Alleviate Inequalities

Bias in AI training data sets is a well-documented phenomenon, so how students and teachers interact with AI—and how those AI solutions are trained—will be crucial. Access to funding for appropriate and effective AI solutions is also a well-known challenge. Less discussed are challenges around making sure that AI tools are suited to the environment where they are deployed—a solution designed for relatively affluent Western systems will inevitably miss the specific needs of children in more marginal communities. This is particularly problematic given that AI is trained on data sets which generally reflect more mainstream, affluent situations, posing a danger that we could create new inequalities.
There is also a need to ensure students, teachers, and administrators themselves receive the digital literacy education they need to use their AI tools appropriately and safely. For instance, in many marginalized communities, particularly across the Global South, social media users are largely unaware of the plethora of issues plaguing these platforms. Thus, users in these communities are more likely to fall prey to mis- and disinformation, and to have their data harvested and used in ways that may be harmful. With AI in the classroom, this danger is exponentially higher. Ensuring AI alleviates inequalities rather than exacerbating old ones and creating new ones will require care and intention in how we innovate and how we use those innovations.
Priorities for an Effective, Inclusive, AI-Powered Education Transformation

Rethinking Assessments: Generative AI can produce things that look like student performance. Some such uses may be appropriate, others less so. AI can also help teachers assess students in new ways (more on this analytical power below). This means education stakeholders must fundamentally rethink what measurables we assess, and how we assess them, when it comes to understanding student progress and success.

AI as Teacher Support: One anxiety over AI in the classroom is that it could be used to replace teachers in problematic ways. Finding ways to instead innovate AI tools that help teachers perform back-end tasks such as grading, lesson planning, and administrative work could give teachers more time to engage in the act of teaching and relationships—and, crucially, for professional development. This fundamentally flips the script on how we envision using AI in education settings and is a more appropriate use of AI, because AI cannot give students the sort of human interaction that is such a part of the education process.

AI's Analytical Power: Pilot projects using AI as an analytical tool found that using technology to keep track of student attendance helped teachers identify potentially harmful trends and protect against things like child marriage, domestic abuse, and other out-of-school issues which can impact student attendance. Similarly, AI could help analyze other classroom behaviors to help identify struggles students may be experiencing that a teacher on their own, especially in a full classroom, may fail to catch—such as neurodiversity, learning disabilities, or even simply being more or less advanced in certain parts of the curriculum.

AI and Personalization: Adopting AI as a tool for teacher support and for its analytical powers in the classroom—rather than for teaching—creates powerful opportunities for personalized education. In this way, teachers can adjust for student needs more readily, given they have more time and more analytical tools. This lets education systems not only better support learner needs, but also gives students opportunities to better choose their own priorities.

Educating for AI: AI tools cannot simply be introduced into an education ecosystem as a prescriptive medicine which will fix what ails that system. Teachers, administrators, and students all require basic levels of digital literacy to understand both the basic potential and the weaknesses of AI tools. They will also need specific training on the tools which have been chosen for their particular needs (which, in turn, requires acting with great intentionality).

Adopting AI with Intent: All of the above considerations, alongside the dangers of creating new inequalities, mean that we must exercise a very human trait which AI lacks: discernment. How training data sets are chosen for which AI tool and for which contextual situation, how funding is allocated, what tools are made available to whom, and, importantly, what solutions are developed in the first place are key considerations that require a robust and inclusive consultation process with education stakeholders—students, parents, teachers, administrators, and policymakers—all over the world.
With AI, Teachers Can Finally Revolutionize Education By Wendy Kopp
Within the last three decades in education, nothing has taken off as quickly among educators as AI and its various chatbots. All over the world, educators are already leveraging it to free up time and improve their practice. Such a transformation would allow teachers to focus on relationship building and coaching students, identify real-world applications, and personalize feedback and instruction. For decades, educators and others have lamented the stubbornness of education systems that don't evolve with the times, but the recent uptake of these tools is showing that teachers are ready to innovate and rethink how they work. For example, Schoolinka allows educators in Nigeria to use AI to diagnose and address student literacy levels; Playlab enables teachers to create applications that provide students with feedback on their college essays and generate hands-on activities to meet different science standards. Teach FX allows teachers to record lessons and receive instructional feedback, and Polymath AI helps tens of thousands of teachers around the world develop lesson plans, assignments, and assessments. This technology also has the potential to be a tremendous force for equity as it enables education in remote and under-resourced places, helps less experienced teachers save time and improve their practice, and provides students with out-of-school support currently affordable only to the most privileged. However, there are massive challenges to reaching these ends, especially given that many of the world's students still don't have connectivity. If left up to private industry—which incentivizes revenue growth and profit alone—AI use will spread without concern for becoming a force for equity and transformation.
Realizing these ends requires centering the leadership of teachers in marginalized communities who are on a mission to shape a better future for students by preparing them to navigate today's world. There's a need to ensure that these teachers have access to the connectivity, hardware, software, and support that will allow them to experiment with and leverage AI. We need to foster a global learning community among educators to surface and spread insights about what's working and what's not. And finally, there's a need to support teachers to found, lead, and join the teams of these ventures in order to drive the edtech and AI spaces. AI could be the tool that enables teachers to drive educational equity and transformation worldwide.

*****

About the author: Wendy Kopp is the CEO and Co-founder of Teach For All. She was the recipient of the 2021 WISE Prize for Education.
Learning Beyond Pastiche By Dr. Noah W. Sobe
Educators know that when humans learn we make connections, we move between the concrete and the general, and we create and use concepts. The current generation of AI machine-learning systems is incapable of thinking in this manner. But what they are capable of is providing a reminder of what should really be at the heart of education.

With generative AI like ChatGPT and Bard, the lack of understanding is profound—which is precisely why we are right to worry about the dangers of the brave new world that is upon us. At best, AI devices process data and arrange information. Though we do seem to be getting better at training AI to navigate complex situations, the discernment and wisdom that allow humans to leverage values and select actions remain far outside the learning and reasoning capabilities of our machines.

The year ahead in education will remind us that schools and teachers adapt and change much faster than it sometimes appears. Just as education has adjusted to huge quantities of information and knowledge being readily available on pocket devices (for example, by teaching new information and media literacies), we will see a reckoning with the fact that the A in AI better stands for "augmentation." In place of breezy whither-the-classroom angst, educators are already well advanced in sussing out the uses and abuses of these steroidal spell-check technologies.

We learn in a world that is always, already ongoing. Our initial efforts to assimilate the craziness and beauty of the moment we have landed in do mean that early-stage learning can resemble the imitative assembling done by large language model AIs. Think of that college essay riven with passive voice, strings of adjectival nouns, and tremendous effort at "sounding academic." But at its necessary best, education moves us toward owning our abstractions and understanding our concepts. ChatGPT and Bard never move beyond pastiche.

We may no longer need schools to train us to write three-act musical comedies about old running sneakers featuring limerick duets the way AI can (okay, maybe we never needed this!). But we do need education that helps us think through pathos and aspiration, waste and regeneration. And this is an education that cuts deeper than the performative—that builds powers of understanding, abstraction, particularization, and discernment. Today's AI can be a great aid to educators, but it remains unable to replace human thinking—or education.

*****

About the author: Dr. Noah W. Sobe is Professor in the History Department at Loyola University Chicago, USA.
Academic Intelligence: The Evolution of Education Through AI By Ike Ikeme
COVID-19 completely changed ordinary notions of academic content and delivery. Students, no longer able to receive in-person instruction, found themselves learning how to learn in remote and often less-than-ideal conditions. Over the same period, the world's most popular AI engine—ChatGPT—achieved widespread adoption. ChatGPT spread AI like wildfire, as it made the once-complicated wielding of natural language processing child's play. Eventually, the world emerged from the global pandemic with both education and AI continuing their respective trajectories.

Schools, administrators, and oversight agencies reported material declines in testing and aptitude scores across the United States. These learning losses—dubbed the COVID Academic Gap—were felt globally, had a disparate impact on poorer countries, and widened already existing levels of inequality in education.

A Robitussin of sorts, AI is touted as the antidote for nearly every private- and public-sector ailment. We are witnessing an unstoppable AI economy forming and taking hold, fueled by practically unlimited capital from venture and private equity firms. Naturally, with the help of AI, education might be ready to close the gap. Perhaps the perfect pairing: education can benefit from AI and vice versa.

Of course, there are legitimate uses of AI outside of student impropriety. From improvements in learning modalities to refinements in the methods of instruction, AI is certainly making a difference. But there's one application of AI in education that appears to get little if any attention. Imagine if AI could fill the COVID Academic Gap with, well, AI. If AI is the future (and indeed it appears to be), then adopting and teaching AI as a discrete and distinct course where students can learn the underpinning precepts of AI theory and practicum seems like a pathway that shouldn't just be entertained, but aggressively explored.
This evolution of education isn't just new; it has also been proven successful. For instance, many endeavors have developed and deployed indispensable technologies as standard curricula in academic settings. Consider the impactful results of teaching youth coding and programming languages as an accepted discipline, not just in the United States but across the globe. Perhaps it's time to close the learning gaps in conventional areas of study (e.g., math and science) with what we know is the definitive future.

*****

About the author: Ike Ikeme is Vice President of Investments and Strategic Partnerships for RevRoad.
AI Can Transform the Lives of Those with Learning Disabilities By Lisa Gable
Artificial intelligence (AI) is critical in unlocking the untapped potential of individuals with learning disabilities. Envision a future where we harness the capabilities of nearly a billion individuals more effectively, advancing research, supporting patients, and driving innovation in business.

According to the Yale Center for Dyslexia and Creativity, dyslexia alone affects 20% of the U.S. population and represents 80–90% of all those with learning disabilities; some 780 million individuals grapple with the condition worldwide. The National Cancer Center highlights the expansive scope of neurodivergence, which affects 15–20% of the world's population, emphasizing a paradigm shift in understanding neurological development beyond "typical" or "neurotypical" norms.

AI offers a transformative, cost-effective solution, breaking down barriers and empowering individuals with learning disabilities to unlock the entirety of their capabilities. Advanced assistive technologies provide tutorial support, adapting to a person's unique learning needs. Personalized learning companions, shaped by adaptive algorithms, foster a supportive environment for skill and knowledge development. These algorithms analyze individual strengths and weaknesses, customizing instructional content to accommodate diverse learning styles. Through interactive exercises, gamification, and multisensory approaches, AI engages users in ways traditional methods cannot, making the learning process enjoyable and effective.

A critical part of unlocking the full potential of our citizens is access and ease of use. Geography and money should not dictate the quality of education for those with learning disabilities. In a world dominated by Zoom and digital platforms, underserved communities should be able to gain access to qualified STEM experts and learning tools, even with technological limitations.
However, today, the availability of these resources needs to be expanded in communities with limited access to computers or devices that can support AI. Educational disparities became evident during the COVID-19 pandemic, when children lacked access to supplementary education materials, learning pods, and high-speed networks for platforms like Zoom. Investment in access, broadband, and support hubs for underserved and remote communities is crucial to fully utilize AI. We must leverage technology and creative solutions to ensure all families and children can access the best education tailored to their needs and technological constraints.

Let's tap into the collective brilliance of our entire population. By doing so, our advancements in research and innovation can exceed our greatest expectations. In a world where shortages of teachers, medical personnel, and skilled workers persist, let's view individuals with learning disabilities not as challenges to be overcome but as valuable assets to be embraced.

*****

About the author: Lisa Gable is a Diplomatic Courier Advisory Board member, Chairperson of World in 2050, and WSJ and USA Today best-selling author of "Turnaround: How to Change Course When Things Are Going South."
COLLECTIVE INTELLIGENCE REPORT
AI Could Be Salvation or Doom of Our Institutions By: Dr. Shane Szarkowski with contributions by Christopher Karwacki, Amb. Lisa Gable, Thomas Garrett, and Dr. Tina Managhan
This report was compiled from a collective intelligence gathering of World in 2050’s (W2050) Senior Fellows (Societal & Governance Institutions Committee). The meeting took place under the Chatham House Rule, so ideas will not be directly attributed to any specific Fellow.
For some time, W2050 has been monitoring a trend of declining faith in our societal and governance institutions—this is concerning given the increasingly global nature of the problems facing us today. The rapid development of AI has the potential to worsen this institutional crisis, or to provide a key to fix what ails our institutions. W2050's Senior Fellows committee on Societal and Governance Institutions met recently to discuss the impacts AI could have on our institutions and what steps we can take to make sure AI helps us build more effective and resilient institutions.
Eroding Institutional Trust, Efficacy Leads to Private Actor Involvement

Trust in the effectiveness and legitimacy of our social and governance institutions has been on the decline for some time. One side effect of this is that private actors have begun to fill in some roles where institutions are either underperforming or there is a perception that they are underperforming. One recent example is the 2021 Taliban takeover of Afghanistan, which saw charities and NGOs step in to help Afghans who had aided NATO to evacuate—when it became clear governments were failing to do so.

While this example gives us some reason for hope—the idea that private actors can step in successfully when our institutions falter is a hopeful one—our institutions aren't going anywhere, and we need them to be robust to be future ready. As AI becomes more widely available and its applications proliferate, private actors will become more capable of filling institutional gaps. However, those actors are not regulated or beholden to the public, leaving space for selfish action. Further, there are some functions of institutions which are not well suited for private actors, and if it appears that private actors can make institutions obsolete, then public buy-in to the institutions we still need will likely erode further.

Role of Democracies in Regulating AI, Combating Bias

All technologies tend to be more enabling for some people than others. One of the hopes for AI is that it will help us address issues of equity and empower marginalized groups—for instance, AI can be used to aid in remote diagnosis of rare diseases, creating opportunities for better healthcare access for individuals lacking access to modern facilities. However, there are concerns that prevailing issues—for instance, bias in AI training data and uncertainty about what kinds of innovation pathways will both be profitable and do good—will create more inequality rather than helping solve it.

Summits discussing how to regulate AI have in recent years been dominated by undemocratic governments, which itself has an inhibitory effect on open and critical conversations about regulation and best practice. Democracies provide a better space for having conversations about how to regulate AI and combat bias, given the democratic tradition of more open expression and freedom of thought. We need this to have more inclusive conversations that bring in marginalized groups. Yet we must also be mindful of the inequalities existing within democracies and work at inclusion as we discuss regulation and bias.
Priorities for Ensuring AI Bolsters Our Institutions

Institutions, Private Actors Must Collaborate: For now, private actors are better suited for some tasks that are traditionally carried out by institutions, and that will expand as they learn to leverage AI—and that brings issues. Looking at Afghanistan again, a lack of coordination between private actors and institutions led to breakdowns. Afghans who could have been evacuated by charities were stuck under "shelter in place" orders from embassies. Meanwhile, private actors evacuated some Afghans who were likely human rights violators, because they didn't have access to the same information that institutions did. Institutions and private actors will inhabit many of the same spaces for the immediate future, and they must learn to coordinate and empower one another.

Adapting AI to Institutions: One reason institutions have a crisis of public trust is that they are slow to adapt to changing needs, and there are access gaps. AI, properly tailored to the needs of individual institutions and their constituencies, can make institutions more responsive and more accessible. For instance, layers of bureaucracy can make it difficult for global publics to access services from an institution. Those same layers of bureaucracy can make institutions slower to respond to evolving situations. Many bureaucratic functions are precisely the sort of thing that AI is good at streamlining, with its ability to digest masses of information quickly.

Inclusive Consultation: One of the biggest roles of institutions will continue to be laying down regulation and best practice guidelines—both of which are crucial to the healthy development of AI. To address AI bias and to ensure innovations work for everyone, and not just for privileged segments, institutions must actively seek consultation with marginalized groups as well as the usual slate of recognized stakeholders. These marginalized groups will be key to getting fuller perspectives on bias and ensuring innovators have a deeper understanding of community needs.

Fast Tracking Innovation: When institutions promulgate regulations and best practice guidelines, they are typically well behind the tech innovation curve. In some cases, it may be best to slow or curtail certain types of innovation (see below). In other cases, there is a clear case to be made that AI can have immediate and profound impacts on the public good. In such cases—see the example of the remote diagnosis of disease mentioned above—a productive role of institutions would be to help get specific applications of AI through regulatory hurdles as rapidly as (safely) possible.

Bifurcated Regulatory Pathways: The EU recently published guidance on AI regulation that categorizes different uses of AI by risk level: unacceptable risk, high risk, limited risk, and minimal risk. These categories help to shape what sort of regulatory hurdles must be cleared for a given application of AI. The same should be considered by sector—areas where there are clear public goods and well-understood or limited risks, such as education and medicine, should have different regulatory hurdles and best practice guidance from, for instance, AI in the security space (law enforcement or military).
Preserving Institutional Democracy in the Age of AI By Thomas E. Garrett
Democratic resiliency is often measured by continuity. In a world frequently defined by disruption, upheaval threatens continuity, and thus resiliency can face serious challenges. The growing impact of AI on every facet of life is the latest disruption. This upheaval is liable to be exploited by modern strongmen and autocracies.

The advances in AI technology have far outpaced our ability to utilize it responsibly and, more importantly, to understand its short- and long-term effects on democratic institutions. As AI ushers us into a brave new world of societal and political change, there is plenty of room to leverage lessons from current governance frameworks. In parallel, it is imperative for leaders to rethink and update policies and structures now to ease the tension between the speed of technological progress and its adoption by society. Leaders must address this now to ensure advances in AI benefit a fragile democratic ecosystem while preventing misuse by illiberal actors.

AI has no political leaning; it has no sinister plot or benevolent cause. It is a technology that can be harnessed by people who do have agendas and motives. There is no doubt that AI is revolutionary, but leaders need to realize that it still affects people and institutions much like other technological revolutions have in the past—and they must be practical about its use and governance.

As fast as it evolves, AI is still in its early stages. There is a risk of non-democratic governments shaping AI standards and guidelines; now is the time for democratic leaders to address its implications. Technology waits for no one and no law. The societal tension associated with the rise of AI has made this clear. Thus, leaders cannot wait before integrating specific and actionable changes to policy and governance structures regarding AI. Democratic resiliency can be secured with the help of these types of changes
at the highest level. In a way, change is continuity. However, change needs to be initiated by democratic leaders, not as a reaction to the technologies that disrupt that continuity. It is time for leaders to challenge and disrupt the status quo in their own ways and not be guided by the whims of technology. We must try to move faster than AI, or at least fast enough to ensure the healthy perseverance and improvement of democratic institutions and democratic resiliency.

*****

About the author: Thomas E. Garrett is Secretary General of the Community of Democracies, an intergovernmental coalition founded in 2000 by U.S. Secretary of State Madeleine Albright and Polish Foreign Minister Bronislaw Geremek.
Making Generative AI Work for the Global South By Alexander Nicholas
In an age dominated by technological breakthroughs, Generative Artificial Intelligence (Gen AI) is emerging as the most impactful. This transformative technology—which is reshaping nearly every sector—signals a future where mastery of Gen AI could dictate who holds global influence and power. We have arrived at a crucial junction, magnified by the pandemic and rapid technological advancements. We now face a decision that will shape generations: do we allow these changes to perpetuate global disparities, or do we find ways to harness them to create a world abundant in opportunities for all?
THE WORLD NEEDS A “GLOBAL MANHATTAN PROJECT” FOR GEN AI IN THE GLOBAL SOUTH WHERE THE GLOBAL COMMUNITY COLLABORATES TO ACCOMPLISH ONE OF THE MOST SIGNIFICANT EVENTS OF THE 21ST CENTURY.
The development of Gen AI has been predominantly concentrated in the Global North, creating a signif icant digital divide. This chasm places the Global South at risk of lagging further behind, with two primary challenges: access to Gen AI technology and ensuring its relevance to diverse global populations. In many regions of the Global South, the lack of inf rastructure and educational support hinders the full realization of Gen AI’s potential. Moreover, the suitability of Gen AI solutions, primarily designed in the tech hubs of the Global North, for the unique challenges of the Global South needs to be investigated.
The urgency for inclusive Gen AI investment in the Global South cannot be overstated, especially after the COVID-19 pandemic. The pandemic underscored the interconnectedness and vulnerability of our global systems. A haphazard deployment of Gen AI could precipitate a crisis far exceeding the pandemic’s impact, exacerbating poverty, widening gender disparities, and undermining democratic institutions.

The world needs a “Global Manhattan Project” for Gen AI in the Global South, where the global community collaborates to accomplish one of the most significant undertakings of the 21st century. This initiative would entail nurturing local AI ecosystems and forging global partnerships to encourage technological innovation. Through this, Gen AI could be tailored to revolutionize areas like agriculture in climate-sensitive regions, expand educational access, and strengthen public governance.

By ensuring that Gen AI serves the global community, we can harness its immense potential to safeguard existing progress and propel humanity towards a more equitable and resilient future. The challenge is significant, but the potential benefits for our collective prosperity and stability are too substantial to overlook.

*****

About the author: Alexander Nicholas is Executive Vice President at XPRIZE.
Image by EV via Unsplash.
Unplugging Democracy With AI and the Threat of Surveillance
By Joseph Toscano
“Unplug the Google Assistant,” Mey said as she tucked her phone under her pillow and left the room to speak to me. Born and raised in China before immigrating to the United States, Mey knew not to test the Chinese surveillance machine with her family still in Beijing. Uncertain what might happen if she were caught speaking unfavorably, Mey always played it safe. “You never know,” she’d say.

China is not the only nation surveilling its people; it’s just the one everyone points to. We live in a world where reading devices monitor not only what is read but how it’s read; cars record drivers’ routes and schedules; the neighbor’s house is livestreaming to the local police department; and crime is being self-reported through an app, behind locked doors. Personal opinions aside, this is happening. Not only that, but it’s legal and incredibly lucrative.

AI’s value is arguably greater than the downside, but without a monitoring system in place, our economic calculus in this surveillance economy incubates a critical vulnerability that is widely overlooked. For example, the DMV makes tens of millions of dollars each year selling personal information; Meta is offering Europeans a $12.99 monthly subscription for an ad-free platform, translating to the average European user being worth around $150 each year. Meanwhile, Walmart monitors shelves for efficiencies, while Whole Foods does it to avoid the need for a physical card. There is a clear incentive to turn our world into data and productize every byte of information possible, and the benefits are not exclusive to those profiting.
AI’S VALUE IS ARGUABLY GREATER THAN THE DOWNSIDE, BUT WITHOUT A MONITORING SYSTEM IN PLACE, OUR ECONOMIC CALCULUS IN THIS SURVEILLANCE ECONOMY INCUBATES A CRITICAL VULNERABILITY WIDELY OVERLOOKED.

The Hawthorne Effect occurs when individuals modify an aspect of their behavior in response to their awareness of being observed, from dietary choices to work ethic. Most people are aware they’re being watched when they see a camera on the ceiling, but if people begin to fear clicking a link, reading the wrong blog post, or listening to their loved ones, the freedom to think will be lost. And with it, democracy.

As AI becomes ever more embedded into daily lives, the incentive to surveil must not be allowed to overcome the integrity of democracy. The way forward involves demanding transparency through annual reporting and audit mechanisms; requiring data rights as a basic necessity of any free society; and managing data as a financial asset. Without these things, the only democracy the world will ever know is what’s allowed to be streamed.

*****

About the author: Joseph Toscano is the Founder and CEO of DataGrade.
Photo by Marek Piwnicki via Unsplash.
Spirituality is Essential to Ensuring AI Helps Humanity Flourish
By Angela Redding
AI promises unparalleled technological breakthroughs on the most pressing problems of today. At the same time, we recognize that bias in AI can harm humans, and that humans can unwittingly internalize that bias as fact long after they’ve stopped using the algorithm. AI’s potential for good or harm hinges on our ability to root out biased data, along with the human and systemic biases those datasets are embedded within.

One critically under- and misrepresented dataset concerns people of faith and the importance of religion and spirituality in most humans’ lives. Most of humanity (84%) affiliates with a religion, and that percentage is expected to increase over time. Meanwhile, according to an AI-enabled study of more than 30 million documents, 63% of faith-related digital content treats faith negatively. Furthermore, 11% of digital content mentioning faith is negative in extreme ways, including hate speech. Across multiple global studies, a majority of individuals say representation of their faith is stereotypical, sensationalized, negative, or simply absent.

The digital landscape provides training data for AI, so these negative typifications of faith could have harmful effects on human flourishing. Religion and spirituality have been linked to better wellbeing, mental health, civic engagement, prosocial behavior, and longevity. Religious stereotypes can encourage violence against marginalized groups. Training datasets filled with negative and even hateful religious representation risk exacerbating violence and diminishing desperately needed positive impacts on personal wellbeing and society. If we want AI to be truly inclusive and to support human flourishing, training datasets should be audited to protect against unfair, inaccurate bias against religion and spirituality.
RELIGION AND SPIRITUALITY HAVE BEEN LINKED TO BETTER WELLBEING, MENTAL HEALTH, CIVIC ENGAGEMENT, PROSOCIAL BEHAVIOR, AND LONGEVITY. RELIGIOUS STEREOTYPES CAN ENCOURAGE VIOLENCE AGAINST MARGINALIZED GROUPS.

Experts and stakeholders urgently need to press world leaders, the media, the public and private sectors, and other influential people to proactively support accurate and representative stories of faith in our collective cultural narratives. By encouraging representative stories and guarding against bias and discrimination, for all faiths and types of spirituality, AI systems can be trained on better datasets.

In the face of rapidly accelerating and often incomprehensible technological change, AI can accelerate us toward catastrophic harm or unimaginable thriving. The power of AI to elevate the beauty of what makes us uniquely human depends on the representation of our most ennobling and transcendent virtues in what powers AI. Spirituality is not tangential to the impact of AI on human flourishing; it is essential.

*****

About the author: Angela Redding is Executive Director of the Radiant Foundation.