WHY WE HAVE TO START WORKING ON AGI GOVERNANCE NOW

By Jerome Clayton Glenn, CEO, The Millennium Project
An international assessment of how to govern the potential transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI) is needed. If the initial conditions of AGI are not “right,” it could evolve into the kind of Artificial Super Intelligence (ASI)* that Stephen Hawking, Elon Musk, and Bill Gates have warned could threaten the future of humanity. There are many excellent centers studying the values and ethical issues of ANI, but not potential global governance models for the transition to AGI. The distinctions among ANI, AGI, and ASI are usually missing in these studies. Even the comprehensive and detailed report of the U.S. National Security Commission on Artificial Intelligence makes little mention of these distinctions.

Anticipating what AGI could become

Current work on AI governance is designed to catch up with the artificial narrow intelligence proliferating worldwide today. Meanwhile, investment in AGI development is forecast to reach $50 billion by 2023. Expert judgments about when AGI will be possible vary; some of those working to develop AGI believe it could arrive in as little as ten years. It is likely to take ten years to: 1) reach international or global agreements on the transition from ANI to AGI; 2) design the governance system; and 3) begin implementation.
Hence, it would be wise to begin exploring potential governance approaches and their likely effectiveness now. We need to jump ahead and anticipate governance requirements for what AGI could become. Beginning now to explore and assess rules for the governance of AGI will not stifle its development, since such rules would not be in place for at least ten years. (Consider how long it is taking to create a global governance system for climate change.)

“The governance of AI is the most important issue facing humanity today and especially in the coming decades.”
– Allan Dafoe, Future of Humanity Institute, University of Oxford

* Artificial Super Intelligence (ASI) – used here for consistency with ANI and AGI – corresponds to “superintelligence,” as popularized by Nick Bostrom. ANI is the kind of AI we have today: each software application has a single specific purpose. AGI is similar to human capacity in novel problem-solving, with goals set by humans. ASI would be like AGI, except that it may emerge from AGI (or arise sui generis) and sets its own goals, independent of human awareness or understanding.

Questions such an international assessment should address include:

• What has to be governed in the transition from ANI to AGI? What are the priorities?
• What initial conditions of AGI will be necessary to ensure that the potential emergence of ASI is beneficial to humanity?
• How can the international cooperation necessary to build governance be managed while nations and corporations are in an intellectual “arms race” for global leadership? (The IAEA and nuclear weapons treaties did create governance systems during the Cold War under similar dynamics.)
• And related: How can a governance system prevent an AI arms race and escalation from going faster than expected, getting out of control, and leading to war, be it kinetic, algorithmic, cyber, or information warfare?
• How can governance prevent increased centralization of power, both by AI leader(s) and by AI systems themselves crowding out others?
• If the IAEA, ITU, WTO, and other international governance bodies were created today, how would officials of such agencies design them differently, considering ANI and AGI governance issues?
• Drawing on the work on norms, principles, and values already done by the Global Partnership on Artificial Intelligence and others, what will be the most acceptable combination or hierarchy of values as the basis for an international governance system?
• How can a governance model help ensure that an AGI is aligned with acceptable global values?
• How can a governance model correct undesirable actions unanticipated in utility functions?