WHY WE HAVE TO START WORKING ON AGI GOVERNANCE NOW
By Jerome Clayton Glenn
CEO, The Millennium Project
An international assessment is needed of how to govern the potential transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI). If the initial conditions of AGI are not “right,” it could evolve into the kind of Artificial Super Intelligence (ASI)* that Stephen Hawking, Elon Musk, and Bill Gates have warned the public could threaten the future of humanity.
There are many excellent centers studying the values and ethical issues of ANI, but not potential global governance models for the transition to AGI. The distinctions among ANI, AGI, and ASI are usually missing from these studies. Even the most comprehensive and detailed report of the U.S. National Security Commission on Artificial Intelligence makes little mention of these distinctions.
Anticipating what AGI could become
Current work on AI governance is designed to catch up with the artificial narrow intelligence already proliferating worldwide. Meanwhile, investment in AGI development is forecast to reach $50 billion by 2023. Expert judgments about when AGI will be possible vary; some of those working to develop AGI believe it could arrive in as little as ten years.
It is likely to take ten years to: 1) develop international or global agreements on the ANI-to-AGI transition; 2) design the governance system; and 3) begin implementation. Hence, it would be wise to begin exploring potential governance approaches and their likely effectiveness now. We need to jump ahead and anticipate governance requirements for what AGI could become. Beginning now to explore and assess rules for the governance of AGI will not stifle its development, since such rules would not be in place for at least ten years. (Consider how long it is taking to create a global governance system for climate change.)
“The governance of AI is the most important issue facing humanity today and especially in the coming decades.” – Allan Dafoe, Future of Humanity Institute, University of Oxford
* Artificial Super Intelligence (ASI) – used here for consistency with ANI and AGI – corresponds to the “superintelligence” popularized by Nick Bostrom. ANI is the kind of AI we have today: each software application has a single, specific purpose. AGI is comparable to human capacity for novel problem-solving, with its goals set by humans. ASI would be like AGI, except that it may emerge from AGI (or arise sui generis) and sets its own goals, independent of human awareness or understanding.
• What has to be governed for the transition from ANI to AGI? What are the priorities?
• What initial conditions of AGI will be necessary to ensure that the potential emergence of ASI is beneficial to humanity?
• How can the international cooperation necessary to build governance be managed while nations and corporations are in an intellectual “arms race” for global leadership? (The IAEA and nuclear weapons treaties did create governance systems during the Cold War amid similar dynamics.)
• Relatedly: How can a governance system prevent an AI arms race and escalation from moving faster than expected, getting out of control, and leading to war, whether kinetic, algorithmic, cyber, or information warfare?
• How can governance prevent increased centralization of power, both by AI leader(s) and by AI systems themselves crowding out others?
• If the IAEA, ITU, WTO, and other international governance bodies were created today, how would officials of such agencies create them differently, considering ANI and AGI governance issues?
• Drawing on the work already done by the Global Partnership on Artificial Intelligence and others on norms, principles, and values, what will be the most acceptable combination or hierarchy of values as the basis for an international governance system?
• How can a governance model help ensure that an AGI is aligned with acceptable global values?
• How can a governance model correct undesirable actions unanticipated in utility functions?
• How can AGI algorithm audit standards be developed and enforced?
• How can the use of ANI-to-AGI by organized crime and terrorism be reduced or prevented?
• To what degree do thought leaders and primary stakeholders agree about the framing of governance issues?
• Should an international governance trial, test, or experiment first be constructed with a single focus (e.g., health or climate change), with the rules and standards learned from that experience then extended to broader governance of the transition from ANI to AGI?
• Should AGI have rights if it asks for them? And might this make its potential evolution into artificial super intelligence (ASI) more acceptable to humanity?
• Since blockchain is used by some for decentralized AI development, how could it be (and should it be) included in a governance system?
• How can governance be flexible enough to respond to new issues unknown at the time the governance system was created?
Such questions will be reviewed and edited by the steering committee before being used for the interviews conducted during step 1. (See the Appendix for other organizations with whom we intend to collaborate, building on their research and analysis of norms, principles, values, standards, rules, audits, international conferences, and potential negotiations.)
Initial examples of the kinds of international and global governance models that might be explored in the scenarios (pending feedback in the first two steps):
• Models similar to the IAEA, ITU, and/or WTO, with enforcement powers
• A TransInstitution (self-selected institutions and individuals from government, business, academia, NGOs, and UN organizations)
• An IPCC-like model in concert with international treaties
• An International S&T Organization (ISTO) as an online, real-time global collective intelligence system; governance by information power (MP/Office of Science, DOE study)
• GGCC (Global Governance Coordinating Committees): flexible but enforced by national sanctions, ad hoc legal rulings in different countries, and insurance premiums, acting as a decentralized, multi-polar monitoring system
• ISO standards affecting international purchases
• Putting different parts of AGI governance under different bodies, such as the ITU, WTO, and WIPO
• All models should be designed and managed as complex adaptive systems