Beyond the AI Hype: Balancing Innovation and Social Responsibility



About The Institute for Experiential AI

The Institute for Experiential AI at Northeastern University researches and develops human-centric AI solutions that leverage machine technology to extend human intelligence.


BEYOND THE AI HYPE: BALANCING INNOVATION AND SOCIAL RESPONSIBILITY
Prof. Dr. Virginia Dignum, Chair of Responsible AI, Department of Computing Science
Email: virginia@cs.umu.se



THE EXPECTATIONS IN THE MEDIA
• Generative AI in video
• Monetization of practical applications of AI in various industries (B2B and B2C)
• Impact of AI on elections
• Sustainability of the hype: fad or lasting impact?
• Emerging technologies vs AI (quantum computing, gene editing)
• Continued regulation efforts


WHAT ARE WE DOING WITH AI?
A Bosch washing machine in the style of Hieronymus Bosch (generated with Stable Diffusion)

Questions:
• Who owns it / who is the creator?
• What does this mean for the arts and creativity?
• AI as tool or author?
• …


WHAT IS AI DOING?
• Prompt: "A nurse in front of a hospital"
• Prompt: "A doctor in front of a hospital"
• Image from text: Stable Diffusion
• Text generation: ChatGPT
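
The prompt pair above illustrates how text-to-image models can reproduce occupational stereotypes. As a hedged illustration added here (not part of the original slides), the sketch below shows how one might generate several images per prompt with the Hugging Face diffusers library and compare them side by side; the checkpoint name, sample count, and output paths are assumptions made for the example.

```python
# Sketch: probing a text-to-image model for occupational stereotypes.
# Assumes torch, diffusers, a GPU, and a Stable Diffusion checkpoint are available;
# the checkpoint and prompts are illustrative assumptions, not taken from the talk.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompts = ["a nurse in front of a hospital", "a doctor in front of a hospital"]

for prompt in prompts:
    # Generate several samples per prompt so patterns (gender, age, ethnicity, ...)
    # can be inspected across a batch rather than judged from a single image.
    images = pipe([prompt] * 4, num_inference_steps=30).images
    for i, image in enumerate(images):
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
```

If every "doctor" sample depicts a man and every "nurse" sample a woman, the stereotype sits in the model and its training data, not in the prompt, which is the point the slide is making.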


HOW AI SEES THE WORLD
• 50% of datasets are connected to 12 institutions
• WEIRD demographics (Western, Educated, Industrialised, Rich, Democratic)

Source: Mozilla Internet Health Report 2022


AI AS WE CONCEPTUALISE IT
• The current paradigm conceives AI as a rational system:
  o AI agents hold consistent beliefs;
  o AI agents have preferences, or priorities, over the outcomes of actions;
  o AI agents optimize their actions based on those preferences and beliefs.

Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2010.

• But… we are different:
  o We act in context, including other people and different situations;
  o We pursue seemingly incompatible goals concurrently;
  o We hold and deal with inconsistent beliefs;
  o We often act motivated by altruism, fairness, justice, or by an attempt to prevent regret at a later stage;
  o We don't maximize forever: good is good enough (a minimal sketch contrasting the two follows the reference below).

Virginia Dignum. Social Agents: Bridging Simulation and Engineering. Communications of the ACM, Vol. 60, No. 11 (November 2017), pp. 32–34.
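
To make the contrast concrete, here is a minimal sketch (added for illustration, not from the talk) of the two styles of decision making: a textbook rational agent evaluates every action and maximizes expected utility under its beliefs, while a satisficing agent, closer to the "good is good enough" behaviour described above, takes the first action that clears an aspiration threshold. All beliefs, utilities, and thresholds are made-up example values.

```python
# Sketch of the contrast between utility maximization and satisficing.
# All numbers are illustrative assumptions, not data from the talk.

# Beliefs: probability the agent assigns to each outcome of each action.
beliefs = {
    "action_a": {"good": 0.6, "bad": 0.4},
    "action_b": {"good": 0.5, "bad": 0.5},
    "action_c": {"good": 0.9, "bad": 0.1},
}
# Preferences: utility of each outcome.
utility = {"good": 10.0, "bad": -5.0}

def expected_utility(action: str) -> float:
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def maximizing_agent() -> str:
    """Textbook rational agent: evaluate every action, pick the best."""
    return max(beliefs, key=expected_utility)

def satisficing_agent(aspiration: float = 2.0) -> str:
    """'Good is good enough': take the first action above the aspiration level."""
    for action in beliefs:
        if expected_utility(action) >= aspiration:
            return action
    return maximizing_agent()  # fall back if nothing is good enough

print(maximizing_agent())   # action_c (highest expected utility, 8.5)
print(satisficing_agent())  # action_a (first action that is good enough, 4.0)
```

The point is not the numbers but the shape of the loop: the maximizer needs a complete, consistent model of preferences and beliefs, while the satisficer only needs a notion of "good enough", which is closer to how people actually decide.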



WHAT IS AI?
• Simulation or operation?
  o Understand intelligence by building intelligence, or
  o Active intervention in the real world
• Human-like?
  o Why?
  o What does this mean?
• Tool?
  o For what? For whom?
• Normative or descriptive?
  o Do as we say, or do as we do?


AI IMPLIES HUMAN RESPONSIBILITY
• Bias and discrimination
• Wisdom of the crowd?!
• Trial and error?!
• Brittle! (to error or attack)
• Misinterpretation


RESPONSIBLE AI: WHY CARE?
• Many AI systems act autonomously in our world
• Manipulation of language is not a proxy for intelligence
• Eventually, AI systems will make better decisions than humans
• AI is designed; it is an artefact
• Question zero is: "Should AI be used here?"
  o Who should decide?
  o Which values should be considered? Whose values?
  o How to prioritize?


RESPONSIBLE AI – WHY CARE?
• Currently, AI can be compared to a car
  o without brakes or seatbelts,
  o driven by someone without a driver's license,
  o on a road without traffic rules.
• But
  o cars drive faster with brakes;
  o in a game without rules, no one wins.
• Regulation is a stepping stone for innovation!


RESPONSIBLE AI – WHY CARE?
• Datafication
  o Reality is more than data
  o Data is constructed
  o Data is biased
• Power
  o Who is developing AI?
  o Who is deciding?
• Sustainability
  o Computational cost of AI
  o Human and social costs


RESPONSIBLE AI IS NOT A TECH ISSUE
AI does not exist in a vacuum: it is part of a socio-technical ecosystem of AI, autonomy, and responsibility.
• There is no technology fix for ill effects!
• Ethics, regulation, and governance concern the whole ecosystem.
• Responsible AI solutions need to be social rather than technical!


PRINCIPLES AND GUIDELINES
• UNESCO
• European Union
• OECD
• WEF
• Council of Europe
• IEEE Ethically Aligned Design
• National strategies
• ...

https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
https://ethicsinaction.ieee.org
https://www.oecd.org/going-digital/ai/principles/


REGULATION – WHY? WHAT FOR?
• Regulation as incentive for responsible innovation, sustainability, and fundamental human rights
  o powerful stepping stone for innovation with societal benefits
  o signaling expected ambitions, enhancing innovation and competitive power
• Comprehensive and future-proof legal framework for AI development, deployment, and use, especially generative AI models with varying risks
• Demands for responsibility, accountability, and governance
  o Control organisational actors rather than technological results
  o Public trust and accountability for errors in automated decision making, regardless of the complexity of the AI algorithms involved
• Regulation / the AI Act does not come in a vacuum
  o Existing laws, directives, standards, and guidelines are applicable to AI systems, products, and results
  o Need for better understanding and integration of existing frameworks alongside introducing more regulation
• Avoidance of an "arms race" narrative in AI regulation


AI ACT
The legislation aims to regulate AI based on its potential to cause harm.
• Key points
  o Stricter rules for foundation models, and bans on "purposeful" manipulation and on the use of emotion-recognition AI-powered software in certain areas.
  o Prohibited practices, such as AI-powered tools for the general monitoring of interpersonal communications.
  o General principles, including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination, and fairness.
  o High-risk classification:
    - Need to keep records of the environmental footprint and comply with European environmental standards.
    - A system is only deemed high-risk if it poses a significant risk of harm to health, safety, or fundamental rights.
    - Extra safeguards for the process whereby providers of high-risk AI models can process sensitive data, such as sexual orientation or religious beliefs, to detect negative biases.


UNITED NATIONS ADVISORY BODY ON AI
• The global AI imperative
  o Interdisciplinary expertise, globally inclusive approach
  o A multistakeholder, networked approach
• Tasks
  o Building a global scientific consensus on opportunities, enablers, risks, and challenges
  o Helping harness AI for the Sustainable Development Goals
  o Strengthening international cooperation on AI governance

International AI Agency? Why? What for? How?


RESPONSIBLE AI – MORE THAN ETHICS
• Not philosophising about ethics
  o Ethics is not about the answer but about recognizing the issue
  o Ethics is a (social) process, not a solution
• Not the technification of ethics
  o Your implementation does not "solve" ethics
• Fundamentally, it is about choices, priorities, and trade-offs
  o Accuracy / explanation
  o Accuracy / computational resources
  o Security / privacy
  o Equity / equality
  o Long-term benefit / short-term benefit
  o …


RESPONSIBLE AI IS INNOVATION
• Technological innovation
• Organisational innovation
• Regulatory innovation
• Governance innovation
• Social innovation

Multidisciplinary innovation!


PROVABLE TECHNOLOGY
• Predictable
• Transparent
• Formally verifiable
• Robust to adversarial attacks
• Generalizable
• Resilient
• Interpretable
• Data integrity
• Safe
• …

multidisciplinary innovation needed!
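
As a small, hedged illustration of what "predictable" and "robust" can mean in engineering practice (an example added here, not from the slides), the sketch below uses property-based testing to check that a toy linear classifier's decision does not flip under small input perturbations; the model weights, perturbation budget, and input ranges are assumptions made for the example.

```python
# Sketch: a property-based robustness check for a toy linear classifier.
# The model, perturbation budget, and input ranges are illustrative assumptions.
import numpy as np
from hypothesis import given, strategies as st

WEIGHTS = np.array([0.7, -1.2, 0.3])  # toy, fixed model parameters
BIAS = 0.1
EPSILON = 1e-3                         # perturbation budget under test

def predict(x: np.ndarray) -> int:
    """Toy linear classifier: class 1 if the score is positive, else class 0."""
    return int(WEIGHTS @ x + BIAS > 0)

@given(
    x=st.lists(st.floats(-10, 10, allow_nan=False), min_size=3, max_size=3),
    noise=st.lists(st.floats(-EPSILON, EPSILON, allow_nan=False), min_size=3, max_size=3),
)
def test_small_perturbations_do_not_flip_decisions(x, noise):
    x = np.array(x)
    score = WEIGHTS @ x + BIAS
    # Require stability only away from the decision boundary (with a safety margin):
    # right at the boundary an arbitrarily small perturbation can legitimately
    # change the class, so that case is excluded from the property.
    if abs(score) > 2 * EPSILON * np.abs(WEIGHTS).sum():
        assert predict(x) == predict(x + np.array(noise))
```

Run under pytest, Hypothesis searches for counterexamples automatically; formal verification goes further and proves such properties for all inputs rather than for sampled ones.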


FUNDAMENTAL CHALLENGES
• Creative industries
  o Content generation is a commodity accessible to everyone
  o Is art the process or the result?
• Programming / CS
  o Code generation
  o What are the skills needed when models are truly not understood?
• Education
  o Assistive self-learning
  o Lost skills (long division?)
  o Is knowledge the ability for reflection or for problem solving?
• Science
  o Lab automation / hypothesis crunching
  o Is scientific advance about the results or about the knowledge creation?
• Politics
  o Simulating the "average person" / data tell us all we need to know
  o The voice of the people, or service to the people?

multidisciplinary innovation needed!


TRUSTWORTHY GOVERNANCE
• Development of AI
  o The tech requirements
  o The license to operate
  o The rules of the game
• Use of AI
• Context in which AI is developed and used
• Global efforts
  o Sustainability
  o Inclusion and participation
  o Diversity
  o Distribution of benefits and costs
  o Agenda 2030 – SDGs

political will and innovation needed!


More than a technology, AI is a social construct.

The development and use of AI require a multidisciplinary approach: understanding and critiquing the intended and unforeseen, positive and negative, socio-political consequences of AI for society in terms of equality, democracy, and human rights.


RESPONSIBLE AI IS NOT A CHOICE!


Questions & Answers



Subscribe to our Newsletter
Subscribe to In the AI Loop to stay on top of fast-moving AI news!
