Responsible AI – is it worth it and can it be achieved?
Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum February
• DeepMind
• Hawking/Russell
• Bostrom
• Me – CAIEO
IEEE
Asilomar
Forum started in 2017
• Earth Species Project - an example of beneficial AI, BUT
• The world agrees on what responsibility is, but not on how we get there
• Bias
• Fairness
• Security, reliability, robustness
• Privacy
• Explainability/transparency
• Human agency
• Accountability
• Lawful
• Civil rights and Keith Sonderling (EEOC)
• The US and EU see human rights differently
• China…
• Where does human rights law come from? Is the UN ready?
UNESCO, UNICEF and the OECD have issued guidance, but there is no actual human rights law for AI, which is why Sonderling's approach is important
• Ultimately a brand value issue
• Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
• Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
• Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
• Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
• Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Is Generative AI making Responsible AI impossible?
Yes – and is that a bad thing?
Lost civilizations
Maintaining the status quo
Where is the data from? Novels and human angst
Dominance of white male thought
Indigenous peoples and AI example
Generating content from its own content – where is the human?
Jobs
Has the potential to slow innovation and lose the massive benefits
Also creates more regulation, which can spend years in litigation before definitions are settled
At the Forum we have been turning the principles of Responsible AI into action for the last 5 years, mainly through the development of frameworks and toolkits, in the form of soft governance – WHY?
BEUC, the European Consumer Organisation, representing 44 independent national consumer associations from 32 European countries, found in a 2020 survey that:
• 64% – in Belgium, Italy, Portugal and Spain, most people agree or strongly agree that companies are using AI to manipulate consumer decisions.
• 52% – in France, Denmark, Germany, Poland and Sweden, respondents (strongly) agree.
• 78% of Americans and 87% of Europeans wanted some form of regulation (2020).
And overlaps!
Doubts but still worth trying
What is governance and why is it important for AI?
Standards
Norms
Legislation
• The world needs to think about AI holistically in regulation, but it doesn’t.
• Over 192 sets of ethical principles
• OECD, UNICEF, UNESCO, GPAI
• Less trust in AI, government and business – Global North
• Global South less worried – new markets – but will probably follow the North if we don’t do this correctly
• Influence of superpowers in global south and the AI RACE
• Add on Generative AI (ChatGPT), Dall-E, new jobs and skills, Space, etc. if AI is everywhere then governance and good thinking should be too.
• Why the choice of regulatory systems matters in the big picture. Choosing the world we want…
• US – Human Rights approach
• China – old approach – get the tech or know-how in exchange for X
• Three big systems emerging:
• EU – AI Act
• US – State and NIST
• China
• What of the rest of the world?
C4IR Network
• Azerbaijan
• Brazil
• Colombia
• Israel
• Kazakhstan
• Norway
• Rwanda
• Serbia
• South Africa
• Saudi Arabia
• Turkey
• UAE
Forum Offices
• United States
• Switzerland
• Japan
• India
• China
The network connects 15 advanced and emerging economies, producing 40% of global GDP and hosting 30% of the global population.
Over 200 people serve the network, including 50 Forum staff, 50 Fellows and 120 Centre staff. The Network is rapidly expanding and projected to double its Centres by 2024.
Bring together government, industry, civil society & academia to accelerate adoption of AI in the global public interest
Accelerate the social benefits of AI
Ensure everyone has access to benefits
Mitigate negative impacts & unintended consequences
• What is bias and why does it matter? It sits at the heart of a human rights approach to AI
• Hiring – NYC act
• Loans
• Criminal justice
• ALGORITHMIC JUSTICE LEAGUE
• In the US, surveillance regulation is fragmented. Not so in China. Does passing a test with 95% algorithmic accuracy mean it’s fine to just use the algorithm, or are there social imperatives?
• Auditing – IP & Competition law
• Standards – IEEE and our FRT work
• Certification – IP and competition law
• Example – children – smart toys
• Older adults
• Persons with Disabilities
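The bias examples above (hiring under the NYC act, loans, criminal justice) all come down to comparing outcomes across groups. A minimal sketch of one common audit metric – the impact ratio behind the EEOC "four-fifths" rule of thumb, which NYC-style bias audits also report. All group names and numbers here are invented for illustration:

```python
# Hypothetical bias-audit sketch: compute each group's selection rate,
# divide by the most-favoured group's rate, and flag ratios below 0.8
# (the EEOC four-fifths rule of thumb). Data is invented for the example.

def selection_rate(selected, total):
    """Fraction of applicants in a group that the system selected."""
    return selected / total

def impact_ratios(groups):
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented audit data: (selected, total applicants) per demographic group.
audit = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = impact_ratios(audit)

flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # {'group_a': 1.0, 'group_b': 0.625}
print(flagged)  # ['group_b']
```

Passing such a check is a floor, not a ceiling: as the surveillance point above notes, a 95%-accurate (or four-fifths-compliant) system can still fail the social imperatives around its use.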
• Board
• C-suite
• Diverse teams
• Test and test again
• Education
• Reward responsible AI, not fast AI
• CAIEO? Or Advisory Board
• Be able to show your GC you have rules which match the seriousness, e.g. FireAId vs healthcare
• Read – Responsible Tech; and listen – In AI We Trust?
• Procurement
• Regulation or freedom to innovate - BALANCE
• What should regulators do – sandbox, existing law, outreach
• Soft governance
• Do they need AI for “x”
• Do they need surveillance
• Create a unified regulatory system or risk not being the dominant system – the EU AI Act, like GDPR
• Hold suppliers’ feet to the fire – it’s you who are on the hook legally
• Develop a suite of questions to ask of suppliers or if you are a supplier consider how to help answer questions your customer might pose. Probably in this together
• BRAND REPUTATION and ABILITY TO EXIT
• STORY - Look out for the competition – film shorts
• AI and SDGs – Mustafa Suleyman – will the battle for global AI dominance be won here? Maternal health, for example.
• AI and climate – fireAId – Global Coalition against Wildfires
Convening a community of executives working on Responsible AI to engage in collective problem-solving and identify different models for integrating ethics into the design, deployment and use of technology
We need to apply ethics and human rights-based approaches to every phase in the lifecycle of technology –from design and development by technology companies through to the end use and application by companies across a range of industries
Purpose – to learn from each other and from the stories of success and failure. The case studies we are producing, from Microsoft, IBM, Salesforce and now Google, are examples from which others, perhaps more resource-constrained, can learn what it takes to design, develop and use AI in a responsible manner
Hannah Darnton, BSR
What’s to lose? - Credibility with the customer, brand value and possible legal action
PODCAST – In AI We Trust?
Fostering collaborative, action-oriented exchanges that facilitate greater consistency, credibility, and success amongst ongoing efforts by its participants resulting in a Blueprint for Equity and Inclusion in AI, to ensure that AI is developed and used responsibly and that it supports widespread societal benefits.
Inclusion is too often an afterthought in how we approach the visioning, design and implementation of new digital capabilities.
Nadjia Yousif, Managing Director and Partner, Boston Consulting Group
Inclusion is global, not merely internal to a nation or organization. Global North and South divide – Dall-E. Products are much better if they are made with diverse teams whose members represent the demographic for whom the product is intended
How can we create actionable guidelines to address human rights concerns arising from the use of facial recognition technology?
Implementing actionable guidelines to address human rights concerns arising from the use of facial recognition technology in flow management and law enforcement, addressing transparency, accountability, trust and oversight; co-designing actionable risk mitigation processes with industry actors
A toolkit for HR professionals to make informed and ethical decisions in adopting AI-based tools, mitigate uses that are likely to lead to biased outcomes, and promote AI-based tools appropriate for the organizational context which empower workers and respect privacy –
As large companies innovate with a growing variety of technological tools related to talent and people strategy, questions abound as to how to ensure that technology is used responsibly and effectively.
Ani Huang, Senior Vice President, HR Policy Association
Upskilling procurement practitioners, improving government confidence in procuring AI to better interact with and serve citizens through the comprehensive
AI holds the potential to vastly improve government operations and meet the needs of citizens in new ways (…). However, governments often lack experience in acquiring modern AI solutions and many public institutions are cautious about harnessing this powerful technology. Guidelines for public procurement can help in a number of ways.
AI Procurement in a Box Toolkit
Created and tested with the UK
Scaled – Brazil, Bahrain, UAE, Rwanda and more
Why SOFT GOVERNANCE is important
Providing practical sets of tools to help corporate executives understand AI’s impact on their roles, ask the right questions, and make informed decisions on AI projects.
Toolkits for corporate executives to identify the benefits of AI and how to operationalize it in a responsible way
– Board Toolkit
– C-suite Toolkit
– The importance of buy-in from senior executives
Commercial procurement of AI solutions bridges the buy-side and the sell-side of the procurement journey and provides unique insights that help managers and firms acquire the latest AI technologies.
Developing a toolkit for commercial organizations to evaluate AI/ML technologies through a robust procurement framework and executives to make informed and agile decisions for AI procurement
Raising awareness on the value and feasibility of industrial AI, informing decision-making on responsible implementation and scaling of AI in manufacturing, and demonstrating outcomes through tangible pilots
9% of manufacturing organizations are leveraging AI today to boost efficiency, improve agility and enable self-optimizing operations
How to engage:
– Take part in an interview on high-potential applications and enabling dimensions of AI in manufacturing
– Share a use case from your organization demonstrating the impact, feasibility and scalability of AI in manufacturing
Can investors and technology companies come together to identify common metrics and tools to foster the development of trusted AI systems?
Convening the investor community and technology companies to align on business practices which effectively foster the development of trusted AI systems, and developing common metrics for investors and companies to shape the trajectory of responsible AI and technology development through investment
The challenge:
Wildfire severity, spread and frequency have all increased as a result of climate change. This poses a growing threat to forest ecosystems and high-risk rural areas. Critical services and resources such as health and safety, forestry, natural disaster and emergency relief agencies, and rural planning are overburdened, and the strain on these resources is likely to grow. There is an important need for technological innovation on three fronts: prediction, emergency response and forest management, where AI can help predict high-risk locations and the best possible strategy to mitigate fire hazards.
The opportunity: to address this worldwide issue the WEF has launched a joint initiative with key partners to mitigate wildfire risks using AI Systems, which include:
A dynamic wildfire risk map that ranks the likelihood of forest fires based on seasonal variables
Resource allocation that is optimal based on many data sources and the wildfire risk map
In case of a forest fire, a first-response proposal using pre-optimized resources and maps
– Climate change is accelerating: in 2022 alone we witnessed the devastation of floods in Pakistan, severe drought and heatwaves in Europe, droughts in Kenya, and the catastrophic impact of Hurricane Ian, causing major disruptions around the world
– Adaptation has unprecedented urgency: COP27 put climate adaptation & resilience at forefront of agenda, acknowledging mitigation alone is no longer enough. Government and business leaders are urged to step up adaptation planning and take action now
– Huge investment is needed for adaptation, esp. in Global South: adaptation & resilience financing need is expected to reach ~$140-300B p.a. by 2030 in developing countries alone
– Data & AI are key to accelerating adaptation efforts and driving cost efficiency: Data & AI can be applied to boost adaptation and resilience capacity and improve cost efficiency through improved hazard projections of regionalized long-term effects (e.g., sea-level rise) or extreme events (e.g., hurricanes and droughts)
Mitigations are falling short…
• Paris Agreement goal: well below 2°C, pursuing 1.5°C
• Current policies: +2.6-2.8°C
• Full implementation of 2030 NDCs: +2.4°C