
AI Governance Alliance debuts research reports on AI guidelines


The AI Governance Alliance (AIGA) has released a series of three new reports focused on guidelines and recommendations for the ethical use of advanced artificial intelligence (AI).

The papers were announced at the World Economic Forum’s (WEF’s) annual meeting in Davos yesterday. They focus on the governance of AI and generative AI, offering insights into unlocking the technology’s value and a framework for responsible AI development and deployment.

Established by the WEF in June 2023, the AIGA is a collaboration that brings together industry leaders, governments, academic institutions, and civil society organisations to ensure equitable distribution of, and enhanced access to, AI worldwide.

While AI holds the potential to address global challenges, it also poses risks of widening existing digital divides or creating new ones, says the AIGA.

These and other topics are explored in the paper series crafted by AIGA’s three core work streams, in collaboration with IBM Consulting and Accenture.

As AI technology evolves at a rapid pace and developed nations race to capitalise on AI innovation, the alliance stresses the urgency of addressing the digital divide to ensure that billions of people in developing countries are not left behind.

“The AI Governance Alliance is uniquely positioned to play a crucial role in furthering greater access to AI-related resources, thereby contributing to a more equitable and responsible AI ecosystem globally,” says Cathy Li, head of AI, data and metaverse at the WEF.

“We must collaborate among governments, the private sector and local communities to ensure the future of AI benefits all.”

Focused on optimised AI development and deployment, the first paper, “Presidio AI Framework: Towards Safe Generative AI Models”, addresses the need for standardised perspectives on the model lifecycle by setting out a framework for shared responsibility and proactive risk management.

The second paper, “Unlocking Value from Generative AI: Guidance for Responsible Transformation”, provides guidance on the responsible adoption of generative AI, emphasising use case-based evaluation, multi-stakeholder governance, transparent communication, operational structures, and value-based change management for scalable and responsible integration into organisations.

Focused on international cooperation and inclusive access in AI development and deployment, the third paper, “Generative AI Governance: Shaping Our Collective Global Future”, evaluates national approaches, addresses key debates on generative AI, and advocates international coordination and standards to prevent fragmentation.

AIGA says it seeks to mobilise resources for exploring AI benefits in key sectors, including healthcare and education.

The alliance is calling on experts from various sectors to help address several key areas, including improving data quality and availability across nations, boosting access to computational resources, and adapting foundation models to suit local needs and challenges.

“There is also a strong emphasis on education and the development of local expertise to create and navigate local AI ecosystems effectively. In line with these goals, there is a need to establish new institutional frameworks and public-private partnerships along with implementing multilateral controls to aid and enhance these efforts,” it says.
