Making trustworthy AI
AI is a growing presence in our lives and has great potential for good, but the people who work with these systems may have little trust in them. A research team at ASU’s Center for Accelerating Operational Efficiency is working to address that concern by testing a tool that could help government and industry identify and develop trustworthy AI technology.
The tool, called the Multisource AI Scorecard Table (MAST), is based on a set of standards originally developed to evaluate the trustworthiness of human-written intelligence reports.
Funded by the U.S. Department of Homeland Security, the research group tested whether MAST effectively measures the trustworthiness of AI systems. To do so, they had volunteer groups of transportation security officers complete tasks with a simulated AI system programmed to score high or low on MAST. Afterward, the officers answered a survey that included questions on how trustworthy they thought the AI was.
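To make the idea concrete, here is a minimal sketch of how a scorecard like MAST could be reduced to a single high/low rating. The criterion names, the 0–3 rating scale, and the simple averaging rule are all illustrative assumptions for this sketch; the actual MAST criteria and scoring procedure come from the intelligence-community standards the article mentions and the research team’s instrument, not from this code.

```python
# Hypothetical sketch of rating an AI system against a MAST-style
# scorecard. Criterion names, the 0-3 scale, and the averaging rule
# are assumptions for illustration, not the published MAST instrument.

from dataclasses import dataclass

# Illustrative criteria loosely inspired by analytic tradecraft
# standards for intelligence reporting (assumed for this sketch).
CRITERIA = [
    "sourcing",       # does the system disclose its data sources?
    "uncertainty",    # does it communicate confidence or uncertainty?
    "alternatives",   # does it surface alternative explanations?
    "logic",          # is its reasoning traceable and coherent?
    "accuracy",       # is its output checked for errors?
]

@dataclass
class ScorecardResult:
    per_criterion: dict[str, int]
    mean_score: float
    label: str  # "high" or "low", mirroring the study's two conditions

def score_system(ratings: dict[str, int],
                 threshold: float = 2.0) -> ScorecardResult:
    """Average per-criterion ratings (assumed 0-3 scale) and bin the
    result into the high/low conditions described in the article."""
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    mean = sum(ratings[c] for c in CRITERIA) / len(CRITERIA)
    return ScorecardResult(
        per_criterion={c: ratings[c] for c in CRITERIA},
        mean_score=mean,
        label="high" if mean >= threshold else "low",
    )

if __name__ == "__main__":
    # A system that discloses its sources and reasoning well
    # would land in the high-MAST condition under these assumptions.
    result = score_system(
        {"sourcing": 3, "uncertainty": 2, "alternatives": 2,
         "logic": 3, "accuracy": 3}
    )
    print(f"mean={result.mean_score:.2f} -> {result.label}-MAST condition")
```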
If the ASU team can show that MAST is a useful measure of AI trustworthiness, the tool will help government and industry build and buy AI systems that people can rely on, paving the way for AI’s smooth integration into critical sectors, protecting national security, and multiplying the technology’s power for positive impact.