
Women in AI


Women have long been instrumental to the painstaking work in technology (think of women serving as the first human “computers” while men went off to fight in wars), in addition to leading initiatives that turn ingenious ideas into reality. Today, women are creating the next big advances in artificial intelligence (AI), bringing us AI that is more reliable, understanding, and fair.

Intelligence without understanding

People see artificial intelligence as the key to all of our problems, but the technology still has sizable limitations and vulnerabilities. Melanie Mitchell’s research exposes these shortcomings, with a particular focus on AI’s lack of human-level understanding. Melanie is the author or editor of five books and numerous papers, as well as a programmer; she is currently working on “Situate,” which extends her earlier Copycat program to interpret and make analogies between real-world visual situations.

We had the pleasure of catching up with her after her presentation, Artificial Intelligence and the “Barrier of Meaning,” at the Montreal AI Symposium.

Her presentation began with an overview of the historical instances (of which there were many) when experts vastly underestimated how long it would take to reach certain AI milestones. Melanie elaborates on this phenomenon: “They see machines doing something really impressive, like playing Go or playing chess and beating the world champion, and think, ‘Oh, they have to be incredibly intelligent to be able to do that, so how far could human intelligence really be?’. I think people underestimate how hard it is and how complex human intelligence is.” In particular, it is the lack of understanding that keeps us from reaching human-level AI.

Other things missing from the AI picture are reliability and robustness. Many deep-learning architectures rely on improving over time and learning from examples in order to make better classifications – think Google Photos, image captioning, translation services, etc. However, unreliability surfaces quickly, including problems with generalization, bias, abstraction, transfer learning, lack of common sense, and vulnerability to attack.

Google Translate, for example, cannot take context into account. Auto-captions are unreliable. Salt lines on the road confuse Tesla’s Autopilot system, and these cars can hit a stopped firetruck. These AIs lack generalization and common sense. They can perform well in one scenario, but then you throw a curveball, and they’re useless. Melanie expands, “Our concepts are very flexible, and AI just doesn’t have that. It also doesn’t have all of the vast amount of background knowledge that humans have that helps them make sense of the world, and things that we don’t even know we know, like basic concepts of objects and how objects behave… how are these systems going to learn that stuff? That’s a really big question.”

Another contemporary problem with AI is security: both security for humans and security of the AI from human attacks. AI is vulnerable to attacks where humans can control, confuse, or misuse the AI. “As AI gets deployed more and more in the sort of stuff around us – our cars, our houses, our buildings, whatever, and is vulnerable, it can be taken advantage of.” As for security for human data, it’s not necessarily a deal-breaker, but it’s definitely something we have to keep in mind as AI becomes more powerful and ubiquitous. Melanie notes, “Every security system has vulnerabilities.”

How to make AI fair and just

Margaret Mitchell (no relation to Melanie Mitchell) also gave a keynote at the Montreal AI Symposium, titled (Un)fairness in AI Vision and Language, discussing how the increasing success of machine learning has uncovered various effects of bias.

Margaret is a Senior Research Scientist at Google AI, leading Google’s ethics and fairness efforts in vision-and-language, grounded language, and using AI for the greater good. Her projects include assistive and clinical technology, as well as helping AI systems communicate what they are able to process. In particular, she has new work on diversity and representation in text and face data.

It has long been common knowledge that humans are flawed, but we are now becoming aware of how these biases affect how data is collected, how AI is trained, how media is filtered, aggregated, and generated, and ultimately what machine-learning systems output. Computers amplify the injustice and bias of the humans who feed data into them. Margaret notes, “Machine learning (ML) propagates common patterns in the data it is trained on, and all human data contains human biases. Our tendency to see the output of ML systems as correct and value-neutral (known as ‘automation bias’), allows for the effects of historic discrimination to be amplified and propagated at a massive scale. New evaluation techniques, such as disaggregated evaluation across population subgroups, can begin to address this issue. However, we also need to open up the conversation with social scientists and others with deep expertise on human systems and social structures.”
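To give a rough sense of the disaggregated evaluation Margaret mentions, here is a minimal sketch (not her actual tooling): instead of reporting one aggregate accuracy, the metric is computed separately for each population subgroup. The group names, labels, and records below are hypothetical.

```python
# Minimal sketch of disaggregated evaluation: compute a metric per subgroup
# rather than a single aggregate score. All data here is hypothetical.
from collections import defaultdict

# (subgroup, true_label, predicted_label) — toy records standing in for a real eval set
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, y_true, y_pred in records:
    total[group] += 1
    correct[group] += int(y_true == y_pred)

overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.2f}")
for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f} (n={total[group]})")
# The aggregate number can look acceptable even while one subgroup's accuracy is far lower,
# which is exactly what a single overall score hides.
```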

Margaret discussed her study on identifying words used in tweets that are associated with suicide attempts, so that clinicians could be notified. However, she realized that publishing these results could lead to discrimination against people who use this type of vocabulary, so she chose not to release the exact phrases publicly. She expands on this idea of considering ethics before publishing research: “The need to evaluate technology for how well it works must be balanced against the concerns of technology that can itself directly promote or reinforce discrimination at a global scale. For example, if a binary gender prediction algorithm works equally well across different population subgroups, that does not mean it should be made generally available to categorize all individuals into two genders. Caution around releasing technology is especially relevant for human-centric artificial intelligence technology that identifies, labels, or otherwise categorizes people into groups.”

Moreover, technology often treats straight white males as the default baseline (this is best shown in Joy Buolamwini and Timnit Gebru’s Gender Shades research, which found that error rates for darker-skinned women are much higher than those for lighter-skinned males). Instead of having technology try to predict race, gender, or sexuality, Margaret suggests that systems simply need to be given representations of all sorts of human data in order to improve the AI’s downstream tasks.

Overall, Margaret acknowledges that technologists need to consider the biases that may arise and try to counteract them instead of pretending that AI is unbiased. She’s equally optimistic about how to improve the tech industry in general: by focusing on evaluation and baselines instead of results, by helping people instead of focusing on profits, by being transparent about research, and, of course, by ensuring reliable results across demographics.

In terms of demographics, it is often marginalized people who are at the forefront of cutting-edge discoveries across industries, yet who also go overlooked, and women in technology are no exception. Without women, who would be taking the time to ensure that AI has understanding and is reliable and fair?

By Rebecca Kahn
