She said the goal is not building an AI that's perfectly ethical, which is impossible, but rather one that can be trusted by everyday people. As opposed to an abstract discussion of what is and isn't ethical, she said, trust can be defined and solved for, which is more practical.
Ethics is a big part of this, yes, but so are reliability (people need to know the program will work as expected), security (people must be confident it hasn't been compromised), safety (people need to feel confident the program won't harm them physically or mentally), explainability (its processes cannot just be a black box), respect for privacy (the data that trains the program was used with consent), and the presence of someone, presumably human, who is ultimately accountable for the AI's actions. "All of these are important factors to consider if you want to make AI trustworthy because when we use AI in the real world, when it's out of the research labs and is being used by accountants or CEOs, you need to be able to trust that AI and know that broader context," she said.
Like Harris and Palmerino, she noted that the consequences of failure can be quite high. For just one example, she pointed to recent findings on how social media algorithms can drive things like depression and suicide. Absent some sense of responsibility, people can be setting themselves up for what she dubbed a "Jurassic Park scenario" of AIs that no one can trust to do the right thing. "[Responsibility] means asking the question, 'Is this the right thing to do? Should this AI solution even be built?' I'd like to avoid that Jurassic Park scenario: Just because your scientists could, they did it without thinking if they should," she said.
Palmerino said Botkeeper indeed takes these kinds of considerations into account when developing new products. Their process, he said, involves looking at everything their products touch and analyzing where potentially unethical actions can creep in. Right now, he said, "We have not been able to identify those situations," but the key is that they looked in the first place and intend to keep doing so. He didn't rule out the possibility of future issues along these lines — for example, if they start focusing on the tax area.
"Say we teach the AI that there are certain buckets to [expense] categorization that bode advantageously from a tax perspective for the client, whether or not that is proper. The AI identifies things that are strategic and more beneficial than things that are not, so it could develop a bias that might categorize everything as meals and entertainment, even if it's a personal meal or out-of-state meal, to get that 100% deduction because it understands those incentives, and this behavior could then be reinforced because the business owner starts encouraging it," he said. For such a case, he said, programmed "guard rails" of some sort will be needed.
Harris described a similar process at Sage, saying his company takes a careful approach to AI, making sure to start with a clear understanding of the ethical risks. For instance, they'd need to consider whether an AI collections bot could unfairly penalize some customers, or be more aggressive or more harassing toward some than others in attempting to collect, because the data that went into training the AI was flawed. With these possible scenarios in mind, Harris said it's important that human oversight and accountability be factored into the product, even if the AI is highly advanced.
"We've been pretty conservative in our approach to AI at Sage … We started off pretty early trying to balance our enthusiasm for what we can accomplish with AI with the humility that AI has immense opportunity for positive impact and making things more efficient, but done wrong can have an immense negative impact as well," he said.
Palmerino felt encouraged that these issues were getting more public attention, and urged professionals to think carefully about the potential negative impacts of their actions.
"If you plan on having anything that will have an impact, you have to consider the good and the bad. I have to be looking at it from all angles to make sure I'm changing things for the better. … Anyone reading this should take a second to reflect and think: Do you want to be remembered for having good consequences, or be remembered for creating something negative, something you can't take back? We've only one life, and the only way we live on after death is in memory. So let's hope you leave a good memory behind," he said.
Chris Gaetano is technology editor for Accounting Today.