
Experts Weigh In on How AI Learns

Experts show how AI works

AI stands for Artificial Intelligence. AI programs can automate processes like driving, analyzing finances and even chatting with someone. To train AI, computer scientists run thousands of generations over populations of thousands of candidate AIs. However, problems like underfitting can lead to funny mistakes.


In one of Russ Bodnyk’s AI projects, he tried to create a fictional character that mimicked the vocabulary and speech style of both Oprah and Michelle Obama. The AI, not knowing which voice was which, switched between the two at random, with strange results.

“And kind of what happened is that there’s just a lot of really kind of funny ways in which the chatbot would switch between kind of like a serious Oprah voice and then like a very lyrical Michelle Obama voice,” said Russ. “And it was funny to watch the AI that created both the voice like what it sounded like, as well as the things that it said. It would kind of flip between a serious mode and sort of a joking mode.”

Mishaps like this begin when the AI cannot process what is happening. Kenneth O. Stanley explained that they usually occur when the AI has never been exposed to the new scenario. With ChatGPT, for example, if the model never learned a certain piece of information, it will “overfit” and substitute other information that merely sounds plausible. “Underfitting” is the opposite problem: instead of failing on one piece of information, the model has not learned enough to capture any of it.

“You’re looking for the happy middle, the sweet spot between underfitting and overfitting generally, which is where you get generalization. And so, yeah, that exists somewhere, and you can see even in ChatGPT or something that there’s some degree of compromise happening when you ask it about things it’s not familiar with and it struggles,” said Stanley.

Another issue that can cause problems in AI is a lack of good data. While OpenAI has around 100 million people contributing data, Dimakis only has data from his university, of around 55,000 people. This means OpenAI has roughly 1,800 times more data than Dimakis has available.
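The trade-off Stanley describes can be seen in a toy experiment. The sketch below is purely illustrative, not code from any of the researchers: it builds three models from noisy data on the line y = 2x and compares each model's error on the points it trained on versus fresh points it has never seen.

```python
import random

random.seed(0)

# Toy data: the true relationship is y = 2x plus a little noise.
train = [(x, 2 * x + random.uniform(-1, 1)) for x in range(10)]
test = [(x + 0.5, 2 * (x + 0.5) + random.uniform(-1, 1)) for x in range(10)]

def mse(model, data):
    """Mean squared error of a model over a list of (x, y) points."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Underfitting: a model too simple to capture any trend -- it predicts
# the same constant (the average y) no matter the input.
mean_y = sum(y for _, y in train) / len(train)
def underfit(x):
    return mean_y

# Overfitting: a model that memorizes the training points exactly and,
# for any new input, just echoes the nearest memorized answer.
memory = dict(train)
def overfit(x):
    nearest = min(memory, key=lambda k: abs(k - x))
    return memory[nearest]

# The "happy middle": a straight line fit by ordinary least squares.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
def good_fit(x):
    return slope * x + intercept

print(f"underfit  train={mse(underfit, train):.2f}  test={mse(underfit, test):.2f}")
print(f"overfit   train={mse(overfit, train):.2f}  test={mse(overfit, test):.2f}")
print(f"good fit  train={mse(good_fit, train):.2f}  test={mse(good_fit, test):.2f}")
```

The memorizing model scores a perfect zero on its training points yet does worse than the simple line on new points, while the constant model does badly everywhere — the sweet spot in the middle is what generalizes.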


“Our problem is really finding lots of good data sets, high quality data sets, or training the models in a way that they are useful for different real problems,” said Dimakis. “Most of the time this is kind of where our research is spent. It’s not like the overfitting versus underfitting issue. It’s usually more of a data issue. One of the significant problems we have in universities is that we don’t have that much data compared to what companies have.”

“You ask about me and what I’ve done in my life,” said Stanley. “It’ll tell half the stuff will be correct and half of it will be just made up stuff that sounds like I could have done it, but I actually didn’t.”
