
CEO’S CORNER

The Race To Make A.I. Smarter

Large language models may sound more human if they are taught fewer words. But is bigger always better when it comes to chatbots with artificial intelligence?


Large language models like ChatGPT and Bard, which produce conversational, original text, get better as they are fed more data. Every day, bloggers take to the internet to describe how the latest technological developments, such as an app that summarizes articles, podcasts produced by artificial intelligence, or a fine-tuned model that can answer any question about professional basketball, will "change everything."

A growing number of people worry that a small group of companies, including Google, Meta, OpenAI, and Microsoft, will end up with almost complete control over the technology, since building larger and more capable A.I. demands computing power that only a handful of corporations possess.



Larger language models are also more difficult to comprehend. Even their designers have called them "black boxes," and top researchers have expressed concern that A.I.'s ultimate aims may not be the same as ours. If bigger is better, it also tends to be more secretive and exclusive.

Natural language processing is the area of artificial intelligence that focuses on verbal comprehension. In January, a group of young academics in the field issued a challenge, called BabyLM, to try to upend this paradigm: teams were tasked with developing functional language models from data sets less than a tenth the size of those used by the most advanced models, according to one of the challenge's organizers. The project's other organizer, computer scientist Alex Warstadt, stated, "The challenge puts questions about human language learning, rather than 'How big can we make our models?' at the center of the conversation."
