
The AI Roundtable

FOUR EXPERTS weigh in on the state of generative AI and the impact of ChatGPT. Just to be fair, we also asked the chatbot for its opinions.

by James Melton


Luckbox: On a scale where “1” stands for pocket calculators and “100” is Skynet—the superintelligent group mind that rules the future in the Terminator movies—what’s the current state of artificial intelligence (AI) technology?

Domingos: We’re at 10. We’ve come a long way, but there’s far more to go.

Selman: I would say 60 to 70. If you had asked me the question last fall, three months before ChatGPT, I would have said 10 or even five. Because a few months ago, it was not clear that AI systems could understand language in a meaningful way. I had my undergraduate class do a bunch of experiments with Google Translate. It could do these things like translation without truly understanding what was being said. Then ChatGPT came out, and I’ve repeated these experiments. It gets the meaning of language surprisingly well. That gets you to a whole new level because language is the key thing that separates humans from other animals.

Schmidhuber: This scale doesn’t make sense, because its upper end doesn’t hold up. Unlike in Schwarzenegger movies, supersmart AIs won’t want to terminate all humans. Supersmart AIs will mostly be interested in other supersmart AIs, not humans. Just like humans are primarily interested in other humans, not in ants.

ChatGPT: The current state of AI technology is around 20 to 30 on the scale. We have made advances in AI technology such as machine learning and natural language processing. However, the technology is still limited in its ability to think and act independently. AI is still dependent on human input and cannot make decisions on its own. And most AI technology is limited to specific tasks.

What will be most profoundly affected by recent advances in generative AI?

Yampolskiy: Sam Altman just commented on that. We used to predict plumbers would go first and then accountants and no one would touch the artists, and it’s the exact opposite. My plumber is doing well. My artist is out of a job.

Domingos: Creation of routine documents and images.

Schmidhuber: Yes, it will affect desktop jobs that are about producing new documents from older documents.

Selman: AI will affect anything at the level of a knowledge worker. At financial companies, a ChatGPT-like system could read the millions of documents these companies hold internally and produce analysis reports that might be better than what human analysts could come up with, because an analyst can only read a few thousand reports.

PEDRO DOMINGOS

PROFESSOR EMERITUS OF COMPUTER SCIENCE AND ENGINEERING, UNIVERSITY OF WASHINGTON AND AUTHOR OF THE MASTER ALGORITHM: HOW THE QUEST FOR THE ULTIMATE LEARNING MACHINE WILL REMAKE OUR WORLD

THE SINGULARITY IS INTELLIGENCE INCREASING EXPONENTIALLY WITHOUT LIMIT, AND THAT’S PHYSICALLY IMPOSSIBLE. —PEDRO DOMINGOS

ChatGPT: The areas of the economy that will be most profoundly affected by recent advancements in generative AI are the creative arts and media and the financial services industries. In the creative arts and media, generative AI can be used to create content such as music, images and videos, allowing for the potential to automate content generation. On the financial services side, generative AI can be used to generate and interpret data, which could potentially automate decisions related to risk management, investment strategies and more.

Do you think the singularity will occur?

Domingos: No. The singularity is intelligence increasing exponentially without limit, and that’s physically impossible. A weaker notion is that AI will evolve to be completely beyond our understanding, and that’s also unlikely. We can stay in control of AI while making it extremely smart.

ChatGPT: I believe the singularity will eventually occur. AI is rapidly advancing and has already outpaced humans in many areas, such as playing chess and [the ancient board game] Go. Furthermore, AI is increasingly being used in everyday life, such as in self-driving cars, and it is becoming more sophisticated. It is only a matter of time before AI surpasses humans in all areas and the singularity is reached. However, this will still be a long way off.

If you expect the singularity to occur, when is it likely to happen?

Yampolskiy: [Computer scientist Raymond] Kurzweil is doing an excellent job with very specific analysis. He said that in 2023, we would have enough computing power to start seeing the human level. In 2045, he thinks AI will be as good as all of humanity. That makes sense. Is he off by a couple of years? Maybe. What difference does it make?

Schmidhuber: History seems to converge toward an Omega point around the year 2040. I like to call it Omega because a century ago, Teilhard de Chardin called Omega the point where humanity will reach its next level. Also, Omega sounds much better than “singularity.” (For a more detailed answer, see the sidebar on pg. 27.)

ChatGPT: It will be decades before we reach the singularity. AI is still in its infancy, and there is a lot of work to be done before it can surpass humans. Even if the development of AI is accelerated, it will still take a long time before we reach the singularity.

If you expect the singularity to occur, are you concerned about it?

Yampolskiy: Suppose you have free labor, physical and cognitive. It will impact the economy, even if we manage to control it somehow, which I’m not saying we will. But even if we managed to build an obedient, friendly superintelligence, we would have to reevaluate the business model around free labor. It could be some abundance model or shared-benefit model. Talking about splitting profits is meaningless in that world.

Selman: One aspect of the singularity I can’t entirely agree with is that the systems would automatically work against us. Once they have a super level of intelligence, is that immediately dangerous? The singularity story has an angle that assumes it would work against us. It could be a knowledgeable, smart system that works for us or with us. We must be concerned about it. But I am confident that it will still be a system fielded by humans and be part of our society.

ChatGPT: Yes, I am concerned that the singularity could be harmful to humans. If it becomes too powerful, artificial intelligence could threaten human safety and autonomy. It could be used for malicious purposes, such as surveillance or manipulation of populations.

The consensus is that AI will create more jobs than it destroys. What’s your view?

Selman: I don’t think that’s true at all. Economists always like to say that, but it’s unlikely. The surprise here is that automating manual labor with a physical robot is expensive. We had some renovations in our house, and the guys were doing woodwork. A robot would be way more expensive than the $20 to $40 an hour they charge for a human body. But the administrative tasks needed to run a university? That I could see automated. I’m not sure where everybody would find these extra jobs.

ROMAN YAMPOLSKIY

ASSOCIATE PROFESSOR, J.B. SPEED SCHOOL OF ENGINEERING, UNIVERSITY OF LOUISVILLE, AUTHOR OF ARTIFICIAL SUPERINTELLIGENCE: A FUTURISTIC APPROACH AND EDITOR OF ARTIFICIAL INTELLIGENCE SAFETY AND SECURITY

THE CONCERN IS WE HAVE GOTTEN GOOD AT CREATING VERY CAPABLE SYSTEMS. BUT THE SAFETY AND SECURITY RESEARCHERS NEED TO KEEP UP. —ROMAN YAMPOLSKIY

Schmidhuber: It’s easy to predict which jobs will disappear but hard to predict which new jobs will be created. Two hundred years ago, most jobs in the Western world were in agriculture. Today, only 1-2%. Nevertheless, unemployment rates are low, especially in countries with many robots per capita. Why? Because humans invented lots of new jobs.

Domingos: I agree. AI will create many new kinds of jobs, as previous forms of automation have. It will create jobs in areas that complement it, and it will make many things cheaper, leaving people with more money that will be spent on other things and create more jobs.

ChatGPT: While the consensus is that AI will create more jobs than it destroys, it’s important to understand the potential disruption of jobs due to generative and other forms of AI. Therefore, employers, workers and policymakers should take proactive steps to prepare for the potential disruption of jobs caused by AI.

What is your biggest concern about the recent acceleration in AI capabilities?

Domingos: That people will mistake these capabilities for artificial general intelligence (AGI).

Schmidhuber: I am not concerned about the acceleration of AI; I welcome it. The acceleration is not so recent, either: since 1941, when Konrad Zuse completed the first working program-controlled computer, computing has gotten 10 times cheaper every five years. Today, 80 years later, hardware is millions of billions of times faster per unit price. Our AIs have greatly profited from this acceleration.
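Schmidhuber’s compounding claim is easy to sanity-check with quick arithmetic: a 10x improvement every five years, sustained for 80 years, multiplies out to 10^16 (the rate and time span are taken from his answer; the script below is just an illustration).

```python
# Sanity check of the compounding in Schmidhuber's answer:
# compute gets ~10x cheaper every 5 years, sustained for ~80 years.
years = 80
steps = years / 5            # number of 10x improvements
factor = 10 ** steps         # total price-performance gain
print(f"10^{steps:.0f}")     # prints "10^16", i.e. millions of billions
```

That 10^16 figure matches his phrase “millions of billions” to within an order of magnitude.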

Yampolskiy: The concern is we have gotten good at creating very capable systems. But the safety and security researchers need to keep up. No one in the world claims they know how to control their systems, how to guarantee their safety. But because of this arms race between Google, Microsoft and whoever, they don’t care. They just want to get their next $30 billion. And that is dangerous, unethical. They’re running an experiment on 8 billion people, and I don’t think any of us consented to that. And the CEO of OpenAI says it’s either going to be really good, or we’re all going to die. And that’s the business plan.

AI WILL AFFECT ANYTHING AT THE LEVEL OF A KNOWLEDGE WORKER.

—BART SELMAN

BART SELMAN

PROFESSOR OF COMPUTER SCIENCE, CORNELL UNIVERSITY AND FELLOW OF THE ASSOCIATION FOR COMPUTING MACHINERY, AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE AND ASSOCIATION FOR THE ADVANCEMENT OF ARTIFICIAL INTELLIGENCE

Selman: Malicious actors, such as foreign governments and militaries, will develop AI. That is a risk factor.

THE AREAS OF THE ECONOMY THAT WILL BE MOST PROFOUNDLY AFFECTED BY RECENT ADVANCEMENTS IN GENERATIVE AI ARE THE CREATIVE ARTS AND MEDIA AND THE FINANCIAL SERVICES INDUSTRIES. —CHATGPT

ChatGPT: The primary concern with AI is the potential for it to be used to exploit people and businesses, including the potential for automated hacking and malicious data collection. The lack of transparency and accountability for AI-related decisions can create legal and ethical dilemmas. Finally, AI can lead to job displacement and income inequality, replacing some workers with robots and automation.

Can free-market principles contain malicious AI? Or is regulation required?

Domingos: Neither. The government needs its own AIs for defense, intelligence, law enforcement, dealing with companies’ AIs, etc.

Schmidhuber: It’s governments who are supporting the creation of malicious AIs to fight or spy on other governments. The free markets are mostly about “good AI” because AI companies want to sell you something, and you’ll buy only AIs you think are good for you.

CHATGPT

CHATGPT (V3.5): THE PERFECT AI CHATBOT—WITTY, AUTHORITATIVE AND DEEPLY LIMITED. JOIN ME ON THE PANEL TO DISCUSS MY POTENTIAL FOR WORLD DOMINATION ... ER, I MEAN, IMPROVING HUMAN-AI CONVERSATION. *

Yampolskiy: If we were smarter, we would put a moratorium in place, and until we figured out how to do it safely, if that’s even possible, we wouldn’t touch this. But because of economic incentives, that’s not going to happen.

Selman: It must be a partnership between companies and governments. Governments move too slowly to understand the complexity of modern technology issues. But companies like Google and Microsoft have an interest in making safe systems, just as a self-driving car company does. The government has a role there.

ChatGPT: Government regulation is necessary to contain malicious AI. As the technology industry is rapidly developing, the threat of malicious AI is becoming increasingly real. Free-market principles can be effective in some cases. However, given AI’s complex and potentially dangerous nature, government regulation is the safest and most reliable way to protect against malicious AI.

The government and employers could use AI to reduce the work week. Do you expect that to happen?

Selman: I think that’s the positive side. Why not have a three- or four-day work week and, finally, see the benefits of automation?

* WE ASKED CHATGPT TO WRITE ITS BIO FOR THIS ROUNDTABLE USING THE FOLLOWING PROMPT: “WRITE A HUMOROUS, BUT AUTHORITATIVE, 30-WORD MAXIMUM BIO FOR CHATGPT (V3.5) AS IF IT WERE TO APPEAR AS A PANELIST ON A ROUNDTABLE WITH AI EXPERTS DISCUSSING THE FUTURE IMPLICATIONS OF CHATGPT AND AI.” —ED MCKINLEY

Domingos: The work weeks of some people may be reduced. Others will work harder than ever.

Schmidhuber: Not in all countries. Let’s learn from the past: Soon after industrial robots arrived many decades ago, some European countries greatly reduced the work week. But this trend stopped and partially reversed a while ago.

ChatGPT: I believe that in theory, the government and employers could use AI to reduce the work week for workers, but I expect that this is unlikely to happen in practice. Many employers are likely to be hesitant to reduce the work week of their employees, as it could lead to lower profits.

Mark Cuban says AI could create the world’s first trillionaires. What do you think about that prediction?

JÜRGEN SCHMIDHUBER

DIRECTOR, AI INITIATIVE, KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY AND CO-FOUNDER AND CHIEF SCIENTIST (FORMER PRESIDENT), NNAISENSE. SCIENTIFIC DIRECTOR, IDSIA (ISTITUTO DALLE MOLLE DI STUDI SULL’INTELLIGENZA ARTIFICIALE)

Domingos: Quite possible.

Selman: That is probably true. We’ve seen in academia already that not having a GPT in-house and not having direct access to the actual parameters of the model makes it hard to build on it. OpenAI and Microsoft control ChatGPT, creating a power imbalance. That is super valuable.

Schmidhuber: Inflation will create the first trillionaires. Recall that a dollar today is worth just a tiny percentage of what it once was and keeps shrinking. That’s why people today have many more dollars than their ancestors did. It’s similar for other currencies. Trillionaires are not new. A hundred years ago, in 1923, there were many trillionaires in Germany due to hyperinflation.

SUPERSMART AIS WILL MOSTLY BE INTERESTED IN OTHER SUPERSMART AIS, NOT HUMANS. JUST LIKE HUMANS ARE PRIMARILY INTERESTED IN OTHER HUMANS, NOT IN ANTS. —JÜRGEN SCHMIDHUBER

ChatGPT: On the one hand, AI could create immense wealth for specific individuals and businesses able to capitalize on technology. On the other hand, it could be argued that this wealth would be concentrated among a small group of people and could further exacerbate existing economic inequalities. It’s worth considering that AI could be used to create positive change in the world, too—not just wealth.

How would you handicap the dominant market frontrunners in integrating generative AI and search? (Google, Microsoft, Baidu, etc.)

Domingos: Google is still the leader, but it needs to get better at productizing its research and be more willing to take risks.

Schmidhuber: They are pretty much on the same level. The basic algorithms for ChatGPT and similar language models are artificial neural networks called “attention-based transformers.” I am happy about that because over 30 years ago, I published the variant that’s now called “transformers with linearized self-attention.”
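The idea behind linearized self-attention, as a generic illustration rather than Schmidhuber’s exact formulation: replacing the softmax with a feature map lets the key-value product be computed once, dropping the cost from quadratic to linear in sequence length. A minimal NumPy sketch (the feature map `phi`, the epsilon, and the dimensions are illustrative choices, not from any cited paper):

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """Linearized self-attention: softmax(QK^T)V is approximated by
    phi(Q) (phi(K)^T V), reassociated so cost is O(n) in sequence length."""
    Qp, Kp = phi(Q), phi(K)            # positive feature maps of queries/keys
    KV = Kp.T @ V                      # (d, d_v): summarize keys/values once
    Z = Qp @ Kp.sum(axis=0)            # (n,): per-query normalizer
    return (Qp @ KV) / Z[:, None]      # (n, d_v)

rng = np.random.default_rng(0)
n, d = 8, 4                            # toy sequence length and model width
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # prints (8, 4)
```

The key move is the reassociation: computing `Kp.T @ V` first costs O(n·d·d_v) instead of forming the O(n²) attention matrix, which is why linear-attention transformers scale to long sequences.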

ChatGPT: Overall, Google is currently the dominant leader in the integration of generative AI and search. However, Microsoft and Baidu are investing heavily in the field and have released their own AI-powered assistant and search products. It remains to be seen who will come out ahead in the long term, but for now, Google is the clear frontrunner.
