Could AI Achieve Superintelligence?

While some experts are pessimistic about AI innovations, others look forward to their altruistic uses. Here’s what our expert has to say.

Published on Oct. 28, 2024

AI is rapidly evolving — but could it achieve superintelligence? Curiously, some who are most concerned about the dangers of superintelligent AI are the same ones who deny that large language models are intelligent.


AI Superintelligence: Superforecasters vs. Experts

How seriously should we, and government regulators, take concerns about superintelligence? The Economist asked a group of 15 AI experts and 89 “superforecasters” to assess “extinction risks.”

What Is a Superforecaster?

Superforecasters are general-purpose prognosticators with a track record of making accurate predictions on a wide range of issues, such as elections and outbreaks of wars.

The AI experts’ doomsday assessments of the threat of catastrophe or extinction from AI were almost an order of magnitude higher than the superforecasters’. The experts’ pessimism did not change when they learned the superforecasters’ estimates. Similar discrepancies appeared for other existential threats, such as nuclear war and pathogen outbreaks.

The problem with making guesses in the absence of data is that judgments rest only on prior beliefs. Debates about extraterrestrial life in the universe suffer from the same lack of data. Yet even where data exist, such as 80 years of living with nuclear weapons, the experts remain more pessimistic than the superforecasters, and the reason for the gap is still unclear.

 

Balance Caution With Hope for AI

However, imagining worst-case superintelligence scenarios and preparing contingency plans is probably a good idea. So far, the focus has been on uses of superintelligence for evil purposes, but in best-case scenarios, superintelligence could be enormously helpful in advancing our health and wealth while preventing catastrophes created by humans. We should proceed not with alarm but with caution, which may be inevitable in any case.

We can find guidance by looking back at nuclear weapons in the 1940s. J. Robert Oppenheimer, the director of the Los Alamos Laboratory who oversaw the research and design of the atomic bomb during World War II, testified at the 1954 Atomic Energy Commission hearing that led the AEC to revoke his security clearance:

When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.

Oppenheimer later opposed further research on nuclear weapons, quoting from the Hindu Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”

No one can imagine the long-run unintended consequences of introducing LLMs into society, any more than we could imagine, when the internet went public in the 1990s, how it would change every aspect of our lives. No one predicted, for instance, that the internet would make it possible for anyone to broadcast their opinions far and wide.

The internet’s architects thought it would foster a purer form of democracy, but they did not anticipate the proliferation of fake news and echo chambers. Altruistic ideals can have unintended consequences. The internet has made it possible for weaponized propaganda and advertising to go viral.

But if we have found ways to control nuclear weapons and are adapting to the internet, we should be able to live with AI.


Learn to Live With AI Advancements

One does not need a moratorium to think through these scenarios. Many people are already thinking them through, and no one is predicting that an evil superintelligence will emerge in the next six months.

Who would benefit if all the AI researchers in the Western Hemisphere suddenly decided to put a brake on advancing LLMs? Research in many other countries would continue. AI has already bested the best human fighter pilots in dogfights. In the next global conflict, fighter pilots will have “loyal” wingmen: autonomous drones swarming alongside, scouting ahead, mapping targets, jamming enemy signals and launching airstrikes while keeping the pilot in the loop through LLMs.


The great discoveries by physicists in the last century — relativity and quantum mechanics — laid the foundation for our modern physical world.

We are beginning a new era, the age of information. Our children will live in a world filled with cognitive devices, with personal tutors that help everybody reach their full potential, a world we can barely imagine today. There will also be a dark side, just as physics created atomic bombs of Promethean destructive power.

Naysayers have existed throughout history, but I say move forward optimistically, expect surprises and prepare for unintended consequences.

Excerpted from ChatGPT and the Future of AI: The Deep Language Revolution (MIT Press) by Terrence J. Sejnowski.
