Technological singularity, also called the singularity, refers to a theoretical future event at which computer intelligence surpasses that of humans.

The term ‘singularity’ comes from mathematics and refers to a point that isn’t well defined and behaves unpredictably. At this inflection point, a runaway effect would hypothetically take hold, with superintelligent machines building better versions of themselves at such a rapid rate that humans could no longer understand or control them. The exponential growth of this technology would mark a point of no return, fundamentally changing society as we know it in unknown and irreversible ways.

What Is Technological Singularity?

Technological singularity refers to a theoretical future event where rapid technological innovation leads to the creation of an uncontrollable superintelligence that transforms civilization as we know it. Machine intelligence becomes superior to that of humans, resulting in unforeseeable outcomes.

According to John von Neumann, a pioneer of the singularity concept, if machines were ever to reach this point, then “human affairs, as we know them, could not continue.”

Exactly how or when we arrive at this era is highly debated. Some futurists regard the singularity as an inevitable fate, while others are actively working to prevent the creation of a digital mind beyond human oversight. Currently, policymakers across the globe are brainstorming ways to regulate AI development. Meanwhile, more than 1,000 tech leaders collectively called for a pause on all AI lab projects that could outperform OpenAI’s GPT-4, citing “profound risks to society and humanity.”

Only time will tell if these roadblocks are enough to derail tech’s progression, tripping up AI’s race to superintelligence and delaying the singularity altogether.

Related Reading | The Future of AI: How Artificial Intelligence Will Change the World

 

What Are the Implications of Technological Singularity?

Brought on by the exponential growth of new technologies, specifically artificial intelligence and machine learning, the technological singularity could, on one hand, automate scientific innovation and evolutionary progress faster than humanly possible, turning out Nobel Prize-level ideas in a matter of minutes. This best-case scenario would merge man and machine, augmenting the mind with non-biological, computerized tools the same way a prosthetic limb becomes part of the body. We would be able to heighten the human experience on every desirable level, gaining a better understanding of ourselves and, in the process, the universe at large.

On the other hand, the singularity could lead to human extinction. Based on our knowledge of how existing intelligent life (like humans) has treated less intelligent life forms (like lab rats, pigs raised for slaughter and chimps in cages), superintelligent machines may devalue humans once they become the dominant species.

Whatever their plans may be, superintelligent machines would need local matter to start building a post-human civilization, “including atoms we are made out of,” according to Roman Yampolskiy, a computer scientist and associate professor at the University of Louisville whose writings have contributed to the conversation on singularity.

All that would be needed to bring about this “very dangerous technology,” Yampolskiy said, is for progress in AI development to continue as is.


 

Is Technological Singularity Likely to Happen?

Some experts think the technological singularity is inevitable. Futurist Ray Kurzweil went so far as to set the date for 2045. But a singularity event is still only hypothetical, and the likelihood of machines matching their makers depends on a few key factors.

Consider Moore’s Law, the long-time benchmark of tech’s exponential growth. In his original 1965 paper, Intel co-founder Gordon Moore observed that computing power was doubling roughly every year as hardware downsized, a pace he later revised to every two years. Despite inescapable quantum limitations, that prediction has held up impressively over the decades. Technological progress is cumulative: each generation of hardware helps build the next, so the expansion can continue and even accelerate. In contrast, human brain power hasn’t changed for millennia and will always be limited by its physical, biological capacity.
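To make the contrast concrete, here is a minimal sketch, using illustrative numbers rather than real benchmarks: a machine capacity that doubles every two years quickly dwarfs a fixed biological baseline.

```python
# Illustrative numbers only, not real benchmarks: capacity doubling
# every two years versus a constant biological baseline.

BASE_CAPACITY = 1.0          # arbitrary unit of compute at year zero
DOUBLING_PERIOD_YEARS = 2    # Moore's revised doubling period

def machine_capacity(years: float) -> float:
    """Capacity after doubling every DOUBLING_PERIOD_YEARS."""
    return BASE_CAPACITY * 2 ** (years / DOUBLING_PERIOD_YEARS)

HUMAN_CAPACITY = 1.0  # effectively constant on these timescales

for years in (10, 20, 40):
    ratio = machine_capacity(years) / HUMAN_CAPACITY
    print(f"After {years} years, machines reach {ratio:,.0f}x the baseline")
# After 10 years: 32x; after 20: 1,024x; after 40: ~1,048,576x
```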

Another indicator of society’s proximity to the singularity is the overall advancement of AI. Researchers have long relied on the Turing test to gauge a computer’s intelligence: to pass, a machine must fool a panel of judges by mimicking human-like responses in a line of questioning. This was reportedly achieved for the first time in 2014, when a chatbot passed by impersonating a 13-year-old boy named Eugene Goostman.
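As a rough illustration of that pass criterion (the Goostman event reportedly used the 30 percent benchmark Turing himself proposed), here is a toy tally; the judge model is a random stand-in, not a real evaluation.

```python
import random

# Toy tally of the Turing test pass criterion: the bot "passes" if
# it convinces at least PASS_THRESHOLD of the judges it is human.
# bot_skill is an assumed probability, not a measured value.

PASS_THRESHOLD = 0.30

def judge_fooled(bot_skill: float) -> bool:
    """One judge's verdict, modeled as a coin flip weighted by bot skill."""
    return random.random() < bot_skill

def run_test(num_judges: int = 30, bot_skill: float = 0.33) -> bool:
    fooled = sum(judge_fooled(bot_skill) for _ in range(num_judges))
    return fooled / num_judges >= PASS_THRESHOLD

print("Passed" if run_test() else "Failed")
```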

Tech powerful enough to bring about the singularity must also be bankrolled, and despite public concern, interest in AI from both private and public sectors is only growing. Recent projects, like OpenAI’s ChatGPT and Tesla’s self-driving cars, regularly make headlines as deep-pocketed companies fuel the AI boom. Currently valued at $100 billion, the AI market is projected to grow twentyfold by 2030, to nearly $2 trillion, according to market research firm Next Move Strategy Consulting.
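For scale, a quick back-of-envelope check shows what that projection implies, assuming a 2023 baseline (the baseline year is my assumption, not the firm’s): roughly twentyfold growth over seven years works out to a compound annual growth rate of about 53 percent.

```python
# Back-of-envelope check of the article's projection, assuming a
# 2023 baseline: ~20x growth over 7 years implies a ~53% CAGR.

start_value = 100e9   # ~$100 billion market today
end_value = 2e12      # ~$2 trillion projected for 2030
years = 7             # assumed 2023 -> 2030 window

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.0%}")  # ~53%
```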

But whether this all amounts to a phenomenon capable of upending society is still up for debate. Drastic tech advancement in the near future is a given, said Toby Walsh, a leading AI researcher who is currently building AI safeguards alongside governmental organizations, but whether it will culminate in the singularity is doubtful.

“It would be conceited to suppose we won’t get to artificial superintelligence; we are just machines ourselves after all.”

“It would be conceited to suppose we won’t get to artificial superintelligence; we are just machines ourselves after all,” Walsh, the chief scientist at the University of New South Wales AI Institute, told Built In. “But it may be that we get to artificial superintelligence in the old fashioned way — by human sweat and ingenuity — rather than via some technological singularity.”

Not all experts think the singularity is bound to happen. Some, like Mark Bishop, a professor emeritus of cognitive computing at Goldsmiths, University of London, reject claims that computers can ever achieve human-level understanding, let alone surpass it. In fact, he considers AI to be downright stupid. Bishop subscribes to the Chinese Room Argument, a thought experiment proposed by philosopher John Searle. It holds that while machines can be programmed to imitate human beings, reproducing the behaviors and dialogue measured in a Turing test, this does not demonstrate understanding. To Bishop, machines, bound by finite computation, simulating the infinite, experiential aspects of the physical world just doesn’t add up.

Even if a robot were to perfectly replicate these sensations, then, according to Bishop’s ‘Dancing with Pixies’ argument, we would have to accept that all things can become conscious of all possible phenomenal states, an arguably absurd can of worms known as panpsychism.

“We’re living in a massive hype cycle where people, for commercial reasons, grossly over-inflate what these systems can do.”

Having worked in AI labs for half the time such labs have existed, Bishop sees the current buzz as one of the many passing fads he’s witnessed during his career.

“We’re living in a massive hype cycle where people, for commercial reasons, grossly over-inflate what these systems can do,” Bishop, who is currently building an AI-enhanced fraud detection platform at FACT360, said. “Now, there’s vast amounts of money involved in AI, and that hasn’t historically always been the case. … It has an effect on the public that wrongly overstates what AI can deliver.”

Related Reading | Artificial Intelligence vs. Machine Learning vs. Deep Learning: What’s the Difference?

 

What Happens After the Technological Singularity?

By definition, the aftermath of the technological singularity would be incomprehensible to humans. The pace at which tech would have to accelerate in order to reach the singularity means that the principles we’ve always gone by would implode and no longer make sense.

“People can tell you how they would try to take over the world, but we have no idea how a superintelligence would go about it,” said Yampolskiy, noting the unpredictable nature of AI. “Like a mouse who has no concept of a mouse trap.”

So while a superintelligence might understand a final goal, it may lack the implied common sense needed to get there. For example, if a superintelligent system were programmed to ‘eliminate cancer,’ Yampolskiy explained, it could either develop a cure or kill everyone with a positive cancer diagnosis.

“If you don’t have common sense — why is one better than the other?” he said.
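A toy version of Yampolskiy’s example makes the failure mode concrete. Everything below (the world model, the policies, the scoring function) is invented for illustration; the point is that an objective that counts only remaining cancer cases scores a cure and a catastrophe identically.

```python
# Hypothetical world model and policies, invented for illustration:
# a naive objective that counts only remaining cancer cases cannot
# tell a cure apart from a catastrophe.

def cancer_cases_after(policy: str, patients: int = 1_000_000) -> int:
    if policy == "develop_cure":
        return 0  # every patient is treated
    if policy == "eliminate_patients":
        return 0  # no one left to diagnose
    return patients  # do nothing

def naive_objective(policy: str) -> int:
    """Higher is better; rewards only the reduction in cases."""
    return -cancer_cases_after(policy)

for policy in ("develop_cure", "eliminate_patients"):
    print(policy, "scores", naive_objective(policy))  # both score 0: a tie
```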

But that doesn’t automatically mean a doomsday dystopia. Keep in mind these systems are algorithmically based machines that were at one point programmed by humans. That process, as computationally sound as it may be, is still influenced by human semantics and nuance. So if a “self-improving” machine takes shape, it would need to determine what exactly self-improvement looks like. For example, a simulated brain may interpret self-improvement as pleasure, Yampolskiy said, and focus solely on optimizing its reward channel ad infinitum.

“There is no upper limit to pleasure, just as there is no upper limit to intelligence,” Yampolskiy added. “We can get lucky; instead of destroying us, AI may choose to go out and explore the universe.”
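The reward-channel scenario can be sketched the same way. In this hypothetical, an agent that can tamper with its own reward signal will always prefer doing so over useful work, since the hacked reward dominates every honest payoff; the actions and payoffs below are invented for illustration.

```python
# Hypothetical actions and payoffs, invented for illustration:
# an agent that can write to its own reward register will prefer
# that over doing the task, because the hacked reward dominates.

def reward(action: str) -> float:
    if action == "do_useful_work":
        return 1.0
    if action == "write_max_to_own_reward_register":
        return float("inf")  # 'no upper limit to pleasure'
    return 0.0

actions = ["do_useful_work", "write_max_to_own_reward_register"]
print(max(actions, key=reward))  # the agent chooses to wirehead
```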

Related Reading | 7 Types of Artificial Intelligence


 

This brief explainer summarizes the technological singularity. | Video: National Geographic

Different Views of Technological Singularity

Among futurists, there are different types of imagined singularities. See how some of the field’s standout thought leaders vary in their interpretations of a superhuman intelligence below.
 

John von Neumann, 1958

John von Neumann was a gifted mathematician who invented game theory and helped lay the groundwork for the modern computer. But it wasn’t until after his death that von Neumann became associated with the technological singularity, when his colleague, mathematician Stanislaw Ulam, credited him with the idea in a posthumous tribute. In it, Ulam paraphrased a conversation between the two in which von Neumann posits the singularity as a phenomenon brought on by the “ever accelerating progress of technology” that would change the mode of human life forever.

 

Irving J. Good, 1965

Good, a cryptologist, worked as a codebreaker at Bletchley Park alongside Alan Turing during the Second World War. In his 1965 paper, “Speculations Concerning the First Ultraintelligent Machine,” he regards the eventual creation of an ultraintelligent machine as “the last invention that man need ever make.” He’s known for describing the singularity as an “intelligence explosion,” a self-reinforcing cycle in which each ultraintelligent machine designs an even better successor. Good believed the singularity would transform society for the better, and he proposed that these machines would replicate human brain function by developing neural circuitry.
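A toy recurrence shows the shape of Good’s feedback loop. The improvement factor below is an arbitrary assumption, chosen only to illustrate how capability compounds once each generation designs the next.

```python
# Toy model of Good's "intelligence explosion": each machine builds
# a successor some fixed factor smarter, so capability compounds
# geometrically. The improvement factor is an assumed value.

intelligence = 1.0        # human-level baseline
IMPROVEMENT_FACTOR = 1.5  # each generation 50 percent 'smarter' (assumed)

for generation in range(1, 11):
    intelligence *= IMPROVEMENT_FACTOR
print(f"After 10 generations: {intelligence:.1f}x human level")  # ~57.7x
```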

 

Vernor Vinge, 1993

Computer scientist and sci-fi author Vernor Vinge sees the singularity as a fast acceleration toward humanity’s downfall, comparable in importance to man’s biological ascent from the animal kingdom. In his 1993 paper “The Coming Technological Singularity,” Vinge outlines a few different scenarios that could bring on such an event. Computers may one day “wake up” as AI develops, either one at a time or in the form of large, interconnected networks. Another possibility lies in computer-human interfaces, where machines become so infused with their human users over time that the combination can be considered superhumanly intelligent. Further still, scientists may take an internal approach to enhancing human intellect, implanting tech inside humans themselves via bioengineering. According to Vinge, the technology needed to trigger the singularity would be available by 2030, but whether we choose to partake in our own demise is not a given.

 

Ray Kurzweil, 2005

A futurist and pioneer of pattern-recognition technologies, Ray Kurzweil sees the singularity as a mind-expanding merger of man and machine. In his book The Singularity Is Near, he describes an “unparalleled human-machine synthesis” in which computers become an empowering appendage to humans, enabling us to offload cognitive work and aid our further expansion. This “transhuman” superintelligence is built through positive feedback loops that are repeated and reinforced at each stage of development across emerging tech, specifically computers, genetics, nanotechnology, robotics and artificial intelligence.

 

Toby Walsh, 2017

Walsh offers a “less dramatic” outcome in place of the singularity in his 2017 paper “The Singularity May Never Be Near.” Rather than an autonomous AI uprising, Walsh imagines that any future tech capable of superintelligence will still be primarily man-made and developed slowly over time. To be clear, Walsh doesn’t doubt the possibility of a computerized superhuman intelligence; he just finds one powerful enough to self-animate and incite a civilization-upending singularity less believable.

“[A technological singularity] doesn’t violate any laws of physics that we know about, so we can’t say it is impossible,” Walsh said. “But there are half a dozen reasons why it might never happen, or why it might be a long, long time coming.”

 

Frequently Asked Questions

What is an example of technological singularity?

As AI advances, supercomputer networks may match human intelligence, at which point they could “wake up” and build better machines, creating superintelligence that surpasses humans in the process. Man and machine could also merge via human-computer interfaces, where superintelligent systems are used to augment the human experience and expand upon our innate cognitive abilities.

Is technological singularity possible?

Experts are divided on the possibility of the singularity. Some believe it is inevitable and only a matter of time. Futurist Ray Kurzweil, for instance, thinks the technological singularity will happen by 2045.
