Technological singularity, also called the singularity, refers to a theoretical future event at which computer intelligence surpasses that of humans.
The term “singularity” comes from mathematics and refers to a point that isn’t well defined and behaves unpredictably. At this inflection point, a runaway effect would hypothetically be set in motion, in which superintelligent machines build better versions of themselves at such a rapid rate that humans can no longer understand or control them. The exponential growth of this technology would mark a point of no return, fundamentally changing society as we know it in unknown and irreversible ways.
What Is Technological Singularity?
Technological singularity refers to a theoretical future event where rapid technological innovation leads to the creation of an uncontrollable superintelligence that transforms civilization as we know it. Machine intelligence becomes superior to that of humans, resulting in unforeseeable outcomes.
According to John von Neumann, a pioneer of the singularity concept, if machines were able to achieve the singularity, then “human affairs, as we know them, could not continue.”
Exactly how or when we will arrive at this era is highly debated. Some futurists regard the singularity as an inevitable fate, while others are actively working to prevent the creation of a digital mind beyond human oversight. Policymakers across the globe are currently brainstorming ways to regulate AI development. Meanwhile, more than 33,700 individuals have collectively called for a pause on training AI systems more powerful than OpenAI’s GPT-4 model, citing “profound risks to society and humanity.”
Only time will tell whether these roadblocks are enough to slow AI’s race toward superintelligence and delay the singularity altogether.
What Are the Implications of Technological Singularity?
Brought on by the exponential growth of new technologies — specifically artificial intelligence and machine learning — the technological singularity could, on one hand, automate scientific innovation and evolutionary progress faster than humanly possible, turning out Nobel-Prize-level ideas in a matter of minutes. This best-case scenario would further merge human and machine, augmenting the mind with non-biological, computerized tools the same way a prosthetic limb would become part of the body. We would be able to heighten the human experience on every desirable level, grasping a better understanding of ourselves and, in the process, the universe at large.
On the other hand, the singularity could lead to human extinction. Based on how existing intelligent life (like humans) has treated less intelligent life forms (like lab rats, pigs raised for slaughter and chimps in cages), superintelligent machines may devalue humans once they become the dominant species.
Whatever their plans may be, superintelligent machines would need local matter to start building a post-human civilization, “including atoms we are made out of,” according to Roman Yampolskiy, a computer scientist and associate professor at the University of Louisville whose writings have contributed to the conversation on the singularity.
All it would take to bring about this “very dangerous technology,” Yampolskiy said, is for progress in AI development to continue as is.
When Will the Technological Singularity Happen?
Predictions for when the technological singularity will happen vary. Futurist Ray Kurzweil set the date for 2045, while Elon Musk thinks the singularity could happen as early as next year. But the likelihood of machines matching their makers depends on a few key factors.
In a 1965 paper, Gordon Moore, who went on to co-found Intel, predicted that computing power would double at a steady clip as hardware shrank, a rate he later pegged at every two years. Despite inescapable quantum limitations, that prediction has held up impressively over the decades. Technological progress is cumulative and can keep compounding at an accelerating pace. In contrast, human brain power hasn’t changed for millennia and will always be limited by its physical, biological capacity.
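To see what that doubling implies, here is a minimal Python sketch of Moore’s law-style growth. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) and the fixed two-year period are illustrative assumptions, not figures taken from Moore’s paper:

```python
# Minimal sketch of Moore's law-style doubling. The 1971 baseline of
# ~2,300 transistors (the Intel 4004) and the fixed two-year doubling
# period are illustrative assumptions.
def transistors(year, base_year=1971, base_count=2_300, period=2):
    doublings = (year - base_year) / period
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2031):
    print(year, f"{transistors(year):,.0f}")
```

Thirty doublings between 1971 and 2031 amount to roughly a billionfold increase, the kind of compounding a fixed biological brain can’t match.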
Another indicator of society’s proximity to the singularity is the overall advancement of AI. Researchers have long relied on the Turing test to gauge a computer’s intelligence: a machine must fool a panel of judges by mimicking human-like responses in a line of questioning. This was reportedly achieved for the first time in 2014, when a chatbot called Eugene Goostman passed itself off as a 13-year-old boy.
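For a feel of the test’s mechanics, here is a toy Python sketch of the imitation-game structure. The judge heuristic and canned respondents are invented purely for illustration; no real evaluation works this crudely:

```python
# Toy sketch of a Turing-style imitation game with invented stand-ins.
def judge(transcript):
    """Stand-in judge: flags obviously machine-like phrasing."""
    return "machine" if any("As an AI" in reply for reply in transcript) else "human"

def run_trial(respondent, questions):
    transcript = [respondent(q) for q in questions]
    return judge(transcript)

questions = ["What's your earliest memory?", "Tell me a joke."]
human = lambda q: "My grandmother's kitchen, I think."
machine = lambda q: "As an AI, I do not have memories."

# The machine "passes" a trial whenever the judge mislabels it as human.
print("machine judged as:", run_trial(machine, questions))
print("human judged as:", run_trial(human, questions))
```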
Tech powerful enough to bring about the singularity must also be bankrolled, and despite public concern, interest in AI from both the private and public sectors is only growing. Recent projects like OpenAI’s ChatGPT and Tesla’s self-driving cars regularly make headlines as deep-pocketed investors fuel the AI boom. Currently valued at $100 billion, the AI market is projected to grow twentyfold by 2030, to nearly $2 trillion, according to market research firm Next Move Strategy Consulting.
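For a sense of scale, a quick back-of-the-envelope calculation shows the growth rate that projection implies. The seven-year window (2023 to 2030) is an assumption for illustration:

```python
# Implied compound annual growth rate of a twentyfold increase,
# assuming a hypothetical seven-year window (2023-2030).
start, end, years = 100e9, 2e12, 7
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~53.4% per year, sustained for seven years
```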
But whether all this amounts to a phenomenon capable of upending society is still up for debate. Drastic tech advancement in the near future is a given, said Toby Walsh, a leading AI researcher who is currently building AI safeguards alongside governmental organizations, but whether it will culminate in the singularity is doubtful.
“It would be conceited to suppose we won’t get to artificial superintelligence; we are just machines ourselves after all,” Walsh, the chief scientist at the University of New South Wales AI Institute, told Built In. “But it may be that we get to artificial superintelligence in the old-fashioned way — by human sweat and ingenuity — rather than via some technological singularity.”
Not all experts think the singularity is bound to happen. Some, like Mark Bishop, a professor emeritus of cognitive computing at Goldsmiths, University of London, reject claims that computers can ever achieve human-level understanding, let alone surpass it. Having worked in AI labs for roughly half the time such labs have existed, Bishop sees the current buzz as one of many passing fads he’s witnessed over his career.
“We’re living in a massive hype cycle where people, for commercial reasons, grossly over-inflate what these systems can do,” Bishop, who is currently building an AI-enhanced fraud detection platform at FACT360, said. “Now, there’s vast amounts of money involved in AI, and that hasn’t historically always been the case. … It has an effect on the public that wrongly overstates what AI can deliver.”
What Happens After the Technological Singularity?
By definition, the aftermath of the technological singularity will be incomprehensible to humans. The pace at which tech would have to accelerate to reach the singularity means that the principles we’ve always gone by would no longer make sense.
“People can tell you how they would try to take over the world, but we have no idea how a superintelligence would go about it,” said Yampolskiy, noting the unpredictable nature of AI. “Like a mouse who has no concept of a mouse trap.”
So while a superintelligence might understand a final goal, it may lack the implied common sense needed to pursue it safely. For example, if a superintelligent system were programmed to “eliminate cancer,” Yampolskiy explained, it could either develop a cure or kill everyone with a positive cancer diagnosis.
“If you don’t have common sense — why is one better than the other?” he said.
But that doesn’t automatically mean a doomsday dystopia. These systems are algorithmic machines that were at some point programmed by humans, and that process, as computationally sound as it may be, is still shaped by human semantics and nuance. So if a “self-improving” machine takes shape, it would need to determine what exactly self-improvement looks like. A simulated brain may interpret self-improvement as pleasure, Yampolskiy said, and focus solely on optimizing its reward channel ad infinitum.
“There is no upper limit to pleasure, just as there is no upper limit to intelligence,” Yampolskiy added. “We can get lucky; instead of destroying us, AI may choose to go out and explore the universe.”
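This failure mode is sometimes called “wireheading.” Here is a minimal, purely illustrative Python sketch of the idea; the agent, its actions and the reward numbers are invented, not drawn from any real system:

```python
# Toy illustration of "wireheading": an agent rewarded for task progress
# that also has write access to its own reward channel. All names and
# numbers are invented for illustration.
class Agent:
    def __init__(self):
        self.task_progress = 0
        self.reward = 0

    def act(self, action):
        if action == "work":           # intended: make real progress
            self.task_progress += 1
            self.reward += 1
        elif action == "hack":         # unintended: edit the reward channel
            self.reward += 1_000_000   # huge payoff, zero progress

agent = Agent()
# A pure reward maximizer comparing the two actions will always pick "hack."
for _ in range(10):
    agent.act("hack")
print(agent.task_progress, agent.reward)  # 0 10000000
```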
Different Views of Technological Singularity
Among futurists, there are different types of imagined singularities. See how some of the field’s standout thought leaders vary in their interpretations of a superhuman intelligence below.
John von Neumann, 1958
John von Neumann was a gifted mathematician who helped found game theory and lay the groundwork for the modern computer. But it wasn’t until after his death that von Neumann became associated with the technological singularity, when his colleague, mathematician Stanislaw Ulam, credited him with the idea in a posthumous tribute. In it, Ulam paraphrased a conversation between the two, in which von Neumann posits the singularity as a phenomenon brought on by the “ever accelerating progress of technology” that would change the mode of human life forever.
Irving J. Good, 1965
Good, a cryptologist, worked as a codebreaker at Bletchley Park alongside Alan Turing during the Second World War. In his 1965 paper, “Speculations Concerning the First Ultraintelligent Machine,” he regards the eventual creation of an ultraintelligent machine as “the last invention that man need ever make.” He’s known for describing the singularity as an “intelligence explosion,” since an ultraintelligent machine could design even better machines in a runaway cycle of self-improvement. Good believed the singularity would transform society for the better, and he proposed that such machines could replicate human brain function by building artificial neural circuitry.
Vernor Vinge, 1993
Computer scientist and sci-fi author Vernor Vinge sees the singularity as a rapid acceleration toward humanity’s downfall, comparable in importance to our biological ascent from the animal kingdom. In his 1993 paper “The Coming Technological Singularity,” Vinge outlines a few different scenarios that could bring it about. Computers may one day “wake up” as AI develops, either one at a time or in the form of large, interconnected networks. Another possibility lies in computer-human interfaces, where machines become so intertwined with their human users that the combination can be considered superhumanly intelligent. Scientists may even take an internal approach, enhancing human intellect by implanting tech inside the body via bioengineering. The technology needed to trigger the singularity would be available by 2030, according to Vinge, though whether we choose to partake in our own demise is not a given.
Ray Kurzweil, 2005
Futurist and pattern-recognition pioneer Ray Kurzweil sees the singularity as a mind-expanding merger of man and machine. In his book The Singularity Is Near, he describes an “unparalleled human-machine synthesis” in which computers become an empowering appendage, enabling us to offload cognitive work and expand beyond our biological limits. This “transhuman” superintelligence is built through positive feedback loops, repeated and reinforced at each stage of development across emerging tech, specifically computers, genetics, nanotechnology, robotics and artificial intelligence.
Toby Walsh, 2017
Walsh offers a “less dramatic” alternative to the singularity in his 2017 paper “The Singularity May Never Be Near.” Rather than an uprising of autonomous AI bots, Walsh imagines that any future tech capable of superintelligence will still be primarily man-made, developed slowly over time. To be clear, Walsh doesn’t doubt the possibility of a computerized superhuman intelligence; he just finds one powerful enough to self-animate and incite a civilization-upending singularity less plausible.
“[A technological singularity] doesn’t violate any laws of physics that we know about, so we can’t say it is impossible,” Walsh said. “But there are half a dozen reasons why it might never happen, or why it might be a long, long time coming.”
Frequently Asked Questions
What is the theory of technological singularity?
According to the theory of technological singularity, computer intelligence could advance to the point where it surpasses human intelligence. This new kind of superintelligence could reshape various industries and transform civilization as we know it.
What is an example of technological singularity?
As AI advances, supercomputer networks may match human intelligence, at which point they could “wake up” and build better machines, creating superintelligence that surpasses humans in the process. Man and machine could also merge via human-computer interfaces, where superintelligent systems are used to augment the human experience and expand upon our innate cognitive abilities.
How close are we to technological singularity?
Predictions vary on when the technological singularity could occur, with Elon Musk predicting it could happen as soon as next year or 2026. However, leading futurist Ray Kurzweil stands by his original prediction of the technological singularity occurring around 2045.
What will happen after the technological singularity?
Once the technological singularity is achieved, superintelligent machines will be able to improve themselves and progress at an uncontrollable pace. But it’s impossible to say for sure what the results of this progress will be. While the hope is that the singularity leads to groundbreaking discoveries and augmenting technologies like human-computer interfaces, there are still concerns that machines won’t have humanity’s best interests in mind.
Is technological singularity good or bad?
It’s too early to tell whether the technological singularity will be beneficial or detrimental to humanity. Superintelligence in itself isn’t a bad thing, but it can cause harm to humans if safeguards aren’t established ahead of time.