We appear to be in a golden age of artificial intelligence.

OpenAI’s ChatGPT is continuing to boggle minds, generating text that can do everything from writing in the style of William Shakespeare or Donald Trump to diagnosing medical conditions. Lensa’s AI-generated pictures have taken social media by storm. And deepfake technology is quickly surpassing our ability to spot it. Meanwhile, companies the world over are folding enterprise AI into their daily operations. And funding in the space has reached new heights — indicating a collective awakening among venture capitalists from a months-long bear market slumber. 

Indeed, after surviving years of false starts and stumbles, artificial intelligence is finally in its hot girl summer era.

Of course, the party can’t last forever. And the industry is bracing itself for what is known as an AI winter — an inevitable decrease in funding, public interest and innovation that comes for every cutting-edge field eventually.

What Is An AI Winter?

An AI winter occurs when general interest in the artificial intelligence industry cools, both in terms of funding and public attention. As a result, initiatives and research in the space wane.

Winters aren’t exclusive to artificial intelligence. In fact, the cryptocurrency industry is experiencing one right now. After a years-long funding frenzy, crypto companies big and small have either flatlined or gone into a full-blown death spiral, and the general public’s faith in the space has diminished considerably. Like crypto, artificial intelligence is a relatively new, cutting-edge industry, making it more sensitive to macroeconomic trends (like a looming recession). 

“It’s very easy for something like AI to get a lot of buzz, a lot of interest, a lot of investment, and then not have anything to show for it,” Manny Bernabe, an AI and IoT deployment strategist at bigplasma.ai, told Built In. “Because [AI] is still a very hard problem. It’s a very difficult problem to start, and it’s a very difficult problem to scale and to professionalize.”

As a result, he continued, the industry experiences “peaks and troughs” in terms of expectations and the actual value delivered. Those peaks are referred to as an AI summer or spring — when interest and funding in artificial intelligence surges, causing a boom in research, development and adoption. 

[Video: A quick explainer of AI winters and how they are caused. | Source: Cognilytica]

By all accounts, we are in the midst of an AI summer right now, and how long it will last is anyone’s guess. The one thing we know for certain is that, like all seasons, it will eventually change.

“Things tend to have a cyclical nature. Stock markets have a cyclical nature, societies have a cyclical nature — there’s a rise and a fall. This is just sort of built into the system,” Bernabe said. “You just have to ride them out and hope that the highs don’t get so high that, when you do come down, it’s a big crash and destroys everything. And you’ve got to hope that the lows don’t get so low that it snuffs out the ember of this very powerful technology.”


What Causes AI Winters?

Just as winter in nature happens when a hemisphere tilts away from the sun, winter in the AI industry happens when public interest and funding shift away from the technology.

A good way to test whether we’re in an AI winter is to evaluate whether the majority of companies see AI as an important growth strategy for their business, Bernabe said. If enough executives and leaders don’t see AI’s strategic importance, and instead view it as a simple R&D project or a fun toy to play with, that’s an indication that the industry is “cooling off.”

This cooling-off period is generally caused by the overhyping of artificial intelligence, when the technology does not live up to the expectations of companies, investors and the general public. It’s a result of “disillusionment,” AI researcher Michael Wooldridge told Built In. Wooldridge is a professor of computer science at the University of Oxford and director of foundational AI research at the Alan Turing Institute in London.

“When people fail to deliver, when they’re not delivering on what they said they were going to, winter kicks in,” he said. “It’s where realism starts to kick in, where we start to understand the limits of these approaches.”

This might not necessarily be a bad thing, though. The heightened scrutiny of an AI winter allows the field’s headier, more grandiose ideas to be fleshed out and brought to fruition. It also gives regulators and the general public a chance to catch up with what is otherwise a very fast-paced industry. Right now, legislation isn’t anywhere near where it needs to be in terms of proper AI regulation. AI winters give people “a little bit of breathing room,” according to Bernabe.

“How do we strike that balance between regulating the space but also fostering creativity and entrepreneurship so that we’re still competitive on the global stage with AI and machine learning?” he said. “There’s going to be a lot of disruption, and I don’t think general society is quite ready for it. So, if there’s a little bit of breathing room, that might be good, too.”


AI Winter Timeline: Ghosts of AI Past

The artificial intelligence industry dates back to the early 1950s, when mathematician Alan Turing first broached the question of whether machines are capable of thought in his landmark 1950 paper, “Computing Machinery and Intelligence.”

The early days of AI coincide closely with the very beginnings of computer science itself. In just a couple of decades, computers evolved from needing clunky relays and vacuum tubes to relying on integrated circuits and microprocessors.

Over that 20-year span, artificial intelligence became an established area of interest. The term was reportedly coined at a symposium at Dartmouth College in 1956. From there, AI research built upon those advancements in computer technology, thanks to a massive wave of funding from government and university sources. In fact, many of the fundamentals of AI that the industry continues to use today were established during this time period, including neural networks, machine learning and natural language processing.

By the 1970s, much of the innovation in AI research came out of academic settings like Stanford University, Carnegie Mellon University, Massachusetts Institute of Technology and the University of Edinburgh. Around this time, James Lighthill, a prominent mathematician and the Lucasian chair of mathematics at Cambridge University, was asked to write a report on the state of artificial intelligence. It turned out to be “absolutely scathing,” according to Wooldridge, who recently authored a book about the history of artificial intelligence.

“He spoke to, for example, some of the people at Stanford, and I don’t think he had ever come across people quite like that. His mind was just completely blown by these Californian professors who were saying, ‘Yeah, we’re going to have thinking machines in 20 years.’ He thought these people were crazy, that they were living in cuckoo land,” Wooldridge said. “And his report was extremely negative.”

It became clear that the innovations of this period did not scale and were much harder to build than expected, causing enthusiasm for the technology to dwindle. As a result, AI funding dried up, and the first AI winter rolled in, overtaking the industry for about a decade.

AI Winter Timeline

  • 1950: Alan Turing authors his landmark paper “Computing Machinery and Intelligence,” setting the stage for what would become artificial intelligence.
  • 1956: The term “artificial intelligence” is officially coined at a symposium at Dartmouth College.
  • 1950s-1970s: AI becomes an established area of study, investment and public interest. Fundamental concepts including machine learning, neural networks and natural language processing are established during this time. 
  • 1973: Mathematician James Lighthill publishes a scathing report on the state of artificial intelligence, exposing the field’s shortcomings.
  • 1970s-1980s: AI experiences its first winter, caused by a dwindling of funding, interest and research.
  • Mid-1980s: New, AI-powered technology is developed for corporate use, causing mass industry adoption and a surge in funding.
  • Mid-1990s: AI experiences its second winter, caused by overly complicated technology and a drop in corporate interest.
  • Mid-2000s: Innovations in areas like big data and cloud computing, as well as an increase in processing power, resolve the issues AI experienced in the 1990s. The development of new algorithms in areas like deep learning pushes the limits of AI, drawing more attention to the space once again.

Then, in the mid-1980s, interest in artificial intelligence picked up again with the development of new technology like AI-powered expert systems and newfound corporate uses of AI. During this resurgence, private entities like venture capital firms and corporations began investing in the space as well.

In general, computers in the 1980s were more powerful and widely available than they were in the 1950s, so adoption of AI flourished. But by the mid-1990s, the industry ran into roadblocks again. For instance, expert systems proved to be more complicated than many companies were willing to put up with, requiring a lot of data and compute power to maintain effectively. Back then, data storage was expensive, and existing algorithms had a hard time keeping up. Corporate interest dwindled, and so did the funding — resulting in the second AI winter.
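To see why, consider a minimal sketch of the if-then rule style that expert systems relied on (the rules and facts below are hypothetical illustrations, not drawn from any real system). Production deployments chained together thousands of handcrafted rules like these, and every new edge case meant another rule to write, test and maintain by hand:

```python
# A toy forward-chaining rule engine in the style of a 1980s expert system.
# The rules and facts are hypothetical, purely for illustration.

# Each rule: if every condition is already a known fact, add the conclusion.
RULES = [
    ({"engine_cranks", "no_spark"}, "suspect_ignition_coil"),
    ({"engine_cranks", "has_spark", "no_fuel"}, "suspect_fuel_pump"),
    ({"suspect_ignition_coil"}, "recommend_coil_replacement"),
]

def forward_chain(facts):
    """Fire rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Derives suspect_ignition_coil, then recommend_coil_replacement
# (set printing order may vary).
print(forward_chain({"engine_cranks", "no_spark"}))
```

Three rules are easy to reason about; thousands are not, which is roughly the maintenance burden that soured companies on the technology.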

“It wasn’t as severe or brutal” as the first AI winter, Wooldridge said. “But nevertheless people felt it.”

The issues that brought on the second AI winter were soon resolved with innovations in areas like big data, cloud computing and increased processing power in the mid-2000s. And the development of new algorithms in areas like deep learning and neural networks promised to push the limits of artificial intelligence, drawing more attention to the space. AI had come in from the cold once again.


The State of AI Today

This brings us to today, where we appear to be in the heat of an AI summer.

AI has evolved from a niche area of research to a pervasive part of everyday life — impacting everything from the way we shop to the way we drive. There are billions of AI voice assistants in homes around the world, and chatbots have become surprisingly human. Plus, artificial intelligence has grown broader and more generalized, with capabilities even the experts don’t fully understand.

“In the history of AI, I think it’s unprecedented what we’re seeing now. The progress that we’ve made is very, very real, and very exciting,” Wooldridge said.

That being said, it’s important to remember that big advancements in AI don’t mean we’ll all have robot butlers tomorrow. Even the most impressive systems today have major limitations that prevent the technology from becoming truly ubiquitous. Take ChatGPT, for example. Despite all of its impressive capabilities, this technology is nowhere near perfect. It regularly generates false information, for one, and has been known to “hallucinate,” as Wooldridge put it, coming up with some pretty bizarre creations.

“It’s very impressive. But if anybody believes that this means that the end is in sight — full-blown AI, the kind of Hollywood dream — I think they’re wrong,” Wooldridge said. 

“They’re not going to crawl out of the box and start taking over the world. That’s just not going to happen,” he continued. “I think there is a big disconnect between reality and how people perceive what’s being delivered.”


When Can We Expect the Next AI Winter?

One hallmark of an AI winter is the industry’s inability to keep up with the public’s expectations. Just because that hasn’t happened yet doesn’t mean it won’t.

Wooldridge said he hasn’t noticed any early signs of an AI winter yet — the industry is steadily progressing, and “stuff is getting better.” But he believes the industry will cool off when the general public inevitably becomes disillusioned with the technology, and when this recent rapid-fire innovation begins to drop off.

“People have such high expectations. And, in some cases, those expectations are not going to be realized,” Wooldridge said. “The next AI winter will kick in when we don’t see interesting developments coming out.”

And, of course, there are limits to everything. Every new and interesting AI advancement requires more money, more compute power and more data than the one before, and eventually these things will run out. “After just a couple more iterations, we’ll be at the limit of what’s feasible,” he added. “That might cause things to flatten.”

But again, when this will happen is anyone’s guess. Bernabe of bigplasma.ai believes this period of AI innovation, adoption and disruption could last for as long as 30 years.

“With this next generation of AI technologies, I think it’s going to be pretty drastic,” he said. “It’s not just the AI researchers and the data scientists that are excited about this. It’s everyday people that are using this technology. That, to me, is a very strong signal that we are on the cusp of changing the way that people can interact with this technology.” 
