UPDATED BY
Brennan Whitfield | Apr 21, 2023

An AI winter occurs when general interest in the artificial intelligence industry cools, both in terms of funding and public attention. As a result, much of the research and many of the initiatives in the space wane.

What Is AI Winter?

AI winters are times when funding and public interest in the artificial intelligence space decline. They occur when the promise and potential of AI developments fall short or fail to deliver a return on investment, causing attention and excitement in the industry to wane.

“It’s very easy for something like AI to get a lot of buzz, a lot of interest, a lot of investment, and then not have anything to show for it,” Manny Bernabe, an AI and IoT deployment strategist at bigplasma.ai, told Built In. “Because [AI] is still a very hard problem. It’s a very difficult problem to start, and it’s a very difficult problem to scale and to professionalize.”

As a result, he continued, the industry experiences “peaks and troughs” in terms of expectations and the actual value delivered. Those peaks are referred to as an AI summer or spring — when interest and funding in artificial intelligence surges, causing a boom in research, development and adoption. 

Fueled by excitement from innovations like ChatGPT and AI-powered content generators, AI is in the midst of its hot girl summer era, but how long it will last is anyone’s guess. One thing we know for certain is that, like all seasons, change is inevitable. Nothing lasts forever.

“Things tend to have a cyclical nature. Stock markets have a cyclical nature, societies have a cyclical nature — there’s a rise and a fall. This is just sort of built into the system,” Bernabe said. “You just have to ride them out and hope that the highs don’t get so high that, when you do come down, it’s a big crash and destroys everything. And you’ve got to hope that the lows don’t get so low that it snuffs out the ember of this very powerful technology.”


 

What Causes AI Winters?

AI winters happen when public and private interest shifts away from the technology, when investments in new AI projects are no longer seen as necessary for business growth, or when AI’s capabilities simply fail to keep up with expectations.

AI Winter Causes

  • Decreased public and company interest in AI.
  • Decreased funding toward AI projects.
  • AI technology not living up to expectations.

 

Decreased Interest and Funding in AI

A good way to test whether we’re in an AI winter is to evaluate whether the majority of companies see AI as an important growth strategy for their business, Bernabe said. If enough executives and leaders don’t see AI’s strategic importance, and instead view it as a simple R&D project or a fun toy to play with, that’s an indication that the industry is “cooling off.”

 

AI Not Living Up to Expectations

This cooling off period is generally caused by the overhyping of artificial intelligence, when the technology does not live up to the expectations of companies, investors and the general public. It’s a result of “disillusionment,” AI researcher Michael Wooldridge told Built In. Wooldridge is a professor of computer science at the University of Oxford and director of foundational AI research at the Alan Turing Institute in London. 

“When people fail to deliver, when they’re not delivering on what they said they were going to, winter kicks in,” he said. “It’s where realism starts to kick in, where we start to understand the limits of these approaches.”

 

AI Winters Are Sometimes Necessary

An AI winter might not necessarily be a bad thing, though. The heightened scrutiny of AI winters allows the field’s headier, more grandiose ideas to be fleshed out and brought to fruition. They also give regulators and the general public a chance to catch up with what is otherwise a very fast-paced industry. Right now, legislation is nowhere near where it needs to be in terms of proper AI regulation. AI winters give people “a little bit of breathing room,” according to Bernabe.

“How do we strike that balance between regulating the space but also fostering creativity and entrepreneurship so that we’re still competitive on the global stage with AI and machine learning?” he said. “There’s going to be a lot of disruption, and I don’t think general society is quite ready for it. So, if there’s a little bit of breathing room, that might be good, too.”


 

AI Winter Timeline: Ghosts of AI Past

The early days of AI coincide closely with the very beginnings of computer science itself. In just a couple of decades, computers evolved from clunky relays and vacuum tubes to integrated circuits and microprocessors, making for eager, fast-growing interest in the AI space.

AI Winter Timeline

  • 1950: Alan Turing authors his landmark paper “Computing Machinery and Intelligence,” setting the stage for what would become artificial intelligence.
  • 1956: The term “artificial intelligence” is officially coined at a summer workshop at Dartmouth College.
  • 1950s-1970s: AI becomes an established area of study, investment and public interest. Fundamental concepts including machine learning, neural networks and natural language processing are established during this time. 
  • 1972: Mathematician James Lighthill delivers a scathing report on the state of artificial intelligence, detailing its shortcomings.
  • 1970s-1980s: AI experiences its first winter, caused by a dwindling of funding, interest and research.
  • Mid-1980s: New AI-powered technology, such as expert systems, is developed for corporate use, causing mass industry adoption and a surge in funding.
  • Mid-1990s: AI experiences its second winter, caused by overly complicated technology and a drop in corporate interest.
  • Mid-2000s: Innovations in areas like big data and cloud computing, as well as an increase in processing power, resolve the issues AI experienced in the 1990s. The development of new algorithms in areas like deep learning pushes the limits of AI, drawing more attention to the space once again.

 

The First AI Winter

Between the 1950s and 1970s, artificial intelligence became an established area of interest. The term was reportedly coined at a workshop at Dartmouth College in 1956. From there, AI research built on those advances in computer technology, thanks to a massive wave of funding from government and university sources. In fact, many of the fundamentals the industry continues to use today were established during this period, including neural networks, machine learning and natural language processing.

By the 1970s, much of the innovation in AI research came out of academic settings like Stanford University, Carnegie Mellon University, the Massachusetts Institute of Technology and the University of Edinburgh. Around this time, James Lighthill, a prominent mathematician and the Lucasian chair of mathematics at Cambridge University, was asked to write a report on the state of artificial intelligence. It turned out to be “absolutely scathing,” according to Wooldridge, who authored a 2022 book on the history of artificial intelligence.

“He spoke to, for example, some of the people at Stanford, and I don’t think he had ever come across people quite like that. His mind was just completely blown by these Californian professors who were saying, ‘Yeah, we’re going to have thinking machines in 20 years.’ He thought these people were crazy, that they were living in cuckoo land,” Wooldridge said. “And his report was extremely negative.”

It became clear that the innovations of this period were much harder to build, and far less scalable, than expected, and enthusiasm for the technology dwindled. AI funding dried up as a result, and the first AI winter rolled in, overtaking the industry for about a decade.

 

The Second AI Winter

Then, in the mid-1980s, interest in artificial intelligence picked up again with the development of new technology like AI-powered expert systems and newfound corporate uses of AI. During this resurgence, private entities like venture capital firms and corporations began investing in the space as well.

In general, computers in the 1980s were more powerful and widely available than they were in the 1950s, so AI adoption flourished. But by the mid-1990s, the industry ran into roadblocks again. For instance, expert systems proved more complicated than many companies were willing to put up with, requiring a lot of data and compute power to maintain effectively. Back then, data storage was expensive, and existing algorithms had a hard time keeping up. Corporate interest dwindled, and so did the funding, resulting in the second AI winter.

“It wasn’t as severe or brutal” as the first AI winter, Wooldridge said. “But nevertheless people felt it.”

The issues that brought on the second AI winter were soon resolved with innovations in areas like big data, cloud computing and increased processing power in the mid-2000s. And the development of new algorithms in areas like deep learning and neural networks promised to push the limits of artificial intelligence, drawing more attention to the space. AI had come in from the cold once again.

 

The State of AI Today

This brings us to today, where we appear to be in the heat of an AI summer.

AI has evolved from a niche area of research into a pervasive part of everyday life, impacting everything from the way we shop to the way we drive. There are billions of AI voice assistants in homes around the world, and chatbots have become surprisingly human. Plus, artificial intelligence has grown broader and more generalized, with capabilities even the experts don’t fully understand.

“In the history of AI, I think it’s unprecedented what we’re seeing now. The progress that we’ve made is very, very real, and very exciting,” Wooldridge said.


That being said, it’s important to remember that big advancements in AI don’t mean we’ll all have robot butlers tomorrow. Even the most impressive systems today have major limitations that keep the technology from becoming truly ubiquitous.

“They’re not going to crawl out of the box and start taking over the world,” Wooldridge said of today’s AI. “I think there is a big disconnect between reality and how people perceive what’s being delivered.”


 

When Is the Next AI Winter?

One of the hallmarks of an AI winter is the industry’s inability to keep up with the public’s expectations. Just because that hasn’t happened yet doesn’t mean it won’t happen at all.

Wooldridge said he hasn’t noticed any early signs of an AI winter yet; the industry is steadily progressing, and “stuff is getting better.” But he believes the industry will cool off when the general public inevitably gets disillusioned with the technology, and when this recent rapid-fire innovation begins to drop off.

And, of course, there are limits to everything. Every new and interesting AI advancement requires more money, more compute power and more data than the one before, and eventually these things will run out. “After just a couple more iterations, we’ll be at the limit of what’s feasible,” he added. “That might cause things to flatten.”

But again, when this will happen is anyone’s guess. Bernabe of bigplasma.ai believes this period of AI innovation, adoption and disruption could last for as long as 30 years.

“With this next generation of AI technologies, I think it’s going to be pretty drastic,” he said. “It’s not just the AI researchers and the data scientists that are excited about this. It’s everyday people that are using this technology. That, to me, is a very strong signal that we are on the cusp of changing the way that people can interact with this technology.”
