2022 was a banner year for artificial intelligence, with the industry seeing record adoption, funding and innovation.
AI Trends to Expect in 2023
- Rapid democratization of AI tech and research
- Generative AI taking it up a notch
- Heightened AI industry regulation
- More emphasis on explainable AI
- Increased collaboration between humans and AI
According to a recent report published by consulting giant McKinsey & Company, which surveyed 1,492 participants globally across a range of industries, business adoption of AI has more than doubled over the last five years. Areas like computer vision, natural language generation and robotic process automation were particularly popular.
Funding in this space has also rebounded, despite ongoing economic uncertainty. Like many other tech sectors, artificial intelligence saw a sizable drop in VC investment in the first half of 2022, hitting its lowest levels since 2020, according to a State of AI report published by market intelligence firm CB Insights.
Then came the public release of OpenAI’s AI art generator DALL-E 2, followed in late November by the chatbot ChatGPT (another OpenAI creation). These groundbreaking projects triggered what can only be described as a generative AI craze, with techies and non-techies alike clamoring to make their own creations. They also prompted a massive resurgence in funding, with young AI startups pulling in nine-figure rounds and hitting billion-dollar valuations in short order.
Coming out of 2022, artificial intelligence is one of the fastest-growing industries in tech, and it continues to move at breakneck speed.
“Last year was an incredible year for the AI industry,” Ryan Johnston, the vice president of marketing at generative AI startup Writer, told Built In. “This is all moving so fast that we don’t know what six months from now is going to look like, we don’t know what a year from now is going to look like.”
That may be true, but we’re going to give it a try. Built In asked several AI industry experts what they expect to happen in 2023. Here’s what they had to say.
Rapid Democratization of AI Tech and Research
Until very recently, artificial intelligence was all but dominated by juggernauts like Google, IBM, Microsoft and Amazon. The creation of this technology was much too costly and complex for smaller companies or individuals to participate in. But an open-source revolution has swept across the entire tech industry over the past several years, ushering in a democratization of the artificial intelligence space in particular. And it is poised to knock the AI crown off of Big Tech’s head.
“The open sourcing of generative modeling is going to have a transformative impact,” Ali Chaudhry, the founder and CEO of software development company Infinit8AI, told Built In, particularly for AI engineers, data scientists and others working in the field. “It gives them the power to kind of play around with it, tweak it and apply it in different sectors.”
“The open sourcing of generative modeling is going to have a transformative impact.”
This has already started to happen. Last year, Hugging Face released BLOOM, the first community-built, multilingual large language model. And Stable Diffusion, the open-source model behind apps like Lensa, along with a slew of other AI art generators, has brought about an explosion of individual innovation, rivaling OpenAI’s DALL-E.
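Part of what makes this shift concrete is how little code it now takes to pick up one of these open models and experiment with it. As a rough illustration (not drawn from the article’s sources), a few lines with Hugging Face’s transformers library are enough to download a small BLOOM checkpoint and generate text locally; the model name and generation settings below are illustrative choices, not recommendations:

```python
# A minimal sketch of the tinkering that open-source models enable.
# Assumes the Hugging Face `transformers` library and a PyTorch backend
# are installed (`pip install transformers torch`). "bigscience/bloom-560m"
# is one of the smaller published BLOOM checkpoints, picked here so the
# example can run on an ordinary laptop.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

prompt = "Open-source AI matters because"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(outputs[0]["generated_text"])
```

Because the weights are open, the same checkpoint can also be fine-tuned on a niche dataset or trimmed down for a specific sector, which is exactly the kind of tweaking Chaudhry describes.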
Speaking of OpenAI: What started as a nonprofit research lab has quickly grown into a tech giant valued at $29 billion, according to recent reporting by the Wall Street Journal, making it one of the most valuable startups in the United States. Meanwhile, AI copywriter Jasper recently hit unicorn valuation following a $125 million raise, joining a wave of other generative AI startups garnering investors’ attention. At the same time, big tech companies have been tightening their belts as the global economic outlook continues to darken, meaning they’ve had to get more selective about the AI projects they take on and prioritize ones that are moneymakers as opposed to just innovative or interesting.
The burgeoning success and popularity of these startups are lowering the barrier to entry for smaller companies and even individuals to create and experiment with artificial intelligence, making this technology a lot less exclusive than it once was. Because of that, the AI industry is going to experience a surge in innovation in the near future, entrepreneur Arijit Sengupta told Built In in an interview last November. Sengupta is the founder and CEO of Aible, an enterprise AI company that helps anyone create their own AI models.
“Every technology goes through this phase where, initially, you have these experts and only the experts can do it. But the real potential comes when everyone is empowered to leverage it,” he said. “That’s what’s going to happen with AI.”
Generative AI Taking It Up a Notch
Generative AI is among the hottest areas of artificial intelligence, with OpenAI’s ChatGPT being the latest standout. The chatbot is built on a family of AI-powered large language models and is a slicker, more refined descendant of the company’s GPT-3 model, which took the world by storm in 2020 with its groundbreaking ability to perform tasks it hadn’t explicitly been trained on.
Now, ChatGPT’s ability to generate natural (if, at times, weird) language has pushed the limits of what was previously thought possible with artificial intelligence. According to Writer’s Johnston, the release of ChatGPT alone advanced the industry by about 12 to 18 months. And it has been joined by a bevy of other generative AI models, including viral AI art generators like DALL-E 2 and Midjourney.
“2022 was one for the history books with generative AI,” Johnston said, particularly when it comes to public awareness. Consumer-facing generative AI products “really put AI at the fingertips of so many people in ways that they had never experienced before,” he added. “The productization of things like ChatGPT, DALL-E and Midjourney, making it actually real for people in seeing what all this progress actually means, is the biggest advancement that happened in the last year.”
Looking ahead, 2023 will be a year when industry adoption of this technology really takes off. “Businesses are going to start looking a lot more closely at generative AI and how it can impact their business,” Johnston continued. “It’s becoming much more about how can we integrate this technology into our product? How can we leverage it as part of our go-to-market efforts? How will this affect our customers? Businesses need to sort through the hype to actually figure out what’s real for them.”
“Businesses are going to start looking a lot more closely at generative AI and how it can impact their business.”
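In practice, “integrating this technology into our product” often boils down to calling a hosted model behind an API and wrapping it in a product-specific prompt. The sketch below is purely illustrative (it does not describe any company quoted here) and uses OpenAI’s Python client as it worked at the time, against a GPT-3-family model; the function name, use case, prompt and parameters are all invented for the example:

```python
# Hedged sketch of a generative AI product integration, using OpenAI's
# Python client (v0.x API, `pip install openai`). Everything here is
# illustrative: the use case, prompt and parameters are invented.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys


def summarize_ticket(ticket_text: str) -> str:
    """Draft a one-paragraph summary of a customer support ticket."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family model available at the time
        prompt=f"Summarize this support ticket in one paragraph:\n\n{ticket_text}",
        max_tokens=120,
        temperature=0.3,  # low temperature keeps summaries consistent
    )
    return response.choices[0].text.strip()
```

Notice that the ticket text, which may contain customer data, leaves the business’s systems with every call; that is precisely the data-handling question Johnston raises next.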
This increased industry adoption will also force generative AI products to become more ethical and secure; otherwise, companies will not trust or want to work with them.
“There are technologies out there that really aren’t clear to people about how they will be using your business’s data and your business’s customers’ data. And I think, before we get too far ahead of ourselves, people need to ask some hard questions,” Johnston said, adding that AI questionnaires have already started to become a normal part of the sales process. “Businesses are starting to warm up to the idea that they need to understand how [they] should approach it.”
As for ChatGPT specifically, many believe that the release of GPT-4 is on the horizon for 2023, and the expectation is that it will be even more impressive than its predecessor. While the specifics of GPT-4 remain to be seen, smart money says it will be huge, with many more parameters, much more processing power and memory required, and much more data to train on.
“GPT-4 will be trained on considerably more, a significant fraction of the internet as a whole. As OpenAI has learned, bigger in many ways means better, with outputs more and more humanlike with each iteration. GPT-4 is going to be a monster,” Gary Marcus, a professor emeritus of psychology at New York University and a well-known name in artificial intelligence, wrote in a recent blog post. “I guarantee that minds will be blown. I know several people who have actually tried GPT-4, and all were impressed.”
Heightened AI Industry Regulation
For years, the artificial intelligence industry has been a veritable Wild West, with little to no government regulation or legislation specifically managing its development and use. But that’s beginning to change. Lawmakers and regulators spent 2022 sharpening their claws, and now they’re ready to pounce.
Governments around the world have been establishing frameworks for further AI oversight. In the United States, President Joe Biden and his administration unveiled an artificial intelligence “bill of rights,” which includes guidelines for how to protect people’s personal data and limit surveillance, among other things. And the Federal Trade Commission has been closely monitoring how companies collect data and use AI algorithms, and it has already taken action against some of them.
Regulation is growing at the state and local level, too. More than a dozen U.S. cities, including San Francisco and Boston, have banned government use of facial recognition software. Massachusetts nearly became the first state to do so in December, but then-Governor Charlie Baker rejected the bill. Meanwhile, in 2020, Illinois and Maryland passed laws restricting the use of face analysis technology in hiring; and in 2021, the New York City Council passed a measure cracking down on the use of AI in hiring.
The European Union has also beefed up its AI regulation efforts. Following multiple amendments and much discussion, the Council of the EU approved a compromise version of its proposed AI Act, and the Parliament is scheduled to put it to vote later this year. Once in place, this will reportedly be the world’s first broad standard for regulating or banning certain uses of artificial intelligence. The EU is also working on a new law to hold companies accountable when their AI products cause harm, such as privacy infringements or biased algorithms.
Still, there is a lot of work to be done. How existing laws play into this brave new world of artificial intelligence remains to be seen, particularly in the generative AI space. This has played out recently in a string of high-profile lawsuits, with artists claiming that several popular generative AI art tools violated copyright law by scraping their work from the web without their consent, resulting in generated art that resembles theirs.
“These are serious questions that still need to be addressed for us to continue to progress with this,” Johnston said. “We need to think about state-led regulation. But we also need to think about the individual level. How are we thinking about these things and having these discussions? Right now we’re all playing catch-up, and generative AI is moving much faster than the conversation that’s currently happening.”
Looking ahead, the global effort to regulate AI will inevitably shape how new technology is built, sold and used. Chaudhry of Infinit8AI predicts that, in the near future, all commercial AI systems will have to come with disclosures about what kind of data was used to build them, how they were built, their existing limitations and what kinds of biases the models have.
“I think we will see systems becoming more transparent in the coming year because of regulation.”
“No matter how good your AI is, it is going to work as expected only in certain contexts. And these contexts need to be outlined, and they need to be defined by the companies who are selling these systems,” Chaudhry said. Some of this is already covered in legislation that has been proposed or passed. “Right now, they might be coming in as guidelines, but soon they will be binding upon the companies who are building these AI systems,” he continued. “I think we will see systems becoming more transparent in the coming year because of regulation.”
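No regulator has prescribed a format for such disclosures yet, but the kind of documentation Chaudhry describes could be as simple as structured metadata shipped alongside a model, in the spirit of the “model cards” some researchers already publish. A purely hypothetical sketch, with every field name and example value invented for illustration:

```python
# Hypothetical sketch of the model disclosure Chaudhry describes:
# what data the system was built on, where it is meant to operate,
# and what its known limitations and biases are. All field names
# and example values are invented.
from dataclasses import dataclass, field


@dataclass
class ModelDisclosure:
    name: str
    training_data: str             # what kind of data was used to build the model
    intended_contexts: list[str]   # the contexts in which it is expected to work
    known_limitations: list[str]
    known_biases: list[str] = field(default_factory=list)


card = ModelDisclosure(
    name="support-ticket-summarizer-v1",
    training_data="public web text plus licensed support transcripts",
    intended_contexts=["English-language customer support tickets"],
    known_limitations=["may fabricate details absent from the ticket"],
    known_biases=["under-trained on non-U.S. English variants"],
)
```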
That being said, increased government oversight won’t necessarily solve all of AI’s problems around bias and misuse. It could even make them worse, depending on the government in question.
The advanced abilities of AI, such as facial and speech recognition, can be easily exploited by “tyrannical regimes,” Adi Andrei, the head of data at cosmetics company SpaceNK, told Built In, adding that this is already a reality in China, with President Xi Jinping employing artificial intelligence to bolster his government’s totalitarian control. “AI is going to reflect the ideology of its programmer. It’s going to be an extension of the people who programmed it, or who decided what it’s going to do. AI is not consciousness, it doesn’t think for itself. It is a tool for enforcing a certain kind of thinking, for better or worse.”
More Emphasis on Explainable AI
Artificial intelligence has gotten a lot more sophisticated in recent years, but the AI models that exist today are still not well understood. Even AI researchers and programmers don’t always fully understand why their creations make the decisions they make, leaving these models vulnerable to rampant discrimination and misuse.
The harm ambiguous and biased algorithms can cause has been seen in virtually every facet of society, from criminal justice to social services to healthcare. And as the mass adoption of this technology continues to grow, becoming an integral part of everyday life, the challenge of AI bias and fairness has become a genuine concern across the board. Even that recent McKinsey & Company survey reported a marked increase in concern about AI explainability among respondents.
This has led to the growth of what is known as explainable AI: a field of study in which researchers use mathematical techniques to examine the patterns in AI models and draw conclusions about how they reach their decisions. The National Institute of Standards and Technology, which is part of the U.S. Department of Commerce, defines four principles of explainable AI. The first is that a system must deliver accompanying evidence or reasons for all of its outputs; the second, that those explanations must be understandable to individual users; the third, that the explanation must accurately reflect the process used to arrive at the output; and the fourth, that the system must only operate under the conditions it was designed for, or when it reaches sufficient confidence in its output.
“AI is not real intelligence, it’s not real consciousness, it doesn’t have any responsibility. Somebody needs to be responsible for the decisions made.”
All told, the goal of explainable AI is to make the rationale behind the output of an algorithm understandable to humans.
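To make that concrete: one simple, widely used technique in this space is permutation importance, which probes a trained model by shuffling one input feature at a time and measuring how much performance degrades, revealing which inputs the model actually relies on. Here is a minimal sketch with scikit-learn, using a stock dataset and an off-the-shelf model as stand-ins (nothing in it comes from the article’s sources):

```python
# Minimal explainability sketch: permutation importance with scikit-learn
# (`pip install scikit-learn`). The dataset and model are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for feature, importance in top:
    print(f"{feature}: {importance:.3f}")
```

Techniques like this only approximate a model’s reasoning, which is why the NIST principles above also demand that explanations accurately reflect the process that produced the output.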
“AI is not real intelligence, it’s not real consciousness, it doesn’t have any responsibility. Somebody needs to be responsible for the decisions made. You can’t just say ‘AI did it,’” Andrei said. “I think in order for these algorithms to have any credibility, they need to be able to explain why this is the decision they [made].”
Chaudhry said explainability is “one of the most important dimensions of ethical AI development,” adding that it will become a “necessity” for the future of AI.
Increased Collaboration Between Humans and AI
In a world where AI-enabled computers are capable of writing movie scripts, generating award-winning art and even making medical diagnoses, it is tempting to wonder how much longer we have until robots come for our jobs. While automation has long been a threat to lower-level, blue-collar positions in manufacturing, customer service and so on, the latest advancements in AI promise to disrupt all kinds of jobs, from attorneys to journalists to the C-suite.
But Andrei of SpaceNK thinks the ongoing fear that we will all eventually be ousted from our jobs by some chatbot is largely blown out of proportion, and he said it is important to keep in mind that artificial intelligence is, first and foremost, a research field. Even advanced capabilities like speech recognition and natural language generation are not signs of true intelligence. Therefore, AI cannot truly replace human intelligence.
“It’s like milk and manure are byproducts of a cow,” Andrei said. Just because a machine can make artificial milk (soy milk) and artificial manure (chemical fertilizer), that doesn’t make it an actual cow. The same is true of human versus artificial intelligence. “Really, artificial intelligence is just pretend intelligence, or simulated intelligence. … So, I don’t see it taking people’s jobs away.”
Instead, as companies continue to embrace artificial intelligence, 2023 will be a year of increased collaboration between humans and their computer counterparts.
“There is immense value in keeping the human element in the process intact,” Raghu Ravinutala, CEO and co-founder of Yellow.ai, told Built In via email. The conversational AI platform is focused on automating the customer experience industry, and it uses natural language processing to facilitate human-like conversations between users and AI agents via text and voice. “Essentially, a combination of human intelligence augmented with AI will help deliver customized strategies for an elevated experience.”
“There is immense value in keeping the human element in the process intact.”
This is also true of generative AI, for both text and images. While AI-generated art has received its fair share of criticism from the design and art community, many designers are actually leaning into this new technology to help with everything from character design to concept exploration. And some artists are using it to find inspiration for their own work. Meanwhile, writers are using these tools to help generate ideas and refine their style. For corporate copywriting, Johnston said Writer has several features to help people improve and automate their writing. The goal is simplification, not outright substitution.
“We fully believe that writers and AI should be working together,” Johnston said. “What we want in this process is to be able to accelerate, scale, help writers find the mundane tasks and the things that slow them down, and do those at a quicker pace, while still leaving room for creativity and the parts of writing that everybody enjoys.”
At the same time, it’s important to remember that humans and AI are not equal. Even if they are able to work together effectively, there shouldn’t be complete homogeneity, Andrei said. Otherwise, we run the risk of creating future generations of humans who don’t truly understand the value of their humanity. As an example, he pointed to the Saudi Arabian government’s decision to make the humanoid robot Sophia a legal citizen in 2017.
“They decided to reduce the human value to a first generation chatbot with some wires and some plastic skin,” Andrei said. “It’s not raising the robot to a level of human, by doing that you downgrade what it means to be human.”
The time to decide what role this increasingly intelligent technology will play in society moving forward appears to be right now. Just as AI is pushing the limits of what is possible out in the world, redefining everything from work to war, it is also forcing humanity to look inward at what it means to be cognitive and creative. At what point could something that is meant to be working for us suddenly work against us?
“I think we’re living in interesting times. We’re almost living at the confluence of two different trains of thought pretty much crashing into each other. And what’s going to come out of it, we don’t know,” Andrei said. “AI is not going to decide by itself where it goes, it will have to follow where humanity is going. And 2023, I think, is one of those major times.”