Artificial intelligence (AI) seems to be on everyone’s lips these days, but are we all talking about the same thing? In a recent Web Summit keynote, I argued that the epicenter of the AI conversation, as well as what was meant by the term “AI” itself, has undergone some tectonic shifts this century, with the loudest buzz coming from a different cast of characters in each decade:

Who Drives AI Work by Decade

  • 2003–2013: AI researchers
  • 2013–2023: AI builders
  • 2023–2033: AI users

Each group is talking about something slightly different when it talks about AI, which is the perfect recipe for confusion. That’s why it’s worth asking yourself which AI your colleagues, policy-makers, and fellow citizens are talking about, so you don’t jump to the wrong conclusions. Bear in mind the experience and perspective of whoever you’re getting your information from: each of these three groups has its own priorities when it comes to AI.


Whose AI Is It, Anyway?

AI was once entirely the territory of AI researchers. Two decades ago, the only people working seriously on AI were theorists whose goal was to create blueprints for general-purpose algorithms able to extract patterns from data that could be turned into instructions for a computer to handle new data inputs. That’s a lot like computer code written by human programmers, except it comes from patterns in data instead of explicit instructions written in a programming language.
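
To make that contrast concrete, here’s a minimal sketch using scikit-learn. The tiny spam-filter example is mine, not from the keynote; the point is only that the second approach derives its behavior from example data rather than from hand-written rules.

```python
# Hand-written rules vs. "instructions" extracted from patterns in data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def is_spam_rules(message: str) -> bool:
    # Explicit instructions written by a programmer.
    return "free money" in message.lower()

# Learned instructions: fit a classifier to labeled examples instead.
messages = ["free money now", "lunch at noon?", "claim your free prize", "meeting moved"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# The fitted model handles new inputs it was never explicitly coded for.
print(model.predict(vectorizer.transform(["free prize inside"])))  # -> [1]
```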

On the edge of researcher territory, you’d find a gaggle of science-fiction enthusiasts describing AI in the most far-fetched ways possible, which is laudable in fiction but simultaneously rather annoying for anyone trying to have a practical discussion. Anyone whose entire exposure to AI arrived via some rollicking good novels is bound to have wildly misinformed expectations. Imagine trying to have a sober conversation about geology with people who keep asking whether rocks have feelings because pet rocks have eyes painted on them ... and now you know what it’s like to discuss AI with people who only know it from science fiction about robots.

To benefit from today’s AI systems, the public will have to unlearn decades of distracting sci-fi tropes. Sam Altman, CEO of OpenAI, the company that created ChatGPT, puts it well: “ChatGPT is a tool, not a creature. It’s easy to get confused.”

At the turn of the century, working in AI meant you were a researcher — that was the only job available outside science-fiction writing. There was simply nothing practical you could do with it, so there was no choice but to stick to solving purely theoretical problems. Fast forward a decade and, thanks to the rise of cloud computing, progress in data-processing hardware, and voracious data collection (remember when Big Data was the buzzword du jour?), the conversation shifted to practical applications.

That doesn’t mean practical applications for the individual software developer, however. This was typically large-team work at enterprise scale because applied AI isn’t necessarily the same thing as inexpensively applied AI.

AI became a darling topic in the tech world. Those of us who were in the thick of it, myself included, felt like the AI buzz was everywhere ... until we stepped outside our echo chamber and saw just how niche our little revolution was. Engineers employed to implement business AI solutions, though numerous enough to drown out the researchers’ voices, were, as far as the general public was concerned, a rare species working on an obscure technology.

Then the rise of generative AI created a new wave of AI-aware users numbering in the hundreds of millions, effectively yanking the AI loudspeaker away from the engineers and builders toward AI-aware product users. (If you’re reading this and you’re not one of them, it’ll take you less than five minutes to change that by trying Google Bard or ChatGPT.)

Do these three groups represent three professions? Do they all have a right to claim they work in AI? That’s what we’ll explore in this article.

 

AI Research vs. Applied AI

Before you get the impression that AI research peaked at the turn of the millennium and might have simmered down in the intervening decades, think again. Researchers now enjoy even more funding from the feeding frenzy of applied AI teams looking for best-in-class algorithms for solving real business problems. Today, the research heroes of two decades ago preside over enormous conferences. Check out the 37th annual installment of NeurIPS if you’re looking for a pure dose.

And although there’s no sign of AI research slowing down, it no longer represents the only profession on the AI scene. Over the last decade, AI researchers’ voices have had to compete with an increasingly noisy ecosystem of applied AI builders. You’d find these folks at technology conferences, where the tech on display is disproportionately AI-oriented and at least one of the conference posters invariably features some kind of chrome-plated humanoid for no particular reason. For a taste of that scene, try Web Summit or a cloud provider conference like Google Cloud Next.

 

‘I Work in AI’

Both groups — AI researchers and applied AI builders — help themselves to the term AI, but they’re using it to mean very different things. Researchers are referring to particular general-purpose algorithms for other people to use to solve specific problems. To them, inventing new algorithms is the only acceptable activity that could possibly give a self-respecting individual permission to say, “I work in AI” ... which, to be fair, used to be the only job you could get in it. You couldn’t apply AI to all that much when I was in college.

Many AI researchers are tremendously annoyed when my kind — the builders of applied AI systems — cheerfully talk about how “we work in AI” when what we’re doing is taking their precious algorithms and “merely” using them to solve an important practical automation problem at scale.

Excuse me, merely?! Our applied AI work is just as challenging as AI research and deserves its own equal set of laurels. You see, solving thorny automation problems and building reliable software systems at enterprise scale is already a sophisticated pursuit, regardless of whether there’s data/AI involved. The data component adds a host of complications, making the idea that this work is easy, well, laughable. (Don’t worry, I used to be a researcher myself before I went over to the applied side, so I have plenty of empathy for both groups.)

My prediction for the coming decade is that the NeurIPSes, Web Summits, and Nexts of the world will be going stronger than ever, since each one represents an important type of contributor to an industry that’s continuing to build momentum. There’s a voracious hunger for more algorithms to be invented and more intelligent automation solutions to be implemented at scale. But there will be a new kid on the block. We’re already seeing a third group that’s strutting onto the scene with a chirpy call of “We work in AI toooo” as they skip merrily to their annual Prompt Engineering Conference. (Not yet a thing, but just you wait....)


Rise of the AI Users

Who are these bold newcomers? They’re the users. For the last decade, users got what we builders thought they wanted. (Sure, we availed ourselves of data science and experiments to hone our guesses about users’ desires, but we were still guessing.) The rise of generative AI is significant in that it gives users iterative tools to specify what they want the AI system to output for them.

Among those who use these tools to radically extend their personal and professional productivity and creativity, a job title is rising in popularity: prompt engineer. Since there’s no industry standard for this term yet, buyer beware. Hiring managers will find that candidate qualifications vary wildly, from “I’ve tinkered with what I typed into a generative AI tool once” all the way to “I’ve been on a GenAI Red Team, I know a lot about how to hack LLMs into producing troublesome hallucinations, and I’ve also figured out how to take advantage of increased token limits to generate more reliable API calls.” For candidates who think there’s a get-rich-quick scheme on the horizon, it’s worth noting that the big salary offers tend to coincide with the latter.

It’s hard to predict what the industry will accept as “real” prompt engineering since we’re already seeing the gap between newbie prompter and committed prompt hacker widening. At the leading edge, I see two different kinds of prompting expertise in two different activities:

  • Reliable secondary systems: “Such a cool way to get the system to produce that kind of output reliably!”
  • Pushing the creative envelope: “Wow, I didn’t even know we could get the generative AI system to do that!”

The first group is well-aligned with traditional engineering. Instead of basic prompting that anyone could do, such as “Please write me a wedding speech in the style of John Milton” (what a wedding!) or even “Here’s some Python code, translate it to C++,” they’re carefully crafting structured prompts, formatted to elicit reliable model behavior. So, before you dismiss these folks as non-engineers, take a look at the sketch below, and then at how these modern prompt slingers pull off Parameter-Efficient Fine-Tuning (PEFT).
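
Before we get to PEFT, here’s a taste of what that structured prompting can look like in practice. This is a minimal sketch under my own assumptions: call_llm is a hypothetical placeholder for whichever model API you use, and the delimiters-plus-JSON output contract is just one common pattern for making model output reliable enough to feed a downstream system.

```python
import json

# Structured, format-pinned prompting: explicit delimiters, a pinned output
# schema, and validation of the response. call_llm is a hypothetical
# stand-in for whichever LLM API you actually use.

PROMPT_TEMPLATE = """You are a code translator.
Translate the Python code between the <code> tags to C++.
Respond with ONLY a JSON object of the form {{"cpp": "<translated code>"}}.

<code>
{python_source}
</code>"""

def translate_to_cpp(python_source: str, call_llm) -> str:
    prompt = PROMPT_TEMPLATE.format(python_source=python_source)
    raw = call_llm(prompt)
    # Fail loudly if the model breaks the output contract, rather than
    # passing malformed text downstream.
    return json.loads(raw)["cpp"]
```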

PEFT is a family of techniques for adapting what a base large language model (LLM) can do without retraining all of its weights, and its close cousin, in-context learning, lets you steer the model by giving it structured examples right in the prompt. How is this possible? The token limits — you can loosely think of a token limit as the amount of text the model can process, and thus how much context it has when picking its next word from your prompt — have gotten huge, meaning you can input thousands of tokens of your choosing. You can effectively teach these systems by putting your data right into the (enormous) prompt ... if you have the engineering know-how, that is. Trust me, PEFT is not pfffft.
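
In miniature, that in-context “training” can look like the sketch below. The toy examples and the call_llm placeholder are mine; real few-shot prompts are typically far larger, which is exactly why the growing token limits matter.

```python
# In-context learning: pack labeled examples into one (potentially enormous)
# prompt and let the model continue the pattern. A minimal sketch; call_llm
# is again a hypothetical placeholder for your model API.

EXAMPLES = [
    ("I waited 40 minutes and the food was cold.", "negative"),
    ("The staff went out of their way to help us.", "positive"),
]

def build_prompt(examples, new_review: str) -> str:
    lines = ["Classify each review as positive or negative.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Label: {label}", ""]
    lines += [f"Review: {new_review}", "Label:"]  # model completes the pattern
    return "\n".join(lines)

prompt = build_prompt(EXAMPLES, "Decent pizza, terrible service.")
# label = call_llm(prompt)  # hypothetical: send to your model of choice
print(prompt)
```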

The second group represents individual productivity taken to new heights — new frontiers in what people are capable of when they get access to interesting tools. It takes persistence, creativity and curiosity to push the envelope on what you can get a machine to do for you. These are also the folks who might say that if you can’t get LLMs to do anything useful for you, it says more about you than about the LLM.

Some of those self-styled prompt engineers will earn synchronized eye rolls from both the AI researchers and the AI engineers, but I’d recommend keeping an open mind. I expect that we’ll be thoroughly impressed by prompt engineers who lean into greedily using AI tools to unleash their most productive and creative selves. They’ll show us new levels of what humankind is capable of.

And let’s not forget, fellow AI professionals, that all that engineering and all those algorithms are a means to an end. If the end is available in its pure form, why not celebrate the shortcut? After all, isn’t human creativity an essential expression of what we’re trying to enable with technology? If these newbies can come up with an original idea and achieve in an afternoon what would have taken me months in grad school, I’m inclined to clap. I bet the Prompt Engineering Conference will be a fascinating event to visit! Maybe I’ll even speak there one day.

But, to summarize, a lot of the buzz in AI these days comes from a large group of people (users who don’t necessarily want to be coders or mathematicians) who previously weren’t invited to interact creatively with AI systems.

Their excitement is about tinkering with the output, not the equations or the algorithms or the training data, which is a confusing notion for AI researchers (who tinker with equations to make machine learning algorithms) and applied AI engineers (who tinker with the algorithms and data to enable the output).

“Prompting seems to be difficult for some machine learning researchers to understand. This is not surprising because prompting is not machine learning. Prompting is the opposite of machine learning.” —Denny Zhou

And while I’d be delighted if we all agreed that there needn’t be fisticuffs over “We all work in AI,” we’ll have a hard time understanding one another if we forget that we’re each dealing with a very different part of the homonym swamp that is AI. That goes double if we invite some science-fiction enthusiasts to the party, since their AI is another beast entirely. The good news is that it’s a swamp we can all coexist in happily.
