Tokenmaxxing Explained: Why AI Use Is Becoming a Workplace Status Symbol

Tokenmaxxing is Silicon Valley’s latest obsession, where AI use itself becomes a metric for success. Some say it boosts productivity, while others see it as a meaningless status game. Let’s unpack one of tech’s most divisive work trends.

Written by Ellen Glover
Published on Apr. 22, 2026
Image: Daniel Plana Trenchs / Shutterstock
Reviewed by Sara B.T. Thiel | Apr. 22, 2026
Summary: Tokenmaxxing turns AI usage into a proxy for productivity, pushing workers to consume massive amounts of tokens via autonomous agents. While it fuels adoption, critics say it is costly, performative and fundamentally flawed, rewarding activity over outcomes and creating pressure, burnout and waste.

In many ways, artificial intelligence has helped make work easier, cheaper and remarkably more efficient. But it has also spawned a controversial new competition between employees, where productivity is measured less by what you produce and more by how aggressively you use the technology itself.

The trend, known as “tokenmaxxing,” is taking much of the tech industry by storm. At some companies, individuals are ranked on leaderboards based on how much they use AI, with generous perks and incentives encouraging them to push these tools to their limits. In extreme cases, people are running their setups around the clock, racking up usage (and massive invoices) in an effort to automate as much of their work as possible. The assumption is that the more you use AI, the more productive you must be. Those who lean in the hardest will come out on top.

What Is Tokenmaxxing?

Tokenmaxxing is the practice of maximizing one’s AI usage, specifically by consuming as many tokens as possible with autonomous agents. This has emerged as a workplace trend in some corners of the tech industry, with ultra-high AI utilization being treated as a signal of productivity, regardless of the output.

But not everyone is buying into the hype. To some, the whole thing looks like nothing more than a glorified rat race — an expensive, useless status game that rewards style over substance under the guise of progress. 

So, how does tokenmaxxing work? Why has it caught on so fast? And is there a smarter way to apply it at work? Let’s dive in.

Related Reading: Your AI Use Is a Performance Metric. Here’s How to Talk About It.

 

First, What Are Tokens?

To understand tokenmaxxing, it helps to start with what a token is. In artificial intelligence, tokens are the small chunks of text that an AI model processes when it’s trying to understand a prompt and generate a response. They’re also used to measure overall AI utilization and calculate cost. 

AI companies typically charge a monthly subscription for a fixed allotment of tokens, with additional usage billed separately or unlocked through higher-tier plans. For context, one token equals about four characters, and pricing typically runs a fraction of a cent per token, varying by model and by whether the token is an input or an output (companies tend to charge more for output tokens than input tokens).

Until recently, even highly active AI users consumed a relatively modest number of tokens. Someone using a tool like ChatGPT or Claude to draft and revise a report, for example, might burn through 10,000 tokens, or about 7,500 words, across multiple iterations, which would cost anywhere from a few cents to a dollar. Using millions of tokens would require hours of sustained, hands-on use. Reaching into the billions or trillions was virtually impossible — until now.
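The back-of-the-envelope math above can be sketched in a few lines of Python. The per-million-token prices below are hypothetical placeholders, not any provider’s actual rates, and the four-characters-per-token rule is only a rough heuristic:

```python
# Rough token-cost estimator. The prices below are hypothetical
# placeholders for illustration, not any provider's actual rates.

def words_to_tokens(words: int) -> int:
    """One token is roughly 0.75 words (about four characters)."""
    return round(words / 0.75)

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_million: float = 1.00,    # assumed rate
                  output_price_per_million: float = 4.00) -> float:  # assumed rate
    """Return the estimated dollar cost of a request."""
    return (input_tokens / 1_000_000 * input_price_per_million
            + output_tokens / 1_000_000 * output_price_per_million)

# Drafting and revising a ~7,500-word report across several passes:
tokens = words_to_tokens(7_500)  # roughly 10,000 tokens
print(tokens)
print(round(estimate_cost(tokens // 2, tokens // 2), 4))
```

At these assumed rates the whole report costs a few cents; pushing the same arithmetic into the billions of tokens is where the thousands-of-dollars invoices come from.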

Related Reading: In the Vibe Coding Era, What Does a Software Engineer Even Do?

 

How Tokenmaxxing Took Over the Workplace

Tokenmaxxing has come about amid the rise of so-called “vibe coding” tools like Claude Code and Codex. Instead of simply responding to prompts one at a time, these systems use AI agents to work autonomously for hours on end, reviewing and editing large codebases and writing entire programs while their human users are out living their lives. Each agent can also spin up its own sub-agents to tackle different parts of a task in parallel. All the while, they’re devouring massive volumes of tokens. 

Unsurprisingly, this behavior has taken hold among software developers in particular. Some coders have begun orchestrating entire teams of agents, setting dozens of them loose on multiple projects at once — and consuming a substantial number of tokens in the process, hence the phrase “tokenmaxxing.”
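The fan-out pattern described above, where one agent delegates pieces of a task to sub-agents running in parallel, can be sketched with ordinary concurrency primitives. Everything here is a hypothetical stand-in (the function names and the toy token tally are not from any real tool; actual products like Claude Code manage this orchestration internally):

```python
import asyncio

# Hypothetical sketch of an agent fanning a task out to sub-agents.
# run_subagent stands in for a real agent call; it just tallies the
# tokens each sub-task would notionally consume.

async def run_subagent(subtask: str) -> int:
    """Pretend to work on one subtask; return tokens 'consumed'."""
    await asyncio.sleep(0)             # placeholder for real agent work
    return len(subtask.split()) * 100  # toy token estimate

async def orchestrate(task: str, subtasks: list[str]) -> int:
    """Split a task into subtasks and run the sub-agents in parallel."""
    results = await asyncio.gather(*(run_subagent(s) for s in subtasks))
    return sum(results)  # total tokens across all sub-agents

total = asyncio.run(orchestrate(
    "refactor the billing module",
    ["audit existing code", "write new tests", "apply the refactor"],
))
print(total)
```

The point of the sketch is the shape of the loop: every sub-agent runs concurrently and every one of them bills tokens, which is why these setups burn through usage so much faster than a single chat window ever could.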

Many workplaces encourage this behavior, too, creating internal leaderboards to track how many tokens each person uses and rewarding those at the top with trophies, special titles and other prizes. “Token budgets” are even becoming another form of employee compensation, alongside stock options and yearly bonuses. In the end, some workers are going through millions of tokens a week, totaling thousands of dollars a month. And their employers are happily footing the bill, surmising that more AI use means more productivity, and of course, more money for the business in the long run. 

“We all should be tokenmaxxing,” Sonya Huang, a partner at Sequoia Capital, told The Wall Street Journal. The VC firm has its own token leaderboard (as do many of the companies in its portfolio, according to Huang), and it offers firmwide office hours to help drive more AI usage internally. “There is this insane new piece of technology that is fundamentally going to rewrite how we work. Some companies are going to make it, some companies are not,” she continued. “The thing that matters most for your company is: Has my employee become insanely AI-pilled? And that requires getting them on this tokenmaxxing mindset.”

Not everyone is convinced, though. Critics argue that raw token consumption is a poor proxy for productivity, and that celebrating ultra-high usage is essentially just rewarding people for spending the most money — not necessarily for producing the best results.  

Nevertheless, the trend is doing wonders for the AI companies selling the tokens themselves. Anthropic says it more than doubled its revenue projections in just two months, attributing much of that success to the breakneck growth of Claude Code. And OpenAI claims its Codex tool’s weekly active users have tripled since the start of 2026, and that overall Codex use has increased fivefold in a matter of months. The more tokens users need, the more money these platforms bring in, making this once-abstract unit of computation a much-needed revenue stream for companies that are desperate to finally turn a profit.

 

What’s the Appeal of Tokenmaxxing?

The primary appeal of tokenmaxxing is that it accelerates AI use in the workplace. By and large, corporate America remains steadfast in the belief that anyone who doesn’t fully embrace AI risks falling behind, or becoming obsolete altogether. For tech companies, the complete and total adoption of artificial intelligence is a matter of survival. Token leaderboards and similar incentives offer a fun, simple way to ensure everyone is moving in the right direction.

“You should be getting people at all different kinds of functions actually engaging with and experimenting [with AI],” LinkedIn co-founder and venture capitalist Reid Hoffman said in an interview at Semafor’s World Economy Summit. Though he did not mention “tokenmaxxing” by name, Hoffman said tracking employees’ token spend was a good idea. “Some of it will be experiments that’ll fail, that’s fine,” he added, but “you want a wide variety of people using it essentially, collectively, and simultaneously.” 

Indeed, the New York Times spoke with several tokenmaxxers who claim that the practice makes them more productive at work. Others, however, framed their token maximization as a largely performative move — a strategic way to signal to their bosses and colleagues that they’re keeping up with the times.

Related Reading: When AI Writes Code, What Skills Are Employers Looking For?

 

Reasons to Avoid Tokenmaxxing

While the allure of tokenmaxxing might be strong, there are several reasons to avoid the practice.

Tokenmaxxing Isn’t Outcomes-Based

Encouraging employees to simply burn through as many tokens as possible doesn’t guarantee high-quality work. After all, traditional generative AI already has a well-documented tendency to produce what has become known as “workslop,” or large volumes of low-value, error-ridden outputs that can cost companies millions of dollars a year. The problem gets substantially worse with agentic AI, since autonomous agents can essentially run wild with little to no human supervision or input, eroding efficiency rather than improving it.

In other words, you can tokenmaxx all day long and still wind up with a whole lot of nothing. A developer might be churning out completely useless code at scale (and wasting valuable processing power), but winning on their company’s leaderboard because they’re using a lot of tokens to do it. Without clear, outcome-based objectives, adopting a culture that prioritizes tokenmaxxing could leave you optimizing for activity rather than real, useful results.

Tokenmaxxing Is Expensive

And then there’s the issue of cost. In the most extreme cases, workers are gobbling up billions — even trillions — of tokens a month, setting their employers back thousands of dollars a day. “It doesn’t seem sustainable,” one anonymous OpenAI employee told the New York Times. 

Things have gotten so expensive that some companies are applying strict limits on the number of tokens available to their employees, citing a computing capacity crunch. But others remain undeterred, spending millions a year to give their teams unfettered access to AI tools. Even Jensen Huang, the founder and CEO of AI chipmaker Nvidia, said on the All-In Podcast that he would be “deeply alarmed” if an engineer earning $500,000 a year didn’t use at least $250,000 worth of tokens.

Tokenmaxxing Can Create a Toxic Work Environment

Although tokenmaxxing is often pitched as a fun, gamified way to boost efficiency, it tends to create a grueling work environment, where the persistent use of AI feels compulsory. Several tech companies have announced that they are laying off hundreds of employees due to automation, even as they elevate AI implementation as a key performance metric.

“When it comes to identifying redundancies, it’s a fair assumption that things like ‘AI usage’ and ‘pull requests per engineer’ will be taken into account, especially as one theme of such layoffs will almost certainly be that the employer wants to focus more on AI,” Gergely Orosz, who runs a popular newsletter for software engineers called The Pragmatic Engineer, wrote. “So, it’s common sense (and self-preservation) to use more AI, if only not to be seen as unproductive.”

This pressure has triggered something that feels a lot like addiction to even the most successful people in the industry. OpenAI co-founder Andrej Karpathy, who coined the term “vibe coding,” told the No Priors podcast that he was in a “state of AI psychosis” for months, spending upward of 16 hours a day directing swarms of agents. If he had any unused tokens at the end of the month, he said, he would “feel extremely nervous” and rush to exhaust his supply to keep up with everyone else. Meanwhile, Quentin Rousseau, CTO and co-founder of incident management company Rootly, told Axios that he couldn’t sleep for months after switching to agentic coding — to the point where he eventually needed a doctor to prescribe him medication just so he could shut his brain off at night.

“They operate a lot like slot machines,” he said of these tools. “You hit one prompt, you get an answer, you get some coding done.” But then, other times, the agent will fail, and the user is pulled into fixing it. 

Over time, that ceaseless loop of prompting, monitoring and correcting can wreak havoc on a worker’s mental health, resulting in what researchers from Boston Consulting Group and UC Riverside call “brain fry,” a form of cognitive overload caused by excessive AI exposure. Their findings, published in Harvard Business Review, linked heavy AI use to increased errors, “decision fatigue” and a higher desire to quit among employees.

“Between nonstop headlines about AI replacing jobs and the ongoing hiring slowdown, many professionals feel like they’re stuck in a high-stakes game of musical chairs where the music hasn’t stopped, but it might at any moment,” Elizabeth Bodett-Dresser, a therapist at Still Oak Counseling, previously told Built In. That sense of uncertainty can be “exhausting,” she added, not just from the work itself but from the “mental gymnastics” of imagining every possible bad scenario, whether it’s missing a new development or losing a job completely. “When employees believe those voices, it creates a constant sense of pressure.”

And this phenomenon appears to be permeating the entire tech industry. Nikunj Kothari, a venture capitalist in San Francisco, wrote a recent Substack article about the rise of what he calls “token anxiety,” describing a culture that is obsessed with AI-driven productivity. 

“Dinner conversations used to start with ‘what are you building?’ That’s over. Now it’s ‘how many agents do you have running?’,” he wrote. “People drop the number the way they used to drop follower count. Quietly competitive. The flex isn’t what you’ve accomplished anymore. It’s what’s working while you’re sitting here not working.”

Related Reading: Feeling Burned Out at Work? AI Might Be to Blame.

 

Alternatives to Tokenmaxxing

If you truly want to increase productivity with AI and measure that productivity effectively, tokenmaxxing alone will only get you so far. Rather than optimizing for total token usage, experts suggest focusing on how the technology is actually being used, and what it yields in the end.

Measure Outcomes and Use Cases, Not Just Activity

Instead of only tracking how many tokens are being used, track what is being achieved. That means tying AI usage to concrete results — time saved, revenue generated, et cetera — while also documenting the specific use cases where AI added the most value. This will make it easier to identify what’s working and scale it, as opposed to just prioritizing sheer volume.
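As a toy illustration of what outcome-based tracking might look like in practice (every number, name and field below is made up, not real data from any company), the comparison could be as simple as dividing token spend by the time a use case actually saves:

```python
# Toy comparison of outcome-based AI metrics. All figures and
# use-case names below are hypothetical, for illustration only.
use_cases = [
    {"name": "weekly reporting automation",
     "token_cost_usd": 40.0, "hours_saved": 24},
    {"name": "agent swarm on legacy refactor",
     "token_cost_usd": 900.0, "hours_saved": 5},
]

results = {}
for uc in use_cases:
    # Cost per hour saved: a low number means the use case pays off.
    cost_per_hour = uc["token_cost_usd"] / uc["hours_saved"]
    results[uc["name"]] = round(cost_per_hour, 2)
    print(f'{uc["name"]}: ${cost_per_hour:.2f} per hour saved')
```

On a raw token leaderboard the second use case wins by a landslide; measured by cost per hour saved, the cheap reporting automation is the one worth scaling.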

“Quantify your results when you can,” Brandon Sammut, chief people and AI transformation officer at Zapier, told Built In in a previous interview. “‘I automated our weekly reporting process using AI, saving roughly six hours a week across the team that we’ve reallocated to generating 15 percent more leads per week’ lands differently than ‘I’ve been using AI tools regularly.’”

Prioritize Good Judgment

Consuming lots of tokens is only valuable if the outputs are accurate and helpful. The focus, then, should be on where those tokens are being applied, as well as how their outputs are reviewed, refined and validated. At the end of the day, human oversight and the ability to think critically are essential components of effective AI use.

“Anyone can describe what they delegated to AI,” Sammut continued. “What’s equally important is that you evaluate and are accountable to what AI produced.”

Focus on Thoughtful Experimentation

When it comes to artificial intelligence, experimentation is essential. But it should also be intentional. The goal here isn’t to blindly insert AI everywhere, but to identify where it genuinely streamlines work and improves outcomes. This requires constant exploration and iteration, with people evaluating what works and refining (or abandoning) what doesn’t.

“You’re not trying to build a culture of adoption of AI,” Nick Kennedy, a partner at tech consulting firm West Monroe, told Built In in a previous interview. “You’re trying to build a culture of innovation. You’re trying to build a culture of continuous improvement. You’re trying to build a culture of testing and learning.” 

To cultivate this kind of culture, employees need to feel safe asking questions, testing out ideas and acknowledging when something isn’t working. When teams feel free to experiment without the pressure of being perfect or doing the absolute most, it is much easier to spot use cases that truly work and ditch the ones that don’t.

“We should have, essentially, a weekly check-in. It doesn’t have to be everyone all the time with each other — but a group check-in about ‘what did we try to do new this week, to use AI for personal and group and company productivity, and what did we learn?’” Hoffman explained at the summit. “Because what you’ll find, some of the things are really amazing.”

Frequently Asked Questions

What is a token in AI?

A token is a portion of text that AI models use to process language, usually amounting to a few characters. When a user enters a prompt, the model converts those words into tokens, processes them internally, and then generates a response with more tokens. These are used to measure how much an AI system is being used and to calculate the cost of running it.

How does tokenmaxxing work?

Tokenmaxxing involves pushing AI tools to generate as many tokens as possible, with tokens being the small chunks of text that models use to understand inputs and produce outputs. It is accomplished by running lots of prompts, chaining tasks together, or using autonomous AI agents that can work without supervision. In more advanced setups, multiple agents may run in parallel, or even leverage sub-agents to tackle multiple jobs at once. Because these tools track activity in tokens, this kind of heavy, continuous use adds up quickly.

Is tokenmaxxing good or bad?

The tokenmaxxing trend has a mixed reputation in the tech industry. Some claim it encourages AI adoption, boosts productivity and fosters innovation. But others argue it is an expensive, inefficient practice that prioritizes sheer activity over meaningful outcomes. Ultimately, the value of tokenmaxxing depends on whether token usage is tied to real, measurable results.
