The click farmer has become an almost mythical figure, a symbol of how meaningless work can be: someone who perpetrates click fraud, generating fake, paid-for clicks on advertisements and affiliate links, along with other ad and engagement fraud.
Click farmers can be found around the world. In the United States, people sell fake clicks on cheap smartphones as a side hustle; in China, giant, multi-thousand-device “phone farms” increasingly employ humans to make their clicks look more authentic.
In the Philippines, a man who went by Albert in a Cracked article worked at a warehouse click farm.
“Few people talked, so all you would hear is the clicks,” he said. The clicking, the writer added, “left a burn mark on [Albert’s] very soul.”
Albert isn’t typical, though. More often, human-staffed phone farms “generate legit looking social profiles that can then be sold off,” Andreas Naumann, director of fraud prevention at Adjust, told Built In. They “tend to do things that pay out better than click fraud ... [which is] a lot more scalable without human interaction.”
And if click fraud is anything, it’s scalable. It’s one of the more common forms of ad fraud, researcher Sam Barker, of Juniper Research, told Built In. By the research firm’s estimate, ad fraud cost advertisers about $19 billion in 2018, and that number is growing: Juniper Research forecasts it will reach $44 billion by 2022.
The typical click fraud perpetrator is not a human “farmer,” but software. Or, on a macro level, a cornucopia of different programs, often working in the background of unsuspecting people’s devices. It’s possible your devices have generated millions of fake clicks without your knowledge.
Computers and phones can click on their own
How does this work, exactly? To explain, we need to zoom out and review the difference between brand and performance advertising. Brand advertising is all about raising awareness — this is what a Coca-Cola billboard by the highway is doing — and publishers usually get paid one lump sum for it. Performance advertising, meanwhile, aims to trigger specific, measurable actions. When it comes to performance ads, publishers get paid per click, email sign-up, purchase or app install.
Payment for performance ads always flows through clicks, though, even when the ad isn’t a pay-per-click one. Let’s say a company posts a pay-per-installation ad on Website X — if someone clicks on that ad, then installs the app, Website X gets paid. If that person sees the ad, doesn’t click, and later installs the app, Website X is very unlikely to get paid. In other words: Whatever the performance metric, clicks are how advertisers attribute performance. So clicks, in aggregate, make big money.
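To make that attribution logic concrete, here is a minimal sketch in Python. The data and the helper are invented for illustration, and real attribution platforms are far more elaborate, but the core “last click before the install wins” rule looks roughly like this:

```python
from datetime import datetime

def last_click_attribution(clicks, install_time):
    """Credit the publisher with the most recent ad click before the install.

    `clicks` is a list of (publisher, click_time) pairs; both the data and
    this helper are hypothetical, for illustration only.
    """
    eligible = [c for c in clicks if c[1] <= install_time]
    if not eligible:
        return "organic"  # no prior click: nobody gets paid for this install
    return max(eligible, key=lambda c: c[1])[0]

clicks = [
    ("website_x", datetime(2019, 6, 1, 12, 0)),   # user clicked the ad on Website X
    ("website_y", datetime(2019, 6, 1, 14, 30)),  # later clicked a different placement
]
print(last_click_attribution(clicks, datetime(2019, 6, 1, 15, 0)))  # website_y gets the payout
```

Whoever owns that last recorded click collects the per-install payment, which is exactly the position fraudsters maneuver themselves into.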
“Where there’s money to be had, fraudsters are there,” Bhavana Mathur, vice president of product management at Tune, told Built In.
Click fraud is especially pervasive, and attractive, because software can “click” autonomously without the user knowing. This allows for click fraud on desktop and on mobile, most of which falls into two basic categories: click spamming and click injection.
Click spamming
There are a few different types of click spamming, but they all involve a high volume of clicks from the same device — and the person who owns that device has no idea these clicks have occurred.
These fraudulent clicks can occur through web rerouting, where a user clicks on a normal link, and gets sent to their planned destination — but along the way, they make a near-invisible pit stop on an ad. This raises the ad’s click count, a phenomenon the marketing industry calls “hit inflation.”
Another kind of click spamming happens on sketchy websites. In the foreground, a user may be illegally streaming a great TV show; in the background, invisibly, Naumann notes, it’s possible that “devices are being exposed to hundreds and hundreds of impressions and clicks and videos.”
This is a “spraying and praying” approach to click fraud, Mathur notes. Not every fake click pays off; far from it. It’s a game of chance. If software fake-clicks on an ad for an app, that likely only pays if the user converts — in other words, if they actually install the app in question.
That’s a moonshot, since the user never clicked on, or even saw, the original ad. But if software can fake enough clicks on enough ads across enough devices, it’s bound to luck out. Even if it happens hours or days after the fake click, a user will eventually do something monetizable, handing the fraudster what looks like a conversion, at least to the untrained eye.
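A back-of-the-envelope calculation shows why the odds still favor the fraudster at scale. Every number below is an illustrative assumption, not a figure from Juniper, Adjust or Tune:

```python
# Illustrative assumptions only; none of these figures come from the article.
fake_clicks = 10_000_000        # fake clicks sprayed across many devices
organic_install_rate = 0.0005   # chance a "clicked" user happens to install the app anyway
payout_per_install = 2.00       # dollars the advertiser pays per attributed install

stolen_installs = fake_clicks * organic_install_rate
print(stolen_installs, stolen_installs * payout_per_install)  # 5000.0 installs, 10000.0 dollars
```

Each individual fake click is almost worthless; ten million of them are not.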
Click injection
This is the opposite of click spam’s spraying and praying approach — you might call it a stalking and pouncing approach instead. Like click spamming, click injection involves software operating in the background of a device, autonomously clicking on links users never see. But those clicks happen under very specific circumstances.
Let’s say you click on an ad for a dog-walking app on your phone.
“You have a dog, you need walks,” Naumann said, spitballing. “This sounds amazing.”
The click takes you to the app store, where you start to download the Dogwalking App. So far, so good. But now, because of the way Android phones broadcast new installs to other apps, your existing apps can see that you’ve initiated an app download, and they can see which app, specifically, you’re downloading. If one of your existing apps dabbles in fraud, it can connect with its home server and click autonomously on an ad for the Dogwalking App, which makes it seem like that fraudster app prompted the Dogwalking App installation. It was, after all, the last place you “clicked” on an ad before you actually opened the app.
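Using the same last-click rule sketched earlier, here is a hypothetical event timeline showing why the injected click wins. The timestamps and names are invented; this is not Adjust’s actual attribution code:

```python
from datetime import datetime

events = [
    ("click", "legit_network", datetime(2019, 6, 1, 9, 0, 0)),           # the dog-walking ad you actually tapped
    ("install_start", None, datetime(2019, 6, 1, 9, 0, 5)),              # download begins; other apps can see it
    ("click", "fraudster_app", datetime(2019, 6, 1, 9, 1, 10, 500000)),  # injected click, fired in the background
    ("first_open", None, datetime(2019, 6, 1, 9, 1, 10, 750000)),        # you open the app 0.25 seconds later
]

first_open = next(t for kind, _, t in events if kind == "first_open")
clicks = [(src, t) for kind, src, t in events if kind == "click" and t <= first_open]

# Last-click attribution hands the credit, and the payout, to whoever clicked last
# before the first open: here, the app that injected its click a quarter-second earlier.
print(max(clicks, key=lambda c: c[1])[0])  # fraudster_app
```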
When we say “fraudster app,” that doesn’t mean the app doing the fraud is purely malicious, either. Often, Naumann noted, apps that do what they say they do — clean up memory, say, or save battery — also inject clicks and poach ad dollars.
This sounds sneaky, and it is. It’s only a piece of the puzzle, too — click fraud, and ad fraud in general, evolves constantly.
“The ecosystem in which fraud is perpetrated is going to be vastly different in five years’ time to what it is today,” said Barker of Juniper Research.
But when it comes to the above types of click fraud, at least, marketing platforms like Adjust and Tune have solutions. Today, fraud prevention features are “table stakes” for marketing platforms, Mathur noted.
They used to be exactly the opposite, though — features marketers actively avoided.
The allure of click fraud
You’d think anyone would want to know they were being defrauded. But for marketers, screening out fake clicks didn’t just mean saving money. It meant abandoning an entire worldview.
“The performance network sales pitch always was, ‘We deliver the installs at a competitive price at massive volumes, as long as you can pay for those volumes,’” Naumann said.
It sounded too good to be true, and it was. Let’s say, for argument’s sake, that an advertiser wants to reach a 30-year-old man in New York City.
“Those people don’t turn around when trucks crash into one another,” Naumann said. “How do you get their attention with a mobile advertisement?”
Even when marketers can reach billions of devices, he said, only about 1 percent of those users are interested in engaging with ads. This ran totally counter to the ubiquitous network sales pitch, though, and Naumann, who has been working on ad fraud detection since 2007, saw firsthand how hard this was for marketers to accept.
Back when he got his start, click fraud was a thriving cottage industry. Naumann estimates it dates back to around 1997, when publishers realized they could make more money by clicking on their own banner ads. The fraud grew more sophisticated from there; think spammy pop-up ads that open more pop-up ads without you lifting a finger.
When smartphones — essentially, tiny computers — began gaining mainstream traction, he knew that click fraud would thrive on these portable devices too. But marketers didn’t want to hear it, at least at first. Reducing mobile click fraud, he explained, meant charging seven times the going rate per installation, with no guarantee of a million installations, either — only about 10,000. Those 10,000 installs would be real, but the overall package still didn’t sound attractive to advertisers.
The tide shifted toward fraud prevention, though, thanks to a few key companies, Naumann said. Procter & Gamble, for instance, was among the first to prioritize authenticity over scale in digital advertising. Today, most advertisers have followed its lead.
But Naumann recalls that, even in 2016, when he first started working on mobile fraud prevention at Adjust, “we created a lot of trouble for a lot of people.”
“Plenty of people lost their jobs because we did ours,” he added. “Nobody knew before how much fraud was there.”
So, how do we know now?
Ferreting out fraud
The Tune and Adjust platforms use several strategies to prevent fraudsters from making money.
One involves comparing recorded conversion rates with average conversion rates. Click-through on an ad, Naumann said, is “a very crude measurement,” but it tends to hover around 1 percent — so, one in every hundred viewers actually clicks. A 2 or 3 percent conversion rate is excellent; a 15 percent conversion rate, though, is suspicious (unless it’s an “incentivized conversion,” where users click through for some type of credit or coupon — those typically yield higher conversion rates).
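A minimal sketch of that sanity check might look like the snippet below. The thresholds echo the rough figures Naumann cites; the function itself is invented and is not Adjust’s or Tune’s actual rule set:

```python
SUSPICIOUS_CTR = 0.15  # a 15% click-through rate is a red flag; the norm hovers around 1%

def looks_fraudulent(impressions, clicks, incentivized=False):
    """Flag a traffic source whose click-through rate is implausibly high."""
    if impressions == 0 or incentivized:
        return False   # incentivized placements legitimately convert well above baseline
    return clicks / impressions >= SUSPICIOUS_CTR

print(looks_fraudulent(impressions=100_000, clicks=18_000))  # True: 18%, far past suspicious
print(looks_fraudulent(impressions=100_000, clicks=2_500))   # False: 2.5% is excellent, not suspect
```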
Sky-high, fraudulent click-through rates were hard for advertisers to let go of at first. But while filtering out click fraud lowers click-through rates, it improves other metrics, Mathur notes. Revenue per click, for instance, rises when fraud clicks drop. Tune’s platform highlights the metrics that improve with fraud prevention, as well as click-through rates.
Fraud detection tools can also catch odd click-through rates at the device level.
“If I’m a human looking at an advertisement, the number of clicks from my IP address to that advertisement should be a sum total of one,” Mathur said. But one version of click spamming, only lucrative on pay-per-click ads, is just “hammer[ing] the ad” from one device. Think a million clicks, literally, from the same computer.
At Tune, “we detect non-unique clicks,” Mathur said, “and we basically say only the first click is allowed.” Further clicks from the same IP address don’t impact the ad’s metrics.
Adjust is working on similar functionality, requiring logical relationships between page views, or impressions, and clicks. There’s only one click allowed per impression, and clicks don’t count toward ad metrics if they don’t actually open the landing page. (This is a hallmark of click spam; the device autonomously clicks on links, but never actually opens the destination page.)
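Put together, those two rules might look something like this in code. It is an illustration of the logic described above, not either company’s implementation, and every field name is made up:

```python
def filter_clicks(clicks, impressions, landing_page_opens):
    """Keep only clicks that pass two illustrative checks:
    1. only the first click from a given IP address counts (the dedup Tune describes);
    2. each click must map to exactly one prior impression and must actually have
       opened the landing page (the relationships Adjust requires).
    """
    seen_ips, used_impressions, valid = set(), set(), []
    for click in clicks:
        if click["ip"] in seen_ips:
            continue                                  # non-unique click: ignore it
        if click["impression_id"] not in impressions:
            continue                                  # no impression behind the click
        if click["impression_id"] in used_impressions:
            continue                                  # only one click allowed per impression
        if click["id"] not in landing_page_opens:
            continue                                  # never opened the page: classic click spam
        seen_ips.add(click["ip"])
        used_impressions.add(click["impression_id"])
        valid.append(click)
    return valid
```

Clicks that fail any of the checks simply never count toward the ad’s metrics, so the fraudster’s work earns nothing.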
There’s also a certain logical lag between an impression and a click, or a click and an installation. “An impression that is 10 hours old can’t generate a click, because nobody’s seeing that anymore,” said Naumann, explaining Adjust’s planned fraud protection rules.
Conversely, if a user installs and opens an app 0.25 seconds after hitting a click-to-install link, that’s a potential symptom of click injection.
A normal lag between clicking to download and actually completing the download is more like 30 seconds to a minute, Mathur estimated. Tune currently offers advertisers “time to action” reports, which help them notice strange gaps (or strange crowding) between impressions and clicks, clicks and installs, and any other time-stamped, measurable user behavior that’s of interest.
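Here is a sketch of those timing checks. The 10-hour window comes straight from Naumann’s example; the 10-second minimum click-to-install gap is an assumption, picked to sit well below the 30-to-60-second lag Mathur describes as normal:

```python
from datetime import datetime, timedelta

MAX_IMPRESSION_TO_CLICK = timedelta(hours=10)  # an impression this stale can't plausibly produce a click
MIN_CLICK_TO_INSTALL = timedelta(seconds=10)   # assumption: real downloads take ~30-60 seconds, not 0.25

def classify(impression_time, click_time, install_time):
    """Label a click/install pair using the timing heuristics described above."""
    if click_time - impression_time > MAX_IMPRESSION_TO_CLICK:
        return "click spam suspected"        # nobody is still looking at that ad
    if install_time - click_time < MIN_CLICK_TO_INSTALL:
        return "click injection suspected"   # the install followed the click almost instantly
    return "looks organic"

print(classify(datetime(2019, 6, 1, 9, 0), datetime(2019, 6, 1, 9, 1),
               datetime(2019, 6, 1, 9, 1, 0, 250000)))  # click injection suspected
```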
All these fraud prevention efforts are works in progress, though. They have to evolve with time — fraudsters and fraud detection platforms are locked in an eternal “cat and mouse game,” Barker said.
For now, though, fraud prevention tools save advertisers major cash, and protect everyone from the invisible fallout of click fraud: wasted energy and drained data plans. They also disincentivize phone farms, protecting people from the most boring work of all time: click farming.
“How many hours a day can you [perpetrate click fraud] without going nuts?” Naumann said. “I would argue not a whole lot.”