This isn’t news, but it is newly relevant: The internet is way too big for advertisers, or anyone, to read.
The web contained roughly six billion indexed pages in early 2020, and it’s growing all the time. That doesn’t even include the deep web, the unindexed internet whose ominous name belies how much we rely on it. Zoom is part of the deep web. Email is too.
(Of course we can’t read the entire internet — we need productivity hacks to manage our own inboxes.)
So, on a macro scale, algorithms have replaced us as readers. They do what we can’t, and read — well, “crawl” — the whole internet, ranking and categorizing content and flagging potentially harmful or illegal activity. These robo-readers include search algorithms, content moderation algorithms that work in tandem with human moderators to monitor social platforms, and contextual analysis algorithms that enable contextual advertising.
You may not have heard of contextual advertising, but don’t underestimate it. It’s a more than $100 billion market. It could also be the future of targeted advertising.
The Anti-Cookie?
Contextual analysis solves two problems for advertisers: the long-standing problem of reading at scale, and the new problem of the cookie.
For a long time, advertisers served consumers targeted ads based on anonymized data from third-party cookies. These snippets of code, like Facebook’s Pixel, could track consumer behavior across the web, whether the user was on Facebook’s site or not. For consumers, this meant that if you clicked on a Facebook ad for a sleeper couch, you would see similar ads everywhere online for weeks.
For years, this wasn’t controversial; it was simply how online advertising worked.
Data privacy has become a major consumer concern, though, and third-party cookies have become increasingly regulated and unpopular. The European Union has passed the General Data Protection Regulation, or GDPR, which constrains cookie usage; California has passed its own statewide privacy law that does the same. Mozilla’s Firefox browser currently blocks third-party cookies by default; Google, meanwhile, plans to drop third-party cookies from Chrome by 2022.

What Is Contextual Analysis?
How will ad targeting work in a cookie-less future?
Contextual analysis offers one option. It allows for targeting that’s not about user behavior, but about the nature of the content the user has chosen to view. And it involves reading the web the way an advertiser would — but at scale.
GumGum’s Verity tool, for instance, scans pages for tone and focuses on broad categorization rather than keyword matching.
“You don’t necessarily go to Google and search for all pages that are classified in a sports category,” GumGum CTO Ken Weiner explained, “but we would be able to do that.”
Verity also crawls sites for threats to “brand safety” — a term whose definition varies by brand. An airline, Weiner explained, might not want to place an ad in a news story about an airplane crash; Disney, meanwhile, might not want to advertise near content that’s “mature in terms of sexual content or stuff about drugs.”
Often, Weiner said, Verity is the only entity that reads a page before a brand places an ad on it. It’s one layer in the programmatic advertising workflow, which unfolds at warp speed; ad space gets bought and filled in the milliseconds the page takes to load. Humans simply can’t work fast enough to participate in the process; it’s purely algorithmic. Verity, for instance, relays its findings to an ad server, which automatically assesses whether Verity’s report on a given page matches its advertiser’s campaign criteria.
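To make that hand-off concrete, here is a minimal sketch of the kind of check an ad server might run against a page report during the bidding window. The field names and matching rule are hypothetical, not GumGum’s schema or any particular ad server’s real logic.

```python
# Illustrative only: the field names and matching rule are hypothetical,
# not GumGum's report schema or any ad server's real decision logic.

def eligible(report: dict, campaign: dict) -> bool:
    """Decide, within the bidding window, whether an impression fits a campaign."""
    if report["category"] not in campaign["allowed_categories"]:
        return False
    # Reject the impression if any flagged safety issue is on the campaign's blocklist.
    return not (set(report["safety_flags"]) & set(campaign["blocked_flags"]))

# Example: an airline campaign that steers clear of disaster coverage.
campaign = {"allowed_categories": {"travel", "sports"},
            "blocked_flags": {"plane_crash", "violence"}}
page_report = {"category": "travel", "safety_flags": []}
print(eligible(page_report, campaign))  # True
```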
“Nobody’s looking at the page data in real time with thousands and thousands of ad impressions,” Weiner said.
Unless you count Verity as somebody.
How Verity ‘Reads’
Sure, Verity doesn’t meet many criteria for personhood — no body, no feelings — but it’s certainly sophisticated technology. Once a feature within GumGum’s larger media platform, it’s now rolling out as a standalone product that publishers and advertisers can integrate into their programmatic marketing processes via API.
What sets it apart from other contextual analysis tools, Weiner said, is that it combines natural language processing with computer vision. In other words, it not only crawls text, but also views images and even “watches” videos by downloading the video file and examining each frame.
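GumGum hasn’t published Verity’s internals, but the frame-by-frame approach Weiner describes can be sketched with off-the-shelf tools. In the toy example below, OpenCV samples frames from a downloaded video, and a placeholder function stands in for the real computer vision model.

```python
# Sketch of frame-by-frame video analysis, assuming OpenCV (cv2) is installed.
# classify_frame() is a stand-in for a real computer vision model.
import cv2

def classify_frame(frame) -> set:
    """Placeholder: a real model would return labels like {'basketball'}."""
    return set()

def scan_video(path: str, seconds_between_samples: float = 1.0) -> set:
    """Sample roughly one frame per interval and pool the detected labels."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * seconds_between_samples))
    labels, index = set(), 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            labels |= classify_frame(frame)
        index += 1
    capture.release()
    return labels

print(scan_video("downloaded_clip.mp4"))
```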
If this sounds too slow for programmatic advertising’s milliseconds-long purchasing window — it is. When an ad server inquires about a freshly published website Verity hasn’t yet scanned, there’s a lag. At first the software replies, essentially, “I don’t know what that is yet,” Weiner said. Then it conducts a scan, which takes anywhere between a few seconds and a few minutes.
It’s a one-time cost, though. Once the report is complete, Verity can surface it near-instantly from then on.
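The pattern Weiner describes (answer “I don’t know” immediately, kick off a scan in the background, then serve the cached report ever after) looks roughly like this in code. It’s a simplified illustration, not GumGum’s implementation.

```python
# A simplified illustration of "scan once, cache forever." Not GumGum's code.
from threading import Thread

reports = {}             # url -> finished report, kept forever after
scans_in_flight = set()  # urls currently being scanned

def crawl_and_analyze(url: str) -> dict:
    """Placeholder for the real seconds-to-minutes NLP + computer vision scan."""
    return {"url": url, "category": "unknown"}

def _scan(url: str) -> None:
    reports[url] = crawl_and_analyze(url)
    scans_in_flight.discard(url)

def lookup(url: str):
    """Called by the ad server inside the millisecond bidding window."""
    if url in reports:
        return reports[url]           # instant answer from the cache
    if url not in scans_in_flight:    # first time this page has been seen
        scans_in_flight.add(url)
        Thread(target=_scan, args=(url,), daemon=True).start()
    return None                       # in effect: "I don't know what that is yet"

print(lookup("https://example.com/new-article"))  # None the first time
```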
And for the price of that brief lag, advertisers get a deeper understanding of what webpages are really about. For instance, Weiner said, if a page’s text features the word “shooting,” it could raise brand safety red flags — but Verity can see that the word appears next to a video of a basketball game, and grasp that the page likely references athletics, not violence.
This is what the Media Rating Council calls “content-level assessment.” It allows for a deeper understanding of a page than a purely property-level assessment, which leans on things like the captions and backend keyword tags attached to images and videos. That manually entered data, especially when it comes to video, doesn’t always capture the “holistic nature” of content, as Weiner puts it.
Verity’s multi-layered reading process gets closer. First, it finds the meat of the article on the page, Weiner said — which means differentiating it from any sidebar and header ads. Next, it parses the body text, headlines, image captions and the like with natural language processing; at the same time, it uses computer vision to parse the main visuals. Finally, it blends its textual and visual analysis into one cohesive report, which it sends off to ad servers.
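Schematically, that three-stage process might look like the sketch below, in which every function is a placeholder standing in for one of Verity’s real components.

```python
# A schematic of the three-stage reading process described above.
# Every function here is a placeholder, not Verity's actual code.

def extract_main_content(html: str) -> dict:
    """Stage 1: isolate the article body from sidebar and header ads."""
    return {"text": "...", "image_urls": [], "video_urls": []}

def analyze_text(text: str) -> dict:
    """Stage 2a: natural language processing on body text, headlines and captions."""
    return {"category": "sports", "keywords": ["basketball"], "sentiment": "positive"}

def analyze_visuals(image_urls: list, video_urls: list) -> dict:
    """Stage 2b: computer vision on the page's main images and videos."""
    return {"labels": ["basketball game"], "safety_flags": []}

def build_report(html: str) -> dict:
    """Stage 3: blend textual and visual findings into one report for ad servers."""
    content = extract_main_content(html)
    findings_from_text = analyze_text(content["text"])
    findings_from_visuals = analyze_visuals(content["image_urls"], content["video_urls"])
    return {**findings_from_text, **findings_from_visuals}

print(build_report("<html>...</html>"))
```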
This report may not be written for human eyes, but it’s quite detailed. It slots the page into a general category, like sports, and documents the most prominent keywords. It also notes whether the content addresses a time-sensitive event, like a holiday or the Olympics. Another section focuses on the tone of the page: positive, negative or neutral? (This is more about sentiment than subject matter, Weiner said; if a writer were delighted by a tragedy, that would read as “positive.”)
The report assesses “brand safety” too, flagging possible threats with machine learning algorithms. Lately, Weiner said, GumGum has been training Verity’s algorithms to recognize hate symbols.
“A swastika happens to be one of the ones that it’s better at,” he said.
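Pulled together, the report Weiner describes (category, keywords, timeliness, sentiment, safety flags) could be modeled as a simple data structure. The field names below are illustrative, not Verity’s actual output format.

```python
# One way to model the report's contents in code. Field names are
# illustrative, not Verity's actual output format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PageLevelReport:
    category: str                 # e.g. "sports"
    keywords: List[str]           # most prominent terms on the page
    timely_event: Optional[str]   # e.g. "Olympics", or None
    sentiment: str                # "positive", "negative" or "neutral"
    safety_flags: List[str] = field(default_factory=list)  # e.g. ["hate_symbol"]

report = PageLevelReport(
    category="sports",
    keywords=["basketball", "playoffs"],
    timely_event=None,
    sentiment="positive",
)
print(report)
```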
The Future of Contextual Analysis
Verity’s reading and “seeing” skills depend heavily on the training data it has ingested. The more it trains, the better it gets at recognizing basketball games, brand threats and everything in between.
Training, though, is a structured activity.
“You can’t really train it against the whole internet,” Weiner explained. “It’d be impossible. You have to curate data from the internet and then train it with that data.”
There are many ways to train an algorithm, but one is simply to give it labeled examples of the things you want it to recognize. Train an algorithm on labeled examples of water guns and real guns, and it will learn to tell the difference.
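As a heavily simplified illustration of that kind of supervised training, the sketch below trains a tiny image classifier on a hypothetical folder of labeled water-gun and real-gun photos. It bears no relation to GumGum’s actual models or data.

```python
# A heavily simplified supervised-training sketch in PyTorch. It assumes a
# folder of labeled images (train_images/water_gun/, train_images/real_gun/)
# and bears no relation to GumGum's actual models or data.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
dataset = datasets.ImageFolder("train_images", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A tiny convolutional classifier with two output classes: water gun vs. real gun.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:        # labels come from the folder names
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```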
This even applies to less concrete stylistic moves, Weiner said — like irony.
“Usually the challenge in those types of things is people don’t have the training data set to play around with,” he said. “Where do you find thousands of labeled pages that speak to irony?”
Nowhere ... yet. In the meantime, Verity’s algorithms will keep training on more concrete brand threats, and contesting the cookie’s hold on ad targeting.
Contextual analysis isn’t the only candidate to replace the cookie, though; Weiner noted other options have emerged too. Google’s Privacy Sandbox initiative, for instance, has sparked discussion of federated learning, an ad-targeting approach in which your browser — rather than a cookie — tracks your web activity. Your data is stored on your device rather than sent to a central, cloud-based repository, but your browser shares a broad overview of your activity with ad servers so the ads you see stay relevant.
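The details of Google’s proposals have shifted over time, but the basic division of labor (detailed history stays on the device; only a coarse summary leaves it) can be illustrated in miniature. The snippet below is a simplification for explanation’s sake, not Google’s actual design.

```python
# A rough illustration of the on-device idea: raw browsing history never
# leaves the machine; only a coarse interest summary does. This is a
# simplification for explanation, not Google's actual Privacy Sandbox design.
from collections import Counter

# Stays on the device: the full, detailed history.
local_history = [
    ("espn.com/nba/finals", "sports"),
    ("seriouseats.com/ramen", "food"),
    ("espn.com/nfl/draft", "sports"),
]

def coarse_interest_summary(history, top_n=2):
    """Aggregate detailed history into a few broad interest categories."""
    counts = Counter(category for _url, category in history)
    return [category for category, _ in counts.most_common(top_n)]

# Only this summary leaves the device and reaches ad servers.
print(coarse_interest_summary(local_history))  # ['sports', 'food']
```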
Federated learning is an AI buzzword, but precisely how to use it for ad targeting remains murky. Contextual analysis, meanwhile, is already here, and it has plenty of room to grow. Not only can new training data sets emerge, but crawling technology can become more precise. Weiner was optimistic about “early fusion” algorithms, which could allow GumGum to read images and text together, rather than with separate algorithms that later reconcile their findings.
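The distinction is easiest to see in a sketch: early fusion joins text and image features before a single model reasons over them, instead of reconciling two separate models’ verdicts after the fact. The toy architecture below is a generic illustration, not GumGum’s.

```python
# Generic early-fusion sketch in PyTorch: text and image features are joined
# *before* classification, rather than reconciling two separate models' outputs
# afterward (late fusion). Purely illustrative; not GumGum's architecture.
import torch
from torch import nn

class EarlyFusionClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, text_dim=128, image_dim=128, classes=20):
        super().__init__()
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)   # bag of tokens
        self.image_encoder = nn.Sequential(                         # tiny CNN
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, image_dim),
        )
        # The fusion happens here, on features, before any decision is made.
        self.joint_head = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256), nn.ReLU(), nn.Linear(256, classes),
        )

    def forward(self, token_ids, image):
        fused = torch.cat([self.text_encoder(token_ids), self.image_encoder(image)], dim=1)
        return self.joint_head(fused)

model = EarlyFusionClassifier()
tokens = torch.randint(0, 10_000, (1, 50))   # 50 token ids for one page's text
image = torch.rand(1, 3, 64, 64)             # one main image
print(model(tokens, image).shape)            # torch.Size([1, 20])
```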
When it comes to content analysis, though, he doesn’t think human intelligence will ever be cut out of the loop entirely. Humans have to curate the data sets, and humans grasp nuanced, ever-shifting cultural distinctions in a way machines often can’t.
“I don’t know that [AI] will ever fully replace [humans], but it can definitely help [them],” Weiner said. “You can always have a strategy ... where you use automation to narrow things down or surface likely problems, and then humans go in for the final judgment.”
In other words, algorithms can “read” the internet in all its vastness, and tell humans what needs traditional, eyes-on-the-page reading.