Online Privacy Trends to Watch in 2021
Online privacy isn’t going great. “So many companies out there are constantly trying to stalk everything that you do, and make money off you,” Don Vaughn, head of product at consumer-insights platform Invisibly, explained.
By “stalk everything that you do,” Vaughn might be referring to companies tracking your location, analyzing your browsing history, examining the way you scroll, passing personal information to third parties or following you around the internet with targeted ads.
Online Privacy Trends to Watch For in 2021
- No more third-party cookies
- Possible new regulations
- Alternate data-collecting processes
- More conversations about IoT privacy
Some people, dubbed “privacy nihilists” or “data nihilists,” don’t really care. To them, the only noticeable outcome of all that tracking is more personalized content and experiences. And besides, would panicking really change how companies treat users and their data?
But other people care a lot. Eighty-one percent of Americans believe the risks of corporate data collection are greater than the benefits, according to Pew Research Center, and Google searches for “data privacy” have climbed steadily in the last five years.
No matter where you fall, here’s how today’s biggest data privacy questions will impact you.
Third-Party Cookies Are Going Away
Third-party cookies, or the small bits of tracking data websites store in your browser to follow you around the internet, have earned a reputation as a creepy surveillance technology. (Whether they’re so bad compared to other invasive technologies is another question.) Firefox and Safari have phased out third-party cookies, and, soon, Chrome will too.
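Mechanically, a tracker embedded on many sites sets a cookie scoped to its own domain, then reads it back on every page that loads its script or pixel. Here is a minimal illustration in Python; the domain and ID are hypothetical, and the `SameSite=None; Secure` attributes are what modern browsers require for any cross-site cookie:

```python
# A third-party tracker's server responds with a header like this the
# first time your browser loads its script or pixel (hypothetical values).
tracker_cookie = (
    "Set-Cookie: uid=a1b2c3d4; "
    "Domain=.tracker.example; Path=/; "
    "Max-Age=31536000; "       # persists for a year
    "SameSite=None; Secure"    # required for cross-site use in modern browsers
)

# Every later site that embeds the same tracker triggers a request to
# tracker.example, and the browser attaches this cookie automatically,
# letting the tracker link your visits across unrelated sites.
print(tracker_cookie)
```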
In their place, Google introduced Privacy Sandbox, a set of solutions for a cookie-free browsing experience — but one in which advertisers can still do behavioral targeting. So far, it looks like the cornerstone of Privacy Sandbox will be a machine-learning system called Federated Learning of Cohorts (FLoC).
Data experts and privacy buffs will be familiar with “federated learning” — it’s one of a couple ways to anonymize data for artificial-intelligence systems. Google’s FLoC will do that by grouping users into cohorts based on demographics or interests. Instead of one big AI model ingesting everyone’s individual data, each user will have a tiny model that processes data locally, on the device they’re using to browse. Once that model is “trained,” it passes its insights along to larger, cohort-level models that help with ad-serving, but the user’s data stays on their device.
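Google’s published FLoC experiments used a SimHash-style locality-sensitive hash over the domains a user visited, so similar histories map to similar (often identical) cohort IDs without the raw history ever being uploaded. A simplified sketch of that idea — not Google’s production code, and using a toy 8-bit cohort space:

```python
import hashlib

def simhash_cohort(domains, bits=8):
    """Assign a cohort ID by SimHash-ing visited domains on-device.

    Similar browsing histories produce similar cohort IDs, so users
    get grouped without their raw history leaving the device.
    Simplified sketch; not Google's production algorithm.
    """
    counts = [0] * bits
    for domain in domains:
        digest = hashlib.sha256(domain.encode()).digest()
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            counts[i] += 1 if bit else -1
    # Each position with a positive tally becomes a 1-bit of the ID.
    cohort = 0
    for i, count in enumerate(counts):
        if count > 0:
            cohort |= 1 << i
    return cohort

# Identical histories always land in the same cohort; only the small
# cohort ID, not the domain list, would be exposed to advertisers.
cohort_a = simhash_cohort(["ign.com", "twitch.tv", "steampowered.com"])
cohort_b = simhash_cohort(["ign.com", "twitch.tv", "steampowered.com"])
assert cohort_a == cohort_b
```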
In practice, marketers will likely still be able to target ads with some precision — serving energy drink ads to thousands of people in a cohort like “video game enthusiasts,” for example — without knowing which sites individual users viewed and when.
This is arguably better for privacy. But FLoC still has its critics. A lack of visibility into individual user behavior would make marketing conversions harder for publishers and brands to measure. Ad tech professionals were dubious of Google’s claim that FLoC would produce basically the same number of conversions per ad dollar spent. Others speculated that FLoC’s limitations would just lead to more reliance on contextual advertising and wondered how the system would deal with users who browse across multiple devices.
Finally, many worried that FLoC would lead to discriminatory or predatory marketing by grouping users based on categories like race, sexuality, health status or political interests. (Google has stated it won’t create cohorts based on “sensitive topics,” a list that covers those categories. This was part of Google’s personalized advertising policy before the introduction of FLoC.)
Google did not directly address Built In’s questions about FLoC’s treatment of identity and potential impact on users, publishers and brands.
Apple and Facebook Are Fighting
In an upcoming Apple iOS update, apps will have to ask users for permission to track them across the web and other apps.
That’s bad news for companies like Facebook, which make money by learning what their users do online and then serving personalized ads. Facebook CEO Mark Zuckerberg has criticized the change, suggesting that Apple — which competes with Facebook in the messaging space — has ulterior motives. Facebook also ran several advertisements in major newspapers arguing that personalized ads help small businesses and users.
Data Privacy Regulation Is a Toss-Up
As big tech hashes out (and bickers about) privacy solutions, the government is also weighing in, sort of.
The arrival of laws like the California Consumer Privacy Act, the European Union’s General Data Protection Regulation and Virginia’s new Consumer Data Protection Act were good signs for some privacy proponents. When certain regions enact stricter privacy guidelines, companies are forced to build new privacy solutions — even if they’re just for a subset of customers.
And other U.S. states are getting on board. Currently, there are 15 states with privacy bills in the works, and, because a mishmash of local laws would make data management incredibly difficult for companies, federal data privacy regulation is likely.
That’s all good news — right?
Not if new legislation caters to large tech companies instead of consumers, Hayley Tsukayama, a legislative activist at Electronic Frontier Foundation, told me.
“Right now, we have a California model that set a bar,” she said. “It’s not a perfect law; there are improvements we’d like to see there too. But we certainly don’t want to see states pass laws that lower the bar, particularly as we head into a long-term conversation about what federal legislation would look like.”
“Lowering the bar” might look like weak enforcement. Laws giving consumers the right to limit what data they share with companies don’t mean much if companies can violate the law without swift consequences.
Virginia’s new law, for instance, doesn’t allow any private right of action — meaning consumers can’t sue companies that violate it. California’s law lets consumers sue companies only if their data is breached; otherwise, enforcement falls to the state attorney general’s office. Enforcement in Washington would also fall to the attorney general, though on April 1 the state’s house of representatives revised the bill to include a limited private right of action: individual consumers could sue non-compliant companies to prevent future violations, but not to collect damages.
According to Tsukayama, most state attorney general offices aren’t equipped to handle enforcement. A fiscal analysis of Washington’s bill indicated the attorney general would get only 3.6 additional employees to handle consumer complaints, she said. That’s probably not enough people, and the office could get overwhelmed trying to address violations. In a letter to state lawmakers two years ago, California Attorney General Xavier Becerra said CCPA enforcement imposes “unworkable obligations” and “serious operational challenges.” Virginia’s new bill, meanwhile, allocated just $400,000 annually for state-wide enforcement.
“Lawmakers shouldn’t be convinced by legislation pitched as ‘GDPR-lite’: bills that grant lots of ‘rights’ without thorough definitions or strong enforcement,” the EFF wrote in a 2020 blog post.
But privacy models like Washington’s and Virginia’s have received strong support from the tech industry. In fact, “When we see opposition to a bill [from big tech companies], it is often when enforcement provisions are strong,” Tsukayama said.
Both Microsoft and Amazon testified in support of Virginia’s law. Microsoft and the Washington Technology Industry Association went on record endorsing the Washington bill back in January, and Amazon sent a letter to one of the bill’s sponsors expressing its support.
As the prospect of federal regulation looms larger, big tech’s tendency to support legislation that organizations like EFF consider “milquetoast” might be cause for concern — at least for consumers who think companies shouldn’t be allowed to profit from their data without consent.
The Data Economy Is Shifting
Some People Want Tech Companies to Pay You for Your Data...
At the heart of the debate over privacy regulation is a larger debate about the so-called data economy. Should data serve as currency, allowing users to visit websites and social-media platforms at no cost?
Right now, many online publishers — like newspapers — work with ad platforms to show targeted ads to visitors. That, theoretically, pays for the publishers’ operations. Meanwhile, companies collect and analyze user data — like browsing behavior, gender or location — to better target ads or product offerings. Often, they also sell that data to other companies in exchange for money or technology and services. And all that, the thinking goes, lets visitors enjoy most online content for free.
The only party not earning money from user data is users.
Some people think that should change. In 2018, authors Jaron Lanier and Glen Weyl argued that users should be paid for their data and proposed a new type of organization called a MID, or mediator of individual data. MIDs would be like labor unions, in that they advocate for data payouts and handle the technical requirements. Former Democratic presidential candidate Andrew Yang even launched an organization, Data Dividend Project, dedicated to collective bargaining for data payouts.
Reception was mixed. Based on CCPA’s guidelines for valuing data, data dividend payments would be both too small to make a difference to consumers and too large for companies to manage, Will Rinehart argued in Wired. (A $20 annual payment to every U.S. user would tank Facebook, he wrote.)
So, a large-scale approach to data dividends is unlikely, at least in the near future. But what about a small-scale one?
That’s exactly what data-management platform Invisibly claims it’s doing. Invisibly hasn’t launched yet, but when it does, users will be able to sign up for free access to personal data dashboards that show their search histories, credit card information and social media activity. Then, they can choose which data to share with companies in exchange for some money (between $30 and $100 a year, an Invisibly spokesperson told Built In).
Invisibly’s brand partners will use that data to target users with relevant ads or questionnaires. Eventually, Vaughn said, users will be able to provide immediate feedback — like, “Yes, I liked this ad experience” — and that feedback will affect which types of ads they see.
Vaughn, who has a Ph.D. from the University of California–Los Angeles and describes himself on LinkedIn as a “recovering neuroscientist,” calls that “advocational AI” — a term he coined to describe algorithms that help users improve their browsing experiences.
Of course, if a user’s ideal browsing experience were “one where my data doesn’t get collected without my consent,” they’d be out of luck. Right now, consumers can’t opt out of the data economy, so it’s hard to discern whether better targeted ads are a service to users and brands — or just brands.
But Invisibly’s position is one Vaughn calls “data positive”: The data economy isn’t going anywhere, so let’s give users some money and more agency.
“The problem isn’t that there’s data about you,” he said. “The problem is that you don’t have control over it.”
By connecting consumers and brands directly, Invisibly gives consumers more visibility into where their data is going. It also gives better advertising insights to brands, it claims. Vaughn said that, so far, brands have been “highly interested.”
Invisibly’s model doesn’t legally compel companies to pay users for their data; it’s voluntary. Companies pay to use the platform and become Invisibly customers. Meanwhile, Invisibly users will still see plenty of targeted ads they didn’t sign up for, from brands that don’t work with the startup.
But, if the Invisibly model is successful, it could push more brands to pay for consensually shared data.
...but ‘Data Dividends’ Could Lead to Privacy Privilege
For people who could really use a little extra cash, data dividends are especially attractive.
“I think thinking about data privacy is a luxury thing that we get to talk about, when most people are like, ‘I can use 100 more bucks a year,’” Vaughn said.
That distinction — people who can afford to worry about data privacy and people who can’t — opens the doors for a hierarchical data economy, in which people with higher incomes can afford to keep their personal information private, but others can’t.
The EFF, for example, refers to data dividends as “pay-for-privacy schemes.” By forgoing the data dividend, the organization argued, some consumers would essentially be paying a higher price for the same online products or services.
At the same time, if consumers were no longer able to “trade” their personal data for free access to online products and services, some couldn’t afford to pay with money. That could limit access to online content like journalism. (Keep in mind, though, that targeted ads cost consumers money too, in the form of more spending.)
It’s a dilemma — and one without immediate answers.
If Granular User Data Isn’t Available From Browsers, Brands Might Just Look Elsewhere
Eyeo, the company that maintains the open-source software Adblock, has also pitched what it calls a “new deal” between users and advertisers. The forthcoming product, a browser extension called Crumbs, gives users a personal data dashboard (like Invisibly) and lets them choose what to share. It processes data locally in the browser and anonymizes data by grouping users into larger categories (like FLoC). Crumbs also comes with privacy tools that block third-party cookies and protect users’ IP and email addresses from marketing software.
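Eyeo hasn’t published Crumbs’ internals, but “grouping users into larger categories” generally means attribute generalization: replacing precise values with broad buckets before anything is shared, so any outgoing record describes many users rather than one. A hedged sketch of the idea — the bucket boundaries and field names here are made up for illustration:

```python
def generalize(profile):
    """Collapse precise attributes into broad buckets so a shared
    record describes many users, not one. Illustrative buckets only;
    Crumbs has not published its actual categories.
    """
    age = profile["age"]
    bucket = "18-34" if age < 35 else "35-54" if age < 55 else "55+"
    return {
        "age_range": bucket,
        # Share only a coarse region, never a precise location.
        "region": profile["city_region"],
        # Interests pass through as broad categories, not visited URLs.
        "interests": sorted(set(profile["interests"])),
    }

shared = generalize({
    "age": 29,
    "city_region": "US-Midwest",
    "interests": ["cooking", "travel", "cooking"],
})
# The record that leaves the browser no longer pins down an individual.
print(shared)
```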
Like FLoC and Invisibly, Crumbs operates on the assumption that an ad-supported internet — and the free content that comes with it — is here to stay.
“We really believe that we can reach some sort of a fair game of providing the web economy with all the tools it needs in order to make meaningful monetization of content, while also preserving user rights and user choice along the way,” Rotem Dar, director of media operations at eyeo, told me.
This isn’t a new line of thinking for eyeo, Director of Advocacy Ben Williams said. In 2011, the company rolled out the controversial Acceptable Ads update, which adjusted Adblock’s default setting to allow certain ads to appear. Only about 8 percent of Adblock users chose to disable Acceptable Ads and go back to an ad-free experience, according to Williams. That suggests higher-quality ads really do offer value to users. (Either that, or customers didn’t understand how to disable the setting.)
“It took us a really long time until Acceptable Ads and ad-filtering were the standard and were accepted by the industry,” he added. “We [as an industry] don’t want to do the same thing with privacy. We want the users to be involved from day one, because if they’re not, they’re going to rebel again, and they’re going to block everything.”
“Blocking everything” could mean users pushing for the type of global data-sharing opt-out Tsukayama mentioned. And that — for better or worse — would threaten the online economy publishers, brands and the ad industry have settled into.
“My fear is that if data is not going to be available in-browser, then budgets of advertisers would simply be shifted either to the walled gardens or to other mediums, whether it’s connected TV or basically any environment where granular data about users would be available,” Dar said. “And the result of that would be demonetization of journalism and content.”
Connected Devices Pose a New Set of Questions
What about the internet of things — how’s privacy going in the realm of internet-connected devices?
“IoT is a mess,” Chet Wisniewski, principal research scientist at enterprise security firm Sophos, said. “It has been for a really long time, and I’m not sure we’re ever going to see it improve that much.”
That’s bad news, because any insecure device with a camera or microphone could be accessed by people you don’t want accessing it.
A few years ago, Wisniewski gathered about a dozen different connected devices — nanny cams, light bulbs, speakers — and hacked into them, looking for security vulnerabilities. If there was any common thread, Sophos could release some general guidelines for IoT manufacturers, he figured.
“Unfortunately, there was no commonality,” Wisniewski said. “They were all garbage. They were all super easily hackable. But they were all broken in a different way where there wasn’t a uniform piece of advice that we could give to the companies.”
In general, name brands tend to do a much better job with IoT security, according to Wisniewski. Apple, for instance, has high standards for items marketed as part of its HomeKit program. And if a brand-name consumer product is abused by hackers, the company is likely to fix the vulnerability or face recourse from the Federal Trade Commission.
Off-brand IoT products, on the other hand, are the wild west of cybersecurity. Many “brands” are just single-batch white labels from overseas factories, so there’s no way for regulators or researchers like Wisniewski to hold manufacturers accountable.
“When I was doing my IoT testing, I don’t think a single one of the brands existed long enough for me to find them after I bought [the product],” he said.
Even worse perhaps, those manufacturers are often looking for the cheapest and quickest way to get products to market — including the software inside them. Most run outdated versions of the open-source operating system Linux with known bugs and vulnerabilities still in the code.
There’s potential for this to get better. Allan Friedman, director of cybersecurity initiatives at the U.S. Department of Commerce, has proposed something called a “software bill of materials,” which would compel consumer-tech manufacturers to disclose a product’s software components. That way, helpful third parties could assign consumer-friendly risk scores — almost like the ingredient labels on food.
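A software bill of materials is essentially a machine-readable parts list, which makes automated checks possible. A minimal sketch of screening one against known-vulnerable versions — the component names, version numbers and advisories below are hypothetical:

```python
# Hypothetical SBOM for an IoT camera: each entry names a software
# component and its version, like an ingredient label.
sbom = {
    "linux-kernel": "4.4.0",
    "busybox": "1.24.1",
    "openssl": "1.0.1",
}

# Hypothetical advisories: versions at or below these ceilings are
# known to contain unpatched vulnerabilities.
known_vulnerable = {
    "openssl": "1.0.2",
    "busybox": "1.26.0",
}

def risky_components(sbom, advisories):
    """Flag components whose version is <= a known-vulnerable ceiling."""
    def ver(v):
        return tuple(int(part) for part in v.split("."))
    return [name for name, version in sbom.items()
            if name in advisories and ver(version) <= ver(advisories[name])]

print(risky_components(sbom, known_vulnerable))  # → ['busybox', 'openssl']
```

A third-party rating service could turn the length of that flagged list into the kind of consumer-friendly risk score Friedman describes.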
VPNs — or encrypted internet connections inaccessible from the outside — are another potential IoT security solution.
“IoT is one area where I think VPNs actually can make a very large difference,” said James Yonan, CTO of OpenVPN and creator of the original open-source project of the same name. “If you have a webcam that is designed to connect to the internet over a VPN, that can really be the difference between it being secure and it being not secure.”
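For reference, a device that ships with OpenVPN support typically carries a small client profile along these lines; the server address and credential file names below are placeholders. With such a profile, the webcam talks only through the encrypted tunnel rather than exposing open ports to the internet:

```
# Minimal OpenVPN client profile for an IoT device (placeholder values).
client
dev tun                       # routed IP tunnel
proto udp
remote vpn.example.com 1194   # placeholder server address and port
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server        # verify the server's certificate role
ca ca.crt                     # placeholder credentials baked into firmware
cert camera.crt
key camera.key
cipher AES-256-GCM
verb 3
```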
Many OpenVPN customers are IoT providers, Yonan added, and OpenVPN has worked with “a lot” of companies to integrate VPN functionality into connected devices.
But until the government regulates IoT — which Wisniewski believes is unlikely — concerned consumers can cross their fingers for better transparency or encryption, or just opt for pricier, name-brand products. It’s very unlikely, for instance, that your Amazon Alexa will be hacked.
But that doesn’t mean it doesn’t come with big-time privacy concerns. Alexa records conversations, even when you don’t ask it to. Apple had to apologize after letting contractors listen to private Siri voice recordings from users.
“[Big tech companies] are collecting information from you to use for whatever purpose, and you’ll never find out, no matter how much you read the privacy agreement,” Wisniewski said.
In the end, it might make sense to worry less about shadowy hackers and more about the companies that access our data in perfectly legal ways.
“I don’t think anybody cares about your [connected] fridge, to be honest,” Wisniewski said. “And I don’t think anybody probably cares about most of the IoT devices in your house. What am I going to do, turn your lights out without your permission? I mean, scary. But like, why would I bother?”