Have you ever tried to buy something online only to notice, at the final checkout page, some extra item had been sneakily added to your cart? Or found yourself on a 20-minute phone call trying to cancel a trial subscription you signed up for with one easy click? If so, you’ve been subjected to what UX designer Harry Brignull dubbed, back in 2010, dark patterns — manipulative user interface tricks “that make you do things that you didn’t mean to.”

More than a decade after Brignull coined the term, the push is on to regulate these shady and widespread tactics. But a big question remains: How much can regulators rein in some of the more slippery dark patterns — those that are manipulative, and possibly unfair, but not patently deceptive?

Some dark patterns are transparently fraudulent — like what Princeton researcher Arunesh Mathur and his co-authors labeled in one study “deceptive activity notifications.” Those include shenanigans like using a random number generator to (falsely) show how many other people are “currently viewing” a product or passing off fictitious customer testimonials as genuine.
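Mathur and his co-authors documented sites generating these “live activity” numbers entirely client-side. A minimal hypothetical sketch of that mechanic (the function names and number range here are invented for illustration, not taken from any specific site):

```typescript
// Hypothetical sketch of a fake "activity notification": the count is
// drawn from a random number generator, not from any real traffic data.
function fakeViewerCount(min: number = 20, max: number = 45): number {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Displayed as social proof on the product page.
function activityBanner(): string {
  return `${fakeViewerCount()} people are viewing this right now`;
}
```

No analytics request is ever made; the number exists purely to manufacture a false sense of urgency and scarcity.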

But what about so-called “nagging” dark patterns? Consider, for instance, the pop-up in an event-ticketing app that asks to send users notifications, but only lets them choose “OK” to accept or “Maybe later,” which sets them up to receive the same pop-up again and again. Infuriating, no doubt. But illegal?

Challenges in the Push to Regulate Dark Patterns

The Federal Trade Commission has vowed greater enforcement against dark patterns, or interface tactics that coerce users into making unintended decisions. The agency, and some state attorneys general, have targeted some of the more blatantly deceptive offenders. Yet dark patterns that are manipulative, but not necessarily deceptive, are harder to prosecute. Still, some experts contend they can and should be targeted as unfair practices.

“In general, it’s the case that asking questions is protected by the First Amendment, as it should be,” said Lior Strahilevitz, a University of Chicago legal scholar who’s studied dark patterns. “But there is such a thing as abusive asking of questions,” such as an employee who repeatedly continues to ask a co-worker for a date after being turned down, to the point of harassment.

That is, even when legal lines are fuzzy, they can still be drawn and crossed.

The less outwardly deceitful UI tactics — gray patterns, as they’re sometimes called — will indeed be trickier to hold to account. But legal experts hardly think it’s impossible, which means tech firms and UX teams who use these tricks, or are considering doing so, should probably think twice.

This story is part of a series about dark patterns.

PART I: Confirmshaming Works. Resist It Anyway.

PART II: What Exactly Is Ethical Design?

 

Breaking Down Dark Patterns

Dark patterns are so numerous that it can be difficult to wrap your head around all of them. A helpful way to frame them is through a taxonomy created by Purdue researchers in 2018.

Here are the five categories of dark patterns they outlined, plus their definition of each:

  • Nagging: “Redirection of expected functionality that persists beyond one or more interactions.” Example: the pop-up request without a “no” option.
  • Obstruction: “Making a process more difficult than it needs to be, with the intent of dissuading certain action(s).” Example: the easy-to-sign-up, back-breaking-to-cancel process (also known as roach motel). 
  • Sneaking: “Attempting to hide, disguise or delay the divulging of information that is relevant to the user.” Example: the surprise item in your e-commerce checkout cart, often added through an inconspicuous opt-out checkbox (dubbed sneak into basket).
  • Interface interference: “Manipulation of the user interface that privileges certain actions over others.” Example: advertisements that are designed to look like a site’s UX elements or content (known as disguised ads).
  • Forced action: “Requiring the user to perform a certain action to access (or continue to access) certain functionality.” Example: an interface that manipulates users into sharing more information and granting more permissions than they realize (dubbed privacy zuckering).
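The nagging category lends itself to a concrete sketch. Here is a minimal, hypothetical model of a notification prompt whose only non-accept choice is a deferral (the type and function names are invented for illustration):

```typescript
// The prompt offers acceptance or deferral -- there is deliberately
// no permanent "No" option that would record a refusal.
type PromptChoice = "OK" | "Maybe later";

interface PromptState {
  granted: boolean;
  showAgain: boolean; // whether to re-queue the identical prompt next session
}

function handleNotificationPrompt(choice: PromptChoice): PromptState {
  if (choice === "OK") {
    return { granted: true, showAgain: false };
  }
  // "Maybe later" never persists a decline, so the user is
  // guaranteed to face the same dialog again.
  return { granted: false, showAgain: true };
}
```

Because no state ever records a decline, the interaction loops indefinitely until the user relents.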

 

The State of Dark Pattern Regulation

Before we delve further into questions of constitutionality and unfairness, it’s important to understand the current legal landscape related to dark patterns.

The first notable attempt to tackle the issue came in November 2020, when California voters approved the California Privacy Rights Act (CPRA), an expansion of the California Consumer Privacy Act (CCPA), the state’s watershed data privacy legislation. The CPRA was the first piece of legislation to include a definition of dark pattern: “a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice, as further defined by regulation.” (The CPRA will take effect in 2023, following a rulemaking process and request for comment.) Proposed legislation in Washington adopts the same language.

Then, in March, the California Attorney General’s Office announced strengthened language for the CCPA’s regulations. The addition didn’t include the phrase “dark pattern,” but it did list examples of UI methods that are prohibited in interactions involving consumer control over the selling or sharing of personal data. These include bans on “confusing language, such as double-negatives (e.g., ‘Don’t Not Sell My Personal Information’)” in opt-outs and on requiring users to “click through or listen to reasons why they should not submit” an opt-out request.

Consumer and digital rights advocates championed these developments, but it’s important to understand their limitations. Namely, they only impact interactions that pertain to personal information. The endlessly protracted unsubscribe process? The confirmshaming language trying to foil your service opt-out? The repeated notification requests with no way to say no? Those aren’t accounted for under these rules.

The first major federal legislative proposal to curtail dark patterns, the bipartisan DETOUR Act, introduced in 2019 by Sen. Mark Warner (D-VA) and Sen. Deb Fischer (R-NE), was similarly focused on data privacy. But it also took aim at issues of user disclosure and consent. That bill was packaged into the SAFE DATA Act after failing to advance, but a newer version of SAFE DATA no longer includes DETOUR’s language. Warner said earlier this year, during a Federal Trade Commission workshop about dark patterns, that he hopes to reintroduce DETOUR, but that the FTC should begin tackling the problem sooner.

He might get his wish. An FTC official closed out the workshop saying that “aggressive” enforcement against dark patterns should be expected.


 

Deceptive vs. Unfair

Section 5 of the FTC Act authorizes the agency to pursue action against “unfair or deceptive acts or practices.”

“‘Unfair’ has a lot of wiggle room in it. ‘Deceptive’ has less wiggle room,” Strahilevitz told Built In.

Strahilevitz noted in his research, and in his comments at the FTC’s dark patterns workshop, that the agency has in fact already filed charges, using its deception authority, against companies that engage in certain dark patterns — even if the phrase “dark pattern” was never mentioned in the process.

The most notable case, according to Strahilevitz, was a lawsuit against payday lender AMG Capital Management. Ruling in favor of the FTC, a court cited a plethora of shady tactics AMG used to deceive consumers — methods that were textbook dark patterns, including forced-continuity and roach-motel subscription renewals and confusingly worded trick questions.

“The court did not need a dark patterns label … to see how deceptive the individual strategies and their cumulative effect could be,” wrote Strahilevitz and co-author Jamie Luguri in a recent study.

The FTC had pursued dark-pattern purveyors under its deception authority before, too — again, even though the phrase dark pattern was never invoked. Internet advertiser LeadClick Media used disguised ads and made-up customer and journalist testimonials, while the founder of a site called Jerk.com (seriously) used a computer program to manufacture profile pages that it implied were user-generated. Both were targeted as deceptive.

 

The Slipperiness of Unfairness

On the other hand, unfairness — the measure under which those manipulative (but not necessarily deceptive) gray patterns would have to be targeted — is a tougher standard.

Jennifer King, a privacy and data policy fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence, and Adriana Stephan, a Stanford graduate researcher, wrote in a recent paper for Georgetown Law Technology Review that regulating dark patterns “likely requires expanding consumer protection statutes beyond deceptive and unfair practices to explicitly include manipulation.”

King and Stephan also noted that it’s rare for the FTC to bring cases solely on unfairness “after the agency was accused in the past of applying the unfairness standard too broadly.” Indeed, the handful of cases the FTC has pursued as unfair, and which could be said to involve dark patterns, were also targeted for deception, not unfairness alone.

But while slam-dunk unfairness cases against manipulative patterns have so far been nonexistent, other legal experts are confident that the framework fits the problem.

Lauren Willis, a professor at Loyola Law School who’s written about dark patterns, contends that courts would be “very receptive” to unfairness arguments for tactics like roach-motel unsubscribe processes. “There’s no legitimate commercial purpose to it,” she said. “It’s really just trying to trap people … Every judge in the country has been annoyed by this personally.”

The FTC uses a three-prong test to prove unfairness. For a practice to be considered unfair, it must result in injury that is 1) substantial, 2) not reasonably avoidable by consumers and 3) not outweighed by any offsetting benefits to consumers or competition.

Felix Wu, a law professor, First Amendment expert and faculty director of the Cardozo Data Law Initiative, said dark patterns are precisely the kind of practice the unfairness standard is intended to curtail, and that manipulative gray patterns meet the threshold.

They substantially injure, he said, in that “they’re designed to lead consumers to choices they wouldn’t otherwise make,” in terms of money, attention or both. They’re also unavoidable, he added, because they prey upon psychological tendencies that we can’t cognitively switch off. (Dark patterns can exploit certain cognitive biases. Confusing language, for example, acts on both the default effect, or users’ tendency to stay with default options, and the framing effect, or the tendency to reach different conclusions based on how information is presented.) And dark patterns offer no countervailing benefit because “they’re largely just about a wealth transfer.”

Willis granted that interpreting injury as something not entirely financial may be difficult, which may influence which unfairness cases the FTC might bring, but she contends that the way online commerce has evolved necessitates an expanded view.

Many of the more manipulative dark patterns thrive on the hope that users will throw their hands up and not want to endure any more hassle. “That has to be recognized as a substantial harm,” she said. “At this point in the internet economy, what is valuable, besides money, is people’s attention and time.”


 

The Constitutional Question

Another gray area for gray pattern regulation, if you will, is the First Amendment. Brazen deception, like fabricated testimonials, has no free speech protection, but what about the more subtly manipulative patterns? As Strahilevitz asked in his comments to the FTC, are roach motel patterns protected commercial free speech? Is the nagging, no-way-out notification request, however irritating, constitutionally safeguarded?

Wu, for one, strongly believes the answer is no. “Commercial speech is really meant to protect the listeners — the people receiving the commercial speech — not the commercial speakers — the companies who are saying these things,” he said. 

Companies may of course still decide to make a First Amendment challenge to any litigation brought against them for using gray patterns that aren’t outright deceptive. (The current Supreme Court’s free-speech-as-deregulation bent could of course be a complicating factor.) And it’s not clear how the FTC is internally thinking about gray pattern enforcement in a constitutional context. (The agency did not return a request for comment.) But reasonable legal interpretation and precedent should both point toward a more consumer-friendly interpretation, Wu said.

 

What’s Next?

Dark patterns have grown more rampant and remained effective in the decade-plus since Brignull coined the term. Research has shown the practice to be widespread — the study led by Mathur reviewed more than 50,000 product pages and found that some 10 percent employed at least one dark pattern. It also works: The study led by Strahilevitz found that, among nearly 2,000 test participants, the acceptance rate for a dubious offer more than doubled with the use of mild dark patterns and more than tripled with aggressive dark patterns.

To truly push back, the best and most likely path forward, according to Willis, is a combination of state legislation and enforcement action from the FTC and state attorneys general. (Many state consumer protection laws also incorporate federal statutes, so state attorneys general could also pursue unfairness charges. So far, local attorney general action, like D.C.’s lawsuit against Instacart over misleading fees last year, has gone more after deception.)

But two big practical question marks remain, according to experts: FTC resources and priorities. David Vladeck, former director of the FTC’s Bureau of Consumer Protection, emphasized at a recent Senate Commerce subcommittee hearing that the agency is chronically underfunded and understaffed, even as its responsibilities grow.

Willis, meanwhile, said the FTC’s Bureau of Consumer Protection may find it more effective to go after lower-hanging fraud than dark patterns. That bureau is also historically underfunded compared to the FTC’s Bureau of Competition, which enforces antitrust laws, she added. Regulators have signaled a focus on dark patterns, but how much will budgets follow?

Those uncertainties aside, the push to regulate dark patterns has clearly advanced beyond the gestational period, Willis said. And that includes the potential to go after grayer dark patterns as unfair. “There could be more legislation, but they could also go forward right now,” she said.
