Smart glasses are back. Only this time, they’re arriving as part of a much bigger shift toward AI-first hardware, with companies like Meta, Apple and Google pitching them as hands-free assistants capable of seeing, listening and responding to their environments in real time. Instead of having to pull out a phone, smart glasses wearers can ask for directions, translate conversations and capture video simply by looking at the world around them.
No longer the experimental gadget of the early 2010s, smart glasses are being marketed as the natural successor to the smartphone — or at least a powerful companion. As artificial intelligence becomes more capable and multimodal, the appeal of a wearable interface that can continuously interpret its wearer’s surroundings is growing fast. And their integration into everyday life feels almost inevitable.
Are Smart Glasses Legal in the United States?
Smart glasses are generally legal in the United States, but it’s complicated. No law specifically governs their use, so they fall under a patchwork of existing regulations, including recording-consent, wiretapping and state-level privacy laws. Because these rules were not designed with smart glasses in mind, certain uses — like recording someone without their consent in states that require it — are still illegal.
But that same always-on intelligence is what makes smart glasses uniquely controversial. Unlike smartphones, which require deliberate action to record someone, smart glasses collapse the gap between seeing and capturing. With the wave of a wearer’s hand or a single voice command, these devices can passively collect massive amounts of visual and audio data — and often without the knowledge of the people being recorded. In an era already overrun with concerns over privacy and mass surveillance, the prospect of us all wearing smart glasses raises some pretty major red flags.
Which leads us to a pretty crucial question: Are smart glasses actually legal?
First, What Are Smart Glasses and How Do They Work?
At a basic level, smart glasses are wearable computers built into a familiar form factor. On the outside, they look like regular eyeglasses. But inside, they’re packed with miniaturized cameras, microphones, speakers and processors. Many models also pair with a smartphone and connect to the cloud, allowing users to take photos and videos, listen to audio, get directions or interact with AI assistants — all without pulling out another device.
What sets the latest generation of smart glasses apart is how tightly they integrate artificial intelligence. Powered by advanced AI models, these devices capture and interpret what a wearer sees and hears in real time, allowing them to identify surrounding objects or landmarks, translate speech on the fly, offer navigation directions, capture photos and videos and much more.
This real-time intelligence is what makes smart glasses feel so convenient — and what makes their legality so ambiguous. In some cases, the visual and audio data they use is processed on the glasses themselves, but it is often routed to external cloud servers as well, where more powerful systems handle the more computationally intense tasks. Sometimes it’s even reviewed by humans living thousands of miles away.
In practice, that means the moments captured by smart glasses — whether mundane or deeply private — can be recorded, stored and analyzed in ways that aren’t always visible or fully understood by the wearers or the unsuspecting people around them.
Are Smart Glasses Legal?
Smart glasses aren’t illegal — but they’re not clearly regulated, either. Currently, no single law governs how these devices can be used. Instead, they sit inside a haphazard patchwork of existing rules repurposed from older recording-consent, wiretapping and state-level privacy laws, including those covering biometric data collection. These laws differ from state to state and weren’t written with smart glasses in mind, so they don’t explicitly address how the devices actually work. All of this leaves some pretty major gaps, particularly when it comes to whether or not smart glasses can be used to record people.
Filming what is in plain view in a public space is generally allowed, and most states follow “one-party consent” rules that let you record a conversation you’re part of. But many states require “all-party consent” for any recording where there is a reasonable expectation of privacy. This rule is pretty straightforward for a visible handheld phone, but it becomes legally murky in semi-public spaces like cafes, where a conversation that would otherwise be considered private might be covertly picked up by a pair of smart glasses. “That is a real issue when the microphone lives on your face,” Mark McCreary, partner and chief artificial intelligence and information security officer at Fox Rothschild LLP, told Built In. Even if that sensitive discussion takes place in a communal area, recording the audio without consent can be illegal.
Biometric privacy laws add another layer. States like Illinois, Texas and Washington restrict how companies can collect or use data tied to someone’s physical identity, such as facial features. Other states, including California and Massachusetts, broaden what counts as sensitive data, covering things like iris and retina scans. Illinois’ Biometric Information Privacy Act (BIPA) is widely considered the strictest biometric privacy law in the country, requiring written consent before any biometric data can even be captured. It also grants individuals a private right of action to sue for statutory penalties — up to $5,000 per violation — without needing to prove actual harm.
But then there’s one overarching concern that complicates all of this at once: the privacy of children. Under the Federal Trade Commission’s 2025 updates to the Children’s Online Privacy Protection Act (COPPA) rule, minors’ biometric data is tightly restricted. The thing is, for a device to know whether someone is underage, it has to scan and process their face first, McCreary points out. That initial scan already counts as collecting data, which the law prohibits without verifiable parental consent. This paradox leaves companies in a bind. “You cannot comply with the law without first breaking it,” he added.
Zooming out to a global scale, many point to the European Union’s General Data Protection Regulation (GDPR) as a gold standard for regulating devices like smart glasses. It includes strict mandates that require companies to bake privacy and security directly into their products by design, and it governs how they process every captured image, including people’s faces, license plates and home addresses — especially any that may reveal personal characteristics of an identifiable person, like their health condition or race. This isn’t just a suggestion: The law imposes penalties of up to €20 million, or 4 percent of a company’s global annual revenue, whichever is higher, for non-compliance.
“There are currently more than 160 countries around the world which have a comprehensive national privacy law. The United States is not one of them,” Gabriela Zanfir-Fortuna, vice president for global privacy at the nonprofit Future of Privacy Forum, told Built In. There are, however, around 20 states that have passed baseline data privacy laws in recent years, each with varying levels of protection and unique thresholds. “All of them are relevant in the context of smart glasses.”
What Makes Smart Glasses Special?
The legal ambiguity tied to smart glasses largely comes down to form factor. Functionally, there is little difference between a smartphone and a pair of smart glasses. It’s the way we use them that changes everything.
“When someone holds up a smartphone to record, there is a social signal. You can see the phone, react, walk away or object,” McCreary explained. “That friction is doing a lot of heavy legal and ethical lifting, even if we do not think of it that way. Smart glasses eliminate that friction.”
With smart glasses, there is no reaching into a pocket or extending an arm into a point-and-shoot posture. You just… look. Styled like classic, thick-frame Wayfarers, these discreet, always-on devices sit passively on a user’s face, turning even the most casual glance into a documented recording. From a distance, they’re indistinguishable from regular eyewear. Aside from a lit-up micro-LED in the top corner of the frames (which some users go out of their way to disable), there really is no way for bystanders to know for certain whether they’re being recorded by someone wearing smart glasses.
“That matters legally because many consent statutes hinge on notice, and a tiny white light nobody can see in broad daylight is not the kind of notice privacy law contemplates,” McCreary said.
While social media posts show them being used for innocuous purposes, like workouts and outdoor adventures, smart glasses are also earning a sleazy reputation as “pervert glasses.” Right now, they’re feeding the manosphere a steady stream of content harassing and rage-baiting women on the street for viral clicks, and filming popular pick-up videos in which alpha-bro “manfluencers” collect women’s phone numbers for sport.
A recent investigation by Swedish news outlets revealed a disturbing data pipeline behind how Meta’s Ray-Ban smart glasses actually work. When certain AI features are activated, users — whether they’re aware of it or not — stream footage to remote subcontractors whose job is to review and label the data. That footage included clips of people using the toilet, getting undressed and watching pornography, among other things. Even bank card details and intimate sex scenes reportedly passed through offices in Nairobi, Kenya, for annotation.
Taken together, the issue comes down to how covertly these devices operate, blending so seamlessly into everyday life that the act of recording all but disappears.
“The form factor makes surveillance invisible to the people whose data is being collected,” McCreary said, “and that invisibility is what makes it a genuinely new privacy problem.”
Is Facial Recognition Making a Comeback With Smart Glasses?
Facial recognition hasn’t made it into consumer smart glasses yet, but there is a concerted effort, led by Meta, to change that. The company is aggressively rebranding facial recognition as an identity helper rather than a surveillance tool. According to an internal Reality Labs memo leaked to The New York Times, Meta’s smart glasses would include a new feature codenamed “Name Tag,” originally pitched as an accessibility-centered tool to help blind and low-vision users locate friends in real time.
According to privacy and AI governance attorney Yelena Ambartsumian, Meta seems to think that, by presenting this feature as a tightly-scoped, opt-in feature, the company is betting they can sidestep backlash as long as the public is distracted — especially given an opportune political climate. As detailed in the memo: “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
This comes after the company was forced to shut down its face-scanning system in 2021, two years after paying more than $5 billion to the FTC in a privacy settlement. Despite promising to obtain “affirmative express consent” from users for facial recognition tech, Meta went on to pay $650 million to Illinois and $1.4 billion to Texas over claims it captured biometric data from millions of Facebook users without their permission.
“To be clear, nothing has changed with respect to privacy law … that would bless this,” Ambartsumian, founder of Ambart Law and Women in AI Governance charter member, told Built In. “That said, even if only people who have opted into the ‘Name Tag’ feature can be identified, that does not solve the problem of having to collect and process some amount of data in order to be able to determine whether a person has opted-in.”
Are There Any Legal Protections for Bystanders?
Not really. Whatever protections exist for bystanders under current laws rarely hold up in practice. Most people won’t know they’re being recorded by smart glasses, making violations nearly impossible to detect — let alone prove — and extremely difficult to enforce. The burden almost entirely falls on the person being recorded.
“Until the law shifts that burden to the wearer and the manufacturer,” McCreary said, “bystander protection will remain mostly theoretical.”
Frequently Asked Questions
Can smart glasses legally record people without their consent?
It depends on the situation and location. Video recording is generally allowed in public spaces, but in some states and situations, recording audio without all-party consent is illegal.
Do smart glasses use facial recognition?
Not yet. But there is a concerted push being led by Meta to build facial recognition into smart glasses.
