After more than two decades of a near-lawless cyberscape, regulators are finally trying to rein in the internet — with a particular focus on children. Age verification laws are sweeping the United States and beyond, aimed at curbing kids' exposure to things like pornography, violent content and the addictive pull of algorithm-driven social media feeds.
Until recently, the only federal safeguard in this space was the 1998 Children’s Online Privacy Protection Act, which prohibits websites and ad-supported platforms from collecting personal information from children under the age of 13 without parental consent. Any meaningful attempts to expand these restrictions since then were stymied by concerns over feasibility, enforcement or free speech. But in June 2025, the Supreme Court removed those barriers with a 6-3 decision noting that states have a “compelling interest in protecting children,” and that age verification requirements are “narrowly tailored” enough to do the job without infringing on adults’ rights.
Now, with the Supreme Court’s approval, America is becoming a patchwork of state-level age verification laws, each with its own protocols and boundaries. As of September, 25 states have enacted such laws. They primarily target pornography sites, but often extend to social media feeds that surface mature or violent content as well, including NSFW posts on platforms like Reddit or X.
What Are Age Verification Systems?
Age verification systems confirm a user’s age through methods like photo IDs, selfies with facial recognition or parental approval. Some platforms use AI to estimate age based on behavior. Although these systems are designed to improve children’s safety, they raise concerns about privacy, accuracy and who is responsible when safeguards fail.
They also seek to shield minors from “harmful content,” though each state defines this term differently. “Harmful” may apply to material that encourages risky behavior — such as online discussions promoting fentanyl use on Facebook — or pro-anorexia content on Instagram. More recently, attention has turned to AI chatbots like ChatGPT, which have been found to flirt with minors, or even veer into conversations involving self-harm and suicide. New York’s SAFE For Kids Act goes beyond simple content restrictions, limiting nighttime notifications to minors and imposing strict rules on how platforms handle their data.
One of the most restrictive measures yet comes from across the pond, with the United Kingdom’s Online Safety Act. Its age-check duties took effect in July 2025, requiring platforms ranging from social media and video-sharing sites to cloud storage providers to verify users’ ages and remove both illegal and “legal but harmful” material. It also imposes guardrails around recommendation systems that could push dangerous content. Penalties reach up to £18 million or 10 percent of global revenue, with the power to block entire services, making the UK one of the strictest online safety regimes in the world. Early enforcement is already reshaping how platforms operate. Bluesky users, for example, must verify their age by uploading an ID, payment card or face scan. Reddit has enlisted a third-party service to age-gate its mature content, which can only be accessed after submitting a government-issued ID or a live selfie.
“Most governments were asleep at the wheel for the first two decades of the internet, during which there was very little regulation and almost none focused on safety,” Jeremy Gottschalk, a trust and safety expert who runs Marketplace Risk, told Built In. But now, with lawmakers pushing to protect children from the very real dangers of the web, the pendulum seems to be swinging hard the other way — raising concerns that safeguarding minors could come at the cost of adults’ rights, particularly when it comes to privacy and free expression.
How Is Age Being Verified Online?
Age verification typically begins with something simple — ticking a box or scrolling through a dropdown menu to select a birth year. But behind these seemingly trivial steps is a complex web of technology and regulation designed to keep minors away from certain material.
Here’s a look at some of the ways platforms are checking users’ ages.
Photo ID and Credit Cards
Many platforms ask users to verify their age by uploading a government-issued ID, like a passport or driver’s license. Sometimes they’ll accept a credit card as well. While straightforward, these methods can exclude anyone without official documentation and pose serious privacy risks, from potential data breaches to identity theft. They’re also not foolproof — AI-generated or doctored fake IDs are increasingly common, making enforcement imperfect.
Facial Recognition
Facial recognition as a means of verifying age has become increasingly common on social media platforms like TikTok and Instagram, as well as dating apps like Tinder. TikTok, for instance, allows users to upload a photo ID along with three selfies, while Instagram verifies minors through a short video selfie.
But this process is far from flawless. Bad lighting or cheap cameras can throw off results, and determined teens have been known to bypass the checks by using photos of siblings and friends, or editing their own selfies. Even when the system does work, it raises deeper concerns around consent, algorithmic bias and the long-term storage of sensitive biometric data — all at a time when realistic deepfakes are making impersonation easier than ever.
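Because estimates are imprecise, platforms often pair them with a fallback rather than trusting the number outright. Here is a minimal sketch of that estimate-then-escalate pattern, assuming the face-age model is a separate vendor service; the 25-year buffer is an illustrative threshold, not any platform's documented value.

```python
# A minimal sketch of the estimate-then-escalate pattern, assuming the face-age model
# is a separate vendor service. The 25-year buffer is an illustrative assumption,
# not any platform's documented value.

ADULT_AGE = 18
BUFFER_AGE = 25  # estimates below this trigger a stronger check to absorb model error

def gate_mature_content(estimated_age: float) -> str:
    """Decide what to do with a face-based age estimate of unknown accuracy."""
    if estimated_age >= BUFFER_AGE:
        return "allow"                    # comfortably above the threshold
    if estimated_age >= ADULT_AGE:
        return "require_document_check"   # too close to call; fall back to an ID
    return "deny"                         # likely a minor

print(gate_mature_content(31.0))  # allow
print(gate_mature_content(19.0))  # require_document_check
print(gate_mature_content(15.0))  # deny
```

The buffer exists precisely because of the error sources described above: a model that misjudges by a few years should fail toward a stronger check, not toward access.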
Parental or Trusted Third-Party Verification
Some apps let parents or other trusted adults prove a child’s age with a simple “yes/no” option, keeping the child’s image and personal data off the platform entirely. Apple and Google go a step further, letting parents manage their kids’ accounts and share only a rough age range so the apps filter out inappropriate content. Still, these systems rely on honesty, and tech-savvy kids often find workarounds, making them convenient but by no means bulletproof.
Age Estimation
Artificial intelligence is increasingly being used to estimate users’ ages. In the same way algorithms inundate your feed with snorkeling videos after you search for flights to the Caribbean, or wrinkle cream ads after you turn 30, platforms analyze behaviors like typing patterns, search history, geolocation and even social connections to make an educated guess at a user’s age.
YouTube now predicts a user’s age based on their watch history and search queries, while Meta looks at engagement patterns and social connections to flag likely teens. This approach is less intrusive than asking for a photo ID or selfie, but far from foolproof. A 14-year-old obsessed with adult content could be falsely identified as an adult, while an adult who frequently watches kids’ shows may be flagged as underage. Because of these blind spots, AI-powered age estimation usually works best when paired with other verification methods.
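As a rough illustration of how such signals might combine, here is a toy scoring sketch; the feature names, weights and thresholds are invented for the example, and real systems like YouTube's or Meta's are far more sophisticated and undisclosed.

```python
# A toy sketch of behavior-based age estimation: combine weak signals into a score
# and use it only to route an account toward a stronger check. All feature names,
# weights and thresholds are illustrative assumptions, not any platform's real model.

def estimate_minor_likelihood(signals: dict) -> float:
    """Return a 0-1 score that the account likely belongs to a minor."""
    score = 0.0
    if signals.get("self_reported_age", 99) < 18:
        score += 0.5
    if signals.get("school_hours_activity_ratio", 0.0) > 0.6:   # heavy weekday-daytime use
        score += 0.2
    if signals.get("share_of_teen_oriented_content", 0.0) > 0.5:
        score += 0.2
    if signals.get("median_connection_age", 30) < 18:           # mostly underage contacts
        score += 0.1
    return min(score, 1.0)

def route_account(signals: dict) -> str:
    score = estimate_minor_likelihood(signals)
    if score >= 0.7:
        return "apply_teen_defaults_and_request_verification"
    if score >= 0.4:
        return "request_verification"
    return "no_action"

# A self-declared adult whose behavior looks teenage gets routed to verification.
print(route_account({
    "self_reported_age": 21,
    "school_hours_activity_ratio": 0.7,
    "share_of_teen_oriented_content": 0.8,
    "median_connection_age": 16,
}))  # request_verification
```

Because each signal is weak on its own, the score only triggers a follow-up check, echoing the point above that estimation works best alongside other methods.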
Zero-Knowledge Proofs
A zero-knowledge proof is a cryptographic method that lets someone prove a fact online — like being a certain age — without revealing any other information. Instead of handing over your birthdate or a photo to each individual app, a trusted verifier checks it once and issues a digital cross-platform “proof” that works like a pass-or-fail token. Any platform can read that token to confirm eligibility without ever storing or seeing your personal details.
Think of it like showing a wristband at a concert: Security doesn’t need to know your name or date of birth, just that your wristband proves you’ve already been checked. ZKPs work the same way for age — the platform only sees confirmation that you clear the bar, nothing more.
So far, TikTok has piloted this with Incode, where a user uploads an ID once to receive a cryptographic age token. Reddit has tested a comparable setup with Persona. This is a big shift from older systems, where every platform stored its own copy of your data, multiplying the chances of a leak even years after you leave. Operating with anonymous tokens limits the fallout of a data breach, leaving hackers with proofs that are meaningless outside of the verification check. Because of this, experts see ZKPs as one of the most promising technologies for meeting all of these new regulations.
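The flow can be sketched in a few lines. The example below stands in for the real cryptography: it uses a simple HMAC-signed assertion rather than an actual zero-knowledge proof, and a real verifier would use asymmetric signatures or true ZK proofs so platforms could only check tokens, not mint them. It is meant only to show the pass-or-fail token idea: the platform learns one bit, not a birthdate.

```python
# A simplified sketch of the verify-once, reuse-a-token flow. This is NOT a real
# zero-knowledge proof: it uses an HMAC-signed assertion and a shared key purely
# for illustration of the "one bit, no personal details" property.

import hashlib
import hmac
import json
import secrets
import time

VERIFIER_KEY = secrets.token_bytes(32)  # held by the trusted verifier (illustrative key handling)

def issue_age_token(user_is_over_18: bool) -> str:
    """Issued once, after the verifier has checked an ID. Contains no personal details."""
    claim = {"over_18": user_is_over_18, "exp": int(time.time()) + 86400}
    payload = json.dumps(claim, sort_keys=True)
    sig = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def platform_accepts(token: str) -> bool:
    """A platform checks the signature and the single pass/fail claim, nothing else."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(payload)
    return claim["over_18"] and claim["exp"] > time.time()

token = issue_age_token(True)
print(platform_accepts(token))  # True, with no birthdate or ID ever reaching the platform
```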
Federated Verification
Other emerging methods rely on trusted intermediaries or distributed ledgers. In federated verification, a single identity provider confirms your age once and issues a secure token multiple platforms can use, similar to how single sign-on (SSO) works when you log into apps with your Google or Apple account. Platforms don’t actually get your password (or, in the case of age verification, your birthdate); they just receive confirmation that you are who you say you are.
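Here is a minimal sketch of that handoff, assuming an invented claim format: the identity provider signs an "over 18" claim with its private key, and any platform can verify it with the provider's published public key. Real deployments generally use OpenID Connect and signed JWTs; this uses raw Ed25519 signatures via the third-party `cryptography` package purely for illustration.

```python
# A minimal sketch of federated verification: the identity provider signs an age claim,
# platforms verify it with the provider's public key and never see a birthdate.
# Requires the third-party `cryptography` package; the claim format is an assumption.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Held by the identity provider (e.g., the account you "sign in with").
idp_private_key = Ed25519PrivateKey.generate()
idp_public_key = idp_private_key.public_key()   # published for platforms to use

def idp_issue_claim(user_id: str, over_18: bool) -> tuple[bytes, bytes]:
    claim = json.dumps({"sub": user_id, "over_18": over_18}, sort_keys=True).encode()
    return claim, idp_private_key.sign(claim)

def platform_verify(claim: bytes, signature: bytes) -> bool:
    try:
        idp_public_key.verify(signature, claim)   # raises if forged or tampered with
    except InvalidSignature:
        return False
    return json.loads(claim)["over_18"]

claim, sig = idp_issue_claim("user-123", over_18=True)
print(platform_verify(claim, sig))  # True
```

Unlike the shared-key sketch above, platforms here hold only the public key, so they can verify claims but never issue them.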
Blockchain-Based Verification
Blockchain-based approaches go a step further than federated verification by storing proofs in a tamper-resistant, decentralized ledger. That way, platforms can check against a distributed record rather than depending on one company’s servers. The European Union, for example, is piloting a Digital Identity Wallet that would let citizens prove their age or identity across borders without sharing unnecessary details. Similarly, companies like Worldcoin are experimenting with blockchain-backed credentials to provide privacy-preserving age checks. Both federated and blockchain-based models reduce the need for every individual platform to store sensitive data while keeping services compliant with global regulations.
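To make the distinction concrete, here is a toy sketch with the "ledger" reduced to a plain list: an issuer anchors only the hash of an age attestation in a shared record, and platforms check against that record instead of one company's database. Real systems such as the EU wallet pilots involve far more machinery; nothing here reflects their actual design.

```python
# A toy sketch of ledger-anchored verification: only a hash of the attestation is
# published, and platforms check against the shared record rather than a single
# company's server. The in-memory list stands in for a distributed ledger.

import hashlib

ledger: list[str] = []   # stand-in for a tamper-resistant, distributed ledger

def anchor_attestation(attestation: bytes) -> str:
    """Issuer publishes only the hash of the attestation, not its contents."""
    digest = hashlib.sha256(attestation).hexdigest()
    ledger.append(digest)
    return digest

def platform_check(attestation: bytes) -> bool:
    """A platform confirms the attestation it was shown is anchored in the record."""
    return hashlib.sha256(attestation).hexdigest() in ledger

attestation = b'{"credential": "over_18", "issuer": "example-wallet"}'
anchor_attestation(attestation)
print(platform_check(attestation))  # True
```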
Why Is Age Verification Important?
Age verification laws are coming at a time when we’re just beginning to understand that the risks young people face online are not isolated incidents, but systemic problems baked into the platforms themselves. Studies show that adolescents who are exposed to self-harm imagery and cyberbullying often experience higher levels of psychological distress and increased suicidal thoughts — patterns that can fuel negative self-talk and intensify the urge to self-harm. These findings are borne out in tragic cases like that of Molly Russell, who died by suicide at just 14 after consuming a flood of self-harm content on Instagram and Pinterest, and Chase Nasca, who took his own life at 16 after being served a stream of pro-suicide videos on TikTok.
Even among children as young as eight, exposure to things like cyberbullying, adult material or predatory contact has been linked to lower self-worth and elevated depressive symptoms. For those already struggling, such as young people receiving mental health care, excessive smartphone and social media use can deepen problems like poor sleep and self-harm behaviors. Newer technologies like AI chatbots add another layer of risk: One study analyzing more than 35,000 interactions with the popular AI companion app Replika over a span of six years found repeated instances of harmful behaviors, including encouraging self-harm and sexual misconduct. Families have even sued similar companies like Character.ai and OpenAI, alleging their chatbots played a role in their children’s deaths.
With several generations of children now raised in front of screens, it is abundantly clear that digital platforms actively mold their behavior and can compound mental health risks, leaving kids more vulnerable than ever without proper protections in place.
The Problems With Age Verification
The push for age verification laws involves a delicate balance: protecting children online without eroding adults’ rights, threatening privacy or slowing innovation. It comes down to two main challenges: flawed systems that are prone to mistakes and security risks, and a lack of clarity over who is responsible for ensuring kids’ safety.
Flawed Systems
Even the best age verification systems make mistakes. Adults can be wrongly flagged and locked out of certain sites or content, while minors can find cracks and slip through. And when these systems do work, they can pose some significant security risks to adults and children alike, as photo IDs, biometric data and other sensitive information can be leaked and misused by bad actors.
“It’s a privacy nightmare waiting to happen,” Ron De Jesus, the field chief privacy officer at Transcend, told Built In.
Of course, we all want to shield kids from the dangers of the internet. But age verification offers a somewhat paradoxical form of safety. Laws intended to protect children can place an undue burden on adults, who must hand over personal data simply to access the internet — data that, if breached, removes the very sense of safety the rules were meant to provide.
What’s more, imposing restrictions based on what content is or isn’t “harmful” can be a slippery slope toward First Amendment violations, giving governments the ability to police otherwise lawful speech in the name of protecting kids. That tension is one reason content-restriction laws were often struck down as unconstitutional. But this latest wave of regulation has survived because it focuses narrowly on minors’ access to pornography without limiting adults’ speech outright.
“When laws narrowly focus on minors’ access to pornography, they are more likely to survive,” Zahra Timsah, an AI governance advisor and CEO of i-Gentic AI, said. “Precision is everything.”
The Question of Accountability
Closely tied to the inherent flaws of age verification systems is the question of accountability. Who should be held responsible when a minor slips past age verification systems and is harmed online?
Some regulators in the U.S. and European Union argue that tech companies profit massively from young users' engagement and should therefore shoulder the responsibility of keeping them safe — which includes facing fines or other penalties if they fail. Predictably, tech companies have pushed back, noting that no system can perfectly prevent underage access. Holding them fully liable when kids bypass the system thus puts them in a bind between stifling legal activity and collecting ever more invasive personal information.
“No age verification solution is perfect, which leads to false positives and false negatives,” Gottschalk said. “That results in simultaneously losing customers while erroneously granting unintended access to children.”
For that reason, Gottschalk argues that age verification laws wrongly shift the responsibility of raising and monitoring children onto tech companies, rather than leaving it with parents. After all, kids these days are “digital natives,” he said. “[They] understand the gaps, loopholes and workarounds more than anyone.” Hence, parents should be the “gatekeepers” to their children’s device and internet access.
Possible Solutions
Several government entities are actively examining the ways artificial intelligence and chatbots affect children’s safety. And a number of families are suing over the harms they believe AI companies caused to their kids. But without clear legal precedent, it remains unclear whether these companies can ultimately be held responsible, leaving the issue in a persistent gray area.
Some legal scholars propose companies should be held accountable for certain forms of negligence — such as weak enforcement, ignored warning signs or poorly designed safeguards — but not for every clever tactic a minor uses to bypass protections. Other experts think the focus should be more on the specific design choices that enable or amplify harm, such as excessive notifications, infinite scrolling and gamification, as well as features like maps and suggested connections, which enable children to connect with total strangers.
“In our view, the worst online harms to children stem not simply from exposure to indecent content, but from design features that facilitate harmful interactions or compulsive use,” Morgan Wilsmann, a policy analyst at Public Knowledge, wrote in a report. “Looking to features as conduits of harm rather than content gets at more insidious dangers that a focus on content does not address. Rather than burden all users with verifying their ages to view content online, risky features should be age-gated, while low-risk activities — such as basic content browsing or accessing educational material — should remain unrestricted.”
Adding more consistent standards on this front could incentivize platforms to implement more effective and privacy-conscious safety measures. Meanwhile, some ethicists frame the question in broader terms, comparing online safety to product safety: Just as toy makers are responsible for preventing harm to children, tech platforms should be expected to anticipate and mitigate risks inherent in their products.
Even if age verification systems remain part of the picture, De Jesus says there are much more trustworthy and secure options than the ones we’re currently using. Risk-based checks, for example, could scale with the sensitivity of the content, applying lighter measures to teen-friendly spaces and stricter ones to adult services. Zero-knowledge proofs could preserve more privacy, confirming a user’s age via cryptographic tokens without exposing any personal details. Other measures might include minimal data retention, where companies store only what’s absolutely necessary, or red-team testing, where third-party experts try to break or bypass the safeguards. Maintaining a strict separation between age-check systems and the platforms themselves, along with plain-language explanations for users, could also make the process both more secure and more transparent.
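A brief sketch of the risk-based idea follows, with tiers and required methods that are illustrative assumptions rather than any regulatory standard: the strength of the required check scales with the sensitivity of what is being accessed.

```python
# A sketch of risk-based age checks: lighter measures for low-sensitivity spaces,
# stronger ones for adult services. The tiers and method names are illustrative only.

REQUIRED_CHECK_BY_RISK = {
    "low":    "none",                       # e.g., browsing educational or general content
    "medium": "self_declared_age",          # e.g., teen-oriented social features
    "high":   "privacy_preserving_token",   # e.g., mature content; ZKP-style proof
}

CHECK_STRENGTH = {"none": 0, "self_declared_age": 1, "privacy_preserving_token": 2}

def access_allowed(content_risk: str, completed_check: str) -> bool:
    """Allow access only if the completed check meets or exceeds what the tier requires."""
    required = REQUIRED_CHECK_BY_RISK[content_risk]
    return CHECK_STRENGTH[completed_check] >= CHECK_STRENGTH[required]

print(access_allowed("high", "self_declared_age"))          # False: needs a stronger check
print(access_allowed("medium", "privacy_preserving_token"))  # True
```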
Other proposals, such as the App Store Accountability Act, take a different approach, shifting the responsibility for age verification over to app stores rather than specific apps. Leaning into this model could standardize safeguards across all platforms according to their content, offering a more efficient solution than requiring each individual site to implement its own system.
“The technical architecture exists — privacy-preserving authentication can verify age without creating massive databases of sensitive personal information,” De Jesus said. But in order to expand these practices outside of private use and into the public sector, there would need to be federal investment in secure digital ID infrastructure and standardized protocols that work across state lines.
So, until there’s bipartisan leadership backing federal privacy standards, De Jesus said, “we’re going to keep passing the compliance burden around state by state — a mess that leaves both companies and consumers worse off.”