Deepfakes Are About to Break the Social Contract

Increasingly sophisticated synthetic media threatens to destroy our shared social reality. Here’s how we can stop the rupture.

Written by Ken Jon Miyachi
Published on Oct. 20, 2025
REVIEWED BY
Seth Wilson | Oct 17, 2025
Summary: Deepfakes pose a critical threat, breaking the social contract by enabling convincing fraud. Losses hit $16B in 2024. The solution requires a shift from blind trust to "trust, but verify through a second channel," cryptographic proof, multi-factor authentication and cross-sector collaboration to safeguard capital and society's shared reality.

A criminal no longer needs the old hallmarks of spam email: sleight of hand, grainy photos or bad links. They can now look and sound exactly like your boss on a live call, or echo a loved one’s voice using just a few public clips.

The FBI reported that Americans lost more than $16 billion to internet crime in 2024. That’s up one-third year over year, and it’s not just garden-variety phishing or ransomware scams we’re talking about here.

Impersonation crimes thrive when video and audio can be so convincing that victims are quickly duped. The losses become permanent in seconds, with payments that settle immediately. People trust what they see and what they hear, so criminals now have a more dangerous vector than ever to manipulate and deceive unsuspecting victims.  

We’re already aware of how effective deepfakes can be inside a company, as seen when a Hong Kong finance worker was duped into transferring nearly $25 million during a faked video conference.

But corporate fraud is only the beginning. Retirees are losing their life savings to scammers who clone their children’s voices in distress calls. Victims wire emergency funds to fake family members within minutes, discovering the truth only after the money is gone. This isn’t happening to careless people; it’s happening to anyone with a digital footprint.

How Can We Defend Against the Threat of Deepfakes?

To curb the threat posed by deepfake media, we need to implement a series of measures built on the principle of “trust, but verify through a second channel”:

  • Provenance signals.
  • Liveness/bio-marker checks.
  • Multi-factor authentication.
  • Checks on activity at unusual times.


Trust Is Already Gone

Emails and social media messages were once the primary attack vectors, delivering bogus links, so we taught people to double-check a link before clicking it and to look for typos. That guidance still has value, but it collapses in a world where the “link” becomes a live fake face and voice in a meeting room.

Law enforcement is undoubtedly aware of this plague, but legal assistance and routine evidence handling rarely move as fast as cross-border money. So, even when a victim reports the crime within hours, the funds have often already hopped through exchanges and accounts in different jurisdictions — each with its own legal threshold and ticking clock. 

Is the problem becoming clearer? When a video app or social network fails to verify authenticity or confirm real-time presence in high-risk scenarios (such as the fake Hong Kong conference), the person making the decision bears the full cost of trusting a fake.

Regulators are right to demand proof that safeguards are in place and set as defaults to protect consumers, but the goal isn’t to delegate trust to platforms; it’s to replace blind trust with verifiable checks at critical moments.

The harsh reality is that most people are unprepared for this threat. We need a fundamental shift in financial literacy, not just teaching people to avoid suspicious links, but helping them understand that even a live video call from someone they trust could be fabricated. The new mantra must be “trust, but verify through a second channel,” even when you're looking at your boss’s face or hearing your daughter’s voice. Until this becomes second nature, scammers will continue to exploit our most human instinct: Believing our own eyes and ears.

 

What Needs to Change to Limit Deepfakes

Proof. That is where genuine change begins: it is how we patch this hole in consumer protections, and it is where companies should start when securing their funds.

If media is captured on hardware, that device should earn trust only through cryptographic time-stamping and identification. A lack of proof doesn’t mean the media is fake, but without provenance it signals to users, observers and regulators alike that it could be.
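The capture-time stamping described above can be sketched in a few lines. This is a minimal illustration, not a production provenance scheme: real systems (such as C2PA-style content credentials) use asymmetric keys held in capture hardware, whereas this sketch substitutes a stdlib HMAC, and `DEVICE_KEY`, `stamp_media` and `verify_media` are hypothetical names.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key standing in for a device's signing key.
DEVICE_KEY = b"example-device-key"

def stamp_media(media_bytes: bytes, device_id: str) -> dict:
    """Attach a signed provenance record at capture time."""
    record = {
        "device_id": device_id,
        "captured_at": int(time.time()),
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(media_bytes: bytes, record: dict) -> bool:
    """True only if the media still matches its signed capture record."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != claimed["media_sha256"]:
        return False  # media was altered after capture
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

clip = b"raw video bytes"
record = stamp_media(clip, device_id="cam-042")
print(verify_media(clip, record))         # authentic capture verifies
print(verify_media(b"tampered", record))  # altered media fails
```

The point of the design is the asymmetry: a record is cheap to check but cannot be forged without the device key, which is exactly the property “proof, not trust” requires.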

Any request to move large sums of money, change credential details or change beneficiaries should trigger several challenges that are difficult to spoof: multi-factor authentication, a liveness check and an out-of-band confirmation, for example. Layered together, these measures verify the requester’s identity in ways a deepfake or other fraudulent tactic cannot easily defeat.
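As a sketch of that layered gate, the logic below requires every challenge to pass before a high-risk action proceeds. The action names, the `Challenge` structure and the $10,000 threshold are illustrative assumptions, not values from the article.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    mfa_passed: bool             # e.g. authenticator code or hardware token
    liveness_passed: bool        # real-time presence check on the requester
    out_of_band_confirmed: bool  # callback on a separately stored channel

# Assumed examples of requests that should always be challenged.
HIGH_RISK_ACTIONS = {"wire_transfer", "change_beneficiary", "reset_credentials"}

def authorize(action: str, amount: float, challenge: Challenge,
              threshold: float = 10_000.0) -> bool:
    """Allow routine activity, but demand every challenge for high-risk
    requests; a convincing face on a call is never sufficient on its own."""
    high_risk = action in HIGH_RISK_ACTIONS or amount >= threshold
    if not high_risk:
        return True
    return (challenge.mfa_passed
            and challenge.liveness_passed
            and challenge.out_of_band_confirmed)
```

In the Hong Kong scenario, `authorize("wire_transfer", 25_000_000, challenge)` would have been blocked the moment the out-of-band callback failed, regardless of how convincing the video was.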

This type of due diligence should be embedded in company policy as well as directly in applications and services, and it should be set up and tested during onboarding. Had it been, the Hong Kong case might have been averted, and a single call wouldn’t have blown a hole in the company’s bank account.

Technology alone won’t solve this crisis, though. We need genuine collaboration across every sector. Tech and social media companies must open-source their deepfake detection tools and share threat intelligence in real time rather than hoarding both as competitive advantages. Financial institutions need fraud-flagging systems that can detect suspicious patterns before money is transferred. Regulators must establish minimum standards for authentication and verification while providing clear liability frameworks. No one can solve this alone. The threat is distributed, so our response must be too.

 

Protecting Trust, Capital and Speech

This isn’t a call to silence synthetic media; it’s a call to distribute responsibility evenly across media providers and consumers so that people aren’t left to shoulder the risks alone, or at least not without a clear understanding of what those risks really are.

The financial losses are devastating, but they’re not the only casualty. Deepfakes are systematically dismantling our ability to trust anything we see or hear, and that has enormous implications across the entire world.

The social contract depends on a shared reality. We’ve always had liars and propagandists, but we’ve never before had the ability to manufacture convincing false evidence of events that never happened. When the maxim “seeing is believing” no longer holds true, how do we conduct trials? How do we hold leaders accountable? How do we know what’s real?

If we allow deepfakes to erode trust in journalism, judicial evidence and democratic discourse without building robust countermeasures, we won’t just lose money. We’ll lose the ability to function as a society that can agree on basic facts.


Building Verifiable Trust

Devices and apps should compete on user experience while collaborating on and implementing standardized safeguards, such as provenance signals and liveness checks.

Payment platforms should add safety checks, such as mandatory multi-factor authentication for first-time recipients, alerts for transfers at unusual hours and liveness/bio-marker tests. Regulators in the European Union have already asked the largest platforms to disclose their methods for tackling financial scams, including deepfake investment pitches and fake apps. So, why isn’t everyone following suit? 
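The payment-platform checks listed above can be expressed as simple pre-settlement rules. This is a rule-based sketch only; the “unusual hours” window (11 p.m. to 6 a.m.) and the $10,000 liveness threshold are assumed values, and `flag_transfer` is a hypothetical function name.

```python
from datetime import datetime

def flag_transfer(recipient: str, known_recipients: set[str],
                  sent_at: datetime, amount: float) -> list[str]:
    """Return the safety checks a transfer should trigger before it settles."""
    flags = []
    if recipient not in known_recipients:
        # Mandatory MFA for first-time recipients.
        flags.append("require_mfa_first_time_recipient")
    if sent_at.hour >= 23 or sent_at.hour < 6:
        # Assumed definition of "unusual hours."
        flags.append("alert_unusual_hour")
    if amount >= 10_000:
        # Assumed threshold for a liveness/bio-marker test.
        flags.append("liveness_check")
    return flags

# A 2:30 a.m. transfer of $25,000 to a new recipient trips all three checks.
print(flag_transfer("new-vendor", {"payroll"},
                    datetime(2025, 1, 1, 2, 30), 25_000))
```

The value of running these rules before settlement, rather than after, is the article’s core point: once an instant payment clears, the loss is permanent.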

If we pair these kinds of regulatory safeguards with cryptographic proofs and a second channel callback for any money movement, we can proceed safely together. By maintaining open communication, preserving privacy and safeguarding capital, it’s possible to rebuild trust and make it verifiable in a world where sight and sound can now be manipulated on demand. 

We must act now. Every day we delay is another day for scammers to develop better deepfakes, and victims lose not just their savings, but their faith in the digital world we’ve all come to depend on.
