Deepfake Phishing: Is That Actually Your Boss Calling?

Deepfake technology is outpacing our ability to spot it. That could be bad news for cybersecurity.
Tatum Hunter
November 10, 2020
Updated: November 11, 2020

When you hear the term “deepfake,” you probably think of synthetic reproductions of politicians or celebrities. But, starting now, you should also think about your boss.

Remote work is putting companies at greater risk of deepfake phishing attacks, executives at Technologent warned during a cybersecurity webinar last week. In a deepfake attack, criminals use synthetic audio to mimic the tone, inflection and idiosyncrasies of an executive or other employee. Then, they ask for a money transfer or access to sensitive data.

Concern about deepfakes — or technology that uses machine learning to realistically recreate the face, body or voice of a real person — has been on the rise as open-source tools like DeepFaceLab and Avatarify gain traction. Meanwhile, Facebook released its Deepfake Detection Challenge data set in June to help researchers develop ways to identify deepfakes, and the U.S. House of Representatives passed legislation in 2019 funding further research.

So far, actual deepfake attacks have been few and far between. There was a high-profile case in 2019, in which criminals used the tech to impersonate the CEO of a German conglomerate and steal almost $250,000 from a U.K.-based energy company, and Technologent reported three cases among its clients last year.

Accessible deepfake detection methods are still under development, leaving companies and employees exposed to the new form of exploitation. But our bosses and coworkers are people we know — usually, we talk to them every day. Could an attacker really imitate them convincingly enough to fool us?

According to Technologent’s panel, the answer is yes. That’s because bad actors, as it happens, are actually pretty great actors — and they research their roles thoroughly.


 


Putting the ‘Deep’ in Deepfake

The widespread switch to remote work brought on by the pandemic has come with sweeping IT challenges for companies — cybersecurity among them. Some of those challenges are technical, but others simply come from the increased likelihood of human error when employees are separated from each other.

Take this example Technologent Chief Information Security Officer Jon Mendoza offered:

The CFO of one of Technologent’s client companies received an email from his CEO, who he knew was boarding an airplane, requesting an urgent payment to a third party to avoid late penalties. Then, he received a text from the CEO, checking to make sure he got the email. Eager to avoid the fees, he forwarded the email to accounts payable, and that was that. Days later, the CFO mentioned the last-minute payment over drinks, and his boss had no idea what he was talking about.

In this story, it’s clear what went wrong: The CFO didn’t want to disturb his boss as he boarded the airplane, so the whole interaction was conducted via email and text. With workforces operating remotely, criminals have even more ways to manipulate unsuspecting employees — whether that involves deepfake audio, a really good impression or a perfectly timed email.

Often, the attack starts long before it concludes, with attackers identifying a target and watching that target’s behavior in the workplace.

“They may survey that victim for three months, four months, five months, a year.”

“The timetable is so much more extended than what we’ve seen in the past,” Technologent security practice director David Martinez said. “They may survey that victim for three months, four months, five months, a year.”

Whether it’s Facebook, LinkedIn or a compromised professional email account, attackers find ways to get to know their victims and understand the company’s internal workflows. That way, they can identify weak links in a company’s processes or busy moments when employees are particularly vulnerable.

“We’re creatures of habits,” Mendoza said. “So if I can get you in a state where you’re busy, you’re distracted, and I know all of your habits and your organization’s habits, I might not even need to do synthetic audio. It could just simply be impersonating an executive, which is most often what we see, or perhaps impersonating a trusted third party.”

When attackers do rely on synthetic audio, that audio comes from deepfake software programs trained on existing audio recordings of the target speaking on phone calls, at conferences or in press conferences. The output is getting closer to natural human speech, and that plays into our cognitive biases: My boss speaks in a particular cadence, so if the person leaving me a voicemail speaks in that cadence, it must be him — right?

“That’s what’s really made the COVID-19 impact here. Those people that you would have gotten up and walked down the hall to say, ‘Are you sure you want me to transfer this money?’ those people are not in the office together anymore,” Martinez said.

Given the amount of planning criminals put into these attacks, it’s understandable when employees are duped. But the fallout can be rough, Martinez told Built In. First, there are the water cooler conversations. (“Did you hear? Marcy fell for a phone scam and gave all our money away.”)

Then, there’s the reaction from executives, who too often equate investments in security toolkits with all-around protection from fraud. Unfortunately, Martinez said, there’s no one-and-done prevention strategy for social engineering.

“When an event like this happens, initially, there’s some disbelief, and there’s some anger on the part of executive management because they feel like they spent money on all of these, quote, security tools, and they were useless,” he added. “[Executives] feel like they’re going to be viewed as lax or ill prepared when, really, this is a very nefarious and insidious type of attack.”

 


Who’s at Risk?

Industries With Sensitive Data

Companies that have to comply with HIPAA or PCI security standards — namely, healthcare and financial institutions — are more likely to be the victims of deepfake attacks aimed at stealing data.

But any organization with a sprawling IT network — like logistics companies or casinos — is especially vulnerable, as well, because it’s easy for scammers to outmaneuver employees who are only familiar with one chunk of the network or one area of the company’s processes.

“Sometimes they build a better picture of the network than the company has.”

“Sometimes they build a better picture of the network than the company has,” Technologent senior solutions architect Jason DeJong said.

All in all, any operation that has lots of money moving around needs to keep an eye on deepfake developments, Martinez said. A bad wire transfer at a mortgage or insurance company, for example, may not raise immediate red flags.

 

Companies in Geographies With More Data Protection

Organizations in regions with advanced consumer privacy protection laws — like California or the European Union — are vulnerable to what Martinez called the “double threat” of paying ransoms to get sensitive data back and then paying fines for exposing that data.

 

Small Companies Aren’t Off the Hook

Even though the current profile of a deepfake target is a large organization in the healthcare, finance or government sectors, small companies shouldn’t relax. That’s because big paydays aren’t the only goals of these scams. Criminal enterprises are businesses, too, Martinez said, and they think about quality of revenue as well as quantity.

Take a company that sells lead pipes, for instance.

“They say, ‘We don’t want to spend X amount of money trying to protect the secrets of making pipes, because there are no secrets,’ right? And what they fail to realize is, they’ve got a one-gig internet pipe that comes into their business, and they’re supporting 250 remote salespeople,” Martinez told Built In.

Those devices themselves have value to thieves. Find a way to compromise the laptops and access the internet traffic without attaching your name to either, and you’re free to leverage that equipment to commit more crimes. Often, criminals use compromised internet services to build botnets and launch credential stuffing or other attacks at their next targets. Once they’ve exploited that botnet as much as possible, they’ll hold the network ransom.

“They’re basically trying to extort the last amount of money they can get,” Martinez said. “They’ve already utilized those individuals to attack someone else, and now they’re leaving them holding the bag.”

And it doesn’t end there. Sometimes, bad actors will resell access to sensitive data even after they’ve received the ransom, especially if small companies don’t spend the money to find and eradicate the attackers’ remote-access and encryption tools from the data environment.

 


How to Protect Against Deepfake Attacks

Knowing that deepfakes exist is not enough to protect employees from a deepfake attack.

“No one’s going to upload the voicemail and look for key artifacts in an audio recording to say, ‘Oh, this has obviously been faked,’” Martinez said. “That software is not available to us commercially right now.” (For the record, New York City-based ID R&D offers anti-spoofing technology for biometric authentication systems, but not for normal old voicemail.)

Here’s what Martinez, Mendoza and DeJong recommended instead:

 

Slow Down the Process

When it comes to preventing deepfakes, urgency is the enemy of good security.

“[Criminals] are going to try to push and accelerate and move the timeline as quickly as possible to get something done,” Martinez said.

The more pairs of eyes on a transaction before it happens, the less chance a deepfake attack has of succeeding. So building delays and double checks into company processes, especially financial ones, can stop some deepfakes in their tracks.

“Time kills a lot of fakes,” Martinez added.
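The double-check-and-delay idea can be sketched in a few lines of code. In this minimal Python sketch, a payment is released only after two distinct people approve it and a mandatory hold period elapses; every class, field and value here is a hypothetical illustration of the policy, not any real system Technologent described.

```python
import time

HOLD_SECONDS = 24 * 60 * 60  # hypothetical one-day hold before release

class PaymentRequest:
    """Illustrative payment request requiring two approvers plus a delay."""

    def __init__(self, amount, payee, requested_at=None):
        self.amount = amount
        self.payee = payee
        self.requested_at = time.time() if requested_at is None else requested_at
        self.approvers = set()  # distinct employee IDs who signed off

    def approve(self, employee_id):
        self.approvers.add(employee_id)

    def releasable(self, now=None):
        now = time.time() if now is None else now
        two_eyes = len(self.approvers) >= 2             # double check
        aged = now - self.requested_at >= HOLD_SECONDS  # "time kills fakes"
        return two_eyes and aged

req = PaymentRequest(50_000, "Third-Party Vendor", requested_at=0)
req.approve("cfo")
req.approve("controller")
print(req.releasable(now=HOLD_SECONDS + 1))  # → True
```

An urgent request with only one approver, or one pushed through before the hold expires, simply stays queued — which is exactly the friction a deepfake caller is trying to avoid.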

“There really is not a technology that will protect you, so you have to use common sense.”

That means companies must train employees on deepfake red flags — a sense of urgency, an unexpected voicemail, a wrong “mail from” SMTP header on an email from a colleague — so they can feel comfortable asking potentially awkward questions, like, “Do you mind hopping on Zoom before I make this transfer?”
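One of those red flags, a mismatched sender header, is cheap to check automatically. Below is a minimal Python sketch (standard library only) that flags a message whose visible From: domain differs from its Return-Path or Reply-To domain; the mismatch-equals-suspicious policy and the sample message are illustrative assumptions, not a production spoofing detector.

```python
from email import message_from_string
from email.utils import parseaddr

def domain(addr_header):
    """Extract the lowercased domain part of an address header, or ''."""
    _, addr = parseaddr(addr_header or "")
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def looks_spoofed(raw_message):
    """Flag messages whose From: domain differs from envelope/reply domains."""
    msg = message_from_string(raw_message)
    from_dom = domain(msg.get("From"))
    for header in ("Return-Path", "Reply-To"):
        other = domain(msg.get(header))
        if other and other != from_dom:
            return True
    return False

# Hypothetical phishing message: the Reply-To routes answers elsewhere.
raw = (
    "From: CEO <ceo@example.com>\r\n"
    "Reply-To: ceo@examp1e-mail.net\r\n"
    "Subject: Urgent wire transfer\r\n"
    "\r\nPlease pay the attached invoice today.\r\n"
)
print(looks_spoofed(raw))  # → True
```

A real deployment would sit in the mail gateway and lean on SPF, DKIM and DMARC results rather than a single header comparison, but even this simple check catches the lookalike-domain trick in the example.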

“There really is not a technology that will protect you, so you have to use common sense,” Mendoza said.

 

Balance Security Tools and User Training

As mentioned, executives often deploy cybersecurity tools and expect that alone to protect their companies from attacks. Instead, they should focus on building holistic security programs.

“Having a holistic security program is just as important as deploying the latest and greatest firewalls, point protection and email protection, because there really isn’t a good security control that will address some of the things we’re describing,” Mendoza said.

A holistic security program starts with assessments of network risk, user risk and internal policies from a third party or internal security experts. Maybe that assessment reveals that the company needs to better segregate its IT resources to make them more difficult to compromise, that user email addresses need multi-factor authentication or that the company has failed to fully take advantage of automated security tools.

“We’re barely scratching the surface. We believe the technology is only going to get better from here.”

Whatever the conclusion, the next step is to update and train employees on the company’s security processes and policies — and then enforce those policies. Time-crunched executives may want to skip security training or cut corners to save money, but it’s essential that everyone participates and fully understands the security challenges of the moment.

As deepfake technology gets more advanced and accessible, attacks of this sort will happen more often. Until deepfake detection software catches up, companies and employees must be on the lookout.

“We’re barely scratching the surface,” Mendoza said. “We believe the technology is only going to get better from here.”

(In this case, that’s a bad thing.)

