The Terrifying World of Social Engineering
The Amazon series “Sneaky Pete” stars Giovanni Ribisi as a slick con man who gains people’s trust in order to take from them what he wants or needs — money, jewels, art, etc. Blessed with the gift of gab and steely nerves, he does this by speaking authoritatively based on information gleaned from keen observation and known facts. A master of psychological and emotional manipulation, he also cajoles, flatters, feigns ignorance and fakes compassion. Much of the time, it pays off.
Pete operates entirely in the physical world, hence the traditional con man tag. If he used his trickery to commit crimes online, however, his methods would be dubbed “pretexting” or “baiting,” and Pete himself would be known by another handle: social engineer. Which sounds kind of respectable, but isn’t.
Over a nearly four-year period (from 2013 through the end of 2016), cyber attacks — which include such deceptive social engineering tactics as “spear phishing,” “water-holing,” “baiting” and several other types with similarly catchy names — cost companies $1.6 billion, according to the FBI. And plenty of prominent players have been snookered: the Associated Press, Target, Sony Pictures, Yahoo, the Democratic National Committee and even the U.S. Department of Justice.
What is social engineering?
Social engineering uses influence and persuasion to deceive people by convincing them that the social engineer is someone he is not, or by manipulation.
Way back in 1992, Kevin Mitnick, once known as “The World’s Most Wanted Hacker,” persuaded someone at Motorola to give him the source code for its new flip phone, the MicroTAC Ultra Lite. You can read how he pulled it off in his memoir, “Ghost in the Wires.” Among other things, the episode demonstrates that falling prey to social engineering has less to do with inadequate technological defenses and more to do with the human mind. As Mitnick and his co-author, William L. Simon, put it in “The Art of Deception”: “Social engineering uses influence and persuasion to deceive people by convincing them that the social engineer is someone he is not, or by manipulation. As a result, the social engineer is able to take advantage of people to obtain information with or without the use of technology.”
How social engineering happens
Sal Lifrieri, who spent 20 years as a New York City cop before founding his company, Protective Countermeasures, says our innate tendency to help others is especially exploitable. That’s certainly true of “vishing” (voice phishing), in which an expert caller can dupe a mark over the phone in a way that’s both impressive and scary.
“The ideal situation in a social engineering attack is that I get you to do something you would normally not do and you believe that you're helping me,” Lifrieri says. “And you're going to be so satisfied that you helped, that you’ll walk away feeling satisfied.”
Of course, you may have just aided a Chianti-loving cannibal.
Manipulating an innate respect for authority
While some of these attacks employ demands, whereby the target is strong-armed into capitulating, an authority tack can be trickier to pull off. Trave Harmon, founder and CEO of Triton Technologies, says that has a lot to do with upbringing.
“If you were taught that people in authoritative positions, even perceived ones, are to be trusted, all of a sudden an email from the admin at Microsoft that says, ‘We found a virus on your computer’ [seems credible],” he says.
'People are going to people'
"Social engineering in general isn’t about how smart technically you are. It’s about what connects you to others, what makes you curious and angry and what might make you act without thinking."
A nonexistent computer virus is nothing compared to more advanced schemes that have been and continue to be played out on a grand scale. Whether enacted in person, online or by phone, those plots require time and patience — time to do background research and prepare the setup; patience to build trust and lie in wait until the moment is right to strike.
In 2014, only one year after a hack that was ultimately revealed to have swiped the personal information of three billion Yahoo users, the company got hit again — this time with a spear phishing email that duped an employee and led to the compromise of 500 million accounts. The U.S. Department of Justice charged Russian intelligence agents with the crime.
As LARES Consulting founder Chris Nickerson sees it, people actually “have a very strong understanding, even if they're not willing to admit it, that the cyber world is really just a basic reflection of the physical world.” In the former realm, he says by way of example, things called firewalls are often used to thwart intruders — even though “the concept of a wall in software is ridiculous. It's not a dimensional space in that aspect. So it just goes to show that even in the language people have created around it, cyber space is not very dissimilar to the human space.”
Nickerson, though, puts little stock in the notion that human psychology is chiefly to blame when it comes to the efficacy of social engineering. While targets often dwell mentally in the criminally advantageous sweet spot “between fear and hope,” he also knows some “highly advanced beings” who’ve been hustled.
“Humans are going to do human stuff,” he says. “I don't necessarily know if it’s a vulnerability. Lots of times it's functioning as intended. From a psychological perspective, you're doing the things that you're natively supposed to do, whether for health or the promotion of the species or [something else]. People are going to people, no matter what happens. The part of social engineering that most people get wrong is that it is founded in science, in engineering. It is not a methodology for lying. The fundamental pieces of engineering that create the space for something like reframing in a conversation, or anchoring, or the ability to use [neuro]linguistic programming in order to get an anticipated response are concepts that a social engineer has a foundational understanding of.”
In a similar vein, Michele Fincher, now chief influencing agent of the security consulting firm Social-Engineer, told Forbes that "social engineering in general isn’t about how smart technically you are. It’s about what connects you to others, what makes you curious and angry and what might make you act without thinking."
Confidence is key
“If you have confidence, you’re 75 percent of the way there.”
Lots of social engineering plays out entirely online, where perpetrators can hide behind their screens and keyboards — and where things like tone of voice, facial expressions and body language are immaterial. When a scam requires more personal elements, such as phone calls or in-person visits, those facets become much more significant. Confidence, therefore, is key.
It surely was in 2007, when a man absconded with $28 million in loose diamonds from five safe deposit boxes at a Belgian bank. Passing himself off as a businessman named Carlos Hector Flomenbaum (probably a false identity), he'd been a regular customer for at least a year prior, ingratiating himself with employees through charm and chocolates. Flomenbaum became so trusted and beloved, in fact, that he was granted a coveted vault key that allowed him access during off-hours.
Swap out vaults for networks, and diamonds for passwords, and you've got yourself a social engineering swindle for the technological age.
“If you have confidence, you’re 75 percent of the way [there],” says Gregory Morawietz, vice president of operations at Single Point of Contact in San Francisco. “If you're just walking through a door and you walk past somebody and don't say anything at all, they might not stop you. Then you just hump it over to a conference room, drop a WiFi APN on their network and you're in business. Leave out the side exit, and now you have an APN that you hid under the table broadcasting from their conference room. You've got access to their whole network, and you can hack away at it all day and night from the parking lot.”
Too much sharing
Social engineers don’t operate in a vacuum. Their jobs are made easier by a culture of rampant oversharing on social media. Names, dates, locations, likes, dislikes, political leanings, sexual proclivities — it’s all useful stuff for someone who wants to take you or your company for a ride. Online, as in person, knowledge is power.
“Once you learn a person, you can become them in certain situations that would benefit you,” says Morawietz, who agrees that social media is exacerbating the problem.
Social engineering attacks: 8 well-known examples
- Baiting
Attackers lure potential targets by offering them some sort of reward.
- Diversion Theft
Attackers con physical and online targets into rerouting goods or confidential information.
- Honeytrap
Attackers seduce targets, via ads, email and social media, into giving up personal information or compromising sensitive work.
- Quid Pro Quo
Attackers promise goods or services in exchange for information.
- Phishing
Attackers send fraudulent — but often real-looking — emails to thousands of potential victims in hopes that a portion of them will divulge personal information, including passwords, Social Security numbers and credit card numbers.
- Pretexting
Attackers lie about who they are or create a fictional scenario to extract sensitive personal information.
- Scareware
Attackers trick targets into buying fake, malicious security software that then deploys ransomware — malware that blocks access to a system or data until a ransom is paid.
- Spear Phishing
Attackers pose as known friends or colleagues in a targeted email attack on individuals or companies, aiming to obtain sensitive personal or corporate information.
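Phishing and spear phishing both lean on sender addresses that look almost, but not quite, right. As a rough illustration (not any particular mail product's method), here is a minimal Python sketch of flagging lookalike domains against a hypothetical, hand-picked allowlist, using the standard library's string-similarity matcher:

```python
import difflib

# Hypothetical allowlist of domains this organization trusts.
TRUSTED_DOMAINS = ["paypal.com", "microsoft.com", "yahoo.com"]

def looks_suspicious(sender_domain, threshold=0.8):
    """Flag a sender domain that nearly (but not exactly) matches a trusted one."""
    domain = sender_domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: legitimate sender
        similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return True  # close-but-not-quite: classic typosquat, e.g. "paypa1.com"
    return False

print(looks_suspicious("paypa1.com"))  # lookalike of paypal.com, flagged
print(looks_suspicious("paypal.com"))  # exact match, not flagged
```

Real filters weigh many more signals (display names, reply-to mismatches, URL targets), but the core idea is the same: "almost identical" is more suspicious than "completely different."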
Social engineering is rapidly advancing, so much so that the list of techniques above — all of which can wreak serious havoc — is becoming almost passé.
Morawietz notes that having your personal information somehow compromised — whether by Facebook, Equifax or numerous other data-rich organizations — is fast becoming old hat. So are tried-and-true techniques like large-scale email phishing. Nickerson dismisses these familiar practices as having nothing to do with actual social engineering and likens them to “throwing dynamite into a lake”; invariably, a bunch of random fish (or phish) will get blasted.
Will AI create an epidemic of new, scarier attacks?
“It’s going to be a shock to a lot of people. It’s going to hit hard.”
As artificial intelligence becomes smarter, Morawietz warns, a social engineering “epidemic” is mounting — “a new layer of hacking that’s way deeper and more three dimensional.” Forget armies of lie-spewing Twitter bots and fake Facebook stories designed to foment protests or thwart voter turnout; something far more wicked this way comes. And its arrival, at least according to Morawietz, is imminent.
“Imagine AI set loose on a human that says, ‘Hi, this is Gregory from your phone company and I’ve kidnapped your husband, Jerry. Pay this ransom.’ Or, ‘Hi, Grandma, I need 500 bucks. My car broke down.’”
It’ll be so convincing, Morawietz says, that Grandma will ask only one question: “Where do I send the money?”
Where will AI obtain these voice samples for devious purposes? Look no further than Instagram, Facebook, Twitter and YouTube, all of which contain personal videos aplenty. Which means the potential for image rejiggering is equally high, as evidenced by recent half-baked clips featuring Facebook CEO Mark Zuckerberg and Speaker of the House Nancy Pelosi. Called “deepfakes” and so far legally unregulated, they’ll soon be even easier to make than they already are, tougher to spot and widely disseminated.
But this AI trickery won’t just be used to bilk people out of money or personal information. In politics, where the effect of deepfakes is already being felt, the potential exists to sway public perception and possibly elections. Before a recent House Intelligence Committee hearing on June 13, at which AI experts gathered to discuss the growing problem, committee chair Rep. Adam B. Schiff told the Washington Post, “I don’t think we’re well prepared at all. And I don’t think the public is aware of what’s coming.”
As this AI-driven mayhem ratchets up, Morawietz explains, ill-meaning hackers will use the technology to scour individual social media accounts for recognizable patterns between connected users (friends, family, colleagues) in order to launch additional targeted attacks. The resulting scripted, AI-powered daily robocalls, he says, could number in the hundreds of thousands. That’s what’s known in the corporate world as scalability.
“It’s going to be a shock to a lot of people,” he says of forthcoming developments.
And this: “It’s going to hit hard.”
Defense: What can be done?
There is no single solution, but there are ways to more consistently mitigate the ill effects of social engineering in its many forms. Harmon’s tack is to “remove the weak link,” meaning humans, by “removing their ability to screw up. It’s like having bumpers on a bowling alley. You can’t get into the gutter. You can’t not hit the goal.”
His company, he says, uses “physical fail-safes” that include the multi-factor authentication software Cisco Duo, “so even if [an account] becomes compromised, [the perpetrator] still can’t get in.”
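Why does a second factor blunt a successful phish? Cisco Duo's internals aren't described here, but many authenticator apps implement the open TOTP standard (RFC 6238), in which a login also requires a short-lived code derived from a shared secret. A minimal sketch (the secret below is a hypothetical example, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Even if a social engineer steals the password, logging in also
# requires the current code, which rotates every 30 seconds.
secret = "JBSWY3DPEHPK3PXP"  # hypothetical base32 example secret
print(totp(secret))
```

Because the code depends on a secret the phisher never sees and expires within seconds, a captured password alone stops being enough to get in.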
On a more elemental level, Harmon’s directive is this: “Change your mentality. Learn not to trust.” Whether it’s a “critical” email from a seemingly familiar source, a call from the “I.R.S.” threatening punishment for non-payment of back taxes or audio/video of a public figure saying something that doesn’t jibe with his or her past statements, learn what to look for and be on constant guard. Various companies offer fee-based anti-social engineering training online. One of the most prominent, KnowBe4, goes beyond warnings and words to feature "unlimited simulated social engineering attacks through email, phone and text." For those who’d rather not pay for scam-proofing, YouTube instructional videos abound. Are they comparable, educationally? You'll have to judge that for yourself.
“My most hated T-shirt [saying] of all time is the one that says, ‘There's No Patch for Human Stupidity,’” Nickerson says. “That is ultra-bullshit. I believe very firmly that we as humans have experienced patches throughout our lives and our existence as a species. If we can find ways to safely give people that experience [of social engineering], or to educate them in a way that makes them feel like they've had that experience, we can develop the instinct to not touch the hot stove.”
While Lifrieri is all for education, he’s certain that no amount of teaching can negate the natural human inclinations toward greed and charity that social engineers love to exploit — whether through a long-lost “uncle” who left you a small fortune when he died or that stranded and penniless “grandchild” who hits Nana up for five Benjamins. And though he remains unconvinced that “people are becoming smarter” about protecting themselves, they’re at least growing more cautious due to their “lack of understanding.”
In other words, ignorance isn't always bliss. Sometimes, though, it’s a blessing.