
How Social Engineering Has (And Hasn’t) Evolved Over Time

There’s no patch for humanity, but that’s not a bad thing

Oct 25, 2021 · 12 min read

Let’s face it. We all make mistakes. Even the most diligent, well-intentioned employee can inadvertently open the floodgates to an attack. Telecoms giant Verizon illustrated this earlier this year, when it published the 2021 edition of its Data Breach Investigations Report (DBIR).

The DBIR examined approximately 5,250 successful security breaches, gathering data about the methodologies used. It found that 35 percent of attacks incorporated a social element, and of these, the vast majority (85 percent) used phishing tactics.

In short: you can deploy all the technological measures you want, but unless you address the human element, an attacker can defeat your defenses with a simple phone call or email.

Social engineering has always intrigued me. There’s something weirdly compelling about someone circumventing the defenses of a company with nothing but a winning smile and a confident tone, and industry raconteurs like Jenny “The People Hacker” Radcliffe and Kevin Mitnick have some incredible stories to share.

But there’s something else. Many of the tactics successfully employed by social engineers can be found throughout history, used in the fields of warfare, espionage, and criminality. Shadowy state actors targeting government departments can be found using the same playbook as Classical-era Greek military tacticians.

There’s a reason for this. Humans are social animals. We have certain inherent traits. We’re often willing to trust strangers, and we help those we don’t necessarily know. We live by a set of rules, both written and unwritten. These attributes allowed us to develop societies, permanent settlements, and eventually nation states. But they can also be exploited by malicious actors seeking to further a particular goal.

Technologies and tactics change. Human nature is a constant.

This article will explore some of these historical parallels. We’ll take a journey through time that starts in the Roman city of Brundisium, circa 19 BCE, with detours through the swinging sixties, the latter years of the Vietnam War, and the early days of the Internet. We’ll put the tactics used by social engineers into context, and show why they’re so effective. And finally, we’ll learn how to defend against them.

Our Unpatchable Human Zero-Days

Before we start our trip, it’s probably a good idea to talk about the human qualities commonly targeted by social engineers.

While researching this post, I sought out as many tales of real-world social engineering as I could find. I thumbed through books by Chris Hadnagy and Kevin Mitnick. I listened to conference talks from the likes of Paul Wilson and Jenny Radcliffe. I devoured entire episodes of The Darknet Diaries.

As I went through, I noted most attacks focused on certain human attributes. These included:

  • Curiosity
    Every great scientific and technological leap started with the words: “What if?” But you don’t need to be Einstein to have curiosity. We use trial-and-error and experimentation to make sense of our environment. Social engineers frequently exploit this when harvesting credentials or deploying malware.
  • Trust
    Life’s easier (and nicer) when you assume everyone acts in good faith. And so, we take people at their word. If an unfamiliar voice calls and claims to be from your bank or phone company, you might instinctively believe them. Social engineers can — and often do — exploit this.
  • The Desire to Help
    Since we’re social animals, collaboration is inevitable. If we see someone in need of assistance, chances are we’ll offer it. When exploiting this, a social engineer might pretend to be a low-ranking employee in need of credentials, access, or documents. They’ll count on the target feeling pity for them.
  • Fear
    The “fear factor” frequently features in social engineering attacks. You’re probably familiar with phishing emails claiming your bank account has been hacked, or that your social media profile will be deleted unless you click a link. This tactic pressures the target into making hasty decisions and dropping their skepticism.
  • The Desire to Comply
    Almost all organizations, in both the public and private sector, are hierarchical. There’s a pecking order, with some people at the top and others at the bottom. We tend to listen to (and obey) those in the upper echelons. Attackers can exploit this by masquerading as high-ranking employees and issuing edicts to junior staff.

Each of the historical examples listed in this article uses these innate attributes, often to devastating effect.

From USB Sticks to Wooden Horses

The year is 2008. A threat actor, most likely working for a foreign state, has broken into the heart of the United States defense establishment. Showing remarkable skill, the attacker has deployed a data-siphoning worm onto a Central Command (CENTCOM) computer. From there, it proliferates like a fungus, spreading undetected into other computers.

William J. Lynn III, an Obama-era Deputy Secretary of Defense, later described it as "the most significant breach of U.S. military computers ever." He wasn’t wrong. The worm took almost 14 months to eradicate, and it infected both classified and unclassified machines. It’s hard to quantify the damage it inflicted on the Department of Defense.

Adding insult to injury, the damage was arguably self-inflicted. The virus, later dubbed Agent.btz, was distributed on USB sticks dropped in the parking lot of an unnamed military base in the Middle East. A CENTCOM employee picked one up and plugged it into their laptop.

The Department of Defense never identified the employee responsible for the first infection, so we’ll never know their motivations. Were they curious about the drive’s contents? Did they want to reunite the drive with its owner? Or did they just want to save $20 on a new memory stick? It’s not clear.

In the end, it doesn’t matter. The damage was done. The Department of Defense was forced to spend critical resources identifying and wiping infected machines, eliminating the worm one computer at a time. The incident also prompted the military to radically change its approach to computer security, ultimately leading to the creation of the United States Cyber Command.

Still, I can’t help but wonder what the ancient Roman poet Virgil would think of this tale. Two millennia prior, Virgil was spending his twilight years in the city of Brundisium (now Brindisi), where he worked on his magnum opus, the Aeneid.

The Aeneid describes the Siege of Troy, perhaps the most famous battle that never actually took place, or at least not in the way depicted. Pretty much everything below is apocryphal. Still, it’s a great story, so indulge me.

Here’s the TL;DR: Greece was at war with Troy. It wasn’t going well. The two sides were locked in a stalemate. Troy had retreated behind the walls of its capital city, where it could hold out indefinitely. After an exhausting decade-long siege, the Greeks were eager to end the campaign. And so one of their commanders, Odysseus, conjured a devious ruse.

Greece would pretend to surrender and withdraw its forces. By way of apology, the Greeks would leave a large wooden horse at the gates of the city, which the Trojans interpreted as a tribute to their greatness.

We all know what happened next. The Trojans hauled the horse into the city, oblivious to the shock force of Greek warriors lurking inside. As night fell, the warriors crept out and unlocked the city gates. The rest of the Greek army surged forward, flooding in and bringing the war to its conclusion in a matter of hours.

Technologically, USB drives and wooden horses couldn’t be more different. But in both of the examples cited, they were used to inflict devastating losses on the target. More importantly, they were only effective because the victim implicitly trusted they wouldn’t be used to cause harm. Trust can be a dangerous thing.

Helpful Humans

In the 1990s, Kevin Mitnick was the ultimate cybercrime bogeyman. He earned his notoriety by breaking into some of the most powerful technology and telecommunications companies in the world. His misadventures were front-page news and even inspired two feature-length movies (one a documentary, the other a drama).

Now, Mitnick is a legitimate cybersecurity professional and author. Through talks, articles, and books, he warns about the risks posed by social engineering attacks, giving examples from his own criminal past.

Many of his tales describe interactions with relatively low-level employees at the target company. He would build a rapport with the worker and gather information, which he would later use to achieve his larger goals.

In one example, detailed in his book Ghost in the Wires, Mitnick sought to access the phone records of a friend he suspected of working for the FBI. He repeatedly called the phone provider, each time building a rapport with the employee and teasing out details relating to the account. Once he gathered enough information, he convinced the provider to fax the target’s previous invoice, claiming he needed it to rebuild his phone book.

Mitnick was successful because the employees didn’t question whether he was the real account holder. They weren’t motivated by data security, but by customer happiness. They simply weren’t suspicious. He re-used these tactics again and again until the FBI brought his criminal career to an abrupt end in 1995.

Twenty-five years earlier, a group of anti-Vietnam War activists used a similar approach when breaking into a Delaware draft office. The group had already penetrated several military offices as they sought to frustrate the recruitment of new soldiers into the highly unpopular war. They were seasoned burglars.

This Delaware office, however, proved a challenge. It was protected by an advanced, heavy duty door. Their usual tools — crowbars and lock picks — were useless here. And so, they had to think outside the box.

During the daytime, the office’s door was unlocked. A member of the group crept in and left a note behind. It read: “Please don’t lock this door tonight.”

It was simple. Obvious, even. And it worked. The group later returned under the cover of darkness. They approached the door and twisted its handle. It opened with a satisfying click. They gathered the documents they came for and left. In 2014, after the statute of limitations expired, the group told their tale to veteran Washington Post journalist Betty Medsger.

The draft office was doubtless staffed by seasoned military servicemen. They understood risk, and they knew of the widespread opposition to the Vietnam War. And yet this experience was overridden by their desire to be helpful and to comply with what they assumed was the request of a colleague or superior. They trusted the note’s provenance, and didn’t stop to consider that it might have been left by an adversary.

Why Social Engineering Works

The term “social engineering” sounds deceptively modern. In reality, it could be used to describe the kind of subterfuge used by criminals and military leaders throughout history.

You can draw parallels between the approaches detailed in Chris Hadnagy’s book Human Hacking, which focuses on their application in legitimate offensive security procedures, and those used by convicted fraudster Frank Abagnale, as described in his book Catch Me If You Can. The basic concepts are the same, with the only real point of differentiation being how they’re used.

Complicating matters, it’s now possible to perform social engineering attacks at scale. Through phishing (as well as its evil siblings, smishing, or SMS phishing, and vishing, or voice phishing), a bad actor can reach millions of people in a matter of moments.

In 2020, the FBI received 231,000 reports of phishing attacks, roughly double the previous year’s figure. The real figure is almost certainly far higher, as most attempts go unreported to the bureau. Phishing affects non-profit organizations, businesses, and individuals alike, albeit to varying degrees, and the costs can run into the millions.

Earlier this year, US authorities sentenced the ringleader of a Business Email Compromise (BEC) scam to ten years in prison. The man, Obinwanne Okeke, stole $11m from companies in the construction industry by issuing fake invoices with the funds directed to foreign bank accounts.

Looking further back, in May 2000, Filipino malware developer Onel de Guzman caused an estimated $5.5bn to $8.7bn in damages after he released a self-propagating worm masquerading as a love letter. The ILOVEYOU worm sought to steal dial-up internet credentials, and it inadvertently took down the mail services of the UK Parliament, Microsoft, and the US Department of Defense.

Here’s the good news: You can take steps to mitigate the risk social engineering poses to your organization.

Technical measures recommended by the US Computer Emergency Readiness Team (US-CERT) include the use of multi-factor authentication (MFA), email filtering, and firewalls. It also helps to stay abreast of recent developments; the Anti-Phishing Working Group publishes helpful information about new threat actors and trends.
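
To make the filtering idea a little more concrete, here is a minimal sketch (in Python) of the kind of heuristic scoring an email filter might apply. Everything in it is illustrative: the trusted domains, the urgency phrases, and the Email class are hypothetical, and a real deployment would rely on a proper secure email gateway alongside the controls above.

```python
from dataclasses import dataclass

# Hypothetical allow-list of domains your organization routinely corresponds with.
TRUSTED_DOMAINS = {"example.com", "partner.example.org"}

# Urgency phrases that often appear in lures exploiting the "fear factor".
URGENT_PHRASES = ("account suspended", "verify immediately", "password expired")


@dataclass
class Email:
    sender: str
    subject: str
    body: str


def phishing_score(msg: Email) -> int:
    """Return a rough suspicion score for a message; higher means more suspicious."""
    score = 0
    domain = msg.sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1  # unfamiliar sender domain
    text = f"{msg.subject} {msg.body}".lower()
    score += sum(1 for phrase in URGENT_PHRASES if phrase in text)
    if "http://" in text:
        score += 1  # plain-HTTP links are a weak but common tell
    return score


if __name__ == "__main__":
    suspicious = Email(
        sender="security@exarnple.com",  # note the look-alike domain
        subject="Account suspended - verify immediately",
        body="Visit http://exarnple.com/login to restore access.",
    )
    print(phishing_score(suspicious))  # 4: odd domain, two urgent phrases, HTTP link
```

A real filter would score far more signals (attachment types, SPF/DKIM results, link reputation), but the principle is the same: accumulate weak indicators and quarantine anything above a threshold.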

Training also plays an important role. It’s possible your colleagues are unaware that social engineering even exists. Teach them about the tactics attackers use and the importance of verifying the identities and details of the people they interact with, along with your organization’s phishing reporting procedure. If you don’t have one, create one, even if it’s as simple as a dedicated inbox where employees can forward suspicious messages (a rough sketch of that idea follows below).
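
If it helps to picture what that reporting inbox might look like on the back end, here is a minimal, illustrative sketch that polls a dedicated mailbox over IMAP and lists newly reported messages for the security team to triage. The hostname, credentials, and mailbox name are placeholders, not a prescribed setup.

```python
import email
import imaplib

IMAP_HOST = "imap.example.com"              # hypothetical mail server
REPORT_MAILBOX = "INBOX"                    # the dedicated reporting inbox
USERNAME = "phishing-reports@example.com"   # placeholder service account
PASSWORD = "app-specific-password"          # use a secrets manager in practice


def fetch_new_reports() -> list[str]:
    """Return the subject lines of unread messages in the reporting inbox."""
    subjects = []
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USERNAME, PASSWORD)
        imap.select(REPORT_MAILBOX)
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            subjects.append(msg.get("Subject", "(no subject)"))
    return subjects


if __name__ == "__main__":
    for subject in fetch_new_reports():
        print(f"New report: {subject}")
```

Even a basic watcher like this makes the reporting loop visible: employees forward anything suspicious, and someone on the security side actually sees and acts on it.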

Finally, creating a culture of transparency and collaboration is important. Your colleagues should feel able to report security issues without fear or judgment.

"Our innate human traits make us vulnerable to social engineering attacks, and always will. The best and most scalable defense is to create a strong security culture, both within organizations and across society at large. When we empower people to detect and respond to these attacks, we build a more secure Internet for all," said Annybell Villarroel, Security Awareness and Culture Manager. To find out how Auth0 can meet your organization’s identity needs, click here.