Social engineering is a concept that’s been around for millennia. But it’s one that’s evolved and developed dramatically over the course of time— especially since the practice was first given a formal name and digital notoriety in the last two decades.
In this comprehensive guide, we’re taking a look at how social engineering originated and transformed throughout its lifespan.
From defining the concept’s early beginnings to outlining the various types and techniques bad actors use to manipulate their targets in the modern age, we’ll not only look to the past for powerful lessons, but we’ll also use what we know about social engineering to forecast future threat predictions and help you stay safe today. It all begins with an explanation of what social engineers do, and what they’re after...
Want to take this resource with you? Download the ebook below.
Chapter 1: What is Social Engineering?
Chapter 2: The Early History of Social Engineering
Chapter 3: Types of Social Engineering
Chapter 4: Common Social Engineering Tactics
Chapter 5: Things You & Your Employees May Not Know About Social Engineering
Chapter 6: The Biggest Social Engineering Attacks in History
Have you ever received a suspicious email from what appears to be your boss, asking you to transfer money? What about a voice memo telling you your car warranty is about to expire and you need to renew it— right now— before it’s too late?
Both are prime examples of social engineering, wherein a bad actor attempts to manipulate or deceive a user. But to what avail? Why all the trickery?
The cybercriminal’s goal is often to convince the target to disclose private information or procure money— or to perform an action that could, in turn, give the bad actor access to said info or funds.
In the instances above, the boss impersonator’s motive is clear: cold, hard cash. The warranty scammer, meanwhile, is after private information, like your name, address, and Social Security number.
But when you’re in the moment, a social engineer’s motives are not always so obvious… and oftentimes, they’re multi-faceted— because the bad actor is often after more than one thing.
Really, most social engineers are after any and everything they can get their hands on— knowing that the more leverage they have, the more to gain.
You just learned that the key purpose of social engineering is to gain access to private information or money. But unlike a technical hack, such as a SQL injection attack that pulls data directly from a database, social engineering exploits people rather than code.
Don’t be mistaken; a social engineer is usually still well-versed in cyber technology and coding. They could slowly peck away at digital defenses, searching for weaknesses to hack a company. But for major brands that invest heavily in cybersecurity, this is a tedious and often fruitless endeavor.
It’s often easier to trick a person on the inside than it is to crack air-tight cybersecurity measures. Once in, the hacker knows how to acquire the goods. They just need to find an initial foothold, even the smallest crack to sneak through the door.
Instead of employing brute force against cybersecurity barriers, social engineers are masters of the art of deception.
These cunning engineers use the principles of human psychology to build trust with a user— often someone directly associated with their targeted organization— knowing that the person may be their “in.”
It all starts with selecting a brand and choosing human target(s). From there, a social engineer typically creates a believable pretext, specific to the victims he’s after— inventing a false story or creating a seemingly plausible situation in hopes of obtaining information to breach a system or secure money.
What makes social engineering different from a typical con or fraud is that these attacks usually involve a series of highly-calculated steps— methodically planned to slowly reach an end goal— using principles of human psychology to manipulate the target.
A social engineer typically begins by scouring the Internet for open-source intelligence (OSINT), digging through publicly-available information to select specific users to manipulate.
Let’s say the social engineer begins by searching LinkedIn for everyone who works for a particular organization. For the sake of example, we’ll call the company Tea Castle, an online retailer of loose leaf tea.
On LinkedIn, the engineer can see each employee’s job title, who they work under, how long they’ve been an employee, accounts they manage, etc. From there, the engineer may narrow down by department, choosing, for example, to target marketing personnel instead of the tech-savvy IT team.
The engineer then looks up the social media profiles of the marketing team individuals, discovering core knowledge of their lifestyles and personal info.
He sees that the VP of Marketing posted a public picture on Instagram of herself working remotely, saying she’s thrilled to work from her favorite café and enjoy a local tea. The social engineer notes this, wondering what remote security vulnerabilities he may be able to exploit, knowing she’ll likely have high-permission access in the organization. He jots down that she uses a MacBook, the exact shop she’s working from, and the approximate area where she lives based on her proximity to the café. He keeps scrolling and notices she likes to work from cafés on Fridays.
All the information the social engineer gathers contains crucial plot pieces, helping to weave together a deceitful narrative. The engineer uses this knowledge of the target to strategically plan deceitful scenarios to use against the VP.
Through more OSINT hunting, the social engineer finds a tea shop out of state called Steepers. He creates a fake email address, mimicking the email domain used on Steepers’ website. Posing as the owner of Steepers, the social engineer writes an email to the VP of Tea Castle, introducing himself as Steepers’ CEO and asking if she’d be interested in sampling their hand-blended teas. He even goes a step further by singling out Steepers’ Earl Grey as his personal favorite (knowing from the VP’s posts that she also likes this type of tea).
The VP replies that she’d love to sample their best-selling teas, and the social engineer has his in. He pulls Steepers’ tea list from their site and attaches malware to the PDF.
The bad actor waits until Friday at 3 p.m. to reply with his second phishing email, knowing that the VP is probably out enjoying tea and more likely to be excited about trying more. Then, he sends her the PDF, hoping that she’s using public WiFi at the shop and not on a secure VPN or home network.
Once clicked, the attachment sneakily injects the VP’s computer with malware, giving the bad actor a doorway into her corporate system. He may continue the conversation with the VP for a while, especially if he needs more information to get past additional barriers in her system and has to stage a follow-up attack.
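A common thread in attacks like this is a lookalike sender domain. As a purely illustrative sketch (the domain, threshold, and function name are hypothetical, not part of the story), a defender-side filter might flag senders whose domain closely resembles, but does not exactly match, a known contact’s domain:

```python
from difflib import SequenceMatcher

# Hypothetical known-good domain for illustration only.
TRUSTED_DOMAINS = {"steepers.com"}

def looks_like_spoof(sender: str, threshold: float = 0.8) -> bool:
    """Flag sender domains that closely resemble, but don't match, a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: not a lookalike
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_spoof("ceo@steepers.com"))   # False: legitimate domain
print(looks_like_spoof("ceo@steeperss.com"))  # True: one character off
```

Real mail gateways pair fuzzy matching like this with authentication checks such as SPF, DKIM, and DMARC; string similarity alone produces false positives.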
What’s important to note is that some social engineers invest weeks or months into nurturing a slow-building relationship with their victims, posing as a helpful, friendly source before laying their big attack.
This is just one short narrative of how a social engineering exploit could unfold. There are many different types and methods of social engineering, which we'll discuss at length later on this page.
Now that you know a little about what social engineering is, let’s explore how it all began...
While computer technology has only advanced enough to spur the idea of security-based social engineering for the past few decades, people have been using the principles of human psychology to manipulate others for hundreds of years.
Incredibly, the earliest accounts of social engineering-style strategic hoodwinking trace back to the Trojan War, traditionally dated to 1184 B.C.
Are you familiar with the story of the Trojan Horse, first mentioned in Homer’s epic poem The Odyssey?
The year was 1184 B.C. The Trojans and Greeks were immersed in a long, seemingly never-ending war.
After a 10-year siege, the Greeks realized they had to get crafty to defeat the Trojans. They constructed a giant wooden horse and hid some of their army inside it. The rest of the military sailed away, appearing defeated. The Trojans fell for the trick, dragging the wooden statue past their protective barriers as a trophy for their long-overdue victory.
After the sun went down and the Trojans went to bed, the Greek soldiers waiting inside of the horse snuck out and unlocked the gates around their city— sneaking in the rest of their armed forces who sailed back under the cover of darkness. The Greeks then used the element of surprise to destroy the city of Troy from the inside, formally ending the war.
And therein lies the first recorded instance of social engineering.
While these acts of deceit have been alive and well for nearly all of civilized history, it wasn’t until millennia later that someone put a name to this kind of manipulation: something more methodical and planned than a simple ruse, with calculated steps carefully orchestrated to breach a barrier.
Hacker Kevin Mitnick helped popularize the concept of “social engineering” in the cybersecurity world in the 1990s: bad actors engineering social situations to trick a person into taking an action.
Here’s an example of how Kevin exploited users in the ’90s:
In the ’90s, Kevin Mitnick was the most wanted cybercriminal in the country. In 1992, he became a fugitive when he violated probation from previous cybercrimes by monitoring the voicemails of the authorities investigating him.
In hopes of being able to communicate privately and avoid arrest, Kevin set out on a quest to manipulate the technology inside the once high-tech MicroTAC Ultra Lite cell phone by Motorola. In order to fly under the radar and chat without being traced, Kevin decided to go after the source code in the firmware of the phone.
He began his social engineering siege by calling directory assistance to get Motorola’s phone number (a common practice before the popularity of Google). Kevin began small by asking to talk to the Project Manager of the MicroTAC Ultra Lite. A receptionist connected him to others, who transferred him many times until he finally got in touch with a VP overseeing Motorola’s mobile division.
During Kevin’s eight transfers prior to connecting with the VP, he learned a very interesting fact: Motorola had a research center in Arlington Heights. Posing as an employee from the Arlington Heights branch, Kevin asked again to connect with the Project Manager for the Ultra Lite. He socially engineered this pretext to gain trust and get an in with the VP, a key tactic these engineers use.
The VP gave Kevin the extension of the Project Manager, Pam, only for Kevin to reach a message saying she was away on vacation. Her voicemail left a contact number for another person to reach in her absence. Kevin called the contact, Aleesha, and asked if Pam had left on vacation yet, creating the illusion that he and Pam had spoken before and making his story all the more believable. He then told Aleesha that Pam had promised to send him the Ultra Lite source code, but said that if she got caught up before leaving, Aleesha could send it.
He then instructed her on how to zip the files, since there were hundreds to package. But when he tried instructing her on how to transfer the zip to his anonymous FTP, the connection failed, and Aleesha asked him to hold while she went to grab her security manager to help.
It was here that Kevin panicked, realizing the jig could be up if security personnel got involved and suspected foul play. But to his surprise, she returned with the security manager’s personal username and password to the proxy server so she could upload the file.
This clever narrative helped Kevin complete his mission and walk away with the source code. Although he didn’t end up doing anything with the code, this type of highly-sensitive property information could have easily been sold for high profit, or been used as blackmail against Motorola for a generous payout.
While there’s no arguing that the Trojan Horse story and Kevin’s Motorola exploit are powerful examples of manipulation, modern social engineering ploys layer direct relationship building and clever storytelling on top of digital technology.
Bad actors attempt to compromise vast servers and networks of online data, with entirely new threat landscapes and vectors like email, WiFi, routers, injected USBs, SMS, etc.
But because social engineering attacks are often conducted across complex, interconnected devices, breaches are harder to trace than many incidents from the ’90s and early 2000s.
Plus, even when an attack is launched, the breach is often stealthy, with the target sometimes having no idea they’ve been compromised for months. In fact, by one widely cited estimate, the median length of time an adversary sits inside a network undetected is 146 days. During those months, a hacker can dig deeper into a system, gradually uncovering more private data for financial gain.
Oftentimes, it isn’t until the bad actor takes the quest too far and is accidentally discovered through suspicious activity or boldly reveals themselves that the attack is even detected.
Next, let’s take a look at some of the various types of social engineering tactics used by the bad actors of the modern age...
Social engineers have more than a few tricks up their sleeves for deceiving unsuspecting targets:
Fishermen cast a line with a juicy piece of bait to catch big fish. Cyber criminals throw out digital bait of their own when social engineering, a practice often referred to as “phishing.”
While at its root, phishing attempts share a core purpose of tricking a target into performing an action or revealing information, the practice comes in many forms.
Amateur hackers send out mass correspondence, casting a wide net and hoping to trick a large pool of recipients. But more often than not, these generic messages are too impersonal to fool anyone.
More seasoned cybercriminals bait one phish at a time. This is an attacker who researches and obtains deep knowledge of his victim, then crafts a unique narrative hyper-relevant to that individual. Like a spear fisherman hunting a single fish, spear phishers often bait one particular target.
Let’s reveal some ways spear phishers strike:
Malicious emails are any messages sent with harmful intent. Whether it’s a seemingly normal message with an infected attachment or one that tricks readers into clicking a spoofed URL that captures their login credentials, phishers get crafty in your inbox.
Sometimes bad actors use the influence of a friendly voice to their advantage. Voice phishing, or vishing, is any form of phishing that takes place over the phone. Think voicemail messages asking you to call back and take immediate action; these often leverage fear to get callbacks.
With the growing use of cell phones, bad actors now message your direct number to compromise you, a tactic known as smishing. This may be a text telling you you’re late on a payment and to pay via the attached link to avoid a late fee, where the hacker captures your login or banking details. Or it’s a fake number posing as the government, sending you COVID-19 resources with a malware-laden link.
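A concrete habit that blunts several of the phishing forms above is to check the hostname a link actually points at, rather than trusting its display text. A minimal sketch using Python’s standard library (the URL is a made-up example):

```python
from urllib.parse import urlparse

def real_host(url: str) -> str:
    """Return the hostname a link actually resolves its request to."""
    return urlparse(url).hostname or ""

# The link *starts* with "paypal.com", but the real host is something else entirely.
link = "https://paypal.com.account-verify.example.net/login"
print(real_host(link))  # prints: paypal.com.account-verify.example.net
```

The takeaway: only the rightmost registered domain in the hostname matters, which is exactly what attackers bury behind familiar-looking prefixes.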
While phishing is the process of attempting to acquire sensitive information, pretexting is the fake story the bad actor weaves during the phish. It’s the narrative they invent based on their researched knowledge of you to fool you into believing its legitimacy.
Common pretexts involve impersonating someone you know or another trusted source, with a clearly explained reason why they’re asking you for information or to take an action.
Baiting occurs when a cyber criminal dangles something tempting in front of you, hoping you’ll take action. This could be an email with a provocative porn video clip (sometimes called a honeytrap) or a document labeled “Confidential.” Sometimes the bad actor won’t even ask you to click it, hoping that your own curiosity will take over.
Just like a driver hugging the back of your car on the road, some social engineers trail closely behind an employee entering a building to gain access to a restricted area accessible only by fob or code. These bad actors usually have a clever pretext, dressing as a delivery person carrying boxes or arriving as a friendly face with a dozen donuts for the staff, creating a false sense of trust so the target holds the door open for them.
Quid Pro Quo is Latin for “something for something” and is a social engineering technique wherein the cyber criminal offers a benefit to the target in exchange for information or access.
This could be someone posing as a member of your IT team saying they need your computer password to make a necessary system update or the promise of a free music download if you subscribe to a fake streaming service. In the end, the engineer promises to provide a service or item in exchange for you providing something.
Social engineers are such savvy information swindlers because they understand the psychology of influence.
According to behavioral psychologist Robert Cialdini, there are seven key principles people use to influence others. Understanding these principles can help you better educate your employees on some common social engineering tactics used by bad actors.
Social engineers understand that it’s human nature to give back when we receive. Most of us feel obligated to repay someone for a favor, gift, invite, or kind gesture, which is why bad actors often bait their target with a little offer.
Let’s say your employees get an email promising a $10 Amazon gift card for anonymously filling out a survey from the IT department about how well they’re handling digital security. Unfortunately, it’s actually from a spoofed sender masquerading as a trusted source. Yet because it looks like an email from your own IT team and promises a reward (you scratch IT’s back by filling this out, IT scratches yours with a gift card), your employees may hand pertinent details about your security to a hacker. Employees who fill out the survey may even receive a legitimate gift card to avoid arousing suspicion, a small price for the cybercriminals to pay for a wealth of information aiding a mass-scale hack.
Social engineers use reciprocation not as a kind gesture, but as a compliance tactic for getting private data.
We want what we can’t have, especially when we perceive it as rare or hard to come by. That’s why those emails we receive saying, “Order now! Only 10 left” often make us impulse-buy a product we don’t really need.
Social engineers often capitalize on scarcity to influence targets, creating a clear divide between “you can have this now” and “you can never have it again.” Imagine a bad actor emailing an employee a special offer for a new tool your team could really use. The cybercriminal found a public forum post where a staff member asked about the best SEO plugin, then posed as a rep for the plugin’s company, offering a free trial of the Chrome extension that’s only good until the end of the day. The fake webpage bundles malware with the real extension, so your employee never realizes they were infected.
Bad actors use scarcity to create a sense of urgency, so you are less inclined to think before taking an action or sharing information.
Your employees are taught to respect the leadership team and to understand their place on the corporate ladder. While some work environments blur these lines more than others, the reality is that teams often have a structural hierarchy, wherein authority figures manage lower-tiered staff.
Cybercriminals often pose as managers or members of the C-suite to trick lower-level employees into conceding to a request. The infamous “wire transfer” social engineering exploits are a prime example of authority at play. A social engineer may know a manager is out-of-office and create a spoofed email address to ask a staff member to route money from one location to another, since the boss is very busy or on vacation. Because an authority figure demanded the action, some employees may do it without thinking, fearing reprimand from management for hesitating.
Social engineers imitate a person of importance quite often, using a false sense of authority and urgency to get their way.
We’re more willing to help someone we find likable than someone who exhibits characteristics or traits we dislike. Face it, we’re attracted to people who are charming. According to Kevin Mitnick in his book The Art of Deception, the main tools a social engineer needs are, “sounding friendly, using some corporate lingo, and… throwing in a little verbal eyelash-batting.”
A prime example of the liking principle in action would be a charismatic voice phisher. The social engineer rings you up, claiming to be an authoritative source— let’s say a vendor— and cracks a few jokes, maybe even compliments you or your company on something. Just from a two-minute chat with this stranger, you like the guy. He’s got spunk. But he’s also got his finger on the trigger, waiting to use his charm to his advantage.
Social engineers are often so successful at their cons because they work very hard to get you to like them, knowing you’ll be more willing to cooperate with their requests if you find them appealing.
People want to see themselves as consistent with their word. Social engineers often leverage this need for self-preservation by building a slow, steady rapport with a target and requesting small commitments to achieve their strategic goals.
A cybercriminal may email you a friendly correspondence, pretending to be a happy customer who wants to thank you for how incredible your product is. A few weeks go by, and the bad actor commits to their ruse, emailing you again asking if you’d be interested in some lifestyle photos of your product set up in his office to share on social media. You concede and he sends over what he promised to establish trust. You thank him, and he asks you to promise to keep him in mind for influencer marketing help in the future. You loved the content, and you agree without hesitation. So the next time he emails you a few images, you eagerly open the attachment, only to download malware.
Social engineers commit to long-term, slow-nurture engagements for a big payoff in the end. They may also lock you into upholding your word on a small promise, knowing you’ll want to preserve your self-image by keeping it, and that they can use your need for verbal consistency to manipulate you in the future.
Cybercriminals know that people rely on the actions and opinions of others to determine their own. That’s because we innately trust that if others are doing or saying one thing, it must be a safe or wise choice. Unfortunately, that’s not always the case, which is why moms ask, “if all your friends jumped off a bridge, would you?”
Social engineers create crafty pretenses using “proof” from what others have done to convince you to do the same. For instance, a cybercriminal might call and ask an employee for sensitive information, like a daily changing code, and when they resist answering, they’ll say something like, “I don’t understand, Linda shared this with me last week.”
Social engineers often couple this strategy with the “authority” tactic when their pretext begins to backfire, warning lower-tiered employees that a manager will be pulled into the conversation if they keep resisting the request.
Bad actors know that in moments of uncertainty, we tend to turn to lessons from others for guidance on next steps and will often bring up other employees or fictional sources to validate trust and manipulate you into complying with a sketchy request.
We all want to feel as if others can relate and empathize with us in times of need. Bad actors will use false pretenses to make themselves as relatable as possible, creating a sense of unity amongst them and their target to build trust before deceiving.
Social engineers use their OSINT research to gather inside knowledge of your organization, like staff names and clock-in times. A cybercriminal may know from previous rapport with an employee, for instance, that your staff hates having to use their fobs every time they want to enter the building. After manipulating the surveillance camera, the engineer tailgates one of your workers, pretending to be a new employee who forgot her fob. She banters about how annoying it is to always remember your fob and introduces herself as the new receptionist for a neighboring department. The social engineer mentions the manager by name and has a relatable pretense for rushing to get to work on time, much like your almost-late employee is now. The real employee feels unified in their struggle and lets her through the door with a smile and a laugh, saying, “don’t be late!” as he walks the other way, granting a stranger full access to the building.
Malicious manipulators capitalize on shared struggles or experiences to make a relatable connection with their target for a quick “in.”
If social engineering is a relatively new concept to you or your organization, there are a few key principles to keep in mind when implementing human factor security policies:
Social engineers don’t go after one department or individual in your organization exclusively, making it difficult to know who could become the next target.
While you may think these cybercriminals would go after lower-tiered employees, it all depends on the information the bad actor is after and who they think can lead them to it. Sometimes entry-level employees give social engineers some base information that they use to create more strategic pretenses with those higher up the corporate ladder, with high-level permissions and data access. Other times, they do their own research and go right after the big shots, spear phishing management or C-suite executives. Social engineers can even find a connection easy to compromise, like a relationship with a vendor with poor security measures, gaining access through a “side door” into your digital database. Even IT managers can fall victim to a convincing social engineer if an attacker has the right context and craft.
The point is: no one is 100% safe from being the victim of a social engineering attempt. This is an important point to drill home to your entire team and anyone you work with.
Social engineers don’t simply pick up the phone and start making random calls, asking random questions. In fact, there’s little left up to chance when it comes to a well-devised and orchestrated social engineering exploit. The bad actors do in-depth, extensive research and develop a strategic plan with many steps before beginning their conquest.
By the time a social engineer makes their first point of contact, they have already formulated the tactics and methods they’ll use to compromise your business and rehearsed and perfected their pretense. They’ll know your company’s inside lingo, specific details about your teams and staff members, your office location, and other bits of information that could be useful in painting a convincing portrait of authority and trust with unsuspecting targets.
Many mistake social engineers for smooth talkers who fly by the seat of their pants, but their process is a lot more methodical and calculated than that. Some elite software developers will spend weeks developing a fake program to capture user credentials or steal information, and another few weeks devising how they’ll drop the payload without suspicion, knowing one wrong move would mean “burning” (stopping any future exploits on) the target. Remember, these are people who often have deep knowledge of hacking, making them a dangerous combination of charismatic, cunning, and tech-savvy.
In most hacking scenarios, the bad actor does not want to be detected— throughout the entire course of the exploit. That’s because the longer they can sit within your system unnoticed, the more likely it is they can gain deeper access to exploitable information.
With this in mind, cybercriminals often launch an attack in stealth, compromising a device or an entire network quietly, with your organization none the wiser. You could click an infected link in a phishing email, or download malware from a fake update triggered by a lookalike WiFi network, and never know that someone can now access your corporate network. You could even unintentionally contract spyware onto your device and have a bad actor watching and recording your actions: capturing your keystrokes as you type in usernames and passwords, or tapping into your device’s audio or video to hear your conversations or view your webcam. Never assume that you would know if you’re compromised, because social engineers work very hard to stay hidden long after the initial implant.
There are many things we can learn from social engineering exploits from the past, especially some of the most infamous attacks over the past decade or so.
Here are some noteworthy social engineering hacks, in hopes that their mistakes may be your lesson.
A subsidiary of Toyota Boshoku Corporation was fooled by a crafty social engineering scheme in 2019, one that cost the brand greatly. This particular business email compromise (BEC) scam was actually quite simple: a hacker targeted the inboxes of the car corp’s finance and accounting department, impersonating a business partner of the Toyota subsidiary and requesting payment to a specific account.
While $37 million might sound like an outrageous request, large-scale businesses like Toyota see requests of this nature often, and an unsuspecting worker transferred the funds to the social engineers’ account.
Objectively speaking, this is a plausible mistake. But what makes this hack so cringe-worthy is that it was the third acknowledged attack on Toyota that year alone, according to the CEO of their security company. The first was in Australia in February 2019, then again in Japan that March, before the September attack on Toyota Boshoku’s European headquarters in Zaventem, Belgium.
Toyota had been subject to multiple cybersecurity attacks in early 2019, so for an employee— later that same year— to approve a financial request without verifying the need for the transaction and the identity of the recipient is, no doubt, unacceptable.
This is a classic case of “fool me once, shame on you. Fool me twice, shame on me” on Toyota’s part for not prioritizing extensive cybersecurity awareness training after a series of targeted attacks. If you’ve noted suspicious cybersecurity attacks on your business within the last two years, we highly recommend educating your staff on what previously happened as well as possible threat scenarios to look out for, while simultaneously improving your current defense gaps.
Barbara Corcoran, of the ABC show Shark Tank, lost a large chunk of change in February 2020 to a savvy social engineer. The hacker took to the inbox of Corcoran’s bookkeeper, spoofing the email address of the TV star’s assistant and requesting that $388,000 be wired to an Asian bank, with an attached invoice for real estate renovations.
Because the email looked like a direct message from the assistant and the hacker responded so professionally and accurately in their email correspondence to confirm the request— a social engineer who clearly did their research into Corcoran’s business affairs— the bookkeeper was fooled.
This spear phishing hack could have been prevented had Corcoran’s bookkeeper directly called or contacted the assistant via any other means than email to confirm the nature of the money transfer.
Always, always, always question a request you receive via email, as these messages can be easily faked. Social engineers are especially good at mimicking email addresses, creating believable assets like invoices or spoofed URLs, and weaving together a convincing story to make the correspondence seem legitimate.
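Beyond out-of-band confirmation, one cheap automated signal for this kind of spoof is a mismatch between an email’s From and Reply-To headers, which BEC attackers sometimes use to route replies back to themselves. A rough sketch with Python’s standard email module (the addresses and function name are hypothetical):

```python
from email import message_from_string
from email.utils import parseaddr

def reply_routes_elsewhere(raw: str) -> bool:
    """Flag messages whose Reply-To domain differs from the From domain."""
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rsplit("@", 1)[-1].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rsplit("@", 1)[-1].lower()
    # No Reply-To header at all is normal; a *different* domain is the red flag.
    return bool(reply_domain) and reply_domain != from_domain

raw = (
    "From: Assistant <assistant@example.com>\n"
    "Reply-To: assistant@examp1e.net\n"
    "Subject: Wire transfer\n\n"
    "Please wire the funds today."
)
print(reply_routes_elsewhere(raw))  # True: replies go to a lookalike domain
```

A flag like this is a prompt for a phone call, not proof of fraud; legitimate mail (newsletters, ticketing systems) also sets a different Reply-To.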
The RSA’s entire corporate system was compromised as the result of a phishing scam gone “right” (by the hackers, at least). With just two emails sent to four workers— only one of which was clicked and the attachment opened— a hacker’s malicious file named "2011 Recruitment plan.xls” did the trick, according to Wired.
Once downloaded, the spreadsheet displayed nothing but a simple "X" in one cell— the only visible sign that something was hidden inside the file. In reality, the infected spreadsheet housed an exploit that capitalized on a vulnerability in Adobe Flash. Upon opening, a script installed a "backdoor" called Poison Ivy on the user's desktop, giving the bad actor a foothold into the corporate network.
From there, the social engineer controlled the computer remotely, stealing account passwords that granted him access to other RSA systems and private data. He even was able to transfer the sensitive files to another machine, and eventually directly to himself.
The hacker’s first attack was so successful because he exploited a vulnerability in Adobe’s Flash software, allowing malware through once the target clicked an infected email attachment. This stresses the importance of routine program updates, which provide patches for newly uncovered vulnerabilities and consistently strengthens your defenses as technology evolves.
The 2011 phishing scam on RSA also proves that social engineers think very strategically, planning multi-phase attacks to achieve their highest goals.
While the social engineering exploits mentioned above are no doubt notorious, we rounded up the biggest and the best in a separate blog.
In our post, “The Top 5 Most Famous Social Engineering Attacks of the Last Decade,” we’ll dig into the story and lessons behind the:
Social engineering attempts will come your way, no matter how strong your security measures. It’s how you react to the attempts that matters— helping to prevent costly breaches.
Your employees aren’t IT personnel who understand the complexities of a technical cyberattack. When you start using intimidating lingo and tech-talk, you’ll lose them.
Instead of emailing over a boring list of new security regulations, shoot a video of someone from IT explaining the new policies and possible threats they may face or gather everyone for a live webinar.
While the technical logistics of a hack can be confusing to the everyday person, your team can, however, learn from stories. Walk your staff through the narrative of some of the most notorious social engineering attacks above, and include real examples of some of the hacking techniques we shared on this page.
For example, your team may not need to know exactly what happens when a payload is dropped, but they should understand how the action could occur due to negligence on their part and the consequences of the attack.
With COVID-19, more businesses than ever before have partial workforces logging time from home. Here are a few hacking techniques that all CISOs should educate their teams about to understand and safeguard against the ever-evolving remote threat landscape.
Social engineers use a number of tricky ways to find an “in.” With ever-evolving technology, these savvy swindlers use new and creative methods for hacking into your systems. The only sure-fire way to stay up-to-date with recent threats is to routinely remain informed on the evolution of social engineering.
While it may seem mundane to your employees, a yearly security awareness class and testing can help to ensure they’re staying sharp and diligent. By giving your team access to a full training library, they can learn at their own pace through educational videos, including live threat demonstrations.
Explore our security awareness training resources here.
Looking for information you can send your employees to keep them on guard? These bullets are easy to send in a single email and to recirculate periodically as reminders of best practices.
Find more cybersecurity tips here.
No matter how strong your organization’s physical defenses, you have to account for more than just technology barriers. If you’ve learned anything from this historic look at social engineering, it should be that employees will always be your brand’s number one security weakness.
And while you can’t stop social engineers from trying, you can educate your team on the threat landscape as your best defense for preventing a social engineering hack.
Curious to see how your team would stand up against phishing attacks without any formal training? Invest in social engineering strength testing with some of the best in the industry, Kevin Mitnick and his Global Ghost Team.
Kevin offers three excellent presentations, two of which are based on his best-selling books. His presentations are akin to technology magic shows that educate and inform while keeping people on the edge of their seats. He offers expert commentary on issues related to information security and increases security awareness.
© Copyright 2004 - 2024 Mitnick Security Consulting LLC. All rights Reserved. | Privacy Policy