COVID-19 Outbreak | Defending Against the Psychology of Fear, Uncertainty and Doubt

Earthquakes. Floods. Tsunamis. Wildfires. Landslides. Hurricanes. Tornadoes. SARS. H1N1 (swine flu). MERS. Ebola. HIV/AIDS. Zika. And now COVID-19, also known in the media as the coronavirus.

Natural disasters and epidemics have much in common, including the tragic loss of human life. But there is a darker and more sinister connection: bad actors use the same human tragedy to spread malware, launch phishing and spear-phishing campaigns, and commit fraud by exploiting emotion. Such is the case with COVID-19.

The World Health Organization (WHO) continues to warn that attackers are exploiting the coronavirus emergency to send phishing emails that contain malware.

Using Fear to Aid and Abet Fraud

The bad actors don’t limit themselves to malware. The US Food and Drug Administration is also warning consumers about fraudulent products that “claim to prevent, treat, mitigate, diagnose or cure coronavirus disease 2019 (COVID-19).” It’s a full-court press when it comes to fraud.

In December 2019, SentinelLabs released a groundbreaking report about the relationship between the cybercrime group TrickBot and North Korea, a recognized Advanced Persistent Threat (APT) actor. TrickBot’s capabilities are magnified when psychology is effectively deployed against email recipients.

Recently, SentinelLabs identified a malicious campaign that uses a coronavirus healthcare notification, purportedly from Canadian authorities, to distribute malware aimed at financial institutions.

Johns Hopkins University’s Center for Systems Science and Engineering has developed a map that tracks the spread of COVID-19 by country, region, state, and city. As of March 10, 2020, the countries with the most cases are China, Italy, Iran, South Korea, Spain, France, Germany, the United States, and Japan. Each of those countries, in turn, becomes ready-made context for phishing emails that target large numbers of users.

Context is extremely important when crafting an email designed to deliver a malicious payload. Human nature has always responded more predictably to fear of loss than to the potential for gain. For example, in the context of COVID-19, which email subject line would generate a higher likelihood of response?

“How to prevent the spread of the coronavirus in 3 easy steps.”

“URGENT: You have been in contact with a verified coronavirus patient.”

The first subject line does not create fear of loss; it offers only the potential to gain more information about stopping the spread of the coronavirus. The second subject line attacks the heart of the matter: fear of death. A related lever is scarcity, the belief that a valued item is in short supply. With COVID-19, that could be the availability of test kits.

“Don’t lose your chance to get these hard-to-find coronavirus test kits.”

The last subject line combines fear of loss with scarcity. Hundreds of thousands of years of human evolution have made us loss averse. That same evolution has also reinforced the primary purpose of our brain: to keep us alive. Everything beyond that is a bonus.

It’s irrelevant that citizens can’t purchase these test kits and that only the government has them. The fear of loss, the sense of urgency, and the sheer volume of media coverage dedicated to COVID-19 create conditions that override our common sense and push us to act on primal fears. Death is the ultimate trump card.
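To make the pattern concrete, here is a minimal, purely illustrative Python sketch that counts the fear, urgency, and scarcity cues described above in a subject line. The keyword lists, the `cue_score` function, and the scoring scheme are assumptions chosen for demonstration, not a production detection rule.

```python
import re

# Toy keyword lists for the three cues discussed above (illustrative only).
FEAR_TERMS = {"death", "infected", "contact", "patient", "outbreak", "virus"}
URGENCY_TERMS = {"urgent", "immediately", "now", "verified"}
SCARCITY_PHRASES = ("hard-to-find", "don't lose", "last chance", "running out")

def cue_score(subject: str) -> int:
    """Count how many fear/urgency/scarcity cues appear in a subject line."""
    s = subject.lower()
    # Tokenize into words, keeping apostrophes and hyphenated terms together.
    words = set(re.findall(r"[a-z']+(?:-[a-z']+)*", s))
    score = sum(1 for w in FEAR_TERMS | URGENCY_TERMS if w in words)
    score += sum(1 for phrase in SCARCITY_PHRASES if phrase in s)
    return score

subjects = [
    "How to prevent the spread of the coronavirus in 3 easy steps.",
    "URGENT: You have been in contact with a verified coronavirus patient.",
    "Don't lose your chance to get these hard-to-find coronavirus test kits.",
]

for subject in subjects:
    print(cue_score(subject), subject)
```

Run against the three example subject lines, even this crude heuristic ranks the fear-of-death message highest and the informational message lowest, which mirrors the response rates an attacker is counting on.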

Exploiting Human Vulnerabilities

Criminals have become more sophisticated in their understanding of how to manipulate human emotion to achieve a targeted action. Social engineering is based on the premise that, through manipulation, influence, and deceit, an attacker can get you to take an action you believe is trustworthy but is actually malicious.

Nation-state actors have long relied upon social engineering to achieve targeted goals in espionage, system compromise, election influence, and social media manipulation. Business Email Compromise (BEC) relies upon convincing the recipient of an email that the sender is a person of authority and that a particular action (like transferring hundreds of thousands of dollars) must be taken.

The number one tactic used by adversarial governments and bad actors isn’t exploiting a software vulnerability. It’s exploiting human weakness. In an article I wrote for The Hill, I outlined how Russia successfully carried out the first attack using the malware known as BlackEnergy. The initial method of compromise? A spear-phishing email sent purportedly from the Ukrainian government. The attached Excel spreadsheet asked the user to enable macros.

And just like that, the initial payload was delivered. Nothing fancy. Just a sense of urgency (an email apparently from the Ukrainian government) overriding common sense (never enable macros from an attachment).
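As a purely illustrative defensive counterpart (an assumption of this article, not part of the original BlackEnergy analysis), the sketch below flags modern Office attachments that carry an embedded VBA macro project by checking for the vbaProject.bin part inside the OOXML zip container.

```python
# Illustrative sketch: flag modern Office attachments (.docm, .xlsm, etc.)
# that contain an embedded VBA macro project. OOXML documents are zip
# containers, and macro-enabled ones include a "vbaProject.bin" part.
import sys
import zipfile

def has_vba_macros(path: str) -> bool:
    """Return True if an OOXML document contains an embedded VBA project."""
    if not zipfile.is_zipfile(path):
        # Legacy binary formats (.doc/.xls) need a different parser;
        # conservatively treat them as suspect in this toy example.
        return True
    with zipfile.ZipFile(path) as doc:
        return any(name.lower().endswith("vbaproject.bin") for name in doc.namelist())

if __name__ == "__main__":
    for attachment in sys.argv[1:]:
        verdict = "MACROS PRESENT" if has_vba_macros(attachment) else "no macros found"
        print(f"{attachment}: {verdict}")
```

A gateway that quarantines anything this check flags removes the decision from the user entirely, which is the point: the user never gets the chance to let urgency override common sense.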

Nobody Is Immune to Social Engineering

The psychology of fear, uncertainty, and doubt is a powerful weapon. During my time in law enforcement, I specialized in serial crime profiling and behavioral analysis interviewing. Getting someone to click on a link in an email isn’t nearly as difficult as getting someone to confess to murdering another human being.

In the behavioral analysis interview (BAI), I analyzed the case (context) and framed my questions accordingly. The goal of the BAI is to determine if the subject is being truthful or deceptive. If the subject is being deceptive, and it appears they could have committed the crime, then it’s time to move from gathering facts to the interrogation. Not every interview leads to an interrogation, however.

During the interrogation, the goal is to cause the subject to manifest anxiety to the point that the only way to relieve it is to be truthful. I taught these same techniques at the National Security Agency to damage assessment agents who had been involved in some of the most serious espionage cases in United States history. The same dynamic explains why an employee might click on a suspicious link or open a malware-laden document: to find out the answer and relieve the anxiety created by fear, uncertainty, and doubt.

What is the moral of this story? It’s that no matter how much security awareness training you do, how many posters on cyber hygiene you plaster around your offices, or how many weekly reminders you send out by email, in the end, hundreds of thousands of years of human behavior will eventually win out. That means fear of loss (death) and self-preservation (relieving anxiety and stress) will trump common sense.

Fear Doesn’t Work on Machines

However, there is a silver lining to these dark clouds on our horizon. The use of Artificial Intelligence and Machine Learning has shifted the balance of power from the attackers to those being attacked. Rather than merely responding to and recovering from attacks, defenders can use AI/ML to increase the speed and precision of detection and prevention.
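As a rough illustration of the kind of machine-learning detection being described (a minimal sketch using scikit-learn, not SentinelOne’s technology), the snippet below trains a simple text classifier on a handful of invented subject lines and labels; a real system would train on large labeled corpora and use far more than subject text alone.

```python
# Minimal sketch of ML-based phishing text detection with scikit-learn.
# The tiny training set and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subjects = [
    "URGENT: You have been in contact with a verified coronavirus patient.",
    "Don't lose your chance to get these hard-to-find coronavirus test kits.",
    "Invoice attached - enable macros to view your refund immediately",
    "Quarterly all-hands meeting moved to Thursday at 10am",
    "Lunch menu for the cafeteria this week",
    "Reminder: submit your expense report by Friday",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-like, 0 = benign

# TF-IDF features over unigrams and bigrams, fed to a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(subjects, labels)

test = "URGENT: coronavirus test kits reserved for you - act now"
prob = model.predict_proba([test])[0][1]
print(f"phishing probability: {prob:.2f}")
```

The model never feels urgency or scarcity; it simply weighs the same cues that pressure a human reader, and it does so at machine speed on every message.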

The behaviors ingrained in our DNA over thousands of centuries can be counterbalanced by the deliberate application of technology. Rather than requiring a user to determine whether something is ‘safe’, it is better to prevent the threat from executing in the first place. It is easier to prevent a ransomware attack than it is to recover from one. And it is far easier to manage good press than bad.

Artificial intelligence doesn’t give in to fear. It doesn’t have human emotions to be manipulated, and it can’t contract the coronavirus. This just may be the perfect antidote to fear, uncertainty, and doubt.

Morgan is the Chief Security Advisor for SentinelOne and a Senior Fellow at the Center for Digital Government. He has testified before Congress multiple times about the security of large government systems and is currently the chief technology analyst for Fox News Channel and Fox Business Network covering cybersecurity.