Lack of cybersecurity awareness is likely not your problem
As Cybersecurity Awareness Month swoops in and takes over everyone’s Twitter timeline, I can’t help but feel a rising bubble of frustration. This article is an attempt to pop that bubble so I can finally sleep soundly tonight.
The vast majority of security people, not to mention the general public, take for granted the following axiomatic truth:
Humans are the weakest link in the cybersecurity chain
Yet, to my knowledge, there is no study to back up this claim. No formal proof. Not even a semblance of sound technical analysis. It’s just…widely accepted and taken for granted. Crowd wisdom, one might be tempted to call it.
The reason it seems so compelling is that almost everyone can recount the tale of a phishing attack that could have been totally prevented by an employee not clicking on a link or not activating macros in an obviously fake Word document.
Ergo, the argument goes, we need more awareness and training around cybersecurity to eliminate that first entry point.
Okay, let’s pause here for a second. We just implicitly stated two important premises that are worth spelling out:
A. The employee is often the main enabler in an attack scenario
B. Training and awareness yield reasonable results in preventing attacks
Let’s subject each of them to an (overdue) rigorous analysis.
Employees as the main enablers of an attack
Let’s take a common phishing scenario, but this time zoom in on the different steps that everyone seems to silently sweep under the rug. Here is roughly the progression of your run-of-the-mill ransomware attack:
The attacker sends an email with a malicious Word document.
The employee opens the Word document, enables macros which triggers execution of the embedded payload.
The malicious code exploits a vulnerability to elevate privileges on the workstation and dumps credentials.
The code propagates to other workstations using these same credentials.
It compromises a significant number of workstations and bounces onto servers.
The ransomware finally deletes backups and starts encrypting hard drives.
The attack chain may vary from one group of attackers to another, from one strain of malware to another, but it roughly follows these same steps.
Now to the interesting part. What exactly went wrong in this instance?
First of all, the email bypassed the spam filter and the email sandbox detonation that were supposed to catch such a malicious document. That’s the first mistake that let the threat through the door. Then, of course, the employee enabled the macros in the document. But wait, let’s not forget that Microsoft Office allowed untrusted code coming from the internet to perform low-level system actions. That’s a structural flaw that has been plaguing Microsoft Office for decades now. Where is the outrage?
Then the company had unpatched workstations, either because it could not automate the process or because Microsoft has a history of shipping unreliable and broken patches. The malicious code dumped credentials that the attackers could reuse on other machines thanks to flaws in Windows’ authentication protocols (NTLM and Kerberos). All the while, the endpoint Antivirus or EDR was blind to all of these shenanigans because of various technical limitations on both the vendor’s side (simple signature matching or easy bypass) and Microsoft’s (broken design, undocumented APIs, protections disabled by default). The attackers could reuse these credentials to authenticate on other machines because the company failed at network segregation and at basic password policies requiring uniqueness and two-factor authentication. The attackers bounced onto servers because of poor isolation between the corporate and production environments and permissive security boundaries. They deleted backups because those were poorly protected and had no offline replication. And finally they started encrypting files at an alarming rate, an unusual operation that every component (operating system, Antivirus, security team, etc.) failed to register and stop.
Now… who fucked up the most? Yes, the employee should not have opened that Word document, but one cannot escape the inevitable conclusion that the whole tech environment around them was setting them up for failure. Forget about Cybersecurity Awareness Month; what we seem to need is vendor and company accountability…
Solving any one of the issues mentioned above (network isolation, patching, proper email sandboxing, decent backups, unique passwords, etc.) could have neutralized or severely constrained the attack. That’s a fact. Yes, patching is hard to do. Yes, we need to force the hands of monopolies. Yes, we face numerous challenges in detection and protection because of tech limitations. But that’s all the more reason to spend 100% of our energy, money and time solving these issues.
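To make this concrete: even the very last step of that chain, the encryption burst, is the kind of thing crude logic can flag. Here is a minimal sketch in Python using the third-party watchdog package; the path and thresholds are invented for illustration, and a real EDR obviously correlates far more signals than raw file-modification rates.

```python
# Sketch: flag an abnormal burst of file modifications, the kind of activity
# a ransomware encryption run produces. Thresholds and the watched path are
# illustrative only. Requires the third-party "watchdog" package.
import time
from collections import deque

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WINDOW_SECONDS = 10
MAX_CHANGES = 200  # more than ~20 file writes per second is suspicious here


class BurstDetector(FileSystemEventHandler):
    def __init__(self):
        self.events = deque()

    def on_modified(self, event):
        now = time.time()
        self.events.append(now)
        # Drop events that fell out of the sliding window.
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if len(self.events) > MAX_CHANGES:
            print("ALERT: abnormal file-modification burst, possible encryption run")


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(BurstDetector(), path="/home", recursive=True)  # path is illustrative
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```

Twenty-odd lines of naive rate counting, and it already notices the finale of the attack. That is the kind of gap we leave open while we rehearse slide decks.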
We can’t blame a 9-month-old child for sticking their finger in an electrical socket. We can try scolding them every day for 10 minutes… Or we can cover the outlet and let the kid play around safely. Every new parent will tell you that the latter works 99.9% of the time.
When I make this argument, I often get back: “yes, but it can’t hurt to tell people to be careful. It’s easy to set up and at least we’re doing something”. That’s a fallacious argument. Just because something is easier does not make it worth doing. It’s like searching for your keys under the nearest lamp post instead of where you actually dropped them…
Have we solved patching yet? How about 2FA, network isolation, least privilege? If the answer is no, then every hour we spend on other, less impactful actions is a lost opportunity and therefore detrimental to our security in the short and long run.
Let me repeat it again. Network isolation between workstations cuts malware propagation short and makes lateral movement extremely difficult. Even applying it to 90% of the infrastructure works wonders. Do we honestly believe that a one-hour presentation achieves anything close to that level of effectiveness?
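If that sounds abstract, here is roughly what one slice of it looks like in practice: a single host-firewall rule that blocks inbound SMB from the client subnet, so one compromised workstation cannot use it to reach its neighbors. The sketch below shells out to netsh from Python purely for illustration; the subnet is hypothetical, and in real life this gets pushed centrally via GPO or Intune rather than run by hand.

```python
# Sketch: block inbound SMB (TCP 445) from the workstation subnet on a
# Windows workstation, cutting off the most common lateral-movement path.
# The subnet is illustrative; deploy this kind of rule via GPO/Intune.
import subprocess

WORKSTATION_SUBNET = "10.10.0.0/16"  # hypothetical client VLAN

subprocess.run([
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=BlockPeerToPeerSMB",
    "dir=in", "action=block", "protocol=TCP",
    "localport=445", f"remoteip={WORKSTATION_SUBNET}",
], check=True)
```

One rule, deployed once, quietly protecting every user every day. No quiz at the end of the quarter required.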
User awareness: does it really yield results?
Up until this point I have argued that user awareness is not as important as other technical measures. The next paragraphs will take us a step further and dismiss the whole exercise altogether.
Every time I hear about user awareness trainings, I eagerly sign up! Surely these people are working miracles. They have found a way to train skilled digital forensics investigators in an hour with a few slides… That’s uncanny.
And then I see the slides and fall off my chair: “look for grammatical errors”, “don’t click on links from unknown people”, “check the address bar”, “hover over the link before clicking on it”…
Really?
An accountant receives 200 to 300 emails a day. Most of them contain files of all sorts. Links everywhere. Macros of different flavors. Plenty of them rife with syntax errors, unformatted signatures and so on. A one-hour lecture and a few slides will not help them detect that one bad attachment in a swarm of 70,000 emails. It’s delusional to think so.
At most we’re giving them anecdotal observations about previous attacks. That’s a far leap from what is actually required, i.e. statistically meaningful detection patterns and the technical ability to look at domain history, analyze SPF and DKIM, reverse macros and so on. And then we’re expecting them to weed out that one convincing phishing email? That’s wishful thinking.
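For a sense of scale, here is a minimal sketch of just one of those checks: fetching a sender domain’s SPF policy and a published DKIM key, using the third-party dnspython package. The domain and DKIM selector are placeholders (real selectors only show up in the message’s DKIM-Signature header), and even this is only the easy part of vetting an email.

```python
# Sketch: the kind of checks actually needed to vet a sender's domain --
# fetch its SPF policy and a DKIM public key. Uses the third-party
# "dnspython" package; domain and selector below are placeholders.
import dns.exception
import dns.resolver


def spf_record(domain):
    """Return the domain's SPF TXT record, or None if absent/unreachable."""
    try:
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
    except dns.exception.DNSException:
        pass
    return None


def dkim_record(domain, selector="default"):
    """Return the DKIM key published at <selector>._domainkey.<domain>, or None."""
    try:
        for rdata in dns.resolver.resolve(f"{selector}._domainkey.{domain}", "TXT"):
            return b"".join(rdata.strings).decode()
    except dns.exception.DNSException:
        pass
    return None


print(spf_record("example.com"))
print(dkim_record("example.com", selector="default"))
```

Does anyone seriously expect an accountant to run, let alone interpret, this between two invoice approvals?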
Links are meant to be clicked on. Attachments are supposed to be opened. That’s how the internet works. Make your peace with it already. Our job as security professionals is to build a tech environment that allows users to perform their tasks safely. Chromebooks, FIDO2 2FA, hardened sandboxes, alternative word processors and spreadsheets, locking horns with Microsoft to disable macros by default, and any of the hundred other technical measures available to clear out the risk.
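To pick one of those hundred measures: recent Office versions expose a policy that blocks macros in documents coming from the internet. The sketch below writes the per-user Word policy key directly, purely as an illustration; in any real environment you would push it through GPO or Intune, and the exact key path may vary with the Office version you run.

```python
# Sketch: enforce "block macros in Office files from the internet" for Word
# via its per-user policy key (Office 2016+ path, shown for illustration).
# Normally deployed through GPO/Intune; winreg is Windows-only stdlib.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Office\16.0\Word\Security"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    winreg.SetValueEx(key, "blockcontentexecutionfrominternet", 0, winreg.REG_DWORD, 1)
```

One registry value, and the “don’t enable macros” slide becomes unnecessary for the file formats that matter most.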
If anyone deems email to be the actual problem, don’t waste people’s time with lectures: go build its replacement, find a way to make Slack or Discord usable, or add three layers of protection and detection… But don’t walk around with your slides thinking you contributed anything to the security of the platform. The sad truth is that you did not change a thing.
If a company’s security is compromised because an employee clicked on a link that one time, then that company’s security was already broken. Let’s not shift the blame onto people. That’s just bad taste.