Automation Is No Substitute for Context When Evaluating Threats
Automation streamlines, and in some cases replaces, tasks and controls at the core of many security practices. These efficiency gains can lull security analysts and network operators into believing there is no real need to proactively understand the threats facing the business, how adversaries operate, and why the organization is a target in the first place. Failing to understand threats at this level does a disservice to the organizations we protect.
Here are three drawbacks of automation, and why no machine will ever fully replicate the judgment and nuance a human analyst delivers:
Automation ≠ Infallibility
Automation in security doesn't immediately translate to infallibility. Even the best security tools fail to alert on some threats, or fire off false positives that erode confidence in the product and cost analysts unnecessary evaluation cycles. Nor does automation take into account how adversaries shift tactics, add capabilities, and develop new ways to bypass security tools.
Sometimes, adversaries succeed. When this happens, security teams without a solid, personal understanding of threats to the business will inevitably be at a disadvantage in their fight against the risk and the adversaries behind it.
Signature-based automated tools such as intrusion prevention systems (IPS), for example, are notorious for failing to detect previously unknown attacks or for presenting so many false positives that they deepen the alert fatigue administrators already suffer. An IPS compares network traffic against signatures of known threats and automatically blocks traffic whenever it detects a match. But what about traffic from unknown threats with unknown signatures? Malware authors routinely develop variants that bypass existing signatures and, if not detected or blocked by another line of defense, penetrate the network.
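The blind spot described above can be sketched in a few lines. This is a deliberately minimal illustration, not a real IPS engine, and the byte-pattern signatures are invented for the example:

```python
# Minimal sketch of signature-based detection: traffic is blocked only
# when its payload contains a known byte pattern. The signatures below
# are hypothetical, invented for illustration.

KNOWN_SIGNATURES = [
    b"EVIL_PAYLOAD_v1",   # pattern tied to a known malware strain
    b"EXPLOIT_KIT_ABC",
]

def inspect(packet_payload: bytes) -> str:
    """Return 'block' if the payload matches a known signature, else 'allow'."""
    for sig in KNOWN_SIGNATURES:
        if sig in packet_payload:
            return "block"
    return "allow"  # anything without a matching signature passes untouched

# The known strain is blocked...
assert inspect(b"GET / HTTP/1.1 EVIL_PAYLOAD_v1") == "block"
# ...but a trivially modified variant, with no matching signature, is allowed.
assert inspect(b"GET / HTTP/1.1 EVIL_PAYLOAD_v2") == "allow"
```

The second assertion is the whole point: a one-character change to the payload defeats the signature, which is why signature-based tools alone cannot be the last line of defense.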
While no security program should expect to be able to neutralize each and every threat that attempts to bypass it, the better acquainted a security team is with the ins and outs of the threat landscape, the more effective its threat detection and mitigation efforts are likely to be—especially in the event that an automated tool fails to do its job.
Awareness Training Isn't Automatic
User awareness training has its detractors, but these programs also hold an important place in many organizations' security efforts. Organizations whose security analysts and administrators have deferred to automation may be at a disadvantage when mapping out awareness training, and as a result could leave front-line employees exposed to social engineering and other attacks.
No area better illustrates this dynamic than phishing. Most organizations depend to some degree on automation to block phishing attempts, but again, none of these tools are perfect. And because phishing targets end-users, education is imperative. This is why security teams must keep up with emerging phishing campaigns and social engineering tactics. If they don’t, they won’t be able to educate users on how to identify and report phishing emails, and as a result, these users will be less capable of doing so in the event that one lands in their inbox.
Comprehensive Programs Reduce Risk More Effectively
The phishing example underscores another reason why it’s imperative for security teams to gain insight into, and educate end-users on, the threats facing organizations: It can help them mitigate risk more effectively.
Now let’s consider two hypothetical programs that aim to reduce an organization’s risk of suffering a successful phishing attack. Program A’s strategy is to ensure that any and all indicators of compromise (IoCs) that have been linked to phishing campaigns are fed to the security operations center (SOC) and blocked automatically. Program B’s strategy is to employ some automation—primarily by using spam filters and email encryption—and then augment these measures with comprehensive and ongoing anti-phishing education and awareness training for all employees.
So which program would be more effective at reducing risk? In the short term, it could be either; both programs' approaches to automation, though significantly different, are likely to block a considerable number of phishing attempts. But in the long term, Program A's reliance on automated tools that block only known phishing campaigns means it likely won't block emails from unknown or lesser-known campaigns. Although researchers are always identifying new phishing campaigns and IoCs for automated tools to block, adversaries are always deploying new campaigns that researchers haven't yet identified and automated tools aren't always able to block. This game of cat and mouse is largely why, in the long term, Program A's impact on its organization's risk is likely to remain constant.
Meanwhile, Program B’s emphasis on education is likely to yield different results over time. The more educated an organization’s employees become on how to identify and report phishing emails, the fewer successful phishing attacks the organization is likely to experience. User education helps bridge the gap between the phishing emails that automation can block and those it can’t. And as long as education efforts are ongoing and ever-evolving, this gap—and therefore the organization’s risk—will likely diminish over time.
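The diverging trajectories of the two programs can be made concrete with a toy simulation. All of the numbers below (email volume, block rates, click rates) are invented for illustration; only the shape of the two curves reflects the argument above:

```python
# Toy model of the cat-and-mouse argument: Program A blocks a fixed
# fraction of phishing emails (known campaigns only), while Program B's
# ongoing training steadily lowers the chance a user falls for the
# emails that slip through. All parameters are hypothetical.

def successful_attacks(emails: int, block_rate: float, click_rate: float) -> float:
    """Expected phishing emails that both evade blocking and get clicked."""
    return emails * (1 - block_rate) * click_rate

EMAILS_PER_QUARTER = 1000

# Program A: strong automated blocking, but the user click rate never improves.
a_risk = [successful_attacks(EMAILS_PER_QUARTER, 0.90, 0.20) for _ in range(4)]

# Program B: more modest blocking, but training cuts the click rate each quarter.
b_click_rates = [0.20, 0.12, 0.07, 0.04]
b_risk = [successful_attacks(EMAILS_PER_QUARTER, 0.75, c) for c in b_click_rates]

print(a_risk)  # flat: risk stays constant quarter over quarter
print(b_risk)  # declining: risk shrinks as users get better at spotting phishing
```

Under these assumed numbers, Program B starts out riskier but ends up safer than Program A, which is exactly the long-term dynamic the comparison is meant to illustrate.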
It’s important to emphasize that my critique of the automated tools and functions highlighted in this article doesn’t apply to all types of automation in security. User education can’t replace, or even meaningfully augment, the capabilities of firewalls or DDoS protection systems, for example. And although signature- and IoC-based tools can generally detect or block only known threats, they remain extremely valuable and necessary components of an effective security program.
What does apply to all areas of security—from network defense and endpoint protection to physical security—is our responsibility as practitioners to continually seek to understand the threats we encounter. No automated tool is capable of providing the full context in which a threat was developed and deployed. And without this context, we’re missing out on critical insights that, in many cases, can help us better protect the organizations we were hired to protect. This is why resources like threat intelligence, or, as I’ve highlighted many times before, Business Risk Intelligence will always be true requirements for defenders across the enterprise.
Josh Lefkowitz is the Chief Executive Officer of Flashpoint, where he executes the company’s strategic vision to empower organizations with Business Risk Intelligence (BRI) derived from the Deep & Dark Web. He has worked extensively with authorities to track and analyze terrorist groups. Mr. Lefkowitz also served as a consultant to the FBI’s senior management team and worked for a top tier, global investment bank. Mr. Lefkowitz holds an MBA from Harvard University and a BA from Williams College.