Why Cybersecurity Teams Struggle with Insider Threat
Unless an enterprise has a dedicated insider threat program (ITP), mitigating such threats usually falls to internal security operations center (SOC) analysts or incident response (IR) teams generally tasked with external threats. Insider threats, however, differ substantially and demand an approach that can be challenging for SOC and IR teams. Here’s why:
The fundamental difference between fighting external and internal threats is that in almost all cases, external threat actors must carry out an initial exploit or breach to gain access to a targeted network. This initial attack is likely to be detected by automated tools that monitor for technical indicators of compromise, prompting SOC or IR teams to investigate and mitigate.
Since insiders already have network privileges, they sidestep the hurdle of gaining access entirely, and they are unlikely to be flagged by the monitoring tactics and automated tools typically used by SOC and IR teams. As such, detecting malicious insider activity requires correlating behavioral indicators across multiple data sources, including network logs and endpoint device activity.
When it comes to dealing with insider threats, employing the same static, linear workflow that SOC and IR teams often use to investigate external threats is a common mistake that can stymie the progress and accuracy of an investigation into potentially malicious insider activity.
For example, suppose an employee downloads a sensitive document from their organization’s network, modifies it to obfuscate classification markings, and sends it to their personal email account. This activity then triggers an alert from a user behavior analytics (UBA) tool used by their organization.
Following the linear method of investigation shown in Figure 1, security personnel would check the documents attached to the email for classification markings and, upon finding none, likely cease their investigation. Without additional threat indicators or context, the analyst would likely dismiss the alert. As a result, the malicious insider would evade detection simply because they had made some quick modifications to the document.
When investigating suspicious insider activity, security personnel should incorporate multiple data sources, such as proxy logs and network activity, into a contextual analysis of an employee’s behavior before, during, and after an event that triggers an alert. This type of nonlinear investigative practice provides greater insight into the context of an alert, thereby enabling personnel to evaluate the potential threat more accurately (see Figure 2).
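This kind of contextual, time-windowed correlation can be illustrated with a minimal sketch. Note that the field names, log formats, and the one-hour window below are assumptions made for illustration, not any particular product's schema:

```python
# Illustrative sketch: gather events from multiple log sources that
# occurred within a time window around an alert, grouped by source.
# Field names ("source", "time", "action") and the 60-minute window
# are hypothetical assumptions, not a real tool's API.
from datetime import datetime, timedelta

def events_near_alert(alert_time, events, window_minutes=60):
    """Return events from any source within the window before or
    after the alert, keyed by their originating data source."""
    window = timedelta(minutes=window_minutes)
    context = {}
    for event in events:
        if abs(event["time"] - alert_time) <= window:
            context.setdefault(event["source"], []).append(event)
    return context

# Example: a UBA email alert plus surrounding endpoint and proxy
# activity from the same day.
alert_time = datetime(2023, 5, 1, 14, 30)
events = [
    {"source": "endpoint", "time": datetime(2023, 5, 1, 14, 5),
     "action": "document_modified"},
    {"source": "proxy", "time": datetime(2023, 5, 1, 14, 28),
     "action": "webmail_access"},
    {"source": "vpn", "time": datetime(2023, 5, 1, 9, 0),
     "action": "login"},
]
context = events_near_alert(alert_time, events)
# The endpoint and proxy events fall inside the window and become
# part of the alert's context; the morning VPN login does not.
```

The point of the sketch is the shape of the analysis: rather than evaluating the alert in isolation, the analyst assembles what the user did across sources before, during, and after the triggering event.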
Returning to our earlier example of an employee removing classification markings from an internal document and sending it to their personal email, a contextual analysis of their workstation log activity would have revealed indicators of a potential threat. This is why discerning the additional indicators and contextual data from other events within UBA and other available data sources is a key element of insider threat analysis. However, due to the linear manner in which many SOC and IR personnel have been trained to investigate security incidents, they may be unaccustomed to analyzing suspicious insider activity in a more dynamic, nonlinear, and contextual manner.
Skill and Resource Gaps
It’s a common misperception that skills and processes useful for combating external threats can be applied to insider threats. Significant progress has been made in the area of detecting suspicious user activity as it relates to insider threats, but at many organizations, there has been little effort to educate and train cybersecurity personnel on how to conduct thorough and effective insider threat analysis.
One skill gap common among many SOC and IR personnel is digital forensics: the art and science of gathering and analyzing digital evidence of potentially malicious behavior on an employee’s workstation. Digital forensics incorporates data from a variety of sources, including email records, instant messaging logs, internet browser history, and the broader organizational network. In many cases, it also draws on UBA tools, which surface alerts from multiple sources on a single pane of glass. UBA tools are designed to collate user activities and behavior patterns in order to provide contextual indications of malicious or suspicious actions, but on their own they cannot detect or predict an insider threat, and therefore must be used in conjunction with enhanced investigative and programmatic ITP functions.
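To make the collation idea concrete, here is a minimal sketch of how a UBA-style tool might tally weighted indicators per user. The indicator names, weights, and log format are invented for illustration; real UBA products use far richer behavioral baselines, and as noted above, a score like this is only a starting point for an analyst, not a detection verdict:

```python
# Hypothetical sketch of UBA-style collation: sum weighted suspicious
# actions per user across log entries. Indicator names and weights
# are invented; a real tool would baseline each user's behavior.
from collections import defaultdict

SUSPICIOUS_ACTIONS = {
    "bulk_download": 3,
    "personal_webmail_upload": 4,
    "after_hours_access": 1,
}

def score_users(activity_log):
    """Return a per-user total of weighted suspicious actions."""
    scores = defaultdict(int)
    for entry in activity_log:
        scores[entry["user"]] += SUSPICIOUS_ACTIONS.get(entry["action"], 0)
    return dict(scores)

log = [
    {"user": "alice", "action": "bulk_download"},
    {"user": "alice", "action": "personal_webmail_upload"},
    {"user": "bob", "action": "after_hours_access"},
]
scores = score_users(log)
# alice accumulates 7 points across two indicators; bob only 1.
# A human analyst still has to interpret what, if anything, the
# elevated score means in context.
```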
Despite certain surface-level similarities, monitoring for, detecting, and mitigating insider threats requires different skills, tools, and procedures than those commonly used by SOC or IR teams. Since ITPs must be tailored to an organization’s industry, employee base, internal systems, and unique requirements, there is no one-size-fits-all insider threat solution. There are no shortcuts to developing an effective ITP, and given the potentially devastating impact of malicious insider activity, organizations should take a thoughtful, nuanced approach to developing an in-house ITP.
Eric Lackey, Flashpoint’s principal advisor of insider threat program management, is an experienced professional in the areas of insider threat and counterintelligence, with over 20 years of experience supporting criminal investigations, threat intelligence analysis, network investigations, and digital forensics. Prior to joining Flashpoint, Eric worked for one of the largest global financial services institutions on its Global Information Security Insider Threat team. Before that, he spent 10 years as a Senior Insider Threat and Counterintelligence Analyst for the Air Force Office of Special Investigations, as part of 23 years of service within the Department of Defense. Eric holds a Bachelor of Science in Business Administration and a Master of Science in Digital Forensic Science.