What is normal?
It’s important to remember that there is no silver bullet in security, and there is no evidence that tools such as ML and AI can solve the problem on their own. ML is good at finding similarities between things (such as spam emails), but it is far less good at locating anomalies. In fact, any discussion of anomalous behaviour presumes that it is possible to describe normal behaviour. Unfortunately, decades of research confirm that human activity, application behaviour and network traffic are all heavily auto-correlated, making it hard to say what activity can be categorised as ‘normal’. This gives malicious actors plenty of opportunity to “hide in plain sight”, and can even let them train the system to believe that malicious activity is normal.
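That last risk can be illustrated with a toy detector. The sketch below is hypothetical and not drawn from any product: it flags a sample that deviates from a rolling baseline by more than a few standard deviations. A sudden spike is caught, but an attacker who ramps up slowly drags “normal” along with them and reaches the same level without a single alert.

```python
from collections import deque

class RollingDetector:
    """Toy anomaly detector: flag a sample that deviates from the
    rolling mean by more than `k` standard deviations."""
    def __init__(self, window=50, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        anomalous = False
        if len(self.history) >= 10:          # warm-up period
            mean = sum(self.history) / len(self.history)
            var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
            std = max(var ** 0.5, 1.0)       # floor avoids zero-variance blow-ups
            anomalous = abs(x - mean) > self.k * std
        self.history.append(x)               # every sample, flagged or not, shifts "normal"
        return anomalous

det = RollingDetector()
for _ in range(100):                         # benign baseline: traffic around 100
    det.observe(100.0)
sudden_alert = det.observe(200.0)            # abrupt doubling is flagged

det = RollingDetector()
for _ in range(100):
    det.observe(100.0)
# slow ramp from 100 to 200 over 500 observations: never flagged
gradual_alerts = [det.observe(100.0 + 0.2 * step) for step in range(1, 501)]
print(sudden_alert, any(gradual_alerts))
```

The gradual attacker ends at exactly the level that triggered the alert in the sudden case, yet the detector has been “taught” along the way that this traffic is normal.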
The Difference Between Trained and Untrained Learning
Any ML system must attempt to separate and differentiate activity based on either pre-defined (i.e. trained) or self-learned classifications. Training an ML engine using human experts seems like a great idea, but it assumes that attackers won’t subtly vary their behaviour over time in response. Self-learned categories, meanwhile, are often impossible for humans to understand. Unfortunately, ML systems are not good at explaining why a particular activity is inconsistent with normal behaviour, or how it is related to other activity. So when the ML system delivers an alert, security teams still have to do the hard work of determining whether or not it is a false positive, before trying to understand how the anomaly is related to other activity within the system.
Is It Real?
There is quite a big difference between being happy when Netflix recommends a movie you like, and expecting it never to recommend a movie that you don’t. So while applying ML to your security feeds might deliver some helpful insights, you cannot rely on such a system to deliver only valid results. In the cyber security industry, that difference is cost: time spent understanding why an alert was triggered and whether or not it is a false positive. Ponemon research estimates that an archetypal large enterprise spends up to 395 hours per week processing false alerts – a cost of approximately $1.27 million per year. Unfortunately, organisations also cannot rely on an ML system to find all anomalies, so there is no way to know whether an attacker may still be lurking within the network, and therefore no way to know when it is safe to throw the monitoring data away.
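As a sanity check on those figures – note that the implied hourly cost below is derived from the two quoted numbers, not a figure stated in the Ponemon report:

```python
# Back-of-the-envelope check of the figures quoted above.
HOURS_PER_WEEK = 395        # hours spent processing false alerts
WEEKS_PER_YEAR = 52
ANNUAL_COST = 1_270_000     # ~$1.27 million per year

hours_per_year = HOURS_PER_WEEK * WEEKS_PER_YEAR    # 20,540 analyst-hours
implied_hourly_cost = ANNUAL_COST / hours_per_year  # ~$62/hour, fully loaded

print(f"{hours_per_year} analyst-hours/year at ~${implied_hourly_cost:.0f}/hour")
```

Over 20,000 analyst-hours a year – roughly ten full-time staff – spent only on triaging alerts that turn out to be noise.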
Experts Are Still Better
Cybersecurity is a field where human expertise will always be needed to pick through the subtle differences between anomalies. Rather than waste money on the unproven promises of ML- and AI-based security technologies, it is wiser for companies to invest in experts, and in tools that enhance their ability to quickly search for and identify the components of a new attack. In the context of endpoint security, an emerging category of tools that Gartner calls “Endpoint Detection & Response” (EDR) plays an important role in equipping security teams with real-time insight into indicators of compromise on the endpoint. Here, both continuous monitoring and real-time search are key.
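The two capabilities can be sketched in a few lines. Everything here – the event fields, the hash values, the in-memory store – is hypothetical and illustrates the workflow only, not any particular EDR product’s API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    process: str
    sha256: str          # hash of the executable (hypothetical field set)

# Hypothetical indicator-of-compromise (IOC) list from threat intelligence.
KNOWN_BAD_HASHES = {"bad001", "bad002"}

events = []              # continuous monitoring appends endpoint telemetry here

def monitor(event):
    """Continuous monitoring: record the event, alert if it matches a known IOC."""
    events.append(event)
    return event.sha256 in KNOWN_BAD_HASHES

def search(ioc_hashes):
    """Real-time search: sweep recorded history when a *new* IOC is published."""
    return [e for e in events if e.sha256 in ioc_hashes]

monitor(Event("pc-17", "winword.exe", "aaa111"))   # benign at the time
monitor(Event("pc-02", "dropper.exe", "ccc333"))   # also unflagged at ingest
hits = search({"ccc333"})                          # IOC published later
print([(e.host, e.process) for e in hits])
```

The point of the retained history is exactly the second call: an event that looked benign when it was recorded can be found the moment new indicators of a fresh attack become available.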
ML Cannot Protect You
One final word of caution: as obvious as it may be, post-hoc analysis of monitoring data cannot prevent a vulnerable system from being compromised in the first place. Ultimately, we need to swiftly adopt technologies and infrastructure that are more secure by design. For example, segmenting the enterprise network – placing all PCs on a separate routed network segment and making users authenticate in order to access privileged applications – makes it much harder for malware to penetrate the organisation and move laterally within it. Virtualisation and micro-segmentation take this a step further, restricting the flow of activity within networks and making applications more resilient to attack. Overall, good infrastructure architecture can make the biggest difference to an organisation’s security posture – reducing the size of the haystack and making the business of defending the enterprise much easier.
The author of this blog is Simon Crosby, CTO and co-founder, Bromium.