We had the pleasure of participating in a panel discussion on automating threat detection and response with Michelle Drolet, CEO of Towerwall, and Peter Dougherty, CEO of MantisNet, moderated by Diana Kelley, Cybersecurity Field CTO at Microsoft.

Below are some highlights of the discussion; you can register (click the play button) to listen to the full conversation.


Earlier this year, ESG published research finding that 76% of cybersecurity professionals say performing threat detection and response is harder now than it was two years ago.

What has contributed to this evolution in difficulty?

Bad actors are collaborating and becoming increasingly sophisticated, in some cases using AI/ML to execute tactical campaigns for fraud and breaches, which is increasing the volume of threats. What got us here today isn't going to work in the future; organizations need to improve the people, processes, and technologies that make up their detection and response capabilities holistically, with a particular focus on people and process. One client example also noted that breaking down silos between operations, fraud detection, and security teams to foster collaboration has the potential to make detection and response easier.

In other client examples, diverse teams sit under the CIO but remain heavily siloed. Combining these teams and fostering collaboration among them changes how the previously siloed data is processed and used: bringing fraud detection, security, and compliance information together improves overall security posture and changes how organizations, particularly retailers, can combat data fraud.

How should threat feeds be managed within an organization? There is a growing volume of threat feeds. How many do I need? Can I automate and normalize them?

Organizations are still being overloaded with a high volume of data, much of it bad, which makes it tough to use the technology properly and leads to the old garbage-in, garbage-out situation. So how do you avoid integrating bad data? Evaluate the feeds against your individual program, with processes built around your essential needs and requirements: what technology will be used, how it is implemented, who is running it, and how often threat feeds are integrated for success in the organization. There needs to be a program to integrate threat detection for applicable response. Ultimately, it is a question of quality over quantity, where AI/ML, in conjunction with other feeds, can help reduce false positives in the data so teams can focus on the real signals.
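To make the quality-over-quantity point concrete, here is a minimal sketch of normalizing and de-duplicating indicators from multiple feeds, assuming the feeds have already been fetched and differ only in field names. The feed names, fields, and confidence thresholds are illustrative assumptions, not from the panel or any specific vendor.

```python
# Minimal sketch: normalize and de-duplicate indicators from multiple threat feeds.
# Feed names, field names, and confidence values are illustrative assumptions.
from collections import defaultdict

# Raw indicators as they might arrive from different feeds (field names vary).
raw_feeds = {
    "feed_a": [{"ioc": "198.51.100.7", "type": "ip", "score": 80}],
    "feed_b": [{"indicator": "198.51.100.7", "kind": "ipv4", "confidence": 0.9},
               {"indicator": "evil.example.com", "kind": "domain", "confidence": 0.4}],
}

def normalize(feed_name, entry):
    """Map each feed's fields onto one common shape."""
    if feed_name == "feed_a":
        return {"value": entry["ioc"], "type": entry["type"], "confidence": entry["score"] / 100}
    if feed_name == "feed_b":
        ioc_type = "ip" if entry["kind"] == "ipv4" else entry["kind"]
        return {"value": entry["indicator"], "type": ioc_type, "confidence": entry["confidence"]}
    raise ValueError(f"unknown feed: {feed_name}")

# De-duplicate across feeds, keeping every source and the highest confidence seen.
merged = defaultdict(lambda: {"confidence": 0.0, "sources": set()})
for name, entries in raw_feeds.items():
    for entry in entries:
        ioc = normalize(name, entry)
        record = merged[(ioc["value"], ioc["type"])]
        record["confidence"] = max(record["confidence"], ioc["confidence"])
        record["sources"].add(name)

# Only indicators corroborated by multiple feeds or with high confidence move on.
actionable = {key: rec for key, rec in merged.items()
              if len(rec["sources"]) > 1 or rec["confidence"] >= 0.8}
print(actionable)
```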

Keep an eye on the shift to real-time detection: improved data, streaming capabilities, and event-driven or real-time analytics help drive the response actions defined by the program. Relying on stale feeds and lengthy log analysis and alert investigation, rather than timely and relevant data, affects both detection and response, leading to longer times to discover a threat and deliver a response.
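As a rough illustration of event-driven detection versus batch log analysis, the sketch below evaluates each event as it arrives against a watchlist. The event source, field names, and response hook are assumptions; a real deployment would read from a stream such as Kafka, a flow collector, or an EDR event bus.

```python
# Minimal sketch: event-driven detection against a live event stream.
# The event source, fields, and response hook are illustrative assumptions.
import time
from typing import Iterator

WATCHLIST = {"198.51.100.7", "evil.example.com"}  # e.g. output of a feed pipeline

def event_stream() -> Iterator[dict]:
    """Stand-in for a real streaming source; yields connection events."""
    sample = [
        {"ts": time.time(), "src": "10.0.0.5", "dst": "203.0.113.10"},
        {"ts": time.time(), "src": "10.0.0.9", "dst": "198.51.100.7"},
    ]
    yield from sample

def on_detection(event: dict) -> None:
    """Hypothetical response hook: alert, enrich, or open a ticket."""
    print(f"ALERT: {event['src']} contacted watchlisted host {event['dst']}")

# Evaluate each event as it arrives instead of batching logs for later analysis.
for event in event_stream():
    if event["dst"] in WATCHLIST:
        on_detection(event)
```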

How automated is automated? What are the guidelines to consider for automation of detection and response?

There are many tools available that provide visibility and observability into vulnerabilities at various levels, but context is needed around prioritization and actionable remediation, and human evaluation is still required to implement it.
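One way to picture adding that context is a simple priority score that blends severity with asset criticality and exposure. The formula and fields below are illustrative assumptions rather than any standard, and the ranked output is still meant for a human to act on.

```python
# Minimal sketch: add business context to raw vulnerability findings for prioritization.
# The weighting formula and the asset/exposure fields are illustrative assumptions.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset_criticality": 0.3, "internet_exposed": False},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "asset_criticality": 1.0, "internet_exposed": True},
]

def priority(finding: dict) -> float:
    """Blend severity with business context; exposed assets get a boost."""
    exposure = 1.5 if finding["internet_exposed"] else 1.0
    return finding["cvss"] * finding["asset_criticality"] * exposure

# The highest-priority items go to a human for remediation decisions.
for finding in sorted(findings, key=priority, reverse=True):
    print(f"{finding['cve']}: priority {priority(finding):.1f}")
```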

On the response side it is a bit more complex, as organizations have varying tolerances for risk: a pre-determined response action could terminate exchanges that contain valid transactions (ecommerce, data exchange). At least the tools have reached the point where they reduce the time to detect and provide the opportunity to respond in a shorter time frame. Further tightening of the detection and response loop is a focus for improvement across the board.
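A minimal way to encode that risk tolerance is a confidence-gated response policy, where only high-confidence detections trigger a pre-approved action and everything else goes to an analyst. The thresholds and actions here are assumptions an organization would tune to its own risk appetite.

```python
# Minimal sketch: risk-tolerance gate on automated response.
# Threshold values and block/review actions are illustrative assumptions.
AUTO_BLOCK_CONFIDENCE = 0.95   # act without a human only when very confident
REVIEW_CONFIDENCE = 0.60       # below this, just log for later analysis

def respond(detection: dict) -> str:
    confidence = detection["confidence"]
    if confidence >= AUTO_BLOCK_CONFIDENCE:
        return f"auto-block {detection['entity']}"                 # pre-approved action
    if confidence >= REVIEW_CONFIDENCE:
        return f"queue {detection['entity']} for analyst review"   # human in the loop
    return f"log {detection['entity']} only"

print(respond({"entity": "198.51.100.7", "confidence": 0.97}))
print(respond({"entity": "checkout-session-42", "confidence": 0.70}))
```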

How should important but disparate enterprise signals be integrated with each other?

This brings us to the importance of understanding the organization’s situational awareness. Cloud, endpoint, apps on the endpoint, network, IoT…all provide significant sources of data that must be correlated (which remains a big challenge) within an analytics system to understand a situation and trigger response activities.
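As a simple illustration of that correlation challenge, the sketch below groups signals from different sources by a shared entity (here, a host IP) and flags hosts with corroboration from more than one source. Source names and event fields are illustrative assumptions.

```python
# Minimal sketch: correlate disparate signals by a shared entity (host IP).
# Source names and event fields are illustrative assumptions.
from collections import defaultdict

signals = [
    {"source": "endpoint", "host": "10.0.0.9", "event": "suspicious process"},
    {"source": "network",  "host": "10.0.0.9", "event": "beaconing to watchlisted IP"},
    {"source": "cloud",    "host": "10.0.0.4", "event": "failed console login"},
]

by_host = defaultdict(list)
for signal in signals:
    by_host[signal["host"]].append(signal)

# A host with corroborating signals from multiple sources is a stronger trigger
# for response than any single alert on its own.
for host, events in by_host.items():
    sources = {e["source"] for e in events}
    if len(sources) > 1:
        print(f"correlated incident on {host}: {[e['event'] for e in events]}")
```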

Automation needs to be set up, supported, and managed properly. There is no magic ‘ML’ that figures everything out and fixes it automagically. But by feeding the right information into properly tuned tools, you’ll get fast, useful automated insights that inform security or network teams’ decisions and create the opportunity to reduce the time to respond.

Register (click the play button) to hear the rest of the discussion, including:

  • Will we get to a point to trust the automated response implicitly? What are the legal ramifications of taking that approach?
  • What should organizations do to implement an automated detection and response program?
  • How close are we to the “Minority Report” style of detection and reporting of precursor behaviors?
  • What should be done to improve vulnerability management programs and integrate into an automated approach?
  • How good is the feedback loop between your red and blue teams? How much knowledge sharing is going on about successful techniques that blue teams can turn into countermeasures?
  • The impact of a perimeter-less network on situational awareness
  • How important are playbooks to the program and their relation to an incident response plan?
  • What changes do you foresee in the automated threat detection and response landscape?

 

Topics: cyber security, Real-Time Monitoring, mantis


Written by MantisNet