Boost Detection Confidence: Lessons from Target's Rule Management Strategy

October 7, 2024

Paul Hutelmyer

Principal Engineer, Cyber Defense

One of the challenges in the ever-growing world of cybersecurity is managing a constantly expanding detection ruleset. Each day brings new threats, often requiring the creation and deployment of new rules. As your ruleset grows, so does the need for visibility and control. How can your organization fully trust its defenses without a clear understanding of your detection capabilities, their performance, and the gaps that could leave you vulnerable?
 
This blog post summarizes insights from my SANS Blue Team Summit 2023 presentation, where I shared Target's approach to these challenges. We will discuss how Target manages our rulesets, measures ruleset efficacy, and identifies areas for improvement to achieve a more confident security posture.
 
A Key to Improved Detection: Content Awareness 
 
To measure and improve detection content, we need to know what content exists. We should ask ourselves at least two questions for every piece of detection content an organization creates:  
 
  1. Coverage: Does this rule adequately address its intended use case, and does it overlap with other detections in the ruleset? 
  2. Performance: Is this rule genuinely effective, or does it waste resources on false positives or give a false sense of security? 
 
Navigating through an ecosystem of Security Information and Event Management (SIEMs), email appliances, antivirus programs, and endpoint detection platforms can be daunting. Each tool may operate in a silo, holding unique context or data. As organizational complexity grows, so does the challenge of quickly and accurately gathering intelligence and turning that intelligence into high-quality detection across multiple tools and environments. 
 
Let's discuss detection mechanisms for the concept of scheduled tasks, which are system functionalities allowing programs to execute at predetermined times. Although scheduled tasks are a normal part of system operations, they can be exploited to run unauthorized, suspicious, or malicious software. If you wanted to assess your organization's preexisting scheduled task detection capabilities, where would you start your search? 
 
  • Multiple Tools: Would the search process involve querying different detection platforms? 
  • Platform Scope: Do you need to search Windows, Linux, and Mac environments separately? 
  • Search Parameters: What do you look for – process name, specific keywords? 
 
Scheduled task detection discovery questions
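To make the search concrete, here is a minimal Python sketch of querying a consolidated rule inventory for scheduled-task detections across tools and platforms. The inventory format, field names, and keyword list are illustrative assumptions, not any particular tool's actual schema.

```python
# Hypothetical rule inventory spanning multiple tools and platforms.
# Field names ("tool", "platform", "logic") are assumptions for illustration.
SCHEDULED_TASK_KEYWORDS = {"schtasks", "taskschd", "at.exe", "cron", "launchd"}

rules = [
    {"id": "R-101", "tool": "SIEM", "platform": "windows",
     "logic": "process_name = 'schtasks.exe'"},
    {"id": "R-202", "tool": "EDR", "platform": "linux",
     "logic": "file_path contains '/etc/cron.d'"},
    {"id": "R-303", "tool": "SIEM", "platform": "windows",
     "logic": "dns_query endswith '.badsite.example'"},
]

def find_rules(rules, keywords, platform=None):
    """Return IDs of rules whose logic mentions any keyword, optionally filtered by platform."""
    hits = []
    for rule in rules:
        if platform and rule["platform"] != platform:
            continue
        if any(kw in rule["logic"] for kw in keywords):
            hits.append(rule["id"])
    return hits
```

In practice each tool would need its own connector to normalize rule logic into a common searchable form, but even this naive keyword scan illustrates the value of a single inventory.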
Then let's add a follow-up question: after identifying relevant rules, how do you evaluate their true effectiveness? Consider questions like:
 
  • True Positive Rate: How often do these rules accurately flag malicious activity?
  • False Positive Impact: How much time do analysts spend resolving alerts for benign activity?
 
Scheduled task detection outcome discovery questions
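Both outcome questions above reduce to simple arithmetic over alert dispositions. The sketch below, with assumed field names, computes a rule's true-positive rate and the analyst minutes spent on benign alerts.

```python
# Illustrative metrics over a rule's alert history. The 'disposition' and
# 'triage_minutes' fields are assumptions, not a real platform's schema.
def rule_metrics(alerts):
    """Return (true_positive_rate, minutes_spent_on_benign_alerts)."""
    total = len(alerts)
    if total == 0:
        return 0.0, 0
    true_positives = sum(1 for a in alerts if a["disposition"] == "malicious")
    benign_minutes = sum(a["triage_minutes"] for a in alerts
                         if a["disposition"] == "benign")
    return true_positives / total, benign_minutes

alerts = [
    {"disposition": "malicious", "triage_minutes": 45},
    {"disposition": "benign",    "triage_minutes": 20},
    {"disposition": "benign",    "triage_minutes": 15},
    {"disposition": "malicious", "triage_minutes": 60},
]
tp_rate, benign_time = rule_metrics(alerts)  # 0.5 true-positive rate, 35 benign minutes
```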
Furthermore, here are additional factors complicating rule assessment:
 
  • Inactive Rules: Which rules have not triggered alerts in months, or ever?
  • Historical Performance: Which rules have a consistent track record of malicious/benign activity? Is our true positive detection rate trending in a positive or negative direction over time?
  • Disposition Time: Which rules take the longest to investigate?
  • Resource Impact: Which rules are the most resource-intensive for analysts and systems?
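The factors above become answerable once alert and case history is aggregated per rule. This hedged sketch, using fabricated sample data, surfaces inactive rules and the slowest-to-close rules.

```python
# Illustrative per-rule history; shapes and thresholds are assumptions.
from datetime import date, timedelta

TODAY = date(2024, 10, 7)
INACTIVE_AFTER = timedelta(days=90)

rule_history = {
    "R-101": {"last_alert": date(2024, 9, 30), "avg_close_hours": 2.5},
    "R-202": {"last_alert": date(2024, 2, 1),  "avg_close_hours": 1.0},
    "R-303": {"last_alert": None,              "avg_close_hours": 0.0},  # never fired
}

def inactive_rules(history, today, window):
    """Rules that never fired, or have not fired within the window."""
    return [rid for rid, h in history.items()
            if h["last_alert"] is None or today - h["last_alert"] > window]

def slowest_rules(history, top_n=1):
    """Rules with the longest average case-closure time, among rules that fired."""
    fired = {rid: h for rid, h in history.items() if h["last_alert"]}
    return sorted(fired, key=lambda rid: fired[rid]["avg_close_hours"],
                  reverse=True)[:top_n]
```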
 
When content is scattered across platforms and performance data is lacking, it leads to uncertainty, reduced visibility, and ineffective detections deployed without proper evaluation. To bolster defenses, organizations need a way to easily locate their rules and measure their real-world impact.
 
A common challenge is managing detection rules across multiple platforms, which makes it difficult to assess their effectiveness and identify improvement opportunities. Target’s solution? Detect Hub, an internal tool that aggregates, analyzes, and enriches our detection and response data.
 
What is Detect Hub?
 
Detect Hub is a central hub for rule content. It collects data from a SIEM, case management platforms, and more, storing it in a relational database. This allows for:
 
  • Rule Performance Monitoring: Track how each rule performs over time, identifying underperforming rules that might need adjustments.
  • Cross-Platform Correlation: Connect rules to alerts, cases, and other data across different platforms, providing a holistic view of your detection landscape.
  • Standardized Rule Tagging: Categorize rules based on performance, ATT&CK techniques, and other criteria for easier organization and analysis.
  • Customizable Search: Find specific rules or potential new rules based on various filters and criteria, streamlining your detection management process.
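The cross-platform correlation described above relies on the relational model: rules link to alerts, and alerts link to cases. Here is a minimal sketch of what such a schema might look like; table and column names are assumptions for illustration, not Detect Hub's actual design.

```python
# Minimal relational model sketch: rules -> alerts -> cases, in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rules  (rule_id TEXT PRIMARY KEY, name TEXT, platform TEXT);
CREATE TABLE alerts (alert_id INTEGER PRIMARY KEY, rule_id TEXT REFERENCES rules,
                     disposition TEXT);
CREATE TABLE cases  (case_id INTEGER PRIMARY KEY, alert_id INTEGER REFERENCES alerts,
                     close_hours REAL);
""")
conn.execute("INSERT INTO rules VALUES ('R-101', 'Suspicious schtasks', 'windows')")
conn.execute("INSERT INTO alerts VALUES (1, 'R-101', 'malicious')")
conn.execute("INSERT INTO cases VALUES (10, 1, 4.0)")

# One query walks the whole chain: which rule fired, what the analyst
# decided, and how long the resulting case took to close.
row = conn.execute("""
    SELECT r.name, a.disposition, c.close_hours
    FROM rules r JOIN alerts a ON a.rule_id = r.rule_id
                 JOIN cases  c ON c.alert_id = a.alert_id
""").fetchone()
```

Once every platform's data lands in tables like these, "how is this rule performing?" becomes a join rather than a multi-tool manual hunt.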
 
Highlighted functionality for the Detect Hub platform
Benefits in Action
 
Think of a scenario in which you are trying to find an underperforming rule across multiple platforms. Detect Hub makes this easy. It helps you identify rules that have not triggered true positives in months, highlighting them for review and potential improvement. Additionally, it tags rules with characteristics like excessive false positives; reviewing and remediating those rules can reduce analyst frustration over time.
 
The image below demonstrates the power of Detect Hub's user interface. Engineers can easily search for existing rules or explore potential new deployments across both internal and external rule sets. The UI provides in-depth rule details, including performance over time, linked ATT&CK techniques, and relevant tags. This comprehensive view facilitates data-driven decision-making for optimizing current rules and strategically deploying new ones.
 
Detect Hub rule search UI
The Power of Tagging
 
Detect Hub goes beyond storing rules. It maintains a set of filterable performance tags that are applied to categorize each rule. These tags provide crucial insights into a rule's effectiveness over time. For example, a tag can automatically flag rules that generate frequent alerts but rarely result in true positives. Some of those tags can be seen in the above image. This helps detection engineers quickly identify and prioritize underperforming rules for improvement.
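An automatic tagging pass like the one described can be reduced to a few threshold checks over aggregated performance data. The thresholds and tag names below are assumptions for illustration, not Detect Hub's actual values.

```python
# Illustrative performance-tag assignment; thresholds are assumed defaults.
def performance_tags(alert_count, true_positive_rate,
                     noisy_min_alerts=50, low_tp_rate=0.05):
    """Return the performance tags that apply to a rule's recent history."""
    tags = []
    # Fires frequently but rarely catches anything malicious.
    if alert_count >= noisy_min_alerts and true_positive_rate < low_tp_rate:
        tags.append("High Alert Volume / Low True Positive Rate")
    # Almost every alert it raises is genuinely malicious.
    if alert_count > 0 and true_positive_rate >= 0.9:
        tags.append("High Malicious Rate")
    # Candidate for review: has the rule gone stale?
    if alert_count == 0:
        tags.append("No Recent Alerts")
    return tags
```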
 
Additionally, Detect Hub defines "standards" for rule normalization, such as proper descriptions, metadata, and logic. The image below highlights a rule with a visualization of its adherence to these standards. While we see that some standards for this rule are not fully met, others are, like "User Subscribed," which ensures relevant users receive notifications when the rule triggers.
 
Detect Hub rule standards example
In essence, Detect Hub empowers confident decision-making by providing a unified platform for understanding, managing, and improving your detection rules.
 
Detect Hub in Action: A Case Study
 
Let's illustrate the power of Detect Hub with an example. Imagine a detection rule designed to flag a SocGholish variant, which redirects victims to malicious downloads. This rule utilizes Strelka, Target's open-source file analysis tool, to scan network traffic for the threat signature.
Rule example overview
Detect Hub plays a crucial role in analyzing this rule:
 
  • Data Collection: Detect Hub gathers rule metadata, associated alerts, and details of resulting cases.
  • Automated Analysis: Detect Hub regularly analyzes the rule's historical data, including the frequency of alerts triggered, the percentage of true positive alerts, and other relevant metrics.
  • Insights: In our example below, the rule is tagged with "High Malicious Rate," indicating it effectively flags genuine threats with minimal false positives.
 
Rule with “High Malicious Rate” tag
Interestingly, this rule also gets tagged with "High Case Closure Time." While the rule is effective in detection, this tag suggests that resolving associated cases may be time-consuming. This could stem from several factors:
 
  • Complexity: The cases might be inherently complex to investigate.
  • Support Needs: Analysts may require further training or better documentation to handle these alerts efficiently.
 
Rule with “High Case Closure Time” tag
Detect Hub does not just flag successful detections — it identifies deeper insights. Tags tell a story, empowering engineers to pinpoint well-performing rules and those that could benefit from refinement or additional analyst support.
 
Additional Insights Gained Through Aggregation
 
As noted above, data aggregation platforms like Detect Hub offer valuable insights for optimizing detection infrastructure. Here are additional ways Detect Hub has been leveraged to improve Target’s security posture:
 
  • Strategic Rule Expansion: Compare internal rules against external rule sets. Public rule repositories, like SigmaHQ, include rules developed by the greater cybersecurity community. Detect Hub brings these rules in for search. This helps identify high-priority rules for implementation, boosting coverage and closing critical gaps.
  • Framework Alignment: Map rules to security frameworks like MITRE ATT&CK to expose potential vulnerabilities and ensure robust coverage across all attack stages.
  • Threat-Focused Detection: Link rules to relevant threat intelligence (threat actors, malware) to prioritize detection and response efforts based on a specific risk profile.
  • Evaluate Intelligence Fidelity: Track the performance of threat intelligence sources by analyzing their correlation with rule outcomes. This helps identify the most valuable feeds.
  • Prioritize Rule Tuning: Use performance data to pinpoint underperforming rules that would benefit from the most attention and improvement efforts.
  • Visualize Rule Lifecycles: Understand the complete story of each rule — creation, alerts, performance, refinement — to gain deeper insights for informed optimization decisions.
 
Interface for MITRE ATT&CK framework integration in Detect Hub
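The rule-expansion and framework-alignment ideas above can be sketched together: map internal rules to ATT&CK technique IDs, then check an external repository for rules covering techniques you miss. The technique mappings and rule names below are fabricated examples.

```python
# Hedged sketch: ATT&CK coverage-gap analysis against an external rule set.
# Technique IDs are real ATT&CK identifiers; the rule mappings are invented.
internal_coverage = {
    "T1053.005",  # Scheduled Task
    "T1059.001",  # PowerShell
}
external_rules = {
    "sigma-proc-creation-schtasks": {"T1053.005"},
    "sigma-lsass-dump":             {"T1003.001"},
    "sigma-wmi-exec":               {"T1047"},
}

def coverage_gaps(internal, external):
    """Map each uncovered technique to the external rules that would cover it."""
    gaps = {}
    for rule_id, techniques in external.items():
        for technique in techniques - internal:
            gaps.setdefault(technique, []).append(rule_id)
    return gaps
```

The output is a prioritized shopping list: techniques with no internal detection, each paired with candidate community rules to evaluate for deployment.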
Building Your Own Detect Hub
 
If you elect to build a central hub for rule content, consider:
 
  • Engineering Resources: Do you have dedicated developers to build integrations, manage the database, and maintain the platform?
  • Data Access: Can you readily collect essential data points, including rule metadata, alerts, cases, and linked threat intelligence?
  • Scalability: Is your infrastructure prepared to handle the growth of both internal and external rule sets?
  • Stakeholder Interest: Will analysts and engineers actively use the platform to glean insights and guide improvements?
  • Priority: Is the organization committed to treating this project as a high-priority investment?
 
If the answer is “yes” to the above prerequisites, then here are some ideas to help you get started:
 
  1. Define Scope: Determine the initial data sources and the desired level of insight (rules, alerts, outcomes, etc.).
  2. Map Data Relationships: Understand how your various data sources interconnect (alerts to rules, cases to intel, etc.). This will guide your database design and integration process.
  3. Consider Access: Determine who needs access and what level of data sensitivity you are working with. Build secure access controls accordingly.
  4. Start Iteratively: This is the key step. Begin with one impactful data source. Demonstrate value, then gradually incorporate additional datasets over time to avoid overwhelming your team.
 
The challenges of managing an ever-evolving detection landscape are daunting. However, tools like Detect Hub offer a path toward enhanced visibility and proactive rule optimization, resulting in a more robust security posture.
 
By centralizing detection data, automating analysis, and providing actionable insights, organizations gain the clarity needed to confidently refine their defenses. While building a custom platform demands careful planning, the payoff is substantial: reduced uncertainty, empowered decision-making, and the ability to stay ahead of emerging threats in the dynamic world of cybersecurity.
 
Contact and Feedback
 
This post is a summarized version of the presentation I delivered at the SANS Blue Team Summit 2023. I've updated the content with recent insights since the talk. If interested, please feel free to watch the original recorded session.
 
If you have any questions regarding the ideas or solutions mentioned, send me an email at Paul.Hutelmyer@Target.com.
