Russian version | Polish version
Actionable analytics designed to combat threats based on MITRE's ATT&CK.
Atomic Threat Coverage is a tool which allows you to automatically generate actionable analytics, designed to combat threats (based on the MITRE ATT&CK adversary model) from Detection, Response, Mitigation and Simulation perspectives:
Atomic Threat Coverage is a highly automatable framework for the accumulation, development and sharing of actionable analytics.
Atomic Threat Coverage is a framework that provides you with the ability to automatically generate Confluence and Markdown knowledge bases (as well as other analytics), with the different entities mapped to each other using MITRE ATT&CK technique IDs.
Here are links to our demo environment where you can see the outcome of using the framework:
There are plenty of decent projects which provide analytics (or functionality) with a specific focus (i.e. Sigma for Detection, Atomic Red Team for Simulation, etc). All of them have one weakness: they exist in the vacuum of their own area. In reality everything is tightly connected: data for alerts doesn't come from nowhere, and generated alerts don't go nowhere. Data collection, security systems administration, threat detection, incident response etc. are parts of a bigger, more comprehensive process which requires close collaboration between various departments.
Sometimes the problems of one function can be solved by the methods of another function in a cheaper, simpler and more efficient way, and most tasks can't be solved by one function alone. Each function depends on the abilities and quality of the others. There is no efficient way to detect and respond to threats without proper data collection and enrichment. There is no efficient way to respond to threats without understanding which technologies/systems/measures could be used to block a specific threat. There is no reason to conduct a penetration test or Red Team exercise without understanding the ability of processes, systems and personnel to combat cyber threats. All of this requires tight collaboration and mutual understanding between multiple departments.
In practice there are difficulties in collaboration due to:
That's why we decided to create Atomic Threat Coverage: a project which connects different functions/processes under a unified threat-centric methodology (Lockheed Martin Intelligence Driven Defense®, aka MITRE Threat-based Security) and threat model (MITRE ATT&CK), and provides security teams with an efficient tool for collaboration on their one main challenge: combating threats.
Working with existing [1][2][3][4] analytics/detections repositories looks like an endless copy/paste job: manually adapting the information into an internal analytics knowledge base format, a detections data model, mappings to internal metrics and entities, etc.
We decided to make it different.
Atomic Threat Coverage is a framework which allows you to create and maintain your own analytics repository, import analytics from other projects (like Sigma and Atomic Red Team, as well as private forks of these projects with your own analytics) and export it into human-readable wiki-style pages on (for now) two platforms:
In other words, you don't have to work on the data representation layer manually; you work on meaningful atomic pieces of information (like Sigma rules), and Atomic Threat Coverage will automatically create an analytics database with all entities mapped to meaningful, actionable metrics, ready to use, ready to share and to show to leadership, customers and colleagues.
Everything starts from a Sigma rule and ends up with human-readable wiki-style pages and other valuable analytics. Atomic Threat Coverage parses it and:

- Maps the Detection Rule to ATT&CK Tactics and Techniques using `tags` from the Sigma rule
- Maps the Detection Rule to Data Needed using the `logsource` and `detection` sections from the Sigma rule
- Maps the Detection Rule to Triggers (Atomic Red Team tests) using `tags` from the Sigma rule
- Converts everything into Confluence and Markdown wiki-style pages using jinja templates (`scripts/templates`)
- Pushes all pages to the Confluence knowledge base (according to `config.yml`)
- Makes `analytics.csv` and `pivoting.csv` files for simple analysis of existing data

Data in the repository:
```
├── analytics/
│   ├── generated/
│   │   ├── analytics.csv
│   │   ├── pivoting.csv
│   │   ├── atc_es_index.json
│   │   ├── thehive_templates/
│   │   │   └── RP_0001_phishing_email.json
│   │   ├── attack_navigator_profiles/
│   │   │   ├── atc_attack_navigator_profile.json
│   │   │   ├── atc_attack_navigator_profile_CU_0001_TESTCUSTOMER.json
│   │   │   └── atc_attack_navigator_profile_CU_0002_TESTCUSTOMER2.json
│   │   └── visualizations/
│   │       └── os_hunting_dashboard.json
│   └── predefined/
│       ├── atc-analytics-dashboard.json
│       ├── atc-analytics-index-pattern.json
│       └── atc-analytics-index-template.json
├── customers/
│   ├── CU_0001_TESTCUSTOMER.yml
│   ├── CU_0002_TESTCUSTOMER2.yml
│   └── customer.yml.template
├── data_needed/
│   ├── DN_0001_4688_windows_process_creation.yml
│   ├── DN_0002_4688_windows_process_creation_with_commandline.yml
│   └── dataneeded.yml.template
├── detection_rules/
│   └── sigma/
├── enrichments/
│   ├── EN_0001_cache_sysmon_event_id_1_info.yml
│   ├── EN_0002_enrich_sysmon_event_id_1_with_parent_info.yaml
│   └── enrichment.yml.template
├── logging_policies/
│   ├── LP_0001_windows_audit_process_creation.yml
│   ├── LP_0002_windows_audit_process_creation_with_commandline.yml
│   └── loggingpolicy_template.yml
├── response/
│   └── atc-react/
├── mitigation/
│   └── atc-mitigation/
├── triggers/
│   └── atomic-red-team/
└── visualizations/
    ├── dashboards/
    │   ├── examples/
    │   │   └── test_dashboard_document.yml
    │   └── os_hunting_dashboard.yml
    └── visualizations/
        ├── examples/
        │   └── vert_bar.yml
        └── wmi_activity.yml
```
Detection Rules are unmodified Sigma rules. By default Atomic Threat Coverage uses rules from the official repository, but you can (and should) use rules from your own private fork with the analytics relevant to you.
Links to Data Needed, Triggers, and the relevant articles in ATT&CK are generated automatically. The Sigma rule, Kibana query, X-Pack Watcher and GrayLog query are generated and added automatically (this list can be expanded and depends on Sigma's supported targets).
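For orientation, the mappings described here hang off ordinary Sigma rule fields. The following minimal rule is purely illustrative (it is not taken from the ATC dataset) and shows the `logsource`, `detection` and `tags` sections the framework consumes:

```yaml
title: Whoami Execution            # illustrative rule, not from the ATC dataset
status: experimental
description: Detects execution of whoami.exe, often used after initial access.
logsource:                         # used to map the rule to Data Needed
    product: windows
    category: process_creation
detection:
    selection:
        Image|endswith: '\whoami.exe'
    condition: selection
tags:                              # used to map the rule to ATT&CK
    - attack.discovery
    - attack.t1033
level: low
```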
This entity is expected to simplify communication with SIEM/LM/Data Engineering teams. It includes the following data:
- `pivoting.csv` generation

This entity is expected to explain to SIEM/LM/Data Engineering teams and IT departments which Logging Policies have to be configured to obtain the proper Data Needed for Detection of and Response to a specific threat. It also explains how exactly the policy can be configured.
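As a sketch, a Logging Policy entry might look like the following. The field names here are assumptions for illustration only; use `loggingpolicy_template.yml` from the repository as the authoritative template:

```yaml
# Illustrative only: field names are assumptions, see loggingpolicy_template.yml
title: LP_0001_windows_audit_process_creation
description: >
  Audit Process Creation generates event 4688 for every process
  started on the host.
eventID:
  - 4688
configuration: >
  Computer Configuration > Windows Settings > Security Settings >
  Advanced Audit Policy Configuration > Detailed Tracking >
  Audit Process Creation (Success)
```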
This entity is expected to simplify communication with SIEM/LM/Data Engineering teams. It includes the following data:
This way you will be able to easily explain why you need specific enrichments (via the mapping to Detection Rules) and specific systems for data enrichment (for example, Logstash).
This entity is used to track Logging Policies configuration, Data Needed collection and Detection Rules implementation per customer. A customer can be internal (for example, a remote site) or external (in the case of Service Providers). It can even be a specific host. There are no limitations on how this entity is defined.
This entity is expected to simplify communication with SIEM/LM/Data Engineering teams and to provide leadership with visibility into implementation. It is used to generate `analytics.csv`, `atc_attack_navigator_profile.json` (per customer) and `atc_es_index.json`.
Triggers are unmodified Atomic Red Team tests. By default Atomic Threat Coverage uses atomics from the official repository, but you can (and should) use atomics from your own private fork with the analytics relevant to you.
This entity is needed to test specific technical controls and detections. A detailed description can be found on the official site.
A Mitigation Policy is a measure that has to be executed to mitigate a specific threat. For more information, please refer to the atc-mitigation repository.
A Response Playbook is an Incident Response plan that represents a complete list of procedures/tasks that have to be executed to respond to a specific threat. For more information, please refer to the atc-react repository.
Visualisations include separate Visualisations / Saved searches and Dashboards, built on top of them.
Basically, atomic visualisations represent building blocks for Dashboards of different purposes.
For now we only support export to Kibana, but we are targeting export to multiple platforms (Splunk in the near future).
This entity can be described as a Sigma for visualisations.
A detailed HowTo can be found here.
Atomic Threat Coverage generates an Elasticsearch index with all the data mapped to each other, for visualisation and analysis of existing data in Kibana.
This way it can help answer questions such as:
Ideally, these visualisations could provide organizations with the ability to connect threat coverage from a detection perspective to money, for example:
If you don't have Elasticsearch and Kibana deployed, you can use `analytics.csv` for the same purposes.
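Since `atc_es_index.json` is uploaded through the `_bulk` endpoint (see the deployment instructions further down), it is newline-delimited JSON in which each document is preceded by an action line. Here is a minimal sketch of producing such a payload; the document fields are hypothetical, not the actual ATC index schema:

```python
# Sketch of the Elasticsearch bulk format: action line, then document line,
# newline-delimited, with a mandatory trailing newline.
import json

# Hypothetical documents; the real index carries ATC's mapped analytics.
docs = [
    {"detection_rule": "Whoami Execution", "technique": "T1033"},
    {"detection_rule": "Credential Dumping", "technique": "T1003"},
]

lines = []
for doc in docs:
    # Action line tells Elasticsearch which index receives the next document.
    lines.append(json.dumps({"index": {"_index": "atc-analytics"}}))
    lines.append(json.dumps(doc))

bulk_payload = "\n".join(lines) + "\n"  # the bulk API requires a final newline
```

A payload built this way can be POSTed to `<es url>/atc-analytics/_doc/_bulk` just like the generated file.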
Atomic Threat Coverage generates an ATT&CK Navigator common profile (for all existing Detection Rules) as well as per-customer profiles for visualisation of current detection abilities, gap analysis, development prioritisation, planning, etc. You only need to upload it to the public or (better) a private Navigator site and click New Tab -> Open Existing Layer -> Upload from local. Here is how it looks for the default ATC dataset (original Sigma repository rules, Windows only):
Atomic Threat Coverage generates `analytics.csv` with a list of all data mapped to each other for simple analysis.
It can be used for the same purposes as `atc_es_index.json`.
Atomic Threat Coverage generates `pivoting.csv` with a list of all fields (from Data Needed) mapped to descriptions of Data Needed for one very specific purpose: it provides information about the data sources where a specific data type can be found, for example domain name, username, hash, etc:
At the same time, it highlights which fields can be found only with specific enrichments:
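Because `pivoting.csv` is plain CSV, it is easy to slice programmatically with the stdlib `csv` module. In the sketch below, the header names and rows are illustrative stand-ins, not the exact ATC columns:

```python
# Filter pivoting.csv-style data: which Data Needed sources carry a field?
# Column names (field, data_needed, description) are assumptions for this demo.
import csv
import io

# Stand-in for analytics/generated/pivoting.csv
PIVOTING_CSV = """field,data_needed,description
Hashes,DN_0003_1_windows_sysmon_process_creation,Hashes of the new process image
CommandLine,DN_0002_4688_windows_process_creation_with_commandline,Process command line
Hashes,DN_0005_7_windows_sysmon_image_loaded,Hashes of the loaded image
"""

def sources_for(field_name: str, csv_text: str) -> list[str]:
    """Return the Data Needed identifiers that contain the given field."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["data_needed"] for row in reader if row["field"] == field_name]

hash_sources = sources_for("Hashes", PIVOTING_CSV)
```

In practice you would open the generated file instead of the inline string, e.g. `open("analytics/generated/pivoting.csv")`.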
If you just want to try it with the default dataset, you can use docker:

```
git submodule init
git submodule update
git submodule foreach git pull origin master
```

- Copy `scripts/config.default.yml` to `config.yml`
- Update `config.yml` with links to your own Confluence node (following the instructions inside the default config)
- Build the image: `docker build . -t atc`
- Run the container: `docker run -it atc`
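For orientation, the Confluence-related part of a `config.yml` might look like the snippet below. Treat the key names as placeholders: the authoritative names and inline instructions live in `scripts/config.default.yml`.

```yaml
# Placeholder keys: consult scripts/config.default.yml for the real names
confluence_rest_api_url: https://confluence.example.com/rest/api/
confluence_space_name: ATC
confluence_name_of_root_directory: Atomic Threat Coverage
```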
That's all. Confluence will be populated with the data, and all analytics will be generated on your side (Elasticsearch index, CSV files, TheHive templates, Navigator profiles, etc).
We do not recommend using this type of deployment in production.
If you just want to familiarise yourself with the final result on the default dataset, you can also use the online demo of the automatically generated Confluence knowledge base.
- Add your Detection Rules to the `detection_rules` directory
- Add your Triggers to the `triggering` directory
- Add your Data Needed to the `data_needed` directory (you can create a new one using the template)
- Add your Logging Policies to the `logging_policies` directory (you can create a new one using the template)
- Add your Enrichments to the `enrichments` directory (you can create a new one using the template)
- Add your Customers to the `customers` directory (you can create a new one using the template)
- Add your Response Actions to the `response_actions` directory (you can create a new one using the template)
- Add your Response Playbooks to the `response_playbooks` directory (you can create a new one using the template)
- Add your own templates to `scripts/templates/` and adjust `templates_directory` in your `config.yml`
- Configure your `config.yml` (create it from `scripts/config.default.yml` and adjust the settings)
- Execute `make` in the root directory of the repository

If you want to partially regenerate/update the analytics, you can investigate the `Makefile` options or the `main.py` help.
You need both Elasticsearch and Kibana up and running (the instructions below are relevant for version 7.3.1).
Define variables:

```
ELASTICSEARCH_URL="http://<es ip/domain>:<es port>"
KIBANA_URL="http://<kibana ip/domain>:<kibana port>"
USER=""
PASSWORD=""
```
Then upload the index pattern to Kibana:

```
curl -k --user ${USER}:${PASSWORD} -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -XPOST "${KIBANA_URL}/api/saved_objects/index-pattern/42272fa0-cde5-11e9-667e-6f37d60dcef0" \
  -d @analytics/predefined/atc-analytics-index-pattern.json
```
Then upload the Dashboard to Kibana:

```
curl -k --user ${USER}:${PASSWORD} -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -XPOST "${KIBANA_URL}/api/kibana/dashboards/import?exclude=index-pattern&force=true" \
  -d @analytics/predefined/atc-analytics-dashboard.json
```
Then upload the index to Elasticsearch:

```
curl -k --user ${USER}:${PASSWORD} -H "Content-Type: application/json" \
  -XPOST "${ELASTICSEARCH_URL}/atc-analytics/_doc/_bulk?pretty" \
  --data-binary @analytics/generated/atc_es_index.json
```
You can automate index uploading by adding the last command to the Makefile in your private fork.
This way, each time you add new analytics, the Dashboard will be automatically updated.
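For example, the upload could be wrapped in a Makefile target; the target name and variable handling below are illustrative, so adjust them to your fork:

```make
# Illustrative target: uploads the generated index to Elasticsearch
upload_es_index:
	curl -k --user $(USER):$(PASSWORD) -H "Content-Type: application/json" \
	  -XPOST "$(ELASTICSEARCH_URL)/atc-analytics/_doc/_bulk?pretty" \
	  --data-binary @analytics/generated/atc_es_index.json
```

Hooking this target into the default analytics-generation target would then refresh the Dashboard on every run of `make`.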
The project is currently in an alpha stage. It doesn't support all existing Sigma rules (current coverage is ~95%), and some entities (like Mitigation Systems) are still to be developed. We warmly welcome any feedback and suggestions to improve the project.
(during `make`)

No. Only to your Confluence node, according to the configuration provided in `config.yml`.
We mean that you will use community-compatible formats for (at least) Detection Rules (Sigma) and Triggers (Atomic Red Team), and that at some maturity level you will (hopefully) want to share some interesting analytics with the community. It's totally up to you.
The simplest way is to follow the workflow chapter, adding your rules to the pre-configured folders for each specific type of analytics.
A more "production" way is to configure private forks of the Sigma and Atomic Red Team projects as submodules of your Atomic Threat Coverage private fork. After that, you only need to configure the path to them in `config.yml`, and Atomic Threat Coverage will start using them for knowledge base generation.
Absolutely. We also have some Detection Rules which can't be automatically converted into SIEM/LM queries by Sigma. We still use the Sigma format for such rules, putting the unsupported detection logic into the "condition" section. SIEM/LM teams later create rules manually based on the description in this field. ATC is not only about automatic query generation/documentation; there are still a lot of advantages for analysis, and you wouldn't be able to utilise them without Detection Rules in Sigma format.
[1] MITRE Cyber Analytics Repository
[2] Endgame EQL Analytics Library
[3] Palantir Alerting and Detection Strategy Framework
[4] The ThreatHunting Project
See the LICENSE file.