diff --git a/doc/architecture.md b/doc/architecture.md
index 3aac0e145c070d13dacc4281bfea2025718ed6ca..e0646396e033267b54493fcf8e2870a8c0d16a4e 100644
--- a/doc/architecture.md
+++ b/doc/architecture.md
@@ -1,28 +1,29 @@
 # Architecture
-SOCTools is a collection of tools for collecting, enriching and analyzing logs and other security data, threat information sharing and incident handling. Many SOCs will already have some tools in place that they want to continue to use. One main feature of SOCTools is therefore to have a flexible architecture where it is simple to integrate existing tools even if they are not directly supported by SOCTools. It is also easy to select which components of SOCTools to install.
+SOCTools is a collection of tools for collecting, enriching and analyzing logs and other security data, for sharing threat information and for handling incidents.
 
 ## High level architecture
 <img src="images/high_level_arch.png" width=640>
 
 The high level architecture is shown in the figure above and consists of the following components:
 * Data sources - the platform supports data from many common sources like system logs, application logs, IDS etc. It is also simple to add support for other sources. The main method for sending data into SOCTools is through Filebeat.
-* High volume data sources - while the main platform is able to scale to high traffic volumes, it will in some cases be more convenient to have a separate setup for very high volume data like Netflow. Some NRENs might also have an existing setup for this kind of data that they do not want to change. Data sources like this will have its own storage system. If real time processing is done on the data, alerts from this can be shipped to other components in the architecture.
-* Data transport - [Apache Nifi](https://nifi.apache.org/) is the key component that collects data from data sources, normalize it, do simple data enrichment and then ship it to one or more of the other components in the architecture.
+* Data transport - [Apache NiFi](https://nifi.apache.org/) is the key component that collects data from the data sources, normalizes it, does simple data enrichment and then ships it to one or more of the other components in the architecture.
 * Storage - in the current version all storage is done in [OpenSearch](https://opensearch.org/), but it is easy to make changes to the data transport so that data is sent to other log analysis tools like Splunk or Humio.
 * Manual analysis - In the current version [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) is used for manual analysis of collected data.
 * Enrichment - This component enriches the collected data either before or after storage. In the current version this is done as part of the data transport component before data is sent to storage.
 * Threat analysis - collects and analyzes threat intelligence data. Typical source for enrichment data. The current version uses [MISP](https://www.misp-project.org/).
 * Automatic analysis - this is automatic real time analysis of collected data and will be added to later versions of SOCTools. It can be simple scripts looking at thresholds or advanced machine learning algorithms.
-* Incident response - [The Hive and Cortex](https://thehive-project.org/) is used for this and new cases can be created automatically from manual analysis in Opensearch Dashboards.
+* Incident response - [The Hive and Cortex](https://thehive-project.org/) are used for this, and new cases can be created automatically from manual analysis in OpenSearch Dashboards. Note: the plugin for automatic case creation in The Hive currently does not work due to the migration from Kibana to OpenSearch Dashboards. It will be fixed in a future version; in the meantime, cases must be created manually or through The Hive's API, as sketched below.
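+
+As a workaround until the plugin is fixed, cases can also be created programmatically through The Hive's REST API. The following is a minimal, illustrative Python sketch using the [thehive4py](https://github.com/TheHive-Project/TheHive4py) client; the URL, API key and case details are placeholders for a concrete deployment, not values shipped with SOCTools.
+
+```python
+# Illustrative sketch only: create a case in The Hive via its REST API.
+from thehive4py.api import TheHiveApi
+from thehive4py.models import Case
+
+# Placeholders: point these at your own The Hive instance.
+api = TheHiveApi('https://soctools.example.org:9000', 'YOUR_API_KEY')
+
+case = Case(
+    title='Suspicious login activity',
+    description='Case created through the API instead of the Dashboards plugin.',
+    severity=2,  # 1 = low, 2 = medium, 3 = high
+)
+
+response = api.create_case(case)
+print(response.status_code, response.json())
+```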
 
 ### Authentication
 SOCTools uses [Keycloak](https://www.keycloak.org/) to provide single sign on to all web interfaces of the various components.
+User accounts are created and edited through a dedicated web interface, the SOCTools user management GUI, which propagates the necessary changes to Keycloak and all the other components via their respective APIs.
+
 ## NiFi pipeline
-The main job of Nifi is to collect data from various sources, enrich it and send it to storage which currently is OpenSearch. The pipeline in Nifi is organized into two main prcoess groups, "Data processing" and "Enrichment data".
+The main job of NiFi is to collect data from various sources, enrich it and send it to storage, which is currently OpenSearch. The pipeline in NiFi is organized into two main process groups, "Data processing" and "Enrichment data".
 
 ### Enrichment data
-This process group is basically a collection of "cron jobs" that runs regularly to update various enrichment data that is used by "Data processing" to enrich collected data. The current version supports the following enrichment data:
+This process group is basically a collection of "cron jobs" that run regularly to update the various enrichment data used by "Data processing" to enrich collected data. The current version supports the following enrichment data:
@@ -30,15 +31,16 @@ This process group is basically a collection of "cron jobs" that runs regularly
 * Alexa top 1 million - http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
 * Tor exit nodes - https://check.torproject.org/torbulkexitlist
 * MaxMind GeoLite2-City database - Requires a free account. https://dev.maxmind.com/geoip/geoip2/geolite2/
-* Misp - NiFi automatically downloads new IOCs from the Misp instance that is part of SOCTools. IP addresses and host names are then enriched to show if they are registered in Misp.
+* MISP - NiFi automatically downloads new IOCs from the MISP instance that is part of SOCTools. IP addresses and host names are then enriched to show whether they are registered in MISP.
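+
+To illustrate how this enrichment data is typically used, here is a rough Python sketch of an IP lookup step. This is illustrative only - the real enrichment happens inside the NiFi pipeline - and the GeoLite2 database path is a placeholder for a locally downloaded copy.
+
+```python
+# Illustrative sketch only: enrich an IP address with GeoIP data and
+# Tor exit node status, using the same sources listed above.
+import requests
+import geoip2.database  # MaxMind's official Python client
+from geoip2.errors import AddressNotFoundError
+
+# Download the current Tor exit node list.
+tor_exits = set(requests.get('https://check.torproject.org/torbulkexitlist',
+                             timeout=10).text.split())
+
+def enrich_ip(ip, reader):
+    """Return enrichment attributes for a single IP address."""
+    try:
+        geo = reader.city(ip)
+        country, city = geo.country.iso_code, geo.city.name
+    except AddressNotFoundError:
+        country = city = None
+    return {'ip': ip, 'country': country, 'city': city,
+            'tor_exit_node': ip in tor_exits}
+
+# 'GeoLite2-City.mmdb' is a placeholder path to a local database copy.
+with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:
+    print(enrich_ip('8.8.8.8', reader))
+```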
 
 ### Data processing
 The processing group is split into 3 parts:
-* Data input - receives data, normalizes it and converts it to JSON. This also adds attributes to the data that specifies which filed names to enrich.
+* Data input - receives data, normalizes it and converts it to JSON. This also adds attributes to the data that specify which field names to enrich.
-* Enrichment - enriches the data. It currently supports enriching IP addresses, domain names and fully qualified domain name (FQDN).
-* Data output - sends data to storage. In future version data will also be sent to other tools doing real time stream processing of the data.
+* Enrichment - enriches the data. It currently supports enriching IP addresses, domain names and fully qualified domain names (FQDNs).
+* Data output - sends data to storage (OpenSearch). In future versions data will also be sent to other tools doing real time stream processing of the data.
 
-Each group contains a process group called "Custom ..." where it is possible to add new processors to the pipeline that will not be overwritten when upgrading to newer versions of SOCTools.
+Each group contains a process group called "Custom ..." where it is recommended to add any new processors to the pipeline (in a future version, we plan to ensure that these groups are not overwritten when upgrading SOCTools).
 
 ## Performance
 
@@ -50,3 +52,6 @@ Uninett is using [Humio](https://www.humio.com/) instead of OpenSearch for stori
 * Disk: 40GB
 
-This setup processes around 7K events per second of production data per second during peak hours. During performance testing we have been able to add an additional 17K events per second of test traffic before NiFi starting to show performance issues. This translates to more than 1.1TB of data per day.
+This setup processes around 7K events per second of production data during peak hours. During performance testing we have been able to add an additional 17K events per second of test traffic before NiFi started to show performance issues. This translates to more than 1.1TB of data per day.
+
+Nevertheless, the current version of SOCTools is designed to run on a single server only.
+Support for clustering across multiple servers is planned for a future version.
diff --git a/doc/images/high_level_arch.png b/doc/images/high_level_arch.png
index 16789c09963e5bf0962fc41405d2c1be6ccd33d4..32b7d0f60509f37e13765cd985ac357520f89316 100644
Binary files a/doc/images/high_level_arch.png and b/doc/images/high_level_arch.png differ