Commit 755a28bd authored by root
update documentation - replace elasticsearch and kibana with opensearch and opensearch dashboards, add missing ports in documentation
parent aa354b22
* Convert flowx.xml.gz to new template
`utils/flow2template.py flow.xml.gz roles/nifi/templates/flow.xml.j2`
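Conceptually, the conversion unpacks the gzipped flow and swaps deployment-specific literals for Jinja2 expressions. A minimal sketch of the idea (the substitution table and variable names here are hypothetical illustrations, not what utils/flow2template.py actually does):

```python
import gzip

# Hypothetical substitutions -- the real utility defines its own mapping.
SUBSTITUTIONS = {
    "soctools-nifi-1": "{{ groups['nificontainers'][0] }}",
}

def flow_to_template(src="flow.xml.gz", dst="roles/nifi/templates/flow.xml.j2"):
    """Unpack the NiFi flow and replace literals with Jinja2 placeholders."""
    with gzip.open(src, "rt", encoding="utf-8") as f:
        text = f.read()
    for literal, placeholder in SUBSTITUTIONS.items():
        text = text.replace(literal, placeholder)
    with open(dst, "w", encoding="utf-8") as f:
        f.write(text)
    return text
```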
Update OpenSearch Dashboards
------------------------
* Make necessary changes to the dashboards or visualizations in the OpenSearch Dashboards GUI
* Export the objects by going to "Management -> Stack Management -> Saved Objects" and clicking on the "Export objects" link. Select all objects.
* Copy the exported file, export.ndjson, to the soctools directory
* Convert export.ndjson to a new template
`utils/kibana_graphs2template.py export.ndjson roles/opensearch-dashboards/templates/opensearch-dashboards_graphs.ndjson.j2`
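The export is one saved object per line (ndjson), so the conversion can walk it line by line and substitute Jinja2 placeholders. A rough sketch with a hypothetical placeholder mapping (the real utils/kibana_graphs2template.py has its own substitution rules):

```python
import json

# Hypothetical placeholder mapping -- illustration only.
PLACEHOLDERS = {
    "soctools.example.org": "{{ soctools_netname }}",
}

def export_to_template(src, dst):
    """Rewrite each saved object in the ndjson export with Jinja2 placeholders."""
    out_lines = []
    with open(src, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            text = json.dumps(json.loads(line))  # validate each saved object
            for literal, placeholder in PLACEHOLDERS.items():
                text = text.replace(literal, placeholder)
            out_lines.append(text)
    with open(dst, "w", encoding="utf-8") as f:
        f.write("\n".join(out_lines) + "\n")
    return out_lines
```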
Update configuration files in docker containers using Ansible
-------------------------------------------------------------
`ansible-playbook -i inventories soctools.yml -t update-haproxy-config-acl`
`ansible-playbook -i inventories soctools.yml -t update-filebeat-config`
`ansible-playbook -i inventories soctools.yml -t update-nifi-config`
`ansible-playbook -i inventories soctools.yml -t update-opensearches-config`
`ansible-playbook -i inventories soctools.yml -t update-opensearch-dashboards-config`
Restart services inside docker containers using Ansible
-------------------------------------------------------
`ansible-playbook -i inventories soctools.yml -t restart-misp`
`ansible-playbook -i inventories soctools.yml -t restart-mysql`
`ansible-playbook -i inventories soctools.yml -t restart-nifi`
`ansible-playbook -i inventories soctools.yml -t restart-opensearches`
`ansible-playbook -i inventories soctools.yml -t restart-opensearch-dashboards`
Stop services inside docker containers using Ansible
----------------------------------------------------
`ansible-playbook -i inventories soctools.yml -t stop-misp`
`ansible-playbook -i inventories soctools.yml -t stop-mysql`
`ansible-playbook -i inventories soctools.yml -t stop-nifi`
`ansible-playbook -i inventories soctools.yml -t stop-opensearches`
`ansible-playbook -i inventories soctools.yml -t stop-opensearch-dashboards`
Restart services inside docker containers manually
--------------------------------------------------
## Other tools?
Is there anything in OpenSearch, OpenSearch Dashboards, MISP, The Hive, etc., which is specific to SOCtools and should be described (i.e. can't be found in the official documentation of these tools)?
* Data sources - the platform supports data from many common sources like system logs, application logs, IDS etc. It is also simple to add support for other sources. The main method for sending data into SOCTools is through Filebeat.
* High volume data sources - while the main platform is able to scale to high traffic volumes, it will in some cases be more convenient to have a separate setup for very high volume data like Netflow. Some NRENs might also have an existing setup for this kind of data that they do not want to change. Data sources like this will have their own storage system. If real time processing is done on the data, alerts from it can be shipped to other components in the architecture.
* Data transport - [Apache Nifi](https://nifi.apache.org/) is the key component that collects data from data sources, normalizes it, does simple data enrichment and then ships it to one or more of the other components in the architecture.
* Storage - in the current version all storage is done in [OpenSearch](https://opensearch.org/), but it is easy to make changes to the data transport so that data is sent to other log analysis tools like Splunk or Humio.
* Manual analysis - In the current version [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) is used for manual analysis of collected data.
* Enrichment - This component enriches the collected data either before or after storage. In the current version this is done as part of the data transport component before data is sent to storage.
* Threat analysis - collects and analyzes threat intelligence data. Typical source for enrichment data. The current version uses [MISP](https://www.misp-project.org/).
* Automatic analysis - this is automatic real time analysis of collected data and will be added to later versions of SOCTools. It can be simple scripts looking at thresholds or advanced machine learning algorithms.
* Incident response - [The Hive and Cortex](https://thehive-project.org/) are used for this, and new cases can be created automatically from manual analysis in OpenSearch Dashboards.
### Authentication
## NiFi pipeline
The main job of NiFi is to collect data from various sources, enrich it and send it to storage, which currently is OpenSearch. The pipeline in NiFi is organized into two main process groups, "Data processing" and "Enrichment data".
### Enrichment data
This process group is basically a collection of "cron jobs" that run regularly to update various enrichment data that is used by "Data processing" to enrich collected data. The current version supports the following enrichment data:
## Performance
The two components that decide the performance of SOCTools are OpenSearch and Apache NiFi. Both components are highly scalable by adding more nodes to the cluster.
There are reports of NiFi being scaled to handle petabytes of data per day in a large cluster; see [Processing one billion events per second with NiFi](https://blog.cloudera.com/benchmarking-nifi-performance-and-scalability/). The performance of NiFi depends heavily on the type and number of processors in the pipeline. The enrichment pipeline used in SOCTools is quite CPU intensive, but it utilizes record processing in NiFi, which means that multiple log entries of the same type are grouped together to improve performance.
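The grouping effect can be pictured as batching entries of the same type so that per-batch overhead (schema lookup, connection setup) is paid once per batch instead of once per record. A toy illustration of the idea, not NiFi's actual implementation:

```python
from collections import defaultdict

def batch_by_type(entries):
    """Group log entries by type so each batch is handled in one pass,
    mirroring how record-oriented processing amortizes per-record overhead."""
    batches = defaultdict(list)
    for entry in entries:
        batches[entry["type"]].append(entry)
    return dict(batches)

batches = batch_by_type([
    {"type": "syslog", "msg": "a"},
    {"type": "suricata", "msg": "b"},
    {"type": "syslog", "msg": "c"},
])
# Two batches: all syslog entries together, all suricata entries together.
```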
Uninett is using [Humio](https://www.humio.com/) instead of OpenSearch for storing logs, but has a pilot installation of Apache NiFi running the same pipeline as the one in SOCTools. The current setup is 6 virtual servers running on 4 physical servers. The HW specification of each virtual server is:
* CPU: 12 cores
* Memory: 8GB
* Disk: 40GB
SOCTools monitors itself which means that there is already support for receiving and parsing the data from the following systems:
* MISP
* HAProxy
* OpenSearch Dashboards
* Keycloak
* MySQL
* ZooKeeper
* NiFi
* OpenSearch
In addition there is also support for:
* Suricata EVE logs
sudo systemctl restart rsyslog
```
## 3. OpenSearch Dashboards
When some syslog data are successfully received, an index pattern must be created in OpenSearch Dashboards to be able to see it.
Go to OpenSearch Dashboards/Management/Stack Management/Index Patterns, click on "Create index pattern" and create the pattern `syslog-*`.
Then, the data will be available on the OpenSearch Dashboards Discover page when the `syslog-*` index pattern is selected. A saved search and/or dashboard can be created to show the data in the user's preferred way.
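If you prefer to script this step, OpenSearch Dashboards also exposes a saved-objects HTTP API. A hedged sketch (the URL, credentials, and object id below are assumptions for illustration; only the endpoint shape and the `osd-xsrf` header requirement come from the Dashboards API):

```python
import json

DASHBOARDS_URL = "https://soctools.example.org:5601"  # hypothetical address

def index_pattern_payload(title, time_field="@timestamp"):
    """Request body for POST /api/saved_objects/index-pattern."""
    return {"attributes": {"title": title, "timeFieldName": time_field}}

payload = index_pattern_payload("syslog-*")
# Equivalent curl invocation; the osd-xsrf header is required by Dashboards:
print(
    "curl -k -u admin:<password> -X POST "
    f"'{DASHBOARDS_URL}/api/saved_objects/index-pattern/syslog' "
    "-H 'osd-xsrf: true' -H 'Content-Type: application/json' "
    f"-d '{json.dumps(payload)}'"
)
```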
* Create whitelist for use with haproxy, in order to enable access to various tools from certain IP addresses.
* By default, the following services are accessible only from the internal docker network (172.22.0.0/16):
* HAProxy Stats - Statistics about proxied services/tools and their availability. Generally, you want only a selected number of people to be able to view them.
* OpenSearch - Direct access to the OpenSearch containers. Generally, you only need to access them for debugging purposes.
* By default, all SOCTools are accessible from the whole Internet. If you have any doubts about the implemented security features, you may want to fine-tune port visibility. You can restrict access to the following:
* NiFi Management - Web UI for managing NiFi flows. You may want to restrict access to inside your organization.
* NiFi ports - ports used for accepting data from various sources. You may want to restrict access only to certain servers/devices in your network.
* Cortex - Web UI for Cortex. You usually don't want to restrict access.
* MISP - Web UI for MISP. You usually don't want to restrict access.
* User Management UI - Web UI for creating and managing SOCTools users. Increase security by restricting access to administrator(s) only.
* OpenSearch Dashboards - Web UI for OpenSearch Dashboards. Increase security by restricting access to administrator(s) only.
Edit `roles/haproxy/files/stats_whitelist.lst` in order to manually configure whitelist IP addresses for accessing various tools. You can use `access.ips` file found in the root folder as a starting template.
* `cat access.ips > roles/haproxy/files/stats_whitelist.lst`
## Web interfaces
All web interfaces of the various services are accessed by going to `https://<server name>:<port>/` using the following port numbers:
* 9443 - NiFi
* 5601 - OpenSearch Dashboards
* 6443 - MISP
* 9000 - The Hive
* 9001 - Cortex
* 12443 - Keycloak
* 8888 - haproxy-stats
* 5443 - User Management UI
| port | description |
| ----: | ----------- |
| 8888 | haproxy-stats (login: `haproxy`, password is in `secrets/passwords/haproxy`) |
| 9000 | TheHive |
| 9001 | Cortex |
| 9200 | OpenSearch |
| 5601 | OpenSearch Dashboards |
| 9443 | NiFi web GUI |
| 6443 | MISP |
| 5443 | User Management UI |
| 12443 | Keycloak |
TODO others?
TODO open to anyone / local only?
## Data ingestion
The following port ranges are opened by haproxy to allow receiving data from external systems. These ports are forwarded to NiFi nodes. So, a processor in NiFi can listen on these ports and receive data from other systems.
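A quick way to verify that a forwarded port is actually reachable is a plain TCP connect, the equivalent of `nc -z`. The host and port below are placeholders for your deployment:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a NiFi listener exposed through haproxy (placeholder host):
# port_open("soctools.example.org", 5000)
```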
TODO
NOTES-1: The haproxy container is listening on the following ports:
- 0.0.0.0:6443->6443/tcp
- 0.0.0.0:5000-5020->5000-5020/tcp
- 0.0.0.0:6000-6020->6000-6020/tcp
- 0.0.0.0:8888->8888/tcp
- 0.0.0.0:9000-9001->9000-9001/tcp
- 0.0.0.0:9200->9200/tcp
- 0.0.0.0:9443->9443/tcp
- 0.0.0.0:12443->12443/tcp
- 0.0.0.0:5601->5601/tcp
- 0.0.0.0:5443->5443/tcp
NOTES-2: From haproxy.cfg, the following ports should go through haproxy:
| port | description |
| ----: | ----------- |
| 8888 | haproxy-stats |
| 9000 | TheHive |
| 9001 | Cortex |
| 9200 | OpenSearch |
| 5601 | OpenSearch Dashboards |
| 9443 | NiFi web GUI |
| 12443 | Keycloak | - incorrectly configured frontend on port 10443
NOTES-3: There are a number of ports that are just made visible using EXPOSE, but are not actually published, i.e. they cannot be reached directly from outside of docker, such as:
| container(s) | port(s) |
| ----: | ----------- |
| soctools-misp | 80, 443, 6379, 6666, 50000 |
| soctools-cortex | 9000 |
| soctools-thehive | 9001 |
| soctools-cassandra | 7000, 9042 |
| soctools-odfe-1/2 | 9200, 9300 |
| soctools-nifi-1/2/3 | 8000, 8080, 8443, 10000 |
| soctools-zookeeper | 2181, 2888, 3888 |
| soctools-keycloak | 8080 |
| soctools-mysql | 3306 |
Ports already used or reserved for ingesting specific data into the system via NiFi:
| port | description |
| ----: | ----------- |
| 6443 | MISP |
| 5443 | User Management UI |
| 12443 | Keycloak |
TODO (e.g. port(s) used for preconfigured ListenBeats data)
After the whole process is finished, SOCTools can be accessed by going to https://[FQDN]:[port] using the following port numbers:
* 9443 - NiFi
* 5601 - OpenSearch Dashboards
* 6443 - MISP
* 9000 - The Hive
* 9001 - Cortex
* 12443 - Keycloak
* 8888 - haproxy-stats
* 5443 - User Management UI
<img src="images/use_case1.png" width=640>
All logs collected by SOCTools are processed by Apache NiFi. NiFi is integrated with MISP, and attributes are automatically downloaded to enrich the collected data before sending it to OpenSearch. NiFi stores the information from MISP in an internal memory database and uses it to look up all IP addresses in logs. If it finds a match, it adds a new field to the log record containing the ID of the MISP event whose attribute matched the IP address. For example, if you have a field "destination.ip" and it matches an attribute in MISP, the field "destination.ip_misp" will be created.
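The enrichment step can be pictured as a simple lookup-and-annotate pass over each record; a toy sketch of the logic (field names follow the example above, while NiFi's in-memory database and record handling are more involved):

```python
def enrich(record, misp_ip_to_event):
    """Add <field>_misp with the MISP event ID for every *.ip field that matches."""
    out = dict(record)
    for field, value in record.items():
        if field.endswith(".ip") and value in misp_ip_to_event:
            out[field + "_misp"] = misp_ip_to_event[value]
    return out

enriched = enrich({"destination.ip": "10.10.10.10", "source.ip": "192.0.2.1"},
                  {"10.10.10.10": 1234})  # attribute value -> MISP event ID
# enriched now contains "destination.ip_misp": 1234
```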
A security analyst is using the preinstalled "Suricata Alerts" dashboard in OpenSearch Dashboards to keep an eye on Suricata alerts that are coming in. The dashboard contains a visualization listing destination IPs that are registered in MISP. By clicking on the magnifying glass in front of the IP "10.10.10.10", the analyst filters out events with this destination IP.
<img src="images/use_case2.png" width=640>
<img src="images/use_case4.png" width=480>
After evaluating the information in MISP, the security analyst concludes that this is a real threat and decides to create a new case in The Hive, the tool for doing incident response. He does this by clicking on the red button "Create new Case" in the OpenSearch Dashboards dashboard. A dialog box opens up where he can add details about the case and select the IP addresses that should be added as observables in The Hive. When he is ready, he clicks on "Create Case" and a new tab opens up showing the newly created case in The Hive.
<img src="images/use_case3.png" width=640>
- "8888:8888"
- "9443:9443"
- "9200:9200"
- "9000:9000"
- "9001:9001"
- "12443:12443"
- "5601:5601"
- "5443:5443"
- "6443:6443"
- "5000-5020:5000-5020"
- "6000-6020:6000-6020"
interactive: "yes"
tags:
- start-docker-containers
- "8888:8888"
- "9443:9443"
- "9200:9200"
- "9000:9000"
- "9001:9001"
- "12443:12443"
- "5601:5601"
- "5443:5443"
- "6443:6443"
- "5000-5020:5000-5020"
- "6000-6020:6000-6020"
interactive: "yes"
state: stopped
tags: