diff --git a/doc/HOWTOS.md b/doc/HOWTOS.md
index bf11caed9e0616763299771295559792d00543b5..45b7b78a2c5a06b5f398ebbed27769a1cf95841c 100644
--- a/doc/HOWTOS.md
+++ b/doc/HOWTOS.md
@@ -12,14 +12,14 @@ To make modifications to the main NiFi pipeline and add it to the Ansible playbo
 * Convert flow.xml.gz to a new template  
   `utils/flow2template.py flow.xml.gz roles/nifi/templates/flow.xml.j2`
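+  (the current flow.xml.gz can first be copied out of a NiFi container, e.g.
+  `docker cp soctools-nifi-1:/opt/nifi/nifi-current/conf/flow.xml.gz .`;
+  the container name and conf path shown are image defaults and may differ in your installation)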
 
-Update Kibana dashboards
+Update OpenSearch Dashboards
 ------------------------
 
-* Make necesarry changes to the dashboards or visualizations in the Kibana GUI
-* Export objects by going to "Management->Saved Objects" and click on the "Export objects" link. Select all objects.
+* Make the necessary changes to the dashboards or visualizations in the OpenSearch Dashboards GUI
+* Export the objects by going to "Management -> Stack Management -> Saved Objects", selecting all objects and clicking on the "Export objects" link.
 * Copy the exported file, export.ndjson, to the soctools directory
 * Convert export.ndjson to a new template  
-  `utils/kibana_graphs2template.py export.ndjson roles/odfekibana/templates/kibana_graphs.ndjson.j2`
+  `utils/kibana_graphs2template.py export.ndjson roles/opensearch-dashboards/templates/opensearch-dashboards_graphs.ndjson.j2`
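+* Apply the regenerated template to a running installation with the corresponding update tag (described below)  
+  `ansible-playbook -i inventories soctools.yml -t update-opensearch-dashboards-config`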
 
 
 Update configuration files in docker containers using Ansible
@@ -36,8 +36,8 @@ Update configuration files in docker containers using Ansible
   `ansible-playbook -i inventories soctools.yml -t update-haproxy-config-acl`  
   `ansible-playbook -i inventories soctools.yml -t update-filebeat-config`  
   `ansible-playbook -i inventories soctools.yml -t update-nifi-config`  
-  `ansible-playbook -i inventories soctools.yml -t update-odfees-config`  
-  `ansible-playbook -i inventories soctools.yml -t update-odfekibana-config` 
+  `ansible-playbook -i inventories soctools.yml -t update-opensearches-config`  
+  `ansible-playbook -i inventories soctools.yml -t update-opensearch-dashboards-config` 
 
 
 Restart services inside docker containers using Ansible
@@ -54,8 +54,8 @@ Restart services inside docker containers using Ansible
 `ansible-playbook -i inventories soctools.yml -t restart-misp`  
 `ansible-playbook -i inventories soctools.yml -t restart-mysql`  
 `ansible-playbook -i inventories soctools.yml -t restart-nifi`  
-`ansible-playbook -i inventories soctools.yml -t restart-odfees`  
-`ansible-playbook -i inventories soctools.yml -t restart-odfekibana`  
+`ansible-playbook -i inventories soctools.yml -t restart-opensearches`  
+`ansible-playbook -i inventories soctools.yml -t restart-opensearch-dashboards`  
 
 Stop services inside docker containers using Ansible
 ----------------------------------------------------
@@ -71,8 +71,8 @@ Stop services inside docker containers using Ansible
 `ansible-playbook -i inventories soctools.yml -t stop-misp`  
 `ansible-playbook -i inventories soctools.yml -t stop-mysql`  
 `ansible-playbook -i inventories soctools.yml -t stop-nifi`  
-`ansible-playbook -i inventories soctools.yml -t stop-odfees`  
-`ansible-playbook -i inventories soctools.yml -t stop-odfekibana`  
+`ansible-playbook -i inventories soctools.yml -t stop-opensearches`  
+`ansible-playbook -i inventories soctools.yml -t stop-opensearch-dashboards`  
 
 Restart services inside docker containers manually
 --------------------------------------------------
diff --git a/doc/administration.md b/doc/administration.md
index 447bf65d567cc062058bc6247155155f394bc171..69b35eaf3de9cffbba513f7e37dd8dabe5faa648 100644
--- a/doc/administration.md
+++ b/doc/administration.md
@@ -25,5 +25,5 @@ What the current NiFi pipeline does. How to reconfigure it.
 
 ## Other tools?
 
-Is there anything in Elasticsearch, Kibana, MISP, The Hive, etc., which is specific to SOCtools and should be described (i.e. can't be found in official documentation of these tools)?
+Is there anything in OpenSearch, OpenSearch Dashboards, MISP, The Hive, etc., which is specific to SOCtools and should be described (i.e. can't be found in the official documentation of these tools)?
 
diff --git a/doc/architecture.md b/doc/architecture.md
index dcfdb98eb70146d111c9888cab0a70c56153acc2..3aac0e145c070d13dacc4281bfea2025718ed6ca 100644
--- a/doc/architecture.md
+++ b/doc/architecture.md
@@ -9,12 +9,12 @@ The high level architecture is shown in the figure above and consists of the fol
 * Data sources - the platform supports data from many common sources like system logs, application logs, IDS etc. It is also simple to add support for other sources. The main method for sending data into SOCTools is through Filebeat.
 * High volume data sources - while the main platform is able to scale to high traffic volumes, it will in some cases be more convenient to have a separate setup for very high volume data like Netflow. Some NRENs might also have an existing setup for this kind of data that they do not want to change. Data sources like this will have their own storage system. If real time processing is done on the data, alerts from this can be shipped to other components in the architecture.  
 * Data transport - [Apache Nifi](https://nifi.apache.org/) is the key component that collects data from data sources, normalizes it, does simple data enrichment and then ships it to one or more of the other components in the architecture.
-* Storage - in the current version all storage is done in [Elasiticsearch](https://opendistro.github.io/for-elasticsearch/), but it is easy to make changes to the data transport so that data is sent to other log analysis tools like Splunk or Humio. 
-* Manual analysis - In the current version [Kibana](https://opendistro.github.io/for-elasticsearch/) is used for manual analysis of collected data.
+* Storage - in the current version all storage is done in [OpenSearch](https://opensearch.org/), but it is easy to make changes to the data transport so that data is sent to other log analysis tools like Splunk or Humio. 
+* Manual analysis - In the current version [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) is used for manual analysis of collected data.
 * Enrichment - This component enriches the collected data either before or after storage. In the current version this is done as part of the data transport component before data is sent to storage. 
 * Threat analysis - collects and analyzes threat intelligence data. Typical source for enrichment data. The current version uses [MISP](https://www.misp-project.org/).
 * Automatic analysis - this is automatic real time analysis of collected data and will be added to later versions of SOCTools. It can be simple scripts looking at thresholds or advanced machine learning algorithms.
-* Incident response - [The Hive and Cortex](https://thehive-project.org/) is used for this and new cases can be created automatically from manual analysis in Kibana. 
+* Incident response - [The Hive and Cortex](https://thehive-project.org/) are used for this, and new cases can be created automatically from manual analysis in OpenSearch Dashboards.
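+
+As noted above, the main method for sending data into SOCTools is through Filebeat. As a minimal sketch (assuming a NiFi Beats listener published on port 5000, as described in doc/ports.md, and a hypothetical host name), a Filebeat configuration shipping logs to SOCTools could look like:
+
+```yaml
+# filebeat.yml (sketch): ship application logs to the SOCTools NiFi listener
+filebeat.inputs:
+  - type: log
+    paths:
+      - /var/log/app/*.log
+
+# NiFi's ListenBeats processor speaks the Beats protocol used by output.logstash
+output.logstash:
+  hosts: ["soctools.example.org:5000"]
+```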
 
 ### Authentication
 
@@ -22,7 +22,7 @@ SOCTools uses [Keycloak](https://www.keycloak.org/) to provide single sign on to
 
 ## NiFi pipeline
 
-The main job of Nifi is to collect data from various sources, enrich it and send it to storage which currently is Elasticsearch. The pipeline in Nifi is organized into two main prcoess groups, "Data processing" and "Enrichment data".
+The main job of NiFi is to collect data from various sources, enrich it and send it to storage, which is currently OpenSearch. The pipeline in NiFi is organized into two main process groups, "Data processing" and "Enrichment data".
 
 ### Enrichment data
 This process group is basically a collection of "cron jobs" that runs regularly to update various enrichment data that is used by "Data processing" to enrich collected data. The current version supports the following enrichment data:  
@@ -42,9 +42,9 @@ Each group contains a process group called "Custom ..." where it is possible to
 
 ## Performance
 
-The two components that decides the performance of SOCTools are Elasticsearch and Apache NiFi. Both components are highly scalable by adding more nodes to the cluster.
+The two components that determine the performance of SOCTools are OpenSearch and Apache NiFi. Both components are highly scalable by adding more nodes to the cluster.
 There are reports of NiFi being scaled to handle petabytes of data per day in a large cluster, [Processing one billion events per second with NiFi](https://blog.cloudera.com/benchmarking-nifi-performance-and-scalability/). The performance of NiFi depends heavily on the type and number of processors in the pipeline. The enrichment pipeline used in SOCTools is quite CPU intensive but it utilizes flow record processing in Nifi which means that multiple log entries of the same type are grouped together to improve performance.  
-Uninett is using [Humio](https://www.humio.com/) instead of Elasticsearch for storing logs, but has a pilot installation of Apache Nifi running the same pipeline as the one in SOCTools. The current setup is 6 virtual servers running on 4 physical servers. The HW specification of the virtual servers are:
+Uninett is using [Humio](https://www.humio.com/) instead of OpenSearch for storing logs, but has a pilot installation of Apache NiFi running the same pipeline as the one in SOCTools. The current setup is 6 virtual servers running on 4 physical servers. The HW specification of each virtual server is:
 * CPU: 12 cores
 * Memory: 8GB
 * Disk: 40GB
diff --git a/doc/dataingestion.md b/doc/dataingestion.md
index 1ed28e00294f1859c9500a1d317bc43a7cc1f024..5f43090d2c5e63f2598f7b180aa48729a5a03a7b 100644
--- a/doc/dataingestion.md
+++ b/doc/dataingestion.md
@@ -3,12 +3,12 @@
 SOCTools monitors itself which means that there is already support for receiving and parsing the data from the following systems:
 * Misp
 * Haproxy
-* Kibana
+* OpenSearch Dashboards
 * Keycloak
 * Mysql
 * Zookeeper
 * Nifi
-* Elasticsearch
+* OpenSearch
 
 In addition, there is also support for:
 * Suricata EVE logs
diff --git a/doc/dataingestion_syslog.md b/doc/dataingestion_syslog.md
index 1ea091e9e1299085d9755fcff9638a3dd88e76a5..eadcb3db4bcdde6b92aa6238cac4a87513cbe267 100644
--- a/doc/dataingestion_syslog.md
+++ b/doc/dataingestion_syslog.md
@@ -49,11 +49,11 @@ Then just restart rsyslog:
 sudo systemctl restart rsyslog
 ```
 
-## 3. Kibana
+## 3. OpenSearch Dashboards
 
-When some syslog data are succesfully received, an index pattern must be created in Kibana to be able to see it.
+Once some syslog data has been successfully received, an index pattern must be created in OpenSearch Dashboards to be able to see it.
 
-Go to Kibana/Management/Index patterns, click on "Create index pattern" and create the pattern `syslog-*`.
+Go to OpenSearch Dashboards/Management/Stack Management/Index Patterns, click on "Create index pattern" and create the pattern `syslog-*`.
 
-Then, the data will be available on Discover page when `syslog-*` index pattern is selected. A saved search and/or dashboard can be created to show the data in user's preferred way.
+Then the data will be available on the Discover page in OpenSearch Dashboards when the `syslog-*` index pattern is selected. A saved search and/or dashboard can be created to show the data in the user's preferred way.
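+
+To verify that documents are actually being indexed, the index can also be queried directly through the OpenSearch REST API (a sketch, assuming direct access to port 9200 and valid credentials):
+
+```
+curl -k -u <user>:<password> "https://<soctools server>:9200/syslog-*/_count?pretty"
+```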
 
diff --git a/doc/install.md b/doc/install.md
index 626797f6cd88ae61ac7d4bebef2ad0316c9c6f82..72d2a12af8fff1e0242648dc9125f0486dbf77b6 100644
--- a/doc/install.md
+++ b/doc/install.md
@@ -30,7 +30,7 @@ You can use configuration script named "configure.sh", located in the root folde
 * Create whitelist for use with haproxy, in order to enable access to various tools from certain IP addresses. 
 * By default, following services are accessible only from internal docker network (172.22.0.0/16):
   * HAProxy Stats - Statistics about proxied services/tools and their availability. Generally, you want only a selected number of people to be able to view them.
-  * ODFE - Direct access to ODFE Elasticsearch containers. Generally, you would need to access them only for debugging purposes.
+  * OpenSearch - Direct access to OpenSearch containers. Generally, you would need to access them only for debugging purposes.
 * By default, all SOCTools are accessible from the whole Internet. If there is any doubt in the implemented security features, you may want to fine-tune port visibility. You can restrict access to following:
   * Nifi Management - Web UI for managing Nifi flows. You may want to restrict access inside you organization.
   * Nifi ports - ports used for accepting data from various sources. You may want to restrict access only to certain servers/devices in your network.
@@ -39,7 +39,7 @@ You can use configuration script named "configure.sh", located in the root folde
   * Cortex - Web UI for Cortex. Usually don't want to restrict access.
   * MISP - Web UI for MISP. Usually don't want to restrict access.
   * User Management UI - Web UI for creating and managing SOCTools users. Increase security by restricting access only for administrator(s)
-  * Kibana - Web UI for Kibana. Increase security by restricting access only for administrator(s)
+  * OpenSearch Dashboards - Web UI for OpenSearch Dashboards. Increase security by restricting access only for administrator(s)
 
 Edit `roles/haproxy/files/stats_whitelist.lst` in order to manually configure whitelist IP addresses for accessing various tools. You can use `access.ips` file found in the root folder as a starting template.
 * `cat access.ips > roles/haproxy/files/stats_whitelist.lst`
@@ -87,8 +87,10 @@ User authentication is done using client certificates. A certificate is generate
 ## Web interfaces
 All Web interfaces of the various services are accessed by going to `https://<server name>:<port>/` using the following port numbers:
 * 9443 - NiFi
-* 5601 - Kibana
+* 5601 - OpenSearch Dashboards
 * 6443 - Misp
 * 9000 - The Hive
 * 9001 - Cortex
 * 12443 - Keycloak
+* 8888 - haproxy-stats
+* 5443 - User Management UI
diff --git a/doc/ports.md b/doc/ports.md
index 29cfefb0c629501f1adad1be7d16b3b96f3fe45f..080ecae5058cb78633a0c920ae620d62c038ea4e 100644
--- a/doc/ports.md
+++ b/doc/ports.md
@@ -6,69 +6,43 @@ The list of TCP ports used in SOCtools, as available from the outside:
 
 | port  | description |
 | ----: | ----------- |
-|  5601 | Kibana |
-|  6443 | MISP |
-|  8888 | haproxy-stats (login: `haproxy`, password is in `secrets/passwords/haproxy`)
+|  5443 | User Management UI |
+|  5601 | OpenSearch Dashboards |
+|  6443 | MISP |
+|  8888 | haproxy-stats (login: `haproxy`, password is in `secrets/passwords/haproxy`) |
 |  9000 | TheHive |
 |  9001 | Cortex |
+|  9200 | OpenSearch |
 |  9443 | NiFi web GUI |
 | 12443 | Keycloak |
 
-TODO others?
-TODO open to anyone / local only?
 
 ## Data ingestion
 
 The following port ranges are opened by haproxy to allow receiving data from external systems. These ports are forwarded to NiFi nodes. So, a processor in NiFi can listen on these ports and receive data from other systems.
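+
+For example, assuming a NiFi ListenTCP or ListenBeats processor is configured on port 5000, connectivity from a data source can be checked with `nc -vz <soctools server> 5000`.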
 
-TODO
-
-NOTES-1: According to haproxy.cfg, the followng ports are forwarded to NiFi:
-- 7750-7760 (tcp)
-- 7771 (tcp)
-- 5000-5020 (http)
-- 6000-6020 (tcp)
-In fact, I can connect (using `nc`) to these ports 7750, 5000-5099, 6000-6099 (i.e. not 7751-7760, 7771; on the other hand, the 50??,60?? ranges are wider, I don't know where they are pointed to).
-
-NOTES-2: haproxy container is listening on following ports:
-- 0.0.0.0:443->443/tcp
-- 0.0.0.0:5000-5099->5000-5099/tcp
-- 0.0.0.0:6000-6099->6000-6099/tcp
-- 0.0.0.0:7750->7750/tcp
-- 0.0.0.0:8443->8443/tcp
+NOTES-1: The haproxy container is listening on the following ports:
+- 0.0.0.0:5000-5020->5000-5020/tcp
+- 0.0.0.0:5443->5443/tcp
+- 0.0.0.0:5601->5601/tcp
+- 0.0.0.0:6000-6020->6000-6020/tcp
+- 0.0.0.0:6443->6443/tcp
 - 0.0.0.0:8888->8888/tcp
 - 0.0.0.0:9000-9001->9000-9001/tcp
 - 0.0.0.0:9200->9200/tcp
 - 0.0.0.0:9443->9443/tcp
+- 0.0.0.0:12443->12443/tcp
 
-NOTES-3: From haproxy.cfg, following ports should go through haproxy:
+NOTES-2: From haproxy.cfg, the following ports should go through haproxy:
 | port  | description |
 | ----: | ----------- |
 |  8888 | haproxy-stats |
 |  9000 | TheHive |
 |  9001 | Cortex |
-|  9200 | ODFEES |
+|  9200 | OpenSearch |
 |  9443 | NiFi web GUI |
-| 12443 | Keycloak | - incorectly configured frontend on port 10443
-
-NOTES-4: There are a number of ports that are just made visible using EXPOSE, but are not actually published, i.e. they cannot be reached directly outside of docker, such as:
-
-| container(s)  | port(s) |
-| ----: | ----------- |
-| soctools-misp  | 80, 443, 6379, 6666, 50000 |
-| soctools-cortex | 9000 |
-| soctools-thehive  | 9001 |
-| soctools-cassandra  | 7000, 9042 |
-| soctools-odfe-1/2  | 9200, 9300 |
-| soctools-nifi-1/2/3 | 8000, 8080, 8443, 10000 |
-| soctools-zookeeper | 2181, 2888, 3888 |
-| soctools-keycloak | 8080 |
-| soctools-mysql | 3306 |
-
-
-Ports already used or reserved for ingesting specific data into the system via NiFi:
-
-| port  | description |
-| ----: | ----------- |
+| 12443 | Keycloak |
 
-TODO (e.g. port(s) used for preconfigured ListenBeats data)
diff --git a/doc/quickstart.md b/doc/quickstart.md
index 3d41a992aa1b2c05e3086f94a248d46468dc8cac..0204f46275a560028b084751a66815d97ae4fdf0 100644
--- a/doc/quickstart.md
+++ b/doc/quickstart.md
@@ -89,8 +89,11 @@ At last you can start SOCTools containers and initialize them using you configur
 After the whole process is finished, SOCTools can be accessed by going to https://[FQDN]:[port] using the following port numbers:
 
 * 9443 - NiFi
-* 5601 - Kibana
+* 5601 - OpenSearch Dashboards
 * 6443 - Misp
 * 9000 - The Hive
 * 9001 - Cortex
 * 12443 - Keycloak
+* 8888 - haproxy-stats
+* 5443 - User Management UI
+
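+For example, with the (hypothetical) FQDN `soctools.example.org`, OpenSearch Dashboards is reached at `https://soctools.example.org:5601/` and NiFi at `https://soctools.example.org:9443/`.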
diff --git a/doc/usecase.md b/doc/usecase.md
index c8691728ca761d32d0cd42d2cd302a1987861e70..54ee60a60ac30836b4b8cb9f4478ba61226eeee3 100644
--- a/doc/usecase.md
+++ b/doc/usecase.md
@@ -6,9 +6,9 @@ Assume that a threat analyst in a SOC learns about a specific IP address used by
 
 <img src="images/use_case1.png" width=640>
 
-All logs collected by SOCTools are processed by Apache NiFi. NiFi is integrated with MISP and attributes are automatically downloaded to enrich the collected data before sending it to Elasticsearch. NiFi stores the information from MISP in an internal memory database and uses it to look up all IP addresses in logs. If it finds a match then it adds a new field to the log record that contains the event ID in MISP that contains attribute that matches the IP address. For example if you have a field "destination.ip" and it matches an attribute in MISP, the field "destination.ip_misp" will be created.
+All logs collected by SOCTools are processed by Apache NiFi. NiFi is integrated with MISP and attributes are automatically downloaded to enrich the collected data before sending it to OpenSearch. NiFi stores the information from MISP in an internal memory database and uses it to look up all IP addresses in logs. If it finds a match, it adds a new field to the log record containing the ID of the MISP event whose attribute matched the IP address. For example, if the field "destination.ip" matches an attribute in MISP, the field "destination.ip_misp" will be created.
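+
+As a simplified, hypothetical illustration (field names from the pipeline, event ID invented), an enriched record could then contain:
+
+```json
+{
+  "destination.ip": "10.10.10.10",
+  "destination.ip_misp": "1234"
+}
+```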
 
-A security analyst is using the preinstalled Kibana dashboard "Suricata Alerts" to keep an eye on Suricata alerts that are comming in. The dashboard contains a visualization listing destination IPs that are registered in MISP. By clicking on the magnifying class in front of the IP "10.10.10.10" the analyst filters out events with this destination IP. 
+A security analyst is using the preinstalled "Suricata Alerts" dashboard in OpenSearch Dashboards to keep an eye on incoming Suricata alerts. The dashboard contains a visualization listing destination IPs that are registered in MISP. By clicking on the magnifying glass in front of the IP "10.10.10.10", the analyst filters out events with this destination IP.
 
 <img src="images/use_case2.png" width=640>
 
@@ -16,6 +16,6 @@ He then expands one of the events and scrolls down till he sees the field "desti
 
 <img src="images/use_case4.png" width=480>
 
-After evaluating the information in MISP, the security analyst concludes that this is a real threat and decides to create a new case in the Hive, the tool for doing incident response. He does this by clicking on the red button "Create new Case" in the Kibana dashboard. A dialog box opens up where he can add details about the case and select the IP addresses that should be added as an observable in the Hive. When he is ready he clicks on "Create Case" and a new tab opens up showing the newly created case in the Hive.
+After evaluating the information in MISP, the security analyst concludes that this is a real threat and decides to create a new case in the Hive, the tool for doing incident response. He does this by clicking on the red button "Create new Case" in OpenSearch Dashboards. A dialog box opens up where he can add details about the case and select the IP addresses that should be added as observables in the Hive. When he is ready, he clicks on "Create Case" and a new tab opens up showing the newly created case in the Hive.
 
 <img src="images/use_case3.png" width=640>
diff --git a/roles/docker/tasks/haproxy.yml b/roles/docker/tasks/haproxy.yml
index 1c7b765abffa54894d0c467292ff475fe7aeeb03..0ff24e5aeed07458370e47d5d71184b2edb03bb4 100644
--- a/roles/docker/tasks/haproxy.yml
+++ b/roles/docker/tasks/haproxy.yml
@@ -12,15 +12,14 @@
       - "8888:8888"
       - "9443:9443"
       - "9200:9200"
-      - "7750:7750"
       - "9000:9000"
       - "9001:9001"
       - "12443:12443"
       - "5601:5601"
       - "5443:5443"
       - "6443:6443"
-      - "5000-5099:5000-5099"
-      - "6000-6099:6000-6099"
+      - "5000-5020:5000-5020"
+      - "6000-6020:6000-6020"
     interactive: "yes"
   tags:
     - start-docker-containers
@@ -37,15 +36,14 @@
       - "8888:8888"
       - "9443:9443"
       - "9200:9200"
-      - "7750:7750"
       - "9000:9000"
       - "9001:9001"
       - "12443:12443"
       - "5601:5601"
       - "5443:5443"
       - "6443:6443"
-      - "5000-5099:5000-5099"
-      - "6000-6099:6000-6099"
+      - "5000-5020:5000-5020"
+      - "6000-6020:6000-6020"
     interactive: "yes"
     state: stopped
   tags: