Commit ac373deb authored by Kiril KJiroski

thehive integration with keycloak

parent 16fb12c7
Merge request: !1 Dev02
Showing 634 additions and 320 deletions
@@ -13,45 +13,36 @@ Log in and install ansible:
 `yum -y install ansible git`
 `ansible-galaxy collection install ansible.posix`
 Clone soctools:
-Temporary solution: Upload your ssh key to gitlab.geant.org
-`git clone git@gitlab.geant.org:gn4-3-wp8-t3.1-soc/soctools.git`
+`git clone https://scm.uninett.no/geant-wp8-t3.1/soctools.git`
 `cd soctools`
 Install soctools:
-Edit group_vars/all/main.yml and change 'soctoolsproxy' so that it points to the FQDN of the server.
+Edit group_vars/all/main.yml and change 'dslproxy' so that it points to the FQDN of the server.
 `vi group_vars/all/main.yml`
-Users are specified in the file:
-`group_vars/all/users.yml`
+The first entry in the soctools_users variable is the user with full admin privileges in NiFi and Kibana.
 To configure the server running soctools, run the ansible playbook:
-`ansible-playbook -i inventories soctools_server.yml`
+`ansible-playbook -i soctools-inventory soctools_server.yml`
 To build the Docker images needed, run the ansible playbook:
-`ansible-playbook -i inventories buildimages.yml`
+`ansible-playbook -i soctools-inventory buildimages.yml`
 To build the CA needed for host and user certificates, run the ansible playbook:
-`ansible-playbook -i inventories buildca.yml`
+`ansible-playbook -i soctools-inventory buildca.yml`
-If using soctools CA certificates provided with this installation, you first need to download and import the root certificate found in secrets/CA/ca.crt.
-For Windows, the CA certificate should be installed in the Trusted Root Certification Authorities store.
-User certificates can be found in the directory secrets/certificates. Import into browser for authentication.
-For Windows, the user certificate should be installed in the Personal store.
-Passwords for the certificates can be found in the directory secrets/passwords.
+User certificates can be found in the directory roles/ca/files/CA/private. Import into browser for authentication.
 To start the cluster, run the ansible playbook soctools.yml:
-`ansible-playbook -i inventories soctools.yml -t start`
+`ansible-playbook -i soctools-inventory soctools.yml -t start`
 To stop the cluster, run the ansible playbook soctools.yml:
-`ansible-playbook -i inventories soctools.yml -t stop`
-Web interfaces are available on the following ports:
-* 9443 - NiFi
-* 5601 - Kibana
-* 6443 - MISP : Default user/password: admin@admin.test/test
-* 9000 - The Hive : Default user/password: admin@thehive.local/secret
-* 9001 - Cortex
-* 12443 - Keycloak : Default user/password: admin/Pass005
+The NiFi interface should now be available on port 9443 on the server.
+The OpenDistro for Elasticsearch interface should now be available on port 5601 on the server. To access preconfigured index patterns you have to switch to the Global tenant.
+The Keycloak IdP interface should now be available on port 12443 on the server.
 License
 -------
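The start/stop invocations above are plain `ansible-playbook` calls selected by tag. A minimal convenience wrapper is sketched below; `soctools_ctl` is not part of the repository, it is an illustrative assumption that simply guards the tag argument before delegating to the documented command:

```shell
# Hypothetical helper (not shipped in soctools): validate the tag, then run
# the exact playbook invocation documented in the README.
soctools_ctl() {
    tag="$1"
    case "$tag" in
        start|stop) ;;
        *) echo "usage: soctools_ctl {start|stop}" >&2; return 2 ;;
    esac
    # Same invocation as documented above:
    ansible-playbook -i soctools-inventory soctools.yml -t "$tag"
}
```

Usage would be `soctools_ctl start` or `soctools_ctl stop`; any other argument is rejected before Ansible is invoked.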
 ---
 - name: Build certification authority
-  hosts: soctoolsmain
+  hosts: dsldev
   roles:
     - ca
 ---
-soctoolsproxy: "<CHANGE_ME:hostname>"
-maxmind_key: ""
-docker_build_dir: "{{playbook_dir}}/build"
+dslproxy: "dsoclab.gn4-3-wp8-soc.sunet.se"

 # TheHive Button plugin
 THEHIVE_URL: "https://hive.gn4-3-wp8-soc.sunet.se/"
-THEHIVE_API_KEY: "5LymseWiurZBrQN8Kqp8O+9KniTL5cE0"
-THEHIVE_OWNER: "admin"
+# here enter API key for default admin user
+THEHIVE_API_KEY: "bs2Jc3tGJqhVv0AYyX2NYlhMlorPz7mX"
+# ID of the default admin user
+THEHIVE_OWNER: "admin@thehive.local"
+
+# TheHive Create Organisation and Users
+# Login as default admin user and create API key, populate it here
+# thehive_admin_api: "KoHrKbIJm8XMsJxA9nZLs6YemCu76o3u"
+# thehive_writer: "[write]"
+#THEHIVE_API_KEY: "1gFdNhmUSxO3BRe1SBB5JYEvkW9UOo6s"
+THEHIVE_USERS:
+  - kiril:
+      username: "kiril"
+      name: "Kiril"
+      surname: "Kiroski"
+      roles: '["read", "write", "admin"]'
+      organization: "uninett.no"
+  - temur:
+      username: "temur"
+      name: "Temur"
+      surname: "Maisuradze"
+      roles: '["read", "write", "admin"]'
+      organization: "uninett.no"

 soctools_netname: "soctoolsnet"
 soctools_network: "172.22.0.0/16"
-repo: soctools
+repo: gn43-dsl
 version: 7
 suffix: a20201004
-haproxy_name: "soctools-haproxy"
+haproxy_name: "dsoclab-haproxy"
 haproxy_version: "2.2"
 haproxy_img: "{{repo}}/haproxy:{{version}}{{suffix}}"
 HAPROXY_PROCESSES: "2"
-HAPROXY_STATS_PASS: "eiph2Eepaizicheelah3tei+bae3ohgh"
-FILEBEAT_VERSION: "7.9.3"
-FILEBEAT_OUTPUT_HOST: "{{soctoolsproxy}}"
-FILEBEAT_OUTPUT_PORT: "6000"
-FILEBEAT_CERT: "/opt/filebeat/filebeat.crt"
-FILEBEAT_KEY: "/opt/filebeat/filebeat.key"
 temp_root: "/tmp/centosbuild"
 openjdk_img: "{{repo}}/openjdk:{{version}}{{suffix}}"
-zookeeper_name: "soctools-zookeeper"
+zookeeper_name: "dsoclab-zookeeper"
 zookeeper_img: "{{repo}}/zookeeper:{{version}}{{suffix}}"
-misp_name: "soctools-misp"
+misp_name: "dsoclab-misp"
 misp_img: "{{repo}}/misp:{{version}}{{suffix}}"
-misp_url: "https://{{soctoolsproxy}}:6443"
 nifi_img: "{{repo}}/nifi:{{version}}{{suffix}}"
-mysql_name: "soctools-mysql"
+mysql_name: "dsoclab-mysql"
 mysql_img: "{{repo}}/mysql:{{version}}{{suffix}}"
+mysql_dbrootpass: "Pass006"
-cassandra_name: "soctools-cassandra"
+cassandra_name: "dsoclab-cassandra"
 cassandra_img: "{{repo}}/cassandra:{{version}}{{suffix}}"
-thehive_name: "soctools-thehive"
+thehive_name: "dsoclab-thehive"
 thehive_img: "{{repo}}/thehive:{{version}}{{suffix}}"
+# GENERATED WITH cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1
+thehive_secret_key: "LcnI9eKLo33711BmCnzf6UM1y05pdmj3dlADL81PxuffWqhobRoiiGFftjNPKpmM"
-cortex_name: "soctools-cortex"
+cortex_name: "dsoclab-cortex"
 cortex_img: "{{repo}}/cortex:{{version}}{{suffix}}"
 cortex_elasticsearch_mem: "256m"
+# GENERATED WITH cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1
+cortex_secret_key: "9CZ844IcAp5dHjsgU4iuaEssdopLcS6opzhVP3Ys4t4eRpNlHmwZdtfveLEXpM9D"
+cortex_odfe_pass: "Pass009"
+kspass: "Testing003"
+tspass: "Testing003"
 sysctlconfig:
-  - { key: "net.core.rmem_max", val: "4194304" }
-  - { key: "net.core.wmem_max", val: "4194304" }
+  - { key: "net.core.rmem_max", val: "2097152" }
+  - { key: "net.core.wmem_max", val: "2097152" }
   - { key: "vm.max_map_count" , val: "524288" }
 nifi_javamem: "1g"
 odfe_javamem: "512m"
-nifi_version: 1.12.1
+nifi_version: 1.11.4
 nifi_repo: "https://archive.apache.org/dist"
 ca_cn: "SOCTOOLS-CA"
+soctools_users:
+  - firstname: "Bozidar"
+    lastname: "Proevski"
+    username: "bozidar.proevski"
+    email: "bozidar.proevski@finki.ukim.mk"
+    DN: "CN=Bozidar Proevski"
+    CN: "Bozidar Proevski"
+    password: "Pass001"
+  - firstname: "Arne"
+    lastname: "Oslebo"
+    username: "arne.oslebo"
+    email: "arne.oslebo@uninett.no"
+    DN: "CN=Arne Oslebo"
+    CN: "Arne Oslebo"
+    password: "Pass002"
+  - firstname: "Kiril"
+    lastname: "Kjiroski"
+    username: "kiril.kjiroski"
+    email: "kiril.kjiroski@finki.ukim.mk"
+    DN: "CN=Kiril Kjiroski"
+    CN: "Kiril Kjiroski"
+    password: "Pass003"
 odfees_img: "{{repo}}/odfees:{{version}}{{suffix}}"
 odfekibana_img: "{{repo}}/odfekibana:{{version}}{{suffix}}"
+# GENERATE 32-bit secure value
+odfekibana_cookie: "iroAm0ueIV7w6CS1WcJTwIV6R4d5RIAt"
+odfees_adminpass: "Pass004"
 #elk_version: "oss-7.6.1"
 elk_version: "oss-7.4.2"
 #odfeplugin_version: "1.7.0.0"
@@ -80,25 +129,16 @@ openid_scope: profile
 openid_subjkey: preferred_username
 keycloak_img: "{{repo}}/keycloak:{{version}}{{suffix}}"
+keycloak_adminpass: "Pass005"
 elastic_username: "admin"
+misp_token: ""
+misp_url: ""
+maxmind_key: ""
 misp_dbname: "mispdb"
 misp_dbuser: "misp"
+misp_dbpass: "Pass007"
-services:
-  - mysql
-  - haproxy
-  - openjdk
-  - zookeeper
-  - nifi
-  - elasticsearch
-  - kibana
-  - odfees
-  - odfekibana
-  - keycloak
-  - misp
-  - cassandra
-  - thehive
-  - cortex
+# misp_salt generated with: openssl rand -base64 32
+misp_salt: "wa2fJA2mGIn32IDl+uKrCJ069Mg3khDdGzFNv8DOwM0="
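The `# GENERATED WITH …` and `# misp_salt generated with …` comments above record how the secret values were produced. The sketch below reproduces those recipes; the variable names are illustrative, and `dd` replaces the unbounded `cat /dev/urandom` so that no producer in the pipeline is killed mid-write:

```shell
# 64-character alphanumeric secret, as in the "GENERATED WITH" comments
# (512 random bytes yield ~120 alphanumeric characters, comfortably > 64):
secret_key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null \
             | tr -dc 'a-zA-Z0-9' | fold -w 64 | sed -n '1p')

# 32-character value, as for odfekibana_cookie:
cookie=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null \
         | tr -dc 'a-zA-Z0-9' | fold -w 32 | sed -n '1p')

# base64 salt, as for misp_salt (32 random bytes -> 44 base64 characters):
salt=$(openssl rand -base64 32)
```

Each value should be regenerated per deployment rather than reused from the committed examples.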
@@ -5,115 +5,96 @@
     name: "{{repo}}/centos:{{version}}{{suffix}}"
   register: centosimg

-- name: Assert CentOS image
-  assert:
-    that: centosimg.images | length == 0
-    fail_msg: "CentOS image already exists"
+#- name: Skip if image exists
+#  meta: end_play
+#  when: centosimg.images | length != 0
+#  tags:
+#    - start
+
+#- name: Assert CentOS image
+#  assert:
+#    that: centosimg.images | length == 0
+#    fail_msg: "CentOS image already exists"
+
+- name: Build CentOS image
+  when: centosimg.images | length == 0
+  block:
 - name: Create etc tree in build directory
   file:
     path: '{{ temp_root}}/{{ item.path }}'
     state: directory
     mode: '{{ item.mode }}'
   with_filetree: templates/etcroot/
   when: item.state == 'directory'

 - name: Populate etc tree in build directory
   template:
     src: '{{ item.src }}'
     dest: '{{ temp_root}}/{{ item.path }}'
     force: yes
   with_filetree: templates/etcroot
   when: item.state == 'file'

 - name: Create dev tree in build directory
   command: mknod -m {{ item.mode }} {{ item.dev }} {{ item.type }} {{ item.major }} {{ item.minor }}
   args:
     creates: "{{ item.dev }}"
   with_items:
     - { mode: 600, dev: "{{temp_root}}/dev/console", type: c, major: 5, minor: 1 }
     - { mode: 600, dev: "{{temp_root}}/dev/initctl", type: p, major: '', minor: '' }
     - { mode: 666, dev: "{{temp_root}}/dev/full", type: c, major: 1, minor: 7 }
     - { mode: 666, dev: "{{temp_root}}/dev/null", type: c, major: 1, minor: 3 }
     - { mode: 666, dev: "{{temp_root}}/dev/ptmx", type: c, major: 5, minor: 2 }
     - { mode: 666, dev: "{{temp_root}}/dev/random", type: c, major: 1, minor: 8 }
     - { mode: 666, dev: "{{temp_root}}/dev/tty", type: c, major: 5, minor: 0 }
     - { mode: 666, dev: "{{temp_root}}/dev/tty0", type: c, major: 4, minor: 0 }
     - { mode: 666, dev: "{{temp_root}}/dev/urandom", type: c, major: 1, minor: 9 }
     - { mode: 666, dev: "{{temp_root}}/dev/zero", type: c, major: 1, minor: 5 }

 - name: Install centos-release in build directory
   yum:
     installroot: "{{ temp_root}}"
     name: centos-release
     state: present

 - name: Install Core CentOS in build directory
   yum:
     installroot: "{{ temp_root}}"
     name:
       - "@Core"
       - yum-plugin-ovl.noarch
       - epel-release
     state: present

-- name: Install extra packages
-  yum:
-    installroot: "{{ temp_root }}"
-    name:
-      - daemonize
-    state: present
-
 - name: Clean yum cache
   command: 'yum --installroot="{{ temp_root}}" -y clean all'

 - name: Remove unneeded directories
   file:
     path: "{{temp_root}}/{{item}}"
     state: absent
   with_items:
     - usr/share/cracklib
     - var/cache/yum
     - sbin/sln
     - etc/ld.so.cache
     - var/cache/ldconfig
     - usr/share/backgrounds

 - name: Create needed directories
   file:
     path: "{{temp_root}}/{{item}}"
     state: directory
   with_items:
     - var/cache/yum
     - var/cache/ldconfig

-- name: Download filebeat
-  get_url:
-    url: "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-{{ FILEBEAT_VERSION }}-linux-x86_64.tar.gz"
-    dest: "{{ temp_root}}/opt/filebeat.tar.gz"
-    mode: '0640'
-
-- name: Unarchive filebeat
-  unarchive:
-    src: "{{ temp_root}}/opt/filebeat.tar.gz"
-    dest: "{{ temp_root}}/opt/"
-    remote_src: yes
-
-- name: Delete filebeat archive
-  file:
-    path: "{{ item }}"
-    state: absent
-  with_items:
-    - "{{ temp_root}}/opt/filebeat.tar.gz"
-
-- name: move filebeat directory to /opt/filebeat
-  command: "mv {{ temp_root}}/opt/filebeat-{{ FILEBEAT_VERSION }}-linux-x86_64 {{ temp_root}}/opt/filebeat"
-
 - name: Import image in docker
   shell: tar --numeric-owner -c -C {{temp_root }} . | docker import - {{repo}}/centos:{{version}}{{suffix}}

 - name: Remove temp directory
   file:
     path: "{{temp_root}}"
     state: absent
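The `mknod` task above populates a minimal /dev tree inside the image build root. Character devices (type `c`) need root and specific major/minor numbers, but the FIFO entry (`/dev/initctl`, type `p`, empty major/minor) can be reproduced unprivileged; the sketch below shows just that one case, with `build_root` standing in for `{{temp_root}}`:

```shell
# Unprivileged illustration of the dev-tree task: create the FIFO node
# /dev/initctl with mode 600, as the Ansible loop does for type p.
build_root=$(mktemp -d)            # stand-in for {{temp_root}}
mkdir -p "$build_root/dev"
mknod -m 600 "$build_root/dev/initctl" p
```

The character-device entries (`console`, `null`, `urandom`, …) follow the same pattern but require root inside the build environment.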
@@ -2,45 +2,19 @@
 - assert:
     that:
-      - "'CHANGE_ME' not in soctoolsproxy"
+      - "'CHANGE_ME' not in dslproxy"
     fail_msg: "Review *all* settings in group_vars/all/main.yml"

 - include: centos.yml
-
-- name: Create main build dir
-  file:
-    path: "{{docker_build_dir}}"
-    state: directory
-
-- name: Create build dir
-  file:
-    path: "{{docker_build_dir}}/{{item}}"
-    state: directory
-  with_items: "{{services}}"
-
-- name: Configure the Dockerfile
-  template:
-    src: "{{item}}/Dockerfile.j2"
-    dest: "{{docker_build_dir}}/{{item}}/Dockerfile"
-  with_items: "{{services}}"
-
-- name: Copy thehive_button to build path
-  copy:
-    src: "{{role_path}}/templates/odfekibana/thehive_button"
-    dest: "{{docker_build_dir}}/odfekibana/"
-
-- name: Copy keycloak-tools to build path
-  copy:
-    src: "{{role_path}}/templates/keycloak/keycloak-tools"
-    dest: "{{docker_build_dir}}/keycloak/"
-
-- name: Copy build files
-  copy:
-    src: "files/{{item}}/"
-    dest: "{{docker_build_dir}}/{{item}}/"
-  with_items: "{{services}}"
-  ignore_errors: yes
-
-- name: Build image
-  command: docker build -t {{repo}}/{{item}}:{{version}}{{suffix}} -f {{docker_build_dir}}/{{item}}/Dockerfile {{docker_build_dir}}/{{item}}
-  with_items: "{{services}}"
+- include: mysql.yml
+- include: haproxy.yml
+- include: openjdk.yml
+- include: zookeeper.yml
+- include: nifi.yml
+- include: odfees.yml
+- include: odfekibana.yml
+- include: keycloak.yml
+- include: misp.yml
+- include: cassandra.yml
+- include: thehive.yml
+- include: cortex.yml
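Both the removed loop and the per-service includes tag images with the `{{repo}}/{{item}}:{{version}}{{suffix}}` pattern. Expanded with the values from group_vars/all/main.yml (`repo: gn43-dsl`, `version: 7`, `suffix: a20201004`), the tag for a given service looks like this; `item=mysql` is just one example entry:

```shell
# Expansion of the image reference used by the docker build command above,
# with the values set in group_vars/all/main.yml:
repo=gn43-dsl
version=7
suffix=a20201004
item=mysql                          # any service name, e.g. mysql, haproxy, nifi
img="${repo}/${item}:${version}${suffix}"
```

So a rebuilt MySQL image ends up tagged `gn43-dsl/mysql:7a20201004`, and bumping `suffix` versions every image at once.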
 FROM {{repo}}/openjdk:{{version}}{{suffix}}
 USER root
+#COPY cassandra.repo /etc/yum.repos.d/cassandra.repo
+#COPY supervisord.conf /etc/supervisord.conf
+#COPY start.sh /start.sh
 RUN echo "[cassandra]" > /etc/yum.repos.d/cassandra.repo && \
     echo "name=Apache Cassandra" >> /etc/yum.repos.d/cassandra.repo && \
     echo "baseurl=https://downloads.apache.org/cassandra/redhat/311x/" >> /etc/yum.repos.d/cassandra.repo && \
     echo "gpgcheck=1" >> /etc/yum.repos.d/cassandra.repo && \
     echo "repo_gpgcheck=1" >> /etc/yum.repos.d/cassandra.repo && \
     echo "gpgkey=https://downloads.apache.org/cassandra/KEYS" >> /etc/yum.repos.d/cassandra.repo && \
+    echo '#!/bin/bash' > /start.sh && \
+    echo 'export CASSANDRA_HOME=/usr/share/cassandra' >> /start.sh && \
+    echo 'export CASSANDRA_CONF=$CASSANDRA_HOME/conf' >> /start.sh && \
+    echo 'export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh' >> /start.sh && \
+    echo 'log_file=/var/log/cassandra/cassandra.log' >> /start.sh && \
+    echo 'pid_file=/var/run/cassandra/cassandra.pid' >> /start.sh && \
+    echo 'lock_file=/var/lock/subsys/cassandra' >> /start.sh && \
+    echo 'CASSANDRA_PROG=/usr/sbin/cassandra' >> /start.sh && \
+    echo '' >> /start.sh && \
+    echo '$CASSANDRA_PROG -p $pid_file > $log_file 2>&1' >> /start.sh && \
     yum install -y epel-release && \
-    yum install -y cassandra supervisor rsync && \
+    yum install -y cassandra supervisor && \
     mkdir /usr/share/cassandra/conf && \
     cp -a /etc/cassandra/conf/* /usr/share/cassandra/conf && \
     chown -R cassandra:cassandra /usr/share/cassandra && \
     chown -R cassandra:cassandra /var/lib/cassandra && \
     sed -i -e 's,/etc/cassandra,/usr/share/cassandra,g' /usr/share/cassandra/cassandra.in.sh && \
+    chmod a+x /start.sh && \
     yum -y clean all
-COPY cassandrasupervisord.conf /etc/supervisord.conf

 EXPOSE 7000 9042
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+#ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+USER cassandra
+# ENTRYPOINT ["/start.sh"]
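The `RUN` layer above assembles `/start.sh` one `echo` at a time, which is hard to read in Dockerfile form. Written out as a file (and made executable, mirroring the `chmod a+x /start.sh` step), the generated script is:

```shell
# Reconstruct the /start.sh that the Dockerfile's echo chain produces.
start_sh=$(mktemp)
cat > "$start_sh" <<'EOF'
#!/bin/bash
export CASSANDRA_HOME=/usr/share/cassandra
export CASSANDRA_CONF=$CASSANDRA_HOME/conf
export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
log_file=/var/log/cassandra/cassandra.log
pid_file=/var/run/cassandra/cassandra.pid
lock_file=/var/lock/subsys/cassandra
CASSANDRA_PROG=/usr/sbin/cassandra

$CASSANDRA_PROG -p $pid_file > $log_file 2>&1
EOF
chmod a+x "$start_sh"   # mirrors the `chmod a+x /start.sh` build step
```

In other words: export the Cassandra environment, then run the daemon in the foreground with its pid file, redirecting all output to the log, which is what the commented `ENTRYPOINT ["/start.sh"]` would execute.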
 FROM {{repo}}/openjdk:{{version}}{{suffix}}
 USER root
+#COPY thehive.repo /etc/yum.repos.d/thehive.repo
+#COPY supervisord.conf /etc/supervisord.conf
+#COPY start.sh /start.sh
 RUN echo "[thehive-project]" > /etc/yum.repos.d/thehive.repo && \
     echo "enabled=1" >> /etc/yum.repos.d/thehive.repo && \
     echo "priority=1" >> /etc/yum.repos.d/thehive.repo && \
@@ -10,7 +13,7 @@ RUN echo "[thehive-project]" > /etc/yum.repos.d/thehive.repo && \
     yum install -y epel-release && \
     rpm --import https://raw.githubusercontent.com/TheHive-Project/TheHive/master/PGP-PUBLIC-KEY && \
     rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch && \
-    yum install -y cortex supervisor rsync daemonize vim net-tools telnet htop python3-pip.noarch git gcc python3-devel.x86_64 ssdeep-devel.x86_64 python3-wheel.noarch libexif-devel.x86_64 libexif.x86_64 perl-Image-ExifTool.noarch gcc-c++ whois && \
+    yum install -y cortex supervisor daemonize vim net-tools telnet htop python3-pip.noarch git gcc python3-devel.x86_64 ssdeep-devel.x86_64 python3-wheel.noarch libexif-devel.x86_64 libexif.x86_64 perl-Image-ExifTool.noarch gcc-c++ whois && \
     rpm -Uvh https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-6.8.13.rpm && \
     chown -R elasticsearch:elasticsearch /etc/elasticsearch && \
     mkdir -p /home/cortex && \
@@ -24,5 +27,6 @@ RUN echo "[thehive-project]" > /etc/yum.repos.d/thehive.repo && \
     for I in responders/*/requirements.txt; do LC_ALL=en_US.UTF-8 pip3 install --no-cache-dir -U -r $I || true; done && \
     yum -y clean all
 EXPOSE 9001
-COPY cortexsupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+#ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+USER cortex
+# ENTRYPOINT ["/start.sh"]
@@ -17,7 +17,7 @@ search {
   index = cortex3
   # ElasticSearch instance address.
   # For cluster, join address:port with ',': "http://ip1:9200,ip2:9200,ip3:9200"
-  uri = "http://soctools-elastic:9200"
+  uri = "http://dsoclab-elastic:9200"

   ## Advanced configuration
   # Scroll keepalive.
@@ -24,8 +24,6 @@ RUN \
     iptables \
     pcre2-devel \
     daemonize \
-    supervisor \
-    rsync \
     pth-devel && \
     `# Install newest openssl...` \
     wget -O /tmp/openssl.tgz https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz && \
@@ -64,5 +62,10 @@ RUN \
     && cp -R /usr/src/haproxy/examples/errorfiles /usr/local/etc/haproxy/errors \
     && rm -rf /usr/src/haproxy

-COPY haproxysupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+ENTRYPOINT ["/bin/bash"]
+
+# https://www.haproxy.org/download/1.8/doc/management.txt
+# "4. Stopping and restarting HAProxy"
+# "when the SIGTERM signal is sent to the haproxy process, it immediately quits and all established connections are closed"
+# "graceful stop is triggered when the SIGUSR1 signal is sent to the haproxy process"
+STOPSIGNAL SIGUSR1
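The quoted management-guide lines are the rationale for `STOPSIGNAL SIGUSR1`: with it, `docker stop` sends SIGUSR1 instead of SIGTERM, so HAProxy drains established connections rather than cutting them. The toy sketch below (all names illustrative, not HAProxy itself) shows the same pattern, a process that traps USR1 to shut down cleanly:

```shell
# A stand-in "worker" that, like haproxy, performs a graceful stop on USR1.
flag=$(mktemp)
( trap "echo graceful > $flag; exit 0" USR1   # cleanup hook, then clean exit
  while :; do sleep 0.2; done ) &
worker=$!
sleep 0.5                        # give the background shell time to install the trap
kill -USR1 "$worker"             # what `docker stop` sends under STOPSIGNAL SIGUSR1
wait "$worker"                   # returns 0: the worker exited on its own terms
```

Without the trap (the SIGTERM analogy), the process would simply be killed with whatever work it had in flight.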
@@ -11,7 +11,7 @@ USER root
 #ADD /{{role_path}}/templates/keycloak/keycloak-tools /opt/jboss/tools
 ADD keycloak-tools /opt/jboss/tools
 #ADD ../templates/keycloak/keycloak-tools /opt/jboss/tools
-RUN yum -y install openssl supervisor rsync && yum -y clean all && \
+RUN yum -y install openssl && yum -y clean all && \
     mkdir -p /opt/jboss/ && cd /opt/jboss/ && \
     curl -L $KEYCLOAK_DIST | tar zx && \
     mv /opt/jboss/keycloak-* /opt/jboss/keycloak && \
@@ -27,7 +27,6 @@ RUN yum -y install openssl && yum -y clean all && \
     adduser -u 1000 -g 0 -d /opt/jboss jboss && \
     chown -R jboss:root /opt/jboss && \
     chmod -R g+rwX /opt/jboss && \
-    chmod a+x /opt/jboss/tools/x509.sh && \
     mkdir -p /etc/x509/{https,ca} && chown -R jboss:root /etc/x509/{https,ca}

 ENV PATH="/opt/jboss/keycloak/bin:${PATH}"
@@ -37,8 +36,6 @@ WORKDIR /opt/jboss/keycloak
 EXPOSE 8080
 EXPOSE 8443

-RUN echo 'jboss ALL=(ALL:ALL) NOPASSWD: ALL' >> /etc/sudoers
-COPY keycloaksupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+USER jboss
+ENTRYPOINT ["/bin/bash"]
@@ -2,7 +2,7 @@ FROM {{repo}}/centos:{{version}}{{suffix}}
 USER root

 RUN yum install -y epel-release centos-release-scl scl-utils ; \
-    yum install -y gcc git zip openssl supervisor rsync rh-git218 httpd24 mod_ssl mod_auth_openidc rh-redis32 libxslt-devel zlib-devel libcaca-devel ssdeep-devel rh-php72 rh-php72-php-fpm rh-php72-php-devel rh-php72-php-mysqlnd rh-php72-php-mbstring rh-php72-php-xml rh-php72-php-bcmath rh-php72-php-opcache rh-php72-php-gd mariadb devtoolset-7 make cmake3 cppcheck libcxx-devel gpgme-devel openjpeg-devel gcc gcc-c++ poppler-cpp-devel pkgconfig python-devel redhat-rpm-config rubygem-rouge rubygem-asciidoctor zbar-devel opencv-devel wget screen rh-python36-mod_wsgi postfix curl make cmake python3 python3-devel python3-pip python3-yara python3-wheel python3-redis python3-zmq python3-setuptools redis sudo vim zip sqlite moreutils rng-tools libxml2-devel libxslt-devel zlib-devel libpqxx openjpeg2-devel ssdeep-devel ruby asciidoctor tesseract ImageMagick poppler-cpp-devel python36-virtualenv opencv-devel zbar zbar-devel ; \
+    yum install -y gcc git zip openssl supervisor rh-git218 httpd24 mod_ssl mod_auth_openidc rh-redis32 libxslt-devel zlib-devel libcaca-devel ssdeep-devel rh-php72 rh-php72-php-fpm rh-php72-php-devel rh-php72-php-mysqlnd rh-php72-php-mbstring rh-php72-php-xml rh-php72-php-bcmath rh-php72-php-opcache rh-php72-php-gd mariadb devtoolset-7 make cmake3 cppcheck libcxx-devel gpgme-devel openjpeg-devel gcc gcc-c++ poppler-cpp-devel pkgconfig python-devel redhat-rpm-config rubygem-rouge rubygem-asciidoctor zbar-devel opencv-devel wget screen rh-python36-mod_wsgi postfix curl make cmake python3 python3-devel python3-pip python3-yara python3-wheel python3-redis python3-zmq python3-setuptools redis sudo vim zip sqlite moreutils rng-tools libxml2-devel libxslt-devel zlib-devel libpqxx openjpeg2-devel ssdeep-devel ruby asciidoctor tesseract ImageMagick poppler-cpp-devel python36-virtualenv opencv-devel zbar zbar-devel ; \
     yum -y clean all ; \
     sed -i "s/max_execution_time = 30/max_execution_time = 300/" /etc/opt/rh/rh-php72/php.ini ; \
     sed -i "s/memory_limit = 128M/memory_limit = 2048M/" /etc/opt/rh/rh-php72/php.ini ; \
@@ -76,12 +76,9 @@ RUN chown -R apache:apache /var/www/MISP ; \
     chmod -R g+ws /var/www/MISP/app/files ; \
     chmod -R g+ws /var/www/MISP/app/files/scripts/tmp

-COPY misp_rh-php72-php-fpm /etc/logrotate.d/rh-php72-php-fpm
-
 # 80/443 - MISP web server, 3306 - mysql, 6379 - redis, 6666 - MISP modules, 50000 - MISP ZeroMQ
 EXPOSE 80 443 6443 6379 6666 50000
-ENV PATH "$PATH:/opt/rh/rh-php72/root/bin/"
 COPY mispsupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+#ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
@@ -2,7 +2,7 @@ FROM {{repo}}/centos:{{version}}{{suffix}}
 USER root

 RUN yum -y update && yum install -y epel-release centos-release-scl scl-utils && \
-    yum install -y rh-mariadb103 python36-PyMySQL MySQL-python supervisor rsync && \
+    yum install -y rh-mariadb103 python36-PyMySQL MySQL-python supervisor && \
     /usr/bin/scl enable rh-mariadb103 -- /opt/rh/rh-mariadb103/root/usr/libexec/mysql-prepare-db-dir /var/opt/rh/rh-mariadb103/lib/mysql
 RUN yum clean all
@@ -44,8 +44,6 @@ RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -
     && chown -R nifi:nifi ${NIFI_BASE_DIR} \
     && yum -y install jq xmlstarlet procps-ng

-RUN echo 'nifi ALL=(ALL:ALL) NOPASSWD: ALL' >> /etc/sudoers
-
 USER nifi

 # Download, validate, and expand Apache NiFi Toolkit binary.
@@ -96,8 +94,4 @@ WORKDIR ${NIFI_HOME}
 # Also we need to use relative path, because the exec form does not invoke a command shell,
 # thus normal shell processing does not happen:
 # https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
-USER root
-RUN yum install -y supervisor rsync
-RUN yum clean all
-COPY nifisupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+ENTRYPOINT ["/bin/bash"]
 FROM {{repo}}/centos:{{version}}{{suffix}}
-RUN yum install -y supervisor rsync
-RUN yum clean all
 ENV PATH="/usr/share/kibana/bin:${PATH}"
 RUN groupadd -g 1000 kibana && \
@@ -15,9 +12,7 @@ RUN rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch && \
     cp -a /etc/kibana/ /usr/share/kibana/config/ && \
     chown -R kibana /usr/share/kibana/config/

-RUN echo 'kibana ALL=(ALL:ALL) NOPASSWD: ALL' >> /etc/sudoers
 EXPOSE 5601
-COPY kibanasupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+USER kibana
+ENTRYPOINT ["/bin/bash"]
 FROM {{repo}}/openjdk:{{version}}{{suffix}}
 USER root
+#COPY thehive.repo /etc/yum.repos.d/thehive.repo
+#COPY supervisord.conf /etc/supervisord.conf
+#COPY start.sh /start.sh
 RUN echo "[thehive-project]" > /etc/yum.repos.d/thehive.repo && \
     echo "enabled=1" >> /etc/yum.repos.d/thehive.repo && \
     echo "priority=1" >> /etc/yum.repos.d/thehive.repo && \
@@ -9,12 +12,13 @@
     echo "gpgcheck=1" >> /etc/yum.repos.d/thehive.repo && \
     yum install -y epel-release && \
     rpm --import https://raw.githubusercontent.com/TheHive-Project/TheHive/master/PGP-PUBLIC-KEY && \
-    yum install -y thehive4 supervisor daemonize vim net-tools telnet htop rsync && \
+    yum install -y thehive4 supervisor daemonize vim net-tools telnet htop && \
     mkdir -p /opt/thp_data/files/thehive && \
     chown -R thehive:thehive /opt/thp_data/files/thehive && \
     mkdir -p /home/thehive && \
     chown -R thehive:thehive /home/thehive /etc/thehive && \
     yum -y clean all
 EXPOSE 9000
-COPY thehivesupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+#ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+USER thehive
+# ENTRYPOINT ["/start.sh"]
@@ -29,8 +29,6 @@
 EXPOSE 2181 2888 3888
 WORKDIR ${ZOOKEEPER_BASE_DIR}/zookeeper
-#ENTRYPOINT ["/opt/zookeeper/bin/zkServer.sh"]
-#CMD ["start-foreground"]
-RUN yum install supervisor rsync -y
-COPY zookeepersupervisord.conf /etc/supervisord.conf
-ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
+ENTRYPOINT ["/opt/zookeeper/bin/zkServer.sh"]
+CMD ["start-foreground"]
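The supervisord-based entrypoints removed above each COPY a per-service config file (nifisupervisord.conf, kibanasupervisord.conf, zookeepersupervisord.conf, and so on) that is not included in this diff. For orientation, a minimal file of that shape might look as follows; the program name and command line are illustrative assumptions, not the repository's actual content:

```ini
; Hypothetical nifisupervisord.conf -- illustrative sketch only, not a file from this repo.
[supervisord]
nodaemon=true                        ; keep supervisord in the foreground so the container stays up

[program:nifi]
command=/opt/nifi/bin/nifi.sh run    ; assumed service command
autorestart=true
redirect_stderr=true
stdout_logfile=/dev/stdout           ; forward service logs to the container's stdout
stdout_logfile_maxbytes=0
```

Running supervisord as PID 1 with `nodaemon=true` is the usual pattern when one container must manage a service plus helpers; the direct `ENTRYPOINT` used after this commit trades that flexibility for simplicity.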
 ---
-- name: Create secret directory
-  file:
-    path: "{{playbook_dir}}/{{item}}"
-    state: directory
-  loop:
-    - secrets
-    - secrets/certificates
-    - secrets/tokens
-    - secrets/passwords
 - name: Check for existing CA folder
   stat:
-    path: "{{playbook_dir}}/secrets/CA"
+    path: roles/ca/files/CA
   register: capath

 - name: build ca root key and cert
@@ -24,19 +14,27 @@
   environment:
     EASYRSA_BATCH: 1
     EASYRSA_REQ_CN: "{{ ca_cn }}"
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
   when: not capath.stat.exists

+- name: Copy cert to truststore
+  copy:
+    src: roles/ca/files/CA/ca.crt
+    dest: "roles/ca/files/truststore/{{ ca_cn }}.crt"
+
 - name: Remove previous truststore
   file:
-    path: '{{playbook_dir}}/secrets/CA/cacerts.jks'
+    path: roles/ca/files/truststore/cacerts.jks
     state: absent

 - name: Generate truststore
   command: >
-    docker run --rm -v {{playbook_dir}}/secrets/CA/:/opt/cafiles/:z
+    docker run --rm -v {{role_path}}/files/truststore/:/opt/cafiles/:z
     "{{repo}}/openjdk:{{version}}{{suffix}}" keytool -import -noprompt -trustcacerts
-    -alias "{{ ca_cn }}" -file "/opt/cafiles/ca.crt" -keystore /opt/cafiles/cacerts.jks -storepass "{{lookup('password', '{{playbook_dir}}/secrets/passwords/truststore')}}"
+    -alias "{{item}}" -file "/opt/cafiles/{{item}}.crt" -keystore /opt/cafiles/cacerts.jks -storepass "{{tspass}}"
+  with_items:
+    - "{{ ca_cn }}"
+    #- GN43WP8T31_CA
 - name: Check for existing host certificates
   command: roles/ca/files/easyrsa/easyrsa show-cert {{item}}
@@ -49,17 +47,16 @@
     - "{{ groups['thehive'] }}"
     - "{{ groups['cortex'] }}"
     - "{{ groups['haproxy'] }}"
-    - "filebeat"
   environment:
     EASYRSA_BATCH: 1
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
   register: hostcerts
   ignore_errors: true

 - name: Generate host certificates
   command: >
     roles/ca/files/easyrsa/easyrsa
-    --subject-alt-name="DNS:{{item}},DNS:{{soctoolsproxy}}"
+    --subject-alt-name="DNS:{{item}},DNS:{{dslproxy}}"
     build-serverClient-full {{item}} nopass
   with_items:
     - "{{ groups['nificontainers'] }}"
@@ -70,10 +67,9 @@
     - "{{ groups['thehive'] }}"
     - "{{ groups['cortex'] }}"
     - "{{ groups['haproxy'] }}"
-    - "filebeat"
   environment:
     EASYRSA_BATCH: 1
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
   ignore_errors: true
   loop_control:
     index_var: my_idx
@@ -97,7 +93,7 @@
   expect:
     command: roles/ca/files/easyrsa/easyrsa export-p12 {{item}}
     responses:
-      Enter Export Password: "{{lookup('password', '{{playbook_dir}}/secrets/passwords/keystore')}}"
+      Enter Export Password: "{{kspass}}"
   with_items:
     - "{{ groups['nificontainers'] }}"
     - "{{ groups['odfeescontainers'] }}"
@@ -108,7 +104,145 @@
     - "{{ groups['mispcontainers'] }}"
   environment:
     EASYRSA_BATCH: 1
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
+
+- name: Copy nifi host certs to nifi role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.p12
+    dest: roles/nifi/files/{{item}}.p12
+  with_items:
+    - "{{ groups['nificontainers'] }}"
+
+- name: Copy odfees host certs to odfees role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.p12
+    dest: roles/odfees/files/{{item}}.p12
+  with_items:
+    - "{{ groups['odfeescontainers'] }}"
+
+- name: Copy odfekibana host p12 certs to odfekibana role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.p12
+    dest: roles/odfekibana/files/{{item}}.p12
+  with_items:
+    - "{{ groups['odfekibanacontainers'] }}"
+
+- name: Copy cortex host p12 certs to cortex role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.p12
+    dest: roles/cortex/files/{{item}}.p12
+  with_items:
+    - "{{ groups['cortex'] }}"
+
+- name: Copy odfekibana host certs to odfekibana role
+  copy:
+    src: roles/ca/files/CA/issued/{{item}}.crt
+    dest: roles/odfekibana/files/{{item}}.crt
+  with_items:
+    - "{{ groups['odfekibanacontainers'] }}"
+
+- name: Copy odfekibana host keys to odfekibana role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.key
+    dest: roles/odfekibana/files/{{item}}.key
+  with_items:
+    - "{{ groups['odfekibanacontainers'] }}"
+
+- name: Copy haproxy host cert to haproxy role
+  copy:
+    src: roles/ca/files/CA/issued/{{item}}.crt
+    dest: roles/haproxy/files/{{item}}.crt
+  with_items:
+    - "{{ groups['haproxy'] }}"
+
+- name: Copy haproxy host key to haproxy role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.key
+    dest: roles/haproxy/files/{{item}}.key
+  with_items:
+    - "{{ groups['haproxy'] }}"
+
+- name: Copy keycloak host certs to keycloak role
+  copy:
+    src: roles/ca/files/CA/issued/{{item}}.crt
+    dest: roles/keycloak/files/{{item}}.crt
+  with_items:
+    - "{{ groups['keycloakcontainers'] }}"
+
+- name: Copy keycloak host keys to keycloak role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.key
+    dest: roles/keycloak/files/{{item}}.key
+  with_items:
+    - "{{ groups['keycloakcontainers'] }}"
+
+- name: Copy misp host certs to misp role
+  copy:
+    src: roles/ca/files/CA/issued/{{item}}.crt
+    dest: roles/misp/files/{{item}}.crt
+  with_items:
+    - "{{ groups['mispcontainers'] }}"
+
+- name: Copy misp host keys to misp role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.key
+    dest: roles/misp/files/{{item}}.key
+  with_items:
+    - "{{ groups['mispcontainers'] }}"
+
+- name: Copy thehive host cert to thehive role
+  copy:
+    src: roles/ca/files/CA/issued/{{item}}.crt
+    dest: roles/thehive/files/{{item}}.crt
+  with_items:
+    - "{{ groups['thehive'] }}"
+
+- name: Copy thehive host key to thehive role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.key
+    dest: roles/thehive/files/{{item}}.key
+  with_items:
+    - "{{ groups['thehive'] }}"
+
+- name: Copy cortex host cert to cortex role
+  copy:
+    src: roles/ca/files/CA/issued/{{item}}.crt
+    dest: roles/cortex/files/{{item}}.crt
+  with_items:
+    - "{{ groups['cortex'] }}"
+
+- name: Copy cortex host key to cortex role
+  copy:
+    src: roles/ca/files/CA/private/{{item}}.key
+    dest: roles/cortex/files/{{item}}.key
+  with_items:
+    - "{{ groups['cortex'] }}"
+
+- name: Copy truststore to roles
+  copy:
+    src: roles/ca/files/truststore/cacerts.jks
+    dest: "roles/{{item}}/files/cacerts.jks"
+  with_items:
+    - nifi
+    - odfees
+    - odfekibana
+    - keycloak
+    - misp
+    - cortex
+    - thehive
+
+- name: Copy ca cert to roles
+  copy:
+    src: "roles/ca/files/truststore/{{ ca_cn }}.crt"
+    dest: "roles/{{item}}/files/{{ ca_cn }}.crt"
+  with_items:
+    - nifi
+    - odfees
+    - odfekibana
+    - keycloak
+    - misp
+    - thehive
+    - cortex
 - name: Check for existing user certificates
   command: roles/ca/files/easyrsa/easyrsa show-cert {{item.CN | regex_escape()}}
@@ -116,7 +250,7 @@
   with_items:
     - "{{soctools_users}}"
   environment:
     EASYRSA_BATCH: 1
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
   register: usercerts
   ignore_errors: true
@@ -126,7 +260,7 @@
   with_items:
     - "{{soctools_users}}"
   environment:
     EASYRSA_BATCH: 1
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
   ignore_errors: true
   loop_control:
     index_var: my_idx
@@ -136,17 +270,24 @@
   expect:
     command: roles/ca/files/easyrsa/easyrsa export-p12 "{{item.CN}}"
     responses:
-      Enter Export Password: "{{lookup('password', '{{playbook_dir}}/secrets/passwords/{{item.CN}}')}}"
+      Enter Export Password: "{{item.password}}"
   with_items:
     - "{{soctools_users}}"
   environment:
     EASYRSA_BATCH: 1
-    EASYRSA_PKI: "{{playbook_dir}}/secrets/CA"
+    EASYRSA_PKI: roles/ca/files/CA
+
+- name: Copy user certs to odfees
+  copy:
+    src: "roles/ca/files/CA/private/{{ item.CN }}.p12"
+    dest: "roles/odfees/files/{{ item.CN }}.p12"
+  with_items:
+    - "{{soctools_users}}"

-- name: Copy user certs to certificates
-  copy:
-    src: "{{playbook_dir}}/secrets/CA/private/{{ item.CN }}.p12"
-    dest: "{{playbook_dir}}/secrets/certificates/{{ item.CN }}.p12"
-  with_items:
-    - "{{soctools_users}}"
+- name: Copy user certs to odfekibana
+  copy:
+    src: "roles/ca/files/CA/private/{{ item.CN }}.p12"
+    dest: "roles/odfekibana/files/{{ item.CN }}.p12"
+  with_items:
+    - "{{soctools_users}}"
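The removed tasks above built export passwords with Ansible's `password` lookup. That lookup generates a random password and writes it to the named file on first use, then reuses the stored value on subsequent runs, which keeps keystore passwords stable across replays without committing them to the repository. A hypothetical standalone task showing the behaviour (the file path is illustrative, not one from this repo):

```yaml
# Illustrative only: demonstrates the ansible.builtin.password lookup, not a repo task.
# First run: creates secrets/passwords/example_secret with a random password.
# Later runs: read the same file, so the value never changes between plays.
- name: Show a generated secret
  debug:
    msg: "{{ lookup('password', playbook_dir + '/secrets/passwords/example_secret') }}"
```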
 ---
-- include: start.yml
-  tags:
-    - start
-- include: stop.yml
-  tags:
-    - stop
-    - stop-cassandra
-- include: update-config.yml
-  tags:
-    - update-config
-    - update-cassandra-config
-- include: restart.yml
-  tags:
-    - restart
-    - restart-cassandra
+- name: Configure Cassandra
+  template:
+    src: cassandra.yaml.j2
+    dest: /usr/share/cassandra/conf/cassandra.yaml
+  tags:
+    - start
+
+- name: Start Cassandra
+  command: "/start.sh"
+  tags:
+    - start
+
+- name: Wait for Cassandra
+  wait_for:
+    host: "{{groups['cassandra'][0]}}"
+    port: 9042
+    state: started
+    delay: 5
+  tags:
+    - start
+
+- name: Stop Cassandra
+  command: "pkill -SIGTERM -F /var/run/cassandra/cassandra.pid"
+  tags:
+    - stop
 ---
-- include: start.yml
-  tags:
-    - start
-- include: stop.yml
-  tags:
-    - stop
-    - stop-cortex
-- include: update-config.yml
-  tags:
-    - update-config
-    - update-cortex-config
-- include: restart.yml
-  tags:
-    - restart
-    - restart-cortex
+- name: Copy cacert to ca-trust dir
+  remote_user: root
+  copy:
+    src: "files/{{ca_cn}}.crt"
+    dest: /etc/pki/ca-trust/source/anchors/ca.crt
+  tags:
+    - start
+    - startcortex
+
+- name: Install cacert to root truststore
+  remote_user: root
+  command: "update-ca-trust"
+  tags:
+    - start
+    - startcortex
+
+- name: Copy certificates in cortex conf dir
+  copy:
+    src: "{{ item }}"
+    dest: "/etc/cortex/{{ item }}"
+    mode: 0600
+  with_items:
+    - "{{ inventory_hostname }}.p12"
+    - "{{ inventory_hostname }}.crt"
+    - "{{ inventory_hostname }}.key"
+    - cacerts.jks
+    - "{{ca_cn}}.crt"
+  tags:
+    - start
+    - startcortex
+
+- name: Get openid authkey
+  set_fact:
+    cortexsecret: "{{lookup('file', 'files/cortexsecret',convert_data=False) | from_json }}"
+  tags:
+    - start
+
+- name: Configure embedded Elasticsearch 6
+  remote_user: root
+  template:
+    src: jvm.options.j2
+    dest: /etc/elasticsearch/jvm.options
+  tags:
+    - start
+    - startcortex
+
+- name: Start embedded Elasticsearch 6
+  remote_user: root
+  command: >
+    daemonize
+    -u elasticsearch
+    -c /usr/share/elasticsearch
+    -p /tmp/elasticsearch.pid
+    -o /tmp/elasticsearch-stdout.log
+    /usr/share/elasticsearch/bin/elasticsearch
+  tags:
+    - start
+    - startcortex
+
+- name: Configure Cortex
+  template:
+    src: application.conf.j2
+    dest: /etc/cortex/application.conf
+  tags:
+    - start
+    - startcortex
+
+- name: Configure Cortex logging
+  copy:
+    src: logback.xml
+    dest: /etc/cortex/logback.xml
+  tags:
+    - start
+
+- name: Start Cortex
+  command: >
+    daemonize
+    -c /opt/cortex
+    -p /tmp/cortex.pid
+    -o /tmp/cortex-stdout.log
+    /opt/cortex/bin/cortex
+    -Dconfig.file=/etc/cortex/application.conf
+    -Dlogger.file=/etc/cortex/logback.xml
+    -J-Xms1g
+    -J-Xmx1g
+    -Dpidfile.path=/dev/null
+  tags:
+    - start
+    - startcortex
+
+- name: Wait for Cortex
+  wait_for:
+    host: "{{groups['cortex'][0]}}"
+    port: 9001
+    state: started
+    delay: 5
+  tags:
+    - start
+    - startcortex
+
+- name: Stop Cortex
+  command: "pkill -SIGTERM -F /tmp/cortex.pid"
+  tags:
+    - stop
+    - stopcortex
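The `Get openid authkey` task above parses `files/cortexsecret` as JSON, and the Cortex oauth2 configuration reads `cortexsecret.value`, so the file is expected to hold the Keycloak client secret roughly in this shape (the secret is a placeholder, and only the `value` key is actually referenced; the `type` key mirrors what Keycloak's client-secret endpoint returns and is an assumption here):

```json
{
  "type": "secret",
  "value": "00000000-0000-0000-0000-000000000000"
}
```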
@@ -6,7 +6,7 @@
 #
 # IMPORTANT: If you deploy your application to several instances, make
 # sure to use the same key.
-play.http.secret.key="{{lookup('password', '{{playbook_dir}}/secrets/passwords/cortex_secret_key')}}"
+play.http.secret.key="{{cortex_secret_key}}"

 ## ElasticSearch
 search {
@@ -34,18 +34,18 @@
 ## ## Authentication configuration
 ## search.username = "cortex"
-## search.password = "{{lookup('password', '{{playbook_dir}}/secrets/passwords/cortex_odfe')}}"
+## search.password = "{{cortex_odfe_pass}}"
 ##
 ## ## SSL configuration
 ## search.keyStore {
-##   path = "/etc/cortex/soctools-cortex.p12"
+##   path = "/etc/cortex/dsoclab-cortex.p12"
 ##   type = "PKCS12" # or PKCS12
-##   password = "{{lookup('password', '{{playbook_dir}}/secrets/passwords/keystore')}}"
+##   password = "{{kspass}}"
 ## }
 ## search.trustStore {
 ##   path = "/etc/cortex/cacerts.jks"
 ##   type = "JKS" # or PKCS12
-##   password = "{{lookup('password', '{{playbook_dir}}/secrets/passwords/truststore')}}"
+##   password = "{{tspass}}"
 ## }
 }
@@ -66,7 +66,7 @@ auth {
   #   the "ad" section below.
   # - ldap : use LDAP to authenticate users. The associated configuration shall be done in the
   #   "ldap" section below.
-  provider = [local]
+  provider = [local,oauth2]

 ad {
   # The Windows domain name in DNS format. This parameter is required if you do not use
@@ -108,6 +108,84 @@
   # If 'true', use SSL to connect to the LDAP directory server.
   #useSSL = true
 }
+  oauth2 {
+    # URL of the authorization server
+    clientId = "dsoclab-cortex"
+    clientSecret = {{cortexsecret.value}}
+    redirectUri = "https://{{dslproxy}}:9001/api/ssoLogin"
+    responseType = "code"
+    grantType = "authorization_code"
+    # URL from where to get the access token
+    authorizationUrl = "https://{{dslproxy}}:12443/auth/realms/{{openid_realm}}/protocol/openid-connect/auth"
+    authorizationHeader = "Bearer"
+    tokenUrl = "https://{{dslproxy}}:12443/auth/realms/{{openid_realm}}/protocol/openid-connect/token"
+    # The endpoint from which to obtain user details using the OAuth token, after successful login
+    userUrl = "https://{{dslproxy}}:12443/auth/realms/{{openid_realm}}/protocol/openid-connect/userinfo"
+    scope = "profile"
+    userIdField = "email"
+    #userUrl = "https://auth-site.com/api/User"
+    #scope = ["openid profile"]
+  }
+
+  ws.ssl.trustManager {
+    stores = [
+      {
+        type = "JKS" // JKS or PEM
+        path = "cacerts.jks"
+        password = "{{tspass}}"
+      }
+    ]
+  }
+
+  # Single-Sign On
+  sso {
+    # Autocreate user in database?
+    autocreate = true
+    # Autoupdate its profile and roles?
+    autoupdate = true
+    # Autologin user using SSO?
+    autologin = true
+    # Name of mapping class from user resource to backend user ('simple' or 'group')
+    #mapper = group
+    #mapper = simple
+    #attributes {
+    #  login = "user"
+    #  name = "name"
+    #  groups = "groups"
+    #  organization = "org"
+    #}
+    # defaultRoles = ["read", "write", "admin"]
+    # defaultOrganization = "uninett.no"
+    #defaultRoles = ["read"]
+    #defaultOrganization = "csirt"
+    #groups {
+    #  # URL to retrieve groups (leave empty if you are using OIDC)
+    #  #url = "https://auth-site.com/api/Groups"
+    #  # Group mappings, you can have multiple roles for each group: they are merged
+    #  mappings {
+    #    admin-profile-name = ["admin"]
+    #    editor-profile-name = ["write"]
+    #    reader-profile-name = ["read"]
+    #  }
+    #}
+    mapper = simple
+    attributes {
+      login = "user"
+      name = "name"
+      roles = "roles"
+      organization = "org"
+    }
+    defaultRoles = ["read", "analyze"]
+    defaultOrganization = "uninett.no"
+  }
 }

 ## ANALYZERS
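The oauth2 section added above implies a matching confidential client registered in the Keycloak realm. A sketch of that client in Keycloak's JSON export format might look as follows; the host, realm, and secret are placeholders, and only the fields mirrored by the Cortex config are shown, so treat this as an assumption rather than the project's actual client definition:

```json
{
  "clientId": "dsoclab-cortex",
  "protocol": "openid-connect",
  "publicClient": false,
  "standardFlowEnabled": true,
  "redirectUris": ["https://<soctools-host>:9001/api/ssoLogin"],
  "secret": "<client-secret-placeholder>"
}
```

`publicClient: false` plus `standardFlowEnabled: true` corresponds to the confidential authorization-code flow (`responseType = "code"`, `grantType = "authorization_code"`) that the Cortex configuration requests.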