This tutorial will assume that nmaas is installed in a virtual machine that is c…
## Virtual Machine Prerequisites
- Debian 12 or Ubuntu >= 22.04
- 12GB+ RAM
- 2+ VCPUs
- 60GB+ storage space
## Virtual Machine Setup
Although we will focus on VirtualBox, any virtualization software can be used, depending on the user's preference. VirtualBox 7 is open-source virtualization software that can be downloaded for free from the [official website](https://www.virtualbox.org/wiki/Downloads).
After installation, additional network configuration needs to be done before a Kubernetes cluster can be set up. The following network configuration will make the nmaas deployment accessible by any host in the same local area network (bridged-mode). nmaas can be isolated from the local network by altering the network strategy and using NAT, host-only network adapaters or a combination of the two. Such customization is beyond the scope of this tutorial.
### Creating the Virtual Machine in VirtualBox
Create a regular virtual machine in VirtualBox, using the latest Debian 12 or Ubuntu 22.04 ISOs. Either the [desktop](https://releases.ubuntu.com/22.04/ubuntu-22.04.4-desktop-amd64.iso) or the [server](https://releases.ubuntu.com/22.04/ubuntu-22.04.4-live-server-amd64.iso) edition can be used. To conserve resources, it is recommended to use the server edition of Ubuntu. The following parameters need to be altered:
- Choose `Skip unattended installation` if you want to manually control the deployment process, similar to the default behavior in VirtualBox versions prior to 7.
- Allocate sufficient memory to the virtual machine. 12GB is the minimum amount that will support a complete nmaas installation, along with the possibility of deploying additional applications via the catalog.
- Allocate a sufficient number of CPU cores, depending on the performance of your system.
- After the VM has been created, using the `Settings` option, adjust the following parameters:
    - In the `Network` configuration tab make sure to choose the `Bridged` adapter type.
    - If a Desktop version of Ubuntu is being installed, make sure to enable 3D acceleration in the `Display` tab.
### Configuring the Guest Operating System
Once the guest operating system has been installed, it will automatically acquire an IP address from the local DHCP server.
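To check which address was assigned (it will be needed later, for example for the `--tls-san` flag of the K3s installer and for accessing nmaas from other machines), standard tools can be used; the interface name will vary between systems:

```bash
hostname -I       # all addresses assigned to the VM
ip -4 addr show   # per-interface details, including the DHCP lease
```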
//TODO: Explain hosts access method.
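The nmaas components configured later in this guide use the `nmaas.internal` domain, which is not resolvable by default. A simple way to make the deployment reachable from other machines is to add static entries to their hosts file, pointing the relevant names at the VM's DHCP-assigned address. In the sketch below `<VM_IP>` is a placeholder, and the hostnames are the ones used later in this guide:

```bash
# on the machine that will access nmaas (Linux/macOS); on Windows edit C:\Windows\System32\drivers\etc\hosts instead
echo "<VM_IP> nmaas.internal gitlab.nmaas.internal" | sudo tee -a /etc/hosts
```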
## Kubernetes Cluster Setup
In this section we discuss how to quickly get a Kubernetes cluster up and running using the lightweight K3s Kubernetes distribution.
### Kubernetes Deployment Using K3s
K3s is one of the many options to deploy a full-fledged Kubernetes cluster in a matter of minutes. K3s is more lightweight than other Kubernetes distributions since it does not ship with unnecessary modules and is packaged as a single binary. K3s offers seamless scalability across multiple nodes and provides the ability to either use an embedded database for storing the cluster state or a relational one, such as PostgreSQL or MySQL.
- K3s can be installed with the following command:
```bash
export INSTALL_K3S_VERSION=v1.29.7+k3s1
curl -sfL https://get.k3s.io | sh -s - server \
--tls-san 10.99.99.100 \
--tls-san nmaas.internal \
--disable=traefik \
--flannel-backend=none \
--disable-network-policy \
--cluster-cidr=10.136.0.0/16
```
- `--tls-san` – can be specified multiple times to add additional names for which the automatically generated Kubernetes API certificates will be valid. If your VM uses a static IP address, replace the example IP address with the actual IP address of your VM.
- `--disable=traefik` – Traefik needs to be explicitly disabled since it ships by default with new K3s installations. We will use ingress-nginx as our ingress controller and will install it manually in a later step.
- `--flannel-backend=none` – Flannel CNI needs to be explicitly disabled, since we will manually install Calico.
- `--disable-network-policy` – we do not need the default network policy addon that enables the use of Kubernetes NetworkPolicy objects, since Calico has built-in support for network policies.
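Right after the installation completes, it can be verified that the K3s service is running and that the node has registered; the node will remain `NotReady` until Calico is installed in the next step:

```bash
sudo systemctl status k3s --no-pager
sudo k3s kubectl get node
```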
- Calico can be installed manually by first deploying the Tigera operator and then downloading the accompanying custom resources manifest, in which the pod network CIDR is set to the value used when deploying K3s.
```bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
mkdir -p ~/nmaas-deployment/manifests/calico
curl -O --output-dir ~/nmaas-deployment/manifests/calico/ https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml
```
- Edit the downloaded `custom-resources.yaml` file (`~/nmaas-deployment/manifests/calico/custom-resources.yaml`) and change the `cidr` and `encapsulation` properties as below:
```yaml
...
      cidr: 10.136.0.0/16 # same range as the above K3s command
      encapsulation: VXLAN
...
```
```bash
kubectl create -f ~/nmaas-deployment/manifests/calico/custom-resources.yaml
```
- Once Calico has been installed, the node should transition to a `Ready` state.
```bash
kubectl get node
kubectl get storageclass
# NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
# local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  45h
```
##### Helm
To install Helm, we need to first download the latest binary for our architecture and extract it to a location which is in the `PATH` system variable.
```bash
cd $(mktemp -d)
wget https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
tar -xvzf helm-v3.15.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
sudo chmod +x /usr/local/bin/helm
```
- Test whether Helm has been successfully installed by executing `helm version`.
!!! warning
    For Helm to function properly, the kubeconfig file generated by K3s must be copied (or linked) to `~/.kube/config`. This can be done like so:
    ```bash
    mkdir -p ~/.kube
    ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
    ```
##### Ingress Nginx
The last application that needs to be installed before we can move on to installing the nmaas components is Ingress Nginx. Since we have already configured Helm, the Ingress Nginx installation is simple.
```yaml title="ingress-values.yaml"
defaultBackend:
  enabled: true
controller:
  hostPort:
    enabled: true
  config:
    log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":$status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }'
  kind: Deployment
  ingressClass: nginx
  scope:
    enabled: false
    namespace: default
  service:
    type: ClusterIP
  metrics:
    enabled: false
```
In our case we have opted to use a Deployment instead of a DaemonSet for the deployment strategy. Additionally, we have selected a service type of `ClusterIP` and enabled `hostPort`, so that the ingress controller is reachable on the VM's LAN IP address. In this way we avoid using LoadBalancer addons, simplifying the single-node nmaas deployment.
- Add the `ingress-nginx` Helm repository and install the application:
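A minimal sketch of this step is shown below; the release name `nmaas-ingress` is an assumption, while the `nmaas-system` namespace matches the one used by the verification commands that follow:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# release name "nmaas-ingress" is an assumption; adjust as preferred
helm install nmaas-ingress ingress-nginx/ingress-nginx \
  --namespace nmaas-system --create-namespace \
  -f ingress-values.yaml
```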
- We can test the installed ingress by visiting the VM's IP address directly in a browser or by using `curl` as shown below. We should be presented with a generic `404 Not Found` page served by the ingress controller.
```bash
kubectl get service -n nmaas-system
curl --insecure https://localhost
curl --insecure https://$VM_IP
```
Once a working Kubernetes cluster has been deployed, we are ready to proceed to the next step - installing nmaas.
All the necessary components will be installed in the `nmaas-system` namespace that was created in the [previous part](p1_local-kubernetes-cluster.md).
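If the namespace does not exist yet (for example, when starting from a fresh cluster), it can be created with:

```bash
kubectl get namespace nmaas-system || kubectl create namespace nmaas-system
```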
## GitLab Installation
The first nmaas dependency that we will set up is GitLab, a self-hosted, web-based Git repository hosting service. Many applications deployed by nmaas users store their configuration data in a Git repository, allowing easier editing and version management, thus following the GitOps approach.
GitLab has an official Helm chart, and we will use it to create a basic GitLab installation locally. Some parameters must be customized in the values.yaml file before deployment:
- `global.hosts.domain` – should be set to the domain that will be allocated to GitLab. Note that the final hostname where GitLab will be reachable will have a `gitlab` prepended to it. If `nmaas.example.local` is set as the `global.hosts.domain` parameter, then GitLab will be available on `gitlab.nmaas.example.local`.
- `global.hosts.ssh` – in order for users to be able to interact with their GitLab repositories via SSH, the value of `global.hosts.ssh` should be set to the IP address that will be assigned to the gitlab-shell LoadBalancer service. If the IP is not known at the time of deployment, then after the initial deployment, once the LoadBalancer service is created and the IP is allocated, a chart upgrade can be performed, where the `global.hosts.ssh` parameter will be set to the appropriate value.
- optionally, if an email server is available, the `global.smtp` section can be edited with the appropriate parameters so that outbound email is enabled.
```yaml title="gitlab-values.yaml"
gitlab:
  gitlab-shell:
    minReplicas: 1
    maxReplicas: 1
  webservice:
    deployments:
      default:
        ingress:
          path: /
    hpa:
      enabled: false
    minReplicas: 1
    maxReplicas: 1
  sidekiq:
    minReplicas: 1
    maxReplicas: 1
certmanager:
  install: false
nginx-ingress:
  enabled: false
prometheus:
  install: false
redis:
  install: true
registry:
  enabled: false
postgresql:
  install: true
  usePasswordFile: false
  existingSecret: 'gitlab-postgresql'
  metrics:
    enabled: false
gitlab-runner:
  install: false
gitlab-shell:
  service:
    type: LoadBalancer
global:
  kas:
    enabled: false
  edition: ce
  hosts:
    domain: nmaas.internal
    https: false
  ingress:
    enabled: true
    configureCertmanager: false
    tls:
      enabled: false
    path: /
    class: "nginx"
  initialRootPassword:
    ...
  time_zone: Europe/Warsaw
  smtp:
    enabled: false
```
GitLab requires the deployment of a PostgreSQL instance. The necessary secrets containing the PostgreSQL passwords need to be created, as well as the secret containing the initial root GitLab password:
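A minimal sketch of these commands is given below. The secret names (`gitlab-postgresql`, `gitlab-root-password`) match the ones referenced in this guide, while the key names are assumptions that may need to be adjusted to the GitLab chart version in use:

```bash
export NMAAS_NAMESPACE="nmaas-system"
# key names below are assumptions; check the GitLab chart documentation for your chart version
kubectl create secret generic -n $NMAAS_NAMESPACE gitlab-postgresql \
  --from-literal=postgresql-password=$(openssl rand -base64 24) \
  --from-literal=postgresql-postgres-password=$(openssl rand -base64 24)
kubectl create secret generic -n $NMAAS_NAMESPACE gitlab-root-password \
  --from-literal=password=$(openssl rand -base64 24)
```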
The root GitLab password will be used for login to the GitLab web interface.
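Assuming the password was generated randomly as in the sketch above (and stored under the `password` key), it can be read back with:

```bash
kubectl get secret -n nmaas-system gitlab-root-password -o jsonpath='{.data.password}' | base64 -d; echo
```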
We are ready to add the GitLab Helm repository and install the 8.5.x version of GitLab:
```bash
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm install -f gitlab-values.yaml --namespace nmaas-system nmaas-gitlab --version 8.5.0 gitlab/gitlab
```
Once GitLab has been deployed, it should be possible to navigate to the login page using a web browser. After logging in, users are advised to configure the following settings:
- `Sign-up enabled` should be unchecked
- `Require admin approval for new sign-ups` should be unchecked
- enable webhooks to local addresses (`Admin Area -> Settings -> Network -> Outbound requests`)
    - `Allow requests to the local network from web hooks and services` should be checked
    - `Allow requests to the local network from system hooks` should be checked
    - `Enforce DNS-rebinding attack protection` should be unchecked
The final step before installing nmaas itself is to generate a GitLab personal access token which will allow nmaas to connect to the GitLab API. This can be done from the User Profile page:
The final step is to install nmaas. nmaas uses SSH communication to connect betw…

```bash
export NMAAS_NAMESPACE="nmaas-system"
tmpdir=$(mktemp -d)
ssh-keygen -f $tmpdir/key -N ""
# nmaas-helm-key-private should be replaced with {{ .Values.global.helmAccessKeyPrivate }}
kubectl create secret generic nmaas-helm-key-private -n $NMAAS_NAMESPACE --from-file=id_rsa=$tmpdir/key
# nmaas-helm-key-public should be replaced with {{ .Values.global.helmAccessKeyPublic }}
kubectl create secret generic nmaas-helm-key-public -n $NMAAS_NAMESPACE --from-file=helm=$tmpdir/key.pub
```
```yaml title="nmaas-values.yaml"
global:
  acmeIssuer: false
  demoDeployment: true
  ingressName: nmaas
  nmaasDomain: nmaas.internal
  wildcardCertificateName: nmaas-internal-wildcard
  gitlabApiUrl: 'http://nmaas-gitlab-webservice-default:8181/api/v4'
  gitlabApiToken:
    literal: glpat-bSHxML48QNsZJE4CLHxc
platform:
  ingress:
    className: nginx
  adminPassword:
    literal: saamn
  apiSecret:
    literal: saamn
  initscripts:
    enabled: true
  properties:
    autoNamespaceCreationForDomains: true
    adminEmail: noreply@nmaas.internal
    appInstanceFailureEmailList: noreply@nmaas.internal
  sso:
    encrpytionSecret:
      literal: saamn
    enabled: false
  k8s:
    ingress:
      certificate:
        issuerOrWildcardName: nmaas-internal-wildcard
      controller:
        externalServiceDomain: nmaas.internal
        ingressClass: nginx
        publicIngresClass: nginx
        publicServiceDomain: nmaas.internal
portal:
  ingress:
    className: nginx
  properties:
    langingPageFlavor: VLAB
sp:
  enabled: false
postfix:
  image:
    repository: artifactory.software.geant.org/nmaas-docker-local/nmaas-postfix-smtp
    tag: 1.0.0
  properties:
    hostname: mailer.nmaas.internal
    smtp:
      fromAddress: noreply@nmaas.internal
      host:
        literal: localhost
      username:
        literal: smtpUsername
      password:
        literal: mysecret
      port: '1050'
```
Once the values.yaml file has been customized, nmaas can be deployed by executing:
```bash
helm repo add nmaas https://artifactory.software.geant.org/artifactory/nmaas-helm
helm install -f nmaas-values.yaml --namespace nmaas-system nmaas --version 1.2.14 nmaas/nmaas
```
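A simple way to follow the deployment progress is to watch the pods in the `nmaas-system` namespace; once they are all up, the portal should respond on the `nmaas.internal` domain configured above:

```bash
kubectl get pods -n nmaas-system --watch
# once all pods are Running or Completed, the portal should be reachable at https://nmaas.internal
# (use --insecure / accept the browser warning if the wildcard certificate is self-signed)
curl --insecure https://nmaas.internal
```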
The email configuration in the `postfix` section configures an invalid email server on purpose (`localhost:1050`), so as to prevent email sending. If available, users are advised to use their own SMTP credentials, so that email sending will be fully functional.

nmaas also requires the Stakater AutoReloader component, which can simply be installed using the commands below. This component takes care of restarting the affected pods whenever a configuration change is submitted via GitLab.
```bash
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
# the release name "reloader" and the nmaas-system namespace below are assumptions; adjust as needed
helm install reloader stakater/reloader --namespace nmaas-system
```