Commit a6fb3c78 authored by Vojdan Kjorveziroski

Add JRES 2024 content

Merge request !23: Resolve "Update the all-in-one VM guide"
# Deploying a Local Kubernetes Cluster

This tutorial will assume that nmaas is installed in a virtual machine that is completely isolated from any production environment. However, the discussed steps are applicable to bare-metal hardware as well, once the correct network strategy has been identified by the system administrator.

### Configuring the Guest Operating System

Once the guest operating system has been installed, it will automatically acquire an IP address from the local DHCP server.
## Kubernetes Cluster Setup

!!! Warning
For Helm to function properly, the kubeconfig file must be copied (or linked) to `~/.kube/config`. This can be done like so:

```bash
ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
```

##### Ingress Nginx
# Installing nmaas

Once a working Kubernetes cluster has been deployed, we are ready to proceed to the next step - installing nmaas.

All the necessary components will be installed in the `nmaas-system` namespace that was created in the [previous part](./deploying-local-kubernetes-cluster.md).

## GitLab Installation
This section contains the materials for tutorials and workshops where the nmaas Platform or one of its use-cases has been presented. Currently it includes the archived resources for the following events:

- [JRES 2022 - GÉANT Network Management as a Service tutorial](jres2022/introduction.md)
- [JRES 2024 - Orchestrated Deployment of Virtual Labs for Education](jres2024/introduction.md)

To fully leverage the content in these materials, access to an nmaas test instance is advised. This can be accomplished by:

- deploying a local Kubernetes cluster and then deploying an nmaas test instance in it.
- registering for an account on the [managed vNOC Playground instance](https://nmaas.geant.org) or the [managed vLAB instance](https://vlab.dev.nmaas.eu), depending on the use-case that you are interested in.
!!! note "Virtual Machine Download"
The virtual machine image can be downloaded from [https://drive1.demo.renater.fr/index.php/s/rp2awZ6sMnNFQwK](https://drive1.demo.renater.fr/index.php/s/rp2awZ6sMnNFQwK). Users are advised to follow [part 1](../deploying-local-kubernetes-cluster.md) in order to set up the required VirtualBox NAT network before importing.

| Name | Value |
|------|-------|
Network management is an essential part of any production network, no matter its size. However, organizations often face staff shortages or lack the required resources to properly monitor their network. nmaas (Network Management as a Service) is a GÉANT production service that allows effortless deployment of many open-source network monitoring tools on demand, with minimal initial configuration by the end users. Based on the Kubernetes container orchestrator, and deployable on private infrastructure as well, a dedicated nmaas instance can be used as a central point for monitoring many distributed networks, by utilizing VPN tunnels. New applications can be added to the nmaas catalogue at any time using Helm charts, the industry-standard packaging format for Kubernetes. nmaas hides the operational complexity from end users, who access the service through a web application from where they can manage and configure their existing application instances or deploy new ones.

Users can also evaluate nmaas on their own infrastructure by either following this tutorial or by simply [downloading the already prepared virtual machine](https://drive1.demo.renater.fr/index.php/s/rp2awZ6sMnNFQwK).

If you want to follow this tutorial, please make sure that you have either downloaded the pre-prepared VM or have followed the necessary steps for deploying a local Kubernetes cluster and installing an nmaas test instance. After completing these prerequisites, this tutorial continues with [setting up a demo network environment](./p3_demo-network-environment.md), where virtualized demo networking devices are used that can later act as monitoring targets for the applications deployed by nmaas. The process of deploying such monitoring applications from the list of supported applications in the nmaas catalog is described in the part on [monitoring the demo network environment](./p4_monitoring-demo-network-environment.md). The tutorial concludes with [instructions on adding a custom application](./p5_adding_custom_app.md), allowing advanced users to add their own applications to the nmaas catalog, thus making them available to all potential users of their nmaas instance.

For users who choose to download the already prepared virtual machine and avoid the whole setup process, the [Appendix](./appendix.md) gives an overview of all the credentials that have been used.
# Part 1: Deploying a Local Kubernetes Cluster
This tutorial will assume that nmaas is installed in a virtual machine that is completely isolated from any production environment. However, the discussed steps are applicable to bare-metal hardware as well, once the correct network strategy has been identified by the system administrator.
## Virtual Machine Prerequisites
- Debian 12 or Ubuntu >= 22.04
- 12GB+ RAM
- 2+ VCPUs
- 60GB+ storage space
## Virtual Machine Setup
Although we will focus on VirtualBox, any virtualization software can be used, depending on the user's preference. VirtualBox 7 is open-source virtualization software which can be downloaded for free from the [official website](https://www.virtualbox.org/wiki/Downloads).
After installation, additional network configuration needs to be done before a Kubernetes cluster can be set up. The following network configuration will make the nmaas deployment accessible by any host in the same local area network (bridged mode). nmaas can be isolated from the local network by altering the network strategy and using NAT, host-only network adapters or a combination of the two. Such customization is beyond the scope of this tutorial.
### Creating the Virtual Machine in VirtualBox
Create a regular virtual machine in VirtualBox, using the latest Debian 12 or Ubuntu 22.04 ISOs. Either the [desktop](https://releases.ubuntu.com/22.04/ubuntu-22.04.4-desktop-amd64.iso) or the [server](https://releases.ubuntu.com/22.04/ubuntu-22.04.4-live-server-amd64.iso) edition can be used. To conserve resources, it is recommended to use the server edition of Ubuntu. The following parameters need to be altered:
- Choose `Skip unattended installation` if you want to manually control the deployment process, similar to the default behavior in VirtualBox versions prior to 7.
- Allocate sufficient memory to the virtual machine. 12GB is the minimum amount which will support a complete nmaas installation, along with the possibility for deploying additional applications via the catalog.
- Allocate a sufficient number of CPU cores, depending on the performance of your system.
- After the VM has been created, using the `Settings` option, adjust the following parameters:
- In the `Network` configuration tab make sure to choose the `Bridged` adapter type.
- If a Desktop version of Ubuntu is being installed, make sure to enable 3D acceleration in the `Display` tab.
### Configuring the Guest Operating System
Once the guest operating system has been installed, it will automatically acquire an IP address from the local DHCP server.
To access the services deployed in later steps by name, the hostnames used throughout this tutorial (e.g. `nmaas.internal`) must resolve to the VM's IP address on every machine that should reach nmaas. The simplest approach is to add a static entry to the hosts file on the host machine.
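A minimal sketch of such a hosts entry, assuming the VM received `192.168.1.50` from DHCP (replace it with the address actually assigned to your VM) and the `nmaas.internal` domain used later in this tutorial:

```shell
# Example address; substitute the one shown by `ip -4 addr` inside the VM
VM_IP=192.168.1.50
# GitLab will later be served on a `gitlab.` subdomain, so map it as well
HOSTS_LINE="$VM_IP nmaas.internal gitlab.nmaas.internal"
# Append this line to the hosts file on the host machine (requires admin rights)
echo "$HOSTS_LINE"
```

On Linux or macOS hosts the line can be appended with `echo "$HOSTS_LINE" | sudo tee -a /etc/hosts`; on Windows the equivalent file is `C:\Windows\System32\drivers\etc\hosts`.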
## Kubernetes Cluster Setup
In this section we discuss how to quickly get a Kubernetes cluster up and running using the lightweight K3s Kubernetes distribution.
### Kubernetes Deployment Using K3s
K3s is one of the many options for deploying a full-fledged Kubernetes cluster in a matter of minutes. K3s is more lightweight than other Kubernetes distributions since it does not ship with unnecessary modules and is packaged as a single binary. K3s offers seamless scalability across multiple nodes and provides the ability to either use an embedded database for storing the cluster state or a relational one, such as PostgreSQL or MySQL.
- K3s can be installed with the following command:
```bash
export INSTALL_K3S_VERSION=v1.29.7+k3s1
curl -sfL https://get.k3s.io | sh -s - server \
--tls-san nmaas.internal \
--disable=traefik \
--flannel-backend=none \
--disable-network-policy \
--disable=servicelb \
--write-kubeconfig-mode 664 \
--cluster-cidr=10.136.0.0/16
```
- `--tls-san` – can be specified multiple times to add additional names for which the automatically generated Kubernetes API certificate will be valid. If your VM uses a static IP address, it can be added as an additional `--tls-san` value as well.
- `--disable=traefik` – Traefik needs to be explicitly disabled since it ships by default with new K3s installations. We will use ingress-nginx as our ingress controller and will install it manually in a later step.
- `--flannel-backend=none` – Flannel CNI needs to be explicitly disabled, since we will manually install Calico.
- `--disable-network-policy` – we do not need the default network policy addon that enables the use of Kubernetes NetworkPolicy objects, since Calico has built-in support for network policies.
- `--disable=servicelb` – the preconfigured implementation for LoadBalancer service objects should be disabled, since we will manually install MetalLB.
- `--write-kubeconfig-mode 664` – more permissive permissions are needed for the automatically generated kubeconfig file so that regular users, apart from root, can use the kubectl client as well.
- `--cluster-cidr=10.136.0.0/16` – a free subnet range which will be used as the pod network. Write it down, since it will be required in the Calico deployment as well.
- Another way of providing `kubectl` access to different users is to copy the original kubeconfig file located in `/etc/rancher/k3s/k3s.yaml` to another location and change its permissions. Then, by exporting the `KUBECONFIG` environment variable, the kubectl client will be forced to use the newly created configuration:
```bash
export KUBECONFIG=~/.kube/config
```
- Our cluster is still not in a Ready state, since we do not have a CNI plugin installed yet.
```bash
kubectl get node -o wide
```
#### Addons Setup
##### CNI
- Calico can be installed manually by deploying the Tigera operator and downloading the `custom-resources.yaml` manifest, in which the pod network CIDR must be set to the value used when deploying K3s.
```bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
mkdir -p ~/nmaas-deployment/manifests/calico
curl -O --output-dir ~/nmaas-deployment/manifests/calico/ https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml
```
- Edit the downloaded `custom-resources.yaml` file (`~/nmaas-deployment/manifests/calico/custom-resources.yaml`) and change the `cidr` and `encapsulation` properties as below:
```yaml
...
cidr: 10.136.0.0/16 # same range as the above K3s command
encapsulation: VXLAN
...
```
- Once Calico has been installed, the node should transition to a `Ready` state.
```bash
kubectl get node -o wide
```
##### DNS
CoreDNS is installed by default with K3s, so there is no need for any manual installation or configuration. Once the Calico CNI has been deployed and the cluster has entered a `Ready` state, DNS resolution can be tested using the `dnsutils` pod, as described in the official Kubernetes documentation.
```bash
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
```
Once the Pod enters a ready state, we can open a shell session:
```bash
kubectl exec -it dnsutils -- /bin/sh
ping geant.org
```
##### Storage
An instance of local path provisioner is automatically installed when deploying K3s, which is sufficient for development single-node clusters such as ours.
```bash
# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 45h
```
##### Helm
To install Helm, we need to first download the latest binary for our architecture and extract it to a location which is in the `PATH` system variable.
- Visit [https://github.com/helm/helm/releases](https://github.com/helm/helm/releases) and copy the download link for the latest release.
- Download the latest release locally
```bash
cd $(mktemp -d)
wget https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
tar -xvzf helm-v3.15.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
sudo chmod +x /usr/local/bin/helm
```
- Test whether Helm has been successfully installed by executing `helm version`.
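The download commands above pin the `linux-amd64` build. The artifact name matching the local architecture can be derived from `uname -m` (a sketch; the `linux-<arch>` naming is assumed to follow the get.helm.sh release artifacts):

```shell
# Map the machine architecture to Helm's release-artifact naming
HELM_VERSION=v3.15.4
case "$(uname -m)" in
  x86_64)  HELM_ARCH=amd64 ;;
  aarch64) HELM_ARCH=arm64 ;;
  armv7l)  HELM_ARCH=arm ;;
  *)       HELM_ARCH=$(uname -m) ;;   # fall back to the raw name
esac
HELM_URL="https://get.helm.sh/helm-${HELM_VERSION}-linux-${HELM_ARCH}.tar.gz"
echo "$HELM_URL"
```

The printed URL can then be passed to `wget` in place of the hard-coded one.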
!!! Warning
For helm to function properly, the `kube.config` file must be copied (or linked) to `~/.kube/config`. This can be done like so:
```bash
ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
```
##### Ingress Nginx
The last application that needs to be installed before we can move on to installing the nmaas components is Ingress Nginx. Since we have already configured Helm, the Ingress Nginx installation is simple.
- Customize the values.yaml file according to the local environment:
```yaml title="ingress-values.yaml"
defaultBackend:
enabled: true
controller:
hostPort:
enabled: true
config:
log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":$status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }'
kind: Deployment
ingressClass: nginx
scope:
enabled: false
namespace: default
service:
type: ClusterIP
metrics:
enabled: false
```
In our case we have opted to use a Deployment instead of a DaemonSet for the deployment strategy. Additionally, we have selected a service type of `ClusterIP` and enabled `hostPort` so that the ingress controller can be reached using the VM's LAN IP address. In this way we avoid using LoadBalancer addons, simplifying the single-node nmaas deployment.
- Add the `ingress-nginx` Helm repository and install the application:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace nmaas-system
helm install -f ingress-values.yaml --namespace nmaas-system nmaas-ingress ingress-nginx/ingress-nginx
```
We have chosen to install `ingress-nginx` in the `nmaas-system` namespace, which will house all the other nmaas components as well.
!!! danger "Note About Helm Errors"
When running the `helm install` command, Helm might throw an error about the cluster being unreachable. This is most likely because Helm looks for the kubeconfig file in the default location (`~/.kube/config`), while K3s writes it to `/etc/rancher/k3s/k3s.yaml`.
This can be fixed by simply executing:
```bash
export KUBECONFIG='/etc/rancher/k3s/k3s.yaml'
```
- We can test the installed ingress controller by directly visiting the VM's IP address (or `localhost` from within the VM) in a browser. We should be presented with a generic `404 Not Found` page.
```bash
curl --insecure https://localhost
curl --insecure https://$VM_IP
```
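Beyond the default-backend check, a minimal host-based Ingress object can confirm that name-based routing works. The hostname below is illustrative, and the default backend Service name is assumed to follow the usual `<release>-ingress-nginx-defaultbackend` pattern for the `nmaas-ingress` release installed above:

```yaml
# test-ingress.yaml - illustrative smoke test for the ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: nmaas-system
spec:
  ingressClassName: nginx
  rules:
    - host: test.nmaas.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nmaas-ingress-ingress-nginx-defaultbackend
                port:
                  number: 80
```

After applying it with `kubectl apply -f test-ingress.yaml`, a request to `https://test.nmaas.internal` (with the hostname resolving to the VM) should again be answered by the default backend, but this time via the host rule.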
# Part 2: Installing nmaas
Once a working Kubernetes cluster has been deployed, we are ready to proceed to the next step - installing nmaas.
All the necessary components will be installed in the `nmaas-system` namespace that was created in the [previous part](p1_local-kubernetes-cluster.md).
## GitLab Installation
The first nmaas dependency that we will set up is GitLab, a self-hosted web-based Git repository hosting service. Many applications that are deployed by nmaas users store their configuration data in a Git repository, allowing easier editing and version management, thus following the GitOps approach.
GitLab has an official Helm chart, and we will use it to create a basic GitLab installation locally. Some parameters must be customized in the values.yaml file before deployment:
- `global.hosts.domain` – should be set to the domain that will be allocated to GitLab. Note that the final hostname where GitLab will be reachable will have a `gitlab` prepended to it. If `nmaas.example.local` is set as the `global.hosts.domain` parameter, then GitLab will be available on `gitlab.nmaas.example.local`.
- `global.hosts.ssh` – in order for users to be able to interact with their GitLab repositories via SSH, the value of `global.hosts.ssh` should be set to the MetalLB IP that will be assigned to this new service (usually the next available one) for the gitlab-shell component. If the IP is not known at the time of deployment, then after the initial deployment, once the LoadBalancer service is created and the IP is allocated, a chart upgrade can be performed, where the `global.hosts.ssh` parameter will be set to the appropriate value.
- `global.ingress.tls.secretName` – an existing Kubernetes TLS secret where the TLS certificate to be used is stored.
- `global.ingress.annotations.kubernetes.io/ingress.class` – should be set to the ingress class used by the deployed ingress-nginx instance. In the case of MicroK8s this should be set to `public`; in the case of K3s, to `nginx`.
- optionally, if an email server is available, the `global.smtp` section can be edited with the appropriate parameters so that outbound email is enabled.
```yaml title="gitlab-values.yaml"
gitlab:
gitlab-shell:
minReplicas: 1
maxReplicas: 1
webservice:
deployments:
default:
ingress:
path: /
hpa:
enabled: false
minReplicas: 1
maxReplicas: 1
sidekiq:
minReplicas: 1
maxReplicas: 1
certmanager:
install: false
nginx-ingress:
enabled: false
prometheus:
install: false
gitlab-runner:
install: false
redis:
install: true
registry:
enabled: false
postgresql:
postgresqlUsername: gitlab
install: true
postgresqlDatabase: gitlabhq_production
usePasswordFile: false
existingSecret: 'gitlab-postgresql'
metrics:
enabled: false
global:
kas:
enabled: false
edition: ce
hosts:
domain: nmaas.internal
https: false
ingress:
enabled: true
configureCertmanager: false
tls:
enabled: false
path: /
class: "nginx"
initialRootPassword:
secret: gitlab-root-password
key: password
appConfig:
defaultProjectFeatures:
builds: false
time_zone: Europe/Warsaw
smtp:
enabled: false
```
GitLab requires the deployment of a PostgreSQL instance. The necessary secrets containing the PostgreSQL passwords need to be created, as well as the secret containing the initial root GitLab password:
```bash
export NMAAS_NAMESPACE="nmaas-system"
kubectl create secret generic -n $NMAAS_NAMESPACE gitlab-postgresql --from-literal=postgresql-password=<POSTGRESQL_USER_PASSWORD> --from-literal=postgresql-postgres-password=<POSTGRESQL_ROOT_PASSWORD>
kubectl create secret generic -n $NMAAS_NAMESPACE gitlab-root-password --from-literal=password=<GITLAB_ROOT_PASSWORD>
```
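The `<...>` placeholders above must be replaced with actual secret values. One way to generate random alphanumeric values (a sketch using `/dev/urandom`; any other password generator works just as well):

```shell
# Generate a 24-character alphanumeric secret for each placeholder
gen_pass() { head -c 64 /dev/urandom | base64 | tr -dc 'a-zA-Z0-9' | head -c 24; }
POSTGRESQL_USER_PASSWORD=$(gen_pass)
POSTGRESQL_ROOT_PASSWORD=$(gen_pass)
GITLAB_ROOT_PASSWORD=$(gen_pass)
echo "GitLab root password: $GITLAB_ROOT_PASSWORD"   # note it down for the web login
```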
The root GitLab password will be used to log in to the GitLab web interface.
We are ready to add the GitLab Helm repository and install the 8.5.x version of GitLab:
```bash
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm install -f gitlab-values.yaml --namespace nmaas-system nmaas-gitlab --version 8.5.0 gitlab/gitlab
```
Once GitLab has been deployed, it should be possible to navigate to the login page using a web browser. After logging in, users are advised to configure the following settings:
- disable new user registrations (`Admin Area -> Settings -> General -> Sign-up restrictions`)
- `Sign-up enabled` should be unchecked
- `Require admin approval for new sign-ups` should be unchecked
- enable webhooks to local addresses (`Admin Area -> Settings -> Network -> Outbound requests`)
- `Allow requests to the local network from web hooks and services` should be checked
- `Allow requests to the local network from system hooks` should be checked
- `Enforce DNS-rebinding attack protection` should be unchecked
The final step before installing nmaas itself is to generate a GitLab personal access token which will allow nmaas to connect to the GitLab API. This can be done from the User Profile page:
- Click on the user avatar in the top right-hand corner of the screen and select `Edit Profile`. Select `Access Tokens` from the left-hand navigation menu. Give the new authentication token a name, as well as an optional expiry date. Check all scopes.
- Store the token until the next section, where we will create a new secret containing it.
## nmaas Installation
The final step is to install nmaas itself. nmaas components communicate with each other over SSH, so we need to create an SSH key pair and store it in Kubernetes secrets. This can be done by executing the following commands:
```bash
#!/bin/bash
export NMAAS_NAMESPACE="nmaas-system"
tmpdir=$(mktemp -d)
ssh-keygen -f $tmpdir/key -N ""
# nmaas-helm-key-private should be replaced with {{ .Values.global.helmAccessKeyPrivate }}
kubectl create secret generic nmaas-helm-key-private -n $NMAAS_NAMESPACE --from-file=id_rsa=$tmpdir/key
# nmaas-helm-key-public should be replaced with {{ .Values.global.helmAccessKeyPublic }}
kubectl create secret generic nmaas-helm-key-public -n $NMAAS_NAMESPACE --from-file=helm=$tmpdir/key.pub
```
A few parameters need to be customized in the values.yaml file, to reflect the environment where nmaas is deployed.
- `global.wildcardCertificateName` – the name of the secret containing the TLS certificate used to secure the HTTP communication.
- `global.nmaasDomain` – the hostname where nmaas will be accessible.
- `global.gitlabApiUrl` – the API endpoint for GitLab.
- `global.gitlabApiToken.literal` – the value of the personal access token created previously in GitLab.
- `platform.properties.adminEmail` – the email address which will receive various notifications, such as new user sign-ups, deployment errors, and new application versions.
- `platform.adminPassword.literal` – the password used to log in as the admin user in the nmaas Portal.
- `platform.properties.k8s.ingress.certificate.issuerOrWildcardName` – the name of the wildcard certificate to be used for customer-deployed applications, or the name of the cert-manager issuer to use if certificates are issued ad-hoc.
- `platform.properties.k8s.ingress.controller.ingressClass` – the ingress class to be used for deployed applications. Should be set to `nginx` in the case of K3s and `public` in the case of MicroK8s.
- `platform.properties.k8s.ingress.controller.publicIngressClass` – the ingress class to be used for applications where the users have explicitly selected to enable public access (e.g. without a VPN). Since this is a local deployment, the value of this parameter should equal the value set in `platform.properties.k8s.ingress.controller.ingressClass`.
- `publicServiceDomain`, `externalServiceDomain` – for a local deployment these parameters should be set to the same value as `global.nmaasDomain`.
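The configuration references a TLS secret (named `nmaas-internal-wildcard` in this tutorial) which must exist before installation. For a purely local setup it can be backed by a self-signed wildcard certificate (a sketch; `-addext` requires OpenSSL 1.1.1+, and the final `kubectl` command is shown as a comment since it needs the running cluster):

```shell
# Create a self-signed wildcard certificate for *.nmaas.internal (illustrative;
# for anything beyond a local demo use a trusted CA or cert-manager instead)
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$tmpdir/tls.key" -out "$tmpdir/tls.crt" \
  -subj "/CN=*.nmaas.internal" \
  -addext "subjectAltName=DNS:*.nmaas.internal,DNS:nmaas.internal"
# Store it as the secret referenced by global.wildcardCertificateName:
# kubectl create secret tls nmaas-internal-wildcard -n nmaas-system \
#   --cert "$tmpdir/tls.crt" --key "$tmpdir/tls.key"
```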
```yaml title="nmaas-values.yaml"
global:
acmeIssuer: false
demoDeployment: true
ingressName: nmaas
nmaasDomain: nmaas.internal
wildcardCertificateName: nmaas-internal-wildcard
gitlabApiUrl: 'http://nmaas-gitlab-webservice-default:8181/api/v4'
gitlabApiToken:
literal: glpat-bSHxML48QNsZJE4CLHxc
platform:
initscripts:
enabled: true
ingress:
className: nginx
adminPassword:
literal: saamn
apiSecret:
literal: saamn
properties:
autoNamespaceCreationForDomains: true
adminEmail: noreply@nmaas.internal
appInstanceFailureEmailList: noreply@nmaas.internal
sso:
enabled: false
k8s:
ingress:
certificate:
issuerOrWildcardName: nmaas-internal-wildcard
controller:
externalServiceDomain: nmaas.internal
ingressClass: nginx
publicIngresClass: nginx
publicServiceDomain: nmaas.internal
portal:
ingress:
className: nginx
properties:
langingPageFlavor: VLAB
sp:
enabled: false
postfix:
image:
repository: artifactory.software.geant.org/nmaas-docker-local/nmaas-postfix-smtp
tag: 1.0.0
properties:
hostname: mailer.nmaas.internal
smtp:
fromAddress: noreply@nmaas.internal
host:
literal: localhost
username:
literal: smtpUsername
password:
literal: mysecret
port: '1050'
```
Once the values.yaml file has been customized, nmaas can be deployed by executing:
```bash
helm repo add nmaas https://artifactory.software.geant.org/artifactory/nmaas-helm
helm install -f nmaas-values.yaml --namespace nmaas-system nmaas --version 1.2.14 nmaas/nmaas
```
The email configuration in the `postfix` section configures an invalid email server on purpose (`localhost:1050`), so as to prevent email sending. If available, users are advised to use their own SMTP credentials, so that email sending will be fully functional.
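For reference, a working configuration with a real relay might look like the following fragment (the host, credentials, and port are placeholders for your own SMTP service):

```yaml
postfix:
  properties:
    smtp:
      fromAddress: noreply@nmaas.internal
      host:
        literal: smtp.example.org   # your SMTP relay
      username:
        literal: smtp-user
      password:
        literal: <SMTP_PASSWORD>
      port: '587'
```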
nmaas also requires the Stakater Reloader component, which can be installed using the commands below. This component takes care of restarting the affected pods whenever a configuration change is submitted via GitLab.
```bash
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install config-reload --namespace nmaas-system stakater/reloader
```
After the installation, logging in as the `admin` user should be possible with the configured password.
![Preview of the nmaas catalog of applications](./img/06-nmaas-catalog.png)
These instructions are heavily based on the excellent blog posts and FreeRTR docs written by [Fréderic Loui](https://twitter.com/FredericLoui) and the RARE team.

!!! note "Clarification"
This guide assumes that a local deployment of nmaas already exists and that either you are working in the provided nmaas test VM or you have followed the [instructions to deploy nmaas from scratch locally](../deploying-local-kubernetes-cluster.md).

If there are existing network elements ready to be monitored by nmaas applications, then this part can be completely skipped.

## Configuring VirtualBox
New images added in `docs/tutorials-workshops/jres2024/img/`:

- 01-app-catalog.png (714 KiB)
- 02-bulk-domain-deployment-wizard.png (144 KiB)
- 03-bulk-domain-deployment-overview.png (186 KiB)
- 04-domain-group-app-whitelist.png (127 KiB)
- 05-catalog-vlab-participant-perspective.png (258 KiB)
- 06-postgresql-step1.png (61 KiB)
- 07-postgresql-step2.png (173 KiB)
- 08-postgresql-access-details.png (189 KiB)
- 09-adminer-config-parameters.png (176 KiB)
- 10-adminer-deployment-app-access.png (200 KiB)
- 11-eduvpn-login.png (165 KiB)