Set-up log

Administration

The machine is latte.proj.lip6.fr.

I added my user (thiel):

  • added it to the sudo group

I set ufw to only allow SSH and NTP, then enabled it:

Bash
sudo ufw allow ssh
sudo ufw allow ntp
sudo ufw enable

I set up SSH:

  • copied my public key over in order to log in without a password
  • disabled password login
  • disabled root login entirely
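The corresponding sshd settings can be sketched as follows (these are standard sshd_config directives; exact placement in the file is up to you):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
```

Remember to reload the daemon afterwards with sudo systemctl reload sshd.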

Created /srv/latte/data and set its group ownership to www-data. Followed this answer:

  • Set the setgid bit on /srv/latte/data (all new files will be owned by the www-data group)
  • Added a new latte user with HOME at /srv/latte/data
Bash
sudo useradd -d /srv/latte/data -g www-data -M -N -s /sbin/nologin latte
  • For more security, also added custom rules for this user at the end of the sshd_config file:
/etc/ssh/sshd_config
Match User latte
  X11Forwarding no
  AllowTcpForwarding no
  • From the client, create an SSH key pair for the latte user and send it to the server:

Note

Has to be passwordless so that the Gitlab-runner can connect to it in CI/CD.

Bash
ssh-keygen -t ed25519
ssh-copy-id -i ~/.ssh/my_new_key.pub latte@latte.proj.lip6.fr
  • Can now connect to latte@latte.proj.lip6.fr using FileZilla, or via the command line:
Bash
sftp latte@latte.proj.lip6.fr

Note

May have to set up ssh-agent if you haven’t done so already.
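The setgid mechanism used on /srv/latte/data can be demonstrated on a scratch directory (the real directory additionally needs sudo and the www-data group):

```shell
# Demonstrate the setgid bit on a throwaway directory.
# On the server the same idea applies to /srv/latte/data with group www-data.
dir=$(mktemp -d)
chmod 2775 "$dir"        # leading 2 = setgid: new files inherit the directory's group
touch "$dir/newfile"
stat -c '%A' "$dir"      # drwxrwsr-x -- the 's' marks the setgid bit
```

This is why every file later created by the latte user under /srv/latte/data ends up group-owned by www-data without any extra chown.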

Clock synchronisation

Make sure that the NTP server used by the machine is ntp.lip6.fr; other NTP servers likely do not pass the LIP6 firewall.

Install chrony and edit /etc/chrony/chrony.conf:

Bash
sudo apt update
sudo apt install -y chrony
/etc/chrony/chrony.conf
# DO NOT use the Debian vendor zone, or any other non-LIP6 source.
# They seem to be blocked by the LIP6 firewall.
# pool 2.debian.pool.ntp.org iburst
# Instead use LIP6's own NTP server
server ntp.lip6.fr iburst

You may also want to add the server ntp.lip6.fr iburst line at the top of your own machine's NTP config, so that your clock is synchronised with the latte server. This is especially important when you are monitoring or performing stress tests.
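To check that chrony actually picked the LIP6 server, the standard chronyc client can be queried on the machine:

```
chronyc sources    # the selected source, marked ^*, should be ntp.lip6.fr
chronyc tracking   # overall sync status and current offset
```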

Install software

Docker

Install Docker Engine on Debian.

GitLab Runner

Follow these steps to install GitLab Runner on the machine.

Note

GitLab Runner’s major.minor version must match that of the target GitLab instance. As of now, the lab’s GitLab instance is at version 18.6.0, which happens to match the latest available version of GitLab Runner, so we may simply install the latest version. If this is no longer the case, GitLab Runner will have to be reinstalled at the version matching our GitLab instance.

Bug

As of GitLab Runner v18.6.x, there are issues with the new Docker 29 API changes, so this may not work.

Automatic building config

Register the runner as a Group runner in our GitLab instance:

Note

Register in system-mode / root so that build processes can be run automatically.

Bash
sudo docker exec -it gitlab-runner gitlab-runner register  --url https://gitlab.lip6.fr  --token xxxx

Choose docker as the executor, and some docker-cli image as the default one, for instance docker:28.5.2-cli. Use the Unix socket instead of the /cache dir to share data between build and service containers, as described here.
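With the Unix-socket approach, the runner's config.toml ends up containing something like this (a sketch; paths and any extra options depend on your registration answers):

```
# /srv/gitlab-runner/config/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    image   = "docker:28.5.2-cli"
    # Share the host Docker daemon with build containers:
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```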

Download the GitLab instance TLS certificate chain, adapting commands from this guide.

Warning

I am using Certificate Ripper because of issues with OpenSSL, which seemingly does not extract the issuer certificate.

Bash
sudo crip export pem --url=https://gitlab.lip6.fr -cd /srv/gitlab-runner/config/certs/ca.crt
echo | openssl s_client -CAfile /srv/gitlab-runner/config/certs/ca.crt \
  -connect gitlab.lip6.fr:443 -servername gitlab.lip6.fr \
  | grep "Verify return code:"

The second command should print Verify return code: 0 (ok).

Note

You must have the whole certificate chain in a single file.

Configure GitLab runner with Docker to build our Docker images

Create CI/CD variables at the group level for the deployment SSH key, following this guide and inspired by that one:

  • MACHINE_DEPLOY_SSH_PRIVATE_KEY
  • MACHINE_DEPLOY_HOST
  • MACHINE_DEPLOY_USER
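A deploy job consuming these variables might look like this (a sketch; the job name, image, and remote command are placeholders):

```
# .gitlab-ci.yml (sketch)
deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval "$(ssh-agent -s)"
    # Load the private key stored in the group-level CI/CD variable
    - echo "$MACHINE_DEPLOY_SSH_PRIVATE_KEY" | ssh-add -
  script:
    - ssh -o StrictHostKeyChecking=accept-new "$MACHINE_DEPLOY_USER@$MACHINE_DEPLOY_HOST" 'echo deployed'
```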

Deployment config

Bug

This is only a temporary solution. We should now move on to Docker Swarm.

Create an SSH key pair for the latte user to access the GitLab instance. Either make it passwordless or store its passphrase in a new CI/CD variable.

Authorise that key on an account that has pull access to the LATTE group repositories.

Clone the deploy repo into /srv/latte/home. Also clone any other repo that needs to be deployed.

Create docker networks:

  • proxy-tier for communicating with the reverse proxy.
  • dashboard-tier for the dashboard to communicate with the LRS database.
Bash
docker network create proxy-tier
docker network create dashboard-tier

Allow incoming HTTP and HTTPS (ports 80 and 443) through the firewall:

Bash
sudo ufw allow http
sudo ufw allow https

LRS service and database

First, copy the compose.override.yaml.sample file to compose.override.yaml:

Bash
cp lrs/compose.override.yaml.sample lrs/compose.override.yaml

And fill/modify values as appropriate.

Then, create the data and config directories which are mounted in the container (1).

  1. If they do not exist, the container will create them with root as their owner, which will cause issues when attempting to write to them from the container.

Warning

The ElasticSearch image runs as uid:gid 1000:0, so we must allow the 0 group on its config and data directories.

Bash
sudo mkdir -p /srv/latte/config/lrs/ralph
sudo mkdir -p /srv/latte/data/lrs/elasticsearch
sudo chown -R latte:www-data /srv/latte/config/lrs
sudo chown -R latte:www-data /srv/latte/data/lrs
sudo chown -R latte:0 /srv/latte/data/lrs/elasticsearch

Ralph

Create a basic auth credential for your user.

Info

The /srv/latte/config/lrs/ralph directory is owned by the user latte, which is why we must run as latte to write to it.

Bash
docker compose -f lrs/compose.yaml -f lrs/compose.override.yaml \
  run --rm lrs \
  ralph auth -u USERNAME -p PASSWORD \
  -s "all" -M mailto:EMAIL -N APP_OR_USER_ID -w

ElasticSearch

We can't use MongoDB right now, because it requires AVX support and our VMs do not have it.

We must first create an index for our statements:

Info

We only run a single node of ElasticSearch, so we must disable replicas.

Bash
http PUT :9200/statements
http PUT :9200/statements/_settings Content-Type:application/json number_of_replicas=0

Attempt at Kubernetes through Minikube

Follow the official documentation to install Kubernetes, Helm, and Minikube (https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fdebian+package).

Start the Minikube cluster, then check that the pods have been created.

Info

It will probably use the docker driver, which creates the cluster's nodes as Docker containers.

Bash
minikube start
kubectl get po -A

Next, install Ingress, which will act as our load balancer and reverse proxy, via the ingress-nginx controller.
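With Minikube this is typically done through the built-in addon, which deploys the ingress-nginx controller into its own namespace:

```
minikube addons enable ingress
kubectl get pods -n ingress-nginx   # wait until the controller pod is Running
```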


Things to investigate

Danger

We should set vm.max_map_count >= 262144, as required by ElasticSearch. We possibly need to change the Proxmox config for this VM.
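If the VM kernel allows it, this can be made persistent with a sysctl drop-in (the file name is arbitrary):

```
# /etc/sysctl.d/99-elasticsearch.conf
vm.max_map_count = 262144
```

Apply with sudo sysctl --system, then verify with sysctl vm.max_map_count.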

Danger

Docker and ufw: Docker-published ports bypass ufw's rules, because Docker inserts its own iptables rules ahead of ufw's. Access should instead be restricted via the DOCKER-USER chain, or by binding published ports to 127.0.0.1.