
Installation

Installing Docker

Docker Engine must be installed on the internet-connected host. Both Docker Engine and Docker Compose must be installed on the service host.

Please follow the official instructions for installing Docker Engine: https://docs.docker.com/engine/install/ubuntu/

Docker Compose should come packaged with Docker Engine, but in case it is not, find installation instructions here: https://docs.docker.com/compose/install/linux/

To install Docker Engine and Docker Compose on an isolated host, download the installer files with curl on an internet-connected machine, transfer them to the isolated host, and then continue the installation steps for each of them on that machine.

After Docker Engine and Docker Compose have been installed, follow the post-installation instructions at: https://docs.docker.com/engine/install/linux-postinstall/. In particular, the “Manage Docker as a non-root user” and “Configure Docker to start on boot” sections may be useful.
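The two post-installation sections mentioned above reduce to a handful of commands, taken from the Docker post-installation guide (assumes a systemd-based distribution such as Ubuntu):

```shell
# Allow the current user to run docker without sudo.
sudo groupadd docker            # the group may already exist
sudo usermod -aG docker "$USER"
newgrp docker                   # or log out and back in

# Start Docker automatically on boot.
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```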

Verifying Docker installation

  1. On a command line terminal, issue the docker --version command to verify installation:

CODE
$ docker --version
Docker version 24.0.7, build afdd53b
  2. On a command line terminal, run the Docker hello-world image to confirm that Docker can pull and run images:

Note: The following steps require an internet connection!

CODE
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete 
Digest: sha256:d000bc569937abbe195e20322a0bde6b2922d805332fd6d8a68b19f524b7d21d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
  3. Verify web server capability with nginx. Using a command line terminal, issue the following command (if port 1234 is in use on your system, you can substitute any available port):

CODE
$ docker run --detach --publish=1234:80 --name=webserver nginx
  4. Check that the website is up with $ wget http://localhost:1234.

  5. If wget successfully downloads an HTML document displaying a brief greeting message, the nginx Docker container is working.

  6. (Optional) Try accessing the server with a web browser on the same subnet. You can obtain the server’s IP address with $ ip a.

  7. Type the following commands to stop the container and clean up.

CODE
$ docker stop webserver
$ docker rm webserver

Installing the Datahosting application

Prerequisites

RBR will provide you with a .zip file that contains the following tree:

CODE
.
├─ docker-compose.yaml (provided then optionally modified)
├─ tools (provided and unzipped)
│  ├─ backup-volume.sh
│  ├─ remove-containers.sh
│  ├─ remove-images.sh
│  ├─ remove-volumes.sh
│  ├─ restore-volume.sh
│  └─ stop-containers.sh
├─ secrets
│  ├─ web-user-config-template.json
│  └─ readme.md
└─ daemon
   ├─ license.lic
   └─ database-config
      ├─ customer-config.json
      ├─ customer-config-schema.json
      └─ web-user-config-schema.json

Extracting the configuration files

Extract the .zip file into a clearly-named directory in your home directory.
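For example, assuming the delivered archive is named datahosting.zip (the actual filename may differ):

```shell
# Create a clearly-named directory and extract the archive into it.
mkdir -p ~/datahosting
unzip datahosting.zip -d ~/datahosting
cd ~/datahosting
```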

Configuring the Datahosting application

Docker-compose file

docker-compose.yaml configures environment variables, network configurations, file access permissions, secure passwords, boot and configuration ordering, and persistent volumes. Generally, the user should not need to modify docker-compose.yaml once the containers are up.

If it becomes necessary, documentation can be found at https://docs.docker.com/compose/compose-file/.

Web user and customer configuration files

Before running the containers, please test secrets/web-user-config.json against daemon/database-config/web-user-config-schema.json, and daemon/database-config/customer-config.json against daemon/database-config/customer-config-schema.json, at https://www.jsonschemavalidator.net/.
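If you prefer to validate from the command line instead of the browser, the check-jsonschema tool can run the same checks. (This tool is an assumption on our part, not part of the delivered package; it can be installed with pip install check-jsonschema.)

```shell
# Validate each config file against its schema; a non-zero exit code means invalid.
check-jsonschema --schemafile daemon/database-config/web-user-config-schema.json \
    secrets/web-user-config.json
check-jsonschema --schemafile daemon/database-config/customer-config-schema.json \
    daemon/database-config/customer-config.json
```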

daemon/database-config/customer-config.json associates instruments with where they are deployed, and deployment locations with the different programs served by a datahosting instance.

secrets/web-user-config.json defines which users can access which instruments' data through the Datahosting web interface. This file should be copied and populated from the template at secrets/web-user-config-template.json. The admin user (username: 'admin') can access all instruments, whereas users associated with a “customer” in the JSON file can only access the instruments associated with them, as referenced by the unique URL slug placed after the web host's top domain name.

While it is running, the Cervello/database interface daemon will periodically check both of these files and apply any changes to the database. However, it will not delete existing customers, locations, or instruments from the database. Therefore, to make sure that only correct data is inserted, it’s recommended that you stop the containers with $ docker compose down before adding new customers, locations, or instruments, or before modifying a customer’s “slug” field or an instrument’s serial number field.

(Optional) Enabling Google Maps

In addition to displaying an RBRcervello’s latitude and longitude, the web interface embeds Google Maps to display its position on a map. This requires a Google Maps JavaScript API key.

  1. Follow the steps at the following link to set up a project and generate an API key: https://developers.google.com/maps/documentation/javascript/get-api-key.

  2. Copy and paste the API key between the double quotes in docker-compose.yaml, under services > dhweb > environment: GOOGLE_API_KEY="".

Setting database passwords

Database passwords are stored in single-line plaintext files which must be created by the user (e.g. $ echo "Elcano1522&%" > secrets/mariadb-datahosting-password and $ echo "Thomson1872&%" > secrets/mariadb-root-password). Non-alphanumeric ASCII characters (e.g. '/', '-', ',') may be used. The root password should only be used by the database container, while the datahosting user password is used by all three containers.
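The commands above can be combined with a restrictive umask so that other users on the host cannot read the password files. The passwords below are the examples from the text and should of course be replaced with your own:

```shell
# Create the password files with owner-only permissions.
mkdir -p secrets
umask 077   # files created below will be readable only by the owner (mode 600)
echo "Elcano1522&%" > secrets/mariadb-datahosting-password
echo "Thomson1872&%" > secrets/mariadb-root-password
ls -l secrets/mariadb-*   # both files should show -rw-------
```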

Defining web interface users

Web interface users are configured in a .json file, which should be placed by default at secrets/web-user-config.json. This file should be copied and populated from the template at secrets/web-user-config-template.json. The admin user (username: 'admin') can access all instruments, whereas users associated with a customer can only access the instruments associated with that customer, as referenced by the unique URL slug placed after the web host's top domain name.

Defining initial deployment information

Open customer-config.json. Double-check that all customers, locations, and instruments are listed correctly.

The Cervello/database interface daemon will periodically check the customer-config.json file and apply any changes to the database. However, it will not delete existing customers, locations, or instruments from the database even if they have been removed from customer-config.json. Therefore, if the containers are running, it’s recommended that you stop the containers with $ docker compose down before adding new customers, locations, or instruments, or before modifying a customer’s “slug” field or an instrument’s serial number field.
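Putting the recommendation above together, a safe edit cycle looks like this (run from the directory containing docker-compose.yaml):

```shell
docker compose down                                       # stop the stack first
"${EDITOR:-nano}" daemon/database-config/customer-config.json
docker compose up -d                                      # restart; the daemon re-reads the file
```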

Verifying the Datahosting application’s configuration

At the end of the configuration, the directory tree should be as follows:

CODE
.
├─ docker-compose.yaml (provided then optionally modified)
├─ tools
│  ├─ backup-volume.sh
│  ├─ remove-containers.sh
│  ├─ remove-images.sh
│  ├─ remove-volumes.sh
│  ├─ restore-volume.sh
│  └─ stop-containers.sh
├─ secrets
│  ├─ web-user-config-template.json
│  ├─ mariadb-root-password (created)
│  ├─ mariadb-datahosting-password (created)
│  └─ readme.md
└─ daemon
   ├─ license.lic
   └─ database-config
      ├─ customer-config.json
      ├─ customer-config-schema.json
      └─ web-user-config-schema.json

Logging into the container registry

The containers are hosted in an Amazon Web Services Elastic Container Registry (AWS ECR) instance.

Step 1: Download AWS CLI

  1. $ sudo apt update

  2. $ sudo apt install awscli

Step 2: Set up your access key

Enter the provided access key ID and secret access key.

You should only have to do this once.

  1. $ aws configure set aws_access_key_id "<access key id>"

  2. $ aws configure set aws_secret_access_key "<secret access key>"

Step 3: Log into the repositories

Run $ aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin "458757806210.dkr.ecr.ca-central-1.amazonaws.com/systems/" to authenticate with AWS. The docker login command may require, and it is in any case recommended, that you configure a secure credential store such as pass so that Docker does not keep the repository login credentials in plaintext.

The registry password produced by aws ecr get-login-password is only valid for approximately 12 hours after it is issued. If the password has expired and the docker-compose.yaml file is configured to download a new image, the container will not start and Docker will prompt you to re-authenticate with the repository.

Troubleshooting

CODE
An error occurred (UnrecognizedClientException) when calling the GetAuthorizationToken operation: The security token included in the request is invalid.
Error: Cannot perform an interactive login from a non TTY device

If you see the error above in response to $ aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin "458757806210.dkr.ecr.ca-central-1.amazonaws.com/systems/", your Access Key ID and/or Secret Access Key may be incorrect.

Downloading and running the containers

Before starting the containers, note the web service port at services > dhweb > ports > <host port>:<container port>. You will have to change this port if another web server is running on the default port 80.
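To check whether something is already listening on the default port 80 before starting the stack, you can use ss from iproute2, which is available on most Linux distributions:

```shell
# Look for an existing listener on TCP port 80.
if ss -tln | grep -q ':80 '; then
  echo "port 80 is in use; pick another host port in docker-compose.yaml"
else
  echo "port 80 is free"
fi
```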

When starting the containers for the first time, Docker Compose will automatically download the container’s images from their repositories in the AWS container registry and run them as containers.

To start the containers, cd to the directory containing the docker-compose.yaml file and issue $ docker compose up -d.

Once started:

  1. Wait 60 seconds for Docker to start and configure all of the services.

  2. Verify that all containers are running by entering $ docker container ls. If the containers are properly running and configured, the status of the dhweb, dhdb, and dhdaemon containers will be “up” and dhdb and dhdaemon will also display a “(healthy)” message.

  3. Verify that the web server is up by entering $ curl localhost:<host_port> .

  4. Verify that the web interface is rendering properly by navigating to <host_ip>:<host_port>/<customer slug> in a browser on a computer connected to the same subnet as the container host.
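The fixed 60-second wait in step 1 can be replaced with a small polling loop; substitute the host port chosen in docker-compose.yaml for 80:

```shell
# Poll the web service until it responds, or give up after about a minute.
for i in $(seq 1 30); do
  if curl -fs http://localhost:80/ > /dev/null; then
    echo "web server is up"
    break
  fi
  echo "waiting for web server..."
  sleep 2
done
```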
