Creating a tenant
Prerequisites
The helm binary is installed on the local host
The kubectl binary is installed on the local host
Credentials for accessing the SmartMakers docker registry are available (please contact your account manager @ SmartMakers)
The thingsHub tenant helm chart is available locally
Assuming the prerequisites are met, this guide will now go step by step through a number of basic decisions that result in a basic tenant configuration. Finally, this configuration will be used to create a new tenant release on the cluster.
How to name the tenant?
Each tenant requires a unique identifier, which will be used internally by the system, e.g. for the tenant's Kubernetes namespace. When choosing this identifier, stick to the letters a - z, the numbers 0 - 9 and dashes. The identifier must start with a letter and end on a letter or a number. Keep the length above 5 and below 56 characters. The following regular expression should match: [a-z][a-z0-9-]{4,53}[a-z0-9].
Good Names | Bad Names
---|---
example | 0 (starts with a digit)
my-tenant-01 | abc (too short)
smartcity-demo | Example-Tenant (contains uppercase letters)
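The naming rule can also be checked locally before proceeding. The following is an illustrative sketch: the function name is ours, and the regular expression mirrors the rule stated above.

```shell
#!/bin/sh
# Check a candidate tenant name against the naming rule above.
# (illustrative helper; the regex mirrors the rule in the text)
is_valid_tenant_name() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]{4,53}[a-z0-9]$'
}

is_valid_tenant_name "example-tenant" && echo "example-tenant is valid"
is_valid_tenant_name "0" || echo "0 is invalid"
```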
The tenant name will also be used as part of the tenant's domain name, by prepending the tenant name to a base domain name: e.g. a tenant named example will result in example.thingshub.smartmakers.de, assuming its parent domain is thingshub.smartmakers.de.
We can now create the first bit of our configuration file. The file should be in YAML format, and it is recommended that the filename match the tenant's name:
example.yaml (base section)
global:
  domain: thingshub.smartmakers.de
  name: example
  image:
    tag: 3.11.0
How to secure the tenant?
Securing the tenant requires two steps: setting an admin password and a service key, and providing SSL certificates for the tenant. Create an admin password (following your company's password policy) and a sufficiently long random string as the key for inter-service communication.
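One possible way to generate these two values, assuming openssl is available on the local host (a sketch, not the only option):

```shell
#!/bin/sh
# Generate a sufficiently long random service key (64 hex characters)
# and a random admin password; openssl is assumed to be installed.
SERVICE_KEY=$(openssl rand -hex 32)
ADMIN_PASSWORD=$(openssl rand -base64 18)
echo "service_key:    $SERVICE_KEY"
echo "admin_password: $ADMIN_PASSWORD"
```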
Secondly, if TLS is required, a certificate (including the certificate bundle) and a matching private key need to be added to the config file. The tenant configuration requires a PEM-encoded certificate and RSA key. Use YAML's literal style for this:
example.yaml (security section)
global:
  ...
  security:
    service_key: <service-key>
    admin_password: <admin-password>
    tls:
      enabled: true
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
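Before pasting them into the config file, the certificate and the private key can be checked for consistency with openssl by comparing their public-key moduli. This is a sketch; the file paths passed to the function are placeholders.

```shell
#!/bin/sh
# Check that a PEM certificate and an RSA private key belong together
# by comparing the MD5 digests of their public-key moduli.
cert_matches_key() {
  cert_mod=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  key_mod=$(openssl rsa -noout -modulus -in "$2" | openssl md5)
  [ "$cert_mod" = "$key_mod" ]
}

# Usage (placeholder file names):
#   cert_matches_key tenant.crt tenant.key && echo "certificate and key match"
```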
Where to store the tenant's configuration data?
The tenant will require a PostgreSQL database to store configuration data, e.g. network connectors, integrations, devices etc. Note that sensor data will be stored in a separate time series database, which will be taken care of later on.
The database needs to be hosted on a separate PostgreSQL database server. The database name as well as the role need to be unquoted PostgreSQL identifiers. Nonetheless, it is recommended to keep the database name similar to the tenant name (as chosen above). The easiest way to achieve this is by replacing all dashes (-) with underscores (_). Generate a random password, e.g. by following this guide. Note down the password. It will be needed later to provide the tenant with access to the database.
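The dash-to-underscore replacement and the random password can be produced in one go, for example like this (a sketch; the tenant name shown is an example):

```shell
#!/bin/sh
# Derive an unquoted PostgreSQL identifier from the tenant name by
# replacing all dashes with underscores, and generate a random password.
TENANT_NAME=example-tenant
TENANT_SQL_NAME=$(printf '%s' "$TENANT_NAME" | tr '-' '_')
TENANT_SQL_PASSWORD=$(openssl rand -base64 18)
echo "database/role: $TENANT_SQL_NAME"
```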
Given the username and password, the following SQL script can be used to create a database suited for a thingsHub tenant:
Create tenant database
CREATE USER <tenant-sql-name> WITH PASSWORD '<tenant-sql-password>' NOCREATEDB;
CREATE DATABASE <tenant-sql-name> ENCODING = UTF8;
GRANT ALL PRIVILEGES ON DATABASE <tenant-sql-name> TO <tenant-sql-name>;
We can now add this information to the tenant configuration file:
example.yaml (sql section)
global:
  ...
  sql:
    host: <hostname>:<port>
    database: <tenant-sql-name>
    user: <tenant-sql-name>
    password: <tenant-sql-password>
With the mostly static data taken care of, time series data needs to be considered next.
Where to store the tenant's time-series data?
Time series data, e.g. sensor measurements or events, needs to be stored in an InfluxDB time series database. This database can be run in two different modes: internal or external in relation to the tenant.

If InfluxDB is run externally, a dedicated database on an existing InfluxDB instance is used to store time series data. This corresponds to how PostgreSQL is used by the tenant. Backups are the responsibility of the administrator of this InfluxDB server.

If InfluxDB is run internally to the tenant, one instance of InfluxDB is started up as a service in the tenant's Kubernetes namespace. In this case, InfluxDB data needs to be stored in a Kubernetes Persistent Volume. In this mode, the Kubernetes administrator is responsible for providing this volume and for a backup strategy thereof.
Conditional: creating the InfluxDB database
If the InfluxDB database is hosted on an external InfluxDB database server, a database needs to be created on that server. For this, log in to that InfluxDB server as administrator and execute the following steps:
CREATE DATABASE <tenant-influxdb-name>;
USE <tenant-influxdb-name>;
CREATE USER <tenant-influxdb-user> WITH PASSWORD '<tenant-influxdb-password>';
GRANT ALL PRIVILEGES ON <tenant-influxdb-name> TO <tenant-influxdb-user>;
CREATE RETENTION POLICY receptions_retention_policy ON <tenant-influxdb-name> DURATION 168h REPLICATION 1;
CREATE RETENTION POLICY thub_device_data ON <tenant-influxdb-name> DURATION 168h REPLICATION 1;
CREATE RETENTION POLICY logs_rp ON <tenant-influxdb-name> DURATION 48h REPLICATION 1;
Note that, if the user with which the thingsHub connects to the InfluxDB has admin privileges (instead of only the read and write privileges granted above), it is not necessary to create the retention policies as described in the last three lines above. In this case, the thingsHub will create the retention policies by itself.
Conditional: creating the InfluxDB configuration
This step can be skipped if the InfluxDB database is hosted internally to the tenant. If it is hosted externally, connection information needs to be added to the tenant's configuration:
example.yaml (time series section)
global:
  ...
  influx_db:
    address: <hostname>:<port>
    database: <tenant-influxdb-name>
    user: <tenant-influxdb-user>
    password: <tenant-influxdb-password>
The time series database is now configured, next up is the Visualizer which runs on top of the time series database.
Where to store the visualizer's configuration?
The visualizer's state, e.g. users and dashboards, can be stored either in an internal SQLite database persisted on a Persistent Volume or in an external PostgreSQL database.
There are three options for this:
1. Store the Visualizer's state in a persistent volume as an SQLite file. This requires a persistent volume and a backup strategy for this persistent volume.
2. Store the Visualizer's state in a dedicated PostgreSQL database: this is preferable if setting up PostgreSQL databases can be done efficiently.
3. Store the Visualizer's state in the same PostgreSQL database as the thingsHub tenant. The visualizer's data is then stored in the public schema, while the thingsHub's services' data is stored in dedicated schemas.
Conditional: creating the Grafana database
If Grafana is configured to use a dedicated PostgreSQL database as a backing store (option 2 above), then create a second database for Grafana now:
Create Grafana database
CREATE USER <tenant-grafana-user> WITH PASSWORD '<tenant-grafana-password>' NOCREATEDB;
CREATE DATABASE <tenant-grafana-db> ENCODING = UTF8;
GRANT ALL PRIVILEGES ON DATABASE <tenant-grafana-db> TO <tenant-grafana-user>;
This database now needs to be added to the tenant's configuration file.
Creating the visualizer configuration
Fill in the configuration options in the tenant's config file:
example.yaml (visualizer section)
grafana:
  database:
    external: true
    type: postgres
    host: <host>
    port: 5432
    username: <tenant-grafana-username>
    password: <tenant-grafana-password>
    database: <tenant-grafana-db>
Additional configuration
See the document Tenant Configuration for additional tenant configuration options. The configuration file will now look something like this:
example.yaml (complete)
global:
  domain: thingshub.smartmakers.de
  name: example
  image:
    tag: 3.11.0
  security:
    service_key: <service-key>
    admin_password: <admin-password>
    tls:
      enabled: true
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
  sql:
    host: <hostname>:<port>
    database: <tenant-sql-name>
    user: <tenant-sql-name>
    password: <tenant-sql-password>
  influx_db:
    address: <hostname>:<port>
    database: <tenant-influxdb-name>
    user: <tenant-influxdb-user>
    password: <tenant-influxdb-password>
grafana:
  database:
    external: true
    type: postgres
    host: <host>
    port: 5432
    username: <tenant-grafana-username>
    password: <tenant-grafana-password>
    database: <tenant-grafana-db>
Everything is now ready for starting up the tenant.
Starting up the tenant
With everything now in place, we can proceed with starting up the tenant. This is done using the helm tool:
helm install --namespace <tenant> -f <tenant>.yaml --name <tenant> th-tenant-<version>.tar.gz
Helm will print a snapshot of the current status of all the new resources created in the cluster.
Checking the tenant's health
To confirm that the installation succeeded, try to log in to the REST API using the admin credentials. Do the same for Grafana.
References
https://www.postgresql.org/docs/8.0/static/sql-createuser.html
https://www.postgresql.org/docs/9.0/static/sql-createdatabase.html
https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/