Installation#
Overview#
Mission Control consists of a Java backend server accessed via a web front end. Data is stored in a PostgreSQL database. Mission Control has to run on the same cluster and infrastructure used for the Starburst Enterprise platform (SEP) installation it controls.
Installation#
Mission Control can be run on any Kubernetes cluster, including Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and Amazon Elastic Kubernetes Service (EKS). It can also be run on other Kubernetes deployments, whether managed by your infrastructure team or provided by other public cloud providers.
Requirements#
The installation uses Helm and requires the same initial setup and configuration as the other Helm charts.
For setting up Mission Control with Helm on your cluster, follow these steps:

1. Establish access to the Helm chart repository and the Docker registry. Ensure you have access to the starburst-mission-control and starburst-presto-helm-operator charts.

2. To configure Mission Control, specify the properties in a values.yaml file. Refer to the configuration section for the supported configuration properties.

3. Ensure your Helm/kubectl configuration points at the correct cluster with kubectl cluster-info.

4. Proceed to use helm install to deploy the charts, as shown below.
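Before installing, you can confirm that the charts are visible to Helm. A minimal sketch, assuming the chart repository was added under the name starburstdata, which is the name the install commands below use:

helm repo update
helm search repo starburstdata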
Installation with Helm#
Deploy and start SEP with the Helm operator:
Note

Remember to adjust the values.yaml file by providing the Docker registry credentials.
Warning
If you upgrade from 345-e or earlier, remember to uninstall the old operator first, and ensure that all related objects are deleted, including the CRD.
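The exact release and CRD names depend on your installation, so the following cleanup is only a hedged outline with placeholders to substitute:

helm list
helm uninstall <old-operator-release-name>
kubectl get crd | grep presto
kubectl delete crd <old-operator-crd-name>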
helm install --version 350.2.0 --values values.yaml --generate-name starburstdata/starburst-presto-helm-operator
Deploy and start the Mission Control server:
helm upgrade --install --version 350.2.0 --values values.yaml starburst-mission-control starburstdata/starburst-mission-control
After a short while everything is up and running. Confirm the details of the service with kubectl.
kubectl get service/mission-control
Once the server is up and running, take note of the server URL or IP, and proceed to log in and get started.
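If you kept the default clusterIp service type, a port forward is one way to reach the WebUI for a first login. A minimal sketch; the local and service port numbers here are assumptions, and HTTP access must be permitted (see the allowInsecureHttp property in the configuration section):

kubectl port-forward service/mission-control 8080:80

Then open http://localhost:8080 in your browser.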
Mission Control can now manage your data sources and clusters. It replaces the traditional management of configuration files.
Configuration#
Mission Control is automatically configured with reasonable default values as part of the installation process on Kubernetes.
Override default values or set advanced configuration properties in the values.yaml file in the Helm chart. More details on these properties are outlined in the following sub-sections.
Any changes you make require a restart of Mission Control.
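For example, after editing values.yaml, you can apply the changes with the same helm upgrade command used during installation and then restart the server. The deployment name mission-control is an assumption based on the service name shown above:

helm upgrade --install --version 350.2.0 --values values.yaml starburst-mission-control starburstdata/starburst-mission-control
kubectl rollout restart deployment/mission-control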
Memory allocation and additional JVM properties#
missioncontrol:
  memoryAllocation: 1G
  additionalJvmConfig: |
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+UseGCOverheadLimit
    -XX:+ExitOnOutOfMemoryError
  allowInsecureHttp: false
Note
The default values expose.type: "clusterIp" and missioncontrol.allowInsecureHttp: false do not allow external access to the Mission Control WebUI. You need to secure the access by configuring TLS encryption through a load balancer or ingress, or by enabling insecure HTTP access with missioncontrol.allowInsecureHttp: true.
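For example, a values.yaml fragment that opens up insecure HTTP access for testing might look like the following sketch; the loadBalancer value for expose.type is an assumption and may differ in your chart version:

expose:
  type: "loadBalancer"
missioncontrol:
  allowInsecureHttp: true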
Authentication#
For authentication you can choose one of the following:
none - Mission Control manages its own users:

authentication:
  type: none

google - Mission Control integrates with Google authentication:

authentication:
  google:
    clientId: 1234567890-example.apps.googleusercontent.com
    clientSecret: SuperSecureSecret
    hostedDomain: starburstdata.com
Registry credentials#
Use this section to provide your credentials to connect to an authenticated Docker registry.
registryCredentials:
  enabled: true
  registry: harbor.starburstdata.net
  username: example_username
  password: SuperSecurePassword
Database#
Use this section to configure the database backend for Mission Control.
Warning
Mission Control 350.1.0 and higher requires PostgreSQL 11 or later, as it uses stored procedures.
There are three types of databases available:
embedded - use an in-memory database. It is destroyed when Mission Control is shut down. This makes it suitable for quick demo/trial scenarios.

database:
  type: "embedded"
  embedded:
    user: "na"
    password: "na"
internal - provisions a PostgreSQL database in the cluster. You can provide either an existing persistent volume claim or a specification for a new persistent volume claim for storing the data.

database:
  type: "internal"
  internal:
    image:
      repository: "library/postgres"
      tag: "11.3"
      pullPolicy: "IfNotPresent"
    volume:
      persistentVolumeClaim:
        storageClassName:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
    resources:
      requests:
        memory: 1Gi
        cpu: 2
      limits:
        memory: 1Gi
        cpu: 2
    port: 5432
    databaseName: missioncontrol
    user: mission_control_admin
    password: McPass123
external - use an external PostgreSQL database. This is typically used for normal day-to-day usage. This option only requires the connection details for the database.

database:
  type: "external"
  external:
    port: 5432
    host: database.example.com
    databaseName: mcdb
    user: example_db_user
    password: SuperSecurePassword