Red Hat OpenShift deployment guide
Starburst Enterprise platform (SEP) is available as an operator on OpenShift.
The following are required to deploy SEP on OpenShift:
Access to an OpenShift cluster with correctly sized nodes, using IAM credentials, and with sufficient Elastic IPs.
Previously installed and configured Kubernetes, including access to the cluster.
An editor suitable for editing YAML files.
Your SEP license file.
The latest OpenShift Container Platform (OCP) client for your platform, as described in the OpenShift documentation, with the oc executable copied into your path.
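As a quick check that the client prerequisites are in place, you can run the commands below; the login URL and credentials are placeholders for your own cluster details:

# Confirm the oc client is on your path and report client and server versions
oc version

# Log in to your cluster as an administrator (replace the placeholders)
oc login -u <admin user> -p <password> https://api.<your cluster domain>:6443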
Getting up and running
Before you get started installing SEP, we suggest you read our reference documentation.
You can install SEP on OpenShift using one of the following methods:
Starburst’s Kubernetes deployment. This preferred method allows you to deploy any supported version of SEP.
Starburst’s certified operator in the OpenShift Operator Hub. Note that the supported SEP version may be several versions behind the latest release. We strongly recommend using the Kubernetes deployment instead.
Starburst’s Kubernetes deployment
Starburst’s Kubernetes deployment on OpenShift follows the same steps as with any other Kubernetes service. More information on Starburst’s Kubernetes deployment is available in our Kubernetes reference documentation.
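For orientation, a Helm-based install generally follows the pattern sketched below. The repository URL, registry credentials, chart name, and values file are placeholders; the exact values are provided with your Starburst account and described in the Kubernetes reference documentation:

# Add the Starburst Helm repository (URL and credentials are placeholders)
helm repo add starburstdata <Starburst Helm repository URL> \
  --username <registry user> --password <registry password>
helm repo update

# Install or upgrade SEP into its own namespace with your customized values file
helm upgrade --install starburst-enterprise starburstdata/<SEP chart name> \
  --namespace starburst-enterprise --create-namespace \
  --values ./values.yaml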
Starburst Enterprise Helm operator deployment
In OpenShift, the Helm operator utilizes SEP Helm charts to deploy SEP and the Hive Metastore Service as separate custom resources.
Before you can install SEP, you must install the operator in your OpenShift cluster using the following steps:
Log into the OCP web console using your administrator login for Red Hat OCP.
In the left-hand menu, click Project > Create project, and provide a meaningful name for the project.
Click Create to create the project. Creating a separate project for your SEP deployment makes it easier to distinguish between each of the SEP resources in your OpenShift cluster.
In the left-hand menu, click Operators > Operator hub.
At the top of the screen, expand the Project drop-down, and select the name of the project you just created.
In the Operator Hub search field, search for Starburst.
Click on the Starburst Enterprise Helm operator tile, then click Install.
On the Create Operator Subscription page, in the Installation mode section, choose A specific namespace on the cluster.
In the Installed namespace field, select the Starburst project you just created.
Leave all other options as default, and click Subscribe to finish the installation.
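If you prefer to verify the installation from the command line, you can list the ClusterServiceVersions in your project; the project name is a placeholder for the one you created earlier:

# Check that the Starburst Enterprise Helm operator CSV reports a Succeeded phase
oc get csv -n <your project name>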
Now that the Starburst Enterprise Helm operator is installed in your OCP cluster, you must add a license file.
The Starburst Enterprise Helm operator is not configured with a license file by default. Use the following steps to add a license to your SEP cluster:
In the OCP web console, go to Workloads > Secrets, then click Create.
Expand the dropdown menu, and select Key-value secret.
In the Secret name field, input starburstdata.
In the Key field, input
In the Value field, click Browse, then select your Starburst Enterprise license file from your local machine.
Click Create to create the secret.
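The same secret can also be created with the oc client; the key name and file path below are placeholders, and the key must match the one the operator expects:

# Create the license secret from your local license file
oc create secret generic starburstdata \
  --from-file=<key name>=<path to your license file> \
  -n <your project name>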
Once you have created the secret, you must add the license file property to the ClusterServiceVersion (CSV). Use the following steps to add this property:
In the OCP web console, go to Installed Operators.
Click Starburst Enterprise Helm operator.
Click the YAML tab.
Add the license file property "starburstPlatformLicense": "starburstdata" on the first indented level under the spec field for the Starburst Enterprise resource. The following example shows the proper format:
spec:
  starburstPlatformLicense: starburstdata
Click Save, then Reload.
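If you prefer to make this change from the command line, the same YAML can be edited with oc; the CSV name below is a placeholder that you can look up with oc get csv:

# Open the operator's ClusterServiceVersion in your editor and add the
# starburstPlatformLicense property under the spec section described above
oc edit csv <Starburst operator CSV name> -n <your project name>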
Make any additional configuration changes based on your specific needs.
After you have completed all the configurations for your deployment of SEP, you can install each resource:
In the OCP web console, click Installed Operators > SEP Helm Operator.
Click Create instance for the resource you want to install, and provide a meaningful name for this instance of the resource, such as starburst-enterprise-production. Click Create.
Go to Workloads > Pods to track the installation status of the resource.
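You can also follow the rollout from the command line; the project name is a placeholder:

# Watch the coordinator and worker pods until they reach the Running state
oc get pods -n <your project name> -w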
To access your SEP cluster in OpenShift, you must create a route that points to the appropriate Kubernetes service. To create a route, go to the OCP web console and select Networking > Routes. Be sure to select the correct service in the Service field, and ensure that the correct port mapping is selected in the Target port field based on your cluster’s configuration.
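Alternatively, a route can be created from the command line; this sketch assumes the default starburst service name and your project name:

# Expose the SEP service as a route, then print the assigned hostname
oc expose service starburst -n <your project name>
oc get route -n <your project name>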
If you need Ranger in your cluster, you can install it using Starburst’s Helm chart. To configure Ranger, read our Configuring Starburst Enterprise with Ranger in Kubernetes documentation.
Your cluster is now operational! You can now connect to it with your client tools, and start querying your data sources.
Follow these steps to quickly test your deployed cluster:
Create a route to the default starburst service. If you changed the name in the expose section, use the new name.
Run the following command using the CLI with the configured route:
trino --server <URL from route> --catalog tpch
Run SHOW SCHEMAS; in the CLI, and you can see a list of schemas available to query, with names such as sf100 and others.
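As a further smoke test, you can run a query non-interactively; the schema and query below are only examples against the tpch catalog used in the previous step:

# Run a simple query to confirm the cluster accepts and executes work
trino --server <URL from route> --catalog tpch --schema sf1 \
  --execute "SELECT count(*) FROM nation"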
We’ve created an operations guide to get you started with common first steps in cluster operations.
It includes some great advice about starting with a small, initial configuration that is built upon in our cluster sizing and performance video training.
SEP is powerful, enterprise-grade software with many moving parts. As such, if you find you need help troubleshooting, here are some helpful resources:
Q: Once it’s deployed, how do I access my cluster?
A: You can connect with the Trino CLI or through the web UI. Trino CLI command:
./trino --server example-starburst-enterprise.apps.demo.rht-sbu.io --catalog hive
Web UI URL: the hostname of the route you created, opened in a browser.
Many other client applications can be connected and used to run queries, create dashboards, and more.
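If you are unsure of the web UI address, you can look up the route hostname and check that the UI responds; the project name and hostname below are placeholders, and the scheme depends on your route’s TLS settings:

# Find the route hostname, then confirm the web UI answers
oc get route -n <your project name>
curl -I http://<route hostname>/ui/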
Q: I need to make administrative changes that require a shell prompt. How do I get a command line shell prompt in a container within my cluster?
A: On OCP, you get a shell prompt inside a pod. You need the name of the pod you want to work from; to find it, log in to your cluster. For example:
oc login -u kubeadmin -p XXXXX-XXXXX-XXXXX-XXXX https://api.demo.rht-sbu.io:6443
Get the list of running pods:
❯ oc get pod -o wide
NAME                                                       READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
hive-XXXXXXXXX-lhj7l                                       1/1     Running   0          27m   10.131.2.XX   ip-10-0-139-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-coordinator-example-XXXXXXXXX-4bzrv   1/1     Running   0          27m   10.129.2.XX   ip-10-0-153-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-operator-7c4ff6dd8f-2xxrr             1/1     Running   0          41m   10.131.2.XX   ip-10-0-139-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-worker-example-XXXXXXXXX-522j8        1/1     Running   0          27m   10.131.2.XX   ip-10-0-139-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-worker-example-XXXXXXXXX-kwxhr        1/1     Running   0          27m   10.130.2.XX   ip-10-0-162-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-worker-example-XXXXXXXXX-phlqq        1/1     Running   0          27m   10.129.2.XX   ip-10-0-153-XXX.us-west-2.compute.internal   <none>           <none>
The pod name is the first value in each record. Use the pod name to open a remote shell:
❯ oc rsh starburst-enterprise-coordinator-example-XXXXXXXXX-4bzrv
A shell prompt appears.
Q: Is there a way to get a shell prompt through the OCP web console?
A: Yes. Log in to your OCP web console and navigate to Workloads > Pods. Select the pod you want a terminal for, and click the Terminal tab.
Q: I’ve added a new data source. How do I update the configuration to recognize it?
A: Use the making configuration changes section to edit your YAML configuration: locate additionalCatalogs and add an entry for your new data source. For example, to add a PostgreSQL data source called mydatabase:
mydatabase: |
  connector.name=postgresql
  connection-url=jdbc:postgresql://172.30.XX.64:5432/pgbench
  connection-user=pgbench
  connection-password=postgres123
Once your changes are complete, click Save and then Reload to deploy your changes. Note that this restarts the coordinator and all workers on the cluster, and might take a little while.
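Once the restart completes, a quick way to confirm the new catalog is available is to list catalogs from the CLI, for example:

# The new catalog, such as mydatabase, should appear in the output
trino --server <URL from route> --execute "SHOW CATALOGS"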