Coordinator high availability#
Starburst Enterprise platform (SEP) offers the ability to enable high availability (HA) of the coordinator. In the event the coordinator becomes unavailable, the cluster automatically switches over to a new coordinator and continues to accept new queries.
Coordinator high availability (HA) is only supported via the Starburst CloudFormation template in AWS. To fully utilize this capability, set the ``HACoordinatorsCount`` field of the Stack creation form (Configuration section) to a value greater than 1. Setting it to 3 should suffice for most scenarios.
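If you create the stack with the AWS CLI rather than the console form, the same field can be supplied as a template parameter. A minimal sketch of a parameters file, assuming ``HACoordinatorsCount`` is the only parameter you override (the file name and any other parameters are hypothetical):

```json
[
  {
    "ParameterKey": "HACoordinatorsCount",
    "ParameterValue": "3"
  }
]
```

You would pass such a file alongside the other required stack parameters, e.g. via ``aws cloudformation create-stack --parameters file://ha-params.json ...``.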
HA is always enabled. However, when ``HACoordinatorsCount`` is set to 1, there is no hot standby; in that case SEP eventually creates a new coordinator, which may take several minutes. If it is set to 2 or more, there are hot standby coordinators and the fail-over switch to one of them is much faster.

Note: It is possible to disable automatic failover of the coordinator by setting the ``KeepCoordinatorNode`` CFT parameter.
Coordinator IP address#
The coordinator is accessible via an attached Elastic Network Interface (ENI), which has a static, auto-assigned private IP address. After you launch the Starburst CloudFormation cluster stack, note the following keys in the stack's Outputs section/tab in the AWS CloudFormation console:

- ``StarburstCoordinatorURL`` is the Trino (formerly PrestoSQL) Web UI and REST API endpoint address, which you point your Trino CLI or JDBC/ODBC drivers at, or use to access the Trino Web UI from your browser.
- ``StarburstSSH`` notes the SSH connection details to manually log onto the current coordinator.
In general, when the current coordinator fails, the HA mechanism kicks in and performs the following steps:

1. Terminate the old/failed coordinator EC2 instance.
2. Attach the Elastic Network Interface to the new coordinator.
3. Launch a new stand-by coordinator (within a couple of minutes).
The core failover process (steps 1 and 2) should complete in under a minute, measured from the time the coordinator starts failing to respond. The switch itself takes a matter of seconds once the coordinator is identified as failed, but there is a built-in time buffer so that the mechanism does not act on a false alarm. Once the Elastic Network Interface is attached to the new coordinator, it is almost immediately available to clients.
In practice, a coordinator may “die” for one of the following reasons:

- The node becomes unresponsive (e.g. hardware, OS-level, or network issues).
- The node disappears, e.g. it is terminated by an account administrator or by AWS.
- The SEP process exits because of a fatal error.
- The SEP process becomes unresponsive, e.g. because a long full garbage collection is taking place.
In all of those scenarios, after a short grace period, the failed coordinator, if it still exists, is terminated. Then a new coordinator is chosen among the hot standby instances and the coordinator ENI is attached to it. Clients should re-issue the failed queries when the new coordinator becomes accessible. A new hot standby coordinator is launched in the background to take the place of the one that has just been promoted.
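The detection side of this behavior can be sketched as a simple polling loop: probe the coordinator's health endpoint and only declare a failure after several consecutive misses, mirroring the built-in time buffer that avoids acting on a false alarm. This is an illustrative sketch, not SEP's actual implementation; the endpoint (Trino's ``/v1/info``), port ``8080``, and the thresholds are assumptions:

```shell
# Illustrative failure detector (not SEP's actual code). The URL, miss
# threshold, and polling interval are all assumed values.
HEALTH_URL="${HEALTH_URL:-http://localhost:8080/v1/info}"
MAX_MISSES=3        # grace buffer: require several consecutive misses
misses=0
status=healthy
for _ in 1 2 3; do
  if curl -fsS --max-time 2 "$HEALTH_URL" > /dev/null 2>&1; then
    misses=0        # a successful probe resets the counter
  else
    misses=$((misses + 1))
  fi
  if [ "$misses" -ge "$MAX_MISSES" ]; then
    status=failed   # only now would failover be triggered
    break
  fi
  sleep 1
done
echo "coordinator status: $status"
```

A single transient miss is absorbed by the counter reset; only a sustained outage flips the status and triggers the failover steps above.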
When SEP is deployed in a custom setup (e.g. with a bootstrap script that sets up security), make sure the unsecured HTTP port is open and accessible from localhost. You may want to restrict access to it by binding it to localhost only, or otherwise securing external access, e.g. via the AWS Security Group assigned to the cluster. See HA with HTTPS enabled for more information.
The coordinator ENI has a private IP address that is accessible only within the same VPC as the cluster stack. This means that in order to connect to the coordinator, you need to initiate the connection from a client on an EC2 machine deployed in the same VPC, or from a machine connected to the VPC via a VPN.
Note that all queries that were running when the coordinator failed fail to complete. You need to restart these queries on the new coordinator. Similarly, SSH connections to the old coordinator need to be re-established after the fail-over.
When connecting via SSH, depending on your SSH configuration, you may see login issues such as ``REMOTE HOST IDENTIFICATION HAS CHANGED``, because the underlying host has changed and the host key fingerprint that was previously accepted no longer matches. You can skip host key verification entirely by adding ``-o StrictHostKeyChecking=no`` to the SSH command, or delete the old key from your ``known_hosts`` file and accept the new one.
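Either workaround can be applied from the command line; the coordinator address below is a hypothetical placeholder:

```shell
# Remove the stale entry for the coordinator from the default known_hosts
# file. 10.0.1.5 is a hypothetical coordinator IP; substitute your ENI address.
ssh-keygen -R 10.0.1.5 || true

# Alternatively, skip host key verification for this one invocation
# (convenient right after a fail-over, but it disables MITM protection):
#   ssh -o StrictHostKeyChecking=no ec2-user@10.0.1.5
```

``ssh-keygen -R`` removes every ``known_hosts`` line matching the given host, so the next connection prompts you to accept the new coordinator's key.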
HA with HTTPS enabled#
The coordinator’s health is checked by polling SEP locally via HTTP. This is why you need to have the coordinator’s HTTP port open even if you configured SEP to use HTTPS, regardless of how many coordinators are configured (even if only one). Workers do not need to have their HTTP port open, although we recommend using HTTP for internal cluster communication (unless HTTPS is explicitly required for internal communication as well). Note that using HTTPS for internal communication may have a substantial impact on overall cluster performance, because all intermediate data needs to be encrypted and decrypted. The overhead of HTTPS depends on the amount of data sent over the network and the actual ciphers being used.
```
http-server.http.enabled=true
http-server.https.enabled=true
# ... other https-related configs
```
Additionally, you should block non-local HTTP access to the coordinator by configuring the AWS Security Group assigned to the cluster accordingly.
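As a sketch, a Security Group along these lines keeps HTTP unreachable from outside while exposing HTTPS to clients. All resource names, the CIDR, and the ports (Trino's common defaults ``8080``/``8443``) are assumptions; adapt them to your stack:

```yaml
# Hypothetical CloudFormation fragment; names, CIDR, and ports are assumptions.
CoordinatorSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: SEP coordinator access
    VpcId: !Ref ClusterVpc            # assumed VPC reference
    SecurityGroupIngress:
      - IpProtocol: tcp               # HTTPS for external clients
        FromPort: 8443
        ToPort: 8443
        CidrIp: 10.0.0.0/16           # your client network
# HTTP (8080) intentionally has no external ingress rule here; cluster-internal
# traffic can be allowed with a separate self-referencing ingress rule.
```

With no ingress rule for the HTTP port, the local health-check polling still works (it connects via localhost), while remote clients can only reach the HTTPS endpoint.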