AWS EKS networking#

This document helps you to understand networking requirements and configuration for Starburst Enterprise platform (SEP) on AWS EKS.


We strongly suggest using Amazon's Classic Load Balancer with SEP. Application Load Balancers (ALB) are an advanced configuration that is complex to set up and highly dependent on the particulars of your environment.

Best practice: Use a single availability zone inside your existing VPC#

When deploying SEP into an AWS EKS cluster, it is important to ensure that your cluster resources are located with the following in mind:

  • cluster communications latency

  • data ingress/egress costs

  • IP address availability

Using your existing VPC is a key cost control measure. It ensures that costs associated with data ingress and egress are kept to a minimum. Every new VPC comes with a NAT gateway, and while NAT gateways themselves are inexpensive, costs for data transferred through them add up very quickly. As a best practice, placing SEP inside your existing VPC, co-resident with as many of your data sources as possible, not only keeps costs down, it also greatly simplifies networking and security.

Equally important is performance. Whether you use an existing or a new VPC, SEP must run in a single availability zone (AZ) to ensure the best possible performance. To accomplish this, use node groups, tied to a single AZ using affinity or node selection rules.

For SEP, two managed node groups are required. The SEP coordinator and workers are deployed to one group, while support services, such as the Hive Metastore Service (HMS), Ranger, and Nginx, are deployed to the second node group.


You must use an existing VPC in order to enforce the use of a single AZ. Do not use eksctl to create a new VPC at cluster creation time.

IP address requirements#

An important consideration in using an existing VPC is IP address availability. As part of standing up your SEP cluster, you must ensure that sufficient IP addresses are reliably available for use by your SEP instances.

In EKS clusters, AWS creates one subnet per availability zone. Usually, these are configured as /20 Classless Inter-Domain Routing (CIDR) blocks with 4,091 IP addresses available for use (an additional 5 are reserved by the cluster itself). SEP requires that all hosts, both workers and the coordinator, are sized identically. Each of these instances has a maximum number of IP addresses that can be assigned to it, and EKS reserves twice that number of addresses for it.
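The subnet capacity figure can be verified with Python's standard ipaddress module. A quick sketch; the 10.0.0.0/20 CIDR is an arbitrary example:

```python
import ipaddress

# Any /20 subnet has the same capacity; this CIDR is an arbitrary example.
subnet = ipaddress.ip_network("10.0.0.0/20")

total = subnet.num_addresses  # 4096 addresses in a /20
usable = total - 5            # 5 addresses are reserved by the cluster
print(total, usable)          # 4096 4091
```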

For purposes of this discussion, we assume an SEP deployment involving six m5.xlarge workers and one m5.xlarge coordinator. Each of these instances can have a maximum of 15 IP addresses in use. An additional 15 are reserved, for a total of 30 IP addresses needed per instance. Those seven instances together then require 210 IP addresses:

  ( 7 m5.xlarge instances)
x ( 15 IPs per interface )
x ( 2 interfaces per instance )
= 210 IP addresses needed

In this example, you must ensure that a minimum of 210 IP addresses are reliably available for use by your SEP instances at all times.


If you are taking advantage of AWS EKS autoscaling, you must calculate the number of addresses needed based on the maximum allowable number of workers.
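The arithmetic above generalizes to a one-line calculation. A minimal sketch; the per-interface limit of 15 applies to m5.xlarge, so look up the limit for your own instance type:

```python
def required_ips(instance_count: int, max_ips_per_interface: int = 15,
                 interfaces_per_instance: int = 2) -> int:
    """IP addresses EKS needs: the per-interface maximum, doubled,
    for every instance in the cluster."""
    return instance_count * max_ips_per_interface * interfaces_per_instance

# Six m5.xlarge workers plus one m5.xlarge coordinator:
print(required_ips(7))   # 210

# With autoscaling, size for the maximum worker count instead,
# for example up to 20 workers plus the coordinator:
print(required_ips(21))  # 630
```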

Using subnets#

You can use an existing subnet or create a new one. It must be configured with a route out to the internet, either via a NAT gateway or an internet gateway (IGW), to allow your EKS cluster to communicate with the AWS EKS management backplane. Cost considerations for these communications are minimal.

Considerations if you must use VPC peering#

If you cannot place SEP within your current VPC because of a scarcity of IP addresses, you can instead create a peering connection with the new EKS cluster's VPC to avoid the often cost-prohibitive passing of all data through the NAT gateway. VPC peering requires additional setup, and comes with potential downsides:

  • Does not scale well.

  • Transitive routing is not available.

  • Peering connections are a resource that must be managed.

  • Firewall rules must be carefully managed.

Additionally, you must ensure that the CIDR you set for the new SEP VPC does not match or overlap with your existing VPC’s CIDR. VPCs with overlapping CIDRs cannot create peering connections.
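A CIDR overlap can be checked before you attempt to create the peering connection, again with the standard ipaddress module; the CIDR values below are placeholders for your own ranges:

```python
import ipaddress

existing_vpc = ipaddress.ip_network("10.0.0.0/16")   # your current VPC
candidate = ipaddress.ip_network("10.0.16.0/20")     # proposed SEP VPC CIDR

# Overlapping CIDRs cannot create a peering connection.
print(existing_vpc.overlaps(candidate))        # True: pick a different range

non_overlapping = ipaddress.ip_network("10.1.0.0/16")
print(existing_vpc.overlaps(non_overlapping))  # False: peering is possible
```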

The following subnet tags are required when using internet-facing load balancers:

  • Subnets tagged with kubernetes.io/cluster/<cluster-name> → shared

  • Subnets tagged with kubernetes.io/role/elb → 1

The following subnet tag is required when using load balancers that are not internet-facing:

  • kubernetes.io/role/internal-elb → 1

Configure DNS#

Install the ExternalDNS add-on to automatically add DNS entries to Amazon Route 53 once ingress resources are created, as shown in the following example external-dns.yaml file:

provider: aws
aws:
  zoneType: public
  region: us-east-2
txtOwnerId: <txtOwnerId>   # Identify this
txtPrefix: <txtPrefix>     # external DNS instance
zoneIdFilters:
  - <HostedZoneID> # ID of AWS R53 Hosted Zone

# This allows external-dns to delete entries as well
policy: sync

Run the following commands to complete the install. Be sure to use the latest artifact of external-dns and specify its version in the command:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm upgrade external-dns bitnami/external-dns --namespace kube-system --install --version <X.Y.Z> \
  --values external-dns.yaml

Best practice: Configure ingress access for SEP and services through a controller#

Accessing your SEP cluster from external clients requires access to the coordinator and Ranger pods within the K8s cluster. This can be handled either through an AWS classic load balancer, or by deploying an Nginx ingress controller behind a classic layer 4 load balancer.

There are technical considerations for either approach:

  • Using an AWS classic load balancer

    • TLS termination is handled externally to the load balancer.

    • A separate load balancer resource is created for each service (Starburst, Ranger, etc.)

  • Using an Nginx ingress controller

    • Nginx can route traffic to different services using a single load balancer.

    • Certificates must be issued by an internal or alternate CA, as it is not possible to use AWS Certificate Manager with an Nginx pod.

For either approach, you must first configure and apply changes to your cluster’s ingress.yaml file. When you have verified that the ingress for the cluster is working properly, proceed with configuring and deploying your ingress controller of choice.


While it is possible to set up ingress directly in Starburst Helm charts, it is not recommended. Using standard K8s controllers as outlined here avoids complex networking and certificate issues inherent in connecting directly through application-specific ingress configurations.

Configure ingress.yaml#

The following ingress.yaml file provides the necessary networking configuration for both SEP and Ranger. You can safely ignore the Ranger configuration if your organization does not use it:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starburst
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sep
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/success-codes: 200,303
    alb.ingress.kubernetes.io/tags: key1=value1, key2=value2
    alb.ingress.kubernetes.io/certificate-arn: <cert-??>
spec:
  tls:
    - hosts:
        - <starburst-hostname>
  rules:
    - host: <starburst-hostname>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: starburst
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ranger
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sep
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/success-codes: 200,302
    alb.ingress.kubernetes.io/tags: key1=value1, key2=value2
    alb.ingress.kubernetes.io/certificate-arn: <cert-??>
spec:
  tls:
    - hosts:
        - <ranger-hostname>
  rules:
    - host: <ranger-hostname>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ranger
                port:
                  number: 6080

Once all changes to ingress.yaml are made, apply the customizations using the following command in the namespace that contains SEP:

$ kubectl apply -f ingress.yaml

Using an AWS classic load balancer#

The AWS documentation provides complete instructions for creating a classic load balancer on your EKS cluster’s VPC.

Once your load balancer is created, you must define it in your SEP chart as shown below, with TLS enabled:

expose:
  type: "loadBalancer"
  loadBalancer:
    name: "starburst"
    IP: ""
    ports:
      https:
        port: 8443
      http: null
    annotations: {}
    sourceRanges: []

Using Nginx#

The Nginx controller is deployed as an additional pod in the EKS cluster. Certificate management for HTTPS connections is implemented using one of the following Helm-managed certificates:

  • A self-signed certificate

  • An internal CA-signed certificate

  • A Let's Encrypt-issued certificate using cert-manager

The following is an example of an Ingress resource used to control access to an SEP cluster through Nginx:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - <hostname>
      secretName: tls-secret
  rules:
    - host: <hostname>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: trino
                port:
                  number: 8080

In the Nginx ingress controller's own chart values, the controller service is exposed through an NLB, and the default TLS certificate is specified:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  extraArgs:
    ##   default-ssl-certificate: "<namespace>/<secret_name>"
    default-ssl-certificate: default/tls-secret

In this case, the default certificate is a Kubernetes secret specified by applying the following YAML:

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: default
type: kubernetes.io/tls
data:
  ## Both tls.crt and tls.key are base64-encoded PEM files on a single line
  tls.crt: <self-signed certificate>
  tls.key: <self-signed private key>
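The single-line base64 values for tls.crt and tls.key can be produced from the PEM files with a short script. A sketch; cert.pem and key.pem are placeholder filenames:

```python
import base64

def b64_single_line(path: str) -> str:
    """Base64-encode a PEM file as a single line, as Secret data requires."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# print(b64_single_line("cert.pem"))  # value for tls.crt
# print(b64_single_line("key.pem"))   # value for tls.key
```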

In the SEP values.yaml file, ingress is exposed via the following YAML block:

expose:
  type: "ingress"
  ingress:
    ingressName: "nginx"
    serviceName: "trino"
    servicePort: 8080
    tls:
      enabled: true
    host: ""
    path: "/"
    pathType: Prefix
    annotations:
      kubernetes.io/ingress.class: "nginx"

All inbound HTTPS traffic to the configured host is now directed to the SEP coordinator. A similar configuration is applied in the Starburst Ranger Helm chart, with the Ranger hostname as the host.

Advanced networking: Using ALBs with SEP#

After you have your cluster up and running, you may wish to switch to using an AWS Application Load Balancer (ALB). ALBs are inherently complicated and configuration is very dependent upon the particulars of your environment. When using an ALB ingress controller:

  • TLS termination is taken care of in the load balancer.

  • It is easy to integrate AWS Certificate Manager for publicly-signed certificates.

  • A separate load balancer resource must be created for each service (Starburst, Ranger, etc.)

In addition to setting the kubernetes.io/ingress.class: alb annotation, the following annotations are important considerations to remember as you modify your cluster's ingress.yaml file:

  • alb.ingress.kubernetes.io/certificate-arn - If AWS ACM contains multiple certificates matching the hostname, specify the certificate ARN explicitly; otherwise you can remove this annotation.

  • alb.ingress.kubernetes.io/group.name - Forces the ALB controller to combine multiple Kubernetes ingress resources into a single ALB on AWS, reducing costs.

  • alb.ingress.kubernetes.io/success-codes - Contains HTTP response codes of the root (/) resource that are used by target group health checks. By default only 200 is accepted, but SEP responds with 303, and Ranger with 302.

  • alb.ingress.kubernetes.io/listen-ports - Ensures that the ALB listens on both HTTP and HTTPS ports.

  • alb.ingress.kubernetes.io/ssl-redirect - Redirects from HTTP to HTTPS.

Once all changes to ingress.yaml are made, apply the customizations using the following command in the namespace that contains SEP:

$ kubectl apply -f ingress.yaml

You must wait until the ALB status changes from Provisioning to Ready before the networking behaves as expected with the configuration changes.

The AWS documentation provides complete instructions for installing the ALB ingress controller on your EKS cluster. The following example commands provide an overview of the process to install ALB controller that automatically discovers ingress resources and propagates them to AWS ALB and target groups.

$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=<cluster-name>

Once the ALB controller is installed, you must define ingress in your SEP chart:

expose:
  type: "ingress"
  ingress:
    ingressName: "starburst-ingress"
    serviceName: "starburst"
    servicePort: 8080
    tls:
      enabled: true
    host: ""
    path: "/*"
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip

With the defined host, the ingress controller automatically associates this load balancer with any certificate in ACM whose Subject Alternative Name matches the host, including wildcard certificates.

By default, the controller creates an ALB, configures it with a TLS certificate stored in AWS Certificate Manager (the private key is not exposed to Kubernetes), and terminates TLS at the ALB.

Target groups are configured to send traffic to ports on Kubernetes worker nodes. A NodePort service type must be configured in Kubernetes to expose specific ports across all nodes.

The following traffic flows apply when using ingress as the expose type:

  • Kubernetes pod: Runs the workload and listens on a port

  • kube-proxy: Listens on NodePort, proxies traffic to the pods in cluster

  • Target Groups: Discover EKS worker nodes, run health checks

  • ALB: Terminates TLS, performs HTTP routing

  • Route 53 A record: Points to ALB endpoints in public subnets


When using ingress, you must specify the TLS certificate as a Kubernetes secret if TLS is enabled.