Teradata Direct connector in Kubernetes


The Starburst Teradata Direct connector is supported for Kubernetes deployments in AWS EKS and in Azure AKS.


Configuring the Starburst Teradata Direct connector on Kubernetes is complex and requires significant Kubernetes and networking knowledge. Contact the Starburst Support team for assistance.

The connector relies on the Teradata cluster nodes, and the deployed table operator, being able to address the SEP worker nodes. Each worker node needs an assigned IP address that is reachable from outside the Kubernetes SEP cluster. This translates into the following technical requirements:

  • A CNI (Container Network Interface) plugin, such as the Amazon VPC CNI plugin on AWS, must be configured with SNAT disabled, so that source network address translation is performed outside of Kubernetes and pods can be reached from outside the cluster by their assigned IP addresses.

  • Teradata must be deployed into the same virtual network, or be connected to it with a VPC/VNet peering connection or a VPN.

  • An adequate number of node instance network interfaces and IP addresses must be available, taking autoscaling needs into account.
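The IP budget for the last requirement can be estimated up front. On EKS with the VPC CNI, the commonly used pod-capacity formula is ENIs × (IPs per ENI − 1) + 2. The following sketch uses assumed interface limits for illustration; look up the real limits for your instance type before sizing the cluster:

```shell
# Assumed limits for an example instance type; check the actual values
# for your instance type, e.g. with `aws ec2 describe-instance-types`.
enis=4          # maximum network interfaces per node
ips_per_eni=15  # IPv4 addresses per interface

# EKS formula: one address per ENI is the primary address, plus 2 for
# pods that use host networking (e.g. kube-proxy, aws-node).
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "$max_pods"
```

Compare this per-node capacity with the number of SEP pods you expect after autoscaling.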

Thoroughly read the Teradata Direct connector documentation before you proceed to set up your environment, and create and verify catalogs.

AWS EKS setup

1. Install the required Amazon VPC CNI plugin (amazon-vpc-cni-k8s).

2. Configure the CNI plugin and restart or recreate the node group to apply the changes:

$ kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
$ kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true
$ kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true
$ kubectl patch daemonset aws-node \
  -n kube-system \
  -p '{"spec": {"template": {"spec": {"initContainers": [{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}]}}}}'


To run more pods on each node, you can deploy ENIConfig custom networking resources and configure the node label that selects them with the following kubectl command:

$ kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
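As a sketch, an ENIConfig resource names a secondary subnet and the security groups used for pod ENIs; its name must match the zone label value of the nodes it applies to. The subnet and security group IDs below are placeholders:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                   # must match the node's zone label value
spec:
  subnet: subnet-0123456789abcdef0   # placeholder: secondary subnet for pods
  securityGroups:
    - sg-0123456789abcdef0           # placeholder: security group for pod ENIs
```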

Details are found in the CNI plugin instructions and the official EKS documentation.

Azure AKS setup

Ensure that the Azure VNet CNI is applied to the cluster from your preferred management interface, such as the Azure Portal or Terraform. The following is an example section from a Terraform resource:

resource "azurerm_kubernetes_cluster" "main" {
  # ... other cluster configuration ...
  network_profile {
    network_plugin = "azure"
    network_mode   = "transparent"
  }
}

Catalog setup and verification

  1. Install the native table operator of the Teradata Direct connector on Teradata.

  2. Add a catalog to your sep-prod-catalogs.yaml file, following this example:

  myteradatadirect: |-
  3. Verify that all pods have assigned IP addresses in a shared network, and that the following command does not return an error indicating that no IP addresses are available:

$ kubectl describe pod <my-deployment-xxxxxxxxxx-xxxxx> -n <my-namespace>

An error similar to the following in the output indicates that no IP address could be assigned:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "<LongHash>" network for pod "<my-deployment-59f5f68b58-c89wx>": networkPlugin
cni failed to set up pod "<my-deployment-59f5f68b58-c89wx_my-namespace>" network: add cmd: failed to assign an IP address to container
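To spot pods that are stuck without an address, you can filter the `IP` column of a wide pod listing. The following is a sketch against a hypothetical captured listing; in practice, pipe real `kubectl get pods -n <my-namespace> -o wide` output through the same filter:

```shell
# Hypothetical `kubectl get pods -o wide` output for illustration only.
listing='NAME   READY  STATUS             RESTARTS  AGE  IP          NODE
pod-a  1/1    Running            0         5m   10.0.1.23   node-1
pod-b  0/1    ContainerCreating  0         5m   <none>      node-1'

# Print the names of pods whose IP column is still <none>,
# meaning the CNI plugin has not assigned them an address yet.
printf '%s\n' "$listing" | awk 'NR > 1 && $6 == "<none>" { print $1 }'
```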
  4. Run the following query against the Teradata Direct catalog through your preferred client:

select count(*) from myteradatadirect.sys_calendar.calendar;
  5. Verify that the receivers are responding. The connector should return the result of the above query immediately. A brief check of the /tmp/starburstdata_table_operator*.log file should not display any sleep: entries, which indicate that receivers could not respond or were not found:

2021-04-15T11:18:32 DEBUG src/main/cpp/util.cpp:211 sleep: 244
2021-04-15T11:18:32 DEBUG src/main/cpp/util.cpp:211 sleep: 500
2021-04-15T11:18:32 DEBUG src/main/cpp/util.cpp:211 sleep: 1000