Configure Starburst AI#

To use the Starburst AI connector and the supported models and functions, configure external large language models (LLMs) or embeddings with a supported provider.

Note

Starburst Enterprise AI is available as a private preview. Contact your Starburst account team for further information.

Prerequisites#

To use and configure external AI models, consider the following:

  • To access the AI models tab, you must have the AI models SEP user interface entities feature privilege.

  • To connect a new model, you must have the CREATE permission and access to All AI models.

  • All credentials must be stored using a secrets manager. Keys and secrets cannot be entered as plaintext in the configuration; provide them by secret reference.

  • For AWS Bedrock connections:

    • The AWS account must have Bedrock service enabled and access to the desired models.

    • Users must have the "Action": ["bedrock:InvokeModel"] AWS IAM permission to use models.

    • To manage models and perform operations as an AWS administrator, you must have the "Action": ["bedrock:*"] permission.

    • Some models require the use of an inference profile. AWS should indicate this requirement in the model card when requesting access. For language models, set the Model ID to the inference profile ID. Starburst recommends using the inference profile ID over the ARN. For embedding models, set the Model ID to the functional model ID, and specify the inference profile ID in the Advanced configuration options section. This is required because each model may have custom encoding requirements.
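
      For example, a US cross-region inference profile ID typically has the form us.anthropic.claude-3-5-haiku-20241022-v1:0, which is the foundation model ID prefixed with a region grouping; confirm the exact profile ID for your model in the AWS Bedrock console.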

Configuration#

To use the AI functions, create a catalog configuration file that references the sep_ai connector, and connect AI language or embedding models as detailed in the following sections.

You must create a catalog named starburst that references the sep_ai connector, typically by creating an etc/catalog/starburst.properties file with the following contents:

connector.name=sep_ai
ai.client.cache.refresh.enabled=true
ai.client.cache.refresh.interval=1s
ai.client.cache.ttl=1h
ai.client.models.storage=EXTERNAL

You can use an embedding model or a language model. The AI functions are available in the ai schema, and therefore use the starburst.ai catalog and schema prefix.

To avoid needing to reference the functions with their fully qualified name, for example, starburst.ai.prompt(), configure the sql.path SQL environment property in the config.properties file to include the catalog prefix:

sql.path=starburst

After configuring sql.path, you can reference the function simply as ai.prompt().
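
For example, assuming a connected language model, both of the following queries invoke the same function. The single text argument shown here is illustrative only; the exact signature of each function is covered in the Starburst AI functions documentation:

SELECT starburst.ai.prompt('What is the capital of France?');
SELECT ai.prompt('What is the capital of France?');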

Since the catalog can reference multiple models, it is sufficient to configure a single catalog for AI functions.

In the ai schema, there are two tables that list the available embedding and language models, embedding_models and language_models, respectively.
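
For example, to show the connected language models, query the table directly:

SELECT * FROM starburst.ai.language_models;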

Catalog configuration properties#

The following general catalog configuration properties are supported by the sep_ai connector:

ai.client.cache.refresh.enabled
    Enables periodic refreshing of the model configuration information cache.
    Default: false

ai.client.cache.refresh.interval
    The interval between automatic cache refreshes. Valid values range from 1s to 10s.
    Default: 1s

ai.client.cache.ttl
    The duration for which model configuration information is cached. Valid values range from 10m to 3h.
    Default: 1h

Providers#

AI functions invoke an external LLM. Access to the LLM API must be configured in the AI models tab. Performance, output, and cost of all AI function invocations are dependent on the LLM provider and the model used. Choose a model that aligns with your specific use case and performance requirements. Starburst supports AWS Bedrock and OpenAI compatible APIs.

AWS Bedrock#

The AWS Bedrock provider offers access to a suite of foundation models hosted on AWS. To use AWS Bedrock models, configure AWS credentials, the provider connection details, and the models you want to use for different AI functions.

Supported models#

The following models are supported by Starburst:

amazon.titan-embed-text-v2:0
    The AWS Bedrock Titan Text Embeddings V2 embedding model.

cohere.embed-multilingual-v3
    The AWS Bedrock Cohere Embed Multilingual model.

anthropic.claude-3-5-haiku-20241022-v1:0
    The AWS Bedrock provided Claude 3.5 Haiku LLM.

meta.llama3-2-3b-instruct-v1:0
    The AWS Bedrock provided Llama 3.2 3B LLM.

For more information about AWS Bedrock’s providers and models, read the documentation.

OpenAI#

The OpenAI provider offers access to a number of language and embedding models. To use OpenAI models, configure the necessary connection information and select appropriate models for your tasks.

Supported models#

The following models are supported by Starburst:

text-embedding-3-small
    The OpenAI text embedding 3 small model.

text-embedding-3-large
    The OpenAI text embedding 3 large model.

gpt-4o-mini
    The OpenAI GPT-4o mini LLM, a compact version of the standard GPT-4o.

For more information about OpenAI’s models and APIs, read the documentation.

Compatible APIs#

Starburst Enterprise supports a number of AI models from Amazon Bedrock and OpenAI. Starburst lets you integrate with compatible APIs from these supported providers, whether the external models are deployed on-premises or in the cloud. This lets you connect to external LLMs for inference tasks and to embedding models that generate vector embeddings for AI search.

Model configuration#

To access the AI models tab in the Starburst Enterprise web UI, you must have the AI models SEP UI feature privilege. After configuring the necessary privileges and catalog, navigate to the AI models tab and click Connect external model.

The following sections provide more detail for connecting to an AI model.

Select a provider and model#

Follow these steps to select one of the supported providers and connect to a model using a compatible API.

  1. From the Provider drop-down menu, select Amazon Bedrock or OpenAI.

  2. From the Model type drop-down menu, select LLM or Embedding.

  3. In the Model name text field, enter the fully qualified name of the model from the provider. For example, if using the AWS Bedrock provided Llama 3.2 LLM, specify meta.llama3-2-3b-instruct-v1:0. When the provider is Amazon Bedrock and the model type is Embedding, the Model name field becomes a drop-down menu. The Model name value can be a foundation model name or an inference profile.

     For more information about AWS Bedrock’s providers and models, read the documentation.

Model ID and description#

The Model ID is the identifier used to reference the model in an AI function when writing SQL, and when showing the catalog and its nested schemas and tables in client applications. The model ID has the following requirements:

  • Max length of 64 characters.

  • Only lowercase letters, numbers, and special characters are allowed.

  • No spaces.
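
For example, under these rules claude_haiku_35 and llama3-2-3b are valid model IDs, while Claude 3.5 Haiku is not, because it contains uppercase letters and spaces.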

If you are configuring a model with the Amazon Bedrock provider, read the prerequisites section.

The Description is a short optional paragraph that provides further details about the AI model.

Connection details#

Specify the connection credentials for AWS IAM role, AWS access key, or OpenAI. Security-sensitive properties such as private keys and passwords cannot be passed as plaintext. If you provide the properties in plaintext, you may get an error prompting you to reference these properties using a secrets manager.

  • AWS IAM role connection details:

    • AWS IAM role arn: Specify the ARN of the IAM role to assume when connecting to AWS.

    • STS AWS access key ID: Specify the Security Token Service (STS) AWS access key to use for authentication for the specified role.

    • STS AWS secret access key: Specify the Security Token Service (STS) AWS secret key to use for authentication for the specified role.

    • AWS region code: The AWS region code. For example, the region code for US East (Ohio) is us-east-2.

For more information about AWS regions and region codes, read the AWS documentation.

  • AWS access key connection details:

    • AWS access key ID: Specify the AWS access key to use for authentication.

    • AWS secret access key: Specify the AWS secret key to use for authentication.

    • AWS region code: The AWS region code. For example, the region code for US East (Ohio) is us-east-2.

  • OpenAI connection details:

    • OpenAI endpoint: Specify the URL for the OpenAI API endpoint. For example, https://api.openai.com/.

    • OpenAI key: Specify the API key value for OpenAI API access. This property is optional when using a compatible OpenAI API that does not require a key.

Use a secrets manager to avoid exposing API or secret key values.
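
For example, assuming the standard SEP ${ENV:...} secret reference syntax and a hypothetical OPENAI_API_KEY environment variable available on the cluster, reference the secret instead of entering the literal key value:

${ENV:OPENAI_API_KEY}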

Advanced configuration options#

In addition to the basic connection details for an external AI model, you can specify additional configuration options to fine-tune and control model behavior. The advanced configuration options available change depending on the Model type.

  • When the model type is set to LLM, the following options are available:

    • Max Tokens: The maximum number of tokens expected to be returned by the model in its output. This generally includes words, punctuation, spaces, and special formatting.

    • Temperature: Accepts a value of 0.1 or higher that controls the perceived randomness or creativity of the model’s output. A lower value makes the output more deterministic.

    • Top P: Controls the diversity of the output as an alternative to temperature sampling. The model selects tokens from the smallest group whose cumulative probability mass exceeds the specified threshold. For example, if the specified value is 0.1, only the tokens that together make up the top ten percent of the probability mass are considered for the next word generation.

  • When the model type is set to Embedding, the following options are available:

    • Dimensions: The number of numerical components in each embedding vector. This determines the size and complexity of the vector space.

    • Inference profile: A profile of preset configuration settings. Read the AWS documentation to learn more about inference profiles in AWS Bedrock.

General system prompts#

Use a system prompt to customize and improve the output of your language model by providing instructions or messages that define its behavior, role, tone, or context before any interaction takes place.

To add a general system prompt, follow these steps:

  1. Click Add general system prompts.

  2. Click Add general system prompt.

  3. In the text field, define a specific action or behavior for the LLM.

  4. If you want to add additional system prompts, click Add general system prompt.

  5. Click Save.

For more information about system prompts, see the general system prompts section.

Prompt overrides#

Prompt overrides let you override general system prompts or use a different model to further refine the output from your LLM.

To add a prompt override, follow these steps:

  1. Click Add prompt override.

  2. Select the function you would like to add the override to, and specify the parameters for either User or System. You can add overrides to multiple functions at the same time.

  3. Click Save.

For more information about prompt overrides, see the prompt overrides section.

Read more about Starburst AI functions to see a list of the supported functions and use cases.

Access control#

Starburst AI lets you connect to a number of supported models, invoke AI functions, and generate embeddings. Starburst Enterprise enables administrators to configure access and privileges to these models and functions for specific roles.

For more information about roles and privileges in Starburst Enterprise, see Built-in access control overview.