Starburst Galaxy supports a variety of embedding and language models. To use Starburst AI and the supported models and functions with Galaxy, configure external large language models (LLMs) or embedding models with a supported integration.
The following topics cover configuration of integrations, models, and functions.
To use and configure external AI models, consider the following:

- Functions are available in the `starburst` catalog. You must have the necessary privileges at the catalog level to execute each table function that you want to use.
- Connections to AWS Bedrock require the `"Action": ["bedrock:InvokeModel"]` AWS IAM permission to use models. You can instead grant the broader `"Action": ["bedrock:*"]` permission.
- AI functions invoke an external LLM. Access to the LLM must be configured in the AI models pane. Performance, output, and cost of all AI function invocations are dependent on the LLM integration and the model used. Choose a model that aligns with your specific use case and performance requirements.
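For reference, a minimal IAM policy granting the invoke permission might look like the following sketch. How tightly you scope `Resource` (for example, to the ARNs of the specific models you configure in Galaxy rather than `*`) depends on your AWS setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "*"
    }
  ]
}
```

Replacing the action with `"bedrock:*"` grants the broader permission mentioned above, which covers all Bedrock operations.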
Starburst supports AWS Bedrock and OpenAI-compatible APIs.
AWS Bedrock offers access to a suite of foundational models hosted on AWS. Integrating AWS Bedrock models requires configuring AWS credentials and specifying the models that you want to use for different AI functions. To use AWS Bedrock models, configure the necessary model connection details.
The following models are tested by Starburst:
| Model name | Description |
|---|---|
| amazon.titan-embed-text-v2:0 | The AWS Bedrock Titan Text Embeddings v2 embedding model. |
| cohere.embed-multilingual-v3 | The AWS Bedrock Cohere Embed multilingual model. |
| anthropic.claude-3-5-haiku-20241022-v1:0 | The AWS Bedrock-provided Claude 3.5 Haiku LLM. |
| meta.llama3-2-3b-instruct-v1:0 | The AWS Bedrock-provided Llama 3.2 3B LLM. |
Starburst supports AWS Bedrock models that support the Converse API. This includes most AWS Bedrock models, but may not include custom models and newer open source models.
For more information about AWS Bedrock’s supported models, read the documentation.
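For context, the Converse API gives all supporting Bedrock models a uniform request shape. A minimal request body looks like the following sketch; the model ID, such as anthropic.claude-3-5-haiku-20241022-v1:0 from the table above, goes in the request URL rather than the body:

```json
{
  "messages": [
    {
      "role": "user",
      "content": [{ "text": "Classify this review as positive or negative." }]
    }
  ]
}
```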
OpenAI offers access to a number of language and embedding models. To use OpenAI models, configure the necessary model connection details.
The following models are tested and supported by Starburst:
| Model name | Description |
|---|---|
| text-embedding-3-small | The OpenAI text embedding 3 small model. |
| text-embedding-3-large | The OpenAI text embedding 3 large model. |
| gpt-4o-mini | The OpenAI GPT-4o mini LLM. A compact version of the standard GPT-4o LLM. |
For more information about compatible OpenAI models and APIs, read the documentation.
Galaxy supports a number of models through OpenAI-compatible APIs. This lets you connect to external LLMs for inference tasks and to embedding models that generate vector embeddings for AI search.
Azure OpenAI lets you connect to OpenAI models hosted in the Microsoft Azure cloud. You can use Azure OpenAI for both language and embedding tasks by connecting through the OpenAI-compatible API.
To configure Azure OpenAI with Galaxy, select OpenAI as the integration during model configuration. Then, log in to the Azure AI Foundry portal, copy the deployment URL and API key for your Azure OpenAI deployment, and enter them in the appropriate fields in the connection details section of the Galaxy model configuration.
Gemini provides large language and embedding models accessible through an OpenAI-compatible API. To use Gemini with Galaxy, select OpenAI as the integration during model configuration, get the API key from your Google Cloud console, and use the following endpoint:

https://generativelanguage.googleapis.com/v1beta/openai/
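Because this endpoint is OpenAI-compatible, it accepts standard OpenAI-style request bodies. The following sketch shows the shape of a chat completion request against it; gemini-2.0-flash is only an example model name, so substitute the Gemini model you want to use:

```json
{
  "model": "gemini-2.0-flash",
  "messages": [
    { "role": "user", "content": "Summarize the quarterly sales figures." }
  ]
}
```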
To configure an external model in Galaxy, the role you assume must have the account-level Create AI models privilege. After a role is granted this privilege, the AI models icon becomes visible in the Galaxy navigation menu. Click the AI models icon to access the AI models pane, where you can click Connect to an external model to begin configuration.
The following sections provide more detail for connecting to an AI model.
Follow these steps to select one of the supported integrations and connect to a model using a compatible API.
In the Model ID field, enter the ID of the model to connect to, such as meta.llama3-2-3b-instruct-v1:0. The Amazon Model ID property value can be a foundation model ID or an inference profile. When the integration is Amazon Bedrock and the model type is Embedding, the Model ID field becomes a drop-down menu.

For more information about AWS Bedrock’s integrations and models, read the documentation.
Specify the connection credentials for a cross-account AWS IAM role, AWS access key, or OpenAI.
Specify the region code of the AWS region that hosts the model, such as us-east-2. For more information about AWS regions and region codes, read the AWS documentation.

For OpenAI connections, specify the API endpoint, such as https://api.openai.com/.

In addition to the basic connection details for an external AI model, you can specify additional configuration options to fine-tune and control model behavior. The available advanced configuration options change depending on the Model type.
When the model type is set to LLM, the following options are available:

When the model type is set to Embedding, the following options are available:
Use a system prompt to customize and improve the output of your language model by providing instructions or messages that define its behavior, role, tone, or context before any interaction takes place.
To add a general system prompt, follow these steps:
For more information about system prompts, see the general system prompts section.
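For illustration, in OpenAI-style APIs a system prompt travels as the first message of every conversation; the general system prompt you configure in Galaxy plays the same role for each request sent to the model. The prompt text below is a made-up example:

```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a data quality assistant. Answer concisely and flag any value you cannot verify."
    },
    { "role": "user", "content": "Standardize these company names." }
  ]
}
```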
Prompt overrides let you override general system prompts or use a different model to further refine the output from your LLM.
To add a prompt override, follow these steps:
For more information about prompt overrides, see the prompt overrides section.
Read more about AI functions to see a list of the supported functions and use cases.