Catalog management properties#
The following properties are used to configure catalog management with further controls for dynamic catalog management. See also, Migration to Dynamic catalogs.
Note
Dynamic catalog management is available as a public preview in Starburst Enterprise. Contact your Starburst account team with questions or feedback.
All properties described on this page are defined as follows, depending on the deployment type:

- Kubernetes: In the `additionalProperties` section of the top-level `coordinator` and `worker` nodes in the `values.yaml` file.
- Starburst Admin: In the `files/coordinator/config.properties.j2` and `files/worker/config.properties.j2` files.
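For example, on Kubernetes the properties might be set in a `values.yaml` fragment like the following sketch. The property values shown are illustrative assumptions, not required settings:

```yaml
coordinator:
  additionalProperties: |
    catalog.management=dynamic
    catalog.store=starburst
worker:
  additionalProperties: |
    catalog.management=dynamic
    catalog.store=starburst
```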
catalog.management#

Type: string
Allowed values: `static`, `dynamic`
Default value: `static`
When set to `static`, Trino reads catalog property files and configures available catalogs only on server startup. When set to `dynamic`, catalog configuration can also be managed using CREATE CATALOG and DROP CATALOG. New worker nodes joining the cluster receive the current catalog configuration from the coordinator node.
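With dynamic management enabled, catalogs can be added and removed at runtime. A minimal sketch, assuming the TPC-H connector is available in your SEP release:

```sql
-- Register a new catalog at runtime; workers receive it from the coordinator
CREATE CATALOG example USING tpch;

-- Remove it again; running queries are not interrupted,
-- but new queries can no longer use the catalog
DROP CATALOG example;
```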
Warning

Several connectors do not support dynamic catalog management, including the Prometheus and deprecated_hive connectors.

When you drop a catalog that uses a connector capable of reading from HDFS, such as the Hive, Iceberg, Delta Lake, and Hudi connectors, some resources may not be fully released. Restart the coordinator and workers after dropping these catalogs to ensure proper cleanup.
The complete CREATE CATALOG query is logged and visible in the Starburst Enterprise web UI. It is strongly recommended to use a secrets manager rather than pass any credentials in plain text.
catalog.store#

Type: string
Allowed values: `file`, `memory`, `starburst`
Default value: `file`
Requires catalog.management to be set to `dynamic`. When set to `file`, creating and dropping catalogs using the SQL commands adds and removes catalog property files on the coordinator node. The Trino server process requires write access to the catalog configuration directory. Existing catalog files are also read on coordinator startup. When set to `memory`, catalog configuration is only managed in memory, and any existing files are ignored on startup. When set to `starburst`, catalog configurations are stored in the Backend service database, and any existing files are ignored on startup.
When using the `starburst` value, secrets cannot be stored in plaintext. CREATE CATALOG and ALTER CATALOG fail when trying to set security-sensitive properties without using a secrets manager. Any security-sensitive property must have its entire value set to reference a secrets manager, so a configuration like `mongo.connection-url=mongodb://${vault:user}:${vault:password}@example.host:27017/` is invalid. The whole URL must instead be stored and specified as `mongo.connection-url=${vault:connection-url}`.
catalog.prune.update-interval#

Type: duration
Default value: `5s`
Minimum value: `1s`
Requires catalog.management to be set to `dynamic`. The interval for pruning dropped catalogs. Dropping a catalog does not interrupt running queries, but prevents new queries from using it.
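For instance, a coordinator `config.properties` enabling dynamic management with a longer pruning interval could look like this sketch; the `30s` value is an arbitrary assumption:

```
catalog.management=dynamic
catalog.store=starburst
catalog.prune.update-interval=30s
```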
catalog.config-dir#

Type: string
Default value: `etc/catalog/`
Requires catalog.management to be set to `static` or catalog.store to be set to `file`. The directory containing the catalog property files.
catalog.disabled-catalogs#

Type: string
Requires catalog.management to be set to `static` or catalog.store to be set to `file`. Comma-separated list of catalogs to ignore on startup.
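As an illustration, a static deployment might point at the default catalog directory and skip selected catalogs; the catalog names below are hypothetical:

```
catalog.management=static
catalog.config-dir=etc/catalog/
catalog.disabled-catalogs=example_dev,example_test
```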
catalog.read-only#

Type: boolean
Default value: `false`
Requires catalog.store to be set to `file`. If `true`, existing catalog property files cannot be removed with DROP CATALOG, and no new catalog files can be written with CREATE CATALOG. As a result, a coordinator restart resets the known catalogs to the existing files only.
Migration to Dynamic catalogs#
Dynamic catalog management lets you define and manage catalogs directly through SQL statements, eliminating the need to manually update catalog configuration files. The following migration guide details how to transition from static to dynamic catalogs.
Prerequisites#
Before migrating to dynamic catalog management, you must meet the following requirements:

- Configure a secrets manager. See also, Considerations.
- Ensure you are not using one of the following connectors:
  - deprecated_hive

  Additionally, using a connector that is not included in a SEP release may cause catalog management queries to fail.
- Avoid using connectors that rely on HDFS libraries, including:
  - Hive, Delta Lake, Iceberg, or Hudi with HDFS or the legacy file system support enabled.
  - MapR connector
  - HBase connector
  - Hive connector with Hadoop-based SerDes such as `org.apache.hadoop.hive.serde2.JsonSerDe`.

  These connectors may fail to clean up properly when performing operations like ALTER CATALOG or DROP CATALOG.
Considerations#
Take the following considerations into account when migrating from static to dynamic catalog management:

- Ensure your catalog configuration files do not contain security-sensitive properties with plaintext values. Security-sensitive values are not printed in the server's startup log and are instead replaced with `***` to mask them. Security-sensitive properties such as private keys and passwords cannot be passed as plaintext in CREATE CATALOG and ALTER CATALOG catalog management statements. If you provide these properties in plaintext, the server displays an error prompting you to reference them using a secrets manager.
- Minimize the use of ALTER CATALOG statements when using caching mechanisms such as metadata caching or connection pooling. These statements do not directly clear the caches, although they can create the appearance of having done so.
Migrate#
Follow these steps to migrate from static to dynamic catalog management:

1. Enable dynamic catalog management and store catalog definitions in the SEP backend database by setting `catalog.management=dynamic` and `catalog.store=starburst`. Upon cluster startup, catalog configuration files are ignored. Only the `system` catalog is accessible until additional catalogs are created using CREATE CATALOG. See also, catalog.management and catalog.store.

2. Convert catalog configuration files to CREATE CATALOG statements, ensuring security-sensitive properties use a secrets manager, for example `${vault:secret/postgres:password}`.

   The following example converts a configuration file definition called `my_postgres.properties` into a new catalog using CREATE CATALOG.

   The `my_postgres.properties` configuration file:

   ```
   connector.name=postgresql
   connection-url=jdbc:postgresql://example.net:5432/database
   connection-user=root
   connection-password=<secret-password>
   ```

   Use CREATE CATALOG:

   ```sql
   CREATE CATALOG my_postgres USING postgresql
   WITH (
     "connection-url" = 'jdbc:postgresql://example.net:5432/database',
     "connection-user" = 'root',
     "connection-password" = '${vault:secret/postgres:password}'
   );
   ```

   Use a secrets manager to create variables for your sensitive properties and to avoid errors.

3. After creating the catalogs, run queries against them to validate that they are working correctly.

4. Create a backup of your catalog configuration files so you can revert to static catalog management if needed.

5. Remove the catalog configuration files from the deployment/Helm chart. These files are not used when catalog.store is set to `starburst`.

6. Restart your SEP cluster and verify that the catalogs initialize successfully.
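A quick smoke test for a newly created catalog could look like the following sketch; the catalog name matches the earlier example, and the schema and table names are hypothetical:

```sql
-- List schemas to confirm the catalog connects to the backing database
SHOW SCHEMAS FROM my_postgres;

-- Run a simple query against a known table (table name is an assumption)
SELECT count(*) FROM my_postgres.public.example_table;
```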
Revert dynamic catalog management#
To revert from dynamic to static catalog management in your SEP cluster, follow these steps:

1. Compare your catalog configuration files with the backend database definitions. Run the SHOW CREATE CATALOG query and verify that the output matches your existing configuration files.

2. If the configuration does not match, update the catalog configuration files using information from the output of SHOW CREATE CATALOG.

3. Switch to static catalog management by editing your configuration to do one of the following:
   - Delete `catalog.management=dynamic` and `catalog.store=starburst` to use the default settings.
   - Explicitly set `catalog.management=static` and remove the `catalog.store` property.

4. Restart your SEP cluster to apply these changes.
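The comparison step can be sketched as follows; the catalog name `my_postgres` is an assumption carried over from the migration example:

```sql
-- Inspect the catalog definition stored in the backend database
SHOW CREATE CATALOG my_postgres;
```

Copy each property from the output back into the corresponding catalog properties file before switching back to static catalog management.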