Catalog management properties#
The following properties configure catalog management, including additional controls for dynamic catalog management. See also, Migration to dynamic catalogs.
All properties described in this page are defined as follows, depending on the deployment type:
Kubernetes: In the additionalProperties section of the top-level coordinator and worker nodes in the values.yaml file. Some properties must only be set in the coordinator.
Starburst Admin: In the files/coordinator/config.properties.j2 and files/worker/config.properties.j2 files.
catalog.management#
Type: string
Allowed values: static, dynamic
Default value: static
Kubernetes deployment type: Set in both the coordinator and worker
When set to static, Trino reads catalog property files and configures
available catalogs only on server startup. When set to dynamic, catalog
configuration can also be managed using CREATE CATALOG and
DROP CATALOG. New worker nodes joining the cluster receive the current
catalog configuration from the coordinator node.
Warning
Several connectors do not support dynamic catalog management, including the Prometheus connector and the deprecated Hive connector.
When you drop a catalog that uses a connector capable of reading from HDFS, such as the Hive, Iceberg, Delta Lake, and Hudi connectors, some resources may not be fully released. Restart the coordinator and workers after dropping these catalogs to ensure proper cleanup.
The complete CREATE CATALOG query is logged and visible in the Starburst Enterprise web UI. It
is strongly recommended to use a secrets manager rather
than pass any credentials in plain text.
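As a minimal sketch of enabling this at the deployment level, assuming a Starburst Admin deployment where the line is added to both config.properties.j2 files:
catalog.management=dynamic
Once the cluster restarts with this setting, a catalog can be created at runtime, for example with a connector that needs no further properties (the catalog name example_tpch is illustrative):
CREATE CATALOG example_tpch USING tpch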
catalog.store#
Type: string
Allowed values: file, memory, starburst
Default value: file
Kubernetes deployment type: Set only in the coordinator
Requires catalog.management to be set to dynamic. When set to file,
creating and dropping catalogs using the SQL commands adds and removes catalog
property files on the coordinator node. The Trino server process requires write
access to the catalog configuration directory. Existing catalog files are also
read on coordinator startup. When set to memory, catalog configuration is
only managed in memory, and any existing files are ignored on startup. When set
to starburst, catalog configurations are stored in the
Backend service database, and any existing files are ignored on
startup.
When using the starburst value, secrets cannot be stored in plaintext.
CREATE CATALOG and ALTER CATALOG fail when trying to set
security-sensitive properties without using a secrets
manager. Any security-sensitive properties must have their
entire value set to reference a secrets manager, so a configuration like
mongo.connection-url=mongodb://${vault:user}:${vault:password}@example.host:27017/
is invalid. The whole URL would instead need to be stored and specified as
mongo.connection-url=${vault:connection-url}.
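As a sketch of a statement that satisfies this rule, continuing the MongoDB example above with an assumed catalog name example_mongo and an assumed connector name mongodb, the entire connection URL is resolved from the secrets manager:
CREATE CATALOG example_mongo
USING mongodb
WITH (
    "mongo.connection-url" = '${vault:connection-url}'
)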
catalog.prune.update-interval#
Type: duration
Default value: 5s
Minimum value: 1s
Kubernetes deployment type: Set only in the coordinator
Requires catalog.management to be set to dynamic. Interval for
pruning dropped catalogs. Dropping a catalog does not interrupt running queries,
but prevents new ones from using it.
catalog.config-dir#
Type: string
Default value: etc/catalog/
Kubernetes deployment type: Set in both the coordinator and worker
Requires catalog.management to be set to static or
catalog.store to be set to file. The directory with catalog property
files.
catalog.disabled-catalogs#
Type: string
Kubernetes deployment type: Set in both the coordinator and worker
Requires catalog.management to be set to static or
catalog.store to be set to file. Comma-separated list of catalogs to
ignore on startup.
catalog.read-only#
Type: boolean
Default value: false
Kubernetes deployment type: Set in both the coordinator and worker
Requires catalog.store to be set to file. If true, existing catalog
property files cannot be removed with DROP CATALOG, and CREATE CATALOG cannot
write new catalog files with names identical to existing ones. As a result, a
coordinator restart resets the known catalogs to the existing files only.
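Taken together, a coordinator configuration that manages file-based catalogs dynamically could look like the following sketch; the disabled catalog names and the prune interval are illustrative assumptions:
catalog.management=dynamic
catalog.store=file
catalog.config-dir=etc/catalog/
catalog.disabled-catalogs=tpch,tpcds
catalog.prune.update-interval=30s
catalog.read-only=false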
Migration to dynamic catalogs#
Dynamic catalog management lets you define and manage catalogs directly through SQL statements, eliminating the need to manually update catalog configuration files. The following migration guide details how to transition from static to dynamic catalogs.
Prerequisites#
Before migrating to dynamic catalog management, you must meet the following requirements:
Configure a secrets manager. See also, Considerations.
Ensure you are not using one of the following connectors:
deprecated Hive connector
Additionally, using a connector that is not included in a SEP release may cause catalog management queries to fail.
Avoid using connectors that rely on HDFS libraries, including:
Hive, Delta Lake, Iceberg, or Hudi with HDFS or legacy file system enabled.
MapR connector
HBase connector
Hive connector with Hadoop-based SerDes such as
org.apache.hadoop.hive.serde2.JsonSerDe.
These connectors may fail to clean up properly when performing operations like
ALTER CATALOG or DROP CATALOG.
Considerations#
Take the following considerations into account when migrating from static to dynamic catalog management:
Ensure your catalog configuration files do not contain security-sensitive properties with plaintext values. Security-sensitive values are not printed in the server's startup log and are instead masked with ***. Security-sensitive properties such as private keys and passwords cannot be passed as plaintext in CREATE and ALTER CATALOG catalog management statements. If you provide these properties in plaintext, the server displays an error prompting you to reference these properties using a secrets manager.
Minimize the use of ALTER CATALOG statements when using caching mechanisms such as metadata caching or connection pooling. These statements do not directly clear the caches, but can create the impression that the caches have been cleared.
Migrate#
Follow these steps to migrate from a statically managed catalog to dynamic catalog management:
Enable dynamic catalog management and store definitions in the SEP backend database by setting catalog.management=dynamic and catalog.store=starburst, as shown in the sketch after these steps. Upon cluster startup, catalog configuration files are ignored. Only the system catalog is accessible until additional catalogs are created using CREATE CATALOG. See also, catalog.management and catalog.store.
Convert your catalog configuration files to CREATE CATALOG statements. Ensure that any security-sensitive properties reference a secrets manager. For guidance, see Convert configuration files to SQL statements.
After creating the catalogs, run queries against them to validate that they are working correctly.
Create a backup of your catalog configuration files so you can revert to static catalog management if needed.
Remove the catalog configuration files from the deployment or Helm chart. These files are not used when catalog.store is set to starburst.
Restart your SEP cluster and verify that the catalogs initialize successfully.
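The coordinator configuration for the first step is a sketch like the following; with Starburst Admin these lines go in files/coordinator/config.properties.j2, and catalog.management=dynamic must also be set for workers:
catalog.management=dynamic
catalog.store=starburst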
Convert configuration files to SQL statements#
Convert your static catalog configuration files to CREATE CATALOG statements
using one of two methods:
Use the migration assistance table#
Enable migration assistance by setting catalog.migration.enabled=true.
Query the migration helper table using the sysadmin role:
SELECT * FROM system.metadata.static_catalog_definitions
The table contains the following columns:
catalog_name: The name of the catalog
connector_name: The name of the underlying connector
sql: A CREATE CATALOG SQL statement that recreates the catalog dynamically
error: Any error the system encounters reading the static catalog configuration
Warning
Review the sql column carefully for any plaintext secrets. If secrets are
present, replace them with secrets manager references
(e.g., ${vault:secret/postgres:password}) before you execute the statements.
Running SQL with plaintext secrets exposes them through query history.
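For example, to review the generated statement for a single catalog before running it, a query like the following sketch can be used; the catalog name my_postgres matches the example in the next section:
SELECT sql
FROM system.metadata.static_catalog_definitions
WHERE catalog_name = 'my_postgres'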
Manually convert configuration files#
Manually convert each catalog configuration file to a CREATE CATALOG
statement. Ensure security-sensitive properties reference a secrets
manager.
The following example converts a configuration file definition called
my_postgres.properties into a new catalog using a CREATE CATALOG statement.
The my_postgres.properties configuration file contains the following
properties:
connector.name=postgresql
connection-url=jdbc:postgresql://example.net:5432/database
connection-user=root
connection-password=<secret-password>
The equivalent CREATE CATALOG statement is:
CREATE CATALOG my_postgres
USING postgresql
WITH (
"connection-url" = 'jdbc:postgresql://example.net:5432/database',
"connection-user" = 'root',
"connection-password" = '${vault:secret/postgres:password}'
)
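After running the statement, a quick check such as the following sketch confirms that the catalog is registered and reachable:
SHOW CATALOGS
SHOW SCHEMAS FROM my_postgres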
Revert dynamic catalog management#
To revert from dynamic to static catalog management in your SEP cluster, follow these steps:
Compare your catalog configuration files with the backend database definitions. Run the SHOW CREATE CATALOG query, as shown in the sketch after these steps, and verify that the output matches your existing configuration files.
If the configuration does not match, update the catalog configuration files using information from the output of SHOW CREATE CATALOG.
Switch to static catalog management by editing your configuration to do one of the following:
Delete catalog.management=dynamic and catalog.store=starburst to use the default settings.
Explicitly set catalog.management=static and remove the catalog.store property.
Restart your SEP cluster to apply these changes.
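As referenced in the first step, comparing a catalog against its configuration file can be done with a statement like the following; the catalog name is an assumption:
SHOW CREATE CATALOG my_postgres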
Troubleshoot dynamic catalogs#
Dynamic catalog management with catalog.store set to starburst stores
catalog configurations in the backend database. After SEP upgrades,
some catalogs may fail to load if they contain invalid or deprecated properties.
Use the system.metadata.broken_catalog_definitions table to identify and fix
these broken catalogs.
Note
This feature requires catalog.management set to dynamic and catalog.store set to starburst (the default value). Only users with the sysadmin role can access the table. Users with other roles see an empty table.
Query the broken catalogs table to identify broken catalogs:
SELECT * FROM system.metadata.broken_catalog_definitions
The table contains the following columns:
catalog_name: The name of the broken catalog
connector_name: The name of the underlying connector
sql: A CREATE CATALOG SQL statement that contains the catalog's current configuration
Use the following steps to fix a broken catalog:
Query the broken_catalog_definitions table. Save the SQL statement for the catalog you want to fix.
Drop the broken catalog:
DROP CATALOG catalog_name
Review your saved SQL statement for any issues:
Check for deprecated or invalid properties
Verify property names and values match the current connector requirements
Ensure there are no plaintext secrets (replace plaintext secrets with secrets manager references)
Warning
Do not run a CREATE CATALOG statement that contains plaintext secrets. Query
history stores these statements and exposes your credentials. Use secrets
manager references for sensitive values.
Fix any issues you identify in the SQL statement.
Run the corrected CREATE CATALOG statement to recreate the catalog.
Test the catalog to ensure it works correctly.
Warning
The SQL statement in the broken_catalog_definitions table contains the exact
configuration that caused the catalog to fail. You must identify and fix the
problematic properties before running the statement.
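Putting the steps together, the following is a hedged sketch of the repair sequence for a hypothetical broken catalog named example_postgres, reusing the PostgreSQL properties from the conversion example; run each statement individually:
-- Save the current (broken) definition for review
SELECT sql
FROM system.metadata.broken_catalog_definitions
WHERE catalog_name = 'example_postgres'
-- Remove the broken catalog
DROP CATALOG example_postgres
-- Recreate it with the corrected properties and a secrets manager reference
CREATE CATALOG example_postgres
USING postgresql
WITH (
    "connection-url" = 'jdbc:postgresql://example.net:5432/database',
    "connection-user" = 'root',
    "connection-password" = '${vault:secret/postgres:password}'
)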