Schema discovery#

Schema discovery identifies and registers tables and views that are newly added to a known schema location. For example, when a logging process creates a new log file every hour, rolling over from the previous hour’s log file, schema discovery locates the newly added files so that SEP can query them.

Configure schema discovery#

To configure schema discovery, set the following configuration properties in the coordinator’s config.properties file:

schema-discovery.starburst-jdbc-url=jdbc:trino://coordinator.example.com
schema-discovery.starburst-user=alice
schema-discovery.starburst-password=my123password

Both the current user and the user assigned to the schema-discovery.starburst-user property must have the sysadmin role to execute a schema discovery run. The current user must have the role assigned and enabled in the current session, while the user added to schema-discovery.starburst-user only needs the role assigned. The user assigned to schema-discovery.starburst-user must also be listed in the starburst.access-control.authorized-users property on the coordinator.

If you do not provide a value for schema-discovery.starburst-user, SEP uses schema_discovery by default.
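For example, with the built-in access control, the role requirements can typically be satisfied with statements similar to the following sketch, which assumes the default schema_discovery user; adapt the user names and role administration to your environment:

-- Grant the sysadmin role to the schema discovery user
GRANT sysadmin TO USER schema_discovery;

-- Enable the sysadmin role in your own session before starting a run
SET ROLE sysadmin;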

Warning

Registering the same table in different catalogs can lead to inconsistencies in the table.

Iceberg and Delta Lake catalog configuration#

The following sections list additional catalog configuration properties for the Iceberg and Delta Lake connectors. The iceberg.register-table-procedure.enabled and delta.register-table-procedure.enabled properties are required to enable schema discovery for the respective catalogs:

Iceberg schema discovery configuration properties#

  • iceberg.register-table-procedure.enabled: Set to true to allow users to call the register_table procedure. Required to enable schema discovery on Iceberg tables.

  • iceberg.schema-discovery.concurrency: Maximum number of parallel jobs used for scanning files by schema discovery. Default is 8.

  • iceberg.schema-discovery.max-buckets: Maximum number of scanned buckets. Default is 10.

Delta Lake schema discovery configuration properties#

  • delta.register-table-procedure.enabled: Set to true to allow users to call the register_table procedure. Required to enable schema discovery on Delta Lake tables.

  • delta.schema-discovery.concurrency: Maximum number of parallel jobs used for scanning files by schema discovery. Default is 8.

  • delta.schema-discovery.max-buckets: Maximum number of scanned buckets. Default is 10.
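For example, a minimal Iceberg catalog properties file with schema discovery enabled might look like the following sketch; the metastore URI is a placeholder for your environment:

connector.name=iceberg
hive.metastore.uri=thrift://metastore.example.com:9083
iceberg.register-table-procedure.enabled=true
iceberg.schema-discovery.concurrency=8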

List of discoveries#

For catalogs that have run discovery before, the Query > Schema discovery page shows a list of previous runs with the following columns:

  • Source: The source URL for the bucket used for discovery. Click the source name to open the discovery results pane.

  • Timestamp: The timestamp when the discovery was run.

  • Status: The current status of each discovery instance, such as the last successful discovery run, or whether the discovery is in progress.

  • Changes: Displays a summary of the changes made during the discovery run, such as the number of tables created.

  • Logs: Click to open the log events pane for that event.

  • Rerun: Click to run schema discovery on the source again. This option performs a diff on the location and returns any changes found.

Run schema discovery#

Run schema discovery to analyze an object storage location and return the structure of any tables discovered in that location, using either a system table or a table function.

Run in the UI#

To run schema discovery using the Starburst Enterprise web UI:

  1. Click Query > Query editor in the left-hand navigation.

  2. Use the options menu for the target entity in the cluster explorer and click Run discovery.

  3. In the Catalog location URL field, enter the URL of the bucket and directory to scan.

Note

Schema discovery follows the pattern schema/table/<files or partitions>. It cannot run on a single file. For example, s3://my-s3-bucket/my_csv_file.csv does not work, while a directory such as s3://my-s3-bucket/my_schema/my_table/ can be scanned.

  4. Enter the name of a schema in the Default schema field. This schema is used for newly discovered tables that are not part of an existing schema.

  5. Optionally open the Advanced settings section:

    Specify one of the following scan types for this schema discovery run:

    • Incremental discovery from last run scans for tables created in the specified schemas since the last schema discovery run. This is the default selection.

    • Full discovery runs a full discovery scan on the specified location, potentially finding tables already registered with SEP.

    Specify the maximum number of lines to show in sample files, the maximum number of files per table, or both.

  6. Click Run discovery.

Run with SQL#

To run schema discovery using a system table, specify the system table in the FROM clause, and the URI and schema in the WHERE clause, along with any options as shown in the following statement:

SELECT sql FROM hive.schema_discovery.discovery
WHERE uri = 'hdfs://hadoop-master:9000/user/hive/warehouse/customer_orders'
AND schema = 'starburst_demo'
AND options = 'sampleFilesPerTableModulo=1,excludePatterns=**/_SUCCESS'

To run schema discovery using a table function, you must call the schema_discovery.discovery function in the FROM clause, and specify the URI and schema along with any options as follows:

SELECT sql FROM TABLE(schema_discovery.discovery(uri => 's3://test-bucket/path/to/discoverable/table',
schema => 'demo_schema', options => 'sampleFilesPerTableModulo=1,excludePatterns=**/_SUCCESS'))
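
Both statements return the generated SQL, such as CREATE SCHEMA and CREATE TABLE statements, for the discovered tables. Review the generated SQL, then execute it to register the discovered tables with SEP.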

The following sections describe the available options.

Default options#

Schema discovery is run by default with the following options and their default values. Specify a different value as shown in the previous examples to override the default values.

  • dateFormat: yyyy-MM-dd

  • positiveInf: Inf

  • generatedHeadersFormat: COL%d

  • nanValue: NaN

  • locale: US

  • lineSeparator: \n (newline)

  • nullValue: (empty string)

  • complexHadoop: false

  • sampleLinesModulo: 3

  • maxSampleFilesPerTable: 5

  • skipFileExtensionsCheck: false

  • quote: "

  • includePatterns: **

  • delimiter: ,

  • escape: \

  • headers: true

  • ignoreTrailingWhiteSpace: false

  • discoveryMode: NORMAL

  • maxSampleLines: 10

  • excludePatterns: .*

  • sampleFilesPerTableModulo: 3

  • partitionProjection: false

  • negativeInf: -Inf

  • encoding: UTF-8

  • inferJsonStringDecimal: false

  • supportBuckets: false

  • ignoreLeadingWhiteSpace: false

  • maxSampleTables: 2147483647

  • hiveSkipHiddenFiles: false

  • timestampFormat: yyyy-MM-dd HH:mm:ss[.SSSSSSS]

  • comment: #
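
To override any of these defaults, add them to the options string. For example, the following sketch raises the sampling limits; the bucket path and schema name are placeholders:

SELECT sql FROM TABLE(schema_discovery.discovery(uri => 's3://test-bucket/path/to/discoverable/table',
schema => 'demo_schema', options => 'maxSampleLines=100,maxSampleFilesPerTable=10'))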

Standard options#

The following options are set at runtime, and apply to all schema discovery queries:

  • encoding: Character set to use when reading files.

  • locale: Locale for date parsing.

  • dateFormat: Date format pattern.

  • timestampFormat: Timestamp format pattern.

  • nanValue: Value that represents not-a-number.

  • positiveInf: Value that represents positive infinity.

  • negativeInf: Value that represents negative infinity.

  • includePatterns: HDFS include GLOB patterns.

  • excludePatterns: HDFS exclude GLOB patterns.

  • maxSampleLines: Maximum number of lines to sample from each file.

  • sampleLinesModulo: Sample every n-th line; for example, if set to 3, every third line is sampled.

  • sampleFilesPerTableModulo: Sample every n-th file; for example, if set to 3, every third file is sampled.

  • maxSampleTables: Maximum number of tables to discover.

  • maxSampleFilesPerTable: Maximum number of files to sample per table.

  • supportBuckets: If true, attempt to infer buckets.

  • forceTableFormat: Optional. Forces the table format. One of JSON, CSV, ORC, PARQUET, ICEBERG, DELTA_LAKE, or ERROR.

  • discoveryMode: Discovery mode. One of NORMAL or RECURSIVE_DIRECTORIES. NORMAL, the default, treats top-level directories as tables.

  • partitionProjection: If true, attempt to infer partition projection.

  • skipFileExtensionsCheck: Whether to skip checking the file extension when discovering the file format. Use when file formats and extensions are mismatched.

  • inferJsonStringDecimal: Try to parse JSON string values into Trino's DECIMAL type.

  • hiveSkipHiddenFiles: Whether to skip files and directories whose names start with an underscore (_) or a dot (.).
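
As an illustrative sketch, the following query combines several of these options to force the Parquet format and scan directories recursively; the bucket path and schema name are placeholders:

SELECT sql FROM TABLE(schema_discovery.discovery(uri => 's3://test-bucket/path/to/discoverable/table',
schema => 'demo_schema', options => 'discoveryMode=RECURSIVE_DIRECTORIES,forceTableFormat=PARQUET'))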

CSV and text file options#

The following options are set at runtime, and apply only to schema discovery queries on CSV and text files:

  • headers: If true, treat the first line as column names.

  • generatedHeadersFormat: Pattern to use when column names are auto-generated.

  • delimiter: Text file delimiter.

  • quote: Text file quote character.

  • escape: Text file escape character.

  • comment: Text file comment character.

  • nullValue: Text file null value.

  • ignoreLeadingWhiteSpace: If true, ignore leading white space.

  • ignoreTrailingWhiteSpace: If true, ignore trailing white space.

  • complexHadoop: If true, try to discover arrays, structs, and maps.

  • lineSeparator: Text file line separator.
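
For example, a run over headerless, semicolon-delimited text files might pass options similar to the following sketch; the bucket path and schema name are placeholders:

SELECT sql FROM TABLE(schema_discovery.discovery(uri => 's3://test-bucket/path/to/csv/table',
schema => 'demo_schema', options => 'headers=false,delimiter=;,nullValue=NULL'))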

Examine schema discovery run#

A successful discovery run opens the Select schemas pane, which shows a list of schemas with the following columns:

  • Schema: The name of the schema that contains the discovered tables.

  • Tables: The number of tables added to the schema since the last scan.

  • Partitions: The number of partitions in the table, if any.

  • Path: The URL of the schema and table.

The next step is to register the discovered tables with SEP. Select one or more schemas, or select a set of tables within a schema that you would like to register. Then click Create all tables or Create selected tables. The table registration process runs for a few moments, then opens the log events pane to show progress.

When the registration process completes, click Close to return to the Schema discovery page.

Log events#

The Log events pane shows a list of log entries for each discovery-related event. The Summary section shows the number of successful query executions and the number of errors that occurred during the discovery run.

The list of log events includes the following information:

  • Status: The outcome of the event. A green checkmark indicates a successful query execution. A red exclamation mark indicates an error.

  • Timestamp: The timestamp when the event occurred.

  • Query text: The SQL query execution text, such as CREATE TABLE or CREATE SCHEMA. Click the text to view the full query.

  • Message: A message detailing the log event, such as the successful creation of a schema, or an error message.

Supported formats#

Schema discovery supports creating tables in the Hive, Iceberg, and Delta Lake table formats. It does not support Hudi tables.

Caution

Schema discovery can detect all supported table formats, but it only allows creating tables that are supported by the underlying catalog connector. If a table format is not supported, statement execution fails.

Schema discovery identifies tables and views that are stored in the following file formats:

  • JSON in Hive-JSON-SerDe format with one JSON object per line and no line termination characters within the objects.

  • CSV

  • ORC

  • PARQUET

Schema discovery identifies tables and views that use the following compression codecs:

  • ZSTD

  • LZ4

  • SNAPPY

  • GZIP

  • DEFLATE

  • BZIP2

  • LZO

  • LZOP

Note

Schema discovery identifies compression codecs by file extension only. Ensure that your compressed files end with the proper file extension, for example .gz for GZIP.

Limitations and warnings#

  • Only JSON, CSV, ORC, and PARQUET file formats are supported.

  • Only ZSTD, LZ4, SNAPPY, GZIP, DEFLATE, BZIP2, LZO, and LZOP compression codecs are supported.