Starburst Oracle connector#

The Starburst Oracle connector is an extended version of the Oracle connector. The initial configuration and usage are identical.

The connector's requirements and included improvements are described in the following sections.

Requirements#

SQL support#

The connector supports all of the SQL statements listed in the Oracle connector documentation.

ALTER TABLE EXECUTE#

The Starburst enhanced connector supports the following commands for use with ALTER TABLE EXECUTE:

collect_statistics#

The collect_statistics command is used with Managed statistics to collect statistics for a table and its columns.

The following statement collects statistics for the example_table table and all of its columns:

ALTER TABLE example_table EXECUTE collect_statistics;

Collecting statistics for all columns in a table may be unnecessarily performance-intensive, especially for wide tables. To only collect statistics for a subset of columns, you can include the columns parameter with an array of column names. For example:

ALTER TABLE example_table
    EXECUTE collect_statistics(columns => ARRAY['customer','line_item']);

Performance#

The connector includes a number of performance improvements, detailed in the following sections.

Parallelism#

The connector can read data from Oracle using multiple parallel connections for tables that are partitioned as described in the Oracle partitioning documentation.

Oracle parallelism configuration properties#

oracle.parallelism-type
    Determines the parallelism method. Possible values are:
      • NO_PARALLELISM, single JDBC connection
      • PARTITIONS, separate connection for each partition
    Default: NO_PARALLELISM

oracle.parallel.max-splits-per-scan
    Maximum number of parallel connections for a table scan.
    Default: 10
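For example, the following catalog properties file lines enable partition-based parallelism. The values shown are illustrative; the appropriate split limit depends on your Oracle deployment:

oracle.parallelism-type=PARTITIONS
oracle.parallel.max-splits-per-scan=16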

Table statistics#

This feature is available for free and does not require a valid license.

The Oracle connector can use table and column statistics for cost-based optimizations, improving query processing performance based on the actual data in the data source.

The statistics are collected by Oracle and retrieved by the connector.

To collect statistics for a table, execute the following statement in your Oracle database:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS('USER_NAME', 'TABLE_NAME');

See Oracle’s documentation for additional options and instructions on invoking a procedure when you’re not using SQL*Plus.
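After Oracle has gathered statistics, you can check what SEP retrieves with a SHOW STATS query. In this sketch, example and example_schema are assumed catalog and schema names:

SHOW STATS FOR example.example_schema.example_table;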

Pushdown#

The connector supports pushdown of all operations listed in the Oracle connector documentation, as well as the following improvements:

Cost-based join pushdown#

The connector supports cost-based join pushdown to make intelligent decisions about whether to push down a join operation to the data source.

When cost-based join pushdown is enabled, the connector only pushes down join operations if the available Table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur to avoid a potential decrease in query performance.

The following table describes catalog configuration properties for join pushdown:

join-pushdown.enabled
    Enable join pushdown. The equivalent catalog session property is join_pushdown_enabled.
    Default value: true

join-pushdown.strategy
    Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance. For this reason, EAGER is only recommended for testing and troubleshooting purposes.
    Default value: AUTOMATIC
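For example, to force joins to be pushed down while troubleshooting, regardless of available statistics, you can set the strategy in the catalog properties file and revert to AUTOMATIC afterwards:

join-pushdown.strategy=EAGER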

Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.

You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.
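For example, add the following line to the catalog properties file to disable dynamic filtering:

dynamic-filtering.enabled=false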

Wait timeout#

By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.

You can configure the dynamic-filtering.wait-timeout property in your catalog properties file:

dynamic-filtering.wait-timeout=1m

You can use the dynamic_filtering_wait_timeout catalog session property in a specific session:

SET SESSION example.dynamic_filtering_wait_timeout = 1s;

Compaction#

The maximum size of the dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file:

domain-compaction-threshold=100

You can use the domain_compaction_threshold catalog session property:

SET SESSION domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.

For example, if the dynamic filter collected for a date column dt on the fact table selects more than 32 days, the filtering condition is simplified from dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.

Metrics#

Metrics about dynamic filtering are reported in a JMX table for each catalog:

jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats"

Metrics include information about the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters and the time spent waiting for dynamic filters.
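For example, assuming the JMX connector is mounted as the jmx catalog and the catalog of interest is named example, you can inspect these metrics with a query such as:

SELECT * FROM jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";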

Starburst Cached Views#

The connector supports table scan redirection to improve performance and reduce load on the data source.

Managed statistics#

The connector supports Managed statistics, allowing SEP to collect and store table and column statistics that can then be used for performance optimizations in query planning.

Statistics must be collected manually using the built-in collect_statistics command. See collect_statistics for details and examples.

Security#

The connector includes a number of security-related features, detailed in the following sections.

User impersonation#

The Oracle connector supports user impersonation. User impersonation creates proxy user accounts in the Oracle database and authorizes users to connect through them.

Enable user impersonation in the catalog file:

oracle.impersonation.enabled=true

For more information, go to docs.oracle.com.
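As a sketch of the Oracle-side setup, the following statement authorizes a proxy connection; example_user and example_proxy are hypothetical account names, and your Oracle administrator defines the actual accounts and privileges:

ALTER USER example_user GRANT CONNECT THROUGH example_proxy;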

Kerberos authentication#

The connector supports Kerberos authentication using either a keytab or credential cache.

To configure Kerberos authentication with a keytab, add the following catalog configuration properties to the catalog properties file:

oracle.authentication.type=KERBEROS
kerberos.client.principal=example@example.com
kerberos.client.keytab=etc/kerberos/example.keytab
kerberos.config=etc/kerberos/krb5.conf

To configure Kerberos authentication with a credential cache, add the following catalog configuration properties to the catalog properties file:

oracle.authentication.type=KERBEROS
kerberos.client.principal=example@example.com
kerberos.client.credential-cache.location=etc/kerberos/example.cache
kerberos.config=etc/kerberos/krb5.conf

In these configurations, the user example@example.com, as defined in the principal property, connects to the database. The related Kerberos service ticket is located in the etc/kerberos/example.keytab file, or the cached credentials in the etc/kerberos/example.cache file.

Kerberos credential pass-through#

You can configure the Starburst Oracle connector to pass through Kerberos credentials, received by SEP, to the Oracle database. To configure Kerberos and SEP, see Kerberos credential pass-through.

After you configure Kerberos and SEP, edit the properties file to enable the connector to pass the credentials from the server to the database.

Confirm the correct Kerberos client configuration properties in the catalog properties file. For example:

oracle.authentication.type=KERBEROS_PASS_THROUGH
http.authentication.krb5.config=/etc/krb5.conf
http-server.authentication.krb5.service-name=exampleServiceName
http-server.authentication.krb5.keytab=/path/to/Keytab/File

Any database accessed using SEP is now subject to the data access restrictions and permissions defined in Kerberos.

Password credential pass-through#

The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:

oracle.authentication.type=PASSWORD_PASS_THROUGH

For more information about configurations and limitations, see Password credential pass-through.