Starburst Vertica connector#

The Starburst Vertica connector allows querying a Vertica database as an external data source.

Requirements#

To connect to Vertica, you need:

  • Vertica 9.1.x or higher.

  • Network access from the coordinator and workers to the Vertica server. Port 5433 is the default port.

  • A valid Starburst Enterprise license.

Configuration#

Create a catalog properties file named example.properties in etc/catalog to access the configured Vertica database in the example catalog (replace example with your database name or some other descriptive name of the catalog). Configure usage of the connector by specifying the connector name vertica and replacing the connection properties as appropriate for your setup.

connector.name=vertica
connection-url=jdbc:vertica://example.net:5433/test_db
connection-user=root
connection-password=secret

The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid actual values in the catalog properties files.
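For example, the following sketch of the same catalog file reads the password from an environment variable named VERTICA_PASSWORD, a hypothetical name that must be exported on all nodes, instead of storing the value inline:

connector.name=vertica
connection-url=jdbc:vertica://example.net:5433/test_db
connection-user=root
connection-password=${ENV:VERTICA_PASSWORD}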

General configuration properties#

The following table describes general catalog configuration properties for the connector:

  • case-insensitive-name-matching: Support case insensitive schema and table names. Defaults to false.

  • case-insensitive-name-matching.cache-ttl: Duration for which case insensitive schema and table names are cached. Defaults to 1m.

  • case-insensitive-name-matching.config-file: Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Defaults to null.

  • case-insensitive-name-matching.config-file.refresh-period: Frequency with which Trino checks the name matching configuration file for changes. The duration value defaults to 0s (refresh disabled).

  • metadata.cache-ttl: Duration for which metadata, including table and column statistics, is cached. Defaults to 0s (caching disabled).

  • metadata.cache-missing: Cache the fact that metadata, including table and column statistics, is not available. Defaults to false.

  • metadata.schemas.cache-ttl: Duration for which schema metadata is cached. Defaults to the value of metadata.cache-ttl.

  • metadata.tables.cache-ttl: Duration for which table metadata is cached. Defaults to the value of metadata.cache-ttl.

  • metadata.statistics.cache-ttl: Duration for which table statistics are cached. Defaults to the value of metadata.cache-ttl.

  • metadata.cache-maximum-size: Maximum number of objects stored in the metadata cache. Defaults to 10000.

  • write.batch-size: Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. Defaults to 1000.

  • dynamic-filtering.enabled: Push down dynamic filters into JDBC queries. Defaults to true.

  • dynamic-filtering.wait-timeout: Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries. Defaults to 20s.
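As an illustration, the following catalog file combines the connection settings from earlier with a few of these optional properties; the chosen values are placeholders, not recommendations:

connector.name=vertica
connection-url=jdbc:vertica://example.net:5433/test_db
connection-user=root
connection-password=secret
case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true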

Type mapping#

Because Trino and Vertica each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.

Vertica to Trino type mapping#

The connector maps Vertica types to the corresponding Trino types according to the following table:

  • BOOLEAN -> BOOLEAN

  • BIGINT -> BIGINT. Vertica treats TINYINT, SMALLINT, INTEGER, and BIGINT as synonyms for the same 64-bit BIGINT data type.

  • DOUBLE PRECISION (FLOAT) -> DOUBLE. Vertica treats FLOAT and REAL as the same 64-bit IEEE FLOAT.

  • DECIMAL(p, s) -> DECIMAL(p, s)

  • CHAR, CHAR(n) -> CHAR, CHAR(n)

  • VARCHAR, LONG VARCHAR, VARCHAR(n), LONG VARCHAR(n) -> VARCHAR(n)

  • VARBINARY, LONG VARBINARY, VARBINARY(n), LONG VARBINARY(n) -> VARBINARY(n)

  • DATE -> DATE

No other types are supported.

Unsupported Vertica types can be converted to VARCHAR with the unsupported_type_handling catalog session property. The default value for this property is IGNORE.

SET SESSION example.unsupported_type_handling = 'CONVERT_TO_VARCHAR';

Trino to Vertica type mapping#

The connector maps Trino types to the corresponding Vertica types according to the following table:

  • BOOLEAN -> BOOLEAN

  • TINYINT -> BIGINT

  • SMALLINT -> BIGINT

  • INTEGER -> BIGINT

  • BIGINT -> BIGINT

  • REAL -> DOUBLE PRECISION

  • DOUBLE -> DOUBLE PRECISION

  • DECIMAL(p, s) -> DECIMAL(p, s)

  • CHAR -> CHAR

  • VARCHAR -> VARCHAR

  • VARBINARY -> VARBINARY

  • DATE -> DATE

No other types are supported.

Type mapping configuration properties#

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

  • unsupported-type-handling: Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. Defaults to IGNORE.

  • jdbc-types-mapped-to-varchar: Allow forced mapping of a comma-separated list of data types to convert to unbounded VARCHAR.
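For example, a hypothetical configuration that forces two additional Vertica types to be read as unbounded VARCHAR; the type names are illustrative placeholders and should match the types present in your tables:

jdbc-types-mapped-to-varchar=INTERVAL,UUID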

SQL support#

The connector provides read and write access to data and metadata in Vertica. In addition to the globally available and read operation statements, the connector supports the following features:

ALTER TABLE RENAME TO#

The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two

The following statement attempts to rename a table across schemas, and therefore is not supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two

Table functions#

The connector provides specific table functions to access Vertica.

query(VARCHAR) -> table#

The query function allows you to query the underlying database directly. It requires syntax native to the data source, because the full query is pushed down and processed in the data source. This can be useful for accessing native features or for improving query performance in situations where running a query natively may be faster.

The query table function is available in the system schema of any catalog that uses the Vertica connector, such as example. The following example passes myQuery to the data source. myQuery has to be a valid query for the data source, and is required to return a table as a result:

SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'myQuery'
    )
  );
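For example, the following query runs an aggregation directly in Vertica; sales.orders and its columns are hypothetical names standing in for a table in your Vertica database:

SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT customer_id, count(*) AS order_count FROM sales.orders GROUP BY customer_id'
    )
  );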

Performance#

The connector includes a number of performance improvements, detailed in the following sections.

Pushdown#

The connector supports pushdown for a number of operations:

Join pushdown#

The join-pushdown.enabled catalog configuration property or the join_pushdown_enabled catalog session property controls whether the connector pushes down join operations. The property defaults to false, and enabling join pushdown may negatively impact performance for some queries.
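For example, to enable join pushdown for the whole catalog, add the following to the catalog properties file:

join-pushdown.enabled=true

Alternatively, enable it only for the current session with the catalog session property:

SET SESSION example.join_pushdown_enabled = true;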

Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.

You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.
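For example:

dynamic-filtering.enabled=false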

Wait timeout#

By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.

You can configure the dynamic-filtering.wait-timeout property in your catalog properties file:

dynamic-filtering.wait-timeout=1m

You can use the dynamic_filtering_wait_timeout catalog session property in a specific session:

SET SESSION example.dynamic_filtering_wait_timeout = 1s;

Compaction#

The maximum size of the dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file:

domain-compaction-threshold=100

You can use the domain_compaction_threshold catalog session property:

SET SESSION example.domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.

For example, if the dynamic filter collected for a date column dt on the fact table selects more than 32 days, the filtering condition is simplified from dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.

Metrics#

Metrics about dynamic filtering are reported in a JMX table for each catalog:

jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats"

Metrics include information about the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters and the time spent waiting for dynamic filters.
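Assuming the JMX catalog is configured with the name jmx, you can query these metrics directly:

SELECT * FROM jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";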

JDBC connection pooling#

When JDBC connection pooling is enabled, each node creates and maintains a connection pool instead of opening and closing separate connections to the data source. Each connection is available to connect to the data source and retrieve data. After completion of an operation, the connection is returned to the pool and can be reused. This improves performance by a small amount, reduces the load on any required authentication system used for establishing the connection, and helps avoid running into connection limits on data sources.

JDBC connection pooling is disabled by default. You can enable JDBC connection pooling by setting the connection-pool.enabled property to true in your catalog configuration file:

connection-pool.enabled=true

The following catalog configuration properties can be used to tune connection pooling:

  • connection-pool.enabled: Enable connection pooling for the catalog. Defaults to false.

  • connection-pool.max-size: The maximum number of idle and active connections in the pool. Defaults to 10.

  • connection-pool.max-connection-lifetime: The maximum lifetime of a connection. When a connection reaches this lifetime it is removed, regardless of how recently it has been active. Defaults to 30m.

  • connection-pool.pool-cache-max-size: The maximum size of the JDBC data source cache. Defaults to 1000.

  • connection-pool.pool-cache-ttl: The expiration time of a cached data source when it is no longer accessed. Defaults to 30m.
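As a sketch, the following catalog properties enable pooling and tune a few of these limits; the specific values are placeholders to adjust for your workload:

connection-pool.enabled=true
connection-pool.max-size=20
connection-pool.max-connection-lifetime=15m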

Caching table projections#

The connector supports table scan redirection to improve performance and reduce load on the data source.

Table statistics#

You can use ANALYZE statements in SEP to populate the table statistics in Vertica. The cost-based optimizer then uses these statistics to improve query performance.

Support for table statistics is disabled by default. You can enable it with the catalog property statistics.enabled set to true. In addition, the connection-user configured in the catalog must have superuser permissions in Vertica to gather and populate statistics.
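For example, add the following to the catalog properties file:

statistics.enabled=true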

You can view statistics in SEP using SHOW STATS.
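For example, assuming a table named schema_one.table_one exists in the example catalog, you can collect statistics and then view them:

ANALYZE example.schema_one.table_one;

SHOW STATS FOR example.schema_one.table_one;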

Security#

The connector includes a number of security-related features, detailed in the following sections.

User impersonation#

The connector supports user impersonation. Enable user impersonation by setting the vertica.impersonation.enabled property in the catalog properties file to true:

vertica.impersonation.enabled=true

User impersonation in the connector is based on the SET ROLE command supported in Vertica. Prior to setting the impersonated role, SET ROLE NONE is executed to clear any roles that have already been set, so only the impersonated role is used.
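Conceptually, when a user is impersonated through a hypothetical Vertica role named analyst_role, the connector issues statements on the Vertica connection equivalent to the following:

SET ROLE NONE;
SET ROLE analyst_role;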

Password credential pass-through#

The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:

vertica.authentication.type=PASSWORD_PASS_THROUGH

For more information about configurations and limitations, see Password credential pass-through.