Release 393-e LTS (31 Aug 2022)#

Starburst Enterprise platform (SEP) 393-e is the follow-up release to the 392-e STS release and the 380-e LTS release.

This release is a long term support (LTS) release.

The 393-e release includes all improvements from the intervening Trino project releases, as well as all improvements from the Starburst Enterprise releases since 380-e LTS.

Highlights since 380-e#

  • Update to Java 17, improving general performance and efficiency.

  • Full CRUD feature support added to the Iceberg connector.

  • Add support for Iceberg version 2.

  • Add support for MERGE to several connectors, including Delta Lake, Hive, and Iceberg.

  • Add support for explicit deny policies in Built-in access control privileges.

  • Fault-tolerant execution is now available in public preview.

  • Add support for OAuth 2.0 refresh tokens.

Breaking changes since 380-e#

  • SEP now requires Java 17 as the runtime version. The scope of the necessary changes varies depending on your deployment method and your usage of SEP, including connectors, authentication types, security integrations, and other components. Thoroughly review the dedicated migration guide to determine the necessary steps.

  • Renamed the optimizer.join_partitioned_build_min_row_count configuration property to optimizer.join-partitioned-build-min-row-count, which no longer accepts negative values.

  • Remove support for Kubernetes 1.18.

  • Remove support for MapR-based Hive metastore as well as the MapR filesystem.

  • The release adds a new target catalog parameter to the cloneDataProduct endpoint of the data products API. This new parameter is mandatory, which breaks existing configurations that use this endpoint.

  • If you enable snowflake.database-prefix-for-schema.enabled, any existing queries must be updated to replace schema in the fully-qualified name with "database.schema" in double-quotes. Failure to do so results in an error when the query is run.
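
A sketch of the required query change, assuming a hypothetical Snowflake catalog named example with a database db and a schema sales:

```sql
-- Before enabling snowflake.database-prefix-for-schema.enabled:
SELECT * FROM example.sales.orders;

-- After enabling it, the database and schema form one
-- double-quoted identifier in the fully qualified name:
SELECT * FROM example."db.sales".orders;
```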

  • The optimizer.enable-starburst-optimizer-rules property is now defunct and must be removed from config.properties if configured. Failure to do so results in the cluster failing to start. Extended Starburst optimizer properties are now enabled on all clusters.

  • The catalogname.wait-for-dynamic-filters catalog property is now defunct, and must be removed from your catalog properties files to avoid the cluster failing to start. JDBC-based connectors wait for dynamic filters collection by default. The catalogname.dynamic-filtering.wait-timeout property can be used to change the default behavior.
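
A sketch of the migration in a catalog properties file, assuming the keys appear there without the catalogname. prefix (hypothetical timeout value):

```properties
# Defunct property; remove it or the cluster fails to start:
# wait-for-dynamic-filters=true

# JDBC-based connectors now wait for dynamic filters by default.
# Optionally adjust how long they wait:
dynamic-filtering.wait-timeout=20s
```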

  • The JMX table containing metrics related to dynamic filtering for JDBC based connectors is now jmx.current."io.trino.plugin.jdbc:name=catalogname,type=dynamicfilteringstats".
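
The new metrics table can be queried directly, substituting your catalog's name for catalogname:

```sql
SELECT *
FROM jmx.current."io.trino.plugin.jdbc:name=catalogname,type=dynamicfilteringstats";
```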

  • Validation was added to data products to ensure that dataset names are SQL compliant. Datasets with an invalid name do not refresh until the name is updated to be SQL compliant.

  • The teradata.query-pass-through.enabled catalog configuration property is now deprecated, as the current behavior is considered legacy and may be removed in a future release. Existing configurations that use this property must either remove it or rename it to deprecated.teradata.query-pass-through.enabled; otherwise the cluster does not start.
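
A sketch of the rename in a hypothetical Teradata catalog properties file:

```properties
# Legacy name; the cluster does not start if this remains:
# teradata.query-pass-through.enabled=true

# Renamed, deprecated form that preserves the legacy behavior for now:
deprecated.teradata.query-pass-through.enabled=true
```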

  • The http-server.authentication.oauth2.auth-url, http-server.authentication.oauth2.jwks-url, and http-server.authentication.oauth2.token-url authentication properties are deprecated.
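
These endpoints are typically discovered instead from the identity provider's OpenID Connect metadata via the issuer URL. A hedged sketch of a config.properties fragment, with a hypothetical issuer and client:

```properties
http-server.authentication.type=oauth2
# The authorization, token, and JWKS endpoints are discovered from the
# issuer's metadata, replacing the deprecated explicit URL properties:
http-server.authentication.oauth2.issuer=https://idp.example.com/realms/sep
http-server.authentication.oauth2.client-id=sep-cluster
http-server.authentication.oauth2.client-secret=example-secret
```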

393-e.1 initial changes#

The following changes from 393-e.0 and 393-e.1 are all part of the first public release.

General#

  • Add endpoint to return Data Products OpenAPI .json and .yaml.

  • Add ability to log anonymized query statistics and query input-output metadata in telemetry.

  • The data-product.starburst-jdbc-url configuration property now defaults to the coordinator’s address with no JDBC driver parameters set, if not specified. Customers using TLS must still specify this property in order to set the SSL=true parameter.
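
For example, a TLS-enabled deployment might set the property explicitly, using a hypothetical coordinator host:

```properties
data-product.starburst-jdbc-url=jdbc:trino://coordinator.example.com:8443?SSL=true
```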

  • After cloning a data product, the Starburst Enterprise web UI now redirects to the Details screen of the cloned data product.

  • Hide the data products edit button in the UI for users who are not authorized to edit.

  • Fix reporting of worker count and CPU usage in the Starburst Enterprise web UI when a cluster scales workers to 0.

  • Fix issue where closing the preview query window in the query editor did not properly cancel the query.

  • Fix bug that prevented loading the details page for data products published in catalogs with dashes in the name.

Db2 connector#

  • Add support for pass-through statements with the query table function.
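
A sketch of a pass-through statement, assuming a hypothetical Db2 catalog named example; the inner query is executed verbatim by the underlying database:

```sql
SELECT *
FROM TABLE(
    example.system.query(
        query => 'SELECT * FROM sysibm.sysdummy1'));
```

The same query table function syntax applies to the other connectors gaining pass-through support in this release.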

Delta Lake connector#

  • Add Parquet INT32 to Trino short DECIMAL type coercion in the accelerated Parquet reader.

  • Fix bug in accelerated Parquet reader when reading data written by certain writers.

Generic JDBC connector#

  • Add support for pass-through statements with the query table function.

Hive connector#

  • Add Parquet INT32 to Trino short DECIMAL type coercion in the accelerated Parquet reader.

  • Fix bug in accelerated Parquet reader when reading data written by certain writers.

Iceberg connector#

  • Add Parquet INT32 to Trino short DECIMAL type coercion in the accelerated Parquet reader.

  • Fix bug in accelerated Parquet reader when reading data written by certain writers.

Netezza connector#

  • Add support for pass-through statements with the query table function.

SAP HANA connector#

  • Add support for pass-through statements with the query table function.

Snowflake connector#

  • Add support for pass-through statements with the query table function.

Stargate#

  • Add support for pass-through statements with the query table function.

Teradata connector#

  • Add support for pass-through statements with the query table function.

  • Fix reading DATE columns when Teradata DATEFORM was set to ANSI.

  • Fix issue with incorrect case-sensitive comparison for predicate pushdown of CHAR and VARCHAR types in TERA mode.

Vertica connector#

  • Add support for pass-through statements with the query table function.

393-e.2 changes (26 Sep 2022)#

  • Fix bug that prevented cluster metrics for Insights from being persisted.

  • Fix query failure when renaming or dropping columns that should be quoted. Applies to the ClickHouse, MariaDB, MySQL, Oracle, Phoenix, PostgreSQL, Redshift, SingleStore, and SQL Server connectors.

  • Fix query failure when adding a column that should be quoted. Applies to the Phoenix connector.

  • Fix query failure when adding a column with a column comment that has special characters which require it to be escaped. Applies to the ClickHouse connector.

  • Fix query failure when creating a table with a table or column comment that has special characters which require it to be escaped. Applies to the ClickHouse, MariaDB, and MySQL connectors.

  • Fix query failure when setting a table comment that has special characters which require it to be escaped. Applies to the ClickHouse, MariaDB, and MySQL connectors.

  • Fix query failure when setting a column comment that has special characters which require it to be escaped. Applies to the ClickHouse, Oracle, PostgreSQL, and Redshift connectors.

  • Fix query failure when reading from a Hive view translated from Hive SQL to Trino SQL with both hive.hive-views.run-as-invoker and hive.hive-views.legacy-translation enabled.

  • Fix memory leak in fault-tolerant execution.

  • Fix potential table corruption when changing a table before committing to the Hive metastore has completed. Applies to the Iceberg connector.

393-e.3 changes (6 Oct 2022)#

  • Fix potential SQL injection when querying BigQuery tables.

  • Fix potential data corruption when Iceberg commit to Glue fails. Applies to the Iceberg connector.

  • Fix bug that caused an empty directory to remain in the underlying storage when a materialized view was dropped. Applies to the Hive connector.

  • Fix inability to set the AWS STS endpoint and region when using a Glue metastore. Applies to the Delta, Hive, and Iceberg connectors.

393-e.4 changes (27 Oct 2022)#

  • Prevent coordinator out-of-memory failure when querying a large number of tables in a short period of time. Applies to the Delta Lake connector.

  • Fix error when using PREPARE with DROP VIEW where the view name is quoted.

  • Fix network issues during the data transfer for tables with a large number of columns. Applies to the Teradata Direct connector.

  • Fix query failure when using the merge(qdigest) function.

  • Fix issue where Ranger policies with an embedded JavaScript expression are not evaluated properly.

393-e.5 changes (17 Nov 2022)#

  • Fix potential query failure or incorrect results when reading from a table with the avro.schema.literal Hive table property set. Applies to the Hive connector.

  • Fix possible query failures for certain queries with join pushdown enabled on the Teradata Direct connector.

  • Fix failure when reading duplicated column statistics in the Hive connector.

  • No longer overwrite reader and writer versions when executing COMMENT and ALTER TABLE ... ADD COLUMN statements. Applies to the Delta Lake connector.

  • Fix creating metadata and manifest files with URL-encoded name when the metadata location has trailing slashes on S3. Applies to the Iceberg connector.

393-e.6 changes (1 Dec 2022)#

  • Fix a correctness bug for queries with certain window operators used in sequence.

  • Suppress access denied exception in the Hive connector when listing all tables/views in a Glue database.

  • Fix failure for certain queries involving joins over partitioned tables.

393-e.7 changes (8 Dec 2022)#

  • Fix bug in the Parquet reader for arrays spanning multiple Parquet pages. Applies to the Hive, Delta Lake, and Iceberg connectors.

393-e.8 changes (20 Jan 2023)#

  • Fix an issue where, with HMS impersonation enabled, creating a materialized view was denied because of missing Hive metastore permissions while replacing a materialized view was incorrectly allowed.

  • Fix redundant scope parameter.

  • Fix ArrayIndexOutOfBoundsException from the accelerated Parquet reader when reading string columns. Applies to the Hive and Iceberg connectors.

  • Fix issue with plain accessToken passthrough when only OAuth2 with refresh-token is configured.

  • Fix inability to use Ranger policies containing JavaScript.

  • Disallow performing UPDATE or DELETE on Hive ACID transactional tables to prevent correctness issues when the operation modifies a large number of rows. These operations can be re-enabled using the hive.acid-modification-enabled catalog configuration property or the acid_modification_enabled catalog session property.
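
If the correctness risk is acceptable, the operations can be re-enabled per session; a sketch assuming a Hive catalog named example:

```sql
-- Hypothetical catalog name; alternatively set
-- hive.acid-modification-enabled=true in the catalog properties file.
SET SESSION example.acid_modification_enabled = true;
```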

  • Fix Parquet read failure where column indexes do not include a null count.