# Release 370-e LTS (17 Feb 2022)

Starburst Enterprise platform (SEP) 370-e is the follow-up release to the 369-e STS release and the 364-e LTS release.

This release is a long term support (LTS) release.

The 370-e release includes all improvements from the corresponding Trino project releases, as well as all improvements from the Starburst Enterprise releases since 364-e LTS.

## Breaking changes since 364-e

• Remove the distinction between system and user memory, which simplifies cluster configuration. The configuration property query.max-total-memory-per-node is removed; use query.max-memory-per-node instead.
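A minimal sketch of the corresponding update to a node's config.properties file; the 12GB value is illustrative, not a recommendation:

```properties
# Before 370-e (this property is now removed; remove it before upgrading):
# query.max-total-memory-per-node=12GB

# From 370-e on, configure the single per-node memory limit instead:
query.max-memory-per-node=12GB
```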

• Remove unnecessary spilling configuration properties spill-order-by and spill-window-operator.

• Remove the catalog configuration property allow-drop-table from the connectors that previously supported it.

## 370-e initial changes

### General

• Use UBI8 minimal base image for the Cache Service Docker image.

### Redshift connector

• Add support for the timestamptz and timestamp with time zone Redshift data types.

## 370-e.1 changes (23 Feb 2022)

• Fix data corruption when multiple queries DELETE from, INSERT into, or UPDATE a Delta Lake table.

• Fix incorrect date results when the value is <= 1582-10-14 in the MySQL, SingleStore, and PostgreSQL connectors.

• Migrate Docker image for Starburst Cache Service to UBI8 minimal base image.

• Fix IndexOutOfBoundsException with the accelerated Parquet reader.

• Improve query performance by removing redundant predicates above table scan.

• Add separate properties for built-in access control to set authorized users and groups.

• Fix potential query failures for queries writing data to tables backed by S3 in Hive and Iceberg connectors.

• Fix issue where memory was not always released after query completion.

• Fix performance regression in internal communication authentication processing.

• Fix failure when casting to decimal(38, 38).
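A decimal(38, 38) type uses all 38 digits of precision for the fractional part, so only sub-unit values fit in it; the query below is a hedged illustration of the class of cast covered by this fix, not a reproduction of a specific reported case:

```sql
-- Cast to a decimal type whose scale equals its precision.
SELECT CAST(0.1 AS decimal(38, 38));
```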

## 370-e.2 changes (18 Mar 2022)

• Fix query failures due to exhausted file system resources after DELETE or UPDATE.

• Fix issue where the length of log file names grew indefinitely upon log rotation.

• Add INT to BIGINT coercion in accelerated Parquet reader in Hive, Delta Lake, and Iceberg connectors.

• Accept only lowercase letters for role names in built-in access control to ensure SQL compatibility.

• Fix metastore impersonation for Avro tables.

• Fix a bug that occurred when Parquet file schema and Trino table definition did not match.

## 370-e.3 changes (25 Apr 2022)

• Fix CVE-2022-23848 in Alluxio client.

• Fix RPM to include MapR Hive libraries.

• Fix failure of the sync_partition_metadata procedure when partition names differ from partition paths on the file system.
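For reference, the procedure is invoked against the Hive connector's system schema; the catalog, schema, and table names below are placeholders:

```sql
-- Synchronize the metastore's partition metadata with the partition
-- directories found on the file system (mode: ADD, DROP, or FULL).
CALL hive.system.sync_partition_metadata(
    schema_name => 'web',
    table_name => 'page_views',
    mode => 'FULL'
);
```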

• Fix query failure when selecting a partition whose name corresponds to a reserved keyword.

• Fix performance regression for GROUP BY queries.

• Fix built-in access control (BIAC) UI to support large amounts of schemas, tables, and columns during privilege creation.

• Fix certain queries failing due to dictionary compacting error.

• Fix performance regression when planning large IN lists.

• Fix bug in the accelerated Parquet reader.

• Fix Delta Lake queries with predicates that are not expressible by tuple domain for nullable partition columns.

## 370-e.4 changes (20 May 2022)

• Ignore non-redirected, non-Delta Lake tables or views when querying information_schema.columns.

• Add support to the Hive connector for date type partition names with timestamp formatting.

## 370-e.5 changes (8 Jun 2022)

• Fix potential query failure when metastore caching is enabled.

• Fix typo in the DELEGATED-OAUTH2 configuration property name by replacing an unsupported underscore with the appropriate dash.

• Fix sync_partition_metadata procedure failure when table has a large number of partitions.

• Fix incorrect results for queries where aggregation is pushed down to a remote database to execute and the aggregation function result is not needed to evaluate the query. Applies to the ClickHouse, MariaDB, MySQL, Oracle, SQL Server, PostgreSQL, and SingleStore connectors.

## 370-e.6 changes (1 Jul 2022)

• Allow canceling a query on a transactional table if it is waiting for a lock.

• Avoid errors when attempting to query tables that exist in multiple Snowflake databases with role impersonation enabled. The errors were a result of multiple tables matching the same schema/table name.

## 370-e.7 changes (1 Aug 2022)

Important: This release patches a bug that can produce incorrect results for certain queries whose joins are reordered in specific ways, in combination with certain data. A small percentage of queries are affected, but because the conditions that trigger the bug are complex, it is not possible to predict which queries are affected. We therefore recommend that _all_ customers upgrade all clusters.

• Fix incorrect results for certain join queries containing filters involving explicit or implicit casts.

• Add VARBINARY handling to the Kafka Protobuf deserializer.

• Fix incorrect query results when reading a Delta Lake table whose cached representation of active data files is outdated.

• Fix failures in certain complex queries that involve joins and aggregations.

• Fix incorrect results when using the Glue metastore and queries contain IS NULL with additional filters. Applicable to Hive, Iceberg, and Delta connectors.

• Fix incorrect pushdown of expressions below a join.

370-e.8 was skipped.

## 370-e.9 changes (9 Sep 2022)

• Fix writing incorrect results in the Delta Lake connector when the order of partition columns differs from the order in the table definition.

• Fix incorrect table already exists error in the Delta Lake connector caused by a client timeout when creating a new table.

## 370-e.10 changes (26 Sep 2022)

• Fix bug that prevented cluster metrics for Insights from being persisted.

• Fix query failure when renaming or dropping columns that should be quoted. Applies to the ClickHouse, MariaDB, MySQL, Oracle, Phoenix, PostgreSQL, Redshift, SingleStore, and SQL Server connectors.

• Fix query failure when adding a column that should be quoted. Applies to the Phoenix connector.

• Fix query failure when adding a column with a column comment that contains special characters requiring escaping. Applies to the ClickHouse connector.

• Fix query failure when creating a table with a table or column comment that contains special characters requiring escaping. Applies to the ClickHouse, MariaDB, and MySQL connectors.

• Fix query failure when setting a table comment that contains special characters requiring escaping. Applies to the ClickHouse, MariaDB, and MySQL connectors.

• Fix query failure when setting a column comment that contains special characters requiring escaping. Applies to the ClickHouse, Oracle, PostgreSQL, and Redshift connectors.