Release 353 (5 Mar 2021)#

General#

  • Add ClickHouse connector. (#4500)

  • Extend support for correlated subqueries including UNNEST. (#6326, #6925, #6951)

  • Add to_geojson_geometry() and from_geojson_geometry() functions, as shown in the example at the end of this section. (#6355)

  • Add support for values of any integral type (tinyint, smallint, integer, bigint, decimal(p, 0)) in window frame bound specifications; see the example after this list. (#6897)

  • Improve query planning time for queries containing IN predicates with many elements. (#7015)

  • Fix potential incorrect results when columns from a WITH clause are exposed with aliases. (#6839)

  • Fix potential incorrect results for queries containing multiple < predicates. (#6896)

  • Always show SECURITY clause in SHOW CREATE VIEW. (#6913)

  • Fix reporting of column references for aliased tables in QueryCompletionEvent. (#6972)

  • Fix potential compiler failure when constructing an array with more than 128 elements. (#7014)

  • Fail SHOW COLUMNS when column metadata cannot be retrieved. (#6958)

  • Fix rendering of function references in EXPLAIN output. (#6703)

  • Fix planning failure when a WITH clause contains hidden columns. (#6838)

  • Prevent client hangs when OAuth2 authentication fails. (#6659)
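
  As a rough illustration of the new GeoJSON conversion functions, the sketch below round-trips a
  point through GeoJSON text. The coordinates are arbitrary and the exact output formatting may
  differ slightly.

      -- Convert a geometry to GeoJSON text; returns a varchar such as
      -- '{"type":"Point","coordinates":[12.5,55.7]}'.
      SELECT to_geojson_geometry(to_spherical_geography(ST_GeometryFromText('POINT (12.5 55.7)')));

      -- Parse GeoJSON text back into a spherical geography value.
      SELECT ST_AsText(to_geometry(from_geojson_geometry('{"type": "Point", "coordinates": [12.5, 55.7]}')));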

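  One way to exercise the widened window frame bounds is a RANGE frame with a bigint offset. The
  orders table and its columns below are placeholders (they match the TPC-H schema, but any table
  with a suitable sort key works):

      SELECT
          orderkey,
          sum(totalprice) OVER (
              ORDER BY orderkey
              -- the frame bound offset may now be a value of any integral type, e.g. bigint
              RANGE BETWEEN BIGINT '10' PRECEDING AND CURRENT ROW
          ) AS rolling_sum
      FROM orders;
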
Server RPM#

  • Allow configuring process environment variables through /etc/trino/env.sh, as sketched below. (#6635)
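
  A minimal sketch of what /etc/trino/env.sh might contain, for example to point the server at a
  specific JDK. The variable and path below are illustrative assumptions, not a prescribed format:

      # /etc/trino/env.sh -- environment variables for the Trino server process
      # (illustrative value; adjust to your installation)
      JAVA_HOME=/usr/lib/jvm/java-11-openjdk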

BigQuery connector#

  • Add support for CREATE TABLE and DROP TABLE statements. (#3767)

  • Allow case-insensitive identifier matching via the bigquery.case-insensitive-name-matching config property, as shown in the sketch below. (#6748)
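
  For example, a BigQuery catalog properties file enabling case-insensitive matching might look
  like the following sketch (credentials and other connection properties omitted; the project ID
  is a placeholder):

      # etc/catalog/bigquery.properties (illustrative)
      connector.name=bigquery
      bigquery.project-id=example-project
      bigquery.case-insensitive-name-matching=true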

Hive connector#

  • Add support for current_user() in Hive defined views. (#6720)

  • Add support for reading and writing column statistics from Glue metastore. (#6178)

  • Improve parallelism of inserts into bucketed tables. Such inserts can now be parallelized within a task using the task.writer-count config property; see the sketch at the end of this section. (#6924, #6866)

  • Fix a failure when INSERT writes to a partition created by an earlier INSERT statement. (#6853)

  • Fix handling of folders created using the AWS S3 Console. (#6992)

  • Fix query failures on the information_schema.views table when Hive view definitions cannot be translated. (#6370)
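
  The bucketed-insert parallelism mentioned above is controlled by task.writer-count. A sketch of
  the corresponding config.properties entry follows; the value is illustrative and should be tuned
  per cluster:

      # etc/config.properties (illustrative value)
      task.writer-count=4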

Iceberg connector#

  • Fix handling of folders created using the AWS S3 Console. (#6992)

  • Fix query failure when reading nested columns with field names containing uppercase characters. (#7180)

Kafka connector#

  • Fix failure when querying Schema Registry tables. (#6902)

  • Fix querying of Schema Registry tables with references in their schema. (#6907)

  • Fix listing of Schema Registry tables with ambiguous lowercase subject names. (#7048)

MySQL connector#

  • Fix failure when reading a timestamp or datetime value with more than 3 decimal digits in the fractional seconds part. (#6852)

  • Fix incorrect predicate pushdown for char and varchar columns with operators like <>, <, <=, > and >=, due to different case sensitivity between Trino and MySQL. (#6746, #6671)

MemSQL connector#

  • Fix failure when reading a timestamp or datetime value with more than 3 decimal digits in the fractional seconds part. (#6852)

  • Fix incorrect predicate pushdown for char and varchar columns with operators like <>, <, <=, > and >=, due to different case sensitivity between Trino and MemSQL. (#6746, #6671)

Phoenix connector#

  • Add support for Phoenix 5.1. This can be used by setting connector.name=phoenix5 in catalog configuration properties. (#6865)

  • Fix failure when a query contains a LIMIT exceeding 2147483647. (#7169)

PostgreSQL connector#

  • Improve performance of queries with an ORDER BY ... LIMIT clause when the computation can be pushed down to the underlying database. This can be enabled by setting the topn-pushdown.enabled catalog property (see the sketch after this list). Enabling this feature can currently result in incorrect query results when sorting on char or varchar columns. (#6847)

  • Fix incorrect predicate pushdown for char and varchar columns with operators like <>, <, <=, > and >=, due to collation differences between Trino and PostgreSQL. (#3645)
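
  A sketch of enabling the Top-N pushdown described above in a PostgreSQL catalog properties file.
  The connection URL is a placeholder, and the char/varchar sorting caveat above still applies:

      # etc/catalog/postgresql.properties (illustrative)
      connector.name=postgresql
      connection-url=jdbc:postgresql://example-host:5432/exampledb
      topn-pushdown.enabled=true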

Redshift connector#

  • Fix failure when reading a timestamp value with more than 3 decimal digits in the fractional seconds part. (#6893)

SQL Server connector#

  • Abort queries on the SQL Server side when the Trino query is finished. (#6637)

  • Fix incorrect predicate pushdown for char and varchar columns with operators like <>, <, <=, > and >=, due to different case sensitivity between Trino and SQL Server. (#6753)

Other connectors#

  • Reduce number of opened JDBC connections during planning for ClickHouse, Druid, MemSQL, MySQL, Oracle, Phoenix, Redshift, and SQL Server connectors. (#7069)

  • Add experimental support for join pushdown in PostgreSQL, MySQL, MemSQL, Oracle, and SQL Server connectors. It can be enabled with the experimental.join-pushdown.enabled=true catalog configuration property. (#6874)

SPI#

  • Fix lazy blocks to call listeners that are registered after the top-level block is already loaded. Previously, such listeners were not called when the nested blocks were later loaded. (#6783)

  • Fix case where LazyBlock.getFullyLoadedBlock() would not load nested blocks when the top-level block was already loaded. (#6783)

  • Do not include coordinator node in the result of ConnectorAwareNodeManager.getWorkerNodes() when node-scheduler.include-coordinator is false. (#7007)

  • The function name passed to ConnectorMetadata.applyAggregation() is now the canonical function name. Previously, if the query used a function alias, the alias name was passed. (#6189)

  • Add support for redirecting table scans to multiple tables that are unioned together. (#6679)

  • Change return type of Range.intersect(Range). The method now returns Optional.empty() instead of throwing when ranges do not overlap. (#6976)

  • Change signature of ConnectorMetadata.applyJoin() to have an additional JoinStatistics argument. (#7000)

  • Deprecate io.trino.spi.predicate.Marker.