Great Lakes connectivity supports a set of session properties that are shared among two or more table formats. In addition, each table format supports a unique set of session properties.
The following table describes the session properties available with the Hive, Iceberg, and Delta Lake table formats:
Session property | Description |
---|---|
`dynamic_filtering_wait_timeout` | The duration to wait for completion of dynamic filtering during split generation. Dynamic filtering optimizations significantly improve the performance of queries with selective joins by avoiding reading of data that would be filtered by join conditions. This extra wait time can result in significant overall savings in query and CPU time if dynamic filtering is able to reduce the amount of scanned data. See the example after this table. |
`orc_bloom_filters_enabled` | Enable bloom filters for predicate pushdown. The default value is `false`. |
`orc_lazy_read_small_ranges` | Read small file segments lazily. The default value is `true`. |
`orc_max_buffer_size` | Maximum size of a single read. The default value is `8MB`. |
`orc_max_merge_distance` | Maximum size of the gap between two reads to merge into a single read. The default value is `1MB`. |
`orc_max_read_block_size` | Soft maximum size of Trino blocks produced by the ORC reader. The default value is `16MB`. |
`orc_native_zstd_decompressor_enabled` | Enable using the native zstd library for faster decompression of ORC files. The default value is `true`. |
`orc_nested_lazy_enabled` | Lazily read nested data. The default value is `true`. |
`orc_stream_buffer_size` | Size of the buffer for streaming reads. The default value is `8MB`. |
`orc_string_statistics_limit` | Maximum size of the string statistics. The default value is `64B`. |
`orc_tiny_stripe_threshold` | The threshold below which an ORC stripe or file is read in its entirety. The default value is `8MB`. |
`parquet_use_bloom_filter` | Specifies whether Bloom filters are used for predicate pushdown when reading Parquet files. The default value is `true`. |
`parquet_max_read_block_row_count` | Sets the maximum number of rows read in a batch. The default value is `8192`. |
`parquet_max_read_block_size` | The maximum block size used when reading Parquet files. The default value is `16MB`. |
`parquet_native_snappy_decompressor_enabled` | Enable using the native Snappy library for faster decompression of Parquet files. The default value is `true`. |
`parquet_optimized_nested_reader_enabled` | Specifies whether batched column readers are used when reading `ARRAY`, `MAP`, and `ROW` types from Parquet files for improved performance. Set this property to `false` to disable the optimized Parquet reader for structural data types. The default value is `true`. |
`parquet_optimized_reader_enabled` | Specifies whether batched column readers are used when reading Parquet files for improved performance. Set this property to `false` to disable the optimized Parquet reader. The default value is `true`. |
`parquet_vectorized_decoding_enabled` | Enable the use of the Java Vector API for faster decoding of Parquet files. The default value is `true`. |
`parquet_writer_batch_size` | Maximum number of rows processed by the Parquet writer in a batch. The default value is `10000`. |
`parquet_writer_block_size` | The maximum block size created by the Parquet writer. The default value is `128MB`. |
`parquet_writer_page_size` | The maximum page size created by the Parquet writer. The default value is `1MB`. |
`projection_pushdown_enabled` | Enable projection pushdown. The default value is `true`. |
`sorted_writing_enabled` | Enable sorted writes. The default value is `true`. |
`statistics_enabled` | Enables table statistics for performance improvements. The default value is `true`. If the value is set to `false`, the following session property is disabled: |
`target_max_file_size` | Target maximum size of written files. The default value is `1GB`. |
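These properties are adjusted per query session with standard Trino SQL. The following is a minimal sketch; the catalog name `sales_lake` is a placeholder for your own Great Lakes catalog:

```sql
-- Inspect the current values of the catalog's Parquet-related session properties.
SHOW SESSION LIKE 'sales_lake.parquet%';

-- Override properties for the current session only; size and duration
-- values are passed as quoted strings.
SET SESSION sales_lake.parquet_max_read_block_size = '32MB';
SET SESSION sales_lake.dynamic_filtering_wait_timeout = '10s';

-- Restore a property to its default value.
RESET SESSION sales_lake.parquet_max_read_block_size;
```

Changes made with `SET SESSION` apply only to the current session; other sessions and the catalog's configured defaults are unaffected.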
Read more about the available session properties for each table format in the format-specific documentation for Hive, Iceberg, and Delta Lake.