Event logger#

The event logger allows query completion events and all related information to be logged into a PostgreSQL database.

Note

Event logger requires a valid Starburst Enterprise Presto license.

Requirements#

  • PostgreSQL database, version 9.6 or higher

  • Network connectivity between the coordinator and the PostgreSQL server

  • PostgreSQL JDBC driver JAR copied into plugin/event-logger on all nodes

Connectivity between Presto and the PostgreSQL database can be verified from the coordinator or a worker.

You can use ping to verify that the hostname resolves and the server is reachable:

$ ping postgresql.example.com

You can use telnet with the hostname and port of the database to confirm that PostgreSQL is listening on the configured port:

$ telnet postgresql.example.com 5432

Installation and configuration#

The event logger is implemented as an event listener and is enabled by creating a configuration file named etc/event-listener.properties on the coordinator and all workers. The event-listener.name property must be set to event-logger. The configuration also includes the JDBC connection URL with hostname, port, and database name, as well as the username and password for accessing the PostgreSQL database.

event-listener.name=event-logger
jdbc.url=jdbc:postgresql://postgresql.example.com:5432/public
jdbc.user=test_psql
jdbc.password=test12

The user needs to have sufficient rights to create tables and insert data in the configured schema.
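For example, on the PostgreSQL side the required privileges could be granted with statements like the following. This is only an illustration; the role name and password match the jdbc.user and jdbc.password properties above, and the schema is assumed to be public:

```sql
-- Hypothetical setup; adjust role, password, and schema to your environment.
CREATE ROLE test_psql LOGIN PASSWORD 'test12';
-- USAGE allows accessing objects in the schema, CREATE allows the
-- event logger to create the completed_queries table on first use.
GRANT USAGE, CREATE ON SCHEMA public TO test_psql;
```

Tables created by this role are owned by it, so no separate INSERT grant is needed for the automatically created table.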

The cluster must be restarted after creating the configuration file. On first use, the completed_queries table is automatically created in the PostgreSQL database.

Once the table is created, logging is activated and running.
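To confirm that logging is active, you can, for example, query the table directly on the PostgreSQL side with psql, using the credentials from the configuration above:

```sql
-- Run against the PostgreSQL database configured in jdbc.url;
-- the row count grows as queries complete on the cluster.
SELECT count(*) FROM public.completed_queries;
```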

Logged information#

For each completed query, a row is created in the completed_queries table. It captures everything that Presto emits: the query text, user, metadata, statistics, performance-related attributes, resource consumption, start and end times, and much more:

Columns of the completed_queries table#

query_id
    Unique ID of the query

catalog
    Session catalog

schema
    Session schema

principal
    Identity used to authenticate to SEP

user_agent
    User agent that submitted the query

client_info
    Information about the client submitting the query

source
    Client tool or driver used to execute the query

environment
    Name of the SEP environment

remote_client_address
    Address of the client that submitted the query

server_version
    Version of SEP that executed the query

usr
    Identity used for authorization and access control

query_state
    State of the query when logged

query
    Full SQL text of the submitted query

query_plan
    Full explain plan of the submitted query, including costs for each stage. Identical to the output of EXPLAIN ANALYZE <query>.

total_rows
    Sum of all rows used in the query input

total_bytes
    Sum of all bytes used in the query input

output_rows
    Number of rows in the query output

output_bytes
    Number of bytes in the query output

written_rows
    Number of inserted rows

written_bytes
    Number of bytes of inserted rows

cpu_time_ms
    Total CPU reservation time

wall_time_ms
    Query execution time as measured by a wall clock

queued_time_ms
    Time spent between query submission and the beginning of query planning

peak_user_memory_bytes
    Maximum amount of memory directly tied to query resources used by the query at one time

peak_total_non_revocable_memory_bytes
    Maximum amount of memory used at one time that is not eligible to be spilled to disk

peak_task_user_memory
    Maximum amount of user memory reserved at one time by one task

peak_task_total_memory
    Maximum amount of user and system memory reserved at one time by one task; values are affected by the memory management configuration

physical_input_bytes
    Number of uncompressed bytes read from the source

physical_input_rows
    Number of rows read from the source

internal_network_bytes
    Number of bytes exchanged between nodes during query execution

internal_network_rows
    Number of rows exchanged between nodes during query execution

cumulative_memory
    Integral of memory reservation with respect to time, in units of memory x time. Dividing this figure by the execution time approximates the average memory reservation during query execution.

completed_splits
    Total number of splits completed by the query

plan_node_stats_and_costs
    Table statistics for each source table, as well as estimates of the CPU, memory, and network I/O for each stage of the query plan and each operator in each plan node. Operator costs are denoted by their plan node ID, as indicated in the operator_summaries column. Cost values do not represent any actual unit; they are relative numbers used to compare plan nodes, allowing the optimizer to choose the best plan for executing a query.

stage_gc_statistics
    Information about JVM garbage collection tasks and time spent in GC for each query stage

cpu_time_distribution
    Min, max, average, and percentiles of reserved CPU time per task for each query stage. Useful for identifying skew in a query.

operator_summaries
    Information about every operator invoked at each stage of the query, including the operator type (TableScan, Aggregation, Join, etc.) and the resources used (CPU time, network I/O, memory reserved, etc.)

resource_waiting_time
    Time spent waiting for sufficient resources before query execution

analysis_time
    Time spent optimizing the query plan

execution_time
    Total time the query spent in the execution phase

create_time
    Timestamp of when the query was received

end_time
    Timestamp of when the client finished consuming the results

accessed_metadata
    List of catalogs, schemas, tables, and columns read. Only populated for SELECT queries that return more than 0 rows.

Analyzing the event log#

To analyze and query the event log, you can create a catalog with the PostgreSQL connector using the same JDBC connection parameters. You can then use the Presto CLI, or any other application connected to the catalog, to run queries, build dashboards, perform ad hoc analysis, and more.
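A minimal sketch of such a catalog file, here named etc/catalog/postgresql.properties and reusing the connection parameters from the event logger configuration:

```properties
connector.name=postgresql
connection-url=jdbc:postgresql://postgresql.example.com:5432/public
connection-user=test_psql
connection-password=test12
```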

Example use cases:

  • measure query performance numbers and trends

  • understand the impact of different cluster configurations

  • inform cluster workload management and track resource consumption
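As a sketch of such an analysis, the following query lists the ten longest-running queries of the last day. It assumes the catalog from this guide is named postgresql and the completed_queries table lives in the public schema:

```sql
-- Top 10 queries by wall clock time over the last 24 hours
SELECT query_id, usr, wall_time_ms, cpu_time_ms, query
FROM postgresql.public.completed_queries
WHERE create_time > current_timestamp - INTERVAL '1' DAY
ORDER BY wall_time_ms DESC
LIMIT 10;
```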

Example#

Generate event logger data by running a query:

presto> SELECT nationkey, name, regionkey FROM tpch.sf1.nation LIMIT 5;
nationkey |   name    | regionkey
-----------+-----------+-----------
        0 | ALGERIA   |         0
        1 | ARGENTINA |         1
        2 | BRAZIL    |         1
        3 | CANADA    |         1
        4 | EGYPT     |         4
(5 rows)

Query 20200703_035704_00001_9n2pi, FINISHED, 1 node
Splits: 33 total, 33 done (100.00%)
1.80 [25 rows, 0B] [13 rows/s, 0B/s]

Event logger data is written to the completed_queries table in the PostgreSQL database.

In the following example, the catalog file is named postgresql.properties and the public schema contains the completed_queries table. The query_id from the execution is used to locate the correct record. The output in the following example has been trimmed.

presto> SELECT *
        FROM postgresql.public.completed_queries
        WHERE query_id = '20200703_035704_00001_9n2pi';
-[ RECORD 1 ]-------------------------+---------------------------------------
query_id                              | 20200703_035704_00001_9n2pi
catalog                               | NULL
schema                                | NULL
principal                             | NULL
user_agent                            | StatementClientV1/334
client_info                           | NULL
source                                | presto-cli
environment                           | demo
remote_client_address                 | 127.0.0.1
server_version                        | 334-e
usr                                   | demo
query_state                           | FINISHED
query                                 | SELECT nationkey, name, regionkey FROM tpch.sf1.nation LIMIT 5
query_plan                            | Fragment 0 [SINGLE]
                                      |     CPU: 24.50ms, Scheduled: 117.15ms, Input: 5 rows (148B); per task: avg.: 5.00 std.dev.: 0.00, Output: 5 rows
                                      |     Output layout: [nationkey, name, regionkey]
                                      | ...
total_rows                            | 25
total_bytes                           | 0
output_rows                           | 5
output_bytes                          | 148
written_rows                          | 0
written_bytes                         | 0
cpu_time_ms                           | 1615
wall_time_ms                          | 2169
queued_time_ms                        | 10
peak_user_memory_bytes                | 0
peak_total_non_revocable_memory_bytes | 0
peak_task_user_memory                 | 0
peak_task_total_memory                | 0
physical_input_bytes                  | 0
physical_input_rows                   | 25
internal_network_bytes                | 202
internal_network_rows                 | 5
cumulative_memory                     | 0.0
completed_splits                      | 33
plan_node_stats_and_costs             | {"stats":{},"costs":{}}
stage_gc_statistics                   | [{"stageId":0,"tasks":1,"fullGcTasks":0,...
cpu_time_distribution                 | [{"stageId":0,"tasks":1,"p25":25,"p50":25,...
operator_summaries                    | [{"stageId":0,"pipelineId":0,"operatorId":0,..
resource_waiting_time                 | 165
analysis_time                         | 168
execution_time                        | 1993
create_time                           | 2020-07-03 03:59:54.543 UTC
end_time                              | 2020-07-03 03:59:56.712 UTC
accessed_metadata                     | [{"catalogName":"tpch","schema":"sf1.0","table":"nation","columns":["nationkey","regionkey","name"]}]
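The JSON columns can be unpacked with Presto's JSON functions. For example, assuming the same catalog name as above, the following sketch extracts the first accessed catalog and table from accessed_metadata:

```sql
-- json_extract_scalar accepts the varchar column and a JSON path
SELECT query_id,
       json_extract_scalar(accessed_metadata, '$[0].catalogName') AS catalog_name,
       json_extract_scalar(accessed_metadata, '$[0].table') AS table_name
FROM postgresql.public.completed_queries
WHERE query_id = '20200703_035704_00001_9n2pi';
```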