# Cache service

The Starburst Enterprise platform (SEP) cache service provides the ability to configure and automate the management of table scan redirections and materialized views in supported connectors.

The service connects to an existing SEP installation to run queries for copying data from the source catalog to the target catalog. The target catalog is regularly synchronized with the source and used as a cache.

The cache service can be run as a standalone service or within the coordinator process. You can interact with it using its REST API, or the cache service CLI.

Note

The cache service requires a valid Starburst Enterprise license.

## Requirements

The cache service has similar requirements to SEP, which are described on the Deploying page.

### Linux operating system

• 64-bit required

• Newer release preferred, especially when running on containers

### Java Runtime Environment

The cache service requires a 64-bit version of Java 11. Newer major versions such as Java 12 or 13 are not supported; they may work, but are not tested.

### Python

• version 2.6.x, 2.7.x, or 3.x

• required by the bin/launcher script only

### Relational database

The cache service requires an externally managed database for storing table scan redirections data and materialized view definitions. The following RDBMS are supported:

• MySQL 8.0.12 or higher

• PostgreSQL 9.6 or higher

• OracleDB 12.2.0.1 or higher

### Materialized views

The cache service handles materialized view refreshes. To enable this, each supported catalog where storage tables reside must be configured to allow the creation of these tables.

## Deploy the cache service

The cache service can be deployed either as a standalone application separate from your SEP cluster or within the existing coordinator process. The cache service is available at the configured cache-service.uri for both standalone and embedded deployments.

### Standalone deployment

A standalone deployment ensures that the service is not affected by coordinator performance, or the deployment of a new release on the SEP cluster.

Deployment of the cache service in Kubernetes can be managed with the available Helm chart.

Manual deployment relies on using a tarball:

• Starburst Support provides access to an archive named starburst-cache-service-*.tar.gz

• Extract it, for example with tar xfvz starburst-cache-service-*.tar.gz

The resulting directory starburst-cache-service-nnn, with nnn replaced by the release number, is called the installation directory. It contains all necessary resources.

Move the extracted directory into the desired location, such as /opt/, and you are ready to proceed with configuring the service.
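
For example, the complete installation sequence might look like the following, assuming the archive is in the current directory and /opt/ is the desired location:

tar xfvz starburst-cache-service-*.tar.gz
mv starburst-cache-service-* /opt/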

### Embedded deployment

The cache service can be set up to run embedded within the coordinator process by configuring the cache service on the coordinator in etc/cache.properties. This mode of deployment does not require installing any additional packages or running a separate service.

Note

When running in embedded mode, the cache service uses the coordinator’s resources such as the JVM and logging. This additional resource usage must be planned for when sizing your cluster.

## Run the cache service

Once deployed and configured, the cache service is run using the launcher script in bin/launcher as follows:

bin/launcher start


Alternatively, it can be run in the foreground, with the logs and other output written to stdout/stderr. Both streams should be captured if using a supervision system like daemontools:

bin/launcher run


Run the launcher with --help to see the supported commands and command line options. In particular, the --verbose option is very useful for debugging the installation.

The launcher configures default values for the configuration directory etc, configuration files, the data directory var, and log files in the data directory. You can change these values to adjust your usage to any requirements, such as using a directory outside the installation directory, specific mount points or locations, and even using other file names.
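
For example, the following sketch starts the service with relocated configuration and data directories, assuming the launcher supports the same command line options as the Trino launcher; the paths are illustrative:

bin/launcher start --etc-dir=/etc/cache-service --data-dir=/var/lib/cache-service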

After starting the cache service, you can find log files in the log directory inside the data directory var:

• launcher.log: This log is created by the launcher and is connected to the stdout and stderr streams of the server. It contains a few log messages that occur while the server logging is being initialized, and any errors or diagnostics produced by the JVM.

• server.log: This is the main log file used by the service. It typically contains the relevant information if the server fails during initialization. It is automatically rotated and compressed.

• http-request.log: This is the HTTP request log which contains every HTTP request received by the server. It is automatically rotated and compressed.

## Configure the cache service

The following configuration files must exist inside the installation directory. They are used by the cache service alone in a standalone deployment, or by both the cache service and the coordinator in an embedded deployment:

• etc/config.properties - general configuration properties for the service, described in General configuration properties

• etc/jvm.config - command line options for the JVM running the service, described in JVM configuration

• etc/log.properties - optional log levels, described in Log levels configuration property

The following file is specific to the cache service, and must also exist inside the installation directory to support type mapping:

• etc/type-mapping.json - optional JSON file to specify type mapping between source and target catalogs.

The following file is specific to the table scan redirection feature of the cache service, and must also exist inside the installation directory to define table scan redirection rules:

• etc/rules.json - JSON file specifying the source tables and target connector for the cache along with the schedule for refreshing them

### General configuration properties

The configuration properties file, etc/config.properties, contains the configuration for the cache service when deployed as a standalone system. Users of the embedded mode use an etc/cache.properties file on the coordinator with the same properties.

The following is a minimal configuration for the service:

service-database.user=alice
service-database.jdbc-url=jdbc:mysql://mysql-server:3306/<service-database-name>
starburst.user=bob
starburst.jdbc-url=jdbc:trino://coordinator:8080
rules.file=etc/rules.json


The properties to configure the cache service are explained in detail in the following sections.

General configuration properties

| Property name | Description |
| ------------- | ----------- |
| starburst.user | Username to connect to the SEP cluster for executing queries to refresh the cached tables. This user must also be able to read the source tables for any materialized views it refreshes. |
| starburst.password | Password to connect to the SEP cluster when password-based authentication is enabled on the SEP cluster. |
| starburst.jdbc-url | JDBC URL of the SEP cluster used for executing queries to refresh the cached tables. You can use JDBC driver parameters in the connection string to configure details of the connection. For example, use jdbc:trino://coordinator?SSL=true for a cluster secured with TLS. |
| cache-service.uri | URI of the cache service, including the hostname and port number. |
| rules.file | Path to the JSON file containing rules for identifying source tables and the target connector for caching. It also specifies a schedule for refreshing cached tables. |
| rules.refresh-period | Frequency at which cache rules are refreshed from the rules.file. Defaults to 1m. |
| max-table-import-threads | Maximum number of table import jobs that can run in parallel. Defaults to 20. |
| refresh-interval | Frequency at which the cache service triggers a refresh of cached tables and checks whether materialized views need refreshing based on their refresh interval and cron expression. Do not set this interval to a large value, as that can cause scheduled materialized view refreshes to be skipped. Defaults to 2m. |
| refresh-initial-delay | Initial delay before the refresh starts up. Defaults to 0s. |
| cleanup-interval | Frequency at which the cache service triggers cleanup of expired tables in the cache. Defaults to 5m. |
| cleanup-initial-delay | Initial delay before the cleanup starts up. Defaults to 0s. |

### HTTP/TLS and authentication configuration properties

The following properties allow you to configure the cache service to use either HTTP or HTTPS, and to set up file-based authentication if appropriate for your environment.

HTTP/TLS and authentication properties

| Property name | Description | Default |
| ------------- | ----------- | ------- |
| http-server.http.port | HTTP port for the cache service | 8180 |
| http-server.https.port | HTTPS port of the cache service | 8543 |
| http-server.https.enabled | Flag to activate HTTPS/TLS | false |
| http-server.authentication.type | Authentication type used for the cache service; use password for password file-based authentication | none |
| http-server.https.keystore.path | Path to the JKS keystore file used for TLS | |
| http-server.https.keystore.key | Password for the JKS keystore used for TLS | |
| file.password-file | Path to the password file used with the file authentication type | |

#### Configure the cache service for HTTPS/TLS connections

To configure the cache service to use HTTPS connections, add the following configuration properties to the cache.properties file on the coordinator:

starburst.user=<starburst-user>
starburst.jdbc-url=jdbc:trino://<coordinator-hostname>:8443?SSL=true


This example assumes that your cluster has the default 8443 HTTPS port already enabled.

You can configure the cache service to accept incoming HTTPS connections on a custom port by using the http-server.https.port configuration property in the cache.properties file:

http-server.https.port=<custom-port>


#### Configure the cache service for insecure HTTP connections

To configure the cache service to use HTTP connections, add the following configuration properties to your cache.properties file:

starburst.user=cache-service
starburst.jdbc-url=jdbc:trino://<coordinator-hostname>:8080


The following configuration property must also be set in the config.properties file:

http-server.authentication.allow-insecure-over-http=true


This example assumes that your cluster has the default 8080 HTTP port still enabled.

You can configure the cache service to accept incoming HTTP connections on a custom port by using the http-server.http.port configuration property in the cache.properties file:

http-server.http.port=<custom-port>


#### File-based authentication

File-based password authentication can be configured for the cache service by adding the following properties in the config.properties file. Password authentication requires TLS, so the keystore properties are included:

http-server.authentication.type=password
file.password-file=etc/auth/password.db
http-server.https.enabled=true
http-server.https.keystore.path=etc/auth/localhost.keystore
http-server.https.keystore.key=changeit

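The password file itself must be created separately. Assuming it uses the same bcrypt format as Trino's file-based password authentication, entries can be managed with the htpasswd utility; the path and username are illustrative and match the example above:

htpasswd -c -B -C 10 etc/auth/password.db alice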

The following required properties allow you to configure the connectivity to the service database used for storing redirections.

Database-related configuration properties

| Property name | Description |
| ------------- | ----------- |
| service-database.user | Username used to connect to the database storing table redirections |
| service-database.password | Password used to connect to the database storing table redirections |
| service-database.jdbc-url | JDBC URL of the database storing table redirections; only MySQL, PostgreSQL, and Oracle URLs are supported |
| service-database.connection-pool.enabled | Enables pooling for connections to the service database. Defaults to true. |
| service-database.connection-pool.max-size | Maximum number of connections in the pool. Defaults to 10. |
| service-database.connection-pool.idle-timeout | Maximum time an idle connection is kept in the pool. Defaults to 10m. |
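
For example, a minimal sketch of a service database configuration using PostgreSQL; the hostname, database name, and credentials are illustrative:

service-database.user=cache
service-database.password=secret
service-database.jdbc-url=jdbc:postgresql://db.example.com:5432/cachesrv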

### Table import configuration properties

The following optional properties allow you to configure the table import configuration used when running queries on SEP to populate the cached views.

Cache service cached view import properties

| Property name | Description | Default |
| ------------- | ----------- | ------- |
| unpartitioned.writer-count | Number of writers per task when writing an unpartitioned table | 4 |
| unpartitioned.scale-writers | Scale writers when writing an unpartitioned table | false |
| unpartitioned.writer-min-size | Target minimum size of writer output when writing an unpartitioned table with writer scaling | 32MB |
| partitioned.use-preferred-write-partitioning | Use table partitioning to parallelize writes between worker nodes when writing to a partitioned table. This reduces import memory usage and improves cached table file sizes. | true |
| partitioned.preferred-write-partitioning-min-number-of-partitions | Minimum number of written partitions required to use connector-preferred write partitioning | 1 |
| partitioned.writer-count | Number of writers per task when writing a partitioned table | 4 |
| partitioned.scale-writers | Scale writers when writing a partitioned table | false |
| partitioned.writer-min-size | Target minimum size of writer output when writing a partitioned table with writer scaling | 32MB |
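
For example, to raise write parallelism for partitioned imports and enable writer scaling for unpartitioned imports, you might set the following; the values are illustrative, not recommendations:

partitioned.writer-count=8
unpartitioned.scale-writers=true
unpartitioned.writer-min-size=64MB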

### Configuration properties for table scan redirection

The global table scan redirection properties defaultPartitionedImportConfig and defaultUnpartitionedImportConfig, as well as the rule-specific importConfig and incrementalImportConfig properties, use the following sub-properties:

Table scan redirection-specific configuration properties

| Property name | Description |
| ------------- | ----------- |
| usePreferredWritePartitioning | Implements use-preferred-write-partitioning. |
| preferredWritePartitioningMinNumberOfPartitions | Implements preferred-write-partitioning-min-number-of-partitions. |
| writerCount | Implements writer-count. |
| scaleWriters | Implements scale-writers. |
| writerMinSize | Implements writer-min-size. |
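
The following sketch shows how these sub-properties might appear in a rules.json file, assuming defaultPartitionedImportConfig is accepted as a top-level object; the values mirror the defaults listed in Table import configuration properties:

{
  "defaultPartitionedImportConfig": {
    "usePreferredWritePartitioning": true,
    "preferredWritePartitioningMinNumberOfPartitions": 1,
    "writerCount": 4,
    "scaleWriters": false,
    "writerMinSize": "32MB"
  }
}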

### Log levels configuration property

The optional log levels file, etc/log.properties, allows setting the minimum log level for named logger hierarchies. There are four decreasingly verbose levels: DEBUG, INFO, WARN and ERROR. The default logging level is INFO.

Every logger has a name, which is typically the fully qualified name of the class that uses the logger. Loggers have a hierarchy based on the dots in the name, like Java packages. For example, consider the following log levels file:

com.starburstdata.cache=WARN


This sets the minimum level to WARN for both com.starburstdata.cache.db and com.starburstdata.cache.rules.

## Type mapping

Type mapping overcomes missing type support in target storage catalogs for cache service-managed table scan redirections, and for materialized views with Hive as the target catalog. It allows your users to create materialized views without the need to perform data type casting, and allows data engineers to create cached table projections in target catalogs where there is no one-to-one type mapping. Type mapping uses definitions in a JSON file, type-mapping.json, to define the type casting rules applied when the target catalog is created or updated.

Mappings are key-value pairs as source: target. The following example shows how to construct type mapping rules for the target catalog, myhivesalesdata:

{
  "rules": {
    "myhivesalesdata": {
      "timestamp(0)": "timestamp(3)",
      "timestamp(1)": "timestamp(3)",
      "timestamp(2)": "timestamp(3)"
    }
  }
}


Each target catalog has a separate entry in the JSON file. The Trino name of any type that is supported by the source catalog can be used as a key in the type map. Values must be Trino types supported by the target catalog.

Type casting is applied to regular columns, partition columns, and the column used for incremental update.

### Type mapping behavior

Columns can only be cast based on their type. It is impossible to cast one column of a given type without casting all columns of that same type.

The pre-configured type mappings CAST_TIMESTAMPS_TO_MILLISECONDS, CAST_TIMESTAMPS_TO_MICROSECONDS, and CAST_TIMESTAMPS_TO_NANOSECONDS can only be set using cache properties, not via the JSON type mapping file. These mappings extend the precision of all timestamp-related types, never truncate data, and ignore the target catalog name.

Some mappings, like integer -> varchar, can change the semantics of the max() function used for calculating the contents of an incremental update. Varchar values are compared character by character rather than numerically, so "7" > "10" even though 7 < 10 as integers. As a result, an incremental update can skip or re-import rows, because the computed maximum no longer follows numeric order.

Using timestamp columns for partitioning is strongly discouraged. Timestamp partitions are silently changed to timestamp(3) while being written, which can result in loss of precision for some columns. Additionally, it can lead to a huge number of partitions, which negatively impacts performance.

### Type mapping configuration

The following optional properties allow configuring type mapping rules:

Cache service type mapping configuration

| Property name | Description |
| ------------- | ----------- |
| type-mapping | Kind of type mapping to apply: NONE, CAST_TIMESTAMPS_TO_MILLISECONDS, CAST_TIMESTAMPS_TO_MICROSECONDS, CAST_TIMESTAMPS_TO_NANOSECONDS, or FILE. Defaults to NONE. |
| type-mapping.file | Path to the JSON file containing the type mapping rules, used when type-mapping is set to FILE |
| type-mapping.file.refresh-period | Frequency at which the type mapping rules are refreshed from the type-mapping.file. Defaults to 1m. |
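
For example, to load mappings from the JSON file and pick up rule changes every five minutes:

type-mapping=FILE
type-mapping.file=etc/type-mapping.json
type-mapping.file.refresh-period=5m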

## JVM configuration

The Java Virtual Machine (JVM) config file, etc/jvm.config, contains a list of command line options used for launching the JVM that runs the cache service, when deployed as a standalone system. When deployed embedded within the coordinator, the contents of etc/jvm.config apply to both the cache service and all other services and processes running on the coordinator.

The format of the file is a list of options, one per line. These options are not interpreted by the shell, so options containing spaces, or other special characters, should not be quoted.

The following is a basic etc/jvm.config file:

-server
-Xmx512M
-XX:+ExitOnOutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError


An OutOfMemoryError typically leaves the JVM in an inconsistent state. The above configuration causes the JVM to write a heap dump file for debugging, and forcibly terminate the process when this occurs.

## JMX metrics

Metrics about table import are reported in the JMX table jmx.current."com.starburstdata.cache:name=TableImportService".

Metrics about cached table cleanup are reported in the JMX table jmx.current."com.starburstdata.cache:name=CleanupService".

Metrics about redirections requests on the web service resources are reported in the JMX table jmx.current."com.starburstdata.cache.resource:name=RedirectionsResource".

Metrics about table import and expiration requests on the web service resource are reported in the JMX table jmx.current."com.starburstdata.cache.resource:name=CacheResource".
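
For example, assuming a catalog named jmx backed by the JMX connector is configured on the cluster, the table import metrics can be inspected with a query such as:

SELECT * FROM jmx.current."com.starburstdata.cache:name=TableImportService";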