Azure Data Lake Storage catalogs #
You can use an Azure Data Lake Storage (ADLS) catalog to configure access to an Azure Data Lake Storage object store hosted on Microsoft Azure.
The data can be stored using Iceberg, Delta Lake, Hive, or Hudi table formats.
The metadata about the objects and related type mapping needs to be stored in a metastore. You can use a Hive Metastore, or the built-in metastore.
Follow these steps to begin creating a catalog for Azure Data Lake Storage:
- In the navigation menu, select Catalogs.
- Click Create catalog.
- On the Select a data source pane, click the Azure Data Lake Storage icon.
- Follow the instructions in the next sections to configure your Azure Data Lake Storage connection.
Define catalog name and description #
The Name of the catalog is visible in the Query editor and other clients, and when running a SHOW CATALOGS command. It is used to identify the catalog when writing SQL, and when showing the catalog and its nested schemas and tables in client applications. The name fully qualifies any table in SQL queries following the catalogname.schemaname.tablename syntax. For example, you can run the following query in the sample cluster without first setting the catalog or schema context:

SELECT * FROM tpch.sf1.nation;
The Description is a short, optional paragraph that provides further details about the catalog. It appears in the Starburst Galaxy user interface and can help other users determine what data can be accessed with the catalog.
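As a sketch of how the catalog name is used in queries, assuming a hypothetical ADLS catalog named example_adls with a schema sales and table orders:

```sql
-- Fully qualified: no prior context needed.
SELECT * FROM example_adls.sales.orders LIMIT 10;

-- Alternatively, set the catalog and schema context once,
-- then refer to tables by their short names.
USE example_adls.sales;
SELECT count(*) FROM orders;
```

Both forms query the same table; fully qualified names are useful when a single query joins tables from several catalogs.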
Authentication to ADLS #
Provide an ABFS storage account name, which you can find in your list of Resources in the Azure services section when you log into the Azure portal.
Specify an authentication method that can grant access to that object storage account. Select between the following authentication methods:
Azure service principal: Select a service principal alias from the drop-down list of configured service principals. If you have not yet configured a service principal for this Starburst Galaxy account, click Configure an Azure service principal to do so now. Configure Azure service principals following the guidance in:
Azure access key: Provide the ABFS access key for the specified storage account. Obtain this access key as described in:
Metastore configuration #
Before you can query data in an object storage account, a metastore service must be associated with that object storage.
For more information about object storage and the requirement for a metastore, see Using object storage systems.
Hive Metastore Service #
You can use a Hive Metastore Service (HMS) to manage the metadata for your object storage. The HMS must be located in the same cloud provider and region as the object storage itself.
A connection to the HMS can be established directly, if the Starburst Galaxy IP range/CIDR is allowed to connect.
If the HMS is only accessible inside the virtual private cloud (VPC) of the cloud provider, you can use an SSH tunnel with a bastion host in the VPC.
In both cases, configure access with the following parameters:
- Hive Metastore host: the fully qualified domain name of the HMS server.
- Hive Metastore port: the port used by the HMS, typically 9083.
- Allow creating external tables: switch to indicate whether new tables can be created in the object storage and HMS from Starburst Galaxy with CREATE TABLE or CREATE TABLE AS commands.
- Allow writing to external tables: switch to indicate whether data management write operations are permitted.
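The two Allow switches govern different classes of SQL statements. As a hedged illustration (the catalog, schema, and table names below are hypothetical):

```sql
-- Requires "Allow creating external tables" to be enabled.
CREATE TABLE example_adls.sales.region_summary (
  region_name varchar,
  order_count bigint
);

-- Requires "Allow writing to external tables" to be enabled.
INSERT INTO example_adls.sales.region_summary
VALUES ('AMERICAS', 2500);
```

With both switches off, the catalog is effectively read only from Starburst Galaxy.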
Starburst Galaxy metastore #
Starburst Galaxy provides its own metastore service for your convenience. You do not need to configure and manage a separate Hive Metastore Service deployment or equivalent system.
In Metastore configuration, select Starburst Galaxy to set up and use the built-in metastore provided by Galaxy.
For Amazon S3 and Google Cloud Storage, create a bucket in your object storage account, and create a directory in that bucket. Provide that bucket name and directory name. This location is then used to store the metastore data associated with this S3 or GCS account.
For Azure ADLS, create a container in your storage account, and create a directory in that container. Provide this storage container name and directory name. This sets up the location used to store the metadata associated with this storage account.
The meanings of the two Allow controls are the same for a Starburst Galaxy metastore as for a separate Hive Metastore Service, described previously.
Note that deletion of the catalog also results in removal of the associated Starburst Galaxy metastore data.
Default table format #
Starburst Galaxy provides a simple way to specify the default table format for an object storage catalog. This applies to newly created tables, and does not convert any existing tables. The following table formats are supported:
- Iceberg
- Delta Lake
- Hive

If you are unsure which format to use, we recommend leaving the default Iceberg format selected. More details are available in the discussion of the format options on the Great Lakes connectivity page.
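The default applies only when a CREATE TABLE statement does not specify a format itself. As a sketch, assuming the Great Lakes `type` table property and a hypothetical catalog example_adls:

```sql
-- Created with the catalog's default table format.
CREATE TABLE example_adls.sales.t1 (id bigint);

-- Explicitly overrides the catalog default for this one table.
CREATE TABLE example_adls.sales.t2 (id bigint)
WITH (type = 'delta');
```

Existing tables keep their format either way; only new tables are affected.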
Test the connection #
Once you have configured the connection details, click Test connection to confirm data access is working. If the test is successful, you can save the catalog.
If the test fails, look over your entries in the configuration fields, correct any errors, and try again. If the test continues to fail, Galaxy provides diagnostic information that you can use to fix the data source configuration in the cloud provider system.
Connect catalog #
Click Connect catalog, and proceed to set permissions where you can grant access to certain roles.
Set permissions #
This optional step allows you to configure read only access and full read and write access to the catalog.
Setting read only permissions grants the specified roles read only access to the catalog. As a result, users have read only access to all contained schemas, tables, and views.
Setting read/write permissions grants the specified roles full read and write access to the catalog. As a result, users have full read and write access to all contained schemas, tables, and views.
Use the following steps to assign read/write access to roles:
- In the Role-level permissions section, expand the menu in the Roles with read and write access field.
- From the list, select one or more roles to grant read and write access to.
- Expand the menu in the Roles with read access field.
- Select one or more roles from the list to grant read access to.
- Click Save access controls.
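Equivalent privileges can also be managed in SQL after the catalog is created. A minimal sketch, assuming a hypothetical catalog example_adls, table sales.orders, and an existing role named analyst:

```sql
-- Read only access to a table.
GRANT SELECT ON TABLE example_adls.sales.orders TO ROLE analyst;

-- Additionally allow writes to the same table.
GRANT INSERT ON TABLE example_adls.sales.orders TO ROLE analyst;
```

The user interface steps above apply access at the catalog level; SQL GRANT statements let you scope privileges to individual schemas and tables.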
Add to cluster #
You can add your catalog to a cluster later by editing a cluster. Click Skip to proceed to the catalogs page.
Use the following steps to add your catalog to an existing cluster or create a new cluster in the same cloud region:
- In the Add to cluster section, expand the menu in the Select cluster field.
- Select one or more existing clusters from the drop-down menu.
- Click Create a new cluster to create a new cluster in the same region, and add it to the cluster selection menu.
- Click Add to cluster to view your new catalog’s configuration.
The Pending changes to clusters dialog appears when you try to add a catalog to a running cluster.
- In the Pending changes to clusters dialog, click Return to catalogs to edit the catalog or create a new catalog.
- Click Go to clusters to confirm the addition of the catalog to the running cluster.
- On the Clusters page, click the Update icon beside the running cluster to add the catalog.
Now that your object storage catalog has been added to a cluster, you can run queries against Iceberg, Delta, Hive, and Hudi table formats using Great Lakes connectivity.
SQL support #
SQL statement support for your object storage catalog depends on the table formats used. Details are available on the Great Lakes connectivity page.
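To explore what a connected catalog exposes, the standard discovery statements work regardless of table format; the catalog and schema names below are hypothetical:

```sql
SHOW SCHEMAS FROM example_adls;
SHOW TABLES FROM example_adls.sales;
DESCRIBE example_adls.sales.orders;
```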