Databricks

Databricks is a lakehouse platform in the cloud. This article walks you through setting up Databricks as a Data Store in DeltaStream.

Setting up the Databricks Workspace

Prerequisites

  1. Sign up for a Databricks account using AWS and complete the workspace setup (steps 1 and 2) or use an existing Databricks workspace.

  2. Have an AWS account with an S3 bucket that hosts your Delta Lake data. If you don't have an account, you can sign up for a free trial of AWS.

Create a Databricks App Token

  1. Navigate to your Databricks workspace.

  2. In the top right of the screen, click your account name and select User Settings.

  3. In the menu bar that displays, click Developer, and under Access Tokens, click Manage.

  4. Click Generate new token. Add an optional comment for the token and then choose a lifetime for the token. Then click Generate to create the token.

  5. Save or download the newly generated token value. You will need it when creating the store later on.

For more details on generating access tokens for a workspace, see the Databricks documentation.

Add Databricks SQL Warehouse

  1. Navigate to your Databricks workspace.

  2. In the lefthand navigation, click SQL Warehouses. A list of the existing SQL warehouses in your workspace displays; Databricks creates a starter warehouse for you.

  3. To create a new SQL warehouse, click Create SQL warehouse. To edit an existing SQL warehouse, click the 3 vertical dots to the right of the warehouse you want, and then click Edit.

  4. Configure your SQL warehouse with your preferred specifications. (To learn more about configuring your SQL warehouse, review the Databricks documentation.) For the best experience, we recommend choosing serverless as the SQL warehouse type; for more information, see the Databricks documentation on serverless SQL warehouses.

  5. Click Save to create the SQL warehouse. Record the warehouse ID on the overview page; you will need this ID when you create the store later on. You can also access the warehouse overview by clicking the name of the SQL warehouse from the SQL Warehouses initial landing page from step 1.
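To confirm the warehouse accepts queries, you can run a simple statement against it in the Databricks SQL editor. This is only a quick sanity check; any result confirms the warehouse is up.

-- Run in the Databricks SQL editor with your new warehouse selected.
-- Lists the Unity Catalog catalogs visible to your user.
SHOW CATALOGS;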

Add an S3 Bucket as an External Location for Data

Use an existing S3 bucket or create a new one.

  1. To create a new AWS S3 bucket:

    1. In the AWS console, navigate to the S3 page.

    2. Click Create bucket.

    3. Enter a name for your S3 bucket, and then at the bottom of the page click Create bucket.

For more details, see the AWS documentation for creating, configuring, and working with Amazon S3 buckets.

Add a Databricks connection to the S3 bucket

  1. Navigate to your Databricks workspace.

  2. In the lefthand navigation, click Catalog. This displays a view of your Unity Catalog.

  3. At the top of the page, click + Add, and from the list that displays click Add an external location.

  4. Click AWS Quickstart to set up the Databricks and S3 connection, and then click Next. Advanced users can opt to set up their external location manually instead, but this article continues with the AWS Quickstart option.

  5. Enter the name of an existing S3 bucket to link to your Databricks workspace. Then click Generate new token. Copy that token, then click Launch in Quickstart. This brings you back to the AWS console and displays a page called Quick create stack.

  6. On the AWS Quick create stack page, in the Databricks Personal Access Token field, enter the access token you copied in step 5. At the bottom of the page, click to acknowledge that AWS CloudFormation might create IAM resources with custom names, and then click Create stack to launch stack initialization.

  7. In a few minutes, you'll see the stack creation complete.

For more information on external locations, see the Databricks documentation.
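Once the stack completes, you can verify the external location from the Databricks SQL editor. The bucket path below is a placeholder; substitute your own bucket.

-- Confirms the external location registered by the Quickstart stack exists.
SHOW EXTERNAL LOCATIONS;

-- Optionally check that Databricks can list the bucket contents
-- (replace the path with your own S3 bucket).
LIST 's3://your-databricks-bucket/';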

(Optional) Create a Unity Catalog Metastore

This step is relevant if you receive an error message such as Metastore Storage Root URL Does Not Exist. In this case:

  1. Ensure you have an S3 bucket to use for metastore-level managed storage in AWS (follow the steps above to create a new S3 bucket); you can reuse the bucket created in the previous section.

  2. Navigate to the Catalog page in the Databricks account settings. From here, you can either create a new metastore or edit an existing one.

  3. If you're creating a new metastore, click Create metastore and follow the prompts to set the name, region, S3 path, and workspaces for the metastore.

  4. If you're editing an existing metastore, click on the name of the metastore you wish to edit. From this page you can assign new workspaces, set an S3 path, edit the metastore admin, and take other actions.

For more information on creating a Unity Catalog metastore, see the Databricks documentation.
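To check which metastore your workspace is currently attached to, you can run the following in the Databricks SQL editor:

-- Returns the ID of the Unity Catalog metastore assigned to this workspace.
SELECT current_metastore();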

Adding Databricks as a DeltaStream Store

  1. Open DeltaStream. In the lefthand navigation, click Resources and then click Add Store +.

  2. From the menu that displays, click Databricks. The Add Store window opens.

  3. Enter the authentication and connection parameters. These include:

    • Store Name – A unique name to identify your DeltaStream store. (For more details see Data Store). Store names are limited to a maximum of 255 characters. Only alphanumeric characters, dashes, and underscores are allowed.

    • Store Type – Databricks

    • URL – The URL of your Databricks workspace. To find it, navigate to the Databricks accounts page and click the workspace you wish to use.

    • Warehouse ID – The ID for a Databricks SQL warehouse in your Databricks workspace. (For more details see Add Databricks SQL Warehouse).

    • Databricks Cloud Region – The AWS region in which the Cloud S3 Bucket exists.

    • Cloud S3 Bucket – An AWS S3 bucket that is connected as an external location in your Databricks workspace (see Add an S3 Bucket as an External Location for Data, above).

    • App Token – The Databricks access token for your user in your Databricks workspace. (For more details see Create a Databricks App Token, above.)

    • Access Key ID – Access key associated with the AWS account in which the Cloud S3 Bucket exists.

    • Secret Access Key – Secret access key associated with the AWS account in which the Cloud S3 Bucket exists.

  4. Click Add.

Your Databricks store displays on the Resources page in your list of stores.

Note For instructions on creating the store using DSQL, see CREATE STORE.
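As a rough sketch, a CREATE STORE statement for Databricks supplies the same parameters described above. The property names below are illustrative assumptions; refer to CREATE STORE for the exact syntax.

-- Sketch only: property names and values are placeholders; see CREATE STORE.
CREATE STORE databricks_store WITH (
  'type' = DATABRICKS,
  'uris' = 'https://dbc-12345678-9abc.cloud.databricks.com',
  'databricks.app_token' = '<app-token>',
  'databricks.warehouse_id' = '<warehouse-id>',
  'databricks.cloud.s3.bucket' = '<s3-bucket-name>',
  'databricks.cloud.region' = 'AWS us-east-1',
  'aws.access_key_id' = '<access-key-id>',
  'aws.secret_access_key' = '<secret-access-key>'
);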

Process Streaming Data and Sink to Databricks

For the steps below, assume you already have a stream defined called pageviews, which is backed by a topic in Kafka. Assume also there is a Databricks store labelled Databricks_Test_Store. (For more details see Adding Databricks as a DeltaStream Store.) Now perform a simple filter on the pageviews stream and sink the results into Databricks.

Note For more information on setting up a stream or a Kafka store, see Starting with the Web App or Starting with the CLI.
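For reference, a stream such as pageviews is typically declared over an existing Kafka topic with a statement along these lines. The column names, types, and value format here are assumptions for illustration; see the linked guides for the exact definition.

-- Illustrative only: declares a stream over an existing Kafka topic.
-- Adjust the columns and value format to match your topic.
CREATE STREAM pageviews (
  viewtime BIGINT,
  userid VARCHAR,
  pageid VARCHAR
) WITH (
  'topic' = 'pageviews',
  'value.format' = 'json'
);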

Inspect the Databricks store

  1. In the lefthand navigation, click Resources. This displays a list of the existing stores.

  2. Click Databricks_Test_Store. The store page displays, with the Databases tab active. Here you can view a list of the existing catalogs in your Databricks workspace.

  3. (Optional) Create a new database. To do this, click + Add Database. When prompted, enter a name for the new database and click Add. The new database displays in the list. Important If you receive this error message -- Metastore Storage Root URL Does Not Exist -- verify that you've properly set up your Databricks Unity Catalog metastore.

  4. To see the namespaces that exist in a particular database, click the database you want.

  5. (Optional) Create a new namespace. To do this, click + Add Namespace. In the window that displays, enter a name for the new namespace and then click Add. The new namespace displays in the list.

  6. To see the tables that exist under a particular namespace, click the namespace you want.
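If you prefer to create the catalog (database) and schema (namespace) on the Databricks side instead, the standard Unity Catalog SQL works. The names below are examples that match the CTAS query in the next section.

-- Run in the Databricks SQL editor. Creates the catalog and schema that
-- the CTAS example below writes into.
CREATE CATALOG IF NOT EXISTS new_catalog;
CREATE SCHEMA IF NOT EXISTS new_catalog.new_schema;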

Write a CTAS (CREATE TABLE AS SELECT) Query to Sink Data into Databricks

  1. In the lefthand navigation, click Workspace.

  2. In the SQL pane of your workspace, write the CREATE TABLE AS SELECT (CTAS) query to ingest from pageviews and output to a new table titled pv_table.

CREATE TABLE pv_table WITH (
  'store' = 'databricks_store', 
  'databricks.catalog.name' = 'new_catalog', 
  'databricks.schema.name' = 'new_schema', 
  'databricks.table.name' = 'pageviews', 
  'table.data.file.location' = 's3://deltastream-databricks-bucket2/test'
) AS 
SELECT 
  viewtime, 
  pageid, 
  userid 
FROM 
  pageviews 
WHERE 
  pageid != 'Page_3';

  3. Click Run.

  4. In the lefthand navigation, click Queries to see the existing queries, including the query you just ran. It may take a few moments for the query to transition into the Running state; keep refreshing your screen until it does.

View the results

  1. In the lefthand navigation, click Resources. This displays a list of the existing stores.

  2. To view the new table created by the CTAS query above, navigate to databricks_store --> Databases --> new_catalog --> new_schema --> pageviews. If your CTAS uses different store, database, namespace, or table names, navigate accordingly.

  3. To view a sample of the data in your Databricks table, click Print.
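You can also confirm rows are arriving by querying the table directly in Databricks. The three-part name below assumes the catalog, schema, and table names used in the CTAS example above.

-- Run in the Databricks SQL editor against your SQL warehouse.
SELECT * FROM new_catalog.new_schema.pageviews LIMIT 10;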
