Setting up and Integrating Databricks with your Organization
Databricks is a lakehouse platform in the cloud. Utilizing technologies such as Apache Spark, Delta Lake, and MLflow, Databricks combines the best of data warehouses and data lakes to offer an open and unified platform for data and AI.
- 1.In the top right of the screen, click down on your account name and select "User Settings"
- 2.In the menu bar that appears, select "Developer" and under "Access Tokens", select "Manage"
- 3.Select "Generate new token", optionally add a comment for the token and choose a lifetime for the token. Then click "Generate" to create the token.
- 1.Select the "SQL Warehouses" tab in the left side panel. You'll see a list of the existing SQL warehouses in your workspace. Databricks creates a starter warehouse for you.
- 2.To create a new SQL Warehouse, select "Create SQL warehouse". To edit an existing SQL warehouse, click the 3 vertical dots on the right of each listed warehouse and click "Edit".
- 3.Configure your SQL warehouse with your preferred specifications. To learn more about configuring your SQL warehouse, take a look at the Databricks documentation.
- 4.Click "Save" to create the SQL warehouse and take note of the warehouse ID on the overview page. We'll need this ID when creating the Store later on. You can also access the warehouse overview by clicking on the name of the SQL warehouse from the "SQL Warehouses" initial landing page from step 1.
- 2.At the top of the page, select "+ Add" and in the dropdown select "Add an external location"
- 3.Select AWS Quickstart to set up the Databricks and S3 connection. Optionally, advanced users can set up their external location manually instead. For this tutorial, we'll continue with the AWS Quickstart option. Select "Next".
- 4.Enter the name of the S3 bucket to link to your Databricks workspace and click "Generate new token". Copy that token, then select "Launch in Quickstart" which will bring you back to the AWS console in a page called "Quick create stack".
- 5.Enter the access token copied in the previous step into the "Databricks Personal Access Token" field on the AWS "Quick create stack" page. Then at the bottom of the page, check the box acknowledging that AWS CloudFormation might create IAM resources with custom names, and select "Create stack" to launch stack initialization.
- 6.After a couple of minutes, you'll see the stack creation complete.
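If you want to confirm the stack from the command line instead of the AWS console, here is a small boto3 sketch; the stack name and region are placeholders (use the values shown in the CloudFormation console).

import boto3

# Placeholders -- use the stack name and region from the CloudFormation console.
cfn = boto3.client("cloudformation", region_name="us-east-1")
stack = cfn.describe_stacks(StackName="<databricks-quickstart-stack-name>")["Stacks"][0]

# A successful run ends in the CREATE_COMPLETE status.
print(stack["StackStatus"])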
- 2.Select "Stores" in the left panel then select "+ New Store" in the top right
- 3.Enter the following information
- Store Type – Databricks
- Databricks Cloud Region – The AWS region in which the "Cloud S3 Bucket" exists
- Access Key ID – Access key associated with the AWS account where the "Cloud S3 Bucket" exists
- Secret Access Key – Secret access key associated with the AWS account where the "Cloud S3 Bucket" exists
- 4.Click "Save" to create the Store
For the steps below, let's assume you already have a Stream defined called 'pageviews' which is backed by a topic in Kafka. We'll also assume that there is a Databricks Store called 'databricks_store' (See Adding Databricks as a DeltaStream Store). We are going to perform a simple filter on the pageviews Stream and sink the results into Databricks.
- 1.Navigate to the "Stores" tab in the left side panel. This will display a list of the existing Stores.
- 2.Select the Store called "databricks_store" then select "Entities". This will bring up a list of the existing catalogs in your Databricks workspace.
- 3.(Optional) Create a new catalog
- 1.Click on the 3 vertical dots next to your store name and select "Create Entity"
- 2.In the popup, enter a name for the new catalog and select "Create". You should now be able to see the new catalog in the entities list.
- 4.Click on a catalog name to see the schemas that exist under that catalog
- 5.(Optional) Create a new schema
- 1.Select "+ New Entity" to create a new schema
- 2.In the popup, enter a name for the new schema and select "Create". You should now be able to see the new schema in the entities list.
- 6.Click on a schema name to see the tables that exist under that schema
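The catalogs and schemas shown here are ordinary Unity Catalog objects, so you can also confirm them from the Databricks side. The sketch below is a rough example against the Unity Catalog REST API; the host and token are placeholders, and 'new_catalog' is the catalog name used in this walkthrough.

import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}

# List the catalogs visible in the workspace's metastore.
catalogs = requests.get(
    f"{DATABRICKS_HOST}/api/2.1/unity-catalog/catalogs", headers=HEADERS
).json()
print([c["name"] for c in catalogs.get("catalogs", [])])

# List the schemas under the catalog used in this walkthrough.
schemas = requests.get(
    f"{DATABRICKS_HOST}/api/2.1/unity-catalog/schemas",
    headers=HEADERS,
    params={"catalog_name": "new_catalog"},
).json()
print([s["name"] for s in schemas.get("schemas", [])])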
CREATE TABLE pv_table WITH (
  'store' = 'databricks_store',
  'databricks.catalog.name' = 'new_catalog',
  'databricks.schema.name' = 'new_schema',
  'databricks.table.name' = 'pageviews',
  'table.data.file.location' = 's3://deltastream-databricks-bucket2/test'
) AS
SELECT * FROM pageviews
WHERE pageid != 'Page_3';
- 3.Select "Run"
- 4.Navigate to "Queries" tab in the left side panel to see the existing queries, including the query from step 2. It takes a little bit of time for the query to transition into the 'Running' state.
- 5.Refresh until you see the query is in the 'Running' state
- 1.Navigate to the "Stores" tab in the left side panel
- 2.Navigate to the new table created by the above CTAS by clicking "databricks_store" --> "Entities" --> "new_catalog" --> "new_schema" --> "pageviews". If your CTAS used different store, catalog, schema, or table names, navigate accordingly.
- 3.Select "Print" to see a sample of the data in your Databricks table