CREATE TABLE AS SELECT

Syntax

CREATE TABLE
    table_name
[WITH (table_parameter = value [, ... ])] 
AS select_statement;

Description

CREATE TABLE AS is a statement that:

  • Generates a DDL statement to create a new table.

  • Launches a new query to write the results of the SELECT statement into the newly created table.
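
For example, in its simplest form, the statement below (a minimal sketch assuming a source relation named pageviews and the user's default store, as in the examples later on this page) both defines the new table and starts the query that populates it:

-- Creates pageviews_copy in the current database and schema, backed by the
-- default store, and launches a query that writes every row from pageviews into it.
CREATE TABLE pageviews_copy
AS SELECT * FROM pageviews;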

Arguments

table_name

This specifies the name of the new table. Optionally, use <database_name>.<schema_name> as a prefix to the name to create the relation in that namespace. If the name is case sensitive, you must wrap it in double quotes; otherwise the system uses the lowercase name.
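
For instance, the sketch below (the database analytics, the schema public, and the quoted name "PageViews" are illustrative) creates the table in an explicit namespace and preserves a case-sensitive name:

-- The <database_name>.<schema_name> prefix places the table in that namespace;
-- the double quotes keep the mixed-case name "PageViews" instead of pageviews.
CREATE TABLE analytics.public."PageViews"
AS SELECT * FROM pageviews;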

WITH (<table_parameter> = <value> [, …​ ])

Optionally, this clause specifies Table Parameters.

select_statement

This specifies the SELECT statement whose results are written into the new table.

Table Parameters

store

The name of the store that hosts the entity for this table.
Required: No. Default value: the user's default store. Type: String. Valid values: see LIST STORES.

Snowflake Specific Parameters

snowflake.db.name

The name of the Snowflake database that hosts the Snowflake table.
Required: Yes. Default value: None. Type: String. Valid values: database names available from LIST ENTITIES.

snowflake.schema.name

The name of the Snowflake schema that hosts the Snowflake table.
Required: Yes. Default value: None. Type: String. Valid values: schema names available from LIST ENTITIES under the snowflake.db.name database.

snowflake.table.name

The name of the Snowflake table to use when creating the table. If the table doesn't exist in the store, a table with the table_name is created in the corresponding store.
Required: No. Default value: table_name. Type: String.

snowflake.buffer.millis

The amount of time, in milliseconds, to buffer events with the sink before writing the data into the corresponding Snowflake table. Note that the longer events are buffered, the larger the internal state of the query becomes; depending on the volume and size of the events for the query, memory limits may be reached.
Required: No. Default value: 1000. Type: Long. Valid values: (0, ...]

Snowflake stores provide a delivery guarantee of at_least_once when producing events into a sink table, and they write to Snowflake tables in insert-only mode.

Databricks Specific Parameters

databricks.catalog.name

The name of the Databricks catalog that hosts the Databricks table.
Required: Yes. Default value: None. Type: String. Valid values: catalog names available from LIST ENTITIES.

databricks.schema.name

The name of the Databricks schema that hosts the Databricks table.
Required: Yes. Default value: None. Type: String. Valid values: schema names available from LIST ENTITIES under the databricks.catalog.name catalog.

databricks.table.name

The name of the Databricks table to use when creating the table. A table with the table_name is created in the corresponding store; if that table already exists, an error is reported back to the user.
Required: No. Default value: table_name. Type: String.

table.data.file.location

The S3 directory location where the Delta-formatted data is written. The credentials for writing to S3 are given during store creation (see CREATE STORE). The S3 bucket in the location specified by this parameter must match the databricks.cloud.s3.bucket property defined in the store.
Required: Yes. Default value: None. Type: String.

Databricks stores provide a delivery guarantee of exactly_once when producing events into a sink table, and they write to Databricks tables in insert-only mode.

Examples

Create a copy of a stream in a Snowflake table

The following creates a replica of the source stream pageviews in the Snowflake table PV_TABLE:

CREATE TABLE "PV_TABLE" WITH (
  'store' = 'sfstore',
  'snowflake.db.name' = 'DELTA_STREAMING',
  'snowflake.schema.name' = 'PUBLIC'
) AS SELECT * FROM pageviews;
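
The optional Snowflake parameters go in the same WITH clause. As an illustrative variation (the target table name PAGEVIEWS_COPY and the 5-second buffer are assumptions, not part of the example above):

CREATE TABLE "PV_TABLE_BUFFERED" WITH (
  'store' = 'sfstore',
  'snowflake.db.name' = 'DELTA_STREAMING',
  'snowflake.schema.name' = 'PUBLIC',
  'snowflake.table.name' = 'PAGEVIEWS_COPY',  -- defaults to the table_name when omitted
  'snowflake.buffer.millis' = '5000'          -- buffer events for 5 seconds before each write
) AS SELECT * FROM pageviews;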

Create a stream of changes for a changelog in a Snowflake table

The following CTAS query creates a new Snowflake table to store incremental changes resulting from a grouping aggregation on the transactions stream:

CREATE TABLE "CC_TYPE_USAGE" WITH (
  'store' = 'sfstore',
  'snowflake.db.name' = 'DELTA_STREAMING',
  'snowflake.schema.name' = 'PUBLIC'
) AS SELECT
  cc_type AS "CREDIT_TYPE",
  tx_time AS "TRANSACTION_TS",
  tx_id AS "TRANSACTION_ID"
FROM transactions
GROUP BY cc_type;

This query stores all changes to the grouping column cc_type in the sink table CC_TYPE_USAGE.
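
A variation on this pattern (a sketch, not part of the original example) aggregates within each group, so the sink table tracks a running count per credit card type instead of the raw grouping columns:

CREATE TABLE "CC_TYPE_COUNTS" WITH (
  'store' = 'sfstore',
  'snowflake.db.name' = 'DELTA_STREAMING',
  'snowflake.schema.name' = 'PUBLIC'
) AS SELECT
  cc_type AS "CREDIT_TYPE",
  COUNT(*) AS "TX_COUNT"  -- running count of transactions per cc_type
FROM transactions
GROUP BY cc_type;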

Create a copy of a stream in a Databricks table

The following creates a replica of the source stream pageviews in the Databricks table pageviews_db:

CREATE TABLE pageviews_db WITH (
  'store' = 'databricks_store', 
  'databricks.catalog.name' = 'catalog1', 
  'databricks.schema.name' = 'schema1', 
  'databricks.table.name' = 'pageviews', 
  'table.data.file.location' = 's3://mybucket/test/0/pageviews'
) AS SELECT * FROM pageviews;

Upon issuing this query, a Databricks table catalog1.schema1.pageviews is created that uses s3://mybucket/test/0/pageviews as its external location. The query writes Delta-formatted Parquet files and updates the Delta log in that S3 location.
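
The select_statement need not copy the source verbatim. As a hedged sketch (the projected columns and the pageid filter are assumptions about the pageviews schema), the same store and catalog can receive a filtered projection written to its own S3 prefix:

CREATE TABLE pageviews_page1 WITH (
  'store' = 'databricks_store',
  'databricks.catalog.name' = 'catalog1',
  'databricks.schema.name' = 'schema1',
  'databricks.table.name' = 'pageviews_page1',
  'table.data.file.location' = 's3://mybucket/test/0/pageviews_page1'
) AS SELECT viewtime, userid, pageid
FROM pageviews
WHERE pageid = 'page_1';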

