CREATE STREAM AS SELECT
Syntax
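A sketch of the statement's general form, assembled from the Arguments described below (bracketed parts are optional; exact placement of the optional clauses may differ):

```sql
CREATE STREAM stream_name
  [WITH (<stream_parameter> = <value> [, ...])] AS
  select_statement
  [PARTITION BY partition_by_clause];
```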
Description
CREATE STREAM AS SELECT is essentially a combination of two statements:
A DDL statement that creates the new Stream.
An INSERT INTO statement that runs a SELECT statement and adds the results into the newly created Stream.
Arguments
stream_name
This specifies the name of the new Stream. Optionally, use <database_name>.<schema_name> as the prefix to the name to create the Relation in that scope. For case-sensitive names, the name must be wrapped in double quotes; otherwise, the lowercase name will be used.
WITH (<stream_parameter> = <value> [, … ])
Optionally, this clause specifies Stream Parameters.
select_statement
This specifies the SELECT statement to run.
PARTITION BY partition_by_clause
Optionally, this clause allows the user to set the partition key of records according to their values for a given set of columns. The PARTITION BY clause defines a list of one or more columns (separated by commas) as partitioning columns. By default, the key for the sink's records has a data format equal to the sink's value data format. To use a different key format, set the key.format Stream parameter. PARTITION BY is supported for CREATE STREAM AS SELECT and INSERT INTO queries where the sink is a Stream. Currently, PARTITION BY only applies to queries whose sink Stream is backed by a Kafka store.
Stream Parameters
Kafka Specific Parameters
Kinesis Specific Parameters
Format Specific Properties
Examples
Create a copy Stream
The following creates a replica of the source Stream.
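A minimal sketch of such a statement, assuming a source Stream named pageviews and a sink named pageviews_copy:

```sql
CREATE STREAM pageviews_copy AS
  SELECT *
  FROM pageviews;
```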
Create a Stream in a specific Schema within default Database
The following creates a replica of the source Stream, but the new Relation belongs to the Schema named schema2 in the session's current Database.
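A sketch using the schema2 name from the description; the Stream names are illustrative:

```sql
CREATE STREAM schema2.pageviews_copy AS
  SELECT *
  FROM pageviews;
```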
Create Stream in specific Schema and Database
The following creates a replica of the source Stream, but the new Relation belongs to the Schema named schema2 in the Database named db.
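A sketch using the db and schema2 names from the description; the Stream names are illustrative:

```sql
CREATE STREAM db.schema2.pageviews_copy AS
  SELECT *
  FROM pageviews;
```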
Create a case-sensitive Stream
The following creates a replica of the source Stream. The new sink Relation has a case-sensitive name.
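A sketch with an illustrative mixed-case sink name; the double quotes preserve its case, as described under Arguments:

```sql
CREATE STREAM "PageViews_Copy" AS
  SELECT *
  FROM pageviews;
```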
Create a case-sensitive Stream in a case-sensitive Schema and Database
The following creates a replica of the source Stream. The new sink Relation has a case-sensitive name and is in a case-sensitive Database and Schema.
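A sketch with illustrative case-sensitive Database, Schema, and Stream names, each quoted to preserve case:

```sql
CREATE STREAM "DB"."Schema2"."PageViews_Copy" AS
  SELECT *
  FROM pageviews;
```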
Create a new Stream that is backed by a specific Entity
The following creates a replica of the source Stream, but this new Stream is associated with the specified Entity called pageviewstwo.
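A sketch using the pageviewstwo Entity from the description. The topic parameter name is an assumption standing in for the Kafka-specific Stream parameter that names the backing Entity:

```sql
CREATE STREAM pageviews_two
  WITH ('topic' = 'pageviewstwo') AS
  SELECT *
  FROM pageviews;
```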
Copy data from one Store to another
The following moves data from a Kafka Store to a Kinesis Store. The query creates a replica of the source Stream, but this new Stream is associated with the specified Store called kinesis_store.
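A sketch using the kinesis_store name from the description; the store parameter name is an assumption for the parameter that selects the sink Store:

```sql
CREATE STREAM pageviews_kinesis
  WITH ('store' = 'kinesis_store') AS
  SELECT *
  FROM pageviews;
```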
Convert data from JSON to Avro with a Kafka Store
The following creates a replica of the source Stream that has a data format of JSON, but the new sink Stream has a data format of Avro for its value and key.
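A sketch using the value.format and key.format Stream parameters; the Stream names are illustrative:

```sql
CREATE STREAM pageviews_avro
  WITH ('value.format' = 'avro', 'key.format' = 'avro') AS
  SELECT *
  FROM pageviews_json;
```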
Convert data from JSON to Avro with a Kinesis Store
The following creates a replica of the source Stream that has a data format of JSON, but the new sink Stream has a data format of Avro for its value. Since the sink is a Kinesis stream, there is no key associated with the record, and so the value.format property is the only one that is necessary.
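A sketch setting only value.format, as described; the store parameter name and Stream names are assumptions:

```sql
CREATE STREAM pageviews_avro_kinesis
  WITH ('store' = 'kinesis_store', 'value.format' = 'avro') AS
  SELECT *
  FROM pageviews_json;
```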
Simple projection to a Kafka topic with a specific number of partitions and replicas
The following is a simple projection query where the sink Kafka topic has a specific number of partitions and replicas set.
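A sketch of such a projection; the topic.partitions and topic.replicas parameter names, along with the column names, are assumptions:

```sql
CREATE STREAM pageviews_projected
  WITH ('topic.partitions' = '3', 'topic.replicas' = '2') AS
  SELECT viewtime, pageid
  FROM pageviews;
```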
Simple projection to a Kinesis stream with a specific number of shards
The following is a simple projection query where the sink Kinesis stream has a specific number of shards set.
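A sketch of such a projection; the store and kinesis.shards parameter names and the column names are assumptions:

```sql
CREATE STREAM pageviews_kinesis_projected
  WITH ('store' = 'kinesis_store', 'kinesis.shards' = '2') AS
  SELECT viewtime, pageid
  FROM pageviews;
```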
Create a Stream using an interval join
Interval joins between two Streams result in a STREAM sink Relation type.
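A sketch of an interval join between two Streams; the orders Stream, the column names, and the WITHIN interval syntax are assumptions about this dialect's interval-join form:

```sql
CREATE STREAM pageviews_orders AS
  SELECT p.userid, p.pageid, o.orderid
  FROM pageviews p
  JOIN orders o
    WITHIN 5 MINUTES
    ON p.userid = o.userid;
```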
Create a Stream using a temporal join
A temporal join of two Relations, where the left join side source is a Stream and the right join side source is a Changelog, results in a STREAM output Relation type. In the example below, a new Stream called users_visits is created by performing a temporal join between the pageviews Stream and the users_log Changelog.
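A sketch of the temporal join described above; the column names and join condition are illustrative assumptions:

```sql
CREATE STREAM users_visits AS
  SELECT p.userid, p.pageid, u.regionid
  FROM pageviews p
  JOIN users_log u
    ON p.userid = u.userid;
```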
Create a Stream specifying the timestamp column
The statement below creates a new Stream, called pagestats, from the already existing Stream pageviews. The timestamp Stream parameter, specified in the WITH clause, marks the viewtime column in pagestats as the timestamp column. Therefore, any subsequent query that refers to pagestats in its FROM clause will use this column for time-based operations.
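A sketch using the timestamp Stream parameter and the viewtime column from the description; the other projected column is an assumption:

```sql
CREATE STREAM pagestats
  WITH ('timestamp' = 'viewtime') AS
  SELECT viewtime, pageid
  FROM pageviews;
```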
Create a Stream specifying the Kafka delivery guarantee
The statement below creates a new Stream, called pageviews_exactly_once, from the already existing Stream pageviews. The delivery.guarantee Stream parameter, specified in the WITH clause, overrides the default delivery.guarantee of at_least_once with exactly_once. A user may want to use this configuration if their use case can tolerate higher latencies but cannot tolerate duplicate outputs.
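A sketch using the delivery.guarantee Stream parameter described above:

```sql
CREATE STREAM pageviews_exactly_once
  WITH ('delivery.guarantee' = 'exactly_once') AS
  SELECT *
  FROM pageviews;
```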
Create a Stream with the PARTITION BY clause
The statement below creates a new Stream, called pageviews_partition_by, from the already existing Stream pageviews. The PARTITION BY clause sets the key for the output pageviews_partition_by Stream. Notice that in this example the source Stream's records don't set a key value, and the sink Stream uses the PARTITION BY values as its key. The sink Stream's key data format is JSON in this example because it inherits the sink's value data format by default.
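A sketch of such a statement; the userid partitioning column is an assumption:

```sql
CREATE STREAM pageviews_partition_by AS
  SELECT *
  FROM pageviews
  PARTITION BY userid;
```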
Given this input for pageviews:
We can expect the following output in pageviews_partition_by:
Create a Stream with the PARTITION BY clause to override existing key
The statement below creates a new Stream, called pageviews_partition_by, from the already existing Stream pageviews. The PARTITION BY clause sets the key for the output pageviews_partition_by Stream. Further, this query also sets the key.format property for the sink Stream to PRIMITIVE. Notice that in this example the source Stream's records have the pageid column value set as the key in JSON format, while the output Stream has the PARTITION BY value as its key in the PRIMITIVE format.
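A sketch combining PARTITION BY with the key.format parameter described above; the userid partitioning column and the lowercase parameter value are assumptions:

```sql
CREATE STREAM pageviews_partition_by
  WITH ('key.format' = 'primitive') AS
  SELECT *
  FROM pageviews
  PARTITION BY userid;
```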
Given this input for pageviews:
We can expect the following output in pageviews_partition_by:
Create a Stream with `event.time` Sink timestamp strategy
The statement below creates a new Stream, called pageviews_copy, from the already existing Stream pageviews by selecting all columns. The sink timestamp strategy is set to event.time; this way, when writing to the sink's Kafka topic, the timestamp for each record in pageviews_copy is set to its source record's timestamp, coming from the Entity backing pageviews.
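A sketch of such a statement; the sink.timestamp.strategy parameter name is an assumption for the Stream parameter that carries the event.time strategy named above:

```sql
CREATE STREAM pageviews_copy
  WITH ('sink.timestamp.strategy' = 'event.time') AS
  SELECT *
  FROM pageviews;
```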
Create a Stream with format specific properties for Avro
The following creates a new Stream, usersInfo, by selecting records' key and value from another given Stream, users_avro. Assuming the users_avro key and value are in Avro format, two subjects are provided to generate the Avro schemas for usersInfo's key and value. These subjects are stored in the Schema Registry of a Store called sr_store; the users_data-key subject is used to generate the key's Avro schema, and the users_data-value subject is used to generate the value's Avro schema for the records written into usersInfo.
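A sketch of such a statement. The Avro-related property names below are placeholders for the format-specific properties referenced above and may differ from the actual names; the subject and Store names come from the description, and the mixed-case sink name is quoted to preserve its case:

```sql
CREATE STREAM "usersInfo"
  WITH (
    'avro.sr.store'      = 'sr_store',         -- placeholder name: Store whose Schema Registry holds the subjects
    'key.avro.subject'   = 'users_data-key',   -- placeholder name: subject used for the key's Avro schema
    'value.avro.subject' = 'users_data-value'  -- placeholder name: subject used for the value's Avro schema
  ) AS
  SELECT *
  FROM users_avro;
```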