JDBC Snowflake Sink Connector

Supported Engines

Spark
Flink
SeaTunnel Zeta

Key Features

Description

Writes data through JDBC. Supports batch mode and streaming mode, and supports concurrent writing.
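
For orientation, batch versus streaming is selected in the job's env block, and the parallelism set there drives how many writer tasks run concurrently; a minimal sketch using the same option names as the task examples below ("STREAMING" is the streaming counterpart of the "BATCH" mode shown later):

```hocon
env {
  # number of parallel tasks; values > 1 enable concurrent writing
  parallelism = 2
  # "BATCH" for bounded jobs, "STREAMING" for continuous ingestion
  job.mode = "STREAMING"
}
```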

Supported DataSource list

| Datasource | Supported Versions | Driver | Url | Maven |
|------------|--------------------|--------|-----|-------|
| snowflake | Different dependency versions have different driver classes. | net.snowflake.client.jdbc.SnowflakeDriver | jdbc:snowflake://&lt;account_name&gt;.snowflakecomputing.com | Download |

Database dependency

Please download the driver jar listed in the 'Maven' column of the support list and copy it to the '$SEATUNNEL_HOME/plugins/jdbc/lib/' working directory.
For example, for the Snowflake datasource: cp snowflake-connector-java-xxx.jar $SEATUNNEL_HOME/plugins/jdbc/lib/

Data Type Mapping

| Snowflake Data Type | SeaTunnel Data Type |
|---------------------|---------------------|
| BOOLEAN | BOOLEAN |
| TINYINT, SMALLINT, BYTEINT | SHORT_TYPE |
| INT, INTEGER | INT |
| BIGINT | LONG |
| DECIMAL, NUMERIC, NUMBER | DECIMAL(x,y) |
| DECIMAL(x,y) (when the designated column's specified column size > 38) | DECIMAL(38,18) |
| REAL, FLOAT4 | FLOAT |
| DOUBLE, DOUBLE PRECISION, FLOAT8, FLOAT | DOUBLE |
| CHAR, CHARACTER, VARCHAR, STRING, TEXT, VARIANT, OBJECT | STRING |
| DATE | DATE |
| TIME | TIME |
| DATETIME, TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ, TIMESTAMP_TZ | TIMESTAMP |
| BINARY, VARBINARY, GEOGRAPHY, GEOMETRY | BYTES |

Options

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| url | String | Yes | - | The URL of the JDBC connection. Refer to a case: jdbc:snowflake://&lt;account_name&gt;.snowflakecomputing.com |
| driver | String | Yes | - | The jdbc class name used to connect to the remote data source; if you use Snowflake the value is net.snowflake.client.jdbc.SnowflakeDriver. |
| user | String | No | - | Connection instance user name |
| password | String | No | - | Connection instance password |
| query | String | No | - | Use this SQL to write upstream input data to the database, e.g. INSERT .... query has a higher priority. |
| database | String | No | - | Use this database and table-name to auto-generate SQL and write upstream input data to the database. This option is mutually exclusive with query and has a higher priority. |
| table | String | No | - | Use database and this table-name to auto-generate SQL and write upstream input data to the database. This option is mutually exclusive with query and has a higher priority. |
| primary_keys | Array | No | - | This option is used to support operations such as insert, delete, and update when SQL is automatically generated. |
| support_upsert_by_query_primary_key_exist | Boolean | No | false | Choose to use INSERT SQL or UPDATE SQL to process update events (INSERT, UPDATE_AFTER) based on whether the query primary key exists. This configuration is only used when the database does not support upsert syntax. Note: this method has low performance. |
| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete. |
| max_retries | Int | No | 0 | The number of retries to submit failed (executeBatch). |
| batch_size | Int | No | 1000 | For batch writing, when the number of buffered records reaches batch_size or the time reaches checkpoint.interval, the data will be flushed into the database. |
| max_commit_attempts | Int | No | 3 | The number of retries for transaction commit failures. |
| transaction_timeout_sec | Int | No | -1 | The timeout after the transaction is opened; the default is -1 (never time out). Note that setting the timeout may affect exactly-once semantics. |
| auto_commit | Boolean | No | true | Automatic transaction commit is enabled by default. |
| properties | Map | No | - | Additional connection configuration parameters. When properties and the URL contain the same parameter, the priority is determined by the specific implementation of the driver. For example, in MySQL, properties take precedence over the URL. (See the sketch after this table.) |
| common-options | | No | - | Sink plugin common parameters; please refer to Sink Common Options for details. |
| enable_upsert | Boolean | No | true | Enable upsert based on whether primary_keys exist. If the task has no duplicate-key data, setting this parameter to false can speed up data import. |
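
As referenced in the properties row above, the properties map is handed through to the JDBC driver. A hedged sketch of how it might be combined with the other connection options; the warehouse/db/schema keys are Snowflake JDBC driver connection parameters used here as assumptions, not options defined by this connector:

```hocon
sink {
  jdbc {
    url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    user = "root"
    password = "123456"
    query = "insert into test_table(name,age) values(?,?)"
    # extra parameters passed to the driver; if a key also appears in the URL,
    # the driver implementation decides which value wins
    properties {
      warehouse = "COMPUTE_WH"   # assumed Snowflake virtual warehouse
      db = "TEST"                # assumed database
      schema = "PUBLIC"          # assumed schema
    }
  }
}
```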

Tips

If partition_column is not set, it will run with single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of tasks.

Task Example

simple:

This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to the JDBC sink. FakeSource generates a total of 16 rows (row.num=16), each with two fields, name (string type) and age (int type). The target table test_table will likewise contain 16 rows after the job runs. Before running this job, you need to create the database test and the table test_table in your Snowflake database. If you have not yet installed and deployed SeaTunnel, follow the instructions in Install SeaTunnel to install and deploy it, and then follow the instructions in Quick Start With SeaTunnel Engine to run this job.

```hocon
# Defining the runtime environment
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  # This is an example source plugin **only for test and demonstrate the feature source plugin**
  FakeSource {
    parallelism = 1
    result_table_name = "fake"
    row.num = 16
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
  # If you would like to get more information about how to configure seatunnel and see full list of source plugins,
  # please go to https://seatunnel.apache.org/docs/category/source-v2
}

transform {
  # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
  # please go to https://seatunnel.apache.org/docs/category/transform-v2
}

sink {
  jdbc {
    url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    user = "root"
    password = "123456"
    query = "insert into test_table(name,age) values(?,?)"
  }
  # If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
  # please go to https://seatunnel.apache.org/docs/category/sink-v2
}
```

CDC(Change data capture) event

CDC change data is also supported. In this case, you need to configure database, table and primary_keys.

```hocon
sink {
  jdbc {
    url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    user = "root"
    password = "123456"
    generate_sink_sql = true
    # You need to configure both database and table
    database = test
    table = sink_table
    primary_keys = ["id","name"]
  }
}
```
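
If the target database does not support an upsert syntax, the support_upsert_by_query_primary_key_exist option from the table above can be added to the same CDC configuration. A sketch reusing the hypothetical credentials and table from the example above; note the performance caveat documented for this option:

```hocon
sink {
  jdbc {
    url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    user = "root"
    password = "123456"
    generate_sink_sql = true
    database = test
    table = sink_table
    primary_keys = ["id","name"]
    # query by primary key first, then INSERT or UPDATE; slower than native upsert
    support_upsert_by_query_primary_key_exist = true
  }
}
```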