Create catalogs

The following statement can be executed to create a Flink catalog:

    CREATE CATALOG <catalog_name> WITH (
      'type'='arctic',
      '<config_key>'='<config_value>'
    );

Here <catalog_name> is the user-defined name of the Flink catalog, and the <config_key>=<config_value> pairs can be chosen from the following configuration options:

| Key | Default Value | Type | Required | Description |
| --- | --- | --- | --- | --- |
| metastore.url | (none) | String | Yes | The URL of the Amoro Metastore, in the form thrift://<ip>:<port>/<catalog_name_in_metastore>. If high availability is enabled for AMS, it can also be specified as zookeeper://{zookeeper-server}/{cluster-name}/{catalog-name}. |
| default-database | default | String | No | The default database to use. |
| property-version | 1 | Integer | No | Catalog properties version; this option is reserved for future backward compatibility. |
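
For example, a minimal catalog definition might look as follows; the AMS address and catalog name are placeholders and should be replaced with the values of your own deployment:

    CREATE CATALOG my_arctic_catalog WITH (
      'type'='arctic',
      -- placeholder AMS address and catalog name; replace with your own
      'metastore.url'='thrift://127.0.0.1:1260/my_catalog',
      'default-database'='default'
    );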

The authentication information of an AMS catalog can be provided by uploading configuration files on the AMS website, or by specifying the authentication information and configuration file paths when creating the catalog with Flink DDL:

| Key | Default Value | Type | Required | Description |
| --- | --- | --- | --- | --- |
| properties.auth.load-from-ams | true | Boolean | No | Whether to load the security verification configuration from AMS. true: load from AMS; false: do not use the AMS configuration. Note: regardless of this setting, if the auth.* parameters below are configured, that configuration is used for access. |
| properties.auth.type | (none) | String | No | Table security verification type. Valid values: simple, kerberos, or not configured. If not configured (the default), no permission check is performed. simple: use the Hadoop username, together with 'properties.auth.simple.hadoop_username'; kerberos: use Kerberos verification, together with 'properties.auth.kerberos.principal', 'properties.auth.kerberos.keytab.path', and 'properties.auth.kerberos.krb.path'. |
| properties.auth.simple.hadoop_username | (none) | String | No | Access using this Hadoop username; required when 'properties.auth.type'='simple'. |
| properties.auth.kerberos.principal | (none) | String | No | The Kerberos principal; required when 'properties.auth.type'='kerberos'. |
| properties.auth.kerberos.krb.path | (none) | String | No | The absolute path to the krb5.conf configuration file for Kerberos. This is a local file path on the machine submitting the Flink SQL; if the SQL task is submitted with the Flink SQL Client, it is a local path on that same node, e.g. /XXX/XXX/krb5.conf. Required when 'properties.auth.type'='kerberos'. |
| properties.auth.kerberos.keytab.path | (none) | String | No | The absolute path to the XXX.keytab configuration file for Kerberos. This is a local file path on the machine submitting the Flink SQL; if the SQL task is submitted with the Flink SQL Client, it is a local path on that same node, e.g. /XXX/XXX/XXX.keytab. Required when 'properties.auth.type'='kerberos'. |
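
As a sketch, a catalog that supplies its own Kerberos credentials instead of loading them from AMS could be declared roughly as follows; the principal and file paths are placeholders:

    CREATE CATALOG kerberos_arctic_catalog WITH (
      'type'='arctic',
      'metastore.url'='thrift://127.0.0.1:1260/my_catalog',
      -- do not load the security configuration from AMS; use the values below instead
      'properties.auth.load-from-ams'='false',
      'properties.auth.type'='kerberos',
      -- placeholder principal and local file paths on the submitting node
      'properties.auth.kerberos.principal'='user/host@REALM',
      'properties.auth.kerberos.krb.path'='/path/to/krb5.conf',
      'properties.auth.kerberos.keytab.path'='/path/to/user.keytab'
    );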

YAML configuration

Refer to the Flink SQL Client official configuration. Modify the conf/sql-client-defaults.yaml file in the Flink directory.

    catalogs:
      - name: <catalog_name>
        type: arctic
        metastore.url: ...
        ...

CREATE statement

CREATE DATABASE

By default, the database specified by the default-database option of the catalog (default value: default) is used. A database can be created as in the following example:

    CREATE DATABASE [catalog_name.]arctic_db;
    USE arctic_db;

CREATE TABLE

    CREATE TABLE `arctic_catalog`.`arctic_db`.`test_table` (
      id BIGINT,
      name STRING,
      op_time TIMESTAMP,
      PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
      'key' = 'value'
    );

Most of the Flink SQL CREATE TABLE syntax is currently supported, including:

  • PARTITIONED BY (column1, column2, …): configure Flink partition fields; note that Flink does not yet support hidden partitions.
  • PRIMARY KEY (column1, column2, …): configure primary keys.
  • WITH ('key'='value', …): configure Amoro table properties.

Currently, configuration of computed columns and watermark fields is not supported.
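
For illustration, the supported clauses can be combined in a single statement; the table name, partition column, and property below are placeholders:

    CREATE TABLE `arctic_catalog`.`arctic_db`.`test_partitioned_table` (
      id BIGINT,
      name STRING,
      op_time TIMESTAMP,
      PRIMARY KEY (id) NOT ENFORCED
    ) PARTITIONED BY (name) WITH (
      'key' = 'value'
    );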

PARTITIONED BY

Create a partitioned table using PARTITIONED BY.

    CREATE TABLE `arctic_catalog`.`arctic_db`.`test_table` (
      id BIGINT,
      name STRING,
      op_time TIMESTAMP
    ) PARTITIONED BY (op_time) WITH (
      'key' = 'value'
    );

Amoro tables support hidden partitions, but Flink does not support function-based partitioning. Therefore, only identity (same-value) partitions can currently be created through Flink SQL.

Alternatively, tables can be created without creating a Flink catalog:

    CREATE TABLE `test_table` (
      id BIGINT,
      name STRING,
      op_time TIMESTAMP,
      PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
      'connector' = 'arctic',
      'metastore.url' = '',
      'arctic.catalog' = '',
      'arctic.database' = '',
      'arctic.table' = ''
    );

where metastore.url is the URL of the Amoro Metastore, and arctic.catalog, arctic.database, and arctic.table are the catalog name, database name, and table name of this table in AMS, respectively.
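
For example, the options above could be filled in as follows; the AMS address, catalog, database, and table names are placeholders:

    CREATE TABLE `test_table` (
      id BIGINT,
      name STRING,
      op_time TIMESTAMP,
      PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
      'connector' = 'arctic',
      -- placeholder AMS address and table identifiers; replace with your own
      'metastore.url' = 'thrift://127.0.0.1:1260/my_catalog',
      'arctic.catalog' = 'my_catalog',
      'arctic.database' = 'arctic_db',
      'arctic.table' = 'test_table'
    );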

CREATE TABLE LIKE

Create a table with the same table structure, partitions, and table properties as an existing table. This can be achieved using CREATE TABLE LIKE.

    CREATE TABLE `arctic_catalog`.`arctic_db`.`test_table` (
      id BIGINT,
      name STRING,
      op_time TIMESTAMP
    );

    CREATE TABLE `arctic_catalog`.`arctic_db`.`test_table_like`
      LIKE `arctic_catalog`.`arctic_db`.`test_table`;

Further details can be found in the Flink CREATE TABLE LIKE documentation.

DROP statement

DROP DATABASE

    DROP DATABASE catalog_name.arctic_db;

DROP TABLE

    DROP TABLE `arctic_catalog`.`arctic_db`.`test_table`;

SHOW statement

SHOW DATABASES

View all database names under the current catalog:

    SHOW DATABASES;

SHOW TABLES

View all table names in the current database:

    SHOW TABLES;

SHOW CREATE TABLE

View table details:

    SHOW CREATE TABLE test_table;

DESC statement

View table description:

    DESC test_table;

ALTER statement

Not supported at the moment.

Supported types

Mixed-Hive data types

| Flink Data Type | Hive Data Type |
| --- | --- |
| STRING | CHAR(p) |
| STRING | VARCHAR(p) |
| STRING | STRING |
| BOOLEAN | BOOLEAN |
| INT | TINYINT |
| INT | SMALLINT |
| INT | INT |
| BIGINT | BIGINT |
| FLOAT | FLOAT |
| DOUBLE | DOUBLE |
| DECIMAL(p, s) | DECIMAL(p, s) |
| DATE | DATE |
| TIMESTAMP(6) | TIMESTAMP |
| VARBINARY | BINARY |
| ARRAY | ARRAY |
| MAP | MAP |
| ROW | STRUCT |

Mixed-Iceberg data types

| Flink Data Type | Mixed-Iceberg Data Type |
| --- | --- |
| CHAR(p) | STRING |
| VARCHAR(p) | STRING |
| STRING | STRING |
| BOOLEAN | BOOLEAN |
| TINYINT | INT |
| SMALLINT | INT |
| INT | INT |
| BIGINT | LONG |
| FLOAT | FLOAT |
| DOUBLE | DOUBLE |
| DECIMAL(p, s) | DECIMAL(p, s) |
| DATE | DATE |
| TIMESTAMP(6) | TIMESTAMP |
| TIMESTAMP(6) WITH LOCAL TIME ZONE | TIMESTAMPTZ |
| BINARY(p) | FIXED(p) |
| BINARY(16) | UUID |
| VARBINARY | BINARY |
| ARRAY | ARRAY |
| MAP | MAP |
| ROW | STRUCT |
| MULTISET | MAP |
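
As an illustration of the mapping above, a Mixed-Iceberg table declared with several of these Flink types might look as follows; the table and column names are placeholders:

    CREATE TABLE `arctic_catalog`.`arctic_db`.`typed_table` (
      id BIGINT,                                      -- stored as LONG
      amount DECIMAL(10, 2),                          -- stored as DECIMAL(10, 2)
      created_at TIMESTAMP(6),                        -- stored as TIMESTAMP
      updated_at TIMESTAMP(6) WITH LOCAL TIME ZONE,   -- stored as TIMESTAMPTZ
      tags ARRAY<STRING>,                             -- stored as ARRAY
      attrs MAP<STRING, STRING>,                      -- stored as MAP
      address ROW<city STRING, zip STRING>,           -- stored as STRUCT
      PRIMARY KEY (id) NOT ENFORCED
    );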