Apache Iceberg supports both Apache Flink's DataStream API and Table API. See the Multi-Engine Support page for details on the Apache Flink integration.

| Feature support        | Flink | Notes                                                                            |
|------------------------|-------|----------------------------------------------------------------------------------|
| SQL create catalog     | ✔️    |                                                                                  |
| SQL create database    | ✔️    |                                                                                  |
| SQL create table       | ✔️    |                                                                                  |
| SQL create table like  | ✔️    |                                                                                  |
| SQL alter table        | ✔️    | Only supports altering table properties; column and partition changes are not supported |
| SQL drop table         | ✔️    |                                                                                  |
| SQL select             | ✔️    | Supports both streaming and batch mode                                           |
| SQL insert into        | ✔️    | Supports both streaming and batch mode                                           |
| SQL insert overwrite   | ✔️    |                                                                                  |
| DataStream read        | ✔️    |                                                                                  |
| DataStream append      | ✔️    |                                                                                  |
| DataStream overwrite   | ✔️    |                                                                                  |
| Metadata tables        | ✔️    |                                                                                  |
| Rewrite files action   | ✔️    |                                                                                  |

To create an Iceberg table in Flink, it is recommended to use the Flink SQL Client, as it is easier for users to understand the concepts.

Download Flink from the Apache download page. Iceberg uses Scala 2.12 when compiling the Apache iceberg-flink-runtime jar, so it’s recommended to use Flink 1.16 bundled with Scala 2.12.

```bash
FLINK_VERSION=1.16.2
SCALA_VERSION=2.12
APACHE_FLINK_URL=https://archive.apache.org/dist/flink/
wget ${APACHE_FLINK_URL}/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-scala_${SCALA_VERSION}.tgz
tar xzvf flink-${FLINK_VERSION}-bin-scala_${SCALA_VERSION}.tgz
```

Start a standalone Flink cluster within a Hadoop environment:

```bash
# HADOOP_HOME is your hadoop root directory after unpacking the binary package.
APACHE_HADOOP_URL=https://archive.apache.org/dist/hadoop/
HADOOP_VERSION=2.8.5
wget ${APACHE_HADOOP_URL}/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz
tar xzvf hadoop-${HADOOP_VERSION}.tar.gz
HADOOP_HOME=`pwd`/hadoop-${HADOOP_VERSION}
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`

# Start the Flink standalone cluster
./bin/start-cluster.sh
```

Start the Flink SQL client. There is a separate flink-runtime module in the Iceberg project that generates a bundled jar, which can be loaded by the Flink SQL client directly. To build the flink-runtime bundled jar manually, build the Iceberg project; the jar will be generated under <iceberg-root-dir>/flink-runtime/build/libs. Alternatively, download the flink-runtime jar from the Apache repository.

```bash
# HADOOP_HOME is your hadoop root directory after unpacking the binary package.
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`

# The -j option works for Flink 1.15 or earlier
./bin/sql-client.sh embedded -j <flink-runtime-directory>/iceberg-flink-runtime-1.15-{{ icebergVersion }}.jar shell

# Flink 1.16 or above has a regression in loading external jars via the -j option.
# See FLINK-30035 for details. Instead, put iceberg-flink-runtime-1.16-{{ icebergVersion }}.jar
# into the flink/lib dir and start the client without -j:
./bin/sql-client.sh embedded shell
```

By default, Iceberg ships with Hadoop jars for the Hadoop catalog. To use the Hive catalog, load the Hive jars when opening the Flink SQL client. Fortunately, Flink provides a bundled Hive jar for the SQL client. Here is an example of how to download the dependencies and get started:

```bash
# HADOOP_HOME is your hadoop root directory after unpacking the binary package.
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`

ICEBERG_VERSION={{ icebergVersion }}
MAVEN_URL=https://repo1.maven.org/maven2
ICEBERG_MAVEN_URL=${MAVEN_URL}/org/apache/iceberg
ICEBERG_PACKAGE=iceberg-flink-runtime
FLINK_VERSION_MAJOR=1.16
wget ${ICEBERG_MAVEN_URL}/${ICEBERG_PACKAGE}-${FLINK_VERSION_MAJOR}/${ICEBERG_VERSION}/${ICEBERG_PACKAGE}-${FLINK_VERSION_MAJOR}-${ICEBERG_VERSION}.jar -P lib/

HIVE_VERSION=2.3.9
SCALA_VERSION=2.12
FLINK_VERSION=1.16.2
FLINK_CONNECTOR_URL=${MAVEN_URL}/org/apache/flink
FLINK_CONNECTOR_PACKAGE=flink-sql-connector-hive
wget ${FLINK_CONNECTOR_URL}/${FLINK_CONNECTOR_PACKAGE}-${HIVE_VERSION}_${SCALA_VERSION}/${FLINK_VERSION}/${FLINK_CONNECTOR_PACKAGE}-${HIVE_VERSION}_${SCALA_VERSION}-${FLINK_VERSION}.jar

./bin/sql-client.sh embedded shell
```

PyFlink 1.6.1 does not work on macOS with an M1 CPU.

Install the Apache Flink dependency using pip:

```bash
pip install apache-flink==1.16.2
```

Provide a file:// path to the iceberg-flink-runtime jar, which can be obtained by building the project and looking at <iceberg-root-dir>/flink-runtime/build/libs, or by downloading it from the Apache official repository. Third-party jars can be added to PyFlink via:

  • env.add_jars("file:///my/jar/path/connector.jar")
  • table_env.get_config().get_configuration().set_string("pipeline.jars", "file:///my/jar/path/connector.jar")

This is also mentioned in the official docs. The example below uses env.add_jars(..):

```python
import os

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
iceberg_flink_runtime_jar = os.path.join(os.getcwd(), "iceberg-flink-runtime-1.16-{{ icebergVersion }}.jar")

env.add_jars("file://{}".format(iceberg_flink_runtime_jar))
```

Next, create a StreamTableEnvironment and execute Flink SQL statements. The below example shows how to create a custom catalog via the Python Table API:

```python
from pyflink.table import StreamTableEnvironment

table_env = StreamTableEnvironment.create(env)
table_env.execute_sql("""
CREATE CATALOG my_catalog WITH (
    'type'='iceberg',
    'catalog-impl'='com.my.custom.CatalogImpl',
    'my-additional-catalog-config'='my-value'
)
""")
```

Run a query:

```python
(table_env
    .sql_query("SELECT PULocationID, DOLocationID, passenger_count FROM my_catalog.nyc.taxis LIMIT 5")
    .execute()
    .print())
```

```
+----+----------------------+----------------------+--------------------------------+
| op |         PULocationID |         DOLocationID |                passenger_count |
+----+----------------------+----------------------+--------------------------------+
| +I |                  249 |                   48 |                            1.0 |
| +I |                  132 |                  233 |                            1.0 |
| +I |                  164 |                  107 |                            1.0 |
| +I |                   90 |                  229 |                            1.0 |
| +I |                  137 |                  249 |                            1.0 |
+----+----------------------+----------------------+--------------------------------+
5 rows in set
```

For more details, please refer to the Python Table API.

Adding catalogs

Flink supports creating catalogs by using Flink SQL.

Catalog Configuration

A catalog is created and named by executing the following query (replace <catalog_name> with your catalog name and <config_key>=<config_value> with catalog implementation config):

```sql
CREATE CATALOG <catalog_name> WITH (
  'type'='iceberg',
  `<config_key>`=`<config_value>`
);
```

The following properties can be set globally and are not limited to a specific catalog implementation (a short example follows the list):

  • type: Must be iceberg. (required)
  • catalog-type: hive, hadoop, rest, glue, jdbc or nessie for built-in catalogs, or left unset for custom catalog implementations using catalog-impl. (Optional)
  • catalog-impl: The fully-qualified class name of a custom catalog implementation. Must be set if catalog-type is unset. (Optional)
  • property-version: Version number to describe the property version. This property can be used for backwards compatibility in case the property format changes. The current property version is 1. (Optional)
  • cache-enabled: Whether to enable catalog cache; default value is true. (Optional)
  • cache.expiration-interval-ms: How long catalog entries are locally cached, in milliseconds; negative values like -1 disable expiration, and the value 0 is not allowed. Default value is -1. (Optional)
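
For example, here is a minimal sketch that sets the cache properties explicitly. It issues the same CREATE CATALOG DDL from a Java TableEnvironment instead of the SQL client; the catalog name, the choice of a Hadoop catalog, the warehouse path, and the 5-minute expiration are illustrative assumptions, not values prescribed by this page.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CreateCatalogExample {
    public static void main(String[] args) {
        // The CREATE CATALOG DDL shown above can be issued from any TableEnvironment.
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // 'cache-enabled' and 'cache.expiration-interval-ms' are the global catalog
        // properties described above; 300000 ms (5 minutes) is just an example value.
        tEnv.executeSql(
            "CREATE CATALOG my_hadoop_catalog WITH (" +
            "  'type'='iceberg'," +
            "  'catalog-type'='hadoop'," +
            "  'warehouse'='hdfs://nn:8020/warehouse/path'," +
            "  'cache-enabled'='true'," +
            "  'cache.expiration-interval-ms'='300000'" +
            ")");
    }
}
```

The same WITH clause works verbatim in the Flink SQL client.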

Hive catalog

This creates an Iceberg catalog named hive_catalog that can be configured using 'catalog-type'='hive', which loads tables from Hive metastore:

```sql
CREATE CATALOG hive_catalog WITH (
  'type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://localhost:9083',
  'clients'='5',
  'property-version'='1',
  'warehouse'='hdfs://nn:8020/warehouse/path'
);

The following properties can be set if using the Hive catalog (a programmatic sketch follows the list):

  • uri: The Hive metastore’s thrift URI. (Required)
  • clients: The Hive metastore client pool size, default value is 2. (Optional)
  • warehouse: The Hive warehouse location. Users should specify this path if they neither set hive-conf-dir to a location containing a hive-site.xml configuration file nor add a correct hive-site.xml to the classpath.
  • hive-conf-dir: Path to a directory containing a hive-site.xml configuration file which will be used to provide custom Hive configuration values. The value of hive.metastore.warehouse.dir from <hive-conf-dir>/hive-site.xml (or the Hive configuration file on the classpath) will be overwritten by the warehouse value if both hive-conf-dir and warehouse are set when creating the Iceberg catalog.
  • hadoop-conf-dir: Path to a directory containing core-site.xml and hdfs-site.xml configuration files which will be used to provide custom Hadoop configuration values.
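
If the same Hive catalog is needed from the DataStream API (see the Writing and Reading sections below), these properties can also be supplied programmatically when building a TableLoader. The following is a minimal sketch, assuming the CatalogLoader.hive and TableLoader.fromCatalog helpers from the iceberg-flink module and reusing the hive_catalog / default.sample names from this section.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.flink.CatalogLoader;
import org.apache.iceberg.flink.TableLoader;

public class HiveTableLoaderExample {
    public static void main(String[] args) {
        // Hadoop configuration; resources from hadoop-conf-dir / hive-conf-dir could be added here.
        Configuration hadoopConf = new Configuration();

        // The same properties documented above for the Hive catalog.
        Map<String, String> properties = new HashMap<>();
        properties.put("uri", "thrift://localhost:9083");
        properties.put("warehouse", "hdfs://nn:8020/warehouse/path");
        properties.put("clients", "5");

        CatalogLoader catalogLoader = CatalogLoader.hive("hive_catalog", hadoopConf, properties);

        // The resulting loader can be passed to FlinkSink / FlinkSource as shown later on this page.
        TableLoader tableLoader =
            TableLoader.fromCatalog(catalogLoader, TableIdentifier.of("default", "sample"));
    }
}
```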

Creating a table

```sql
CREATE TABLE `hive_catalog`.`default`.`sample` (
  id BIGINT COMMENT 'unique id',
  data STRING
);
```

Writing

To append new data to a table with a Flink streaming job, use INSERT INTO:

```sql
INSERT INTO `hive_catalog`.`default`.`sample` VALUES (1, 'a');
INSERT INTO `hive_catalog`.`default`.`sample` SELECT id, data from other_kafka_table;
```

To replace data in the table with the result of a query, use INSERT OVERWRITE in a batch job (Flink streaming jobs do not support INSERT OVERWRITE). Overwrites are atomic operations for Iceberg tables.

Partitions that have rows produced by the SELECT query will be replaced, for example:

```sql
INSERT OVERWRITE `hive_catalog`.`default`.`sample` VALUES (1, 'a');
```

Iceberg also supports overwriting given partitions by the select values:

```sql
INSERT OVERWRITE `hive_catalog`.`default`.`sample` PARTITION(data='a') SELECT 6;
```

Flink supports writing DataStream<RowData> and DataStream<Row> to the sink Iceberg table natively. The example below writes RowData; a Row variant is sketched after it.

```java
StreamExecutionEnvironment env = ...;

DataStream<RowData> input = ... ;
Configuration hadoopConf = new Configuration();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path", hadoopConf);

FlinkSink.forRowData(input)
    .tableLoader(tableLoader)
    .append();

env.execute("Test Iceberg DataStream");
```
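
For DataStream<Row>, the sink needs a table schema in order to convert each Row field. The following is a minimal sketch, assuming the FlinkSink.forRow(stream, tableSchema) builder and a simple (id BIGINT, data STRING) schema matching the sample table created earlier; the rows stream is a placeholder.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.types.Row;

DataStream<Row> rows = ...;

// The schema tells the sink how to map each Row field to the Iceberg table's columns.
TableSchema schema = TableSchema.builder()
    .field("id", DataTypes.BIGINT())
    .field("data", DataTypes.STRING())
    .build();

FlinkSink.forRow(rows, schema)
    .tableLoader(tableLoader)
    .append();
```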

Branch Writes

Writing to branches in Iceberg tables is also supported via the toBranch API in FlinkSink. For more information on branches, please refer to branches.

```java
FlinkSink.forRowData(input)
    .tableLoader(tableLoader)
    .toBranch("audit-branch")
    .append();
```

Reading

Submit a Flink batch job using the following statements:

```sql
-- Execute the flink job in batch mode for current session context
SET execution.runtime-mode = batch;
SELECT * FROM `hive_catalog`.`default`.`sample`;
```

Iceberg supports processing incremental data in Flink streaming jobs, starting from a historical snapshot-id:

```sql
-- Submit the flink job in streaming mode for current session.
SET execution.runtime-mode = streaming;

-- Enable this switch because the streaming read SQL will provide job options via Flink SQL hint options.
SET table.dynamic-table-options.enabled=true;

-- Read all the records from the iceberg current snapshot, and then read incremental data starting from that snapshot.
SELECT * FROM `hive_catalog`.`default`.`sample` /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s')*/ ;

-- Read all incremental data starting from the snapshot-id '3821550127947089987' (records from this snapshot will be excluded).
SELECT * FROM `hive_catalog`.`default`.`sample` /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s', 'start-snapshot-id'='3821550127947089987')*/ ;
```

SQL is also the recommended way to inspect tables. To view all of the snapshots in a table, use the snapshots metadata table:

```sql
SELECT * FROM `hive_catalog`.`default`.`sample`.`snapshots`
```

Iceberg supports streaming and batch reads in the Java API:

```java
DataStream<RowData> batch = FlinkSource.forRowData()
    .env(env)
    .tableLoader(tableLoader)
    .streaming(false)
    .build();
```
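
And a sketch of the corresponding streaming read, assuming the builder's startSnapshotId option; the snapshot id simply reuses the value from the SQL example above and is illustrative.

```java
DataStream<RowData> stream = FlinkSource.forRowData()
    .env(env)
    .tableLoader(tableLoader)
    .streaming(true)
    // Optionally start from a historical snapshot; records from that snapshot are excluded.
    .startSnapshotId(3821550127947089987L)
    .build();
```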

Type conversion

Iceberg’s integration for Flink automatically converts between Flink and Iceberg types. When writing to a table with types that are not supported by Flink, like UUID, Iceberg will accept and convert values from the Flink type.

Flink types are converted to Iceberg types according to the following table:

| Flink               | Iceberg                    | Notes         |
|---------------------|----------------------------|---------------|
| boolean             | boolean                    |               |
| tinyint             | integer                    |               |
| smallint            | integer                    |               |
| integer             | integer                    |               |
| bigint              | long                       |               |
| float               | float                      |               |
| double              | double                     |               |
| char                | string                     |               |
| varchar             | string                     |               |
| string              | string                     |               |
| binary              | binary                     |               |
| varbinary           | fixed                      |               |
| decimal             | decimal                    |               |
| date                | date                       |               |
| time                | time                       |               |
| timestamp           | timestamp without timezone |               |
| timestamp_ltz       | timestamp with timezone    |               |
| array               | list                       |               |
| map                 | map                        |               |
| multiset            | map                        |               |
| row                 | struct                     |               |
| raw                 |                            | Not supported |
| interval            |                            | Not supported |
| structured          |                            | Not supported |
| timestamp with zone |                            | Not supported |
| distinct            |                            | Not supported |
| null                |                            | Not supported |
| symbol              |                            | Not supported |
| logical             |                            | Not supported |

Iceberg types are converted to Flink types according to the following table:

| Iceberg                    | Flink                 |
|----------------------------|-----------------------|
| boolean                    | boolean               |
| struct                     | row                   |
| list                       | array                 |
| map                        | map                   |
| integer                    | integer               |
| long                       | bigint                |
| float                      | float                 |
| double                     | double                |
| date                       | date                  |
| time                       | time                  |
| timestamp without timezone | timestamp(6)          |
| timestamp with timezone    | timestamp_ltz(6)      |
| string                     | varchar(2147483647)   |
| uuid                       | binary(16)            |
| fixed(N)                   | binary(N)             |
| binary                     | varbinary(2147483647) |
| decimal(P, S)              | decimal(P, S)         |

Future improvements

There are some features that are not yet supported in the current Flink Iceberg integration work:

  • Creating an Iceberg table with hidden partitioning is not supported. See the discussion on the Flink mailing list.
  • Creating an Iceberg table with a computed column is not supported.
  • Creating an Iceberg table with a watermark is not supported.
  • Adding, removing, renaming, and changing columns are not supported. FLINK-19062 is tracking this.