Apache Hudi (pronounced “hoodie”) is a next-generation streaming data lake platform that brings core warehouse and database functionality directly to a data lake.

This article assumes that you have a basic working knowledge of Hudi. For anything about Hudi not covered here, refer to its Official Documentation.

By using Kyuubi, we can run SQL queries against Hudi in a way that is more convenient, easier to understand, and easier to extend than manipulating Hudi directly with Flink.

Hudi Integration

To enable the integration of the Kyuubi Flink SQL engine and Hudi through the Catalog APIs, you need to:

Dependencies

The classpath of the Kyuubi Flink SQL engine with Hudi support consists of

  1. kyuubi-flink-sql-engine-1.9.1_2.12.jar, the engine jar deployed with Kyuubi distributions
  2. a copy of flink distribution
  3. hudi-flink<flink.version>-bundle_<scala.version>-<hudi.version>.jar (example: hudi-flink1.14-bundle_2.12-0.11.1.jar), which can be found on Maven Central

In order to make the Hudi packages visible to the runtime classpath of the engine, we can use one of these methods:

  1. Put the Hudi packages into $FLINK_HOME/lib directly
  2. Set pipeline.jars=/path/to/hudi-flink-bundle (see the sketch below)
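
For the second method, the jar can be registered per session with a Flink SQL SET statement. A minimal sketch, keeping the placeholder path from above and assuming the 0.11.1 bundle named earlier:

  -- register the Hudi bundle for this session (replace the placeholder path)
  SET 'pipeline.jars' = 'file:///path/to/hudi-flink1.14-bundle_2.12-0.11.1.jar';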

Hudi Operations

Taking Create Table as an example,

  CREATE TABLE t1 (
    id INT PRIMARY KEY NOT ENFORCED,
    name STRING,
    price DOUBLE
  ) WITH (
    'connector' = 'hudi',
    'path' = 's3://bucket-name/hudi/',
    'table.type' = 'MERGE_ON_READ' -- creates a MERGE_ON_READ table; the default is COPY_ON_WRITE
  );

Taking Query Data as an example,

  SELECT * FROM t1;

Taking Insert and Update Data as an example,

  INSERT INTO t1 VALUES (1, 'Lucas', 2.71828);
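
The Hudi Flink connector writes with upsert semantics by default, so an update is expressed as an insert of the same primary key. A minimal sketch (the new price value is made up for illustration):

  -- inserting the same key again updates the existing row (upsert on the primary key)
  INSERT INTO t1 VALUES (1, 'Lucas', 3.14159);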

Taking Streaming Query as an example,

  CREATE TABLE t1 (
    uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
    name VARCHAR(10),
    age INT,
    ts TIMESTAMP(3),
    `partition` VARCHAR(20)
  )
  PARTITIONED BY (`partition`)
  WITH (
    'connector' = 'hudi',
    'path' = '${path}',
    'table.type' = 'MERGE_ON_READ',
    'read.streaming.enabled' = 'true', -- this option enables streaming read
    'read.start-commit' = '20210316134557', -- specifies the start commit instant time
    'read.streaming.check-interval' = '4' -- specifies the check interval for finding new source commits, default is 60s
  );
  -- Then query the table in stream mode
  SELECT * FROM t1;

Taking Delete Data as an example,

A streaming query can delete data implicitly. When consuming data in a streaming query, the Hudi Flink source can also accept change logs from the underlying data source, and it then applies UPDATE and DELETE operations at the per-row level.
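
There is no standalone DELETE statement in this flow: deletions arrive as change-log records. A minimal sketch of the usual pattern, assuming a hypothetical change-log source table cdc_source (for example, backed by a CDC connector) whose schema matches t1:

  -- rows marked as DELETE in cdc_source remove the matching keys from t1;
  -- UPDATE rows are applied as upserts on the primary key
  INSERT INTO t1 SELECT * FROM cdc_source;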