Apache Hudi (pronounced “hoodie”) is a next-generation streaming data lake platform that brings core warehouse and database functionality directly to the data lake.

This article assumes that you are familiar with basic Hudi concepts and operations. For anything about Hudi not covered here, refer to its Official Documentation.

By using Kyuubi, we can run SQL queries against Hudi in a way that is more convenient, easier to understand, and easier to extend than using Trino directly.

Hudi Integration

To enable the integration of the Kyuubi Trino SQL engine and Hudi, you need to set the Trino catalog configurations described below.

Configurations

Catalogs are registered by creating a file of catalog properties in the $TRINO_SERVER_HOME/etc/catalog directory. For example, we can create a $TRINO_SERVER_HOME/etc/catalog/hudi.properties file with the following contents to mount the Hudi connector as the hudi catalog:

  connector.name=hudi
  hive.metastore.uri=thrift://example.net:9083

Note: You need to replace $TRINO_SERVER_HOME above with your Trino server home path, such as /opt/trino-server-406.

More configuration properties can be found in the Hudi connector documentation of Trino.

For Trino version 398 or higher, it is recommended to use the Hudi connector, and no additional dependencies need to be installed.
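
Besides the Trino-side catalog, Kyuubi itself must be pointed at the Trino cluster. Below is a minimal sketch of the relevant entries in $KYUUBI_HOME/conf/kyuubi-defaults.conf, assuming a standard Kyuubi deployment; the coordinator URL is a placeholder to replace with your own:

  # Use the Trino engine instead of the default Spark SQL engine
  kyuubi.engine.type=TRINO
  # URL of the Trino coordinator (placeholder host and port)
  kyuubi.session.engine.trino.connection.url=http://trino-coordinator.example.net:8080
  # Default catalog for the session; matches the hudi.properties file above
  kyuubi.session.engine.trino.connection.catalog=hudi

With these set, clients connect to the Kyuubi server over the Hive JDBC protocol as usual, and the submitted SQL is executed by the Trino engine.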

Hudi Operations

Trino supports the globally available statements and read operations for Hudi; these statements can be found in Trino SQL Support. Currently, Trino cannot write data to a Hudi table. A common scenario is therefore to write data with Spark or Flink and read it with Trino, as in the sketch below. You can then use the Kyuubi Trino SQL engine to query the table with a SQL SELECT statement.
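
As a sketch of the write side of that scenario, the Spark SQL below creates and populates a copy-on-write Hudi table shaped like the one queried in the next example; the price column, the record key, and the precombine field are illustrative assumptions, not details taken from this article:

  -- Run in Spark SQL with the Hudi Spark bundle on the classpath
  CREATE TABLE stock_ticks_cow (
    symbol STRING,
    ts STRING,
    price DOUBLE            -- hypothetical extra column for illustration
  ) USING hudi
  TBLPROPERTIES (
    type = 'cow',             -- copy-on-write, readable by Trino
    primaryKey = 'symbol',    -- assumed record key
    preCombineField = 'ts'    -- assumed precombine field
  );

  INSERT INTO stock_ticks_cow
  VALUES ('GOOG', '2018-08-31 10:29:00', 1230.5);

Once the table is registered in the shared Hive Metastore (the one referenced by hive.metastore.uri above), it becomes queryable from Trino without further setup.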

Taking Query Data as an example (here the catalog is named example; a catalog mounted via hudi.properties would be addressed as hudi, since Trino derives the catalog name from the properties file name):

  USE example.example_schema;

  SELECT symbol, max(ts)
  FROM stock_ticks_cow
  GROUP BY symbol
  HAVING symbol = 'GOOG';
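
If the query cannot find the table, it may help to first confirm that the catalog is mounted and the table is visible; the statements below are standard Trino SQL, using the same placeholder names as above:

  SHOW SCHEMAS FROM example;
  SHOW TABLES FROM example.example_schema;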