Hive Metastore is an RDBMS-backed service from Apache Hive that acts as a catalog for your data warehouse or data lake. It stores metadata about tables, such as partitions, columns, and column types. Hudi table metadata can be synced to the Hive metastore as well. This unlocks the capability to query Hudi tables not only through Hive but also through interactive query engines such as Presto and Trino. In this document, we will go through different ways to sync a Hudi table to the Hive metastore.

Spark Data Source example

Prerequisites: set up the Hive metastore properly and configure the Spark installation to point to it by placing hive-site.xml under $SPARK_HOME/conf.

Assume that

  • hiveserver2 is running at port 10000
  • metastore is running at port 9083

Then start a spark-shell with the Hudi Spark bundle jar as a dependency (refer to the Quickstart example).

We can run the following script to create a sample Hudi table and sync it to Hive.

  // spark-shell
  import org.apache.hudi.QuickstartUtils._
  import scala.collection.JavaConversions._
  import org.apache.spark.sql.SaveMode._
  import org.apache.hudi.DataSourceReadOptions._
  import org.apache.hudi.DataSourceWriteOptions._
  import org.apache.hudi.config.HoodieWriteConfig._
  import org.apache.spark.sql.types._
  import org.apache.spark.sql.Row

  val databaseName = "my_db"
  val tableName = "hudi_cow"
  val basePath = "/user/hive/warehouse/hudi_cow"

  val schema = StructType(Array(
    StructField("rowId", StringType, true),
    StructField("partitionId", StringType, true),
    StructField("preComb", LongType, true),
    StructField("name", StringType, true),
    StructField("versionId", StringType, true),
    StructField("toBeDeletedStr", StringType, true),
    StructField("intToLong", IntegerType, true),
    StructField("longToInt", LongType, true)
  ))

  val data0 = Seq(
    Row("row_1", "2021/01/01", 0L, "bob", "v_0", "toBeDel0", 0, 1000000L),
    Row("row_2", "2021/01/01", 0L, "john", "v_0", "toBeDel0", 0, 1000000L),
    Row("row_3", "2021/01/02", 0L, "tom", "v_0", "toBeDel0", 0, 1000000L))

  var dfFromData0 = spark.createDataFrame(data0, schema)

  dfFromData0.write.format("hudi").
    options(getQuickstartWriteConfigs).
    option("hoodie.datasource.write.precombine.field", "preComb").
    option("hoodie.datasource.write.recordkey.field", "rowId").
    option("hoodie.datasource.write.partitionpath.field", "partitionId").
    option("hoodie.database.name", databaseName).
    option("hoodie.table.name", tableName).
    option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").
    option("hoodie.datasource.write.operation", "upsert").
    option("hoodie.datasource.write.hive_style_partitioning", "true").
    option("hoodie.datasource.meta.sync.enable", "true").
    option("hoodie.datasource.hive_sync.mode", "hms").
    option("hoodie.datasource.hive_sync.metastore.uris", "thrift://hive-metastore:9083").
    mode(Overwrite).
    save(basePath)

If you prefer to use JDBC instead of the HMS sync mode, omit hoodie.datasource.hive_sync.metastore.uris and configure these options instead:

  hoodie.datasource.hive_sync.mode=jdbc
  hoodie.datasource.hive_sync.jdbcurl=<e.g., jdbc:hive2://hiveserver:10000>
  hoodie.datasource.hive_sync.username=<username>
  hoodie.datasource.hive_sync.password=<password>
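
When writing from spark-shell, these JDBC properties map onto the same writer options used in the example above. Below is a minimal sketch (the host name and the hive/hive credentials are placeholders; adjust them to your environment):

  // sketch: same write as above, but syncing to Hive via JDBC instead of HMS
  dfFromData0.write.format("hudi").
    options(getQuickstartWriteConfigs).
    option("hoodie.datasource.write.precombine.field", "preComb").
    option("hoodie.datasource.write.recordkey.field", "rowId").
    option("hoodie.datasource.write.partitionpath.field", "partitionId").
    option("hoodie.database.name", databaseName).
    option("hoodie.table.name", tableName).
    option("hoodie.datasource.write.hive_style_partitioning", "true").
    option("hoodie.datasource.meta.sync.enable", "true").
    option("hoodie.datasource.hive_sync.mode", "jdbc").
    option("hoodie.datasource.hive_sync.jdbcurl", "jdbc:hive2://hiveserver:10000"). // placeholder host
    option("hoodie.datasource.hive_sync.username", "hive").                         // placeholder credentials
    option("hoodie.datasource.hive_sync.password", "hive").
    mode(Overwrite).
    save(basePath)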

Query using HiveQL

  beeline -u jdbc:hive2://hiveserver:10000/my_db \
    --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
    --hiveconf hive.stats.autogather=false

  Beeline version 1.2.1.spark2 by Apache Hive
  0: jdbc:hive2://hiveserver:10000> show tables;
  +-----------+--+
  | tab_name  |
  +-----------+--+
  | hudi_cow  |
  +-----------+--+
  1 row selected (0.531 seconds)
  0: jdbc:hive2://hiveserver:10000> select * from hudi_cow limit 1;
  +-------------------------------+--------------------------------+------------------------------+----------------------------------+----------------------------------------------------------------------------+-----------------+-------------------+----------------+---------------------+--------------------------+---------------------+---------------------+-----------------------+--+
  | hudi_cow._hoodie_commit_time | hudi_cow._hoodie_commit_seqno | hudi_cow._hoodie_record_key | hudi_cow._hoodie_partition_path | hudi_cow._hoodie_file_name | hudi_cow.rowid | hudi_cow.precomb | hudi_cow.name | hudi_cow.versionid | hudi_cow.tobedeletedstr | hudi_cow.inttolong | hudi_cow.longtoint | hudi_cow.partitionid |
  +-------------------------------+--------------------------------+------------------------------+----------------------------------+----------------------------------------------------------------------------+-----------------+-------------------+----------------+---------------------+--------------------------+---------------------+---------------------+-----------------------+--+
  | 20220120090023631 | 20220120090023631_1_2 | row_1 | partitionId=2021/01/01 | 0bf9b822-928f-4a57-950a-6a5450319c83-0_1-24-314_20220120090023631.parquet | row_1 | 0 | bob | v_0 | toBeDel0 | 0 | 1000000 | 2021/01/01 |
  +-------------------------------+--------------------------------+------------------------------+----------------------------------+----------------------------------------------------------------------------+-----------------+-------------------+----------------+---------------------+--------------------------+---------------------+---------------------+-----------------------+--+
  1 row selected (5.475 seconds)
  0: jdbc:hive2://hiveserver:10000>

Use partition extractor properly

When syncing to the Hive metastore, partition values are extracted using the configured hoodie.datasource.hive_sync.partition_extractor_class. Before release 0.12.0, this was set to org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor by default, and users usually needed to override it manually. Since 0.12.0, the default has changed to the more generic org.apache.hudi.hive.MultiPartKeysValueExtractor, which extracts partition values using / as the separator.

When using a key generator such as TimestampBasedKeyGenerator, partition values can be in the form yyyy/MM/dd. It is usually undesirable to have such values extracted as multiple parts like [yyyy, MM, dd]. In that case, users can set org.apache.hudi.hive.SinglePartPartitionValueExtractor to extract the partition values as yyyy-MM-dd.

When the table is not partitioned, org.apache.hudi.hive.NonPartitionedExtractor should be set. This is automatically inferred from the partition fields config, so users may not need to set it manually. Similarly, if hive-style partitioning is used for the table, org.apache.hudi.hive.HiveStylePartitionValueExtractor will be inferred and set automatically.
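
As a concrete illustration, the extractor can be overridden through the hoodie.datasource.hive_sync.partition_extractor_class writer option. The sketch below reuses the spark-shell variables from the example above and keeps a yyyy/MM/dd partition value as a single part instead of three; only set this when the inferred default does not fit your layout:

  // sketch: explicitly choose the partition value extractor used during hive sync
  dfFromData0.write.format("hudi").
    options(getQuickstartWriteConfigs).
    option("hoodie.datasource.write.precombine.field", "preComb").
    option("hoodie.datasource.write.recordkey.field", "rowId").
    option("hoodie.datasource.write.partitionpath.field", "partitionId").
    option("hoodie.table.name", tableName).
    option("hoodie.datasource.meta.sync.enable", "true").
    option("hoodie.datasource.hive_sync.mode", "hms").
    option("hoodie.datasource.hive_sync.metastore.uris", "thrift://hive-metastore:9083").
    // extract e.g. 2021/01/01 as a single value rather than [2021, 01, 01]
    option("hoodie.datasource.hive_sync.partition_extractor_class",
      "org.apache.hudi.hive.SinglePartPartitionValueExtractor").
    mode(Overwrite).
    save(basePath)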

Hive Sync Tool

Writing data with the DataSource writer or HoodieStreamer supports syncing the table's latest schema to the Hive metastore, so that queries can pick up new columns and partitions. In case it is preferable to run this from the command line or in an independent JVM, Hudi provides a HiveSyncTool, which can be invoked as below once you have built the hudi-hive module. The following shows how to sync the table written above with the DataSource writer to the Hive metastore.

  cd hudi-hive
  ./run_sync_tool.sh --jdbc-url jdbc:hive2:\/\/hiveserver:10000 --user hive --pass hive --partitioned-by partition --base-path <basePath> --database default --table <tableName>

Starting with Hudi version 0.5.1, the read-optimized view of merge-on-read tables is suffixed with '_ro' by default. For backwards compatibility with older Hudi versions, an optional HiveSyncConfig, --skip-ro-suffix, is provided to turn off the '_ro' suffix if desired. Explore other hive sync options using the following command:

  cd hudi-hive
  ./run_sync_tool.sh --help

Hive Sync Configuration

Please take a look at the arguments that can be passed to run_sync_tool in HiveSyncConfig. Among them, the following arguments are required:

  @Parameter(names = {"--database"}, description = "name of the target database in Hive", required = true)
  @Parameter(names = {"--table"}, description = "name of the target table in Hive", required = true)
  @Parameter(names = {"--base-path"}, description = "Basepath of Hudi table to sync", required = true)

Corresponding datasource options for the most commonly used hive sync configs are as follows:

In the table below (N/A) means there is no default value set.

HiveSyncConfig | DataSourceWriteOption | Default Value | Description
--database | hoodie.datasource.hive_sync.database | default | Name of the target database in Hive metastore
--table | hoodie.datasource.hive_sync.table | (N/A) | Name of the target table in Hive. Inferred from the table name in Hudi table config if not specified.
--user | hoodie.datasource.hive_sync.username | hive | Username for Hive metastore
--pass | hoodie.datasource.hive_sync.password | hive | Password for Hive metastore
--jdbc-url | hoodie.datasource.hive_sync.jdbcurl | jdbc:hive2://localhost:10000 | Hive server URL, if using JDBC mode to sync
--sync-mode | hoodie.datasource.hive_sync.mode | (N/A) | Mode to choose for Hive ops. Valid values are hms, jdbc and hiveql. More details in the following section.
--partitioned-by | hoodie.datasource.hive_sync.partition_fields | (N/A) | Comma-separated column names in the table to use for determining Hive partitions.
--partition-value-extractor | hoodie.datasource.hive_sync.partition_extractor_class | org.apache.hudi.hive.MultiPartKeysValueExtractor | Class which implements PartitionValueExtractor to extract the partition values. Inferred automatically depending on the partition fields specified.

Sync modes

HiveSyncTool supports three modes, namely HMS, HIVEQL, and JDBC, to connect to the Hive metastore server. These modes are just three different ways of executing DDL against Hive. Among these modes, JDBC or HMS is preferable over HIVEQL, which is mostly used for running DML rather than DDL.

All these modes assume that the Hive metastore has been configured and the corresponding properties are set in the hive-site.xml configuration file. Additionally, if you're using spark-shell/spark-sql to sync a Hudi table to Hive, then the hive-site.xml file also needs to be placed under the <SPARK_HOME>/conf directory.
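
If you are unsure whether the spark-shell session has picked up hive-site.xml, a quick sanity check (a minimal sketch) is to list the databases visible to the session; the databases registered in the Hive metastore should show up:

  // spark-shell: databases registered in the Hive metastore should be listed here
  spark.sql("show databases").show(false)
  // equivalently, via the catalog API
  spark.catalog.listDatabases().show(false)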

HMS

HMS mode uses the Hive metastore client to sync the Hudi table using thrift APIs directly. To use this mode, pass --sync-mode=hms to run_sync_tool and set --use-jdbc=false. Additionally, if you are using a remote metastore, hive.metastore.uris needs to be set in the hive-site.xml configuration file. Otherwise, the tool assumes that the metastore is running locally on port 9083 by default.

JDBC

This mode uses the JDBC specification to connect to the hive metastore.

  @Parameter(names = {"--jdbc-url"}, description = "Hive jdbc connect url")

HIVEQL

HQL is Hive's own SQL dialect. This mode simply uses the Hive QL driver to execute the DDL as an HQL command. To use this mode, pass --sync-mode=hiveql to run_sync_tool and set --use-jdbc=false.

Flink Setup

Install

Now you can git clone the Hudi master branch to test Flink hive sync. The first step is to install Hudi to get hudi-flink1.1x-bundle-0.x.x.jar. The hudi-flink-bundle module pom.xml sets the scope of the hive-related dependencies to provided by default. If you want to use hive sync, you need to use the flink-bundle-shade-hive profile during packaging. Execute the command below to install:

  # Maven install command
  mvn install -DskipTests -Drat.skip=true -Pflink-bundle-shade-hive2
  # For hive1, you need to use profile -Pflink-bundle-shade-hive1
  # For hive3, you need to use profile -Pflink-bundle-shade-hive3

Hive 1.x can currently only synchronize metadata to Hive; querying through Hive itself is not supported. If you need to query the table, you can use Spark to query the Hive table.

If you use a hive profile, you need to modify the hive version in the profile to match your hive cluster version (you only need to modify the hive version in this profile). This pom.xml is located at packaging/hudi-flink-bundle/pom.xml, and the corresponding profiles are at the bottom of the file.
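
As noted above for Hive 1.x, the synced table can be queried from Spark instead of Hive. A minimal sketch, assuming a Spark session with the Hudi bundle and Hive support available (the database and table names are placeholders):

  // spark-shell: query the Hive-synced table from Spark; replace my_db.t1 with your names
  spark.sql("select * from my_db.t1").show(false)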

Hive Environment

  1. Import hudi-hadoop-mr-bundle into Hive: create an auxlib/ folder under the root directory of Hive and move hudi-hadoop-mr-bundle-0.x.x-SNAPSHOT.jar into auxlib. The jar is located under packaging/hudi-hadoop-mr-bundle/target.

  2. When the Flink SQL client connects to the Hive metastore remotely, the hive metastore and hiveserver2 services need to be enabled, and the port numbers need to be set correctly. Commands to start the services:

  # Enable hive metastore and hiveserver2
  nohup ./bin/hive --service metastore &
  nohup ./bin/hive --service hiveserver2 &
  # After modifying the jars under auxlib, you need to restart the services.

Sync Template

Flink hive sync now supports two sync modes, hms and jdbc. The hms mode only requires the metastore URIs to be configured. For the jdbc mode, both the JDBC attributes and the metastore URIs need to be configured. The options templates are as below:

  -- hms mode template
  CREATE TABLE t1(
    uuid VARCHAR(20),
    name VARCHAR(10),
    age INT,
    ts TIMESTAMP(3),
    `partition` VARCHAR(20)
  )
  PARTITIONED BY (`partition`)
  WITH (
    'connector' = 'hudi',
    'path' = '${db_path}/t1',
    'table.type' = 'COPY_ON_WRITE',  -- If MERGE_ON_READ, hive query will not have output until the parquet file is generated
    'hive_sync.enable' = 'true',     -- Required. To enable hive synchronization
    'hive_sync.mode' = 'hms',        -- Required. Setting hive sync mode to hms, default hms. (Before 0.13, the default sync mode was jdbc.)
    'hive_sync.metastore.uris' = 'thrift://${ip}:9083'  -- Required. The port needs to be set in hive-site.xml
  );

  -- jdbc mode template
  CREATE TABLE t1(
    uuid VARCHAR(20),
    name VARCHAR(10),
    age INT,
    ts TIMESTAMP(3),
    `partition` VARCHAR(20)
  )
  PARTITIONED BY (`partition`)
  WITH (
    'connector' = 'hudi',
    'path' = '${db_path}/t1',
    'table.type' = 'COPY_ON_WRITE',  -- If MERGE_ON_READ, hive query will not have output until the parquet file is generated
    'hive_sync.enable' = 'true',     -- Required. To enable hive synchronization
    'hive_sync.mode' = 'jdbc',       -- Required. Setting hive sync mode to jdbc, default hms. (Before 0.13, the default sync mode was jdbc.)
    'hive_sync.metastore.uris' = 'thrift://${ip}:9083',  -- Required. The port needs to be set in hive-site.xml
    'hive_sync.jdbc_url' = 'jdbc:hive2://${ip}:10000',   -- required, hiveServer port
    'hive_sync.table' = '${table_name}',                 -- required, hive table name
    'hive_sync.db' = '${db_name}',                       -- required, hive database name
    'hive_sync.username' = '${user_name}',               -- required, JDBC username
    'hive_sync.password' = '${password}'                 -- required, JDBC password
  );

Query

When querying with Hive Beeline, you need to enter the following setting:

  set hive.input.format = org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat;