### Local setup

Once Hudi has been built, the shell can be fired up via `cd hudi-cli && ./hudi-cli.sh`.
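For example, a typical build-and-launch sequence from the repository root might look like the sketch below (the exact Maven profiles and flags depend on your Spark and Scala versions):

```shell
# build Hudi from the repository root (adjust profiles/flags for your Spark and Scala versions)
mvn clean package -DskipTests

# launch the CLI shell
cd hudi-cli && ./hudi-cli.sh
```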

### Hudi CLI Bundle setup

Release 0.13.0 adds another way of launching the Hudi CLI, using the hudi-cli-bundle. (Note: this is only supported for Spark 3; for Spark 2, please see the Local setup section above.)

This approach has a few requirements, such as having Spark installed locally on your machine. You must use a Spark distribution with the Hadoop dependencies packaged, such as spark-3.3.1-bin-hadoop2.tgz from https://archive.apache.org/dist/spark/. We also recommend setting the env variable $SPARK_HOME to the path where Spark is installed on your machine. One important thing to note is that the hudi-spark-bundle should also be present when using the hudi-cli-bundle.
To provide the locations of these bundle jars, you can set them in your shell like so: `export CLI_BUNDLE_JAR=<path-to-cli-bundle-jar-to-use>`, `export SPARK_BUNDLE_JAR=<path-to-spark-bundle-jar-to-use>`.

If you are not compiling the project but downloading the released jars instead, follow the steps below:

1. Create an empty folder as a new directory.
2. Copy the hudi-cli-bundle jars and hudi-spark*-bundle jars to this directory.
3. Copy the following script and folder to this directory:
   - packaging/hudi-cli-bundle/hudi-cli-with-bundle.sh
   - packaging/hudi-cli-bundle/conf (the `conf` folder should be in this directory)
4. Start the Hudi CLI shell with the environment variables set:

```shell
export SPARK_HOME=<path-to-spark-home>
export CLI_BUNDLE_JAR=<path-to-cli-bundle-jar>
export SPARK_BUNDLE_JAR=<path-to-spark-bundle-jar>

./hudi-cli-with-bundle.sh
```
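For instance, with a locally extracted Spark distribution and the downloaded bundle jars in the current directory, the launch could look like the sketch below (the paths and jar file names are hypothetical examples; substitute the ones on your machine):

```shell
# hypothetical example paths -- adjust to where Spark and the bundle jars live on your machine
export SPARK_HOME=/opt/spark-3.3.1-bin-hadoop2
export CLI_BUNDLE_JAR=$(pwd)/hudi-cli-bundle_2.12-0.13.0.jar
export SPARK_BUNDLE_JAR=$(pwd)/hudi-spark-bundle_2.12-0.13.0.jar

./hudi-cli-with-bundle.sh
```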

### Base path

A Hudi table resides on DFS, in a location referred to as the `basePath`, and we need this location in order to connect to a Hudi table. The Hudi library effectively manages this table internally, using the `.hoodie` subfolder to track all metadata.
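As a rough illustration, the metadata tracked under `.hoodie` could look like the hypothetical listing below (the exact files vary with table state and Hudi version):

```shell
# hypothetical listing of a table's metadata folder
$ hdfs dfs -ls /user/hive/warehouse/hoodie_table_1/.hoodie
.../hoodie.properties            # table-level configs, e.g. hoodie.table.name, hoodie.table.type
.../20220128160245447.commit     # metadata of a completed commit
.../20220128161501214.inflight   # a commit currently in progress
```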
### Using Hudi-cli in S3

If you are using Hudi that comes packaged with AWS EMR, you can find instructions to use hudi-cli [here](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hudi-cli.html).
If you are not using EMR, or would like to use the latest hudi-cli from master, you can follow the steps below to access an S3 dataset in your local environment (laptop).

1. Build Hudi with the corresponding Spark version, e.g. `-Dspark3.1.x`.
2. Set the following environment variables:

```shell
export AWS_REGION=us-east-2
export AWS_ACCESS_KEY_ID=<access_key_id>
export AWS_SECRET_ACCESS_KEY=<secret_access_key>
export SPARK_HOME=<path_to_spark_home>
```

3. Ensure you set SPARK_HOME to a local Spark installation compatible with the Spark version Hudi was compiled against above.
4. Apart from these, we need to add the AWS jars to the class path so that accessing S3 is feasible from local. We need two jars, namely the aws-java-sdk-bundle jar and the hadoop-aws jar, which you can find online. For example:

```shell
wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.2.0/hadoop-aws-3.2.0.jar -O /lib/spark-3.2.0-bin-hadoop3.2/jars/hadoop-aws-3.2.0.jar
wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.375/aws-java-sdk-bundle-1.11.375.jar -O /lib/spark-3.2.0-bin-hadoop3.2/jars/aws-java-sdk-bundle-1.11.375.jar
```

5. Set CLIENT_JAR to point to these jars. Note: these AWS jar versions below are specific to Spark 3.2.0.

```shell
export CLIENT_JAR=/lib/spark-3.2.0-bin-hadoop3.2/jars/aws-java-sdk-bundle-1.12.48.jar:/lib/spark-3.2.0-bin-hadoop3.2/jars/hadoop-aws-3.3.1.jar
```

6. Once these are set, you are good to launch hudi-cli and access the S3 dataset:

```shell
./hudi-cli/hudi-cli.sh
```
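Once the shell is up, you should be able to connect to a table on S3 by passing its base path with the `s3a://` scheme, for example (the bucket and path below are hypothetical):

```shell
# hypothetical S3 base path
hudi->connect --path s3a://my-bucket/path/to/hudi_table
```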

### Using hudi-cli on Google Dataproc

[Dataproc](https://cloud.google.com/dataproc) is Google's managed service for running Apache Hadoop, Apache Spark, Apache Flink, Presto and many other frameworks, including Hudi. If you want to run the Hudi CLI on a Dataproc node which has not been launched with Hudi support enabled, you can use the steps below.

These steps use Hudi version 0.13.0. If you want to use a different version, you will have to edit the commands below appropriately.

1. Once you've started the Dataproc cluster, you can ssh into it as follows:

```shell
$ gcloud compute ssh --zone "YOUR_ZONE" "HOSTNAME_OF_MASTER_NODE" --project "YOUR_PROJECT"
```

2. Download the Hudi CLI bundle:

```shell
wget https://repo1.maven.org/maven2/org/apache/hudi/hudi-cli-bundle_2.12/0.13.0/hudi-cli-bundle_2.12-0.13.0.jar
```

3. Download the Hudi Spark bundle:

```shell
wget https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark-bundle_2.12/0.13.0/hudi-spark-bundle_2.12-0.13.0.jar
```

4. Download the shell script that launches the Hudi CLI bundle:

```shell
wget https://raw.githubusercontent.com/apache/hudi/release-0.13.0/packaging/hudi-cli-bundle/hudi-cli-with-bundle.sh
```

5. Launch the Hudi CLI bundle with the appropriate environment variables as follows:

```shell
CLIENT_JAR=$DATAPROC_DIR/lib/gcs-connector.jar CLI_BUNDLE_JAR=hudi-cli-bundle_2.12-0.13.0.jar SPARK_BUNDLE_JAR=hudi-spark-bundle_2.12-0.13.0.jar ./hudi-cli-with-bundle.sh
```

6. Connect to a table:

```shell
hudi->connect --path gs://path_to_some_table
Metadata for table some_table loaded
```

7. Show the recent commits:

```shell
hudi:some_table->commits show --limit 5
```

This command should show the recent commits, if the above steps work correctly.
### Connect to a Kerberized cluster

Before connecting to a Kerberized cluster, you can use the **kerberos kinit** command. Following is the usage of this command.

```shell
hudi->help kerberos kinit
NAME
       kerberos kinit - Perform Kerberos authentication

SYNOPSIS
       kerberos kinit --krb5conf String [--principal String] [--keytab String]

OPTIONS
       --krb5conf String
       Path to krb5.conf
       [Optional, default = /etc/krb5.conf]

       --principal String
       Kerberos principal
       [Mandatory]

       --keytab String
       Path to keytab
       [Mandatory]
```

For example:

```shell
hudi->kerberos kinit --principal user/host@DOMAIN --keytab /etc/security/keytabs/user.keytab
Perform Kerberos authentication
Parameters:
--krb5conf: /etc/krb5.conf
--principal: user/host@DOMAIN
--keytab: /etc/security/keytabs/user.keytab
Kerberos current user: user/host@DOMAIN (auth:KERBEROS)
Kerberos login user: user/host@DOMAIN (auth:KERBEROS)
Kerberos authentication success
```

If you see "Kerberos authentication success" in the command output, it means Kerberos authentication has been successful.

### Using hudi-cli

To initialize a Hudi table, use the following command.

```shell
===================================================================
*            ...  (Apache Hudi CLI ASCII-art banner)  ...         *
===================================================================
hudi->create --path /user/hive/warehouse/table1 --tableName hoodie_table_1 --tableType COPY_ON_WRITE
.....
```

To see the description of a Hudi table, use the command:

```shell
hudi:hoodie_table_1->desc
18/09/06 15:57:19 INFO timeline.HoodieActiveTimeline: Loaded instants []
    _________________________________________________________
    | Property                 | Value                       |
    |========================================================|
    | basePath                 | ...                         |
    | metaPath                 | ...                         |
    | fileSystem               | hdfs                        |
    | hoodie.table.name        | hoodie_table_1              |
    | hoodie.table.type        | COPY_ON_WRITE               |
    | hoodie.archivelog.folder |                             |
```

Following is a sample command to connect to a Hudi table containing uber trips.

```shell
hudi:trips->connect --path /app/uber/trips
16/10/05 23:20:37 INFO model.HoodieTableMetadata: All commits :HoodieCommits{commitList=[20161002045850, 20161002052915, 20161002055918, 20161002065317, 20161002075932, 20161002082904, 20161002085949, 20161002092936, 20161002105903, 20161002112938, 20161002123005, 20161002133002, 20161002155940, 20161002165924, 20161002172907, 20161002175905, 20161002190016, 20161002192954, 20161002195925, 20161002205935, 20161002215928, 20161002222938, 20161002225915, 20161002232906, 20161003003028, 20161003005958, 20161003012936, 20161003022924, 20161003025859, 20161003032854, 20161003042930, 20161003052911, 20161003055907, 20161003062946, 20161003065927, 20161003075924, 20161003082926, 20161003085925, 20161003092909, 20161003100010, 20161003102913, 20161003105850, 20161003112910, 20161003115851, 20161003122929, 20161003132931, 20161003142952, 20161003145856, 20161003152953, 20161003155912, 20161003162922, 20161003165852, 20161003172923, 20161003175923, 20161003195931, 20161003210118, 20161003212919, 20161003215928, 20161003223000, 20161003225858, 20161004003042, 20161004011345, 20161004015235, 20161004022234, 20161004063001, 20161004072402, 20161004074436, 20161004080224, 20161004082928, 20161004085857, 20161004105922, 20161004122927, 20161004142929, 20161004163026, 20161004175925, 20161004194411, 20161004203202, 20161004211210, 20161004214115, 20161004220437, 20161004223020, 20161004225321, 20161004231431, 20161004233643, 20161005010227, 20161005015927, 20161005022911, 20161005032958, 20161005035939, 20161005052904, 20161005070028, 20161005074429, 20161005081318, 20161005083455, 20161005085921, 20161005092901, 20161005095936, 20161005120158, 20161005123418, 20161005125911, 20161005133107, 20161005155908, 20161005163517, 20161005165855, 20161005180127, 20161005184226, 20161005191051, 20161005193234, 20161005203112, 20161005205920, 20161005212949, 20161005223034, 20161005225920]}
Metadata for table trips loaded
```

Once connected to the table, a lot of other commands become available. The shell has contextual autocomplete help (press TAB), and below is a list of all commands, a few of which are reviewed in this section.

```shell
hudi:trips->help
* ! - Allows execution of operating system (OS) commands
* // - Inline comment markers (start of line only)
* ; - Inline comment markers (start of line only)
* bootstrap index showmapping - Show bootstrap index mapping
* bootstrap index showpartitions - Show bootstrap indexed partitions
* bootstrap run - Run a bootstrap action for current Hudi table
* clean showpartitions - Show partition level details of a clean
* cleans refresh - Refresh table metadata
* cleans run - run clean
* cleans show - Show the cleans
* clear - Clears the console
* cls - Clears the console
* clustering run - Run Clustering
* clustering schedule - Schedule Clustering
* clustering scheduleAndExecute - Run Clustering. Make a cluster plan first and execute that plan immediately
* commit rollback - Rollback a commit
* commits compare - Compare commits with another Hoodie table
* commit show_write_stats - Show write stats of a commit
* commit showfiles - Show file level details of a commit
* commit showpartitions - Show partition level details of a commit
* commits refresh - Refresh table metadata
* commits show - Show the commits
* commits showarchived - Show the archived commits
* commits sync - Sync commits with another Hoodie table
* compaction repair - Renames the files to make them consistent with the timeline as dictated by Hoodie metadata. Use when compaction unschedule fails partially.
* compaction run - Run Compaction for given instant time
* compaction schedule - Schedule Compaction
* compaction scheduleAndExecute - Schedule compaction plan and execute this plan
* compaction show - Shows compaction details for a specific compaction instant
* compaction showarchived - Shows compaction details for a specific compaction instant
* compactions show all - Shows all compactions that are in active timeline
* compactions showarchived - Shows compaction details for specified time window
* compaction unschedule - Unschedule Compaction
* compaction unscheduleFileId - UnSchedule Compaction for a fileId
* compaction validate - Validate Compaction
* connect - Connect to a hoodie table
* create - Create a hoodie table if not present
* date - Displays the local date and time
* desc - Describe Hoodie Table properties
* downgrade table - Downgrades a table
* exit - Exits the shell
* export instants - Export Instants and their metadata from the Timeline
* fetch table schema - Fetches latest table schema
* hdfsparquetimport - Imports Parquet table to a hoodie table
* help - List all commands usage
* marker delete - Delete the marker
* metadata create - Create the Metadata Table if it does not exist
* metadata delete - Remove the Metadata Table
* metadata init - Update the metadata table from commits since the creation
* metadata list-files - Print a list of all files in a partition from the metadata
* metadata list-partitions - List all partitions from metadata
* metadata refresh - Refresh table metadata
* metadata set - Set options for Metadata Table
* metadata stats - Print stats about the metadata
* metadata validate-files - Validate all files in all partitions from the metadata
* quit - Exits the shell
* refresh - Refresh table metadata
* repair addpartitionmeta - Add partition metadata to a table, if not present
* repair corrupted clean files - repair corrupted clean files
* repair deduplicate - De-duplicate a partition path contains duplicates & produce repaired files to replace with
* repair migrate-partition-meta - Migrate all partition meta file currently stored in text format to be stored in base file format. See HoodieTableConfig#PARTITION_METAFILE_USE_DATA_FORMAT.
* repair overwrite-hoodie-props - Overwrite hoodie.properties with provided file. Risky operation. Proceed with caution!
* savepoint create - Savepoint a commit
* savepoint delete - Delete the savepoint
* savepoint rollback - Savepoint a commit
* savepoints refresh - Refresh table metadata
* savepoints show - Show the savepoints
* script - Parses the specified resource file and executes its commands
* set - Set spark launcher env to cli
* show archived commits - Read commits from archived files and show details
* show archived commit stats - Read commits from archived files and show details
* show env - Show spark launcher env by key
* show envs all - Show spark launcher envs
* show fsview all - Show entire file-system view
* show fsview latest - Show latest file-system view
* show logfile metadata - Read commit metadata from log files
* show logfile records - Read records from log files
* show rollback - Show details of a rollback instant
* show rollbacks - List all rollback instants
* stats filesizes - File Sizes. Display summary stats on sizes of files
* stats wa - Write Amplification. Ratio of how many records were upserted to how many records were actually written
* sync validate - Validate the sync by counting the number of records
* system properties - Shows the shell's properties
* table delete-configs - Delete the supplied table configs from the table.
* table recover-configs - Recover table configs, from update/delete that failed midway.
* table update-configs - Update the table configs with configs with provided file.
* temp_delete - Delete view name
* temp_query - query against created temp view
* temp delete - Delete view name
* temp query - query against created temp view
* temps_show - Show all views name
* temps show - Show all views name
* upgrade table - Upgrades a table
* utils loadClass - Load a class
* version - Displays shell version

hudi:trips->
```

### Inspecting Commits

The task of upserting or inserting a batch of incoming records is known as a commit in Hudi. A commit provides basic atomicity guarantees such that only committed data is available for querying. Each commit has a monotonically increasing string/number called the commit number. Typically, this is the time at which we started the commit.

To view some basic information about the last 10 commits,

```shell
hudi:trips->commits show --sortBy "Total Bytes Written" --desc true --limit 10
    ________________________________________________________________________________________________________________________________________________________________________
    | CommitTime    | Total Bytes Written| Total Files Added| Total Files Updated| Total Partitions Written| Total Records Written| Total Update Records Written| Total Errors|
    |=======================================================================================================================================================================|
    ....
    ....
    ....
```

At the start of each write, Hudi also writes a `.inflight` commit to the `.hoodie` folder. You can use the timestamp there to estimate how long the commit has been inflight.

```shell
$ hdfs dfs -ls /app/uber/trips/.hoodie/*.inflight
-rw-r--r--   3 vinoth supergroup     321984 2016-10-05 23:18 /app/uber/trips/.hoodie/20161005225920.inflight
```

### Drilling Down to a specific Commit

To understand how the writes spread across specific partitions,

```shell
hudi:trips->commit showpartitions --commit 20161005165855 --sortBy "Total Bytes Written" --desc true --limit 10
    __________________________________________________________________________________________________________________________________________
    | Partition Path| Total Files Added| Total Files Updated| Total Records Inserted| Total Records Updated| Total Bytes Written| Total Errors|
    |=========================================================================================================================================|
    ....
    ....
```

If you need file-level granularity, we can do the following:

```shell
hudi:trips->commit showfiles --commit 20161005165855 --sortBy "Partition Path"
    ________________________________________________________________________________________________________________________________________________________
    | Partition Path| File ID                             | Previous Commit| Total Records Updated| Total Records Written| Total Bytes Written| Total Errors|
    |=======================================================================================================================================================|
    ....
    ....
```

### FileSystem View

Hudi views each partition as a collection of file-groups, with each file-group containing a list of file-slices in commit order (see Concepts). The commands below allow users to view the file-slices for a dataset.

```shell
hudi:stock_ticks_mor->show fsview all
 ....
  ______________________________________________________________________________________________________________________________________________________________
 | Partition | FileId | Base-Instant | Data-File | Data-File Size| Num Delta Files| Total Delta File Size| Delta Files |
 |=============================================================================================================================================================|
 | 2018/08/31| 111415c3-f26d-4639-86c8-f9956f245ac3| 20181002180759| hdfs://namenode:8020/user/hive/warehouse/stock_ticks_mor/2018/08/31/111415c3-f26d-4639-86c8-f9956f245ac3_0_20181002180759.parquet| 432.5 KB | 1 | 20.8 KB | [HoodieLogFile {hdfs://namenode:8020/user/hive/warehouse/stock_ticks_mor/2018/08/31/.111415c3-f26d-4639-86c8-f9956f245ac3_20181002180759.log.1}]|

hudi:stock_ticks_mor->show fsview latest --partitionPath "2018/08/31"
 ......
  ______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
 | Partition | FileId | Base-Instant | Data-File | Data-File Size| Num Delta Files| Total Delta Size| Delta Size - compaction scheduled| Delta Size - compaction unscheduled| Delta To Base Ratio - compaction scheduled| Delta To Base Ratio - compaction unscheduled| Delta Files - compaction scheduled | Delta Files - compaction unscheduled|
 |=============================================================================================================================================================================================================================================================================================================|
 | 2018/08/31| 111415c3-f26d-4639-86c8-f9956f245ac3| 20181002180759| hdfs://namenode:8020/user/hive/warehouse/stock_ticks_mor/2018/08/31/111415c3-f26d-4639-86c8-f9956f245ac3_0_20181002180759.parquet| 432.5 KB | 1 | 20.8 KB | 20.8 KB | 0.0 B | 0.0 B | 0.0 B | [HoodieLogFile {hdfs://namenode:8020/user/hive/warehouse/stock_ticks_mor/2018/08/31/.111415c3-f26d-4639-86c8-f9956f245ac3_20181002180759.log.1}]| [] |
```

### Statistics

Since Hudi directly manages file sizes for DFS tables, it might be good to get an overall picture:

```shell
hudi:trips->stats filesizes --partitionPath 2016/09/01 --sortBy "95th" --desc true --limit 10
    ________________________________________________________________________________________________
    | CommitTime  | Min     | 10th    | 50th    | avg     | 95th    | Max     | NumFiles| StdDev |
    |===============================================================================================|
    | <COMMIT_ID> | 93.9 MB | 93.9 MB | 93.9 MB | 93.9 MB | 93.9 MB | 93.9 MB | 2       | 2.3 KB |
    ....
    ....
```

In case a Hudi write is taking much longer, it might be good to look at the write amplification for any sudden increases:

```shell
hudi:trips->stats wa
    __________________________________________________________________________
    | CommitTime | Total Upserted| Total Written| Write Amplifiation Factor|
    |=========================================================================|
    ....
    ....
```

### Archived Commits

In order to limit the amount of growth of `.commit` files on DFS, Hudi archives older `.commit` files (with due respect to the cleaner policy) into a `commits.archived` file. This is a sequence file that contains a mapping from `commitNumber => json`, with raw information about the commit (the same information that is nicely rolled up above).
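The archived commits can still be inspected from the shell. A minimal sketch, assuming a connected table (output elided):

```shell
# inspect archived commits (see `commits showarchived` / `show archived commit stats` in the help listing above)
hudi:trips->commits showarchived
...
hudi:trips->show archived commit stats
...
```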

### Compactions

To get an idea of the lag between compaction and writer applications, use the command below to list all pending compactions.

```shell
hudi:trips->compactions show all
     ___________________________________________________________________
    | Compaction Instant Time| State    | Total FileIds to be Compacted|
    |==================================================================|
    | <INSTANT_1>            | REQUESTED| 35                           |
    | <INSTANT_2>            | INFLIGHT | 27                           |
```

To inspect a specific compaction plan, use

```shell
hudi:trips->compaction show --instant <INSTANT_1>
    _________________________________________________________________________________________________________________________________________________________________________
    | Partition Path| File Id | Base Instant | Data File Path                                    | Total Delta Files| getMetrics                                                                                                              |
    |========================================================================================================================================================================|
    | 2018/07/17    | <UUID>  | <INSTANT_1>  | viewfs://ns-default/.../../UUID_<INSTANT>.parquet | 1                | {TOTAL_LOG_FILES=1.0, TOTAL_IO_READ_MB=1230.0, TOTAL_LOG_FILES_SIZE=2.51255751E8, TOTAL_IO_WRITE_MB=991.0, TOTAL_IO_MB=2221.0}|
```

To manually schedule or run a compaction, use the commands below. These commands use the Spark launcher to perform compaction operations.

NOTE: Make sure no other application is scheduling compaction for this table concurrently {: .notice--info}

```shell
hudi:trips->help compaction schedule
Keyword:                    compaction schedule
Description:                Schedule Compaction
  Keyword:                  sparkMemory
    Help:                   Spark executor memory
    Mandatory:              false
    Default if specified:   '__NULL__'
    Default if unspecified: '1G'

* compaction schedule - Schedule Compaction
```

```shell
hudi:trips->help compaction run
Keyword:                    compaction run
Description:                Run Compaction for given instant time
  Keyword:                  tableName
    Help:                   Table name
    Mandatory:              true
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'
  Keyword:                  parallelism
    Help:                   Parallelism for hoodie compaction
    Mandatory:              true
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'
  Keyword:                  schemaFilePath
    Help:                   Path for Avro schema file
    Mandatory:              true
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'
  Keyword:                  sparkMemory
    Help:                   Spark executor memory
    Mandatory:              true
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'
  Keyword:                  retry
    Help:                   Number of retries
    Mandatory:              true
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'
  Keyword:                  compactionInstant
    Help:                   Base path for the target hoodie table
    Mandatory:              true
    Default if specified:   '__NULL__'
    Default if unspecified: '__NULL__'

* compaction run - Run Compaction for given instant time
```

### Validate Compaction

Validating a compaction plan checks whether all the files necessary for the compaction are present and valid:

```shell
hudi:stock_ticks_mor->compaction validate --instant 20181005222611
...
   COMPACTION PLAN VALID
    ___________________________________________________________________________________________________________________________________________________________________________________________________________________________
    | File Id                             | Base Instant Time| Base Data File                                                                                                                     | Num Delta Files| Valid| Error|
    |==========================================================================================================================================================================================================================|
    | 05320e98-9a57-4c38-b809-a6beaaeb36bd| 20181005222445   | hdfs://namenode:8020/user/hive/warehouse/stock_ticks_mor/2018/08/31/05320e98-9a57-4c38-b809-a6beaaeb36bd_0_20181005222445.parquet | 1              | true |      |

hudi:stock_ticks_mor->compaction validate --instant 20181005222601
   COMPACTION PLAN INVALID
    _______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
    | File Id                             | Base Instant Time| Base Data File                                                                                                                     | Num Delta Files| Valid| Error                                                                           |
    |=====================================================================================================================================================================================================================================================================================|
    | 05320e98-9a57-4c38-b809-a6beaaeb36bd| 20181005222445   | hdfs://namenode:8020/user/hive/warehouse/stock_ticks_mor/2018/08/31/05320e98-9a57-4c38-b809-a6beaaeb36bd_0_20181005222445.parquet | 1              | false| All log files specified in compaction operation is not present. Missing ....   |
```

NOTE: The following commands must be executed without any other writer/ingestion application running. {: .notice--warning}

Sometimes, it becomes necessary to remove a fileId from a compaction plan in order to speed up or unblock a compaction operation. Any new log files written to this file after the compaction was scheduled are safely renamed so that they are preserved. Hudi provides the following CLI to support this.

### Unscheduling Compaction

```shell
hudi:trips->compaction unscheduleFileId --fileId <FileUUID>
....
No File renames needed to unschedule file from pending compaction. Operation successful.
```

In other cases, an entire compaction plan needs to be reverted. This is supported by the following CLI

```shell
hudi:trips->compaction unschedule --instant <compactionInstant>
.....
No File renames needed to unschedule pending compaction. Operation successful.
```

### Repair Compaction

The above compaction unscheduling operations could sometimes fail partially (e.g. DFS temporarily unavailable). With partial failures, the compaction operation could become inconsistent with the state of the file-slices. When you run `compaction validate`, you can notice invalid compaction operations if there are any. In these cases, the repair command comes to the rescue: it will rearrange the file-slices so that nothing is lost and the file-slices are consistent with the compaction plan.

```shell
hudi:stock_ticks_mor->compaction repair --instant 20181005222611
......
Compaction successfully repaired
.....
```

### Savepoint and Restore

As the name suggests, a "savepoint" saves the table as of the commit time, so that it lets you restore the table to this savepoint at a later point in time if need be. You can read more about savepoints and restore here.

To trigger a savepoint for a Hudi table:

```shell
connect --path /tmp/hudi_trips_cow/
commits show
set --conf SPARK_HOME=<SPARK_HOME>
savepoint create --commit 20220128160245447 --sparkMaster local[2]
```

To restore the table to one of the savepointed commits:

```shell
connect --path /tmp/hudi_trips_cow/
commits show
set --conf SPARK_HOME=<SPARK_HOME>
savepoints show
╔═══════════════════╗
║ SavepointTime     ║
╠═══════════════════╣
║ 20220128160245447 ║
╚═══════════════════╝
savepoint rollback --savepoint 20220128160245447 --sparkMaster local[2]
```

### Upgrade and Downgrade Table

In case the user needs to downgrade the version of the Hudi library used, the Hudi table needs to be manually downgraded on the newer version of the Hudi CLI before the library downgrade. To downgrade a Hudi table through the CLI, the user needs to specify the target Hudi table version as follows:

```shell
connect --path <table_path>
downgrade table --toVersion <target_version>
```

The following table shows the Hudi table versions corresponding to the Hudi release versions:

| Hudi Table Version | Hudi Release Version(s) |
|--------------------|-------------------------|
| `FIVE` or `5`      | 0.12.x                  |
| `FOUR` or `4`      | 0.11.x                  |
| `THREE` or `3`     | 0.10.x                  |
| `TWO` or `2`       | 0.9.x                   |
| `ONE` or `1`       | 0.6.x - 0.8.x           |
| `ZERO` or `0`      | 0.5.x and below         |

For example, to downgrade a table from version FIVE(5) (the current version) to TWO(2), you should run (use the proper Spark master based on your environment):

```shell
downgrade table --toVersion TWO --sparkMaster local[2]
```

or

```shell
downgrade table --toVersion 2 --sparkMaster local[2]
```

You can verify the table version by looking at the `hoodie.table.version` property in `.hoodie/hoodie.properties` under the table path:

```shell
hoodie.table.version=2
```
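For a table on the local filesystem, a quick way to check this property is sketched below (for HDFS or object stores, use the corresponding filesystem CLI instead, e.g. `hdfs dfs -cat`):

```shell
# print the current table version from the table's properties file
grep hoodie.table.version <table_path>/.hoodie/hoodie.properties
```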

Hudi CLI also provides the ability to manually upgrade a Hudi table. To upgrade a Hudi table through the CLI:

```shell
upgrade table --toVersion <target_version>
```

:::note
Table upgrade is automatically handled by the Hudi write client in different deployment modes, such as DeltaStreamer, after upgrading the Hudi library, so that the user does not have to do a manual upgrade. Such automatic table upgrade is the recommended way in general, instead of using the `upgrade` CLI command.

Table upgrade from table version ONE to TWO requires key generator related configs such as `hoodie.datasource.write.recordkey.field`, which are only available when the user configures the write job. So the table upgrade from version ONE to TWO through the CLI is not supported, and the user should rely on the automatic upgrade in the write client instead.
:::

You may also run the upgrade command without specifying the target version. In such a case, the latest table version corresponding to the library release version is used:

```shell
upgrade table
```

### Change Hudi Table Type

There are cases where we want to change the Hudi table type. For example, change a COW table to MOR for more efficient and lower-latency ingestion, or change MOR to COW for better read performance and compatibility with downstream engines. So we offer the `table change-table-type` command to perform this modification conveniently.

When changing COW to MOR, we can simply modify `hoodie.table.type` in `hoodie.properties` to `MERGE_ON_READ`.

When changing MOR to COW, we must make sure all the log files are compacted before modifying the table type, or it will cause data loss.

```shell
connect --path <table_path>
table change-table-type <target_table_type>
```

The candidates for the `target_table_type` parameter are below:

| target table type | comment |
|-------------------|---------|
| MOR | Change a COW table to MERGE_ON_READ. |
| COW | Change a MOR table to COPY_ON_WRITE. By default, changing to COW will execute all pending compactions and perform a full compaction if any log file is left. Set `--enable-compaction=false` to disable the default compaction. |

There are parameters that can be set for the compaction operation:

| parameter | description |
|-----------|-------------|
| `--parallelism` | Default `3`. Parallelism for hoodie compaction |
| `--sparkMaster` | Default `local`. Spark Master |
| `--sparkMemory` | Default `4G`. Spark executor memory |
| `--retry` | Default `1`. Number of retries |
| `--propsFilePath` | Default empty. Path to a properties file on localfs or dfs with configurations for the hoodie client for compacting |
| `--hoodieConfigs` | Default empty. Any configuration that can be set in the properties file can be passed here in the form of an array |

The example below changes a MOR table to COW:

```shell
connect --path /var/dataset/test_table_mor2cow
desc
╔════════════════════════════════════════════════╤═════════════════════════════════════════╗
║ Property                                        │ Value                                   ║
╠════════════════════════════════════════════════╪═════════════════════════════════════════╣
║ basePath                                        │ /var/dataset/test_table_mor2cow         ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ metaPath                                        │ /var/dataset/test_table_mor2cow/.hoodie ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ fileSystem                                      │ file                                    ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.table.name                               │ test_table                              ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.compaction.record.merger.strategy        │ eeb8d96f-b1e4-49fd-bbf8-28ac514178e5    ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.table.metadata.partitions                │ files                                   ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.table.type                               │ MERGE_ON_READ                           ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.table.metadata.partitions.inflight       │                                         ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.archivelog.folder                        │ archived                                ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.timeline.layout.version                  │ 1                                       ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.table.checksum                           │ 2702201862                              ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.compaction.payload.type                  │ HOODIE_AVRO                             ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.table.version                            │ 6                                       ║
╟────────────────────────────────────────────────┼─────────────────────────────────────────╢
║ hoodie.datasource.write.drop.partition.columns  │ false                                   ║
╚════════════════════════════════════════════════╧═════════════════════════════════════════╝

table change-table-type COW

╔════════════════════════════════════════════════╤══════════════════════════════════════╤══════════════════════════════════════╗
║ Property                                        │ Old Value                            │ New Value                            ║
╠════════════════════════════════════════════════╪══════════════════════════════════════╪══════════════════════════════════════╣
║ hoodie.archivelog.folder                        │ archived                             │ archived                             ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.compaction.payload.type                  │ HOODIE_AVRO                          │ HOODIE_AVRO                          ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.compaction.record.merger.strategy        │ eeb8d96f-b1e4-49fd-bbf8-28ac514178e5 │ eeb8d96f-b1e4-49fd-bbf8-28ac514178e5 ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.datasource.write.drop.partition.columns  │ false                                │ false                                ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.table.checksum                           │ 2702201862                           │ 2702201862                           ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.table.metadata.partitions                │ files                                │ files                                ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.table.metadata.partitions.inflight       │                                      │                                      ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.table.name                               │ test_table                           │ test_table                           ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.table.type                               │ MERGE_ON_READ                        │ COPY_ON_WRITE                        ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.table.version                            │ 6                                    │ 6                                    ║
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.timeline.layout.version                  │ 1                                    │ 1                                    ║
╚════════════════════════════════════════════════╧══════════════════════════════════════╧══════════════════════════════════════╝
```