Creating your first interoperable table

Using Apache XTable™ (Incubating) to sync your source tables to different target formats involves running a sync on your current dataset with a bundled jar. You can create this bundled jar by following the instructions on the Installation page. Read through Apache XTable™’s GitHub page for more information.

In this tutorial, we will look at how to use Apache XTable™ (Incubating) to add interoperability between table formats. For example, you can expose a table ingested with Hudi as an Iceberg and/or Delta Lake table without copying or moving the underlying data files, while maintaining a similar commit history to enable proper point-in-time queries.

Pre-requisites

  1. A compute instance where you can run Apache Spark. This can be your local machine, a Docker container, or a distributed service such as Amazon EMR, Google Cloud Dataproc, or Azure HDInsight.
  2. Clone the Apache XTable™ (Incubating) repository and create the xtable-utilities_2.12-0.2.0-SNAPSHOT-bundled.jar by following the steps on the Installation page
  3. Optional: Set up access to write to and/or read from distributed storage services like the following (a minimal credential-setup sketch follows this list):
    • Amazon S3: install the AWS CLI v2 and set up access credentials by following the AWS documentation
    • Google Cloud Storage: set up access credentials by following the Google Cloud documentation
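
If you opt into cloud storage access, a minimal credential-setup sketch could look like the following (this assumes the AWS CLI v2 is installed and, for GCS, that you already have a service account key file; the key path is a placeholder):

  # AWS: interactively configure access keys and a default region
  aws configure

  # GCS: point Google client libraries at a service account key file
  export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_key.json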

For the purposes of this tutorial, we will walk through the steps to use Apache XTable™ (Incubating) locally.

Steps

Initialize a pyspark shell

You can choose to follow this example with spark-sql or spark-shell as well.

Hudi

  pyspark \
    --packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.14.0 \
    --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
    --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog" \
    --conf "spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension"

Delta

  pyspark \
    --packages io.delta:delta-core_2.12:2.1.0 \
    --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
    --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"

Iceberg

  pyspark \
    --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:1.4.1 \
    --conf "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions" \
    --conf "spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog"

You may need additional configurations to write to external cloud storage locations such as Amazon S3, GCS, or ADLS when you are working with Spark locally. Refer to the respective cloud provider’s documentation for more information.
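
As one hedged sketch, the Hudi launch above could be extended to read from and write to Amazon S3 by adding the Hadoop S3A connector. The hadoop-aws version shown is an assumption and must match the Hadoop version bundled with your Spark distribution, and the credentials-provider setting may be unnecessary in your environment:

  pyspark \
    --packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.14.0,org.apache.hadoop:hadoop-aws:3.3.1 \
    --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
    --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog" \
    --conf "spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension" \
    --conf "spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain"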

Create dataset

Write a source table locally.

Hudi

  from pyspark.sql.types import *

  # table name and local base path for the source dataset
  table_name = "people"
  local_base_path = "file:/tmp/hudi-dataset"

  records = [
      (1, 'John', 25, 'NYC', '2023-09-28 00:00:00'),
      (2, 'Emily', 30, 'SFO', '2023-09-28 00:00:00'),
      (3, 'Michael', 35, 'ORD', '2023-09-28 00:00:00'),
      (4, 'Andrew', 40, 'NYC', '2023-10-28 00:00:00'),
      (5, 'Bob', 28, 'SEA', '2023-09-23 00:00:00'),
      (6, 'Charlie', 31, 'DFW', '2023-08-29 00:00:00')
  ]

  schema = StructType([
      StructField("id", IntegerType(), True),
      StructField("name", StringType(), True),
      StructField("age", IntegerType(), True),
      StructField("city", StringType(), True),
      StructField("create_ts", StringType(), True)
  ])

  df = spark.createDataFrame(records, schema)

  # write a Hudi table partitioned by city with hive-style partition paths
  hudi_options = {
      'hoodie.table.name': table_name,
      'hoodie.datasource.write.partitionpath.field': 'city',
      'hoodie.datasource.write.hive_style_partitioning': 'true'
  }

  (
      df.write
      .format("hudi")
      .options(**hudi_options)
      .save(f"{local_base_path}/{table_name}")
  )

Delta

  from pyspark.sql.types import *

  # table name and local base path for the source dataset
  table_name = "people"
  local_base_path = "file:/tmp/delta-dataset"

  records = [
      (1, 'John', 25, 'NYC', '2023-09-28 00:00:00'),
      (2, 'Emily', 30, 'SFO', '2023-09-28 00:00:00'),
      (3, 'Michael', 35, 'ORD', '2023-09-28 00:00:00'),
      (4, 'Andrew', 40, 'NYC', '2023-10-28 00:00:00'),
      (5, 'Bob', 28, 'SEA', '2023-09-23 00:00:00'),
      (6, 'Charlie', 31, 'DFW', '2023-08-29 00:00:00')
  ]

  schema = StructType([
      StructField("id", IntegerType(), True),
      StructField("name", StringType(), True),
      StructField("age", IntegerType(), True),
      StructField("city", StringType(), True),
      StructField("create_ts", StringType(), True)
  ])

  df = spark.createDataFrame(records, schema)

  # write a Delta Lake table partitioned by city
  (
      df.write
      .format("delta")
      .partitionBy("city")
      .save(f"{local_base_path}/{table_name}")
  )

Iceberg

  from pyspark.sql.types import *

  # table name and local base path for the source dataset
  table_name = "people"
  local_base_path = "file:/tmp/iceberg-dataset"

  records = [
      (1, 'John', 25, 'NYC', '2023-09-28 00:00:00'),
      (2, 'Emily', 30, 'SFO', '2023-09-28 00:00:00'),
      (3, 'Michael', 35, 'ORD', '2023-09-28 00:00:00'),
      (4, 'Andrew', 40, 'NYC', '2023-10-28 00:00:00'),
      (5, 'Bob', 28, 'SEA', '2023-09-23 00:00:00'),
      (6, 'Charlie', 31, 'DFW', '2023-08-29 00:00:00')
  ]

  schema = StructType([
      StructField("id", IntegerType(), True),
      StructField("name", StringType(), True),
      StructField("age", IntegerType(), True),
      StructField("city", StringType(), True),
      StructField("create_ts", StringType(), True)
  ])

  df = spark.createDataFrame(records, schema)

  # write an Iceberg table partitioned by city
  (
      df.write
      .format("iceberg")
      .partitionBy("city")
      .save(f"{local_base_path}/{table_name}")
  )

Running sync

Create my_config.yaml in the cloned xtable directory.

Hudi

  sourceFormat: HUDI
  targetFormats:
    - DELTA
    - ICEBERG
  datasets:
    -
      tableBasePath: file:///tmp/hudi-dataset/people
      tableName: people
      partitionSpec: city:VALUE

Delta

  sourceFormat: DELTA
  targetFormats:
    - HUDI
    - ICEBERG
  datasets:
    -
      tableBasePath: file:///tmp/delta-dataset/people
      tableName: people

Iceberg

  sourceFormat: ICEBERG
  targetFormats:
    - HUDI
    - DELTA
  datasets:
    -
      tableBasePath: file:///tmp/iceberg-dataset/people
      tableDataPath: file:///tmp/iceberg-dataset/people/data
      tableName: people

Add tableDataPath for ICEBERG sourceFormat if the tableBasePath is different from the path to the data.

Optional: If your source table exists in Amazon S3, GCS, or ADLS, you should use a YAML file similar to the one below.

Hudi

  sourceFormat: HUDI
  targetFormats:
    - DELTA
    - ICEBERG
  datasets:
    -
      tableBasePath: s3://path/to/hudi-dataset/people # replace this with gs://path/to/hudi-dataset/people if your data is in GCS.
      tableName: people
      partitionSpec: city:VALUE

Delta

  sourceFormat: DELTA
  targetFormats:
    - HUDI
    - ICEBERG
  datasets:
    -
      tableBasePath: s3://path/to/delta-dataset/people # replace this with gs://path/to/delta-dataset/people if your data is in GCS.
      tableName: people

Iceberg

  sourceFormat: ICEBERG
  targetFormats:
    - HUDI
    - DELTA
  datasets:
    -
      tableBasePath: s3://path/to/iceberg-dataset/people # replace this with gs://path/to/iceberg-dataset/people if your data is in GCS.
      tableDataPath: s3://path/to/iceberg-dataset/people/data
      tableName: people

Add tableDataPath for ICEBERG sourceFormat if the tableBasePath is different from the path to the data.

Authentication for AWS is done with com.amazonaws.auth.DefaultAWSCredentialsProviderChain. To override this setting, specify a different implementation with the --awsCredentialsProvider option.
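
For illustration, such an override might look like the sketch below; com.amazonaws.auth.profile.ProfileCredentialsProvider is shown only as an example of an alternative provider implementation, so substitute whichever class you actually use:

  java -jar xtable-utilities/target/xtable-utilities_2.12-0.2.0-SNAPSHOT-bundled.jar \
    --datasetConfig my_config.yaml \
    --awsCredentialsProvider com.amazonaws.auth.profile.ProfileCredentialsProvider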

Authentication for GCP requires service account credentials to be exported, for example:

  export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_key.json

In your terminal, under the cloned Apache XTable™ (Incubating) directory, run the command below.

  java -jar xtable-utilities/target/xtable-utilities_2.12-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml

Optional: At this point, if you check your local path, you will be able to see the necessary metadata files that contain the schema, commit history, partitions, and column stats that help query engines interpret the data in the target table formats.
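
For example, assuming the Hudi source table created earlier with DELTA and ICEBERG targets, listing the table’s base path should now show target-format metadata directories next to the Hudi metadata and the partitioned data files (the layout below is indicative, not guaranteed):

  ls -a /tmp/hudi-dataset/people
  # Typical contents after the sync:
  #   .hoodie/      Hudi metadata (source format)
  #   _delta_log/   Delta Lake metadata written by the sync
  #   metadata/     Iceberg metadata written by the sync
  #   city=NYC/ ... hive-style partition directories containing the data files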

Conclusion

In this tutorial, we saw how to create a source table and use Apache XTable™ (Incubating) to create the metadata files that can be used to query the source table in different target table formats.

Next steps

Go through the Catalog Integration guides to register the Apache XTable™ (Incubating) synced tables in different data catalogs.