Iceberg tables
To read a OneTable-synced Iceberg table from BigQuery, you have two options:
Using the Iceberg JSON metadata file to create Iceberg BigLake tables:
OneTable outputs an Iceberg metadata file for each sync to the Iceberg target format, which BigQuery can use to create BigLake tables.
```sql
CREATE EXTERNAL TABLE onetable_synced_iceberg_table
WITH CONNECTION `myproject.mylocation.myconnection`
OPTIONS (
  format = 'ICEBERG',
  uris = ["gs://mybucket/mydata/mytable/metadata/iceberg.metadata.json"]
);
```
This method requires you to manually update the table with the latest metadata file after each table update, which is why Google recommends using BigLake Metastore to create Iceberg BigLake tables. Follow the guide on Syncing to BigLake Metastore for the steps.
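Until you move to BigLake Metastore, selecting the latest metadata file can be scripted. Below is a minimal sketch; it assumes version-prefixed metadata file names such as `v2.metadata.json` (the scheme used by Hadoop-style Iceberg catalogs), so adapt the parsing if your files are named differently:

```python
def latest_metadata(blob_names):
    """Pick the newest Iceberg metadata file from a list of object names.

    Assumes names like 'v1.metadata.json', 'v2.metadata.json'; the version
    prefix is compared numerically so 'v10' sorts after 'v2'.
    """
    candidates = [n for n in blob_names if n.endswith(".metadata.json")]
    return max(
        candidates,
        key=lambda n: int(n.split("/")[-1].split(".")[0].lstrip("v")),
    )

# The listing could come from e.g. `gsutil ls gs://mybucket/mydata/mytable/metadata/`
names = [
    "mydata/mytable/metadata/v1.metadata.json",
    "mydata/mytable/metadata/v2.metadata.json",
    "mydata/mytable/metadata/v10.metadata.json",
]
print(latest_metadata(names))  # -> mydata/mytable/metadata/v10.metadata.json
```

The resulting URI can then be used in the `uris` option of the `CREATE EXTERNAL TABLE` statement above.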
Important: For Hudi source format to Iceberg target format use cases
- The Hudi extensions provide the ability to add field IDs to the parquet schema when writing with Hudi. This is a requirement for some engines, like BigQuery and Snowflake, when reading an Iceberg table. If you are not planning on using Iceberg, then you do not need to add these to your Hudi writers.
- To prevent inserts from going through the row writer, you must disable it manually; support for the row writer will be added soon.
Steps to add additional configurations to the Hudi writers:
- Add the extensions jar (`hudi-extensions-0.1.0-SNAPSHOT-bundled.jar`) to your class path. For example, if you're using the Hudi quick-start guide for Spark, you can add `--jars hudi-extensions-0.1.0-SNAPSHOT-bundled.jar` to the end of the command.
- Set the following configurations in your writer options:
```shell
hoodie.avro.write.support.class: io.onetable.hudi.extensions.HoodieAvroWriteSupportWithFieldIds
hoodie.client.init.callback.classes: io.onetable.hudi.extensions.AddFieldIdsClientInitCallback
hoodie.datasource.write.row.writer.enable: false
```
- Run your existing code that uses Hudi writers.
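Putting the steps together, here is a minimal PySpark-style sketch. Only the three OneTable-specific settings come from this guide; the table name, path, and `df` in the commented usage are placeholders following the Hudi quick start:

```python
# OneTable-specific writer options from this guide: add field IDs to the
# parquet schema and disable the row writer for inserts.
onetable_hudi_options = {
    "hoodie.avro.write.support.class": "io.onetable.hudi.extensions.HoodieAvroWriteSupportWithFieldIds",
    "hoodie.client.init.callback.classes": "io.onetable.hudi.extensions.AddFieldIdsClientInitCallback",
    "hoodie.datasource.write.row.writer.enable": "false",
}

# Hypothetical usage inside a Spark job launched with
#   --jars hudi-extensions-0.1.0-SNAPSHOT-bundled.jar
# (df, tableName, and basePath are placeholders):
#
# df.write.format("hudi") \
#     .options(**onetable_hudi_options) \
#     .option("hoodie.table.name", tableName) \
#     .mode("append") \
#     .save(basePath)
```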
Using BigLake Metastore to create the Iceberg BigLake tables:
There are two options for registering OneTable-synced Iceberg tables in BigLake Metastore:
- To directly register the OneTable synced Iceberg table to BigLake Metastore, follow the OneTable guide to integrate with BigLake Metastore
- Use stored procedures for Spark on BigQuery to register the table in BigLake Metastore and query the tables from BigQuery.
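As a rough sketch of the first option, the Spark session that performs the sync can be pointed at BigLake Metastore through Iceberg catalog properties. Every value below (project, location, warehouse path, catalog name) is a placeholder, and the exact property keys should be taken from the Syncing to BigLake Metastore guide:

```python
# Hypothetical Spark conf for an Iceberg catalog backed by BigLake Metastore.
# Key names assume the Iceberg BigLakeCatalog plugin; values are placeholders.
biglake_catalog_conf = {
    "spark.sql.catalog.biglake": "org.apache.iceberg.spark.SparkCatalog",
    "spark.sql.catalog.biglake.catalog-impl": "org.apache.iceberg.gcp.biglake.BigLakeCatalog",
    "spark.sql.catalog.biglake.gcp_project": "myproject",
    "spark.sql.catalog.biglake.gcp_location": "mylocation",
    "spark.sql.catalog.biglake.warehouse": "gs://mybucket/warehouse",
}

# These entries would be passed as --conf key=value pairs (or via
# SparkSession.builder.config) when launching the sync job.
```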
Hudi and Delta tables
This section explains how to query Hudi and Delta tables from BigQuery through the use of manifest files.