Custom Catalog It’s possible to read an Iceberg table either from an HDFS path or from a Hive table. It’s also possible to use a custom metastore in place of Hive. The steps to do...
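As a concrete illustration of the two built-in read paths, here is a minimal Java sketch using Iceberg's HadoopTables and HiveCatalog APIs; the HDFS path, database, table name, and metastore URI are placeholders.

```java
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.hive.HiveCatalog;

public class LoadIcebergTable {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // 1) Load directly from an HDFS path -- no catalog service involved.
    Table pathTable = new HadoopTables(conf)
        .load("hdfs://namenode:8020/warehouse/db/events"); // placeholder path

    // 2) Load through the Hive metastore.
    HiveCatalog hive = new HiveCatalog();
    hive.setConf(conf);
    hive.initialize("hive", Map.of("uri", "thrift://metastore:9083")); // placeholder URI
    Table hiveTable = hive.loadTable(TableIdentifier.of("db", "events"));

    System.out.println(pathTable.schema());
    System.out.println(hiveTable.schema());
  }
}
```

A custom catalog replaces the second path: instead of HiveCatalog, you plug in your own catalog implementation backed by your metastore.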
Hive Connector Integration Dependencies Configurations Hive Connector Operations The Kyuubi Hive Connector is a datasource for both reading and writing Hive tables. It is imple...
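To show what a datasource for reading and writing Hive tables looks like in practice, here is a hedged Java sketch that registers the connector as a Spark v2 catalog; the catalog implementation class and option names follow the Kyuubi documentation and should be verified against the Kyuubi release in use.

```java
import org.apache.spark.sql.SparkSession;

public class KyuubiHiveConnectorExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("kyuubi-hive-connector")
        // Catalog class name as documented by Kyuubi (assumption: verify version).
        .config("spark.sql.catalog.hive_catalog",
                "org.apache.kyuubi.spark.connector.hive.HiveTableCatalog")
        .config("spark.sql.catalog.hive_catalog.hive.metastore.uris",
                "thrift://metastore:9083") // placeholder metastore URI
        .getOrCreate();

    // Read and write Hive tables through the registered v2 catalog.
    spark.sql("SELECT * FROM hive_catalog.db.src").show();
    spark.sql("INSERT INTO hive_catalog.db.dst SELECT * FROM hive_catalog.db.src");
  }
}
```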
Evolution Iceberg supports in-place table evolution. You can evolve a table schema just like SQL, even in nested structures, or change partition layout when data volume chang...
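The in-place evolution API can be sketched with Iceberg's Java Table interface; the column names below are illustrative, and both commits are metadata-only operations.

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.types.Types;

public class EvolutionExample {
  // `table` is an already-loaded Iceberg table; column names are illustrative.
  static void evolve(Table table) {
    // Schema evolution is a metadata-only change: no data files are rewritten.
    table.updateSchema()
        .addColumn("region", Types.StringType.get())
        .renameColumn("ts", "event_ts")
        .commit();

    // Partition evolution: new writes use the new spec; files already written
    // keep the layout they were written with.
    table.updateSpec()
        .addField(Expressions.day("event_ts"))
        .commit();
  }
}
```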
Mixed-Hive format offers better compatibility with Hive than Mixed-Iceberg format. Mixed-Hive format uses a Hive table as the BaseStore and an Iceberg table as the C...
Introduction Hive SerDe Integration Writing to an ORC File Data Flow Extending Gobblin’s SerDe Integration Gobblin is capable of writing data to ORC files by le...
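A hedged sketch of the job properties this integration typically needs, expressed as a Java Properties builder; the property names and class names mirror Gobblin's Hive SerDe documentation and are assumptions to verify against your Gobblin version.

```java
import java.util.Properties;

public class OrcSerDeJobConfig {
  public static Properties orcJobProps() {
    Properties props = new Properties();
    // Convert incoming records to Writables with Hive's SerDes, then let the
    // writer hand them to Hive's ORC output format. Names per Gobblin docs
    // (assumption: verify against your release).
    props.setProperty("converter.classes",
        "org.apache.gobblin.converter.serde.HiveSerDeConverter");
    props.setProperty("serde.deserializer.type", "AVRO"); // source SerDe
    props.setProperty("serde.serializer.type", "ORC");    // target SerDe
    props.setProperty("writer.builder.class",
        "org.apache.gobblin.writer.HiveWritableHdfsDataWriterBuilder");
    props.setProperty("writer.output.format", "ORC");
    return props;
  }
}
```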
Flink Connector Apache Flink supports creating an Iceberg table directly in Flink SQL, without first creating an explicit Flink catalog. That means we can just create an Iceberg table by s...
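A minimal Java sketch of that inline CREATE TABLE, issued through Flink's TableEnvironment; the metastore URI and warehouse path are placeholders.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlinkIcebergInlineTable {
  public static void main(String[] args) {
    TableEnvironment tEnv =
        TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // The 'connector'='iceberg' options register the backing catalog on the
    // fly, so no explicit CREATE CATALOG statement is needed.
    tEnv.executeSql(
        "CREATE TABLE flink_table (id BIGINT, data STRING) WITH ("
            + " 'connector'='iceberg',"
            + " 'catalog-name'='hive_prod',"
            + " 'catalog-type'='hive',"
            + " 'uri'='thrift://metastore:9083',"        // placeholder URI
            + " 'warehouse'='hdfs://nn:8020/warehouse'"  // placeholder path
            + ")");

    tEnv.executeSql("INSERT INTO flink_table VALUES (1, 'a')");
  }
}
```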
Features and Limitations Features Apache XTable™ (Incubating) lets users translate metadata from one table format to another. Apache XTable™ (Incubatin...
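As a hedged sketch of how a translation run is kicked off, the snippet below drives XTable's bundled RunSync utility from Java; the class name, flag, and config file layout follow the XTable documentation and should be verified against the release in use.

```java
import org.apache.xtable.utilities.RunSync;

public class TranslateTableMetadata {
  public static void main(String[] args) throws Exception {
    // my_config.yaml lists the sourceFormat, targetFormats, and datasets to
    // translate (keys as described in the XTable docs; verify per release).
    // RunSync is the same entry point the bundled jar invokes:
    //   java -jar xtable-utilities-*-bundled.jar --datasetConfig my_config.yaml
    RunSync.main(new String[] {"--datasetConfig", "my_config.yaml"});
  }
}
```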
How Hive Registration Works in Gobblin HiveSpec HiveRegistrationPolicy HiveSerDeManager Predicate and Activity How to Use Hive Registration in Your Gobblin Job Hive Regist...
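A minimal, hedged sketch of wiring Hive registration into a job, expressed as Java Properties; the key and class names follow the Gobblin docs and are assumptions to verify against your Gobblin version.

```java
import java.util.Properties;

public class HiveRegistrationJobConfig {
  public static Properties hiveRegistrationProps() {
    Properties props = new Properties();
    // The policy decides which Hive databases/tables each published path
    // maps to, producing HiveSpecs for the registration step (names per
    // Gobblin docs; assumption: verify against your release).
    props.setProperty("hive.registration.policy",
        "org.apache.gobblin.hive.policy.HiveSnapshotRegistrationPolicy");
    // Hive metastore to register with (placeholder URI).
    props.setProperty("hive.metastore.uri", "thrift://metastore:9083");
    return props;
  }
}
```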