Kafka Writer: Introduction. The Kafka writer allows users to create pipelines that ingest data from Gobblin sources into Kafka.
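As a rough illustration of what such a pipeline's job configuration might contain, the properties below point the Gobblin writer at a Kafka topic. The builder class, topic name, and producer settings are placeholders based on a reading of the Gobblin Kafka writer docs and should be verified there, not treated as a verbatim configuration.

    # Sketch of a Gobblin job config routing writer output to Kafka (values are placeholders)
    writer.builder.class=org.apache.gobblin.kafka.writer.KafkaDataWriterBuilder
    writer.kafka.topic=MyOutputTopic
    # Properties under writer.kafka.producerConfig.* are handed to the underlying Kafka producer
    writer.kafka.producerConfig.bootstrap.servers=localhost:9092
    writer.kafka.producerConfig.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer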
Flink Configuration: Catalog Configuration. A catalog is created and named by executing the following query (replace <catalog_name> with your catalog name and <config_key>=<config_value> with the catalog implementation configuration).
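The query itself is not reproduced in this excerpt. As an illustration, the Scala sketch below runs a typical form of it through Flink's TableEnvironment, registering an Iceberg catalog backed by a Hive metastore; the catalog name, metastore URI, and warehouse path are placeholders.

    import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

    object CreateIcebergCatalog {
      def main(args: Array[String]): Unit = {
        val tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode())
        // Register an Iceberg catalog named hive_catalog (URI and path are placeholders)
        tableEnv.executeSql(
          """CREATE CATALOG hive_catalog WITH (
            |  'type' = 'iceberg',
            |  'catalog-type' = 'hive',
            |  'uri' = 'thrift://localhost:9083',
            |  'warehouse' = 'hdfs://nn:8020/warehouse/path'
            |)""".stripMargin)
      }
    }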
Getting Started. The latest version of Iceberg is 1.8.1. Spark is currently the most feature-rich compute engine for Iceberg operations, and we recommend getting started with Spark.
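A minimal Scala sketch of that first step, assuming the matching iceberg-spark-runtime jar for your Spark version is on the classpath (for example org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1): it configures a local Hadoop-type Iceberg catalog and creates a first table. The catalog name and warehouse path are arbitrary choices for the example.

    import org.apache.spark.sql.SparkSession

    object IcebergQuickstart {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("iceberg-quickstart")
          .master("local[*]") // for a local try-out; drop when submitting to a cluster
          .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
          .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
          .config("spark.sql.catalog.local.type", "hadoop")
          .config("spark.sql.catalog.local.warehouse", "file:///tmp/iceberg-warehouse")
          .getOrCreate()

        // Create an Iceberg table in the 'local' catalog, write two rows, and read them back
        spark.sql("CREATE TABLE IF NOT EXISTS local.db.demo (id BIGINT, data STRING) USING iceberg")
        spark.sql("INSERT INTO local.db.demo VALUES (1, 'a'), (2, 'b')")
        spark.sql("SELECT * FROM local.db.demo").show()
      }
    }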
Aliyun OSS. In this page, we explain how to get your Hudi Spark job to store data in Aliyun OSS. There are two configurations required for Hudi-OSS compatibility: adding Aliyun OSS credentials for Hudi and adding the required jars to the classpath.
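A hedged Scala sketch of the credentials side, setting the Hadoop Aliyun OSS connector properties on the Spark session; the bucket, endpoint, and keys are placeholders, and the OSS connector jars (e.g. hadoop-aliyun and the Aliyun OSS SDK) still have to be added to the classpath separately.

    import org.apache.spark.sql.SparkSession

    object HudiOnOss {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("hudi-on-oss").master("local[*]").getOrCreate()
        val hadoopConf = spark.sparkContext.hadoopConfiguration
        // Aliyun OSS credentials and endpoint (all values are placeholders)
        hadoopConf.set("fs.defaultFS", "oss://my-hudi-bucket/")
        hadoopConf.set("fs.oss.endpoint", "oss-cn-hangzhou.aliyuncs.com")
        hadoopConf.set("fs.oss.accessKeyId", "<your-access-key-id>")
        hadoopConf.set("fs.oss.accessKeySecret", "<your-access-key-secret>")
        hadoopConf.set("fs.oss.impl", "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem")
        // Hudi writes targeting oss:// paths now go through the OSS filesystem implementation
      }
    }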
Iceberg Dell Integration: Dell ECS Integration. Iceberg can be used with Dell’s Enterprise Object Storage (ECS) by using the ECS catalog since 0.15.0. See Dell ECS for more information.
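As one possible way to wire this up from Spark, the sketch below registers an Iceberg catalog backed by the Dell ECS catalog implementation. The ecs.s3.* property names and the ecs:// warehouse URI follow one reading of the Dell ECS page and should be checked against it; every value shown is a placeholder.

    import org.apache.spark.sql.SparkSession

    object IcebergOnEcs {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("iceberg-on-ecs")
          .master("local[*]")
          // Spark catalog backed by the Iceberg ECS catalog (all values are placeholders)
          .config("spark.sql.catalog.ecs_cat", "org.apache.iceberg.spark.SparkCatalog")
          .config("spark.sql.catalog.ecs_cat.catalog-impl", "org.apache.iceberg.dell.ecs.EcsCatalog")
          .config("spark.sql.catalog.ecs_cat.warehouse", "ecs://my-bucket/my-namespace")
          .config("spark.sql.catalog.ecs_cat.ecs.s3.endpoint", "https://ecs.example.com:9021")
          .config("spark.sql.catalog.ecs_cat.ecs.s3.access-key-id", "<access-key-id>")
          .config("spark.sql.catalog.ecs_cat.ecs.s3.secret-access-key", "<secret-access-key>")
          .getOrCreate()

        spark.sql("SHOW NAMESPACES IN ecs_cat").show()
      }
    }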
Custom Catalog. It’s possible to read an Iceberg table either from an HDFS path or from a Hive table. It’s also possible to use a custom metastore in place of Hive by plugging in a custom catalog implementation; a sketch of the first two read paths appears below.
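To make those first two options concrete, the Scala sketch below loads a table directly from a filesystem location with HadoopTables and then performs the equivalent lookup through the Hive metastore with HiveCatalog. The path, database, and table names are placeholders; a custom metastore would substitute its own Catalog implementation for the HiveCatalog step.

    import org.apache.hadoop.conf.Configuration
    import org.apache.iceberg.catalog.TableIdentifier
    import org.apache.iceberg.hadoop.HadoopTables
    import org.apache.iceberg.hive.HiveCatalog

    object LoadIcebergTable {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()

        // Option 1: load the table straight from its filesystem location (no metastore)
        val pathTable = new HadoopTables(conf).load("hdfs://nn:8020/warehouse/db/table")
        println(pathTable.schema())

        // Option 2: resolve the same kind of table through the Hive metastore
        val hiveCatalog = new HiveCatalog()
        hiveCatalog.setConf(conf)
        hiveCatalog.initialize("hive", java.util.Collections.emptyMap[String, String]())
        val hiveTable = hiveCatalog.loadTable(TableIdentifier.of("db", "table"))
        println(hiveTable.location())
      }
    }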
GCS. For Hudi storage on GCS, regional buckets provide a DFS API with strong consistency. There are two configurations required for Hudi-GCS compatibility: adding GCS credentials for Hudi and adding the required jars to the classpath.
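A hedged Scala sketch of the credentials piece, using the GCS Hadoop connector's service-account settings on the Spark session; the project id, bucket, service account, and key file are placeholders, and the gcs-connector jar still has to be on the classpath.

    import org.apache.spark.sql.SparkSession

    object HudiOnGcs {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("hudi-on-gcs").master("local[*]").getOrCreate()
        val hadoopConf = spark.sparkContext.hadoopConfiguration
        // GCS connector classes and service-account credentials (all values are placeholders)
        hadoopConf.set("fs.defaultFS", "gs://my-hudi-bucket")
        hadoopConf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
        hadoopConf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
        hadoopConf.set("fs.gs.project.id", "<gcp-project-id>")
        hadoopConf.set("google.cloud.auth.service.account.enable", "true")
        hadoopConf.set("google.cloud.auth.service.account.email", "<service-account-email>")
        hadoopConf.set("google.cloud.auth.service.account.keyfile", "/path/to/service-account-key.p12")
      }
    }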
Baidu BOS. In this page, we explain how to get your Hudi job to store data in Baidu BOS. There are two configurations required for Hudi-BOS compatibility: adding Baidu BOS credentials for Hudi and adding the required jars to the classpath.
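In the same spirit, a sketch of the BOS credentials in Scala; the fs.bos.* property names follow one reading of the Hadoop BOS connector settings referenced by the Hudi docs and should be double-checked there, and every value is a placeholder.

    import org.apache.spark.sql.SparkSession

    object HudiOnBos {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("hudi-on-bos").master("local[*]").getOrCreate()
        val hadoopConf = spark.sparkContext.hadoopConfiguration
        // Baidu BOS credentials and endpoint (all values are placeholders)
        hadoopConf.set("fs.defaultFS", "bos://my-hudi-bucket")
        hadoopConf.set("fs.bos.endpoint", "bj.bcebos.com")
        hadoopConf.set("fs.bos.access.key", "<access-key>")
        hadoopConf.set("fs.bos.secret.access.key", "<secret-key>")
        hadoopConf.set("fs.bos.impl", "org.apache.hadoop.fs.bos.BaiduBosFileSystem")
      }
    }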