Deploying (Hudi Streamer, Spark Datasource Writer Jobs); Upgrading; Downgrading; Migrating. This section provides all the help you need to deploy and operate Hudi tables at scale. ...
Using Apache Hadoop resources in Flink on Kubernetes: 1. Apache HDFS (1.1 Add the shaded jar; 1.2 Add core-site.xml and hdfs-site.xml); 2. Apache Hive (2.1 Add Hive-related jars; 2...
Dependency for Elasticsearch writes; Write data to Elasticsearch based on the official connector; Using Apache StreamPark™ to write to Elasticsearch (1. Configure the strategy and connection information; 2. Write to Elasticsearch); Other configur...
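A minimal sketch of that write path, using Flink's plain Elasticsearch 7 connector (flink-connector-elasticsearch7), which StreamPark builds on, rather than StreamPark's own config-driven sink helper; the host, port and index name below are placeholders:

```scala
// Minimal sketch: writing a DataStream of JSON strings to Elasticsearch 7
// with Flink's vanilla connector. Host, port and index name are placeholders.
import java.util.Collections

import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink
import org.apache.http.HttpHost
import org.elasticsearch.client.Requests
import org.elasticsearch.common.xcontent.XContentType

object EsWriteSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream: DataStream[String] = env.fromElements("""{"user":"a"}""", """{"user":"b"}""")

    val httpHosts = Collections.singletonList(new HttpHost("127.0.0.1", 9200, "http"))

    val builder = new ElasticsearchSink.Builder[String](
      httpHosts,
      new ElasticsearchSinkFunction[String] {
        override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer): Unit = {
          // Index each record as a JSON document into the (placeholder) target index.
          indexer.add(Requests.indexRequest()
            .index("user_events")
            .source(element, XContentType.JSON))
        }
      }
    )
    // Flush after every element so the sketch is easy to verify locally.
    builder.setBulkFlushMaxActions(1)

    stream.addSink(builder.build())
    env.execute("write-to-elasticsearch-sketch")
  }
}
```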
Kyuubi on Apache Kudu: What is Apache Kudu; Why Kyuubi on Kudu; Kudu Integration with Apache Spark; Kudu Integration with Kyuubi (Install Kudu Spark Dependency, Start Kyuubi, Start B...
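As a rough illustration of the Spark side of that integration, the sketch below reads a Kudu table through the kudu-spark datasource; the Kudu master address and table name are placeholders, and a Kyuubi deployment would expose the same query over JDBC/beeline rather than through a local SparkSession:

```scala
// Minimal sketch: reading a Kudu table via the kudu-spark datasource
// (kudu-spark3_2.12 on the classpath). Master address and table name are placeholders.
import org.apache.spark.sql.SparkSession

object KuduSparkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kudu-spark-sketch")
      .getOrCreate()

    // Read an existing Kudu table as a DataFrame.
    val df = spark.read
      .format("kudu")
      .option("kudu.master", "kudu-master:7051")      // placeholder master address
      .option("kudu.table", "impala::default.events") // placeholder table name
      .load()

    // Register a temporary view so it can be queried with plain SQL,
    // which is roughly what a Kyuubi-backed beeline session would run.
    df.createOrReplaceTempView("events")
    spark.sql("SELECT COUNT(*) FROM events").show()

    spark.stop()
  }
}
```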
Ingest into one table (Iceberg format, Mixed-Iceberg format); Ingest into multiple tables (Iceberg format, Mixed-Iceberg format). CDC stands for Change Data Capture, which is a broa...
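A minimal sketch of the single-table, Iceberg-format case, written as Flink SQL submitted from Scala; the mysql-cdc connection details, catalog name and warehouse path are placeholders, and the exact connector options depend on the Flink CDC and Iceberg versions in use:

```scala
// Minimal sketch: CDC ingestion of one MySQL table into an Iceberg table.
// All connection details, catalog/database names and paths are placeholders.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment

object CdcToIcebergSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(60000L) // Iceberg commits happen on checkpoints
    val tEnv = StreamTableEnvironment.create(env)

    // CDC source: captures inserts/updates/deletes from a MySQL table.
    tEnv.executeSql(
      """CREATE TABLE orders_src (
        |  id BIGINT,
        |  amount DECIMAL(10, 2),
        |  PRIMARY KEY (id) NOT ENFORCED
        |) WITH (
        |  'connector' = 'mysql-cdc',
        |  'hostname'  = 'mysql-host',
        |  'port'      = '3306',
        |  'username'  = 'user',
        |  'password'  = 'pass',
        |  'database-name' = 'shop',
        |  'table-name'    = 'orders'
        |)""".stripMargin)

    // Iceberg sink: a Hadoop-catalog table under a placeholder warehouse path;
    // format-version 2 and upsert mode are needed to apply updates and deletes.
    tEnv.executeSql(
      """CREATE CATALOG iceberg_cat WITH (
        |  'type' = 'iceberg',
        |  'catalog-type' = 'hadoop',
        |  'warehouse' = 'hdfs:///warehouse/iceberg'
        |)""".stripMargin)
    tEnv.executeSql("CREATE DATABASE IF NOT EXISTS iceberg_cat.db")
    tEnv.executeSql(
      """CREATE TABLE IF NOT EXISTS iceberg_cat.db.orders (
        |  id BIGINT, amount DECIMAL(10, 2), PRIMARY KEY (id) NOT ENFORCED
        |) WITH ('format-version' = '2', 'write.upsert.enabled' = 'true')""".stripMargin)

    // Continuously replicate the change stream into the Iceberg table.
    tEnv.executeSql("INSERT INTO iceberg_cat.db.orders SELECT * FROM orders_src")
  }
}
```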
Spark DataSource API: The hudi-spark module offers the DataSource API to write a Spark DataFrame into a Hudi table. There are a number of options available: HoodieWriteConfig: ...
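A minimal sketch of that write path, assuming the usual Hudi option keys (table name, record key, precombine and partition path fields); the base path, table name and columns are placeholders:

```scala
// Minimal sketch: writing a Spark DataFrame into a Hudi table through the
// DataSource API. Base path, table name and field names are placeholders.
import org.apache.spark.sql.{SaveMode, SparkSession}

object HudiDataSourceWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hudi-datasource-write-sketch")
      // Hudi recommends Kryo serialization.
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(
      ("id-1", "2024-01-01", 10L),
      ("id-2", "2024-01-02", 20L)
    ).toDF("uuid", "partitionpath", "ts")

    df.write.format("hudi")
      .option("hoodie.table.name", "demo_table")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.precombine.field", "ts")
      .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
      .option("hoodie.datasource.write.operation", "upsert")
      .mode(SaveMode.Append)        // Append upserts into an existing table
      .save("/tmp/hudi/demo_table") // placeholder base path

    spark.stop()
  }
}
```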
Schema Evolution on Write; Type Promotions; Schema Evolution on Read (Adding Columns, Altering Columns, Deleting Columns, Renaming Columns); Schema Evolution in Action; Related Reso...
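A minimal sketch of the write-side case, assuming a backwards-compatible change (a new nullable column added in a later batch); the table name, base path and columns are placeholders:

```scala
// Minimal sketch: schema evolution on write. A second batch adds a new
// column, and Hudi evolves the table schema when that batch is upserted.
import org.apache.spark.sql.{SaveMode, SparkSession}

object HudiSchemaEvolutionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hudi-schema-evolution-sketch")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    import spark.implicits._

    val basePath = "/tmp/hudi/evolving_table" // placeholder base path
    val hudiOpts = Map(
      "hoodie.table.name" -> "evolving_table",
      "hoodie.datasource.write.recordkey.field" -> "uuid",
      "hoodie.datasource.write.precombine.field" -> "ts"
    )

    // Batch 1: initial schema (uuid, ts).
    Seq(("id-1", 1L)).toDF("uuid", "ts")
      .write.format("hudi").options(hudiOpts)
      .mode(SaveMode.Overwrite).save(basePath)

    // Batch 2: adds a new column `note`; the writer evolves the table schema.
    Seq(("id-2", 2L, "added later")).toDF("uuid", "ts", "note")
      .write.format("hudi").options(hudiOpts)
      .mode(SaveMode.Append).save(basePath)

    // The evolved schema now contains `note`; rows from batch 1 read it as null.
    spark.read.format("hudi").load(basePath).printSchema()

    spark.stop()
  }
}
```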