Common parameters of sink connectors

| name              | type   | required | default value |
| ----------------- | ------ | -------- | ------------- |
| source_table_name | string | no       | -             |
| parallelism       | int    | …        | …             |
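To illustrate how these parameters fit together, here is a minimal, hypothetical sketch of a pipeline configuration in which a sink selects its upstream dataset via `source_table_name` and sets its own `parallelism`. The connector names (`SomeSource`, `SomeSink`), the `result_table_name` key, and the overall file layout are assumptions for illustration, not part of the table above.

```
env {
  job.mode = "BATCH"
}

source {
  SomeSource {
    # hypothetical upstream connector; registers its output under this name
    result_table_name = "orders"
  }
}

sink {
  SomeSink {
    # consume the dataset registered by the source above
    source_table_name = "orders"
    # number of parallel instances for this sink (assumed usage)
    parallelism = 2
  }
}
```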
- Managing Watermarks in a Job: Basics, Task Failures, Multi-Dataset Jobs
- Gobblin State Deep Dive: State class hierarchy, How States are Used in a Gobblin Job

This page has two parts: managing watermarks in a job, and a deep dive into Gobblin state.
Spark Writes

To use Iceberg in Spark, first configure Spark catalogs. Some plans are only available when using Iceberg SQL extensions in Spark 3. Iceberg uses Apache Spark's DataSourceV2 API for data source and catalog implementations.
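As a rough sketch of what that catalog configuration and a basic DataFrameWriterV2 write can look like from Scala (the catalog name `local`, the warehouse path, and the table name `db.sample` are placeholders chosen for illustration):

```scala
import org.apache.spark.sql.SparkSession

// Session configured with the Iceberg SQL extensions and a catalog named
// "local" backed by a Hadoop warehouse directory (names/paths are examples).
val spark = SparkSession.builder()
  .appName("iceberg-write-sketch")
  .config("spark.sql.extensions",
    "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
  .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.local.type", "hadoop")
  .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
  .getOrCreate()

import spark.implicits._

// Create an Iceberg table from a DataFrame, then append to it,
// using Spark's DataFrameWriterV2 API (df.writeTo).
val df = Seq((1L, "a"), (2L, "b")).toDF("id", "data")
df.writeTo("local.db.sample").create()
Seq((3L, "c")).toDF("id", "data").writeTo("local.db.sample").append()
```

With the SQL extensions enabled, the same table also accepts SQL writes such as INSERT INTO and MERGE INTO.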
Gobblin scheduling and deployment (outline):

- Introduction
- Quartz
- Azkaban
- Oozie
- Launching Gobblin in Local Mode
- Example Config Files
- Uploading Files to HDFS
- Adding Gobblin jar Dependencies
- Launching the Job
- Launching ...
Spark Streaming

You can write Hudi tables using Spark's structured streaming.

// spark-shell
// prepare to stream write to new table
import org ...
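Since the snippet above is cut off, here is a fuller sketch of a structured streaming write into a Hudi table. The table name, base path, checkpoint directory, and the key/partition/precombine columns are placeholders, and the rate source stands in for whatever streaming input you actually have.

```scala
// spark-shell
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.streaming.Trigger

val tableName      = "hudi_streaming_table"              // placeholder table name
val basePath       = "file:///tmp/hudi_streaming_table"  // placeholder storage path
val checkpointPath = "file:///tmp/hudi_checkpoints"      // placeholder checkpoint dir

// Any streaming DataFrame works here; the rate source is only a stand-in.
val streamingDF = spark.readStream
  .format("rate")
  .option("rowsPerSecond", 10)
  .load()
  .withColumnRenamed("value", "uuid")            // assumed record key column
  .withColumn("partitionpath", lit("part=0"))    // assumed partition column

// Stream-write into a Hudi table, checkpointing so the query can resume.
streamingDF.writeStream
  .format("hudi")
  .option("hoodie.table.name", tableName)
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
  .option("hoodie.datasource.write.precombine.field", "timestamp")
  .option("checkpointLocation", checkpointPath)
  .outputMode("append")
  .trigger(Trigger.ProcessingTime("10 seconds"))
  .start(basePath)
```

In the spark-shell the returned StreamingQuery keeps running in the background; outside the shell, call .awaitTermination() on it to keep the job alive.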