Kafka Writer (sections: Introduction, Pre-requisites, Steps, Configuration Details, What Next?). The Kafka writer allows users to create pipelines that ingest data from Gobblin sources into Kafka ...
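As a rough sketch of what such a pipeline's job configuration can look like, the Java snippet below assembles the properties a Gobblin job file would normally carry. Every key and class name here (writer.builder.class, writer.kafka.topic, writer.kafka.producerConfig.bootstrap.servers, KafkaDataWriterBuilder, WikipediaSource) is an assumption and should be checked against the Configuration Details of the Gobblin release in use.

```java
import java.util.Properties;

// Minimal sketch of a Gobblin job that writes to Kafka, expressed as Java
// Properties (the same keys would normally live in a .pull/.properties job file).
// All keys and class names below are assumptions, not confirmed by this document.
public class KafkaWriterJobConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("job.name", "PullFromSourceToKafka");   // hypothetical job name
        props.setProperty("source.class",
                "org.apache.gobblin.example.wikipedia.WikipediaSource"); // example Gobblin source
        props.setProperty("writer.builder.class",
                "org.apache.gobblin.kafka.writer.KafkaDataWriterBuilder"); // Kafka writer builder
        props.setProperty("writer.kafka.topic", "GobblinOutputTopic");     // hypothetical target topic
        props.setProperty("writer.kafka.producerConfig.bootstrap.servers", "localhost:9092");
        return props;
    }
}
```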
Flink Configuration, Catalog Configuration. A catalog is created and named by executing the following query (replace <catalog_name> with your catalog name and <config_key>=<config_value> ...
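The query referred to here is Flink's CREATE CATALOG statement. A minimal sketch through the Table API follows, assuming the built-in generic_in_memory catalog type as a stand-in for the <config_key>=<config_value> pairs; substitute your own catalog name and options.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Sketch: create and switch to a named catalog from the Flink Table API.
// The catalog name and the key/value pairs in WITH (...) are placeholders;
// 'generic_in_memory' is Flink's built-in in-memory catalog type.
public class CreateCatalogExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // CREATE CATALOG <catalog_name> WITH ('<config_key>' = '<config_value>', ...)
        tEnv.executeSql(
                "CREATE CATALOG my_catalog WITH ('type' = 'generic_in_memory')");

        // Subsequent CREATE TABLE statements and queries resolve against this catalog.
        tEnv.executeSql("USE CATALOG my_catalog");
    }
}
```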
Advantages of Migrating to Gobblin Kafka Ingestion; Related Job Config Properties (config properties for pulling Kafka topics; config properties for compaction); Deployment and Chec...
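For the config properties for pulling Kafka topics, a minimal sketch follows. The keys (kafka.brokers, topic.whitelist, bootstrap.with.offset) and the KafkaSimpleSource class name are assumptions modeled on Gobblin's Kafka ingestion quickstart and should be verified against the actual job config property reference.

```java
import java.util.Properties;

// Sketch of Gobblin job properties for pulling Kafka topics.
// Keys and class names are assumptions; verify them against the
// Gobblin release you deploy before running a job.
public class KafkaPullJobConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("job.name", "GobblinKafkaQuickStart");   // hypothetical job name
        props.setProperty("source.class",
                "org.apache.gobblin.source.extractor.extract.kafka.KafkaSimpleSource");
        props.setProperty("kafka.brokers", "localhost:9092");      // Kafka brokers to pull from
        props.setProperty("topic.whitelist", "MyEventsTopic");     // topics to ingest
        props.setProperty("bootstrap.with.offset", "earliest");    // where to start on the first run
        return props;
    }
}
```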
Source schema; Converters; Converters available in Gobblin; Schema specification; Supported data types by different converters: primitive types and complex types (Array, Map, Record, En...
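To make the converter and schema-specification pieces concrete, here is a hypothetical sketch. The JsonIntermediateToAvroConverter class, the converter.classes and source.schema keys, and the columnName/dataType schema layout are all assumptions based on Gobblin's converter documentation, and the fields themselves are invented.

```java
import java.util.Properties;

// Sketch: declaring a converter chain and a source schema for a Gobblin job.
// The converter class, property keys, and schema layout are assumptions;
// the column names and types are purely illustrative.
public class ConverterJobConfig {
    // Source schema in a JSON-intermediate style: one entry per column,
    // each with a columnName and a dataType (primitive or complex).
    private static final String SOURCE_SCHEMA =
            "[" +
            " {\"columnName\": \"userId\",   \"dataType\": {\"type\": \"long\"}}," +
            " {\"columnName\": \"userName\", \"dataType\": {\"type\": \"string\"}}," +
            " {\"columnName\": \"tags\",     \"dataType\": {\"type\": \"array\", \"items\": \"string\"}}" +
            "]";

    public static Properties build() {
        Properties props = new Properties();
        // Converters are applied in order, comma-separated.
        props.setProperty("converter.classes",
                "org.apache.gobblin.converter.avro.JsonIntermediateToAvroConverter");
        props.setProperty("source.schema", SOURCE_SCHEMA);
        return props;
    }
}
```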
Over the years, LinkedIn’s data infrastructure team built custom solutions for ingesting diverse data entities into our Hadoop eco-system. At one point, we were running 15 t...
Introduction to EventMesh, Features. Apache EventMesh is a fully serverless platform used to build distributed event-driven applications. ...