Spark Procedures To use Iceberg in Spark, first configure Spark catalogs. Stored procedures are only available when using Iceberg SQL extensions in Spark 3. Usage Procedures c...
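A minimal sketch of calling a stored procedure through Spark SQL, assuming the Iceberg SQL extensions are enabled and that a catalog named `my_catalog`, a table `db.sample`, and the snapshot ID are placeholders for illustration:

```python
from pyspark.sql import SparkSession

# Assumes the Iceberg SQL extensions and a catalog named "my_catalog" are
# already configured; table and snapshot ID below are hypothetical.
spark = SparkSession.builder.getOrCreate()

# Stored procedures are invoked with CALL against the catalog's "system" namespace.
spark.sql(
    "CALL my_catalog.system.rollback_to_snapshot('db.sample', 12345)"
).show()
```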
Configure Kyuubi to use Custom Authentication Inject Session Conf with Custom Config Advisor Configure Kyuubi to use Custom EventHandler Manage Applications against Extra Cluste...
Syncing to Hive Metastore This document walks through the steps to register an Apache XTable™ (Incubating) synced table on Hive Metastore (HMS). Pre-requisites Source table(s) ...
Spark Configuration Catalogs Spark adds an API to plug in table catalogs that are used to load, create, and manage Iceberg tables. Spark catalogs are configured by setting Spark ...
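As a sketch of that property-based configuration, the snippet below registers an Iceberg catalog backed by a Hive Metastore; the catalog name `my_catalog` and the metastore URI are assumed values, not defaults:

```python
from pyspark.sql import SparkSession

# Registers an Iceberg catalog named "my_catalog" by setting Spark properties.
# The catalog name and metastore URI are illustrative only.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.my_catalog.type", "hive")
    .config("spark.sql.catalog.my_catalog.uri", "thrift://metastore:9083")
    .getOrCreate()
)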
Spark Writes To use Iceberg in Spark, first configure Spark catalogs. Some plans are only available when using Iceberg SQL extensions in Spark 3. Iceberg uses Apache Spark’s D...
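A write sketch using the DataSourceV2 writer, assuming a configured catalog `my_catalog` and an existing `db.sample` table (both placeholder names):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical data; assumes the catalog "my_catalog" and table "db.sample" exist.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "data"])

# writeTo() uses the DataFrameWriterV2 path; append() adds rows,
# createOrReplace() would (re)create the table instead.
df.writeTo("my_catalog.db.sample").append()

# The SQL equivalent of an append:
spark.sql("INSERT INTO my_catalog.db.sample VALUES (3, 'c')")
```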
Getting Started The latest version of Iceberg is 1.8.1. Spark is currently the most feature-rich compute engine for Iceberg operations. We recommend getting started with Spar...
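A quick-start sketch that pulls the Iceberg Spark runtime from Maven and wires up a local Hadoop catalog; the Spark/Scala versions in the artifact name, the catalog name `local`, and the warehouse path are assumptions to adapt to your environment:

```python
from pyspark.sql import SparkSession

# Pulls the Iceberg Spark runtime and configures a local filesystem catalog.
# Match the runtime artifact to your Spark and Scala versions.
spark = (
    SparkSession.builder
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

spark.sql("CREATE TABLE IF NOT EXISTS local.db.demo (id BIGINT, data STRING) USING iceberg")
```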
Configuration Table properties Iceberg tables support table properties to configure table behavior, like the default split size for readers. Read properties Property Defaul...
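A short sketch of overriding a read property on an existing table; the catalog and table names are placeholders, and `read.split.target-size` is used here as the reader split-size property mentioned above:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sets a larger target split size (256 MB) for readers of this table.
# Catalog and table names are illustrative.
spark.sql("""
    ALTER TABLE my_catalog.db.sample
    SET TBLPROPERTIES ('read.split.target-size' = '268435456')
""")
```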
Flink Queries Iceberg supports streaming and batch reads with Apache Flink’s DataStream API and Table API. Reading with SQL Iceberg supports both streaming and batch reads in Flink...
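A streaming-read sketch with PyFlink's Table API and SQL, assuming the Iceberg Flink runtime is on the classpath and that dynamic table options (SQL hints) are enabled; the catalog settings and table name are placeholders:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Create a streaming TableEnvironment and register an Iceberg catalog
# backed by a Hive Metastore (URI is illustrative).
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE CATALOG my_catalog WITH (
      'type' = 'iceberg',
      'catalog-type' = 'hive',
      'uri' = 'thrift://metastore:9083'
    )
""")

# The streaming / monitor-interval hints switch the scan into continuous mode;
# dropping them gives a bounded batch read. print() blocks while results arrive.
t_env.execute_sql("""
    SELECT * FROM my_catalog.db.sample
    /*+ OPTIONS('streaming'='true', 'monitor-interval'='10s') */
""").print()
```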
Spark DDL To use Iceberg in Spark, first configure Spark catalogs. Iceberg uses Apache Spark’s DataSourceV2 API for data source and catalog implementations. CREATE TABLE Spark...
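A DDL sketch creating a partitioned Iceberg table through Spark SQL; the catalog, database, and table names are placeholders, and the partition spec uses Iceberg's hidden-partitioning transform on a timestamp column:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Creates an Iceberg table partitioned by day of the "ts" column.
# Names are illustrative; assumes "my_catalog" is a configured Iceberg catalog.
spark.sql("""
    CREATE TABLE my_catalog.db.events (
        id BIGINT,
        data STRING,
        ts TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(ts))
""")
```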