Flink Connector
Apache Flink supports creating an Iceberg table directly, without first creating an explicit Flink catalog in Flink SQL. That means we can create an Iceberg table simply by specifying 'connector'='iceberg' as a table option in the CREATE TABLE statement, as in the sketch below.
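A minimal sketch of that shortcut executed through Flink's Table API, assuming a Hive metastore reachable at thrift://localhost:9083 and a warehouse path of hdfs://nn:8020/warehouse/path (both placeholders), with the iceberg-flink-runtime jar on the classpath:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CreateIcebergTableSketch {
    public static void main(String[] args) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .inStreamingMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Create an Iceberg table through the 'connector'='iceberg' option,
        // without registering an explicit Flink catalog first.
        tEnv.executeSql(
            "CREATE TABLE flink_table (" +
            "  id BIGINT," +
            "  data STRING" +
            ") WITH (" +
            "  'connector'='iceberg'," +
            "  'catalog-name'='hive_prod'," +                  // logical catalog name (placeholder)
            "  'catalog-type'='hive'," +                       // assumes a Hive metastore catalog
            "  'uri'='thrift://localhost:9083'," +             // placeholder metastore URI
            "  'warehouse'='hdfs://nn:8020/warehouse/path'" +  // placeholder warehouse location
            ")");
    }
}
```
The table name and connection settings are illustrative only; adjust them to the metastore and warehouse in use.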
Id generator
Redis- or Valkey-based Java id generator. RIdGenerator generates unique numbers, but they are not monotonically increasing. On the first request, a batch of id numbers is allocated and cached on the Java side, and subsequent ids are served from that batch until it is exhausted, so most calls are answered locally without a round trip to the server.
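A minimal Java sketch, assuming Redisson as the client, a Redis or Valkey server at redis://127.0.0.1:6379, and a generator named "myIdGenerator" (all placeholders):

```java
import org.redisson.Redisson;
import org.redisson.api.RIdGenerator;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class IdGeneratorSketch {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // placeholder address
        RedissonClient redisson = Redisson.create(config);

        RIdGenerator generator = redisson.getIdGenerator("myIdGenerator");

        // Optionally seed the starting value and the size of each allocated batch.
        generator.tryInit(1, 5_000);

        // Unique across all JVMs sharing this generator, but not monotonically increasing.
        long id = generator.nextId();
        System.out.println("generated id: " + id);

        redisson.shutdown();
    }
}
```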
is
(* → {*}) → a → Boolean
Parameters: ctor – a constructor; val – the value to test
Returns: Boolean
Added in v0.3.0
See also: type, isNil, propIs
See if an object (i.e. val) is an instance of the supplied constructor.
Evolution
Iceberg supports in-place table evolution. You can evolve a table schema just as you would in SQL, even in nested structures, or change the partition layout when data volumes change, without rewriting table data or migrating to a new table.
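A minimal sketch of both kinds of evolution from Spark, assuming the Iceberg SQL extensions are enabled and that an Iceberg table named prod.db.sample with columns ts and category already exists (table and column names are placeholders):

```java
import org.apache.spark.sql.SparkSession;

public class EvolutionSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-evolution")
                // Iceberg's SQL extensions provide the ADD/DROP PARTITION FIELD commands.
                .config("spark.sql.extensions",
                        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                .getOrCreate();

        // Schema evolution: a metadata-only change, existing data files are not rewritten.
        spark.sql("ALTER TABLE prod.db.sample ADD COLUMN count BIGINT");

        // Partition evolution: new data uses the new layout, old data keeps the old one.
        spark.sql("ALTER TABLE prod.db.sample ADD PARTITION FIELD days(ts)");
        spark.sql("ALTER TABLE prod.db.sample DROP PARTITION FIELD category");
    }
}
```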
DDL commands
CREATE CATALOG
Hive catalog: this creates an Iceberg catalog named hive_catalog configured with 'catalog-type'='hive', which loads tables from a Hive metastore.
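A minimal sketch of that statement run through Flink's Table API, with the metastore URI and warehouse path as placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CreateCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Register an Iceberg catalog backed by a Hive metastore.
        tEnv.executeSql(
            "CREATE CATALOG hive_catalog WITH (" +
            "  'type'='iceberg'," +
            "  'catalog-type'='hive'," +
            "  'uri'='thrift://localhost:9083'," +              // placeholder metastore URI
            "  'warehouse'='hdfs://nn:8020/warehouse/path'" +   // placeholder warehouse location
            ")");

        // Subsequent DDL and queries can then target tables in this catalog.
        tEnv.executeSql("USE CATALOG hive_catalog");
    }
}
```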
Spark Queries
To use Iceberg in Spark, first configure Spark catalogs. Iceberg uses Apache Spark's DataSourceV2 API for its data source and catalog implementations. Querying with SQL then uses table identifiers that include the catalog name, as shown below.
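A minimal sketch, assuming a Hadoop-type Iceberg catalog registered under the name local with a warehouse at /tmp/iceberg-warehouse and a table local.db.events (all placeholders), and the iceberg-spark-runtime jar on the classpath:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkQueriesSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-queries")
                // Register an Iceberg catalog named "local" via the catalog plugin API.
                .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop")
                .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
                .getOrCreate();

        // SQL queries address tables with an identifier that includes the catalog name.
        spark.sql("SELECT * FROM local.db.events").show();

        // The same table as a DataFrame, backed by the DataSourceV2 implementation.
        Dataset<Row> df = spark.table("local.db.events");
        df.printSchema();
        df.show();
    }
}
```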