Tables

The main purpose of the Iceberg API is to manage table metadata, like the schema, partition spec, snapshots, and the data files that store table data.

Table metadata and operations are accessed through the Table interface. This interface exposes the table's current metadata and the operations used to scan and update it.

Table metadata

The Table interface provides access to the table metadata:

  • schema returns the current table schema
  • spec returns the current table partition spec
  • properties returns a map of key-value properties
  • currentSnapshot returns the current table snapshot
  • snapshots returns all valid snapshots for the table
  • snapshot(id) returns a specific snapshot by ID
  • location returns the table’s base location

Tables also provide refresh to update the table to the latest version, and expose helpers:

  • io returns the FileIO used to read and write table files
  • locationProvider returns a LocationProvider used to create paths for data and metadata files
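For illustration, the sketch below assumes table is an already-loaded Table instance and simply reads this metadata; it is not a complete program:

    table.refresh();                            // update to the latest table version

    Schema schema = table.schema();             // current schema
    PartitionSpec spec = table.spec();          // current partition spec
    Snapshot current = table.currentSnapshot(); // current snapshot (null for an empty table)
    String location = table.location();         // table base location
    FileIO io = table.io();                     // used to read and write table files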

Scanning

File level

Iceberg table scans start by creating a TableScan object with newScan.

    TableScan scan = table.newScan();

To configure a scan, call filter and select on the TableScan to get a new TableScan with those changes.

    TableScan filteredScan = scan.filter(Expressions.equal("id", 5));

Calls to configuration methods create a new TableScan so that each TableScan is immutable and won’t change unexpectedly if shared across threads.

When a scan is configured, planFiles, planTasks, and schema are used to return files, tasks, and the read projection.

    TableScan scan = table.newScan()
        .filter(Expressions.equal("id", 5))
        .select("id", "data");

    Schema projection = scan.schema();
    Iterable<CombinedScanTask> tasks = scan.planTasks();
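Each CombinedScanTask groups file-level scan tasks for a single reader. A minimal sketch of walking the planned tasks follows; what a reader actually does with each file is engine-specific and omitted here:

    for (CombinedScanTask task : tasks) {
      for (FileScanTask fileTask : task.files()) {
        // fileTask.file() is the DataFile to read; fileTask.residual() is the
        // filter expression that still needs to be applied to its rows
        System.out.println(fileTask.file().path());
      }
    }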

Update operations

Table also exposes operations that update the table. Each operation is configured with a builder and applied to the table by calling commit. Available operations to update a table are:

  • updateSchema — update the table schema
  • updateProperties — update table properties
  • updateLocation — update the table’s base location
  • newAppend — used to append data files
  • newFastAppend — used to append data files, will not compact metadata
  • newOverwrite — used to append data files and remove files that are overwritten
  • newDelete — used to delete data files
  • newRewrite — used to rewrite data files; will replace existing files with new versions
  • newTransaction — create a new table-level transaction
  • rewriteManifests — rewrite manifest data by clustering files, for faster scan planning
  • rollback — rollback the table state to a specific snapshot
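For example, a property update and a file append are each built from the table and applied with commit. This is a minimal sketch; dataFile stands in for a DataFile produced by a writer and is assumed to exist:

    // update a table property
    table.updateProperties()
        .set("commit.retry.num-retries", "10")
        .commit();

    // append an already-written data file (dataFile is a placeholder)
    table.newAppend()
        .appendFile(dataFile)
        .commit();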

Transactions

Transactions are used to commit multiple table changes in a single atomic operation. A transaction is used to create individual operations using factory methods, like newAppend, just like working with a Table. Operations created by a transaction are committed as a group when commitTransaction is called.

For example, deleting and appending a file in the same transaction:

    Transaction t = table.newTransaction();

    // commit operations to the transaction
    t.newDelete().deleteFromRowFilter(filter).commit();
    t.newAppend().appendFile(data).commit();

    // commit all the changes to the table
    t.commitTransaction();

Types

Iceberg data types are located in the org.apache.iceberg.types package.

Primitives

Primitive type instances are available from static methods in each type class. Types without parameters use get, and types like decimal use factory methods:

    Types.IntegerType.get()    // int
    Types.DoubleType.get()     // double
    Types.DecimalType.of(9, 2) // decimal(9, 2)

Nested types

Structs, maps, and lists are created using factory methods in type classes.

Like struct fields, map keys or values and list elements are tracked as nested fields. Nested fields track field IDs and nullability.

Struct fields are created using NestedField.optional or NestedField.required. Map value and list element nullability is set in the map and list factory methods.

    // struct<1 id: int, 2 data: optional string>
    StructType struct = StructType.of(
        Types.NestedField.required(1, "id", Types.IntegerType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get())
      );

    // map<1 key: int, 2 value: optional string>
    MapType map = MapType.ofOptional(
        1, 2,
        Types.IntegerType.get(),
        Types.StringType.get()
      );

    // array<1 element: int>
    ListType list = ListType.ofRequired(1, IntegerType.get());

Expressions

Iceberg’s expressions are used to configure table scans. To create expressions, use the factory methods in Expressions.

Supported predicate expressions are:

  • isNull
  • notNull
  • equal
  • notEqual
  • lessThan
  • lessThanOrEqual
  • greaterThan
  • greaterThanOrEqual
  • in
  • notIn
  • startsWith
  • notStartsWith

Supported expression operations are:

  • and
  • or
  • not

Constant expressions are:

  • alwaysTrue
  • alwaysFalse
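Predicates can be combined with the expression operations above. As a small sketch, a hypothetical filter over the id and data columns:

    // rows where id is null or data starts with "prefix"
    Expression expr = Expressions.or(
        Expressions.isNull("id"),
        Expressions.startsWith("data", "prefix"));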

Expression binding

When created, expressions are unbound. Before an expression is used, it will be bound to a data type to find the field ID the expression name represents, and to convert predicate literals.

For example, before using the expression lessThan("x", 10), Iceberg needs to determine which column "x" refers to and convert 10 to that column’s data type.

For instance, the expression could be bound to either struct<1 x: long, 2 y: long> or struct<11 x: int, 12 y: int>: in the first case "x" resolves to field ID 1 and 10 is converted to a long, while in the second "x" resolves to field ID 11 and 10 is converted to an int.

Expression example

    table.newScan()
        .filter(Expressions.greaterThanOrEqual("x", 5))
        .filter(Expressions.lessThan("x", 10));

Modules

Iceberg table support is organized in library modules:

  • iceberg-common contains utility classes used in other modules
  • iceberg-api contains the public Iceberg API, including expressions, types, tables, and operations
  • iceberg-arrow is an implementation of the Iceberg type system for reading and writing data stored in Iceberg tables using Apache Arrow as the in-memory data format
  • iceberg-aws contains implementations of the Iceberg API to be used with tables stored on AWS S3 and/or for tables defined using the AWS Glue data catalog
  • iceberg-core contains implementations of the Iceberg API and support for Avro data files; this is what processing engines should depend on
  • iceberg-parquet is an optional module for working with tables backed by Parquet files
  • iceberg-orc is an optional module for working with tables backed by ORC files (experimental)
  • iceberg-hive-metastore is an implementation of Iceberg tables backed by the Hive metastore Thrift client

The Iceberg project also has modules for adding Iceberg support to processing engines and associated tooling:

  • iceberg-spark is an implementation of Spark’s Datasource V2 API for Iceberg, with submodules for each supported Spark version (use the runtime jars for a shaded version)
  • iceberg-flink is an implementation of Flink’s Table and DataStream API for Iceberg (use iceberg-flink-runtime for a shaded version)
  • iceberg-hive3 is an implementation of Hive 3-specific SerDes for Timestamp, TimestampWithZone, and Date object inspectors (use iceberg-hive-runtime for a shaded version)
  • iceberg-mr is an implementation of MapReduce and Hive InputFormats and SerDes for Iceberg (use iceberg-hive-runtime for a shaded version for use with Hive)
  • iceberg-nessie is a module used to integrate Iceberg table metadata history and operations with Project Nessie
  • iceberg-data is a client library used to read Iceberg tables from JVM applications
  • iceberg-pig is an implementation of Pig’s LoadFunc API for Iceberg
  • iceberg-runtime generates a shaded runtime jar for Spark to integrate with Iceberg tables