DDL commands: CREATE Catalog (Hive catalog). This creates an Iceberg catalog named hive_catalog that can be configured using 'catalog-type'='hive', which loads tables from Hive metastore.
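The CREATE CATALOG statement this refers to looks roughly like the following Flink SQL sketch; the metastore URI and warehouse path are placeholders, not values from this page:

```sql
-- Minimal Hive-backed Iceberg catalog; replace the URI and warehouse path with your own.
CREATE CATALOG hive_catalog WITH (
  'type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://localhost:9083',
  'warehouse'='hdfs://nn:8020/warehouse/path'
);
```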
Querying from Apache Spark: To read an Apache XTable™ (Incubating) synced target table (regardless of the table format) in Apache Spark, locally or on services like Amazon EMR and Google Cloud's Dataproc, you do not need jars or configurations beyond those required by the respective table format.
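For example, once a Spark session is configured with an Iceberg catalog, the synced table is queried like any other table; in this sketch the catalog, database, and table names (iceberg_catalog.xtable_db.synced_table) are assumed placeholders:

```sql
-- Assumes a Spark session already configured with an Iceberg catalog
-- named iceberg_catalog; all identifiers below are placeholders.
SELECT *
FROM iceberg_catalog.xtable_db.synced_table
LIMIT 10;
```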
Spark Queries: To use Iceberg in Spark, first configure Spark catalogs. Iceberg uses Apache Spark's DataSourceV2 API for data source and catalog implementations. Querying with SQL: in Spark 3, tables are addressed with identifiers that include the catalog name.
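A short Spark SQL sketch; prod, db, and table stand in for a configured catalog, namespace, and table name:

```sql
-- catalog: prod, namespace: db, table: table
SELECT * FROM prod.db.table;

-- Time travel by timestamp (Spark 3.3+ SQL syntax)
SELECT * FROM prod.db.table TIMESTAMP AS OF '2024-01-01 00:00:00';
```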
Querying from Google BigQuery (Iceberg tables): To read an Apache XTable™ (Incubating) synced Iceberg table from BigQuery, you have two options: using the Iceberg JSON metadata file to create an Iceberg BigLake table, or using BigLake Metastore to create the Iceberg BigLake table.
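For the first option, a hedged BigQuery SQL sketch; the dataset, connection, and Cloud Storage metadata path are placeholders:

```sql
-- Placeholders: mydataset, myproject.us.my-connection, and the GCS metadata path.
CREATE EXTERNAL TABLE mydataset.xtable_iceberg_table
WITH CONNECTION `myproject.us.my-connection`
OPTIONS (
  format = 'ICEBERG',
  uris = ['gs://my-bucket/path/to/table/metadata/v3.metadata.json']
);
```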
Querying from StarRocks: StarRocks allows you to query table formats like Hudi, Delta, and Iceberg tables using its external catalog feature. Users do not need additional configurations.
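A minimal StarRocks sketch, assuming the synced Iceberg table is registered in a Hive metastore; the catalog name, metastore address, and table identifiers are placeholders:

```sql
-- Placeholder catalog name and metastore address.
CREATE EXTERNAL CATALOG iceberg_catalog
PROPERTIES (
    "type" = "iceberg",
    "iceberg.catalog.type" = "hive",
    "hive.metastore.uris" = "thrift://metastore-host:9083"
);

-- Query the synced table through the external catalog.
SELECT * FROM iceberg_catalog.my_db.my_table LIMIT 10;
```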
Iceberg JDBC Integration (JDBC Catalog): Iceberg supports using a table in a relational database to manage Iceberg tables through JDBC. The database that JDBC connects to must support atomic transactions, so that the catalog implementation can provide atomic Iceberg table commits and read serializable isolation.
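A hedged sketch of defining such a catalog, written here as Flink SQL using the generic catalog-impl mechanism; the JDBC URI, credentials, and warehouse path are placeholders:

```sql
-- Placeholder connection details; properties prefixed with 'jdbc.' are
-- passed through to org.apache.iceberg.jdbc.JdbcCatalog's JDBC client.
CREATE CATALOG jdbc_catalog WITH (
  'type'='iceberg',
  'catalog-impl'='org.apache.iceberg.jdbc.JdbcCatalog',
  'uri'='jdbc:postgresql://localhost:5432/iceberg_catalog',
  'jdbc.user'='iceberg',
  'jdbc.password'='secret',
  'warehouse'='s3://my-bucket/warehouse'
);
```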
Querying from Snowflake: Currently, Snowflake supports Iceberg tables through External Tables as well as native Iceberg Tables. Iceberg on Snowflake is supported in public preview.
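A hedged Snowflake SQL sketch for registering an externally managed Iceberg table; the external volume, catalog integration, and metadata file path are placeholders and are assumed to already exist:

```sql
-- Placeholders: my_ext_volume, my_catalog_integration, and the metadata path.
CREATE ICEBERG TABLE xtable_synced_table
  EXTERNAL_VOLUME = 'my_ext_volume'
  CATALOG = 'my_catalog_integration'
  METADATA_FILE_PATH = 'path/to/table/metadata/v3.metadata.json';

SELECT * FROM xtable_synced_table LIMIT 10;
```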
Reliability: Iceberg was designed to solve correctness problems that affect Hive tables running in S3. Hive tables track data files using both a central metastore for partitions and a file system for individual files, which makes atomic changes to a table's contents impossible.
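Iceberg instead tracks the complete set of data files for each snapshot in table metadata, which can be inspected directly; a hedged Spark SQL sketch using Iceberg's metadata tables, where prod.db.table is a placeholder identifier:

```sql
-- Snapshot history recorded in table metadata rather than inferred by listing.
SELECT snapshot_id, committed_at, operation FROM prod.db.table.snapshots;

-- The complete file list tracked by the current snapshot.
SELECT file_path, record_count FROM prod.db.table.files;
```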
Performance: Iceberg is designed for huge tables and is used in production where a single table can contain tens of petabytes of data. Even multi-petabyte tables can be read from a single node, without needing a distributed SQL engine to sift through table metadata.
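Because planning is driven by table metadata, partition and file-level summaries can be inspected with plain queries; a hedged Spark SQL sketch with placeholder identifiers (prod.db.table, event_date):

```sql
-- Per-partition file and record counts summarized from metadata.
SELECT partition, file_count, record_count FROM prod.db.table.partitions;

-- Predicates on partition and sorted columns let planning prune
-- manifests and data files before any data is read.
SELECT * FROM prod.db.table WHERE event_date = DATE '2024-01-01';
```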