Devlive 开源社区 — search results: 280 matches found.
  • Connectors

    375 2024-07-05 《Apache Kyuubi 1.9.1》
    Connectors for Spark SQL Query Engine Connectors For Flink SQL Query Engine Connectors for Hive SQL Query Engine Connectors For Trino SQL Engine
  • Engine Side Extensions

    348 2024-07-05 《Apache Kyuubi 1.9.1》
    Extensions for Spark Extensions for Flink Extensions for Hive Extensions for Trino
  • Flink Writes

    Flink Writes Iceberg supports batch and streaming writes with Apache Flink's DataStream API and Table API. Writing with SQL Iceberg supports both INSERT INTO and INSERT OVERWRIT...
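As the snippet notes, Iceberg tables accept both `INSERT INTO` and `INSERT OVERWRITE` from Flink SQL. A minimal sketch, assuming hypothetical `db.events` and `db.staging_events` tables:

```sql
-- Append rows to an Iceberg table (table names are illustrative).
INSERT INTO db.events SELECT id, data FROM db.staging_events;

-- INSERT OVERWRITE replaces the data in the partitions produced by the
-- query; in Flink it is supported in batch execution mode.
INSERT OVERWRITE db.events PARTITION (dt = '2024-07-05')
SELECT id, data FROM db.staging_events WHERE dt = '2024-07-05';
```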
  • Java API

    Iceberg Java API Tables The main purpose of the Iceberg API is to manage table metadata, like the schema, partition spec, metadata files, and the data files that store table data. Table metad...
  • Ingestion

    327 2024-06-27 《Apache Hudi 0.15.0》
    Using Spark Using Flink Using Kafka Connect
  • Introduction

    Documentation Apache Iceberg is an open table format for huge analytic datasets. Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive, and Impala u...
  • Extensions for Flink

    292 2024-07-05 《Apache Kyuubi 1.9.1》
    Connectors For Flink SQL Query Engine Auxiliary SQL Functions
  • Configuration

    Configuration Table properties Iceberg tables support table properties that configure table behavior, like the default split size for readers. Read properties Property Defaul...
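Table properties like the reader split size are set through DDL. A sketch in Spark SQL, assuming a hypothetical `db.sample` table; `read.split.target-size` is the Iceberg read property governing target split size:

```sql
-- Raise the target read split size to 256 MB (value is in bytes).
ALTER TABLE db.sample SET TBLPROPERTIES (
  'read.split.target-size' = '268435456'
);
```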
  • DDL

    Spark DDL To use Iceberg in Spark, first configure Spark catalogs . Iceberg uses Apache Spark’s DataSourceV2 API for data source and catalog implementations. CREATE TABLE Spark...
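With a Spark catalog configured, creating an Iceberg table uses the standard `USING iceberg` clause. A minimal sketch, with catalog, database, and column names chosen for illustration:

```sql
-- Create a day-partitioned Iceberg table via Spark's DataSourceV2 catalog.
CREATE TABLE my_catalog.db.sample (
  id   BIGINT,
  data STRING,
  ts   TIMESTAMP
) USING iceberg
PARTITIONED BY (days(ts));
```

The `days(ts)` transform is one of Iceberg's hidden-partitioning transforms, so queries filter on `ts` directly without knowing the partition layout.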
  • Flink Queries

    Flink Queries Iceberg supports streaming and batch reads with Apache Flink's DataStream API and Table API. Reading with SQL Iceberg supports both streaming and batch reads in Flink...
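The batch/streaming split the snippet mentions is controlled by Flink's runtime mode plus per-query options. A sketch, assuming a hypothetical `db.sample` table:

```sql
-- Batch read: scan the table's current snapshot once.
SET 'execution.runtime-mode' = 'batch';
SELECT * FROM db.sample;

-- Streaming read: continuously consume newly committed snapshots,
-- polling for new data at the given interval.
SET 'execution.runtime-mode' = 'streaming';
SELECT * FROM db.sample
  /*+ OPTIONS('streaming'='true', 'monitor-interval'='10s') */;
```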