Devlive 开源社区
  • Extension Links

    Background / How to create an extension link / Using extension links. Background: In production practice, Flink jobs often need to be integrated with external services to be managed well, such as code repositories, metrics-monitoring pages, real-time logs, or checkpoint/savepoint folders on HDFS/OSS. As a one-stop Flink DevOps platform, if Streampark can integrate these services in the form of extension links, in one unified place...
  • Nessie

    2025-03-11 《Apache Iceberg 1.8.1》
    Iceberg Nessie Integration Iceberg provides integration with Nessie through the iceberg-nessie module. This section describes how to use Iceberg with Nessie. Nessie provides seve...
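    A minimal sketch of wiring the Nessie catalog into Spark through Iceberg catalog properties; the uri, ref, and warehouse values are placeholders, not taken from the excerpt above:

      # spark-defaults.conf: register an Iceberg SparkCatalog backed by Nessie
      spark.sql.catalog.nessie=org.apache.iceberg.spark.SparkCatalog
      spark.sql.catalog.nessie.catalog-impl=org.apache.iceberg.nessie.NessieCatalog
      spark.sql.catalog.nessie.uri=http://localhost:19120/api/v1   # placeholder endpoint
      spark.sql.catalog.nessie.ref=main                            # Nessie branch to work against
      spark.sql.catalog.nessie.warehouse=s3://bucket/warehouse     # placeholder path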
  • Command usage

    Command Entrypoint / Options / Example. Command Entrypoint: Spark 2: bin/start-seatunnel-spark-2-connector-v2.sh; Spark 3: bin/start-seatunnel-spark-3...
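    For illustration, one way to invoke the Spark 3 entrypoint; the config path and the cluster options shown are assumed values, not taken from the excerpt:

      # submit a SeaTunnel job on Spark 3 (paths and cluster options are placeholders)
      bin/start-seatunnel-spark-3-connector-v2.sh \
        --config config/v2.batch.config.template \
        --master yarn \
        --deploy-mode client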
  • Flink Writes

    Flink Writes Iceberg supports batch and streaming writes with Apache Flink's DataStream API and Table API. Writing with SQL: Iceberg supports both INSERT INTO and INSERT OVERWRIT...
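    A minimal Flink SQL sketch of both statements, assuming an Iceberg table named sample with columns (id, data):

      -- append rows to the table
      INSERT INTO sample VALUES (1, 'a'), (2, 'b');
      -- replace existing data with the query result
      -- (INSERT OVERWRITE only runs in Flink batch execution mode)
      INSERT OVERWRITE sample SELECT id, data FROM other_table;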
  • Java API

    Iceberg Java API Tables The main purpose of the Iceberg API is to manage table metadata, like schema, partition spec, metadata, and data files that store table data. Table metad...
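    A short sketch of inspecting table metadata through the Java API; the Hadoop catalog, warehouse path, and table identifier are assumptions for illustration:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.iceberg.Table;
      import org.apache.iceberg.catalog.TableIdentifier;
      import org.apache.iceberg.hadoop.HadoopCatalog;

      public class InspectTable {
          public static void main(String[] args) {
              // placeholder warehouse path and table name
              HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "hdfs://nn:8020/warehouse");
              Table table = catalog.loadTable(TableIdentifier.of("db", "events"));
              System.out.println(table.schema());          // column definitions
              System.out.println(table.spec());            // partition spec
              System.out.println(table.currentSnapshot()); // latest snapshot, or null for an empty table
          }
      }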
  • Configuration

    Configuration Table properties Iceberg tables support table properties to configure table behavior, like the default split size for readers. Read properties Property Defaul...
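    Table properties can also be changed after creation; a minimal Spark SQL sketch using the read split size property (table name and value are illustrative; the default is 134217728 bytes):

      -- raise the target split size for readers to 256 MB
      ALTER TABLE prod.db.sample SET TBLPROPERTIES (
          'read.split.target-size' = '268435456'
      );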
  • Flink Queries

    Flink Queries Iceberg supports streaming and batch reads with Apache Flink's DataStream API and Table API. Reading with SQL: Iceberg supports both streaming and batch reads in Flink...
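    A minimal sketch of the streaming read path in Flink SQL, assuming a table named sample; the monitor interval is an illustrative value:

      -- older Flink versions must enable dynamic table options for the hint below
      SET table.dynamic-table-options.enabled=true;
      -- continuously read new snapshots, polling every 30 seconds
      SELECT * FROM sample /*+ OPTIONS('streaming'='true', 'monitor-interval'='30s') */;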
  • Introduction

    Documentation Apache Iceberg is an open table format for huge analytic datasets. Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala u...
  • Connectors For Flink SQL Query Engine

    2024-07-05 《Apache Kyuubi 1.9.1》
    Apache Paimon (Incubating) Hudi Iceberg
  • DDL

    Spark DDL To use Iceberg in Spark, first configure Spark catalogs. Iceberg uses Apache Spark's DataSourceV2 API for data source and catalog implementations. CREATE TABLE Spark...
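    A hedged CREATE TABLE sketch in the documented Spark DDL style; the catalog, namespace, and partition transforms are illustrative:

      -- create an Iceberg table with hidden-partitioning transforms
      CREATE TABLE prod.db.sample (
          id   bigint,
          data string,
          ts   timestamp
      )
      USING iceberg
      PARTITIONED BY (days(ts), bucket(16, id));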