Devlive Open Source Community — search completed in 0.686 seconds, 377 relevant results found.
  • HdfsFile

    Support Those Engines Key Features Description Supported DataSource Info Source Options delimiter/field_delimiter [string] compress_codec [string] encoding [string] Tips T...
  • Quick Start

    How to Use · Deploying a DataStream Job · Deploying a FlinkSql Job · Job Startup Flow · How to Use — The previous chapter covered the installation of the one-stop platform streampark-console in detail; this chapter looks at how to use streampark-console to quickly deploy and run a job. For standard Flink programs, streampark-console ( following Fl...
  • Flink Configuration

    6086 2024-06-29 《Apache Iceberg 1.5.2》
    Catalog Configuration Runtime configuration Read options Write options Catalog Configuration A catalog is created and named by executing the following query (replace <catalog...
  • Kingbase

    Support Connector Version Support Those Engines Key Features Description Supported DataSource Info Database Dependency Data Type Mapping Source Options Tips Task Example S...
  • Z-Ordering Support

    6075 2024-07-05 《Apache Kyuubi 1.9.1》
    Introduction Supported table format Supported column data type How to use Optimize history data Syntax Examples Optimize incremental data To improve query speed, Kyuubi su...
  • Apache Paimon (Incubating)

    6063 2024-07-05 《Apache Kyuubi 1.9.1》
    Apache Paimon (Incubating) Integration Dependencies Configurations Apache Paimon (Incubating) Operations Apache Paimon (Incubating) is a streaming data lake platform that suppo...
  • OssJindoFile

    Support Those Engines Key features Description Options path [string] file_format_type [string] bucket [string] access_key [string] access_secret [string] endpoint [string] ...
  • DB2

    Support Those Engines Description Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Supported DataSource Info Data Type Mapping Sink Options Ti...
  • Paimon

    Description Key features Options Examples Single table Single table (specify Hadoop HA config and Kerberos config) Single table with write props of Paimon Multiple table P...
  • Procedures

    6030 2025-03-11 《Apache Iceberg 1.8.1》
    Spark Procedures To use Iceberg in Spark, first configure Spark catalogs . Stored procedures are only available when using Iceberg SQL extensions in Spark 3. Usage Procedures c...