Devlive Open Source Community. This search took 0.496 seconds and found 839 related results.
  • JsonPath

    Description Options common options [string] fields [array] option src_field dest_field dest_type path Read Json Example Read SeatunnelRow Example Changelog JsonPath tr...
  • Sink Common Options

    source_table_name [string] parallelism [int] Examples Common parameters of sink connectors name type required default value source_table_name string no - p...
  • DAMENG

    DAMENG data source Whether natively supported DAMENG data source Data source: select DAMENG Data source name: enter a name for the data source Description: enter a description of the data source IP/hostname: enter the IP for connecting to DAMENG Port: enter the port for connecting to DAMENG Username: set the username for connecting to DAMENG Password: set the password for connecting to DAMENG Database name: enter the DAMENG connection's sch...
  • Object

    1566 2024-06-05 《Ramda 0.4.0》
    clone values eqProps keys omit pick pickAll project prop props keysIn path valuesIn toPairs toPairsIn clone {*} → {*} Parameters value: The object or array to c...
  • State Management and Watermarks

    1557 2025-03-18 《Apache Gobblin 0.17.0》
    Managing Watermarks in a Job Basics Task Failures Multi-Dataset Jobs Gobblin State Deep Dive State class hierarchy How States are Used in a Gobblin Job This page has two p...
  • Replace

    Description Options replace_field [string] pattern [string] replacement [string] is_regex [boolean] replace_first [boolean] common options [string] Example Job Config Exam...
  • Sentry

    Description Key features Options dsn [string] env [string] release [string] cacheDirPath [string] enableExternalConfiguration [boolean] maxCacheItems [number] flushTimeoutM...
  • Writes

    1543 2025-03-11 《Apache Iceberg 1.8.1》
    Spark Writes To use Iceberg in Spark, first configure Spark catalogs . Some plans are only available when using Iceberg SQL extensions in Spark 3. Iceberg uses Apache Spark’s D...
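    The snippet above stops at catalog configuration; below is a minimal sketch, assuming a Hadoop-type catalog named "local", a local warehouse path, and an illustrative table name (none of these values come from the source), of how an Iceberg catalog can be configured and written to from spark-shell with the DataFrameWriterV2 API:

    // spark-shell --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1
    // Minimal sketch: register a Hadoop-backed Iceberg catalog, then create a table with DataFrameWriterV2.
    // The catalog name "local", the warehouse path, and the table name are illustrative assumptions.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("iceberg-write-sketch")
      .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.local.type", "hadoop")
      .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg/warehouse")
      .getOrCreate()

    import spark.implicits._

    val df = Seq((1L, "a"), (2L, "b")).toDF("id", "data")

    // Creates (or replaces) the table on this write; use .append() for later batches.
    df.writeTo("local.db.sample").createOrReplace()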
  • Schedulers

    1539 2025-03-19 《Apache Gobblin 0.17.0》
    Introduction Quartz Azkaban Oozie Launching Gobblin in Local Mode Example Config Files Uploading Files to HDFS Adding Gobblin jar Dependencies Launching the Job Launching ...
  • Streaming Writes

    1523 2024-06-30 《Apache Hudi 0.15.0》
    Spark Streaming Spark Streaming You can write Hudi tables using spark’s structured streaming. Scala // spark-shell // prepare to stream write to new table import org ....
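    The truncated Scala snippet above comes from the Hudi docs' spark-shell example; as a stand-in, here is a minimal sketch, assuming a toy "rate" source, an illustrative table name, key and precombine fields, and local paths (all assumptions, not values from the source), of a structured-streaming write to a new Hudi table:

    // spark-shell --packages org.apache.hudi:hudi-spark3.5-bundle_2.12:0.15.0
    // Minimal sketch of a structured-streaming write to a new Hudi table.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder()
      .appName("hudi-streaming-write-sketch")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // Toy streaming source; in practice this would be Kafka, files, etc.
    // The rate source yields columns `timestamp` and `value`; rename `value` to use it as the record key.
    val source = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "10")
      .load()
      .withColumnRenamed("value", "id")

    val query = source.writeStream
      .format("hudi")
      .option("hoodie.table.name", "hudi_stream_tbl")                       // assumed table name
      .option("hoodie.datasource.write.recordkey.field", "id")
      .option("hoodie.datasource.write.precombine.field", "timestamp")
      .option("checkpointLocation", "/tmp/hudi/checkpoints/hudi_stream_tbl")
      .outputMode("append")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start("/tmp/hudi/hudi_stream_tbl")                                   // assumed base path

    query.awaitTermination()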