Devlive Open Source Community
  • IoTDB

    Support Those Engines Description Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Supported DataSource Info Data Type Mapping Sink Options E...
  • Scan Executors

    4271 2024-06-22 《Apache Accumulo 2.x》
    Configuring and using Scan Executors Configuring and using Scan Prioritizers. Providing hints from the client side. Accumulo scans operate by repeatedly fetching batches of dat...
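To make the client-side hint mechanism concrete, here is a minimal sketch against the Accumulo 2.x client API. The properties-file path, table name, and the "scan_type"/"background" hint pair are placeholders; how a hint is interpreted depends entirely on the scan dispatcher and executors configured on the server side.

```java
import java.util.Map;
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class ScanHintExample {
    public static void main(String[] args) throws Exception {
        try (AccumuloClient client =
                 Accumulo.newClient().from("/path/to/accumulo-client.properties").build();
             Scanner scanner = client.createScanner("mytable", Authorizations.EMPTY)) {
            // Execution hints are opaque key/value pairs passed to the server-side
            // scan dispatcher; "scan_type" / "background" are placeholder values whose
            // meaning depends on how the dispatcher and executors were configured.
            scanner.setExecutionHints(Map.of("scan_type", "background"));
            for (Map.Entry<Key, Value> entry : scanner) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }
}
```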
  • deployment

    Deployment SeaTunnel Engine 1. Download 2. Config SEATUNNEL_HOME 3. Config SeaTunnel Engine JVM options 4. Config SeaTunnel Engine 4.1 Backup count 4.2 Slot service 4.3 Checkpo...
  • Attach-Detach

    4264 2024-05-25 《Apache JDO 3.2.1》
    Detach All On Commit Copy On Attach Serialization of Detachable classes JDO provides an interface to the persistence of objects. JDO 1.0 didn’t provide a way of taking an objec...
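As an illustration of the detach/attach round trip that entry describes, a minimal sketch using the standard javax.jdo API. Product stands in for any persistence-capable class declared detachable, and "MyUnit" is a placeholder persistence-unit name.

```java
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.annotations.PersistenceCapable;

// Minimal detachable class; JDO bytecode enhancement is assumed at build time.
@PersistenceCapable(detachable = "true")
class Product {
    String name;
    double price;
    Product(String name, double price) { this.name = name; this.price = price; }
    void setPrice(double price) { this.price = price; }
}

public class DetachAttachExample {
    public static void main(String[] args) {
        // "MyUnit" is a placeholder persistence-unit name.
        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory("MyUnit");

        PersistenceManager pm = pmf.getPersistenceManager();
        pm.currentTransaction().begin();
        Product managed = pm.makePersistent(new Product("book", 12.50));
        // detachCopy returns a copy usable after the PM closes (e.g. in a UI layer).
        // Alternatively, pm.setDetachAllOnCommit(true) detaches everything at commit.
        Product detached = pm.detachCopy(managed);
        pm.currentTransaction().commit();
        pm.close();

        detached.setPrice(9.99); // modify the object offline

        // Attaching: makePersistent on a detached object merges the changes back.
        PersistenceManager pm2 = pmf.getPersistenceManager();
        pm2.currentTransaction().begin();
        pm2.makePersistent(detached);
        pm2.currentTransaction().commit();
        pm2.close();
    }
}
```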
  • My Hours

    Support Those Engines Key Features Description Key features Supported DataSource Info Source Options How to Create a My Hours Data Synchronization Jobs Parameter Interpretat...
  • Hive

    Description Key features Options table_name [string] metastore_uri [string] hdfs_site_path [string] hive_site_path [string] hive.hadoop.conf [map] hive.hadoop.conf-path [str...
  • AWS Datasync

    DataSync Node Overview Create Task Task Example Node-Specific Parameters Environment Configuration DataSync Node Overview AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS Storage services, and between different AWS Storage services. Components supported by DataSync: Network File ...
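For a sense of how such a task is driven programmatically, a hedged sketch using the AWS SDK for Java v2; the task ARN is a placeholder for an existing DataSync task.

```java
import software.amazon.awssdk.services.datasync.DataSyncClient;
import software.amazon.awssdk.services.datasync.model.StartTaskExecutionResponse;

public class StartDataSyncTask {
    public static void main(String[] args) {
        // Placeholder ARN; substitute the ARN of your own DataSync task.
        String taskArn = "arn:aws:datasync:us-east-1:123456789012:task/task-0abc1234def567890";
        try (DataSyncClient client = DataSyncClient.create()) {
            // Kicks off one execution of a previously configured transfer task.
            StartTaskExecutionResponse resp =
                client.startTaskExecution(r -> r.taskArn(taskArn));
            System.out.println("Started execution: " + resp.taskExecutionArn());
        }
    }
}
```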
  • Partitioning

    4239 2024-06-29 《Apache Iceberg 1.5.2》
    What is partitioning? What does Iceberg do differently? Partitioning in Hive Problems with Hive partitioning Iceberg’s hidden partitioning What is partitioning? Partitioning...
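A small sketch of the hidden-partitioning idea using Iceberg's Java API: the partition value day(event_ts) is derived from an ordinary column, so queries filter on event_ts directly and readers never manage a separate partition column the way they must in Hive. Column names here are illustrative.

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class HiddenPartitioningExample {
    public static void main(String[] args) {
        // Table schema with an event timestamp column.
        Schema schema = new Schema(
            Types.NestedField.required(1, "id", Types.LongType.get()),
            Types.NestedField.required(2, "event_ts", Types.TimestampType.withZone()));

        // Hidden partitioning: Iceberg computes day(event_ts) from the column
        // itself, so no extra partition column appears in the table schema.
        PartitionSpec spec = PartitionSpec.builderFor(schema)
            .day("event_ts")
            .build();

        System.out.println(spec); // inspect the derived partition spec
    }
}
```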
  • Quick Start

    How to Use Deploy a DataStream Job Deploy a FlinkSql Job Job Startup Process How to Use The previous chapter covered the installation of the one-stop platform streampark-console in detail; this chapter shows how to use streampark-console to quickly deploy and run a job. streampark-console targets standard Flink programs (written according to Fl...
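As a reference point, a minimal standard Flink DataStream program of the kind such a console packages and submits; the class name and job name are illustrative.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HelloStreamParkJob {
    public static void main(String[] args) throws Exception {
        // Standard Flink entry point: the console builds and submits this jar.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("hello", "streampark", "hello", "flink")
           .filter(word -> word.startsWith("h")) // trivial transformation for the demo
           .print();

        env.execute("hello-streampark"); // job name shown in the Flink UI
    }
}
```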
  • Batch Writes

    4230 2024-06-30 《Apache Hudi 0.15.0》
    Spark DataSource API The hudi-spark module offers the DataSource API to write a Spark DataFrame into a Hudi table. There are a number of options available: HoodieWriteConfig :...
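To make that concrete, a minimal sketch of a batch write through the Spark DataSource API; the input path and the record-key, precombine, and partition-path fields are placeholders for your own data.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class HudiBatchWrite {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("hudi-batch-write")
            .master("local[*]")
            // Hudi requires Kryo serialization in Spark.
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .getOrCreate();

        // Hypothetical input; in practice this comes from your source data.
        Dataset<Row> df = spark.read().json("/tmp/input/trips.json");

        df.write()
          .format("hudi")
          .option("hoodie.table.name", "trips")
          .option("hoodie.datasource.write.recordkey.field", "uuid")
          .option("hoodie.datasource.write.precombine.field", "ts")
          .option("hoodie.datasource.write.partitionpath.field", "city")
          .mode(SaveMode.Append)
          .save("/tmp/hudi/trips");

        spark.stop();
    }
}
```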