Devlive Open Source Community. This search took 0.680 seconds and found 377 relevant results.
  • Java API

    6500 2024-06-29 《Apache Iceberg 1.5.2》
    Tables Table metadata Scanning File level Transactions Types Primitives Nested types Expressions Expression binding Expression example Modules Tables The main purpose...
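
A rough sketch of what the Java API entry above covers, assuming an Iceberg runtime and Hadoop dependencies on the classpath; the warehouse path, database, table, and column names below are placeholders invented for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.TableScan;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.hadoop.HadoopCatalog;

public class IcebergScanSketch {
    public static void main(String[] args) {
        // Load a table through a Hadoop catalog (warehouse path is a placeholder).
        HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "hdfs://warehouse/path");
        Table table = catalog.loadTable(TableIdentifier.of("db", "events"));

        // Plan a scan with a filter expression and a column projection.
        TableScan scan = table.newScan()
            .filter(Expressions.equal("event_type", "click"))
            .select("event_id", "ts");

        // Each FileScanTask points at a data file plus any residual filter to apply.
        scan.planFiles().forEach(task -> System.out.println(task.file().path()));
    }
}
```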
  • Design & Concepts

    6500 2024-07-01 《Apache Hudi 0.15.0》
    How does Hudi ensure atomicity? Does Hudi extend the Hive table layout? What concurrency control approaches does Hudi adopt? Hudi’s commits are based on transaction start time i...
  • 2.2. How To Use Spark Adaptive Query Execution (AQE) in Kyuubi

    How To Use Spark Adaptive Query Execution (AQE) in Kyuubi The Basics of AQE Dynamically Switch Join Strategies Dynamically Coalesce Shuffle Partitions Other Tips for Best Practise...
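
As a minimal illustration of the AQE settings the entry above lists, the sketch below enables adaptive execution on a plain SparkSession; in a Kyuubi deployment these keys would normally be set in spark-defaults.conf or passed as session configs rather than hard-coded, and the app name and query are made up.

```java
import org.apache.spark.sql.SparkSession;

public class AqeSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("aqe-sketch")
            .master("local[*]")
            // Turn on Adaptive Query Execution.
            .config("spark.sql.adaptive.enabled", "true")
            // Let AQE coalesce small post-shuffle partitions at runtime.
            .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
            // Let AQE split skewed partitions during sort-merge joins.
            .config("spark.sql.adaptive.skewJoin.enabled", "true")
            .getOrCreate();

        spark.range(1_000_000).createOrReplaceTempView("t");
        spark.sql("SELECT id % 16 AS k, count(*) AS c FROM t GROUP BY k").show();
        spark.stop();
    }
}
```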
  • SQL DDL

    6465 2024-06-30 《Apache Hudi 0.15.0》
    Spark SQL Create table Create non-partitioned table Create partitioned table Create table with record keys and ordering fields Create table from an external location Create Ta...
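
A hedged sketch of the "create partitioned table" case from the entry above, assuming the hudi-spark bundle is on the classpath and the Hudi session extension is enabled; the table name, columns, and key fields are invented for illustration.

```java
import org.apache.spark.sql.SparkSession;

public class HudiCreateTableSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("hudi-ddl-sketch")
            .master("local[*]")
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            // Required so Spark SQL understands Hudi DDL/DML.
            .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
            .getOrCreate();

        // Partitioned copy-on-write table keyed by uuid, ordered by ts on upserts.
        spark.sql(
            "CREATE TABLE IF NOT EXISTS hudi_trips (" +
            "  uuid STRING, rider STRING, fare DOUBLE, ts BIGINT, city STRING" +
            ") USING hudi " +
            "PARTITIONED BY (city) " +
            "TBLPROPERTIES (type = 'cow', primaryKey = 'uuid', preCombineField = 'ts')");
        spark.stop();
    }
}
```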
  • Clustering

    6460 2024-06-30 《Apache Hudi 0.15.0》
    Background How is compaction different from clustering? Clustering Architecture Overall, there are 2 steps to clustering Schedule clustering Execute clustering Clustering Use...
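
To make the "schedule clustering / execute clustering" steps above concrete, here is a rough sketch of inline clustering configured through write options on a Hudi datasource write; the path, table, and column names are placeholders, and which clustering options are worth tuning depends on the workload.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class HudiInlineClusteringSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("hudi-clustering-sketch")
            .master("local[*]")
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .getOrCreate();

        // A single made-up record just to have something to write.
        Dataset<Row> df = spark.sql("SELECT 'a1' AS uuid, 42.0 AS fare, 1000L AS ts, 'sf' AS city");

        df.write().format("hudi")
            .option("hoodie.table.name", "hudi_trips")
            .option("hoodie.datasource.write.recordkey.field", "uuid")
            .option("hoodie.datasource.write.precombine.field", "ts")
            .option("hoodie.datasource.write.partitionpath.field", "city")
            // Schedule and execute clustering inline every few commits.
            .option("hoodie.clustering.inline", "true")
            .option("hoodie.clustering.inline.max.commits", "4")
            // Sort by city while rewriting small files into larger ones.
            .option("hoodie.clustering.plan.strategy.sort.columns", "city")
            .mode(SaveMode.Append)
            .save("/tmp/hudi_trips");

        spark.stop();
    }
}
```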
  • Docker Deployment

    Prerequisites 1. Install docker 2. Install docker-compose Deploy Apache StreamPark™ 1. Deploy Apache StreamPark™ with h2 and docker-compose 2. Deploy 3. Configure flink home 4. Configure a session cluster 5. Submit a Flink job Use an existing MySQL ...
  • JDBC Connector

    JDBC connection configuration semantic configuration EXACTLY_ONCE AT_LEAST_ONCE && NONE Other configuration Reading data with JDBC queryFunc to produce a SQL statement resultFunc to process the query results Reading and writing with JDBC Generating the target SQL from the data stream Setting the write batch size Multi-instance JDBC support Manually specifying the JDBC connection inf...
  • 2.1. How To Use Spark Dynamic Resource Allocation (DRA) in Kyuubi

    How To Use Spark Dynamic Resource Allocation (DRA) in Kyuubi The Basics of Dynamic Resource Allocation How to Enable Dynamic Resource Allocation Dynamic Resource Allocation w/ Ext...
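
A minimal sketch of the dynamic resource allocation settings the entry above discusses, again shown on a bare SparkSession for illustration; with Kyuubi these would typically live in spark-defaults.conf, and whether you rely on shuffle tracking or an external shuffle service depends on the cluster. The executor limits below are arbitrary example values.

```java
import org.apache.spark.sql.SparkSession;

public class DraSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("dra-sketch")
            .master("local[*]")
            // Let Spark grow and shrink the executor pool with the workload.
            .config("spark.dynamicAllocation.enabled", "true")
            // Track shuffle data so DRA can work without an external shuffle service.
            .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
            .config("spark.dynamicAllocation.minExecutors", "0")
            .config("spark.dynamicAllocation.maxExecutors", "20")
            .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
            .getOrCreate();

        spark.range(1_000_000).selectExpr("id % 8 AS k").groupBy("k").count().show();
        spark.stop();
    }
}
```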
  • MongoDB

    Support Those Engines Key Features Description Supported DataSource Info Data Type Mapping Source Options Tips How to Create a MongoDB Data Synchronization Job Parameter I...
  • HTTP Connector

    HTTP asynchronous write Writing with Apache StreamPark™ Types supported by HTTP asynchronous write HTTP asynchronous write configuration parameters Writing data asynchronously over HTTP Other configuration Some backend services receive data over HTTP requests; in that scenario Apache Flink can write result data via HTTP requests, but Apache Flink does not currently provide an official way to write data over HTTP...