Devlive 开源社区
  • Oracle

    Description Support Those Engines Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Supported DataSource Info Database Dependency Data Type Map...
  • SQL DDL

    6809 2024-06-30 《Apache Hudi 0.15.0》
    Spark SQL Create table Create non-partitioned table Create partitioned table Create table with record keys and ordering fields Create table from an external location Create Ta...
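    A minimal sketch of the kind of Spark SQL DDL this page covers; the table name hudi_table, its columns, and the dt partition field are illustrative, and the table properties follow Hudi 0.15.0's Spark SQL syntax:

    CREATE TABLE hudi_table (
        id    INT,
        name  STRING,
        price DOUBLE,
        ts    BIGINT,
        dt    STRING
    ) USING hudi
    TBLPROPERTIES (
        primaryKey = 'id',       -- record key field
        preCombineField = 'ts'   -- ordering field used to pick the latest record
    )
    PARTITIONED BY (dt);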
  • Clustering

    6796 2024-06-30 《Apache Hudi 0.15.0》
    Background How is compaction different from clustering? Clustering Architecture Overall, there are 2 steps to clustering Schedule clustering Execute clustering Clustering Use...
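    Scheduling and executing clustering are separate steps; a hedged sketch using Hudi's Spark SQL call procedures (run_clustering and its op/order parameters are as documented for recent Hudi releases; hudi_table is illustrative — verify against your Hudi version):

    -- Step 1: schedule a clustering plan without running it
    CALL run_clustering(table => 'hudi_table', op => 'schedule');

    -- Step 2: execute pending plans, sorting records by ts
    CALL run_clustering(table => 'hudi_table', op => 'execute', order => 'ts');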
  • 2.2. How To Use Spark Adaptive Query Execution (AQE) in Kyuubi

    How To Use Spark Adaptive Query Execution (AQE) in Kyuubi The Basics of AQE Dynamically Switch Join Strategies Dynamically Coalesce Shuffle Partitions Other Tips for Best Practise...
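    The settings behind those headings are ordinary Spark configurations and can be toggled per session; a minimal sketch (the property names are standard Spark 3.x AQE keys, the values are illustrative):

    -- Turn on AQE (already the default since Spark 3.2)
    SET spark.sql.adaptive.enabled=true;

    -- Coalesce small shuffle partitions at runtime
    SET spark.sql.adaptive.coalescePartitions.enabled=true;
    SET spark.sql.adaptive.advisoryPartitionSizeInBytes=128m;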
  • Query API

    6772 2024-05-25 《Apache JDO 3.2.1》
    Creating a query Closing a query Named Query Saving a Query as a Named Query Query Extensions Setting query parameters Compiling a query Executing a query Result Class Con...
  • 2. Getting Started With Kyuubi and DBeaver

    Getting Started With Kyuubi and DBeaver What is DBeaver Preparation Get DBeaver and Install Get Kyuubi Started Configurations Start DBeaver Select a database Edit the Driver ...
  • SQL

    SQL Overview Creating a data source Creating a task Task parameters Task example Hive table creation example Create a temporary table in Hive and write data Query the result in Hive after the task succeeds Example using pre-SQL and post-SQL Notes SQL Overview The SQL task type is used to connect to a database and execute the corresponding SQL. Creating a data source Refer to Data Source Configuration and the Data Source Center. Creating a task Click the proj...
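    For the Hive example above, the task body boils down to ordinary HiveQL; a minimal sketch (tmp_user and its columns are hypothetical, and note that Hive temporary tables are visible only within the creating session):

    -- Create a temporary table and write data
    CREATE TEMPORARY TABLE tmp_user (id INT, name STRING);
    INSERT INTO tmp_user VALUES (1, 'alice'), (2, 'bob');

    -- Query the result after the task runs
    SELECT id, name FROM tmp_user;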
  • Design

    6713 2024-06-21 《Apache Accumulo 2.x》
    Background Data Model Architecture Components Tablet Server Garbage Collector Manager Tracer Monitor Compactor (experimental) Compaction Coordinator (experimental) Scan S...
  • JDBC Connector

    JDBC configuration semantic configuration EXACTLY_ONCE AT_LEAST_ONCE && NONE Other configurations JDBC data reading queryFunc obtains a SQL statement resultFunc processes the query results JDBC read and write Generate the target SQL from the data stream Set the write batch size Multi-instance JDBC support Manually specify the JDBC connection inf...
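    The "generate the target SQL from the data stream" step ultimately emits one statement per incoming record; a hypothetical MySQL-style upsert of the kind such a sink might produce (the table t_order and its columns are made up for illustration):

    -- One generated statement per incoming record
    INSERT INTO t_order (order_id, status, ts)
    VALUES (?, ?, ?)
    ON DUPLICATE KEY UPDATE status = VALUES(status), ts = VALUES(ts);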
  • Docker Deployment

    Prerequisites 1. Install docker 2. Install docker-compose Deploy Apache StreamPark™ 1. Deploy Apache StreamPark™ based on h2 and docker-compose 2. Deployment 3. Configure flink home 4. Configure a session cluster 5. Submit a Flink job Use an existing Mysql ...