Devlive Open Source Community This search took 0.693 seconds and found 370 relevant results.
  • Iceberg

    1665 2024-07-05 《Apache Kyuubi 1.9.1》
    Iceberg Integration Dependencies Iceberg Operations Apache Iceberg is an open table format for huge analytic datasets. Iceberg adds tables to compute engines including Spark, T...
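    A minimal sketch of what the integration enables once the Iceberg runtime jar is on the engine's classpath: standard SQL against Iceberg tables through a Kyuubi session. The catalog/database and table names below are illustrative assumptions, not from the doc.

    ```sql
    -- Create, write, and read an Iceberg table through a Kyuubi SQL session
    -- (assumes a catalog/database named `demo` already exists)
    CREATE TABLE demo.sample (id BIGINT, data STRING) USING iceberg;
    INSERT INTO demo.sample VALUES (1, 'a'), (2, 'b');
    SELECT * FROM demo.sample;
    ```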
  • AmazonSqs

    AmazonSqs AmazonSqs source connector Support Those Engines Spark Flink SeaTunnel Zeta Key Features batch stream exactly-once column projection parallelism su...
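    As a rough sketch, this source slots into SeaTunnel's usual env/source/sink job layout. The excerpt does not list the connector's option keys, so they are left as a comment rather than guessed:

    ```hocon
    env {
      job.mode = "STREAMING"   # the excerpt says batch and stream are both supported
    }
    source {
      AmazonSqs {
        # connector-specific options (queue address, region, credentials,
        # schema, ...) go here; see the option table in the linked page
      }
    }
    sink {
      Console {}               # print rows for a quick smoke test
    }
    ```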
  • AmazonDynamoDB

    Description Key features Options url [string] region [string] accessKeyId [string] secretAccessKey [string] table [string] schema [Config] fields [config] common options ...
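    The option names below are exactly the ones the excerpt lists; the values are placeholders, and the example columns under `schema.fields` are assumptions for illustration:

    ```hocon
    source {
      AmazonDynamoDB {
        url = "http://127.0.0.1:8000"      # placeholder endpoint
        region = "us-east-1"
        accessKeyId = "my-access-key"      # placeholder credentials
        secretAccessKey = "my-secret-key"
        table = "my_table"
        schema {
          fields {
            id = int
            name = string
          }
        }
      }
    }
    ```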
  • Switching to an Oracle Database After Installation

    About This Task Prerequisites Steps About This Task If you want to use an Oracle database with SAM or Schema Registry after performing your initial HDF installation or upgrade, you can switch to an Oracle database. Oracle Database 12c and 11g Release 2 are supported. Prerequisites You have installed and configured an Oracle database. Steps Log in to Ambari Server and shut down SAM ...
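    A step that typically accompanies a switch like this is registering the Oracle JDBC driver with Ambari so managed services can reach the database; a minimal sketch, assuming the driver jar is already on the Ambari host (the path and driver version are illustrative):

    ```sh
    # Register the Oracle JDBC driver with Ambari Server
    ambari-server setup --jdbc-db=oracle --jdbc-driver=/usr/share/java/ojdbc8.jar
    ```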
  • Variable Management

    Background Introduction Create Variable Reference variables in Flink SQL Reference variables in args of Flink JAR jobs Background Introduction In the actual production enviro...
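    A sketch of the idea: define a variable once (say `kafka.cluster`, a name assumed here for illustration), then reference it from Flink SQL instead of hard-coding the broker list. The `${...}` placeholder syntax is an assumption based on common substitution practice; check the doc for the exact reference format.

    ```sql
    CREATE TABLE user_events (
        user_id BIGINT,
        action  STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'user_events',
        -- resolved from the managed variable at job submission time
        'properties.bootstrap.servers' = '${kafka.cluster}',
        'format' = 'json'
    );
    ```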
  • Function

    1646 2024-06-27 《Lodash 4.17.15》
    _.after(n, func) Since Arguments Returns Example _.ary(func, [n=func.length]) Since Arguments Returns Example _.before(n, func) Since Arguments Returns Example _....
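    For instance, `_.after(n, func)` returns a wrapper that only invokes `func` from the n-th call onward; a small sketch:

    ```js
    const _ = require('lodash');

    const saves = ['profile', 'settings'];
    // `done` runs its callback only once every save has reported in
    const done = _.after(saves.length, () => console.log('all saves complete'));

    saves.forEach(() => done()); // logs 'all saves complete' on the 2nd call
    ```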
  • Job Env Config

    Common Parameter job.name jars job.mode checkpoint.interval parallelism job.retry.times shade.identifier Flink Engine Parameter Spark Engine Parameter This document desc...
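    The common parameters the excerpt names sit in the job file's `env` block; a sketch with placeholder values:

    ```hocon
    env {
      job.name = "demo_job"          # placeholder name
      job.mode = "STREAMING"         # or "BATCH"
      checkpoint.interval = 10000    # milliseconds
      parallelism = 2
      job.retry.times = 3
    }
    ```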
  • Feishu

    Support Those Engines Key Features Description Data Type Mapping Sink Options Task Example Simple: Changelog 2.2.0-beta 2022-09-26 Feishu sink connector Support Thos...
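    A sketch of a sink block, assuming the connector takes a webhook `url` the way other SeaTunnel HTTP-style sinks do; the key name is an assumption, so confirm it against the doc's Sink Options table:

    ```hocon
    sink {
      Feishu {
        # assumed option: webhook address that rows are posted to
        url = "https://open.feishu.cn/open-apis/bot/v2/hook/xxx"
      }
    }
    ```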
  • Developer Tools

    1614 2024-07-05 《Apache Kyuubi 1.9.1》
    Update Project Version Update Dependency List Format All Code Append descriptions of new configurations to settings.md Generative Tooling Usage Update Project Version buil...
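    The version bump the excerpt mentions is a standard Maven versions-plugin invocation run through Kyuubi's bundled `build/mvn` wrapper; a sketch, with an illustrative target version:

    ```sh
    # Set every module's version in one pass; skip the pom.xml backups
    build/mvn versions:set -DnewVersion=1.9.2-SNAPSHOT -DgenerateBackupPoms=false
    ```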
  • External Link Management

    Background How to create the external link Where to find the external link Background In production practice, in order to manage Flink jobs properly, there is always a need...