Devlive Open Source Community. This search took 0.765 seconds and found 197 relevant results.
  • HTTP Connector

    HTTP asynchronous write · Writing with Apache StreamPark™ · Supported types for HTTP asynchronous write · Configuration parameters for HTTP asynchronous write · Writing data asynchronously over HTTP · Other configuration. Some backend services receive data through HTTP requests; in this scenario Apache Flink can write its result data out via HTTP requests, but Apache Flink does not currently provide an official way of writing data...
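
The gap this page describes, batching result records and POSTing them to an HTTP endpoint without blocking the pipeline, can be illustrated with a minimal thread-pool sketch. This is plain Python, not the StreamPark connector's API; ENDPOINT, BATCH_SIZE, and post_batch are illustrative names only.

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://backend.example.com/ingest"  # hypothetical receiving service
BATCH_SIZE = 100                                # hypothetical flush threshold

executor = ThreadPoolExecutor(max_workers=4)

def post_batch(records):
    """POST one JSON batch; a real sink would add retries and error logging."""
    payload = json.dumps(records).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

def write_async(stream):
    """Buffer records and hand off full batches without blocking the producer."""
    futures, batch = [], []
    for record in stream:
        batch.append(record)
        if len(batch) >= BATCH_SIZE:
            futures.append(executor.submit(post_batch, list(batch)))
            batch.clear()
    if batch:
        futures.append(executor.submit(post_batch, batch))
    return [f.result() for f in futures]  # block only at the end, for delivery results

if __name__ == "__main__":
    statuses = write_async({"id": i} for i in range(250))
    print(statuses)  # e.g. [200, 200, 200] once a real endpoint is configured
```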
  • GitHub Trending Daily Report (May 5, 2025)

    📈 Today's overall trending Top 10 📊 Per-language trending Top 5 C++ Java C Python Rust PHP Go Lua C Kotlin Vim Script Swift HTML MDX Dart Ruby Shell TypeScript Jupyter Notebook Markdown JavaScr...
  • Using Plugins

    Introduction · Official plugins · Plugin types · Building · Prerequisites · Commands · Building a Docker image with plugins from the Answer base image · Third-party plugins · Usage · Upgrading · Development and contribution · Design and principles. When we need to extend Answer's functionality, for example with OAuth login, we designed a plugin mechanism to implement such features. Introduction Official plugins You can find the list of officially supported Answer plugins [here](h...
  • HTTP Connector

    HTTP asynchronous write · Write with Apache StreamPark™ · HTTP asynchronous write support type · Configuration list of HTTP asynchronous write · HTTP writes data asynchronously · Other ...
  • Indexing

    6865 2024-06-28 《Apache Hudi 0.15.0》
    Indexing · Multi-modal Indexing · Index Types in Hudi · Global and Non-Global Indexes · Configs · Spark based configs · Flink based configs · Indexing Strategies · Workload 1: Late arriving...
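
For context on the "Index Types" and "Spark based configs" headings above: in Hudi the index type is selected with ordinary write options. A minimal sketch, assuming the hudi-spark bundle is on the classpath; the table name and path are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-index-demo").getOrCreate()

df = spark.createDataFrame([("u1", "2024-06-28", 42)], ["uuid", "ts", "value"])

(df.write.format("hudi")
   .option("hoodie.table.name", "demo_tbl")
   .option("hoodie.datasource.write.recordkey.field", "uuid")
   .option("hoodie.datasource.write.precombine.field", "ts")
   # A non-global index (e.g. BLOOM) enforces key uniqueness per partition;
   # a global one (GLOBAL_BLOOM) enforces it table-wide at extra cost.
   .option("hoodie.index.type", "GLOBAL_BLOOM")
   .mode("append")
   .save("/tmp/hudi/demo_tbl"))
```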
  • 1. Kyuubi On Apache Kudu

    Kyuubi On Apache Kudu · What is Apache Kudu · Why Kyuubi on Kudu · Kudu Integration with Apache Spark · Kudu Integration with Kyuubi · Install Kudu Spark Dependency · Start Kyuubi · Start B...
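
The "Kudu Integration with Apache Spark" step reduces to reading a Kudu table through the kudu-spark package; Kyuubi can then expose the same SQL over its JDBC endpoint. A hedged sketch, assuming pyspark was started with something like --packages org.apache.kudu:kudu-spark3_2.12:1.17.0; the master address and table name are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kudu-demo").getOrCreate()

# Load a Kudu table as a DataFrame via the kudu-spark data source.
df = (spark.read.format("org.apache.kudu.spark.kudu")
      .option("kudu.master", "kudu-master:7051")         # placeholder master address
      .option("kudu.table", "impala::default.my_table")  # placeholder table name
      .load())

# Register it so plain SQL (e.g. through Kyuubi/beeline) can query it.
df.createOrReplaceTempView("my_table")
spark.sql("SELECT COUNT(*) FROM my_table").show()
```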
  • How To Use Spark Adaptive Query Execution (AQE) in Kyuubi

    6800 2024-07-05 《Apache Kyuubi 1.9.1》
    The Basics of AQE · Dynamically Switch Join Strategies · Dynamically Coalesce Shuffle Partitions · Other Tips for Best Practices · How to set spark.sql.adaptive.advisoryPartitionSizeInByt...
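
The settings those headings refer to are ordinary Spark session configs. A minimal sketch with illustrative values (the 64MB here is not a recommendation from the article):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("aqe-demo")
    # Enable AQE (on by default since Spark 3.2).
    .config("spark.sql.adaptive.enabled", "true")
    # Merge small shuffle partitions after each stage completes.
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Target partition size AQE aims for when coalescing.
    .config("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")
    # Split heavily skewed partitions in sort-merge joins.
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    .getOrCreate()
)

# With AQE on, EXPLAIN shows an AdaptiveSparkPlan node that is
# re-optimized between shuffle stages.
df = spark.range(10_000_000).withColumnRenamed("id", "k")
df.groupBy((df.k % 100).alias("bucket")).count().explain()
```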
  • Deployment

    6771 2024-07-01 《Apache Hudi 0.15.0》
    Deploying · Hudi Streamer · Spark Datasource Writer Jobs · Upgrading · Downgrading · Migrating. This section provides all the help you need to deploy and operate Hudi tables at scale. ...
  • Hadoop Resource Integration

    Using Apache Hadoop resources in Flink on Kubernetes · 1. Apache HDFS · 1.1 Add the shaded jar · 1.2 Add core-site.xml and hdfs-site.xml · 2. Apache Hive · 2.1 Add Hive-related jars · 2...
  • Creating your first interoperable table

    Pre-requisites · Steps · Initialize a pyspark shell · Create dataset · Running sync · Conclusion · Next steps. Using OneTable to sync your source tables into different target formats invo...
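
A hedged sketch of the first two steps ("Initialize a pyspark shell", "Create dataset"): start Spark with a Hudi bundle and write a small source table that the sync step will translate. The package coordinates and paths are illustrative, not taken from the guide.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("onetable-source")
    # Illustrative coordinates; match the Spark/Hudi versions you actually run.
    .config("spark.jars.packages",
            "org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.0")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# A tiny Hudi source table; OneTable's sync (a separate CLI step in the
# guide) then adds metadata for the other target formats alongside it.
(spark.createDataFrame([(1, "a"), (2, "b")], ["id", "name"])
 .write.format("hudi")
 .option("hoodie.table.name", "people")
 .option("hoodie.datasource.write.recordkey.field", "id")
 .option("hoodie.datasource.write.precombine.field", "id")
 .mode("overwrite")
 .save("/tmp/onetable/people"))
```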