Devlive Open Source Community. This search took 2.769 seconds and found 141 relevant results.
  • Slack

    Support Those Engines Key features Description Data Type Mapping Options Task Example Simple: Changelog new version Slack sink connector Support Those Engines Spark...
  • Hudi

    1365 2024-07-05 《Apache Kyuubi 1.9.1》
    Hudi Integration Dependencies Hudi Operations Apache Hudi (pronounced “hoodie”) is the next generation streaming data lake platform. Apache Hudi brings core warehouse and datab...
  • Phoenix

    Description Key features Options driver [string] url [string] common options Example Changelog 2.2.0-beta 2022-09-26 Phoenix sink connector Description Write Phoenix...
  • Solution for Big Result Sets

    1327 2024-07-05 《Apache Kyuubi 1.9.1》
    Incremental collection Use in single connections Change incremental collection mode in session Typically, when a user submits a SELECT query to Spark SQL engine, the Driver cal...
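As a rough sketch of the incremental-collection setting described in the entry above (the host, port, and table name below are placeholders, not taken from the Kyuubi page, and the configuration key should be checked against your Kyuubi version), switching the mode inside a JDBC session might look like this:

```scala
import java.sql.DriverManager

// Placeholder Kyuubi endpoint; requires the Hive JDBC driver on the classpath.
val conn = DriverManager.getConnection("jdbc:hive2://kyuubi-host:10009/default", "user", "")
val stmt = conn.createStatement()

// Ask the Spark SQL engine to return result partitions incrementally
// instead of collecting the whole result set on the Driver.
stmt.execute("SET kyuubi.operation.incremental.collect=true")

// A large SELECT is now fetched piece by piece on the client side.
val rs = stmt.executeQuery("SELECT * FROM big_table")
while (rs.next()) {
  // process each row as it arrives
}
rs.close(); stmt.close(); conn.close()
```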
  • Streaming Reads

    1322 2024-06-30 《Apache Hudi 0.15.0》
    Spark Streaming Spark Streaming Structured Streaming reads are based on Hudi’s Incremental Query feature, therefore streaming reads can return data for which commits and base fil...
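The streaming-reads entry above is truncated; as a hedged illustration of the feature it describes, a minimal Spark Structured Streaming read of a Hudi table from spark-shell might look roughly like this (the table path and checkpoint location are assumptions):

```scala
// spark-shell with the Hudi Spark bundle on the classpath
val basePath = "file:///tmp/hudi_trips_table"   // placeholder table path

// Each micro-batch surfaces the commits made to the table since the last
// processed offset, via Hudi's incremental query capability.
val hudiStream = spark.readStream.
  format("hudi").
  load(basePath)

// Print incoming records to the console for inspection.
val query = hudiStream.writeStream.
  format("console").
  option("checkpointLocation", "file:///tmp/hudi_read_checkpoint").
  start()

query.awaitTermination()
```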
  • Hudi

    1318 2024-07-05 《Apache Kyuubi 1.9.1》
    Hudi Integration Dependencies Configurations Hudi Operations Apache Hudi (pronounced “hoodie”) is the next generation streaming data lake platform. Apache Hudi brings core war...
  • Socket

    Support Those Engines Key features Description Sink Options Task Example Changelog 2.2.0-beta 2022-09-26 Socket sink connector Support Those Engines Spark Flink SeaTu...
  • Console

    Support Connector Version Support Those Engines Description Key Features Options Task Example Simple: Multiple Sources Simple: Console Sample Data Console sink connector...
  • Avro format

    How To Use Kafka uses example Avro is very popular in streaming data pipelines. Now SeaTunnel supports the Avro format in the Kafka connector. How To Use Kafka uses example This is a...
  • Streaming Writes

    1019 2024-06-30 《Apache Hudi 0.15.0》
    Spark Streaming Spark Streaming You can write Hudi tables using Spark’s structured streaming. Scala // spark-shell // prepare to stream write to new table import org ....
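The Scala snippet in the last entry is cut off in the search preview; a hedged sketch of the same idea, writing a streaming DataFrame into a new Hudi table from spark-shell, could look roughly as follows (the table name, key and precombine fields, and paths are illustrative assumptions, not taken from the page):

```scala
import org.apache.spark.sql.streaming.Trigger

// Placeholder streaming source; any streaming DataFrame works. The rate
// source emits `timestamp` and `value` columns, reused below as key fields.
val streamingDf = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

// Stream-write into a new Hudi table, checkpointing each micro-batch.
val query = streamingDf.writeStream.
  format("hudi").
  option("hoodie.table.name", "hudi_streaming_demo").
  option("hoodie.datasource.write.recordkey.field", "timestamp").
  option("hoodie.datasource.write.precombine.field", "value").
  option("checkpointLocation", "file:///tmp/hudi_write_checkpoint").
  outputMode("append").
  trigger(Trigger.ProcessingTime("10 seconds")).
  start("file:///tmp/hudi_streaming_demo")

query.awaitTermination()
```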