Devlive Open Source Community. This search took 0.799 seconds and found 388 relevant results.
  • REST API v1

    6525 2024-07-05 《Apache Kyuubi 1.9.1》
    REST API v1 Session Resource GET /sessions Response Body GET /sessions/${sessionHandle} Response Body GET /sessions/${sessionHandle}/info/${infoType} Request Parameters Respon...
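
    A minimal sketch of listing sessions over this API with Python's requests library, assuming a Kyuubi server whose REST frontend is reachable on its default port 10099; verify the base URL against your deployment:

```python
import requests

# Assumed Kyuubi REST frontend address; 10099 is the default REST port,
# adjust to your deployment.
BASE_URL = "http://localhost:10099/api/v1"

# GET /sessions lists all live sessions.
resp = requests.get(f"{BASE_URL}/sessions")
resp.raise_for_status()

for session in resp.json():
    # Each entry carries an identifier usable with
    # GET /sessions/${sessionHandle} and GET /sessions/${sessionHandle}/info/${infoType}.
    print(session)
```
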
  • PostgreSQL CDC

    Support Those Engines Key features Description Supported DataSource Info Using Dependency Install Jdbc Driver For Spark/Flink Engine For SeaTunnel Zeta Engine Data Type Map...
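
    As a sketch of how this connector is wired up, the Python snippet below writes out a minimal streaming job config; the plugin name and option keys (base-url, database-names, schema-names, table-names) follow the connector docs, while all values and the file name are illustrative assumptions:

```python
from pathlib import Path

# Illustrative SeaTunnel job config using the Postgres-CDC source and a
# Console sink; option keys follow the connector docs, values are placeholders.
JOB_CONF = """
env {
  parallelism = 1
  job.mode = "STREAMING"
}
source {
  Postgres-CDC {
    base-url = "jdbc:postgresql://localhost:5432/postgres_cdc"
    username = "postgres"
    password = "postgres"
    database-names = ["postgres_cdc"]
    schema-names = ["inventory"]
    table-names = ["postgres_cdc.inventory.orders"]
  }
}
sink {
  Console {}
}
"""

Path("postgres-cdc.conf").write_text(JOB_CONF)
# Submit with your engine of choice, e.g. the Zeta engine:
#   ./bin/seatunnel.sh --config postgres-cdc.conf
```
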
  • Indexing

    6459 2024-06-28 《Apache Hudi 0.15.0》
    Indexing Multi-modal Indexing Index Types in Hudi Global and Non-Global Indexes Configs Spark based configs Flink based configs Indexing Strategies Workload 1: Late arriving...
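
    A hedged PySpark sketch of where the index type is selected when writing a Hudi table; hoodie.index.type and the record-key/precombine options are standard Hudi write configs, while the data, table name, and path are made up for illustration:

```python
from pyspark.sql import SparkSession

# Requires the hudi-spark bundle on the classpath; table name and path
# are illustrative.
spark = SparkSession.builder.appName("hudi-indexing-sketch").getOrCreate()
df = spark.createDataFrame([("id-1", "2024-06-28 00:00:00", 42)],
                           ["uuid", "ts", "value"])

(df.write.format("hudi")
   .option("hoodie.table.name", "demo_trips")
   .option("hoodie.datasource.write.recordkey.field", "uuid")
   .option("hoodie.datasource.write.precombine.field", "ts")
   # Non-global Bloom index; GLOBAL_BLOOM would enforce key uniqueness
   # across all partitions instead of within one.
   .option("hoodie.index.type", "BLOOM")
   .mode("append")
   .save("/tmp/hudi/demo_trips"))
```
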
  • Apache Iceberg

    Support Iceberg Version Support Those Engines Description Supported DataSource Info Database Dependency Data Type Mapping Sink Options Task Example Simple: Hive Catalog: H...
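
    For orientation, a hedged sketch of the sink block this connector expects, here with a Hadoop catalog (the entry above also covers a Hive catalog); the option keys follow the connector docs, and the warehouse path and names are placeholders. It would slot into a job config like the one sketched under the PostgreSQL CDC entry:

```python
# Illustrative SeaTunnel Iceberg sink block; option keys follow the
# connector docs, values are placeholders.
ICEBERG_SINK = """
sink {
  Iceberg {
    catalog_name = "seatunnel"
    iceberg.catalog.config = {
      type = "hadoop"
      warehouse = "hdfs://namenode:8020/warehouse/"
    }
    namespace = "demo_db"
    table = "demo_table"
  }
}
"""
print(ICEBERG_SINK)
```
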
  • SQL Server

    Support SQL Server Version Support Those Engines Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Description Supported DataSource Info Databa...
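
    A hedged sketch of reading from SQL Server through the Jdbc source this entry describes; the driver class is Microsoft's standard JDBC driver, the option keys match those the docs list, and the connection details are placeholders:

```python
# Illustrative SeaTunnel Jdbc source block for SQL Server; pair it with
# any sink block to form a complete job config. Values are placeholders.
SQLSERVER_SOURCE = """
source {
  Jdbc {
    url = "jdbc:sqlserver://localhost:1433;databaseName=demo"
    driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    user = "sa"
    password = "secret"
    query = "SELECT id, name FROM dbo.users"
  }
}
"""
```
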
  • 1. Kyuubi On Apache Kudu

    Kyuubi On Apache Kudu What is Apache Kudu Why Kyuubi on Kudu Kudu Integration with Apache Spark Kudu Integration with Kyuubi Install Kudu Spark Dependency Start Kyuubi Start B...
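
    A hedged PySpark sketch of the Kudu-Spark integration the entry describes; the data source name and the kudu.master/kudu.table options come from the kudu-spark package, which must be on the classpath, and the addresses and table name are placeholders:

```python
from pyspark.sql import SparkSession

# Requires the kudu-spark package (e.g. org.apache.kudu:kudu-spark3_2.12)
# on the classpath; master address and table name are illustrative.
spark = SparkSession.builder.appName("kudu-sketch").getOrCreate()

df = (spark.read.format("org.apache.kudu.spark.kudu")
        .option("kudu.master", "kudu-master:7051")
        .option("kudu.table", "impala::default.my_table")
        .load())
df.show()
```
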
  • How To Use Spark Adaptive Query Execution (AQE) in Kyuubi

    6393 2024-07-05 《Apache Kyuubi 1.9.1》
    The Basics of AQE Dynamically Switch Join Strategies Dynamically Coalesce Shuffle Partitions Other Tips for Best Practices How to set spark.sql.adaptive.advisoryPartitionSizeInByt...
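
    A minimal PySpark sketch of the AQE knobs this article walks through; these are standard Spark 3.x configs, and in a Kyuubi deployment they would typically be supplied as engine or session configs rather than set in application code:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("aqe-sketch").getOrCreate()

# Master switch for Adaptive Query Execution.
spark.conf.set("spark.sql.adaptive.enabled", "true")
# Lets AQE merge small post-shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
# Advisory target size per post-shuffle partition, the knob the last
# section of the article refers to.
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "128MB")
```
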
  • Deployment

    6381 2024-07-01 《Apache Hudi 0.15.0》
    Deploying Hudi Streamer Spark Datasource Writer Jobs Upgrading Downgrading Migrating This section provides all the help you need to deploy and operate Hudi tables at scale. ...
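
    A hedged sketch of launching Hudi Streamer, the first deployment mode listed; the class and flags follow the Hudi utilities docs, while the bundle jar, source class, properties file, and paths are placeholders:

```python
import subprocess

# Launch Hudi Streamer via spark-submit; the bundle path, source class,
# and target paths below are placeholders to replace.
subprocess.run([
    "spark-submit",
    "--class", "org.apache.hudi.utilities.streamer.HoodieStreamer",
    "hudi-utilities-bundle.jar",
    "--table-type", "COPY_ON_WRITE",
    "--source-class", "org.apache.hudi.utilities.sources.JsonDFSSource",
    "--target-base-path", "/tmp/hudi/demo_table",
    "--target-table", "demo_table",
    "--props", "dfs-source.properties",
    "--continuous",  # long-running ingestion, per the deployment docs
], check=True)
```
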
  • JDBC

    Description Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Options driver [string] user [string] password [string] url [string] query [stri...
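
    A hedged sketch of a sink block covering the options this entry lists (driver, user, password, url, query); the MySQL details are placeholders, and any other database works with the matching driver on the classpath:

```python
# Illustrative SeaTunnel Jdbc sink block; the option keys are those
# listed in the entry above, values are placeholders.
JDBC_SINK = """
sink {
  Jdbc {
    url = "jdbc:mysql://localhost:3306/test"
    driver = "com.mysql.cj.jdbc.Driver"
    user = "root"
    password = "secret"
    query = "insert into test_table(name, age) values(?, ?)"
  }
}
"""
```
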
  • Creating your first interoperable table

    Pre-requisites Steps Initialize a pyspark shell Create dataset Running sync Conclusion Next steps Using OneTable to sync your source tables in different target format invo...
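
    A hedged sketch of the sync step the entry outlines: a dataset config pointing at an existing Hudi table, exposed in Delta and Iceberg formats. The field names follow the OneTable docs, while the table path and the exact bundle jar name are placeholders to check against your build:

```python
from pathlib import Path

# Illustrative OneTable dataset config: one Hudi source table synced to
# Delta and Iceberg targets; paths are placeholders.
Path("my_config.yaml").write_text("""
sourceFormat: HUDI
targetFormats:
  - DELTA
  - ICEBERG
datasets:
  - tableBasePath: file:///tmp/hudi/people
    tableName: people
""")
# Then run the sync with the OneTable utilities bundle, e.g.:
#   java -jar utilities-<version>-bundle.jar --datasetConfig my_config.yaml
```
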