Devlive Open Source Community
  • JDBC

    Description Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Options driver [string] user [string] password [string] url [string] query [stri...
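
    The options called out above (driver, user, password, url, query) map one-to-one onto a SeaTunnel JDBC source block. A minimal sketch in SeaTunnel's HOCON config style, assuming a MySQL endpoint; host, database, credentials, and query are placeholders:

    ```hocon
    source {
      Jdbc {
        # driver [string]: fully qualified JDBC driver class (MySQL assumed here)
        driver = "com.mysql.cj.jdbc.Driver"
        # url [string]: JDBC connection URL (placeholder host/database)
        url = "jdbc:mysql://localhost:3306/test"
        # user / password [string]: connection credentials (placeholders)
        user = "root"
        password = "secret"
        # query [string]: SQL used to pull rows from the source table
        query = "select * from my_table"
      }
    }
    ```
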
  • SeaTunnel

    Apache SeaTunnel Overview Creating a Task Task Parameters Task Example Configuring the SeaTunnel Environment in DolphinScheduler Configuring a SeaTunnel Task Node Config Example Supported SeaTunnel Versions Apache SeaTunnel Overview The SeaTunnel task type is used to create and execute SeaTunnel...
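
    For a sense of what a SeaTunnel task node executes, the quickstart-style batch job below (FakeSource to Console) is the kind of config the "Config Example" section refers to; the parallelism, row count, and field names are illustrative:

    ```hocon
    env {
      execution.parallelism = 1
      job.mode = "BATCH"
    }

    source {
      FakeSource {
        result_table_name = "fake"
        row.num = 16
        schema = {
          fields {
            name = "string"
            age = "int"
          }
        }
      }
    }

    sink {
      Console {}
    }
    ```
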
  • SQL Server

    Support SQL Server Version Support Those Engines Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Description Supported DataSource Info Databa...
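
    The SQL Server connector takes the same JDBC-style options; only the driver class and URL scheme differ from the generic JDBC sketch above. A minimal example (host, port, database, and credentials are placeholders):

    ```hocon
    source {
      Jdbc {
        driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        url = "jdbc:sqlserver://localhost:1433;databaseName=test"
        user = "sa"
        password = "secret"
        query = "select * from dbo.my_table"
      }
    }
    ```
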
  • Java Quickstart

    3458 2024-06-29 《Apache Iceberg 1.5.2》
    Create a table Using a Hive catalog Using a Hadoop catalog Branching and Tagging Creating branches and tags Committing to branches Reading from branches and tags Replacing an...
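
    The quickstart's "Using a Hadoop catalog" step reduces to a few calls against the Iceberg API. A minimal sketch, assuming Iceberg 1.5.x and Hadoop on the classpath; the warehouse path, namespace, and schema are placeholders:

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.iceberg.PartitionSpec;
    import org.apache.iceberg.Schema;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.catalog.TableIdentifier;
    import org.apache.iceberg.hadoop.HadoopCatalog;
    import org.apache.iceberg.types.Types;

    public class IcebergQuickstart {
      public static void main(String[] args) {
        // A Hadoop catalog keeps table metadata under a warehouse directory.
        HadoopCatalog catalog =
            new HadoopCatalog(new Configuration(), "file:///tmp/warehouse");

        // Column ids must be unique; 'id' is required, 'data' is optional.
        Schema schema = new Schema(
            Types.NestedField.required(1, "id", Types.LongType.get()),
            Types.NestedField.optional(2, "data", Types.StringType.get()));

        Table table = catalog.createTable(
            TableIdentifier.of("db", "events"), schema, PartitionSpec.unpartitioned());
        System.out.println("Created " + table.name());
      }
    }
    ```
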
  • 3. Trouble Shooting

    Trouble Shooting Common Issues java.lang.UnsupportedClassVersionError .. Unsupported major.minor version 52.0 org.apache.spark.SparkException: When running with master ‘yarn’ eith...
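
    Both issues above have environment-level fixes: class file version 52.0 corresponds to Java 8, so the error means the running JVM is older than the jars it is loading, and the Spark-on-YARN message names the missing variable itself. A sketch of the usual exports; the paths are placeholders:

    ```bash
    # UnsupportedClassVersionError ... major.minor version 52.0:
    # version 52.0 is Java 8, so point JAVA_HOME at a JDK 8 or newer.
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # placeholder path

    # "When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR
    # must be set": point Spark at the cluster's Hadoop client configuration.
    export HADOOP_CONF_DIR=/etc/hadoop/conf        # placeholder path
    ```
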
  • Hive

    3369 2024-06-29 《Apache Iceberg 1.5.2》
    Feature support Enabling Iceberg support in Hive Hive 4.0.0-beta-1 Hive 4.0.0-alpha-2 Hive 4.0.0-alpha-1 Hive 2.3.x, Hive 3.1.x Loading runtime jar Enabling support Hadoop con...
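
    For Hive 2.3.x / 3.1.x, the "Loading runtime jar" and "Enabling support" steps come down to putting the Hive runtime jar on the session classpath and flipping one property. A sketch in the Hive CLI; the jar path is a placeholder, and the property can also be set in the Hadoop/Hive site configuration instead of per session:

    ```sql
    -- Put the Iceberg Hive runtime on the classpath for this session
    ADD JAR /path/to/iceberg-hive-runtime.jar;

    -- Enable Iceberg's Hive engine integration
    SET iceberg.engine.hive.enabled=true;
    ```
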
  • REST API v1

    3366 2024-07-05 《Apache Kyuubi 1.9.1》
    REST API v1 Session Resource GET /sessions Response Body GET /sessions/${sessionHandle} Response Body GET /sessions/${sessionHandle}/info/${infoType} Request Parameters Respon...
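
    The session listing, for example, is a plain GET against the v1 API. A minimal Java 11 HttpClient sketch; the host and port (10099 is the usual REST frontend default, but check your deployment) are assumptions:

    ```java
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class KyuubiSessions {
      public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // GET /sessions returns the list of open sessions as JSON.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:10099/api/v1/sessions"))
            .GET()
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
      }
    }
    ```
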
  • OssFile

    Support Those Engines Usage Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key features Data Type Mapping Orc File Type Parquet File Type Options path [string]...
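
    The OssFile options follow the same file-connector pattern: a path, OSS connection settings, and a file format. A sketch of a source block, assuming ORC files in an Aliyun OSS bucket; option names beyond path [string] are drawn from the connector docs and should be checked against your SeaTunnel version, and the bucket, endpoint, and credentials are placeholders:

    ```hocon
    source {
      OssFile {
        path = "/seatunnel/orc"
        bucket = "oss://my-bucket"
        endpoint = "oss-cn-beijing.aliyuncs.com"
        access_key = "xxxx"
        access_secret = "xxxx"
        file_format_type = "orc"
      }
    }
    ```
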
  • How To Use Spark Dynamic Resource Allocation (DRA) in Kyuubi

    3345 2024-07-05 《Apache Kyuubi 1.9.1》
    The Basics of Dynamic Resource Allocation How to Enable Dynamic Resource Allocation Dynamic Resource Allocation w/ External Shuffle Service Dynamic Allocation w/o External Shuffl...
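
    The headings above cover the two deployment modes: dynamic allocation backed by an external shuffle service, or (on Spark 3.x) shuffle tracking in its place. A property sketch for the shuffle-tracking mode; the executor bounds and timeout are illustrative:

    ```properties
    spark.dynamicAllocation.enabled=true
    # Without an external shuffle service, Spark 3.x can track shuffle files itself
    spark.dynamicAllocation.shuffleTracking.enabled=true
    spark.dynamicAllocation.minExecutors=0
    spark.dynamicAllocation.maxExecutors=50
    spark.dynamicAllocation.executorIdleTimeout=60s
    ```
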
  • Creating your first interoperable table

    Pre-requisites Steps Initialize a pyspark shell Create dataset Running sync Conclusion Next steps Using OneTable to sync your source tables in different target formats invo...
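
    The "Running sync" step is driven by a small dataset config naming the source format, the desired target formats, and the tables to sync. A sketch of such a YAML; the field names follow the OneTable quickstart but should be treated as assumptions, and the paths and table name are placeholders:

    ```yaml
    sourceFormat: DELTA        # format of the existing source table (Delta assumed here)
    targetFormats:
      - HUDI
      - ICEBERG
    datasets:
      - tableBasePath: file:///tmp/delta-dataset/people   # placeholder path
        tableName: people
    ```

    The quickstart then runs OneTable's bundled utilities jar against this file to write Hudi and Iceberg metadata alongside the source table.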