Devlive Open Source Community: this search took 0.592 seconds and found 356 results.
  • Kafka

    Support Those Engines Key Features Description Supported DataSource Info Source Options Task Example Simple Regex Topic AWS MSK SASL/SCRAM AWS MSK IAM Kerberos Authenticat...
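
    The snippet above is a truncated table of contents, so here is a hedged pyspark sketch (not SeaTunnel's own HOCON config) of the idea behind the "Simple Regex Topic" option it lists: subscribing a Kafka source to topics by pattern. The broker address and topic pattern are assumptions.

    ```python
    # Assumes the spark-sql-kafka-0-10 package is on the classpath.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-regex-topic-demo").getOrCreate()

    df = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
        # subscribePattern is Spark's counterpart to a regex-topic source option.
        .option("subscribePattern", "events-.*")
        .option("startingOffsets", "earliest")
        .load()
    )

    # Print the raw message payloads to the console.
    query = (
        df.selectExpr("CAST(value AS STRING) AS value")
        .writeStream.format("console")
        .start()
    )
    query.awaitTermination()
    ```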
  • Hive Metastore

    1574 2024-06-30 《Apache Hudi 0.15.0》
    Spark Data Source example Query using HiveQL Use partition extractor properly Hive Sync Tool Hive Sync Configuration Sync modes HMS JDBC HIVEQL Flink Setup Install Hive E...
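
    This entry points at Hudi 0.15.0's Hive sync docs (HMS, JDBC, and HiveQL sync modes). Below is a minimal pyspark sketch of the HMS mode, assuming the Hudi Spark bundle is on the classpath and a metastore listens at thrift://localhost:9083; the table name, fields, and base path are illustrative.

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hudi-hive-sync-demo").getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "data"])

    (df.write.format("hudi")
        .option("hoodie.table.name", "demo_tbl")
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "id")
        .option("hoodie.datasource.hive_sync.enable", "true")
        .option("hoodie.datasource.hive_sync.mode", "hms")  # the HMS sync mode
        .option("hoodie.datasource.hive_sync.metastore.uris", "thrift://localhost:9083")
        .option("hoodie.datasource.hive_sync.database", "default")
        .option("hoodie.datasource.hive_sync.table", "demo_tbl")
        .mode("overwrite")
        .save("/tmp/hudi/demo_tbl"))
    ```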
  • Deployment

    1573 2024-06-26 《Apache Amoro 0.6.1》
    System requirements Download the distribution Source code compilation Configuration Configure the service address Configure system database Configure high availability Config...
  • Mixed-Iceberg

    1538 2024-06-26 《Apache Amoro 0.6.1》
    Compared with Iceberg format, Mixed-Iceberg format provides more features: stronger primary key constraints that also apply to Spark; OLAP performance that is production-ready fo...
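
    As a sketch of the stronger primary key constraint, the snippet below declares a key on a Mixed-Iceberg table from Spark SQL. The catalog name and the `USING arctic` provider string are assumptions carried over from the project's earlier Spark integration docs; verify the provider name for your Amoro version.

    ```python
    # Assumes a pyspark session already configured with an Amoro catalog
    # named amoro_catalog (hypothetical name).
    spark.sql("""
        CREATE TABLE amoro_catalog.db.sample (
            id   BIGINT,
            data STRING,
            PRIMARY KEY (id)
        ) USING arctic
    """)
    ```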
  • Deploying on Rainbond (Cluster)

    Deploying a DolphinScheduler high-availability cluster on Rainbond Prerequisites One-click DolphinScheduler cluster deployment API Master Worker Node scaling Configuration files How to support Python 3? How to support Hadoop, Spark, DataX, etc.? Deploying DolphinScheduler on Rainbond ...
  • S3File

    Support Those Engines Key Features Description Supported DataSource Info Dependency Data Type Mapping JSON File Type Text Or CSV File Type Orc File Type Parquet File Type ...
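
    The snippet is again a flattened table of contents; as a hedged counterpart to the text/CSV and Parquet file types it lists, here is a pyspark read from S3, assuming the hadoop-aws package and a placeholder bucket and credentials.

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("s3-read-demo").getOrCreate()

    # Placeholder credentials; prefer instance profiles or credential providers.
    hconf = spark.sparkContext._jsc.hadoopConfiguration()
    hconf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
    hconf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

    csv_df = spark.read.option("header", "true").csv("s3a://my-bucket/data/csv/")
    parquet_df = spark.read.parquet("s3a://my-bucket/data/parquet/")
    parquet_df.printSchema()
    ```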
  • JDBC

    Description Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Options driver [string] user [string] password [string] url [string] query [stri...
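
    The options this entry lists (driver, user, password, url, query) are the generic JDBC connection knobs. Here is a pyspark sketch exercising the same names, with a placeholder MySQL URL and credentials and the driver jar assumed to be on the classpath.

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-demo").getOrCreate()

    df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://localhost:3306/test")  # assumed database
        .option("driver", "com.mysql.cj.jdbc.Driver")
        .option("user", "root")        # placeholder
        .option("password", "secret")  # placeholder
        .option("query", "SELECT id, name FROM users")
        .load()
    )
    df.show()
    ```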
  • Hive

    1510 2024-06-29 《Apache Iceberg 1.5.2》
    Feature support Enabling Iceberg support in Hive Hive 4.0.0-beta-1 Hive 4.0.0-alpha-2 Hive 4.0.0-alpha-1 Hive 2.3.x, Hive 3.1.x Loading runtime jar Enabling support Hadoop con...
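
    For the "Loading runtime jar" and "Enabling support" steps on Hive 2.3.x / 3.1.x, a hedged sketch using PyHive against HiveServer2 follows; the host, port, and jar path are placeholders.

    ```python
    from pyhive import hive

    conn = hive.connect(host="localhost", port=10000)  # assumed HiveServer2
    cur = conn.cursor()

    # Put the Iceberg storage handler on the session classpath.
    cur.execute("ADD JAR /path/to/iceberg-hive-runtime.jar")
    # Documented switch that enables Iceberg support in the Hive engine.
    cur.execute("SET iceberg.engine.hive.enabled=true")
    ```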
  • OssFile

    Support Those Engines Usage Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key features Data Type Mapping Orc File Type Parquet File Type Options path [string]...
  • Creating your first interoperable table

    Pre-requisites Steps Initialize a pyspark shell Create dataset Running sync Conclusion Next steps Using OneTable to sync your source tables in different target format invo...
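
    A minimal pyspark sketch of the "Create dataset" step, assuming the shell was started with the Hudi bundle; the table name, record key, and path are illustrative. The later "Running sync" step points OneTable's sync utility at this base path to emit metadata for the other target formats.

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("onetable-source-demo").getOrCreate()

    df = spark.createDataFrame([(1, "fox"), (2, "owl")], ["id", "animal"])

    (df.write.format("hudi")
        .option("hoodie.table.name", "animals")
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "id")
        .mode("overwrite")
        .save("/tmp/onetable/animals"))
    ```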