Devlive Open Source Community. This search took 0.748 seconds and found 388 relevant results.
  • Getting Started

    2122 2025-03-11 《Apache Iceberg 1.8.1》
    Getting Started. The latest version of Iceberg is 1.8.1. Spark is currently the most feature-rich compute engine for Iceberg operations. We recommend getting started with Spar...
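The quick-start path the page points to is Spark. A minimal sketch of that path, assuming the Iceberg Spark runtime jar is on the classpath and using an illustrative Hadoop catalog named local with a local warehouse directory:

```scala
import org.apache.spark.sql.SparkSession

// Minimal Iceberg-on-Spark setup; catalog name, warehouse path, and table name
// are illustrative assumptions.
val spark = SparkSession.builder()
  .appName("iceberg-getting-started")
  // The Iceberg Spark runtime jar must be on the classpath (e.g. via --packages).
  .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
  .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.local.type", "hadoop")
  .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
  .getOrCreate()

// Create an Iceberg table through the catalog, then write and read a few rows.
spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, data STRING) USING iceberg")
spark.sql("INSERT INTO local.db.events VALUES (1, 'a'), (2, 'b')")
spark.sql("SELECT * FROM local.db.events").show()
```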
  • 2. The Engine Configuration Guide

    The Engine Configuration Guide. Kyuubi aims to bring Spark to end users who need not be well versed in Spark or anything else related to the big data a...
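In practice this means a plain JDBC client can hand Spark settings to the engine at connect time. A rough sketch, assuming a Kyuubi server listening on localhost:10009, the Hive JDBC driver on the classpath, and Kyuubi's convention of appending engine configs to the connection URL after "#"; the host, port, and the specific settings are illustrative:

```scala
import java.sql.DriverManager

// Engine-level Spark settings are appended to the JDBC URL after "#", so an end
// user can tune the engine without touching cluster-side configuration files.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val url = "jdbc:hive2://localhost:10009/default;" +
  "#spark.executor.memory=4g;spark.sql.shuffle.partitions=8"

val conn = DriverManager.getConnection(url, "user", "")
val rs = conn.createStatement().executeQuery("SELECT 1")
while (rs.next()) println(rs.getInt(1))
conn.close()
```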
  • Dell

    2019 2025-03-11 《Apache Iceberg 1.8.1》
    Iceberg Dell Integration: Dell ECS Integration. Iceberg can be used with Dell’s Enterprise Object Storage (ECS) through the ECS catalog since 0.15.0. See Dell ECS for more infor...
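A hedged sketch of what wiring the ECS catalog into Spark might look like; the catalog name, warehouse URI, and especially the ecs.* property keys are assumptions to verify against the Dell ECS page rather than confirmed names:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: endpoint/credential property names below are assumed, not confirmed.
val spark = SparkSession.builder()
  .appName("iceberg-ecs")
  .config("spark.sql.catalog.ecs_cat", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.ecs_cat.catalog-impl", "org.apache.iceberg.dell.ecs.EcsCatalog")
  .config("spark.sql.catalog.ecs_cat.warehouse", "ecs://my-bucket/my-prefix")
  // Endpoint and credentials go through additional ecs.* catalog properties
  // documented on the Dell ECS page (key names assumed here).
  .config("spark.sql.catalog.ecs_cat.ecs.s3.endpoint", "https://ecs.example.com")
  .getOrCreate()
```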
  • Getting Started

    2013 2024-06-27 《Apache Hudi 0.15.0》
    Overview Spark Quick Start Flink Quick Start Docker Demo Use Cases
  • Alibaba Cloud

    1987 2024-07-01 《Apache Hudi 0.15.0》
    Aliyun OSS configs, Aliyun OSS Credentials, Aliyun OSS Libs. On this page, we explain how to get your Hudi Spark job to write to Aliyun OSS. Aliyun OSS configs: There are two c...
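A minimal sketch of the credentials side, assuming the hadoop-aliyun and Aliyun OSS SDK jars are already on the Spark classpath; the endpoint, bucket, and the environment variables used for the keys are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Point the Hadoop filesystem layer at OSS so Hudi can use oss:// base paths.
val spark = SparkSession.builder().appName("hudi-on-oss").getOrCreate()
val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.oss.impl", "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem")
hadoopConf.set("fs.oss.endpoint", "oss-cn-hangzhou.aliyuncs.com")
hadoopConf.set("fs.oss.accessKeyId", sys.env.getOrElse("OSS_ACCESS_KEY_ID", "<access-key-id>"))
hadoopConf.set("fs.oss.accessKeySecret", sys.env.getOrElse("OSS_ACCESS_KEY_SECRET", "<access-key-secret>"))

// A Hudi write then targets an oss:// base path like any other filesystem, e.g.
// df.write.format("hudi").options(hudiOptions).save("oss://my-bucket/hudi/trips")
```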
  • Evolution

    1960 2025-03-10 《Apache Iceberg 1.8.1》
    Evolution. Iceberg supports in-place table evolution. You can evolve a table schema just like SQL, even in nested structures, or change partition layout when data volume chang...
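The SQL-like evolution the snippet mentions looks roughly like the following, assuming a SparkSession `spark` with the Iceberg SQL extensions enabled and an existing table local.db.events with columns data and event_ts (all names illustrative):

```scala
// Schema evolution: columns are added or renamed in place; existing data files
// are not rewritten.
spark.sql("ALTER TABLE local.db.events ADD COLUMN country STRING")
spark.sql("ALTER TABLE local.db.events RENAME COLUMN data TO payload")

// Partition evolution: new writes use the new layout, old files keep the old one.
spark.sql("ALTER TABLE local.db.events ADD PARTITION FIELD days(event_ts)")
```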
  • Streaming Writes

    1919 2024-06-30 《Apache Hudi 0.15.0》
    Spark Streaming. You can write Hudi tables using Spark’s structured streaming. Scala: // spark-shell // prepare to stream write to new table import org....
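A sketch of what such a stream write can look like, loosely following the Hudi quick start: it reads an existing Hudi table as a stream and writes into a new one. It assumes a spark-shell session with the Hudi Spark bundle on the classpath; the paths, table name, and the key/precombine/partition field names (uuid, ts, partitionpath) are assumptions taken from the quick-start data model:

```scala
// spark-shell with the Hudi Spark bundle on the classpath; `spark` comes from the shell.
val basePath       = "file:///tmp/hudi_trips_cow"            // existing Hudi table to read from
val streamingPath  = "file:///tmp/hudi_trips_cow_streaming"  // new table to stream-write into
val checkpointPath = "file:///tmp/checkpoints/hudi_trips_cow_streaming"

// Read the source Hudi table as a stream.
val df = spark.readStream.format("hudi").load(basePath)

// Write the stream into a new Hudi table.
df.writeStream.format("hudi")
  .option("hoodie.table.name", "hudi_trips_cow_streaming")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.precombine.field", "ts")
  .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
  .option("checkpointLocation", checkpointPath)
  .outputMode("append")
  .start(streamingPath)
```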
  • Java Custom Catalog

    1908 2025-03-14 《Apache Iceberg 1.8.1》
    Custom Catalog. It’s possible to read an Iceberg table either from an HDFS path or from a Hive table. It’s also possible to use a custom metastore in place of Hive. The steps to do...
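Once such a custom catalog class is implemented, it is typically plugged into Spark through the catalog-impl property. A hedged sketch, where com.example.MyCustomCatalog is a hypothetical implementation of Iceberg's Catalog interface and the extra property is illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Register a (hypothetical) custom catalog implementation with Spark.
val spark = SparkSession.builder()
  .appName("iceberg-custom-catalog")
  .config("spark.sql.catalog.custom", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.custom.catalog-impl", "com.example.MyCustomCatalog")
  // Extra spark.sql.catalog.custom.* properties are handed to the catalog's
  // initialize(name, properties) method.
  .config("spark.sql.catalog.custom.my-prop", "my-value")
  .getOrCreate()

spark.sql("SELECT * FROM custom.db.events").show()
```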
  • Command usage

    Command Entrypoint, Options, Example. Command Entrypoint: Spark 2: bin/start-seatunnel-spark-2-connector-v2.sh; Spark 3: bin/start-seatunnel-spark-3...
  • Spark

    Spark data source. Natively supported? Spark data source: Data source: select Spark; Data source name: enter the name of the data source; Description: enter a description of the data source; IP/Hostname: enter the IP used to connect to Spark; Port: enter the port used to connect to Spark; Username: set the username for connecting to Spark; Password: set the password for connecting to Spark; Database name: enter the name of the database to connect to on Spark; JDBC connection parameters: used for the Spark conn...
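Under the hood, fields like these usually end up in a Hive-protocol JDBC URL. A rough sketch of the resulting connection, assuming the data source fronts a Spark Thrift Server reachable with the Hive JDBC driver; the host, port, database, and credentials are placeholders:

```scala
import java.sql.DriverManager

// Map the form fields onto a jdbc:hive2:// connection string (values are placeholders).
Class.forName("org.apache.hive.jdbc.HiveDriver")
val host = "10.0.0.1"      // IP/Hostname field
val port = 10000           // Port field
val db   = "default"       // Database name field
val url  = s"jdbc:hive2://$host:$port/$db"  // JDBC connection parameters would be appended here

val conn = DriverManager.getConnection(url, "spark_user", "spark_password")
val rs = conn.createStatement().executeQuery("SHOW TABLES")
while (rs.next()) println(rs.getString(1))
conn.close()
```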