  • Java Custom Catalog

    Custom Catalog It’s possible to read an Iceberg table either from an HDFS path or from a Hive table. It’s also possible to use a custom metastore in place of Hive. The steps to do...
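
A minimal Java sketch of the two load paths the snippet mentions, assuming the Iceberg core and Hive-metastore jars are on the classpath; the HDFS path, metastore URI, and table names are placeholders:

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.hive.HiveCatalog;

public class LoadIcebergTable {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Path-based load: no metastore involved, the table metadata lives
        // under the given HDFS directory (placeholder path).
        Table fromPath = new HadoopTables(conf)
                .load("hdfs://namenode:8020/warehouse/db/events");

        // Metastore-based load: resolve the same kind of table via Hive.
        HiveCatalog catalog = new HiveCatalog();
        catalog.setConf(conf);
        catalog.initialize("hive", Map.of("uri", "thrift://metastore:9083"));
        Table fromHive = catalog.loadTable(TableIdentifier.of("db", "events"));

        System.out.println(fromPath.schema());
        System.out.println(fromHive.schema());
    }
}
```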
  • Query

    《Apache JDO 3.2.1》
    Query API · JDOQL · Methods · Result · Quick Ref PDF · JDOQL Typed API · SQL
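
The linked reference covers the JDOQL API in detail; as a quick orientation, here is a hedged sketch of a declarative JDOQL query, assuming a persistence unit named MyUnit and a hypothetical persistence-capable Product class:

```java
import java.util.List;

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Query;
import javax.jdo.annotations.PersistenceCapable;

// Hypothetical persistence-capable class, used only for illustration.
@PersistenceCapable
class Product {
    String name;
    double price;
}

public class JdoqlExample {
    public static void main(String[] args) {
        PersistenceManagerFactory pmf =
                JDOHelper.getPersistenceManagerFactory("MyUnit"); // assumed unit name
        PersistenceManager pm = pmf.getPersistenceManager();
        try {
            // Declarative JDOQL: filter with an implicit parameter, then order.
            Query<Product> q = pm.newQuery(Product.class);
            q.setFilter("price < :maxPrice");
            q.setOrdering("price ascending");
            List<Product> cheap = q.setParameters(25.0).executeList();
            cheap.forEach(p -> System.out.println(p.name + " " + p.price));
        } finally {
            pm.close();
        }
    }
}
```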
  • Features and Limitations

    Features and Limitations Features Apache XTable™ (Incubating) provides users with the ability to translate metadata from one table format to another. Apache XTable™ (Incubatin...
  • Apache Spark

    Querying from Apache Spark To read an Apache XTable™ (Incubating) synced target table (regardless of the table format) in Apache Spark locally or on services like Amazon EMR, Goog...
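
A sketch of one way such a read can look in Java, assuming the sync target is Iceberg and the matching iceberg-spark-runtime jar is available; the catalog name and warehouse path are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadSyncedTable {
    public static void main(String[] args) {
        // Register a path-based (Hadoop) Iceberg catalog over the warehouse
        // where the synced target table lives.
        SparkSession spark = SparkSession.builder()
                .appName("read-xtable-target")
                .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop")
                .config("spark.sql.catalog.local.warehouse", "s3://bucket/warehouse")
                .getOrCreate();

        // After an XTable sync, the target reads like a native table of that format.
        Dataset<Row> df = spark.sql("SELECT * FROM local.db.events");
        df.show();
    }
}
```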
  • The Spark SQL Engine Configuration Guide

    《Apache Kyuubi 1.9.1》
    How To Use Spark Dynamic Resource Allocation (DRA) in Kyuubi · How To Use Spark Adaptive Query Execution (AQE) in Kyuubi · Solution for Big Result Sets · Gluten
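
The Kyuubi-specific wiring is in the linked guide; the underlying knobs are standard Spark properties, shown here in a hedged Java sketch (the executor bounds are illustrative):

```java
import org.apache.spark.sql.SparkSession;

public class DraAqeSettings {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("dra-aqe-demo")
                // Dynamic Resource Allocation: scale executor count with load.
                // Shuffle tracking avoids requiring an external shuffle service.
                .config("spark.dynamicAllocation.enabled", "true")
                .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
                .config("spark.dynamicAllocation.minExecutors", "0")
                .config("spark.dynamicAllocation.maxExecutors", "50")
                // Adaptive Query Execution: re-optimize the plan at runtime
                // using shuffle statistics.
                .config("spark.sql.adaptive.enabled", "true")
                .getOrCreate();

        // A shuffle-heavy toy query that both features can act on.
        spark.range(1_000_000).selectExpr("id % 10 AS k").groupBy("k").count().show();
    }
}
```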
  • Flink Connector

    Flink Connector Apache Flink supports creating Iceberg tables directly, without creating an explicit Flink catalog in Flink SQL. That means we can just create an Iceberg table by s...
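
A sketch of that shortcut in Java, following the connector options shown in the Iceberg Flink connector docs; the metastore URI and warehouse path are placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlinkIcebergTable {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.inStreamingMode());

        // The 'iceberg' connector creates the table in the backing catalog
        // directly, with no prior CREATE CATALOG statement in Flink SQL.
        tEnv.executeSql(
            "CREATE TABLE flink_table (" +
            "  id BIGINT," +
            "  data STRING" +
            ") WITH (" +
            "  'connector'='iceberg'," +
            "  'catalog-name'='hive_prod'," +
            "  'uri'='thrift://metastore:9083'," +
            "  'warehouse'='hdfs://namenode:8020/warehouse'" +
            ")");
    }
}
```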
  • Queries

    Spark Queries To use Iceberg in Spark, first configure Spark catalogs. Iceberg uses Apache Spark’s DataSourceV2 API for data source and catalog implementations. Querying with S...
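
A hedged Java sketch of that setup: register an Iceberg catalog, then query it either with SQL or through the DataSourceV2 table read (the catalog name, URI, and table are placeholders):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkIcebergQuery {
    public static void main(String[] args) {
        // An Iceberg catalog backed by a Hive metastore.
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-queries")
                .config("spark.sql.catalog.prod", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.prod.type", "hive")
                .config("spark.sql.catalog.prod.uri", "thrift://metastore:9083")
                .getOrCreate();

        // SQL against the configured catalog...
        Dataset<Row> bySql =
                spark.sql("SELECT * FROM prod.db.events WHERE level = 'ERROR'");

        // ...or the equivalent DataSourceV2 table read.
        Dataset<Row> byTable = spark.table("prod.db.events");

        bySql.show();
        System.out.println(byTable.count());
    }
}
```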
  • Transform-V2

    Transform Common Options · Copy · FieldMapper · FilterRowKind · Filter · JsonPath · Replace · Split · SQL Functions · SQL UDF · SQL
  • Evolution

    Evolution Iceberg supports in-place table evolution. You can evolve a table schema just like SQL — even in nested structures — or change partition layout when data volume chang...
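
A short Java sketch of both kinds of evolution via Spark SQL, assuming the Iceberg SQL extensions are enabled; the table and column names are placeholders:

```java
import org.apache.spark.sql.SparkSession;

public class IcebergEvolution {
    public static void main(String[] args) {
        // Partition-evolution DDL requires the Iceberg SQL extensions.
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-evolution")
                .config("spark.sql.extensions",
                        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                .config("spark.sql.catalog.prod", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.prod.type", "hive")
                .getOrCreate();

        // Schema evolution, including a field nested inside a struct.
        spark.sql("ALTER TABLE prod.db.events ADD COLUMN point.z double");

        // Partition evolution: existing data keeps its old layout,
        // new writes are partitioned by days(ts).
        spark.sql("ALTER TABLE prod.db.events ADD PARTITION FIELD days(ts)");
    }
}
```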
  • Flink DDL

    DDL commands · CREATE Catalog · Hive catalog: This creates an Iceberg catalog named hive_catalog that can be configured using 'catalog-type'='hive', which loads tables from Hive m...
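
A hedged Java sketch of issuing that DDL through the Table API; the metastore URI and warehouse values are placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlinkCreateCatalog {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.inStreamingMode());

        // An Iceberg catalog backed by the Hive metastore.
        tEnv.executeSql(
            "CREATE CATALOG hive_catalog WITH (" +
            "  'type'='iceberg'," +
            "  'catalog-type'='hive'," +
            "  'uri'='thrift://metastore:9083'," +
            "  'warehouse'='hdfs://namenode:8020/warehouse'" +
            ")");

        // Subsequent DDL/DML can target tables inside the new catalog.
        tEnv.executeSql("USE CATALOG hive_catalog");
    }
}
```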