Devlive Open Source Community: this search took 1.061 seconds and found 172 relevant results.
  • GitHub Trending Daily Report (April 26, 2025)

    📈 Today's Overall Trends Top 10 📊 Per-Language Trends Top 5 Python Rust TypeScript C++ Go PHP MDX C C Dart Java Ruby Jupyter Notebook Kotlin Lua Vim Script Dockerfile JavaScript HTML Batchfi...
  • Configuration Glossary

    Table of Contents Properties File Format Creating a Basic Properties File Job Launcher Properties Common Job Launcher Properties SchedulerDaemon Properties CliMRJobLauncher Pr...
  • Procedures

    Spark Procedures To use Iceberg in Spark, first configure Spark catalogs. Stored procedures are only available when using Iceberg SQL extensions in Spark 3. Usage Procedures c...
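    A minimal sketch of invoking an Iceberg stored procedure from Java through Spark SQL, assuming a session with the Iceberg SQL extensions enabled; the catalog name `my_catalog`, the table, and the snapshot ID are placeholders, not values from the snippet.

```java
import org.apache.spark.sql.SparkSession;

public class CallProcedureSketch {
    public static void main(String[] args) {
        // Assumes the Iceberg runtime jar is on the classpath and a catalog
        // named my_catalog is configured elsewhere in the Spark config.
        SparkSession spark = SparkSession.builder()
            .appName("iceberg-procedure-sketch")
            .config("spark.sql.extensions",
                    "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
            .getOrCreate();

        // Procedures are called against the catalog's `system` namespace.
        // Table and snapshot ID below are illustrative only.
        spark.sql("CALL my_catalog.system.rollback_to_snapshot(" +
                  "table => 'db.sample', snapshot_id => 1234567890)").show();

        spark.stop();
    }
}
```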
  • Hive Metastore

    Syncing to Hive Metastore This document walks through the steps to register an Apache XTable™ (Incubating) synced table on Hive Metastore (HMS). Pre-requisites Source table(s) ...
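    The snippet does not show XTable's own sync tooling; as a rough sketch of only the final registration step, assuming the synced table ended up in Iceberg format and a Hive-backed Spark catalog points at the same HMS, Iceberg's `register_table` procedure can attach an existing metadata file. All names, URIs, and paths below are placeholders.

```java
import org.apache.spark.sql.SparkSession;

public class RegisterOnHmsSketch {
    public static void main(String[] args) {
        // Hypothetical Iceberg catalog wired to the target Hive Metastore.
        SparkSession spark = SparkSession.builder()
            .appName("hms-register-sketch")
            .config("spark.sql.extensions",
                    "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
            .config("spark.sql.catalog.hive_cat", "org.apache.iceberg.spark.SparkCatalog")
            .config("spark.sql.catalog.hive_cat.type", "hive")
            .config("spark.sql.catalog.hive_cat.uri", "thrift://metastore-host:9083")
            .getOrCreate();

        // Register the table in HMS by pointing at the synced metadata file
        // (placeholder path; the real location comes from the sync output).
        spark.sql("CALL hive_cat.system.register_table(" +
                  "table => 'db.synced_table', " +
                  "metadata_file => 's3://bucket/path/metadata/v3.metadata.json')").show();

        spark.stop();
    }
}
```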
  • Email

    Description Key features Options email_from_address [string] email_to_address [string] email_host [string] email_transport_protocol [string] email_smtp_auth [string] email_a...
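    The options listed above are connector settings; purely as an illustration of what such settings typically map to, here is a plain Jakarta Mail sketch using the equivalent SMTP session properties. Host, credentials, and addresses are placeholders, not connector defaults.

```java
import java.util.Properties;
import jakarta.mail.*;
import jakarta.mail.internet.*;

public class EmailSketch {
    public static void main(String[] args) throws MessagingException {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");   // ~ email_host
        props.put("mail.transport.protocol", "smtp");      // ~ email_transport_protocol
        props.put("mail.smtp.auth", "true");               // ~ email_smtp_auth

        Session session = Session.getInstance(props, new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("user@example.com", "password");
            }
        });

        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("from@example.com"));      // ~ email_from_address
        msg.setRecipients(Message.RecipientType.TO,
                InternetAddress.parse("to@example.com"));          // ~ email_to_address
        msg.setSubject("Test message");
        msg.setText("Hello from the sketch.");
        Transport.send(msg);
    }
}
```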
  • Configuration

    Configuration Table properties Iceberg tables support table properties to configure table behavior, like the default split size for readers. Read properties Property Defaul...
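    A small sketch of changing a read property such as the split size from Java via Spark SQL, assuming an Iceberg table `my_catalog.db.events` already exists; the value is illustrative.

```java
import org.apache.spark.sql.SparkSession;

public class TablePropertiesSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("table-properties-sketch")
            .getOrCreate();

        // Raise the target read split size to 256 MB (value in bytes).
        spark.sql("ALTER TABLE my_catalog.db.events " +
                  "SET TBLPROPERTIES ('read.split.target-size' = '268435456')");

        // Inspect the table's current properties.
        spark.sql("SHOW TBLPROPERTIES my_catalog.db.events").show(false);

        spark.stop();
    }
}
```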
  • Flink Queries

    Flink Queries Iceberg supports streaming and batch reads with Apache Flink's DataStream API and Table API. Reading with SQL Iceberg supports both streaming and batch read in Flink...
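    A minimal Java sketch of the Table API / SQL read path, assuming a Flink catalog named `iceberg_cat` and the table `db.sample` already exist; the streaming hint follows Iceberg's `OPTIONS` table-hint syntax, everything else is a placeholder.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FlinkReadSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Snapshot read of the current table state.
        tEnv.executeSql("SELECT * FROM iceberg_cat.db.sample").print();

        // Continuous incremental read of new snapshots via table hints.
        tEnv.executeSql(
            "SELECT * FROM iceberg_cat.db.sample " +
            "/*+ OPTIONS('streaming'='true', 'monitor-interval'='10s') */").print();
    }
}
```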
  • Configuration

    Spark Configuration Catalogs Spark adds an API to plug in table catalogs that are used to load, create, and manage Iceberg tables. Spark catalogs are configured by setting Spark ...
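    A short sketch of wiring up one Iceberg catalog through Spark properties from Java; the catalog name, metastore URI, and warehouse path are placeholders.

```java
import org.apache.spark.sql.SparkSession;

public class CatalogConfigSketch {
    public static void main(String[] args) {
        // Each spark.sql.catalog.<name>.* property configures one catalog plugin.
        SparkSession spark = SparkSession.builder()
            .appName("catalog-config-sketch")
            .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
            .config("spark.sql.catalog.my_catalog.type", "hive")
            .config("spark.sql.catalog.my_catalog.uri", "thrift://metastore-host:9083")
            .config("spark.sql.catalog.my_catalog.warehouse", "s3://bucket/warehouse")
            .getOrCreate();

        // The catalog is now addressable in SQL as my_catalog.<db>.<table>.
        spark.sql("SHOW NAMESPACES IN my_catalog").show();

        spark.stop();
    }
}
```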
  • Writes

    Spark Writes To use Iceberg in Spark, first configure Spark catalogs. Some plans are only available when using Iceberg SQL extensions in Spark 3. Iceberg uses Apache Spark's D...
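    A brief sketch of the DataSourceV2 write path from Java, assuming the catalog and the target table `my_catalog.db.sample` already exist (both are placeholders).

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkWriteSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("spark-write-sketch")
            .getOrCreate();

        // Placeholder source data; any DataFrame with a matching schema works.
        Dataset<Row> df = spark.sql("SELECT 1L AS id, 'a' AS data");

        // DataFrameWriterV2: append rows to an existing table.
        df.writeTo("my_catalog.db.sample").append();

        // Equivalent SQL form.
        spark.sql("INSERT INTO my_catalog.db.sample VALUES (2L, 'b')");

        spark.stop();
    }
}
```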
  • Java Quickstart

    Java API Quickstart Create a table Tables are created using either a Catalog or an implementation of the Tables interface. Using a Hive catalog The Hive catalog connects to...
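    A condensed sketch of creating a table through a Hive catalog with the Iceberg Java API, assuming a reachable metastore; the URI, warehouse path, namespace, and schema are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hive.HiveCatalog;
import org.apache.iceberg.types.Types;

public class CreateTableSketch {
    public static void main(String[] args) {
        // Catalog backed by a Hive Metastore (placeholder URI and warehouse).
        HiveCatalog catalog = new HiveCatalog();
        Map<String, String> props = new HashMap<>();
        props.put("uri", "thrift://metastore-host:9083");
        props.put("warehouse", "s3://bucket/warehouse");
        catalog.initialize("hive", props);

        // Example schema and an unpartitioned spec.
        Schema schema = new Schema(
            Types.NestedField.required(1, "id", Types.LongType.get()),
            Types.NestedField.optional(2, "data", Types.StringType.get()));
        PartitionSpec spec = PartitionSpec.unpartitioned();

        TableIdentifier name = TableIdentifier.of("db", "sample");
        Table table = catalog.createTable(name, schema, spec);
        System.out.println("Created table at: " + table.location());
    }
}
```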