Java API Quickstart Create a table Tables are created using either a Catalog or an implementation of the Tables interface. Using a Hive catalog The Hive catalog connects to...
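A minimal sketch of creating a table through a Hive catalog with the Iceberg Java API, assuming the Iceberg core and Hive runtime jars are on the classpath; the metastore URI, warehouse path, schema, and table name below are placeholders, not values from the quickstart.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hive.HiveCatalog;
import org.apache.iceberg.types.Types;

public class CreateTableExample {
  public static void main(String[] args) {
    // Point the catalog at a Hive Metastore (placeholder URI and warehouse path).
    HiveCatalog catalog = new HiveCatalog();
    catalog.setConf(new Configuration());
    Map<String, String> properties = new HashMap<>();
    properties.put("uri", "thrift://metastore-host:9083");
    properties.put("warehouse", "hdfs://namenode:8020/warehouse/path");
    catalog.initialize("hive", properties);

    // Define the table schema and a partition spec.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()),
        Types.NestedField.required(3, "ts", Types.TimestampType.withZone()));
    PartitionSpec spec = PartitionSpec.builderFor(schema).day("ts").build();

    // Create the table in the catalog.
    Table table = catalog.createTable(TableIdentifier.of("db", "events"), schema, spec);
    System.out.println("Created table at: " + table.location());
  }
}
```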
Debugging Server Debugging Engine Flink Engine Trino Engine Hive Engine Debugging Apps Spark Engine Flink Engine You can use the Java Debug Wire Protocol to debug Kyuubi ...
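As a sketch of the JDWP setup, the server JVM needs an `-agentlib:jdwp` option before it starts; the `KYUUBI_JAVA_OPTS` hook (typically set in `conf/kyuubi-env.sh`) and the port used here are assumptions to verify against your deployment.

```bash
# Listen for a debugger on port 5005 before the Kyuubi server starts
# (suspend=y blocks startup until an IDE attaches; use suspend=n otherwise).
export KYUUBI_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
bin/kyuubi start
```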
Engine Management Details The engine UI helps you understand the status of the engines behind Kyuubi servers. The Engine UI offers an Engine Man...
Features Usage The Hive Dialect plugin aims to provide Hive dialect support for Spark’s JDBC source. It is automatically registered with Spark and applied to JDBC sources with a URL prefix of j...
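A hedged sketch of what a Spark JDBC read against a HiveServer2-compatible endpoint looks like once the dialect plugin is on the classpath; the `jdbc:hive2://` prefix is my assumption for the truncated prefix above, and the host, port, table, and user options are placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HiveJdbcReadExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("hive-dialect-jdbc-read")
        .getOrCreate();

    // With the Hive Dialect plugin registered, Spark can push Hive-compatible SQL
    // to JDBC sources whose URL starts with the (assumed) jdbc:hive2:// prefix.
    Dataset<Row> df = spark.read()
        .format("jdbc")
        .option("url", "jdbc:hive2://kyuubi-host:10009/default")  // placeholder endpoint
        .option("driver", "org.apache.hive.jdbc.HiveDriver")
        .option("dbtable", "default.src")                         // placeholder table
        .option("user", "anonymous")
        .load();

    df.show();
    spark.stop();
  }
}
```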
Writing with SQL INSERT OVERWRITE INSERT INTO Upsert to table with primary keys. DELETE FROM UPDATE MERGE INTO Writing with DataFrames Appending data Overwriting data Crea...
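A brief sketch of the SQL and DataFrame write shapes listed above, using generic Spark forms with placeholder catalog and table names; the exact syntax for upserting to a table with primary keys depends on the table format, so it is not shown here.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class WriteExamples {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("write-examples")
        .getOrCreate();

    // SQL writes: append, overwrite, and a row-level MERGE INTO (placeholder names).
    spark.sql("INSERT INTO demo.db.target SELECT id, data FROM demo.db.staging");
    spark.sql("INSERT OVERWRITE demo.db.target SELECT id, data FROM demo.db.staging");
    spark.sql("MERGE INTO demo.db.target t USING demo.db.updates u ON t.id = u.id "
        + "WHEN MATCHED THEN UPDATE SET t.data = u.data "
        + "WHEN NOT MATCHED THEN INSERT *");

    // DataFrame writes through the DataSourceV2 writer API.
    Dataset<Row> df = spark.table("demo.db.staging");
    df.writeTo("demo.db.target").append();                 // append new rows
    // df.writeTo("demo.db.target").overwritePartitions(); // dynamic partition overwrite

    spark.stop();
  }
}
```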
Hive Connector Integration Dependencies Configurations Hive Connector Operations The Kyuubi Hive Connector is a datasource for both reading and writing Hive tables. It is imple...
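A sketch of wiring the connector in as a Spark DataSourceV2 catalog. The catalog name and metastore URI are placeholders, and the `HiveTableCatalog` implementation class is my reading of the Kyuubi docs, so verify both against the connector documentation for your version.

```java
import org.apache.spark.sql.SparkSession;

public class HiveConnectorExample {
  public static void main(String[] args) {
    // Register the connector as a catalog named "hive_catalog".
    // NOTE: the implementation class and metastore URI below are assumptions to check
    // against the Kyuubi Hive Connector configuration page.
    SparkSession spark = SparkSession.builder()
        .appName("kyuubi-hive-connector")
        .config("spark.sql.catalog.hive_catalog",
            "org.apache.kyuubi.spark.connector.hive.HiveTableCatalog")
        .config("spark.sql.catalog.hive_catalog.hive.metastore.uris",
            "thrift://metastore-host:9083")
        .getOrCreate();

    // Read and write Hive tables through the catalog (placeholder table names).
    spark.sql("SELECT * FROM hive_catalog.default.src").show();
    spark.sql("INSERT INTO hive_catalog.default.target SELECT * FROM hive_catalog.default.src");

    spark.stop();
  }
}
```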
Usage Manage kyuubi servers List server Create server Get server Delete server Manage kyuubi engines List engine Get engine Delete engine Usage bin/kyuubi-ctl --h...
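A short sketch of the command shapes implied by the headings above; the subcommands mirror the manage-servers and manage-engines sections, while any required connection flags (ZooKeeper quorum, namespace, user, host/port) are omitted here and are listed by the help output.

```bash
# Show all options and subcommands.
bin/kyuubi-ctl --help

# Manage Kyuubi servers registered in service discovery.
bin/kyuubi-ctl list server
bin/kyuubi-ctl get server

# Manage engines (list, inspect, or remove an engine registration).
bin/kyuubi-ctl list engine
bin/kyuubi-ctl delete engine
```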
Syncing to BigLake Metastore This document walks through the steps to register an Apache XTable™ (Incubating) synced Iceberg table in BigLake Metastore on GCP. Pre-requisites S...
Running Tests Fully Running Tests for a Module Running Tests for a Single Test Kyuubi can be tested with Apache Maven and the ScalaTest Maven Plugin; please refer to the ...
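A sketch of the typical invocations using Kyuubi's bundled Maven wrapper; the module coordinate and suite name are placeholders, and the `wildcardSuites` and `test` properties come from the ScalaTest Maven Plugin and Surefire respectively.

```bash
# Run the full test suite.
./build/mvn clean test

# Run tests for a single module (placeholder module coordinate).
./build/mvn clean test -pl :kyuubi-common_2.12 -am

# Run one ScalaTest suite: skip Surefire's selection and point the
# ScalaTest plugin at the suite (placeholder suite name).
./build/mvn clean test -pl :kyuubi-common_2.12 -am \
  -Dtest=none -DwildcardSuites=org.apache.kyuubi.config.KyuubiConfSuite
```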
Spark DDL To use Iceberg in Spark, first configure Spark catalogs. Iceberg uses Apache Spark’s DataSourceV2 API for data source and catalog implementations. CREATE TABLE Spark...
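A brief Java sketch of the DDL shapes, assuming a Spark session whose `local` catalog is already configured as an Iceberg catalog; the table name, columns, and table property are placeholders.

```java
import org.apache.spark.sql.SparkSession;

public class SparkDdlExample {
  public static void main(String[] args) {
    // Assumes spark.sql.catalog.local is configured as an Iceberg catalog
    // (see the Spark catalog configuration page); names below are placeholders.
    SparkSession spark = SparkSession.builder()
        .appName("iceberg-spark-ddl")
        .getOrCreate();

    // Create an Iceberg table partitioned by day(ts).
    spark.sql("CREATE TABLE local.db.events (id BIGINT, data STRING, ts TIMESTAMP) "
        + "USING iceberg PARTITIONED BY (days(ts))");

    // Evolve the schema and adjust table properties in place.
    spark.sql("ALTER TABLE local.db.events ADD COLUMN category STRING");
    spark.sql("ALTER TABLE local.db.events SET TBLPROPERTIES ('write.format.default'='parquet')");

    spark.stop();
  }
}
```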