Kyuubi provides several ways to configure the system and corresponding engines.
Environments
You can configure the environment variables in `$KYUUBI_HOME/conf/kyuubi-env.sh`, e.g. `JAVA_HOME`; this Java runtime will then be used both for the Kyuubi server instance and the applications it launches. You can also change the variables in the subprocess's env configuration file, e.g. `$SPARK_HOME/conf/spark-env.sh`, to use a more specific environment for SQL engine applications. See `$KYUUBI_HOME/conf/kyuubi-env.sh.template` as an example.
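For instance, a minimal sketch of such a file might only point Kyuubi at a Java runtime and a Spark distribution; the paths below are placeholders for illustration:

```bash
#!/usr/bin/env bash
# $KYUUBI_HOME/conf/kyuubi-env.sh -- illustrative values, adjust to your environment
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # Java runtime used by the Kyuubi server and the engines it launches
export SPARK_HOME=/opt/spark                   # Spark distribution used to launch Spark SQL engines
```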
For environment variables that only need to be transferred to the engine side, you can set them with a Kyuubi configuration item formatted as `kyuubi.engineEnv.VAR_NAME`. For example, with `kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g`, the environment variable `SPARK_DRIVER_MEMORY` with the value `4g` will be transferred to the engine side. With `kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf`, the value of `SPARK_CONF_DIR` on the engine side is set to `/apache/confs/spark/conf`.
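As a sketch, the two examples above would look like this in `kyuubi-defaults.conf` (the values are illustrative):

```properties
# Transferred to the engine side as environment variables
kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g
kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf
```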
Kyuubi Configurations
You can configure the Kyuubi properties in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`; see `$KYUUBI_HOME/conf/kyuubi-defaults.conf.template` as an example.
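A small sketch of what such a file might contain, using keys documented in the tables below (hostname and port are placeholders):

```properties
kyuubi.frontend.bind.host=kyuubi-host.example.com
kyuubi.frontend.thrift.binary.bind.port=10009
kyuubi.authentication=NONE
```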
Authentication
Key | Default | Meaning | Type | Since
---|---|---|---|---|
kyuubi.authentication | NONE | A comma-separated list of client authentication types. Candidates: NOSASL, NONE, LDAP, KERBEROS, CUSTOM, JDBC. | seq | 1.0.0
kyuubi.authentication.custom.class | <undefined> | User-defined authentication implementation of org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider | string | 1.3.0
kyuubi.authentication.jdbc.driver.class | <undefined> | Driver class name for JDBC Authentication Provider. | string | 1.6.0
kyuubi.authentication.jdbc.password | <undefined> | Database password for JDBC Authentication Provider. | string | 1.6.0
kyuubi.authentication.jdbc.query | <undefined> | Query SQL template with placeholders for JDBC Authentication Provider to execute. Authentication passes if the result set is not empty. The SQL statement must start with the SELECT clause. Available placeholders are ${user} and ${password}. | string | 1.6.0
kyuubi.authentication.jdbc.url | <undefined> | JDBC URL for JDBC Authentication Provider. | string | 1.6.0
kyuubi.authentication.jdbc.user | <undefined> | Database user for JDBC Authentication Provider. | string | 1.6.0
kyuubi.authentication.ldap.baseDN | <undefined> | LDAP base DN. | string | 1.7.0
kyuubi.authentication.ldap.binddn | <undefined> | The user with which to bind to the LDAP server, and search for the full domain name of the user being authenticated. This should be the full domain name of the user, and should have search access across all users in the LDAP tree. If not specified, then the user being authenticated will be used as the bind user. For example: CN=bindUser,CN=Users,DC=subdomain,DC=domain,DC=com | string | 1.7.0
kyuubi.authentication.ldap.bindpw | <undefined> | The password for the bind user, to be used to search for the full name of the user being authenticated. If the username is specified, this parameter must also be specified. | string | 1.7.0
kyuubi.authentication.ldap.customLDAPQuery | <undefined> | A full LDAP query that the LDAP authentication provider executes against the LDAP server. If this query returns a null result set, the LDAP provider fails the authentication request; it succeeds if the user is part of the result set. For example: (&(objectClass=group)(objectClass=top)(instanceType=4)(cn=Domain*)), (&(objectClass=person)(\|(sAMAccountName=admin)(\|(memberOf=CN=Domain Admins,CN=Users,DC=domain,DC=com)(memberOf=CN=Administrators,CN=Builtin,DC=domain,DC=com)))) | string | 1.7.0
kyuubi.authentication.ldap.domain | <undefined> | LDAP domain. | string | 1.0.0
kyuubi.authentication.ldap.groupClassKey | groupOfNames | LDAP attribute name on the group entry that is to be used in LDAP group searches. For example: group, groupOfNames or groupOfUniqueNames. | string | 1.7.0
kyuubi.authentication.ldap.groupDNPattern | <undefined> | COLON-separated list of patterns to use to find DNs for group entities in this directory. Use %s where the actual group name is to be substituted for. For example: CN=%s,CN=Groups,DC=subdomain,DC=domain,DC=com. | string | 1.7.0
kyuubi.authentication.ldap.groupFilter | | COMMA-separated list of LDAP Group names (short name not full DNs). For example: HiveAdmins,HadoopAdmins,Administrators | set | 1.7.0
kyuubi.authentication.ldap.groupMembershipKey | member | LDAP attribute name on the group object that contains the list of distinguished names for the user, group, and contact objects that are members of the group. For example: member, uniqueMember or memberUid | string | 1.7.0
kyuubi.authentication.ldap.guidKey | uid | LDAP attribute name whose values are unique in this LDAP server. For example: uid or CN. | string | 1.2.0
kyuubi.authentication.ldap.url | <undefined> | SPACE character separated LDAP connection URL(s). | string | 1.0.0
kyuubi.authentication.ldap.userDNPattern | <undefined> | COLON-separated list of patterns to use to find DNs for users in this directory. Use %s where the actual username is to be substituted for. For example: CN=%s,CN=Users,DC=subdomain,DC=domain,DC=com. | string | 1.7.0
kyuubi.authentication.ldap.userFilter | | COMMA-separated list of LDAP usernames (just short names, not full DNs). For example: hiveuser,impalauser,hiveadmin,hadoopadmin | set | 1.7.0
kyuubi.authentication.ldap.userMembershipKey | <undefined> | LDAP attribute name on the user object that contains groups of which the user is a direct member, except for the primary group, which is represented by the primaryGroupId. For example: memberOf | string | 1.7.0
kyuubi.authentication.sasl.qop | auth | SASL QOP enables higher levels of protection for Kyuubi communication with clients. | string | 1.0.0
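As a sketch under assumed values, enabling the JDBC authentication provider described above could look like the following in `kyuubi-defaults.conf`. The driver class, database URL, and the `auth_user` table with its columns are hypothetical; they only illustrate how the `${user}` and `${password}` placeholders are used:

```properties
kyuubi.authentication=JDBC
# Hypothetical MySQL-backed user store
kyuubi.authentication.jdbc.driver.class=com.mysql.cj.jdbc.Driver
kyuubi.authentication.jdbc.url=jdbc:mysql://db-host:3306/auth_db
kyuubi.authentication.jdbc.user=auth_reader
kyuubi.authentication.jdbc.password=auth_reader_password
# Authentication passes if this query returns a non-empty result set
kyuubi.authentication.jdbc.query=SELECT 1 FROM auth_user WHERE username=${user} AND passwd=${password}
```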
Backend
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.backend.engine.exec.pool.keepalive.time | PT1M | Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in SQL engine applications | duration | 1.0.0 |
kyuubi.backend.engine.exec.pool.shutdown.timeout | PT10S | Timeout(ms) for the operation execution thread pool to terminate in SQL engine applications | duration | 1.0.0 |
kyuubi.backend.engine.exec.pool.size | 100 | Number of threads in the operation execution thread pool of SQL engine applications | int | 1.0.0 |
kyuubi.backend.engine.exec.pool.wait.queue.size | 100 | Size of the wait queue for the operation execution thread pool in SQL engine applications | int | 1.0.0 |
kyuubi.backend.server.event.json.log.path | file:///tmp/kyuubi/events | The location where server events go for the built-in JSON logger | string | 1.4.0 |
kyuubi.backend.server.event.kafka.close.timeout | PT5S | Period to wait for Kafka producer of server event handlers to close. | duration | 1.8.0 |
kyuubi.backend.server.event.kafka.topic | <undefined> | The topic where server events go for the built-in Kafka logger | string | 1.8.0 |
kyuubi.backend.server.event.loggers | | A comma-separated list of server history loggers, where session/operation etc events go. | seq | 1.4.0 |
kyuubi.backend.server.exec.pool.keepalive.time | PT1M | Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in Kyuubi server | duration | 1.0.0 |
kyuubi.backend.server.exec.pool.shutdown.timeout | PT10S | Timeout(ms) for the operation execution thread pool to terminate in Kyuubi server | duration | 1.0.0 |
kyuubi.backend.server.exec.pool.size | 100 | Number of threads in the operation execution thread pool of Kyuubi server | int | 1.0.0 |
kyuubi.backend.server.exec.pool.wait.queue.size | 100 | Size of the wait queue for the operation execution thread pool of Kyuubi server | int | 1.0.0 |
Batch
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.batch.application.check.interval | PT5S | The interval to check batch job application information. | duration | 1.6.0 |
kyuubi.batch.application.starvation.timeout | PT3M | Threshold above which to warn batch application may be starved. | duration | 1.7.0 |
kyuubi.batch.conf.ignore.list | | A comma-separated list of ignored keys for batch conf. If the batch conf contains any of them, the key and the corresponding value will be removed silently during batch job submission. Note that this rule is for server-side protection defined by administrators to prevent some essential configs from being tampered with. You can also pre-define some configs for batch job submission with the prefix: kyuubi.batchConf.[batchType]. For example, you can pre-define spark.master for the Spark batch job with the key kyuubi.batchConf.spark.spark.master. | set | 1.6.0 |
kyuubi.batch.session.idle.timeout | PT6H | Batch session idle timeout; the session will be closed when it is not accessed for this duration | duration | 1.6.2 |
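For example, a minimal sketch combining the two mechanisms above in `kyuubi-defaults.conf`: pre-define `spark.master` for Spark batch jobs and silently drop any client-supplied `spark.master` (the value `yarn` is illustrative):

```properties
# Server-side default for Spark batch jobs
kyuubi.batchConf.spark.spark.master=yarn
# Silently drop this key if a client passes it in the batch conf
kyuubi.batch.conf.ignore.list=spark.master
```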
Credentials
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.credentials.check.interval | PT5M | The interval to check the expiration of cached credentials | duration | 1.6.0 |
kyuubi.credentials.hadoopfs.enabled | true | Whether to renew Hadoop filesystem delegation tokens | boolean | 1.4.0 |
kyuubi.credentials.hadoopfs.uris | | Extra Hadoop filesystem URIs for which to request delegation tokens. The filesystem that hosts fs.defaultFS does not need to be listed here. | seq | 1.4.0 |
kyuubi.credentials.hive.enabled | true | Whether to renew Hive metastore delegation token | boolean | 1.4.0 |
kyuubi.credentials.idle.timeout | PT6H | The inactive users’ credentials will be expired after a configured timeout | duration | 1.6.0 |
kyuubi.credentials.renewal.interval | PT1H | How often Kyuubi renews one user’s delegation tokens | duration | 1.4.0 |
kyuubi.credentials.renewal.retry.wait | PT1M | How long to wait before retrying to fetch new credentials after a failure. | duration | 1.4.0 |
kyuubi.credentials.update.wait.timeout | PT1M | How long to wait until the credentials are ready. | duration | 1.5.0 |
Ctl
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.ctl.batch.log.on.failure.timeout | PT10S | The timeout for fetching remaining batch logs if the batch failed. | duration | 1.6.1 |
kyuubi.ctl.batch.log.query.interval | PT3S | The interval for fetching batch logs. | duration | 1.6.0 |
kyuubi.ctl.rest.auth.schema | basic | The authentication schema. Valid values are: basic, spnego. | string | 1.6.0 |
kyuubi.ctl.rest.base.url | <undefined> | The REST API base URL, which contains the scheme (http:// or https://), hostname, port number | string | 1.6.0 |
kyuubi.ctl.rest.connect.timeout | PT30S | The timeout[ms] for establishing the connection with the kyuubi server. A timeout value of zero is interpreted as an infinite timeout. | duration | 1.6.0 |
kyuubi.ctl.rest.request.attempt.wait | PT3S | How long to wait between attempts of ctl rest request. | duration | 1.6.0 |
kyuubi.ctl.rest.request.max.attempts | 3 | The max attempts number for ctl rest request. | int | 1.6.0 |
kyuubi.ctl.rest.socket.timeout | PT2M | The timeout[ms] for waiting for data packets after connection is established. A timeout value of zero is interpreted as an infinite timeout. | duration | 1.6.0 |
kyuubi.ctl.rest.spnego.host | <undefined> | When auth schema is spnego, need to config spnego host. | string | 1.6.0 |
Delegation
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.delegation.key.update.interval | PT24H | unused yet | duration | 1.0.0 |
kyuubi.delegation.token.gc.interval | PT1H | unused yet | duration | 1.0.0 |
kyuubi.delegation.token.max.lifetime | PT168H | unused yet | duration | 1.0.0 |
kyuubi.delegation.token.renew.interval | PT168H | unused yet | duration | 1.0.0 |
Engine
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.engine.chat.ernie.http.connect.timeout | PT2M | The timeout[ms] for establishing the connection with the ernie bot server. A timeout value of zero is interpreted as an infinite timeout. | duration | 1.9.0 |
kyuubi.engine.chat.ernie.http.proxy | <undefined> | HTTP proxy url for API calling in ernie bot engine. e.g. http://127.0.0.1:1088 | string | 1.9.0 |
kyuubi.engine.chat.ernie.http.socket.timeout | PT2M | The timeout[ms] for waiting for data packets after ernie bot server connection is established. A timeout value of zero is interpreted as an infinite timeout. | duration | 1.9.0 |
kyuubi.engine.chat.ernie.model | completions | ID of the model used in ernie bot. Available models are completions_pro, ernie_bot_8k, completions and eb-instant; see the model overview for details. | string | 1.9.0 |
kyuubi.engine.chat.ernie.token | <undefined> | The token to access ernie bot open API, which could be got at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Ilkkrb0i5 | string | 1.9.0 |
kyuubi.engine.chat.extra.classpath | <undefined> | The extra classpath for the Chat engine, for configuring the location of the SDK and etc. | string | 1.8.0 |
kyuubi.engine.chat.gpt.apiKey | <undefined> | The key to access OpenAI open API, which could be got at https://platform.openai.com/account/api-keys | string | 1.8.0 |
kyuubi.engine.chat.gpt.http.connect.timeout | PT2M | The timeout[ms] for establishing the connection with the Chat GPT server. A timeout value of zero is interpreted as an infinite timeout. | duration | 1.8.0 |
kyuubi.engine.chat.gpt.http.proxy | <undefined> | HTTP proxy url for API calling in Chat GPT engine. e.g. http://127.0.0.1:1087 | string | 1.8.0 |
kyuubi.engine.chat.gpt.http.socket.timeout | PT2M | The timeout[ms] for waiting for data packets after Chat GPT server connection is established. A timeout value of zero is interpreted as an infinite timeout. | duration | 1.8.0 |
kyuubi.engine.chat.gpt.model | gpt-3.5-turbo | ID of the model used in ChatGPT. Available models refer to OpenAI’s Model overview. | string | 1.8.0 |
kyuubi.engine.chat.java.options | <undefined> | The extra Java options for the Chat engine | string | 1.8.0 |
kyuubi.engine.chat.memory | 1g | The heap memory for the Chat engine | string | 1.8.0 |
kyuubi.engine.chat.provider | ECHO | The provider for the Chat engine. Candidates: ECHO, GPT, ERNIE. | string | 1.8.0 |
kyuubi.engine.connection.url.use.hostname | true | (deprecated) When true, the engine registers with hostname to zookeeper. When Spark runs on K8s with cluster mode, set to false to ensure that server can connect to engine | boolean | 1.3.0 |
kyuubi.engine.deregister.exception.classes | | A comma-separated list of exception classes. If there is any exception thrown, whose class matches the specified classes, the engine would deregister itself. | set | 1.2.0 |
kyuubi.engine.deregister.exception.messages | | A comma-separated list of exception messages. If there is any exception thrown, whose message or stacktrace matches the specified message list, the engine would deregister itself. | set | 1.2.0 |
kyuubi.engine.deregister.exception.ttl | PT30M | Time to live(TTL) for exceptions pattern specified in kyuubi.engine.deregister.exception.classes and kyuubi.engine.deregister.exception.messages to deregister engines. Once the total error count hits the kyuubi.engine.deregister.job.max.failures within the TTL, an engine will deregister itself and wait for self-terminated. Otherwise, we suppose that the engine has recovered from temporary failures. | duration | 1.2.0 |
kyuubi.engine.deregister.job.max.failures | 4 | Number of failures of job before deregistering the engine. | int | 1.2.0 |
kyuubi.engine.doAs.enabled | true | Whether to enable user impersonation on launching engine. When enabled, for engines which support user impersonation, e.g. SPARK, depending on the kyuubi.engine.share.level, different users will be used to launch the engine. Otherwise, Kyuubi Server's user will always be used to launch the engine. | boolean | 1.9.0 |
kyuubi.engine.event.json.log.path | file:///tmp/kyuubi/events | The location where all the engine events go for the built-in JSON logger. | string | 1.3.0 |
kyuubi.engine.event.loggers | SPARK | A comma-separated list of engine history loggers, where engine/session/operation etc events go. A custom event handler can be provided via org.apache.kyuubi.events.handler.CustomEventHandlerProvider, which has a zero-arg constructor. | seq | 1.3.0 |
kyuubi.engine.flink.application.jars | <undefined> | A comma-separated list of the local jars to be shipped with the job to the cluster. For example, SQL UDF jars. Only effective in yarn application mode. | string | 1.8.0 |
kyuubi.engine.flink.extra.classpath | <undefined> | The extra classpath for the Flink SQL engine, for configuring the location of hadoop client jars, etc. Only effective in yarn session mode. | string | 1.6.0 |
kyuubi.engine.flink.initialize.sql | SHOW DATABASES | The initialize sql for Flink engine. It falls back to kyuubi.engine.initialize.sql. | seq | 1.8.1 |
kyuubi.engine.flink.java.options | <undefined> | The extra Java options for the Flink SQL engine. Only effective in yarn session mode. | string | 1.6.0 |
kyuubi.engine.flink.memory | 1g | The heap memory for the Flink SQL engine. Only effective in yarn session mode. | string | 1.6.0 |
kyuubi.engine.hive.deploy.mode | LOCAL | Configures the hive engine deploy mode. The value can be 'local' or 'yarn'. In local mode, the engine operates on the same node as the KyuubiServer. In YARN mode, the engine runs within the Application Master (AM) container of YARN. | string | 1.9.0 |
kyuubi.engine.hive.event.loggers | JSON | A comma-separated list of engine history loggers, where engine/session/operation etc events go. | seq | 1.7.0 |
kyuubi.engine.hive.extra.classpath | <undefined> | The extra classpath for the Hive query engine, for configuring location of the hadoop client jars and etc. | string | 1.6.0 |
kyuubi.engine.hive.java.options | <undefined> | The extra Java options for the Hive query engine | string | 1.6.0 |
kyuubi.engine.hive.memory | 1g | The heap memory for the Hive query engine | string | 1.6.0 |
kyuubi.engine.initialize.sql | SHOW DATABASES | SemiColon-separated list of SQL statements to be initialized in the newly created engine before queries. i.e. use SHOW DATABASES to eagerly activate the HiveClient. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver. | seq | 1.2.0 |
kyuubi.engine.jdbc.connection.password | <undefined> | The password is used for connecting to server | string | 1.6.0 |
kyuubi.engine.jdbc.connection.propagateCredential | false | Whether to use the session’s user and password to connect to database | boolean | 1.8.0 |
kyuubi.engine.jdbc.connection.properties | | The additional properties are used for connecting to server | seq | 1.6.0 |
kyuubi.engine.jdbc.connection.provider | <undefined> | A JDBC connection provider plugin for the Kyuubi Server to establish a connection to the JDBC URL. The configuration value should be a subclass of org.apache.kyuubi.engine.jdbc.connection.JdbcConnectionProvider. Kyuubi provides several built-in implementations. | string | 1.6.0 |
kyuubi.engine.jdbc.connection.url | <undefined> | The server url that engine will connect to | string | 1.6.0 |
kyuubi.engine.jdbc.connection.user | <undefined> | The user is used for connecting to server | string | 1.6.0 |
kyuubi.engine.jdbc.driver.class | <undefined> | The driver class for JDBC engine connection | string | 1.6.0 |
kyuubi.engine.jdbc.extra.classpath | <undefined> | The extra classpath for the JDBC query engine, for configuring the location of the JDBC driver and etc. | string | 1.6.0 |
kyuubi.engine.jdbc.fetch.size | 1000 | The fetch size of JDBC engine | int | 1.9.0 |
kyuubi.engine.jdbc.initialize.sql | SELECT 1 | SemiColon-separated list of SQL statements to be initialized in the newly created engine before queries. i.e. use SELECT 1 to eagerly activate the JDBCClient. | seq | 1.8.0 |
kyuubi.engine.jdbc.java.options | <undefined> | The extra Java options for the JDBC query engine | string | 1.6.0 |
kyuubi.engine.jdbc.memory | 1g | The heap memory for the JDBC query engine | string | 1.6.0 |
kyuubi.engine.jdbc.session.initialize.sql | | SemiColon-separated list of SQL statements to be initialized in the newly created engine session before queries. | seq | 1.8.0 |
kyuubi.engine.jdbc.type | <undefined> | The short name of JDBC type | string | 1.6.0 |
kyuubi.engine.kubernetes.submit.timeout | PT30S | The engine submit timeout for Kubernetes application. | duration | 1.7.2 |
kyuubi.engine.operation.convert.catalog.database.enabled | true | When set to true, The engine converts the JDBC methods of set/get Catalog and set/get Schema to the implementation of different engines | boolean | 1.6.0 |
kyuubi.engine.operation.log.dir.root | engine_operation_logs | Root directory for query operation log at engine-side. | string | 1.4.0 |
kyuubi.engine.pool.name | engine-pool | The name of the engine pool. | string | 1.5.0 |
kyuubi.engine.pool.selectPolicy | RANDOM | The select policy of an engine from the corresponding engine pool for a session. Candidates: RANDOM, POLLING. | string | 1.7.0 |
kyuubi.engine.pool.size | -1 | The size of the engine pool. Note that, if the size is less than 1, the engine pool will not be enabled; otherwise, the size of the engine pool will be min(this, kyuubi.engine.pool.size.threshold). | int | 1.4.0 |
kyuubi.engine.pool.size.threshold | 9 | This parameter is introduced as a server-side parameter controlling the upper limit of the engine pool. | int | 1.4.0 |
kyuubi.engine.session.initialize.sql | | SemiColon-separated list of SQL statements to be initialized in the newly created engine session before queries. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver. | seq | 1.3.0 |
kyuubi.engine.share.level | USER | Engines will be shared in different levels; available configs are CONNECTION, USER, GROUP and SERVER. See also kyuubi.engine.share.level.subdomain and kyuubi.engine.doAs.enabled. | string | 1.2.0 |
kyuubi.engine.share.level.sub.domain | <undefined> | (deprecated) - Using kyuubi.engine.share.level.subdomain instead | string | 1.2.0 |
kyuubi.engine.share.level.subdomain | <undefined> | Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string value that must be a valid zookeeper subpath. For example, for the USER share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the USER share level. When the engine pool is disabled, 'default' is used if absent. | string | 1.4.0 |
kyuubi.engine.single.spark.session | false | When set to true, this engine is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database. | boolean | 1.3.0 |
kyuubi.engine.spark.event.loggers | SPARK | A comma-separated list of engine loggers, where engine/session/operation etc events go. | seq | 1.7.0 |
kyuubi.engine.spark.initialize.sql | SHOW DATABASES | The initialize sql for Spark engine. It falls back to kyuubi.engine.initialize.sql. | seq | 1.8.1 |
kyuubi.engine.spark.output.mode | AUTO | The output mode of the Spark engine. | string | 1.9.0 |
kyuubi.engine.spark.python.env.archive | <undefined> | Portable Python env archive used for Spark engine Python language mode. | string | 1.7.0 |
kyuubi.engine.spark.python.env.archive.exec.path | bin/python | The Python exec path under the Python env archive. | string | 1.7.0 |
kyuubi.engine.spark.python.home.archive | <undefined> | Spark archive containing $SPARK_HOME/python directory, which is used to init session Python worker for Python language mode. | string | 1.7.0 |
kyuubi.engine.submit.timeout | PT30S | Period to tolerate the Driver Pod being ephemerally invisible after submitting. In some Resource Managers, e.g. K8s, the Driver Pod is not visible immediately after spark-submit returns. | duration | 1.7.1 |
kyuubi.engine.trino.connection.keystore.password | <undefined> | The keystore password used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.keystore.path | <undefined> | The keystore path used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.keystore.type | <undefined> | The keystore type used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.password | <undefined> | The password used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.truststore.password | <undefined> | The truststore password used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.truststore.path | <undefined> | The truststore path used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.truststore.type | <undefined> | The truststore type used for connecting to trino cluster | string | 1.8.0 |
kyuubi.engine.trino.connection.user | <undefined> | The user used for connecting to trino cluster | string | 1.9.0 |
kyuubi.engine.trino.event.loggers | JSON | A comma-separated list of engine history loggers, where engine/session/operation etc events go. | seq | 1.7.0 |
kyuubi.engine.trino.extra.classpath | <undefined> | The extra classpath for the Trino query engine, for configuring other libs which may need by the Trino engine | string | 1.6.0 |
kyuubi.engine.trino.java.options | <undefined> | The extra Java options for the Trino query engine | string | 1.6.0 |
kyuubi.engine.trino.memory | 1g | The heap memory for the Trino query engine | string | 1.6.0 |
kyuubi.engine.type | SPARK_SQL | Specify the detailed engine supported by Kyuubi. The engine type binds to the SESSION scope. This configuration is experimental. Currently, available configs are: SPARK_SQL, FLINK_SQL, TRINO, HIVE_SQL, JDBC and CHAT. | string | 1.4.0 |
kyuubi.engine.ui.retainedSessions | 200 | The number of SQL client sessions kept in the Kyuubi Query Engine web UI. | int | 1.4.0 |
kyuubi.engine.ui.retainedStatements | 200 | The number of statements kept in the Kyuubi Query Engine web UI. | int | 1.4.0 |
kyuubi.engine.ui.stop.enabled | true | When true, allows Kyuubi engine to be killed from the Spark Web UI. | boolean | 1.3.0 |
kyuubi.engine.user.isolated.spark.session | true | When set to false, if the engine is running in a group or server share level, all the JDBC/ODBC connections will be isolated against the user. Including the temporary views, function registries, SQL configuration, and the current database. Note that, it does not affect if the share level is connection or user. | boolean | 1.6.0 |
kyuubi.engine.user.isolated.spark.session.idle.interval | PT1M | The interval to check if the user-isolated Spark session is timeout. | duration | 1.6.0 |
kyuubi.engine.user.isolated.spark.session.idle.timeout | PT6H | If kyuubi.engine.user.isolated.spark.session is false, we will release the Spark session if its corresponding user is inactive after this configured timeout. | duration | 1.6.0 |
kyuubi.engine.yarn.app.name | <undefined> | The YARN app name when the engine deploy mode is YARN. | string | 1.9.0 |
kyuubi.engine.yarn.cores | 1 | kyuubi engine container core number when the engine deploy mode is YARN. | int | 1.9.0 |
kyuubi.engine.yarn.java.options | <undefined> | The extra Java options for the AM when the engine deploy mode is YARN. | string | 1.9.0 |
kyuubi.engine.yarn.memory | 1024 | kyuubi engine container memory in mb when the engine deploy mode is YARN. | int | 1.9.0 |
kyuubi.engine.yarn.priority | <undefined> | kyuubi engine yarn priority when the engine deploy mode is YARN. | int | 1.9.0 |
kyuubi.engine.yarn.queue | default | kyuubi engine yarn queue when the engine deploy mode is YARN. | string | 1.9.0 |
kyuubi.engine.yarn.stagingDir | <undefined> | Staging directory used while submitting kyuubi engine to YARN. It should be an absolute path in HDFS. | string | 1.9.0 |
kyuubi.engine.yarn.submit.timeout | PT30S | The engine submit timeout for YARN application. | duration | 1.7.2 |
kyuubi.engine.yarn.tags | <undefined> | kyuubi engine yarn tags when the engine deploy mode is YARN. | seq | 1.9.0 |
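As a hedged sketch, a server-side default for engine type and sharing using the keys above might look like this in `kyuubi-defaults.conf` (the subdomain name is illustrative):

```properties
kyuubi.engine.type=SPARK_SQL
kyuubi.engine.share.level=USER
# Optional: give this set of engines its own subdomain under the USER share level
kyuubi.engine.share.level.subdomain=sales-bi
```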
Event
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.event.async.pool.keepalive.time | PT1M | Time(ms) that an idle async thread of the async event handler thread pool will wait for a new task to arrive before terminating | duration | 1.7.0 |
kyuubi.event.async.pool.size | 8 | Number of threads in the async event handler thread pool | int | 1.7.0 |
kyuubi.event.async.pool.wait.queue.size | 100 | Size of the wait queue for the async event handler thread pool | int | 1.7.0 |
Frontend
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.frontend.advertised.host | <undefined> | Hostname or IP of the Kyuubi server’s frontend services to publish to external systems such as the service discovery ensemble and metadata store. Use it when you want to advertise a different hostname or IP than the bind host. | string | 1.8.0 |
kyuubi.frontend.bind.host | <undefined> | Hostname or IP of the machine on which to run the frontend services. | string | 1.0.0 |
kyuubi.frontend.bind.port | 10009 | (deprecated) Port of the machine on which to run the thrift frontend service via the binary protocol. | int | 1.0.0 |
kyuubi.frontend.connection.url.use.hostname | true | When true, frontend services prefer hostname, otherwise, ip address. Note that, the default value is set to false when engine running on Kubernetes to prevent potential network issues. | boolean | 1.5.0 |
kyuubi.frontend.max.message.size | 104857600 | (deprecated) Maximum message size in bytes a Kyuubi server will accept. | int | 1.0.0 |
kyuubi.frontend.max.worker.threads | 999 | (deprecated) Maximum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.0.0 |
kyuubi.frontend.min.worker.threads | 9 | (deprecated) Minimum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.0.0 |
kyuubi.frontend.mysql.bind.host | <undefined> | Hostname or IP of the machine on which to run the MySQL frontend service. | string | 1.4.0 |
kyuubi.frontend.mysql.bind.port | 3309 | Port of the machine on which to run the MySQL frontend service. | int | 1.4.0 |
kyuubi.frontend.mysql.max.worker.threads | 999 | Maximum number of threads in the command execution thread pool for the MySQL frontend service | int | 1.4.0 |
kyuubi.frontend.mysql.min.worker.threads | 9 | Minimum number of threads in the command execution thread pool for the MySQL frontend service | int | 1.4.0 |
kyuubi.frontend.mysql.netty.worker.threads | <undefined> | Number of thread in the netty worker event loop of MySQL frontend service. Use min(cpu_cores, 8) in default. | int | 1.4.0 |
kyuubi.frontend.mysql.worker.keepalive.time | PT1M | Time(ms) that an idle async thread of the command execution thread pool will wait for a new task to arrive before terminating in MySQL frontend service | duration | 1.4.0 |
kyuubi.frontend.protocols | THRIFT_BINARY,REST | A comma-separated list for all frontend protocols: THRIFT_BINARY, THRIFT_HTTP, REST, MYSQL and TRINO. | seq | 1.4.0 |
kyuubi.frontend.proxy.http.client.ip.header | X-Real-IP | The HTTP header to record the real client IP address. If your server is behind a load balancer or other proxy, the server will see this load balancer or proxy IP address as the client IP address, to get around this common issue, most load balancers or proxies offer the ability to record the real remote IP address in an HTTP header that will be added to the request for other devices to use. Note that, because the header value can be specified to any IP address, so it will not be used for authentication. | string | 1.6.0 |
kyuubi.frontend.rest.bind.host | <undefined> | Hostname or IP of the machine on which to run the REST frontend service. | string | 1.4.0 |
kyuubi.frontend.rest.bind.port | 10099 | Port of the machine on which to run the REST frontend service. | int | 1.4.0 |
kyuubi.frontend.rest.jetty.stopTimeout | PT5S | Stop timeout for Jetty server used by the RESTful frontend service. | duration | 1.8.1 |
kyuubi.frontend.rest.max.worker.threads | 999 | Maximum number of threads in the frontend worker thread pool for the rest frontend service | int | 1.6.2 |
kyuubi.frontend.ssl.keystore.algorithm | <undefined> | SSL certificate keystore algorithm. | string | 1.7.0 |
kyuubi.frontend.ssl.keystore.password | <undefined> | SSL certificate keystore password. | string | 1.7.0 |
kyuubi.frontend.ssl.keystore.path | <undefined> | SSL certificate keystore location. | string | 1.7.0 |
kyuubi.frontend.ssl.keystore.type | <undefined> | SSL certificate keystore type. | string | 1.7.0 |
kyuubi.frontend.thrift.binary.bind.host | <undefined> | Hostname or IP of the machine on which to run the thrift frontend service via the binary protocol. | string | 1.4.0 |
kyuubi.frontend.thrift.binary.bind.port | 10009 | Port of the machine on which to run the thrift frontend service via the binary protocol. | int | 1.4.0 |
kyuubi.frontend.thrift.binary.ssl.disallowed.protocols | SSLv2,SSLv3 | SSL versions to disallow for Kyuubi thrift binary frontend. | set | 1.7.0 |
kyuubi.frontend.thrift.binary.ssl.enabled | false | Set this to true for using SSL encryption in thrift binary frontend server. | boolean | 1.7.0 |
kyuubi.frontend.thrift.binary.ssl.include.ciphersuites | | A comma-separated list of include SSL cipher suite names for thrift binary frontend. | seq | 1.7.0 |
kyuubi.frontend.thrift.http.bind.host | <undefined> | Hostname or IP of the machine on which to run the thrift frontend service via http protocol. | string | 1.6.0 |
kyuubi.frontend.thrift.http.bind.port | 10010 | Port of the machine on which to run the thrift frontend service via http protocol. | int | 1.6.0 |
kyuubi.frontend.thrift.http.compression.enabled | true | Enable thrift http compression via Jetty compression support | boolean | 1.6.0 |
kyuubi.frontend.thrift.http.cookie.auth.enabled | true | When true, Kyuubi in HTTP transport mode, will use cookie-based authentication mechanism | boolean | 1.6.0 |
kyuubi.frontend.thrift.http.cookie.domain | <undefined> | Domain for the Kyuubi generated cookies | string | 1.6.0 |
kyuubi.frontend.thrift.http.cookie.is.httponly | true | HttpOnly attribute of the Kyuubi generated cookie. | boolean | 1.6.0 |
kyuubi.frontend.thrift.http.cookie.max.age | 86400 | Maximum age in seconds for server side cookie used by Kyuubi in HTTP mode. | int | 1.6.0 |
kyuubi.frontend.thrift.http.cookie.path | <undefined> | Path for the Kyuubi generated cookies | string | 1.6.0 |
kyuubi.frontend.thrift.http.max.idle.time | PT30M | Maximum idle time for a connection on the server when in HTTP mode. | duration | 1.6.0 |
kyuubi.frontend.thrift.http.path | cliservice | Path component of URL endpoint when in HTTP mode. | string | 1.6.0 |
kyuubi.frontend.thrift.http.request.header.size | 6144 | Request header size in bytes, when using HTTP transport mode. Jetty defaults used. | int | 1.6.0 |
kyuubi.frontend.thrift.http.response.header.size | 6144 | Response header size in bytes, when using HTTP transport mode. Jetty defaults used. | int | 1.6.0 |
kyuubi.frontend.thrift.http.ssl.exclude.ciphersuites | | A comma-separated list of exclude SSL cipher suite names for thrift http frontend. | seq | 1.7.0 |
kyuubi.frontend.thrift.http.ssl.keystore.password | <undefined> | SSL certificate keystore password. | string | 1.6.0 |
kyuubi.frontend.thrift.http.ssl.keystore.path | <undefined> | SSL certificate keystore location. | string | 1.6.0 |
kyuubi.frontend.thrift.http.ssl.protocol.blacklist | SSLv2,SSLv3 | SSL Versions to disable when using HTTP transport mode. | seq | 1.6.0 |
kyuubi.frontend.thrift.http.use.SSL | false | Set this to true for using SSL encryption in http mode. | boolean | 1.6.0 |
kyuubi.frontend.thrift.http.xsrf.filter.enabled | false | If enabled, Kyuubi will block any requests made to it over HTTP if an X-XSRF-HEADER header is not present | boolean | 1.6.0 |
kyuubi.frontend.thrift.max.message.size | 104857600 | Maximum message size in bytes a Kyuubi server will accept. | int | 1.4.0 |
kyuubi.frontend.thrift.max.worker.threads | 999 | Maximum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.4.0 |
kyuubi.frontend.thrift.min.worker.threads | 9 | Minimum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.4.0 |
kyuubi.frontend.thrift.worker.keepalive.time | PT1M | Keep-alive time (in milliseconds) for an idle worker thread | duration | 1.4.0 |
kyuubi.frontend.trino.bind.host | <undefined> | Hostname or IP of the machine on which to run the TRINO frontend service. | string | 1.7.0 |
kyuubi.frontend.trino.bind.port | 10999 | Port of the machine on which to run the TRINO frontend service. | int | 1.7.0 |
kyuubi.frontend.trino.jetty.stopTimeout | PT5S | Stop timeout for Jetty server used by the Trino frontend service. | duration | 1.8.1 |
kyuubi.frontend.trino.max.worker.threads | 999 | Maximum number of threads in the frontend worker thread pool for the Trino frontend service | int | 1.7.0 |
kyuubi.frontend.worker.keepalive.time | PT1M | (deprecated) Keep-alive time (in milliseconds) for an idle worker thread | duration | 1.0.0 |
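For instance, a sketch exposing both the Thrift binary and REST frontends on specific host/ports, using keys from the table above (host and ports are illustrative):

```properties
kyuubi.frontend.protocols=THRIFT_BINARY,REST
kyuubi.frontend.bind.host=0.0.0.0
kyuubi.frontend.thrift.binary.bind.port=10009
kyuubi.frontend.rest.bind.port=10099
```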
Ha
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.ha.addresses | | The connection string for the discovery ensemble | string | 1.6.0 |
kyuubi.ha.client.class | org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient | Class name for service discovery client. | string | 1.6.0 |
kyuubi.ha.etcd.lease.timeout | PT10S | Timeout for etcd keep alive lease. The kyuubi server will know the unexpected loss of engine after up to this seconds. | duration | 1.6.0 |
kyuubi.ha.etcd.ssl.ca.path | <undefined> | Where the etcd CA certificate file is stored. | string | 1.6.0 |
kyuubi.ha.etcd.ssl.client.certificate.path | <undefined> | Where the etcd SSL certificate file is stored. | string | 1.6.0 |
kyuubi.ha.etcd.ssl.client.key.path | <undefined> | Where the etcd SSL key file is stored. | string | 1.6.0 |
kyuubi.ha.etcd.ssl.enabled | false | When set to true, will build an SSL secured etcd client. | boolean | 1.6.0 |
kyuubi.ha.namespace | kyuubi | The root directory for the service to deploy its instance uri | string | 1.6.0 |
kyuubi.ha.zookeeper.acl.enabled | false | (deprecated) Set to true if the ZooKeeper ensemble is kerberized | boolean | 1.0.0 |
kyuubi.ha.zookeeper.auth.digest | <undefined> | The digest auth string is used for ZooKeeper authentication, like: username:password. | string | 1.3.2 |
kyuubi.ha.zookeeper.auth.keytab | <undefined> | Location of the Kyuubi server’s keytab that is used for ZooKeeper authentication. | string | 1.3.2 |
kyuubi.ha.zookeeper.auth.principal | <undefined> | Kerberos principal name that is used for ZooKeeper authentication. | string | 1.3.2 |
kyuubi.ha.zookeeper.auth.serverPrincipal | <undefined> | Kerberos principal name of ZooKeeper Server. It only takes effect when the Zookeeper client's version is at least 3.5.7 or 3.6.0, or applies ZOOKEEPER-1467. To use the Zookeeper 3.6 client, compile Kyuubi with -Pzookeeper-3.6. | string | 1.8.0 |
kyuubi.ha.zookeeper.auth.type | NONE | The type of ZooKeeper authentication; all candidates are NONE, KERBEROS and DIGEST. | string | 1.3.2 |
kyuubi.ha.zookeeper.connection.base.retry.wait | 1000 | Initial amount of time to wait between retries to the ZooKeeper ensemble | int | 1.0.0 |
kyuubi.ha.zookeeper.connection.max.retries | 3 | Max retry times for connecting to the ZooKeeper ensemble | int | 1.0.0 |
kyuubi.ha.zookeeper.connection.max.retry.wait | 30000 | Max amount of time to wait between retries for BOUNDED_EXPONENTIAL_BACKOFF policy can reach, or max time until elapsed for UNTIL_ELAPSED policy to connect the zookeeper ensemble | int | 1.0.0 |
kyuubi.ha.zookeeper.connection.retry.policy | EXPONENTIAL_BACKOFF | The retry policy for connecting to the ZooKeeper ensemble; all candidates are: ONE_TIME, N_TIME, EXPONENTIAL_BACKOFF, BOUNDED_EXPONENTIAL_BACKOFF, UNTIL_ELAPSED. | string | 1.0.0 |
kyuubi.ha.zookeeper.connection.timeout | 15000 | The timeout(ms) of creating the connection to the ZooKeeper ensemble | int | 1.0.0 |
kyuubi.ha.zookeeper.engine.auth.type | NONE | The type of ZooKeeper authentication for the engine; all candidates are NONE, KERBEROS and DIGEST. | string | 1.3.2 |
kyuubi.ha.zookeeper.namespace | kyuubi | (deprecated) The root directory for the service to deploy its instance uri | string | 1.0.0 |
kyuubi.ha.zookeeper.node.creation.timeout | PT2M | Timeout for creating ZooKeeper node | duration | 1.2.0 |
kyuubi.ha.zookeeper.publish.configs | false | When set to true, publish Kerberos configs to Zookeeper. Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch. | boolean | 1.4.0 |
kyuubi.ha.zookeeper.quorum | | (deprecated) The connection string for the ZooKeeper ensemble | string | 1.0.0 |
kyuubi.ha.zookeeper.session.timeout | 60000 | The timeout(ms) of a connected session to be idled | int | 1.0.0 |
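A minimal sketch of enabling ZooKeeper-based service discovery with the keys above (the ZooKeeper addresses are placeholders):

```properties
kyuubi.ha.addresses=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
kyuubi.ha.namespace=kyuubi
```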
Kinit
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.kinit.interval | PT1H | How often will the Kyuubi server run kinit -kt [keytab] [principal] to renew the local Kerberos credentials cache | duration | 1.0.0 |
kyuubi.kinit.keytab | <undefined> | Location of Kyuubi server's keytab. | string | 1.0.0 |
kyuubi.kinit.max.attempts | 10 | How many times will kinit process retry | int | 1.0.0 |
kyuubi.kinit.principal | <undefined> | Name of the Kerberos principal. | string | 1.0.0 |
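The kinit settings above keep the server's local Kerberos ticket cache fresh; a sketch with placeholder principal and keytab path:

```properties
kyuubi.kinit.principal=kyuubi/_HOST@EXAMPLE.COM
kyuubi.kinit.keytab=/etc/security/keytabs/kyuubi.service.keytab
kyuubi.kinit.interval=PT1H
```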
Kubernetes
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.kubernetes.application.state.container | spark-kubernetes-driver | The container name to retrieve the application state from. | string | 1.8.1 |
kyuubi.kubernetes.application.state.source | POD | The source to retrieve the application state from. The valid values are pod and container. If the source is container and there is a container inside the pod with the name of kyuubi.kubernetes.application.state.container, the application state will be from the matched container state. Otherwise, the application state will be from the pod state. | string | 1.8.1 |
kyuubi.kubernetes.authenticate.caCertFile | <undefined> | Path to the CA cert file for connecting to the Kubernetes API server over TLS from the kyuubi. Specify this as a path as opposed to a URI (i.e. do not provide a scheme) | string | 1.7.0 |
kyuubi.kubernetes.authenticate.clientCertFile | <undefined> | Path to the client cert file for connecting to the Kubernetes API server over TLS from the kyuubi. Specify this as a path as opposed to a URI (i.e. do not provide a scheme) | string | 1.7.0 |
kyuubi.kubernetes.authenticate.clientKeyFile | <undefined> | Path to the client key file for connecting to the Kubernetes API server over TLS from the kyuubi. Specify this as a path as opposed to a URI (i.e. do not provide a scheme) | string | 1.7.0 |
kyuubi.kubernetes.authenticate.oauthToken | <undefined> | The OAuth token to use when authenticating against the Kubernetes API server. Note that unlike the other authentication options, this must be the exact string value of the token to use for the authentication. | string | 1.7.0 |
kyuubi.kubernetes.authenticate.oauthTokenFile | <undefined> | Path to the file containing the OAuth token to use when authenticating against the Kubernetes API server. Specify this as a path as opposed to a URI (i.e. do not provide a scheme) | string | 1.7.0 |
kyuubi.kubernetes.context | <undefined> | The desired context from your kubernetes config file used to configure the K8s client for interacting with the cluster. | string | 1.6.0 |
kyuubi.kubernetes.context.allow.list | | The allowed kubernetes context list, if it is empty, there is no kubernetes context limitation. | set | 1.8.0 |
kyuubi.kubernetes.master.address | <undefined> | The internal Kubernetes master (API server) address to be used for kyuubi. | string | 1.7.0 |
kyuubi.kubernetes.namespace | default | The namespace that will be used for running the kyuubi pods and find engines. | string | 1.7.0 |
kyuubi.kubernetes.namespace.allow.list | | The allowed kubernetes namespace list, if it is empty, there is no kubernetes namespace limitation. | set | 1.8.0 |
kyuubi.kubernetes.spark.cleanupTerminatedDriverPod.checkInterval | PT1M | Kyuubi server uses a Guava cache as the cleanup trigger with time-based eviction, but the eviction would not happen until a get/put operation happens. This option schedules a daemon thread to evict the cache periodically. | duration | 1.8.1 |
kyuubi.kubernetes.spark.cleanupTerminatedDriverPod.kind | NONE | Kyuubi server will delete the spark driver pod after the application terminates for kyuubi.kubernetes.terminatedApplicationRetainPeriod. Available options are NONE, ALL, COMPLETED, and the default value is NONE, which means none of the pods will be deleted. | string | 1.8.1 |
kyuubi.kubernetes.spark.forciblyRewriteDriverPodName.enabled | false | Whether to forcibly rewrite the Spark driver pod name with a 'kyuubi-' prefixed name. | boolean | 1.8.1 |
kyuubi.kubernetes.spark.forciblyRewriteExecutorPodNamePrefix.enabled | false | Whether to forcibly rewrite the Spark executor pod name prefix with a 'kyuubi-' prefixed name. | boolean | 1.8.1 |
kyuubi.kubernetes.terminatedApplicationRetainPeriod | PT5M | The period for which the Kyuubi server retains application information after the application terminates. | duration | 1.7.1 |
kyuubi.kubernetes.trust.certificates | false | If set to true then client can submit to kubernetes cluster only with token | boolean | 1.7.0 |
Lineage
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.lineage.parser.plugin.provider | org.apache.kyuubi.plugin.lineage.LineageParserProvider | The provider for the Spark lineage parser plugin. | string | 1.8.0 |
Metadata
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.metadata.cleaner.enabled | true | Whether to clean the metadata periodically. If it is enabled, Kyuubi will clean the metadata that is in the terminate state with max age limitation. | boolean | 1.6.0 |
kyuubi.metadata.cleaner.interval | PT30M | The interval to check and clean expired metadata. | duration | 1.6.0 |
kyuubi.metadata.max.age | PT72H | The maximum age of metadata, the metadata exceeding the age will be cleaned. | duration | 1.6.0 |
kyuubi.metadata.recovery.threads | 10 | The number of threads for recovery from the metadata store when the Kyuubi server restarts. | int | 1.6.0 |
kyuubi.metadata.request.async.retry.enabled | true | Whether to retry in async when metadata request failed. When true, return success response immediately even if the metadata request failed, and schedule it in background until success, to tolerate long-time metadata store outages w/o blocking the submission request. | boolean | 1.7.0 |
kyuubi.metadata.request.async.retry.queue.size | 65536 | The maximum queue size for buffering metadata requests in memory when the external metadata storage is down. Requests will be dropped if the queue exceeds. Only takes effect when kyuubi.metadata.request.async.retry.enabled is true. | int | 1.6.0 |
kyuubi.metadata.request.async.retry.threads | 10 | Number of threads in the metadata request async retry manager thread pool. Only takes effect when kyuubi.metadata.request.async.retry.enabled is true. | int | 1.6.0 |
kyuubi.metadata.request.retry.interval | PT5S | The interval to check and trigger the metadata request retry tasks. | duration | 1.6.0 |
kyuubi.metadata.store.class | org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore | Fully qualified class name for server metadata store. | string | 1.6.0 |
kyuubi.metadata.store.jdbc.database.schema.init | true | Whether to init the JDBC metadata store database schema. | boolean | 1.6.0 |
kyuubi.metadata.store.jdbc.database.type | SQLITE | The database type for server jdbc metadata store. | string | 1.6.0 |
kyuubi.metadata.store.jdbc.driver | <undefined> | JDBC driver class name for server jdbc metadata store. | string | 1.6.0 |
kyuubi.metadata.store.jdbc.password | | The password for server JDBC metadata store. | string | 1.6.0 |
kyuubi.metadata.store.jdbc.priority.enabled | false | Whether to enable the priority scheduling for batch impl v2. When false, ignore kyuubi.batch.priority and use the FIFO ordering strategy for batch job scheduling. Note: this feature may cause significant performance issues when using MySQL 5.7 as the metastore backend due to the lack of support for mixed order index. See more details at KYUUBI #5329. | boolean | 1.8.0 |
kyuubi.metadata.store.jdbc.url | jdbc:sqlite:<KYUUBI_HOME>/kyuubi_state_store.db | The JDBC url for server JDBC metadata store. By default, it is a SQLite database url, and the state information is not shared across Kyuubi instances. To enable high availability for multiple kyuubi instances, please specify a production JDBC url. Note: this value supports the variable substitution: <KYUUBI_HOME>. | string | 1.6.0 |
kyuubi.metadata.store.jdbc.user | | The username for server JDBC metadata store. | string | 1.6.0 |
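For example, a hedged sketch of pointing the server metadata store at an external MySQL database instead of the default SQLite file (driver class, URL, and credentials are placeholders):

```properties
kyuubi.metadata.store.jdbc.database.type=MYSQL
kyuubi.metadata.store.jdbc.driver=com.mysql.cj.jdbc.Driver
kyuubi.metadata.store.jdbc.url=jdbc:mysql://db-host:3306/kyuubi_metadata
kyuubi.metadata.store.jdbc.user=kyuubi
kyuubi.metadata.store.jdbc.password=kyuubi_password
```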
Metrics
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.metrics.console.interval | PT5S | How often should report metrics to console | duration | 1.2.0 |
kyuubi.metrics.enabled | true | Set to true to enable kyuubi metrics system | boolean | 1.2.0 |
kyuubi.metrics.json.interval | PT5S | How often should report metrics to JSON file | duration | 1.2.0 |
kyuubi.metrics.json.location | metrics | Where the JSON metrics file located | string | 1.2.0 |
kyuubi.metrics.prometheus.path | /metrics | URI context path of prometheus metrics HTTP server | string | 1.2.0 |
kyuubi.metrics.prometheus.port | 10019 | Prometheus metrics HTTP server port | int | 1.2.0 |
kyuubi.metrics.reporters | PROMETHEUS | A comma-separated list for all metrics reporters, e.g. CONSOLE, JSON, PROMETHEUS, SLF4J. | set | 1.2.0 |
kyuubi.metrics.slf4j.interval | PT5S | How often should report metrics to SLF4J logger | duration | 1.2.0 |
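A small sketch of enabling the Prometheus reporter with the keys above (port and path shown are the defaults from the table):

```properties
kyuubi.metrics.enabled=true
kyuubi.metrics.reporters=PROMETHEUS
kyuubi.metrics.prometheus.port=10019
kyuubi.metrics.prometheus.path=/metrics
```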
Operation
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.operation.getTables.ignoreTableProperties | false | Speed up the GetTables operation by ignoring tableTypes query criteria, and returning table identities only. | boolean | 1.8.0 |
kyuubi.operation.idle.timeout | PT3H | Operation will be closed when it’s not accessed for this duration of time | duration | 1.0.0 |
kyuubi.operation.interrupt.on.cancel | true | When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished. | boolean | 1.2.0 |
kyuubi.operation.language | SQL | Choose a programming language for the following inputs: SQL, SCALA or PYTHON. | string | 1.5.0 |
kyuubi.operation.log.dir.root | server_operation_logs | Root directory for query operation log at server-side. | string | 1.4.0 |
kyuubi.operation.plan.only.excludes | SetCatalogAndNamespace,UseStatement,SetNamespaceCommand,SetCommand,ResetCommand | Comma-separated list of query plan names, in the form of simple class names, i.e., for SET abc=xyz, the value will be SetCommand. For those auxiliary plans, such as switch databases, set properties, or create temporary view, etc., which are used to set up the evaluating environment for analyzing actual queries, we can use this config to exclude them and let them take effect. See also kyuubi.operation.plan.only.mode. | set | 1.5.0 |
kyuubi.operation.plan.only.mode | none | Configures the statement performed mode. The value can be 'parse', 'analyze', 'optimize', 'optimize_with_stats', 'physical', 'execution', 'lineage' or 'none'. When it is 'none', the statement will be fully executed; otherwise, the statement is only processed up to the specified phase without executing the query. Different engines currently support different modes: the Spark engine supports all modes, the Flink engine supports 'parse', 'physical' and 'execution', and other engines do not support planOnly currently. | string | 1.4.0 |
kyuubi.operation.plan.only.output.style | plain | Configures the planOnly output style. The value can be 'plain' or 'json', and the default value is 'plain'. This configuration supports only the output styles of the Spark engine | string | 1.7.0 |
kyuubi.operation.progress.enabled | false | Whether to enable the operation progress. When true, the operation progress will be returned in GetOperationStatus. | boolean | 1.6.0 |
kyuubi.operation.query.timeout | <undefined> | Timeout for query executions at server-side, taking effect together with the client-side timeout (java.sql.Statement.setQueryTimeout); a running query will be cancelled automatically if it times out. It is off by default, which means only the client-side takes full control of whether the query should time out or not. If set, the client-side timeout is capped at this value. To cancel the queries right away without waiting for the task to finish, consider enabling kyuubi.operation.interrupt.on.cancel together. | duration | 1.2.0 |
kyuubi.operation.result.arrow.timestampAsString | false | When true, arrow-based rowsets will convert columns of type timestamp to strings for transmission. | boolean | 1.7.0 |
kyuubi.operation.result.format | thrift | Specify the result format; available configs are: thrift, arrow. | string | 1.7.0 |
kyuubi.operation.result.max.rows | 0 | Max rows of Spark query results. Rows exceeding the limit would be ignored. Set this value to 0 to disable the max rows limit. | int | 1.6.0 |
kyuubi.operation.result.saveToFile.dir | /tmp/kyuubi/tmp_kyuubi_result | The Spark query result save dir; it should be publicly accessible to every engine. Results are saved in ORC format, and the directory structure is /OPERATION_RESULT_SAVE_TO_FILE_DIR/engineId/sessionId/statementId. Each query result will be deleted when the query finishes. | string | 1.9.0 |
kyuubi.operation.result.saveToFile.enabled | false | The switch for Spark query result save to file. | boolean | 1.9.0 |
kyuubi.operation.result.saveToFile.minRows | 10000 | The minimum row count for saving Spark results to file; the default value is 10000. | long | 1.9.1 |
kyuubi.operation.result.saveToFile.minSize | 209715200 | The minimum size for saving Spark results to file; the default value is 200 MB. We use Spark's EstimationUtils#getSizePerRow to estimate the output size of the execution plan. | long | 1.9.0 |
kyuubi.operation.scheduler.pool | <undefined> | The scheduler pool of job. Note that, this config should be used after changing Spark config spark.scheduler.mode=FAIR. | string | 1.1.1 |
kyuubi.operation.spark.listener.enabled | true | When set to true, Spark engine registers an SQLOperationListener before executing the statement, logging a few summary statistics when each stage completes. | boolean | 1.6.0 |
kyuubi.operation.status.polling.timeout | PT5S | Timeout(ms) for long polling asynchronous running sql query’s status | duration | 1.0.0 |
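As a hedged sketch of the operation controls above, a server could cap query runtime, interrupt running tasks on cancel, and bound result sizes like this (values are illustrative):

```properties
# Cancel any query that runs longer than 10 minutes and interrupt its running tasks
kyuubi.operation.query.timeout=PT10M
kyuubi.operation.interrupt.on.cancel=true
# Cap the number of result rows returned to clients
kyuubi.operation.result.max.rows=100000
```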
Server
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.server.administrators | | Comma-separated list of Kyuubi service administrators. We use this config to grant admin permission to any service accounts when security mechanism is enabled. Note, when kyuubi.authentication is configured to NOSASL or NONE, everyone is treated as administrator. | set | 1.8.0 |
kyuubi.server.info.provider | ENGINE | The server information provider name, some clients may rely on this information to check the server compatibilities and functionalities. | string | 1.6.1 |
kyuubi.server.limit.batch.connections.per.ipaddress | <undefined> | Maximum kyuubi server batch connections per ipaddress. Any user exceeding this limit will not be allowed to connect. | int | 1.7.0 |
kyuubi.server.limit.batch.connections.per.user | <undefined> | Maximum kyuubi server batch connections per user. Any user exceeding this limit will not be allowed to connect. | int | 1.7.0 |
kyuubi.server.limit.batch.connections.per.user.ipaddress | <undefined> | Maximum kyuubi server batch connections per user:ipaddress combination. Any user-ipaddress exceeding this limit will not be allowed to connect. | int | 1.7.0 |
kyuubi.server.limit.client.fetch.max.rows | <undefined> | Max rows limit for getting result row set operation. If the max rows specified by client-side is larger than the limit, request will fail directly. | int | 1.8.0 |
kyuubi.server.limit.connections.ip.deny.list | | The client ip in the deny list will be denied to connect to kyuubi server. | set | 1.9.1 |
kyuubi.server.limit.connections.per.ipaddress | <undefined> | Maximum kyuubi server connections per ipaddress. Any user exceeding this limit will not be allowed to connect. | int | 1.6.0 |
kyuubi.server.limit.connections.per.user | <undefined> | Maximum kyuubi server connections per user. Any user exceeding this limit will not be allowed to connect. | int | 1.6.0 |
kyuubi.server.limit.connections.per.user.ipaddress | <undefined> | Maximum kyuubi server connections per user:ipaddress combination. Any user-ipaddress exceeding this limit will not be allowed to connect. | int | 1.6.0 |
kyuubi.server.limit.connections.user.deny.list | | The user in the deny list will be denied to connect to kyuubi server, if the user has configured both user.unlimited.list and user.deny.list, the priority of the latter is higher. | set | 1.8.0 |
kyuubi.server.limit.connections.user.unlimited.list | | The maximum connections of the user in the white list will not be limited. | set | 1.7.0 |
kyuubi.server.name | <undefined> | The name of Kyuubi Server. | string | 1.5.0 |
kyuubi.server.periodicGC.interval | PT30M | How often to trigger a garbage collection. | duration | 1.7.0 |
kyuubi.server.redaction.regex | <undefined> | Regex to decide which Kyuubi configuration properties contain sensitive information. When this regex matches a property key or value, the value is redacted from the various logs. | | 1.6.0 |
kyuubi.server.thrift.resultset.default.fetch.size | 1000 | The number of rows sent in one Fetch RPC call by the server to the client, if not specified by the client. Respects the hive.server2.thrift.resultset.default.fetch.size hive conf. | int | 1.9.1 |
Session
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.session.check.interval | PT5M | The check interval for session timeout. | duration | 1.0.0 |
kyuubi.session.close.on.disconnect | true | Session will be closed when client disconnects from kyuubi gateway. Set this to false to have session outlive its parent connection. | boolean | 1.8.0 |
kyuubi.session.conf.advisor | <undefined> | A config advisor plugin for Kyuubi Server. This plugin can provide a list of custom configs for different users or session configs and overwrite the session configs before opening a new session. This config value should be a subclass of org.apache.kyuubi.plugin.SessionConfAdvisor which has a zero-arg constructor. | seq | 1.5.0 |
kyuubi.session.conf.file.reload.interval | PT10M | When FileSessionConfAdvisor is used, this configuration defines the expiration time of $KYUUBI_CONF_DIR/kyuubi-session-<profile>.conf in the cache. After exceeding this value, the file will be reloaded. | duration | 1.7.0 |
kyuubi.session.conf.ignore.list | | A comma-separated list of ignored keys. If the client connection contains any of them, the key and the corresponding value will be removed silently during engine bootstrap and connection setup. Note that this rule is server-side protection defined by administrators to prevent essential configs from being tampered with, but it will not forbid users from setting dynamic configurations via the SET syntax. | set | 1.2.0 |
kyuubi.session.conf.profile | <undefined> | Specify a profile to load session-level configurations from $KYUUBI_CONF_DIR/kyuubi-session-<profile>.conf. This configuration will be ignored if the file does not exist. This configuration only takes effect when kyuubi.session.conf.advisor is set as org.apache.kyuubi.session.FileSessionConfAdvisor. | string | 1.7.0 |
kyuubi.session.conf.restrict.list | | A comma-separated list of restricted keys. If the client connection contains any of them, the connection will be rejected explicitly during engine bootstrap and connection setup. Note that this rule is server-side protection defined by administrators to prevent essential configs from being tampered with, but it will not forbid users from setting dynamic configurations via the SET syntax. | set | 1.2.0 |
kyuubi.session.engine.alive.max.failures | 3 | The maximum number of failures allowed for the engine. | int | 1.8.1 |
kyuubi.session.engine.alive.probe.enabled | false | Whether to enable the engine alive probe. If true, a companion thrift client will be created that keeps sending simple requests to check whether the engine is alive. | boolean | 1.6.0 |
kyuubi.session.engine.alive.probe.interval | PT10S | The interval for engine alive probe. | duration | 1.6.0 |
kyuubi.session.engine.alive.timeout | PT2M | The timeout for engine alive. If there is no alive probe success in the last timeout window, the engine will be marked as no-alive. | duration | 1.6.0 |
kyuubi.session.engine.check.interval | PT1M | The check interval for engine timeout | duration | 1.0.0 |
kyuubi.session.engine.flink.fetch.timeout | <undefined> | Result fetch timeout for the Flink engine. If the timeout is reached, the result fetch would be stopped and the currently fetched results would be returned. If no data has been fetched, a TimeoutException would be thrown. | duration | 1.8.0 |
kyuubi.session.engine.flink.initialize.sql | | The initialization SQL for the Flink session. It falls back to kyuubi.engine.session.initialize.sql | seq | 1.8.1 |
kyuubi.session.engine.flink.main.resource | <undefined> | The package used to create Flink SQL engine remote job. If it is undefined, Kyuubi will use the default | string | 1.4.0 |
kyuubi.session.engine.flink.max.rows | 1000000 | Max rows of Flink query results. For batch queries, rows exceeding the limit would be ignored. For streaming queries, the query would be canceled if the limit is reached. | int | 1.5.0 |
kyuubi.session.engine.hive.main.resource | <undefined> | The package used to create Hive engine remote job. If it is undefined, Kyuubi will use the default | string | 1.6.0 |
kyuubi.session.engine.idle.timeout | PT30M | engine timeout, the engine will self-terminate when it’s not accessed for this duration. 0 or negative means not to self-terminate. | duration | 1.0.0 |
kyuubi.session.engine.initialize.timeout | PT3M | Timeout for starting the background engine, e.g. SparkSQLEngine. | duration | 1.0.0 |
kyuubi.session.engine.launch.async | true | When opening kyuubi session, whether to launch the backend engine asynchronously. When true, the Kyuubi server will set up the connection with the client without delay as the backend engine will be created asynchronously. | boolean | 1.4.0 |
kyuubi.session.engine.log.timeout | PT24H | If we use Spark as the engine, the session submit log is the console output of spark-submit. We will retain the session submit log until its age exceeds this config value. | duration | 1.1.0 |
kyuubi.session.engine.login.timeout | PT15S | The timeout of creating the connection to remote sql query engine | duration | 1.0.0 |
kyuubi.session.engine.open.max.attempts | 9 | The number of times an open engine will retry when encountering a special error. | int | 1.7.0 |
kyuubi.session.engine.open.onFailure | RETRY | The behavior when opening the engine fails. | string | 1.8.1 |
kyuubi.session.engine.open.retry.wait | PT10S | How long to wait before retrying to open the engine after failure. | duration | 1.7.0 |
kyuubi.session.engine.share.level | USER | (deprecated) Use kyuubi.engine.share.level instead | string | 1.0.0 |
kyuubi.session.engine.spark.initialize.sql | | The initialization SQL for the Spark session. It falls back to kyuubi.engine.session.initialize.sql | seq | 1.8.1 |
kyuubi.session.engine.spark.main.resource | <undefined> | The package used to create Spark SQL engine remote application. If it is undefined, Kyuubi will use the default | string | 1.0.0 |
kyuubi.session.engine.spark.max.initial.wait | PT1M | Max wait time for the initial connection to the Spark engine. The engine will self-terminate if no new incoming connection is established within this time. This setting only applies at the CONNECTION share level. 0 or negative means not to self-terminate. | duration | 1.8.0 |
kyuubi.session.engine.spark.max.lifetime | PT0S | Max lifetime for Spark engine, the engine will self-terminate when it reaches the end of life. 0 or negative means not to self-terminate. | duration | 1.6.0 |
kyuubi.session.engine.spark.max.lifetime.gracefulPeriod | PT0S | Graceful period for the Spark engine to wait for connections to disconnect after reaching the end of life. After the graceful period, all connections without running operations will be forcibly disconnected. 0 or negative means always waiting for connections to disconnect. | duration | 1.8.1 |
kyuubi.session.engine.spark.progress.timeFormat | yyyy-MM-dd HH:mm:ss.SSS | The time format of the progress bar | string | 1.6.0 |
kyuubi.session.engine.spark.progress.update.interval | PT1S | Update period of progress bar. | duration | 1.6.0 |
kyuubi.session.engine.spark.showProgress | false | When true, show the progress bar in the Spark’s engine log. | boolean | 1.6.0 |
kyuubi.session.engine.startup.destroy.timeout | PT5S | Engine startup process destroy wait time; if the process does not stop after this time, force destroy instead. This configuration only takes effect when kyuubi.session.engine.startup.waitCompletion=false. | duration | 1.8.0 |
kyuubi.session.engine.startup.error.max.size | 8192 | During engine bootstrapping, if an error occurs, use this config to limit the length of the error message (in characters). | int | 1.1.0 |
kyuubi.session.engine.startup.maxLogLines | 10 | The maximum number of engine log lines when errors occur during the engine startup phase. Note that this config takes effect on the client side to help track engine startup issues. | int | 1.4.0 |
kyuubi.session.engine.startup.waitCompletion | true | Whether to wait for completion after the engine starts. If false, the startup process will be destroyed after the engine is started. Note that this should only be used when the driver is not running locally, such as in yarn-cluster mode; otherwise, the engine will be killed. | boolean | 1.5.0 |
kyuubi.session.engine.trino.connection.catalog | <undefined> | The default catalog that Trino engine will connect to | string | 1.5.0 |
kyuubi.session.engine.trino.connection.url | <undefined> | The server url that Trino engine will connect to | string | 1.5.0 |
kyuubi.session.engine.trino.main.resource | <undefined> | The package used to create Trino engine remote job. If it is undefined, Kyuubi will use the default | string | 1.5.0 |
kyuubi.session.engine.trino.showProgress | true | When true, show the progress bar and final info in the Trino engine log. | boolean | 1.6.0 |
kyuubi.session.engine.trino.showProgress.debug | false | When true, show the progress debug info in the Trino engine log. | boolean | 1.6.0 |
kyuubi.session.group.provider | hadoop | A group provider plugin for Kyuubi Server. This plugin can provide primary group and groups information for different users or session configs. This config value should be a subclass of org.apache.kyuubi.plugin.GroupProvider which has a zero-arg constructor. Kyuubi ships with a built-in hadoop implementation, which is the default. | string | 1.7.0 |
kyuubi.session.idle.timeout | PT6H | session idle timeout, it will be closed when it’s not accessed for this duration | duration | 1.2.0 |
kyuubi.session.local.dir.allow.list | | The list of local directories that the kyuubi session application is allowed to access. End-users might set parameters such as spark.files, which upload local files when launching the kyuubi engine; if the local dir allow list is defined, kyuubi will check whether the path to upload is in the allow list. Note that if it is empty, there is no limitation. Please use absolute paths. | set | 1.6.0 |
kyuubi.session.name | <undefined> | A human-readable name of the session; an empty string is used by default. This name will be recorded in the event. Note that this value is only applied from the session conf. | string | 1.4.0 |
kyuubi.session.proxy.user | <undefined> | An alternative to hive.server2.proxy.user. The current behavior is consistent with hive.server2.proxy.user and now only takes effect in the RESTful API. When both parameters are set, kyuubi.session.proxy.user takes precedence. | string | 1.9.0 |
kyuubi.session.timeout | PT6H | (deprecated) Session timeout, the session will be closed when it's not accessed for this duration | duration | 1.0.0 |
kyuubi.session.user.sign.enabled | false | Whether to verify the integrity of session user name on the engine side, e.g. Authz plugin in Spark. | boolean | 1.7.0 |
Spnego
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.spnego.keytab | <undefined> | Keytab file for SPNego principal | string | 1.6.0 |
kyuubi.spnego.principal | <undefined> | SPNego service principal, typical value would look like HTTP/_HOST@EXAMPLE.COM. SPNego service principal would be used when restful Kerberos security is enabled. This needs to be set only if SPNEGO is to be used in authentication. | string | 1.6.0 |
Yarn
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.yarn.user.admin | yarn | When kyuubi.yarn.user.strategy is set to ADMIN, use this admin user to construct YARN client for application management, e.g. kill application. | string | 1.8.0 |
kyuubi.yarn.user.strategy | NONE | Determine which user to use to construct the YARN client for application management, e.g. killing an application. | string | 1.8.0 |
Zookeeper
Key | Default | Meaning | Type | Since |
---|---|---|---|---|
kyuubi.zookeeper.embedded.client.port | 2181 | clientPort for the embedded ZooKeeper server to listen for client connections, a client here could be Kyuubi server, engine, and JDBC client | int | 1.2.0 |
kyuubi.zookeeper.embedded.client.port.address | <undefined> | clientPortAddress for the embedded ZooKeeper server to listen on for client connections | string | 1.2.0 |
kyuubi.zookeeper.embedded.client.use.hostname | false | When true, the embedded ZooKeeper server prefers to bind to the hostname; otherwise, to the IP address. | boolean | 1.7.2 |
kyuubi.zookeeper.embedded.data.dir | embedded_zookeeper | dataDir for the embedded ZooKeeper server, where it stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database. If it is a relative path, it is resolved relative to KYUUBI_HOME. | string | 1.2.0 |
kyuubi.zookeeper.embedded.data.log.dir | embedded_zookeeper | dataLogDir for the embedded ZooKeeper server, where it writes the transaction log. If it is a relative path, it is resolved relative to KYUUBI_HOME. | string | 1.2.0 |
kyuubi.zookeeper.embedded.directory | embedded_zookeeper | (deprecated) The temporary directory for the embedded ZooKeeper server. If it is a relative path, it is resolved relative to KYUUBI_HOME. | string | 1.0.0 |
kyuubi.zookeeper.embedded.max.client.connections | 120 | maxClientCnxns for the embedded ZooKeeper server to limit the number of concurrent connections of a single client identified by IP address | int | 1.2.0 |
kyuubi.zookeeper.embedded.max.session.timeout | 60000 | maxSessionTimeout in milliseconds that the embedded ZooKeeper server will allow the client to negotiate. Defaults to 20 times the tickTime | int | 1.2.0 |
kyuubi.zookeeper.embedded.min.session.timeout | 6000 | minSessionTimeout in milliseconds that the embedded ZooKeeper server will allow the client to negotiate. Defaults to 2 times the tickTime | int | 1.2.0 |
kyuubi.zookeeper.embedded.port | 2181 | (deprecated) The port of the embedded ZooKeeper server | int | 1.0.0 |
kyuubi.zookeeper.embedded.tick.time | 3000 | tickTime in milliseconds for the embedded ZooKeeper server | int | 1.2.0 |
Spark Configurations
Via spark-defaults.conf
Setting them in $SPARK_HOME/conf/spark-defaults.conf supplies default values for the SQL engine application. Available properties can be found in the Spark official online documentation for Spark Configurations
Via kyuubi-defaults.conf
Setting them in $KYUUBI_HOME/conf/kyuubi-defaults.conf supplies default values for the SQL engine application too. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf.
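For example (an illustrative snippet; the keys and values below are placeholders, not recommendations):
spark.executor.memory 4g
spark.sql.shuffle.partitions 200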
Via JDBC Connection URL
Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example:
jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g
- Runtime SQL Configuration
  - For Runtime SQL Configurations, they will take effect every time
- Static SQL and Spark Core Configuration
  - For Static SQL Configurations and other Spark core configs, e.g. spark.executor.memory, they will take effect if there is no existing SQL engine application. Otherwise, they will just be ignored
Via SET Syntax
Please refer to the Spark official online documentation for SET Command
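As a quick illustration (spark.sql.shuffle.partitions is just one example of a Runtime SQL Configuration), a statement like the following changes the value for the current session only:
SET spark.sql.shuffle.partitions=4;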
Flink Configurations
Via flink-conf.yaml
Setting them in $FLINK_HOME/conf/flink-conf.yaml supplies default values for the SQL engine application. Available properties can be found in the Flink official online documentation for Flink Configurations
Via kyuubi-defaults.conf
Setting them in $KYUUBI_HOME/conf/kyuubi-defaults.conf supplies default values for the SQL engine application too. You can use properties with the additional prefix flink. to override settings in $FLINK_HOME/conf/flink-conf.yaml.
For example:
flink.parallelism.default 2
flink.taskmanager.memory.process.size 5g
The above options in kyuubi-defaults.conf will set parallelism.default: 2 and taskmanager.memory.process.size: 5g in the Flink configuration.
Via JDBC Connection URL
Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example:
jdbc:hive2://localhost:10009/default;#flink.parallelism.default=2;flink.taskmanager.memory.process.size=5g
Via SET Statements
Please refer to the Flink official online documentation for SET Statements
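As a quick illustration (parallelism.default is just an example key; the quoting follows the Flink SQL SET syntax), a statement like the following changes the value for the current session:
SET 'parallelism.default' = '2';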
Trino Configurations
Via config.properties
Setting them in $TRINO_HOME/etc/config.properties supplies default values for the SQL engine application. Available properties can be found in the Trino official online documentation for Trino Configurations
Via kyuubi-defaults.conf
Setting them in $KYUUBI_HOME/conf/kyuubi-defaults.conf supplies default values for the SQL engine application too. You can use properties with the additional prefix trino. to override settings in $TRINO_HOME/etc/config.properties.
For example:
trino.query_max_stage_count 500
trino.parse_decimal_literals_as_double true
The above options in kyuubi-defaults.conf will set query_max_stage_count: 500 and parse_decimal_literals_as_double: true as Trino session properties.
Via JDBC Connection URL
Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example:
jdbc:hive2://localhost:10009/default;#trino.query_max_stage_count=500;trino.parse_decimal_literals_as_double=true
Via SET Statements
Please refer to the Trino official online documentation for SET Statements
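As a quick illustration (query_max_stage_count is just an example session property), a statement like the following changes the value for the current session:
SET SESSION query_max_stage_count = 500;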
Logging
Kyuubi uses log4j for logging. You can configure it using $KYUUBI_HOME/conf/log4j2.xml
, see $KYUUBI_HOME/conf/log4j2.xml.template
as an example.
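As a minimal sketch (assuming console-only logging at INFO level is enough; the bundled template is more complete and should normally be the starting point), a log4j2.xml could look like:
<Configuration status="INFO">
  <Appenders>
    <Console name="stdout" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %p %c: %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="stdout"/>
    </Root>
  </Loggers>
</Configuration>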
Other Configurations
Hadoop Configurations
Specify HADOOP_CONF_DIR as the directory containing the Hadoop configuration files, or treat them as Spark properties with the spark.hadoop. prefix. Please refer to the Spark official online documentation for Inheriting Hadoop Cluster Configuration. Also, please refer to the Apache Hadoop's online documentation for an overview of how to configure Hadoop.
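For example (the NameNode address below is a hypothetical placeholder), a Hadoop setting can be passed through as a Spark property in the defaults files described above:
spark.hadoop.fs.defaultFS hdfs://namenode:8020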
Hive Configurations
These configurations are used by the SQL engine application to talk to the Hive MetaStore and can be configured in a hive-site.xml. Place it in the $SPARK_HOME/conf directory, or treat them as Spark properties with the spark.hadoop. prefix.
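For example (the metastore address below is a hypothetical placeholder), the metastore location can be supplied as a prefixed Spark property instead of editing hive-site.xml:
spark.hadoop.hive.metastore.uris thrift://metastore-host:9083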
User Defaults
In Kyuubi, we can configure user default settings to meet separate needs. These user defaults override system defaults, but will be overridden by those from the JDBC Connection URL or SET command where applicable. They take effect ONLY when creating the SQL engine application.
User default settings are in the form of ___{username}___.{config key}. There are three continuous underscores (_) on each side of the username, and a dot (.) separates the config key from the prefix. For example:
# For system defaults
spark.master=local
spark.sql.adaptive.enabled=true
# For a user named kent
___kent___.spark.master=yarn
___kent___.spark.sql.adaptive.enabled=false
# For a user named bob
___bob___.spark.master=spark://master:7077
___bob___.spark.executor.memory=8g
In the above case, if there are no related configurations from the JDBC Connection URL, kent will run his SQL engine application on YARN with Spark AQE turned off, while bob will activate his SQL engine application on a Spark standalone cluster with 8g heap memory for each executor and follow the Kyuubi system default for Spark AQE. Users who do not have custom configurations will simply use the system defaults.
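For example (an illustrative URL), if kent connects with the following JDBC Connection URL, the URL setting takes precedence over his user default, so Spark AQE is enabled for that connection:
jdbc:hive2://localhost:10009/default;#spark.sql.adaptive.enabled=true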