
Trouble Shooting

Common Issues

java.lang.UnsupportedClassVersionError .. Unsupported major.minor version 52.0

    Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/kyuubi/server/KyuubiServer : Unsupported major.minor version 52.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:442)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:64)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:354)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:348)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:347)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:312)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)

First, check whether the version of the Java runtime used to run Kyuubi matches the version of the Java compiler used to build Kyuubi.

    $ java -version
    java version "1.7.0_171"
    OpenJDK Runtime Environment (rhel-2.6.13.2.el7-x86_64 u171-b01)
    OpenJDK 64-Bit Server VM (build 24.171-b01, mixed mode)

    $ cat RELEASE
    Kyuubi 1.0.0-SNAPSHOT (git revision 39e5da5) built for
    Java 1.8.0_251
    Scala 2.12
    Spark 3.0.1
    Hadoop 2.7.4
    Hive 2.3.7
    Build flags:

To fix this problem, export JAVA_HOME pointing to a compatible JDK in conf/kyuubi-env.sh:

    echo "export JAVA_HOME=/path/to/jdk1.8.0_251" >> conf/kyuubi-env.sh

org.apache.spark.SparkException: When running with master ‘yarn’ either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment

    Exception in thread "main" org.apache.spark.SparkException: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
    at org.apache.spark.deploy.SparkSubmitArguments.error(SparkSubmitArguments.scala:630)
    at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:270)
    at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:233)
    at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:119)
    at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$3.<init>(SparkSubmit.scala:990)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:990)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:85)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

When Kyuubi runs with spark.master=yarn, HADOOP_CONF_DIR must also be exported in $KYUUBI_HOME/conf/kyuubi-env.sh.

To fix this problem, export HADOOP_CONF_DIR pointing to the directory that contains the Hadoop client configuration files in conf/kyuubi-env.sh:

    echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> conf/kyuubi-env.sh

javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)];

org.apache.hadoop.security.AccessControlException: Permission denied: user=hzyanqin, access=WRITE, inode=”/user”:hdfs:hdfs:drwxr-xr-x

    org.apache.hadoop.security.AccessControlException: Permission denied: user=hzyanqin, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:306)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1767)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1751)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1710)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3062)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1156)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3007)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2975)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:441)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:876)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:196)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:60)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:201)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:555)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2574)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:934)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:928)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:72)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:101)
    at org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The user does not have permission to create the Hadoop home directory, which is /user/hzyanqin in the case above.

To fix this problem, create the directory first and grant the proper ownership or ACL permissions to user hzyanqin, as sketched below.
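A minimal sketch, assuming the commands are run by an HDFS superuser such as hdfs; the group name below is just an example, adjust it to your environment:

    # Create the user's home directory and hand it over to the user
    hdfs dfs -mkdir -p /user/hzyanqin
    hdfs dfs -chown hzyanqin:hzyanqin /user/hzyanqin

    # Or grant an ACL instead of changing ownership
    # (requires dfs.namenode.acls.enabled=true on the NameNode)
    hdfs dfs -setfacl -m user:hzyanqin:rwx /user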

org.apache.thrift.TApplicationException: Invalid method name: ‘get_table_req’

    Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'get_table_req'
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:1567)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:1554)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1350)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getTable(SessionHiveMetaStoreClient.java:127)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
    at com.sun.proxy.$Proxy37.getTable(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2336)
    at com.sun.proxy.$Proxy37.getTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1274)
    ... 93 more

This error means that you are using an incompatible version of the Hive metastore client to connect to the Hive metastore server.

To fix this problem, use a compatible Hive metastore client by configuring spark.sql.hive.metastore.jars and spark.sql.hive.metastore.version on the Spark side, for example as sketched below.
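A minimal sketch for $SPARK_HOME/conf/spark-defaults.conf, assuming the metastore server runs Hive 2.3.7; substitute the version your server actually runs:

    spark.sql.hive.metastore.version 2.3.7
    spark.sql.hive.metastore.jars maven

With the maven setting, Spark downloads the matching Hive client jars at runtime; alternatively, spark.sql.hive.metastore.jars can point to a local classpath that already contains the proper Hive jars.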

hive.server2.thrift.max.worker.threads

    Unexpected end of file when reading from HS2 server. The root cause might be too many concurrent connections. Please ask the administrator to check the number of active connections, and adjust hive.server2.thrift.max.worker.threads if applicable.
    Error: org.apache.thrift.transport.TTransportException (state=08S01,code=0)

In Kyuubi, increase kyuubi.frontend.min.worker.threads instead of hive.server2.thrift.max.worker.threads, for example as shown below.
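A minimal sketch for $KYUUBI_HOME/conf/kyuubi-defaults.conf; the value 100 is only illustrative, size it to your expected number of concurrent connections:

    kyuubi.frontend.min.worker.threads=100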

Failed to create function using jar

    CREATE TEMPORARY FUNCTION TEST AS 'com.netease.UDFTest' USING JAR 'hdfs:///tmp/udf.jar'

    Error operating EXECUTE_STATEMENT: org.apache.spark.sql.AnalysisException: Can not load class 'com.netease.UDFTest' when registering the function 'test', please make sure it is on the classpath;
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.$anonfun$registerFunction$1(SessionCatalog.scala:1336)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.registerFunction(SessionCatalog.scala:1333)
    at org.apache.spark.sql.execution.command.CreateFunctionCommand.run(functions.scala:82)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:64)
    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:80)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

If you get this exception when creating a function, check your JDK version. You should upgrade to JDK 1.8.0_121 or later, since JDK 1.8.0_121 fixed a security issue: Additional access restrictions for URLClassLoader.newInstance.

Failed to start Spark 3.1 with error msg ‘Cannot modify the value of a Spark config’

Here is the error message:

    Caused by: org.apache.spark.sql.AnalysisException: Cannot modify the value of a Spark config: spark.yarn.queue
    at org.apache.spark.sql.RuntimeConfig.requireNonStaticConf(RuntimeConfig.scala:156)
    at org.apache.spark.sql.RuntimeConfig.set(RuntimeConfig.scala:40)
    at org.apache.kyuubi.engine.spark.session.SparkSQLSessionManager.$anonfun$openSession$2(SparkSQLSessionManager.scala:68)
    at org.apache.kyuubi.engine.spark.session.SparkSQLSessionManager.$anonfun$openSession$2$adapted(SparkSQLSessionManager.scala:56)
    at scala.collection.immutable.Map$Map4.foreach(Map.scala:236)
    at org.apache.kyuubi.engine.spark.session.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:56)
    ... 12 more

This is because Spark 3.1 checks the configs you set and throws an exception if a config is static or is used by another module (e.g. YARN or core).

You can set spark.sql.legacy.setCommandRejectsSparkCoreConfs=false in spark-defaults.conf to disable this behavior, for example:
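In $SPARK_HOME/conf/spark-defaults.conf:

    spark.sql.legacy.setCommandRejectsSparkCoreConfs false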