【Title】: Spark Scala unit tests failing
【Posted】: 2021-09-18 21:32:02
【Question】:

I am getting the error below when running my Maven tests. I have HADOOP_HOME set, the hadoop.dll file in place, and everything configured on my local machine, including PATH and the machine's environment variables. It used to work fine; I started getting this error after cloning a different repository. Can anyone help me resolve it?

  org.apache.spark.sql.AnalysisException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:108)
  at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
  at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
  at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
  at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
  ...
  Cause: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
  at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
  at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524)
  at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:465)
  at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:518)
  at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:496)
  at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:316)
  at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:694)
  at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:613)
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:547)
  at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)

【Comments】:

    Tags: scala apache-spark hadoop hive


    【Solution 1】:

    The problem is that the native library path is not set. Try setting the `java.library.path` Java system property to the folder containing hadoop.dll. If you execute the tests with Maven, you can use the argLine option to pass the property to the forked JVM that runs the tests:

    mvn -DargLine="-Djava.library.path=[hadoop_dll_dir_path]" test
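    Alternatively, the same property can be wired into the project itself so every developer and CI run picks it up without an extra flag. A minimal sketch of a maven-surefire-plugin configuration, assuming hadoop.dll lives under `%HADOOP_HOME%\bin` (adjust the path and plugin version to your setup):

    ```xml
    <!-- Sketch: pass java.library.path to the forked test JVM via Surefire.
         Assumes the HADOOP_HOME environment variable is set and that
         hadoop.dll resides in ${env.HADOOP_HOME}/bin. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>3.2.5</version>
      <configuration>
        <argLine>-Djava.library.path=${env.HADOOP_HOME}/bin</argLine>
      </configuration>
    </plugin>
    ```

    Note that `java.library.path` is read once at JVM startup, which is why it must be set on the forked test JVM via argLine rather than with `System.setProperty` inside the tests.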

    【Comments】:
