【Posted】: 2020-04-21 17:17:59
【Problem description】:
I tried the following:
>>./spark-shell --jars /home/my_path/my_jar.jar
In the shell, I tried to import the package:
scala> import com.vertica.spark._
<console>:23: error: object vertica is not a member of package com
import com.vertica.spark._
It didn't work, so I also tried removing the leading slash (/) from the jar path:
>>./spark-shell --jars home/my_path/my_jar.jar
Still the same, although there is a warning:
20/04/21 22:34:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://ubuntu:4040
Spark context available as 'sc' (master = local[*], app id = local-1587488711233).
Spark session available as 'spark'.
Welcome to
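One way to check, from inside the shell, whether --jars actually registered anything is SparkContext.listJars(); a small diagnostic sketch (the output shown is hypothetical):
scala> sc.listJars()
res0: Seq[String] = Vector()
An empty result here would mean the jar was never picked up; a successfully added jar would show up as a spark://.../jars/my_jar.jar entry.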
On the other hand, if I start the shell and then add a :require with the same jar path, it imports successfully:
scala> :require /home/my_path/my_jar.jar
Added '/home/my_path/my_jar.jar' to classpath.
scala> import com.vertica.spark._
import com.vertica.spark._
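(For context, once the import works the connector would be used through the DataFrame reader; a minimal sketch, where the source class name and option keys are assumptions based on the Vertica connector docs and the connection values are placeholders:)
scala> val df = spark.read
     |   .format("com.vertica.spark.datasource.DefaultSource") // assumed connector DataSource class
     |   .option("host", "vertica-host")                       // hypothetical connection details
     |   .option("db", "mydb")
     |   .option("user", "dbadmin")
     |   .option("password", "***")
     |   .option("table", "my_table")
     |   .load()
Note that :require only adds the jar to the REPL's (driver-side) classpath, while --jars is also meant to ship it to executors, so the two are not fully interchangeable when actually running queries.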
What am I missing when adding the jar with spark-shell itself?
【Discussion】:
-
Are there any warnings when the jars are loaded?
-
Yes, I've added it to the question @BlueSheepToken
-
./spark-shell --jars /home/my_path/my_jar.jar should work. What is the full command?
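-
If the import only works with :require, forcing the jar onto the driver classpath at JVM startup is worth trying (a workaround sketch using the standard spark.driver.extraClassPath option; paths as in the question): ./spark-shell --conf spark.driver.extraClassPath=/home/my_path/my_jar.jar --jars /home/my_path/my_jar.jar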
Tags: scala apache-spark read-eval-print-loop