【Question Title】: Unable to load Spark SQL datasource for HBase
【Posted】: 2016-03-30 11:45:29
【Question】:

I want to fetch data from an HBase table using Spark SQL, but creating the DataFrame fails with a ClassNotFoundException. Here is the exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/types/NativeType
    at org.apache.hadoop.hbase.spark.DefaultSource$$anonfun$generateSchemaMappingMap$1.apply(DefaultSource.scala:127)
    at org.apache.hadoop.hbase.spark.DefaultSource$$anonfun$generateSchemaMappingMap$1.apply(DefaultSource.scala:116)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    at org.apache.hadoop.hbase.spark.DefaultSource.generateSchemaMappingMap(DefaultSource.scala:116)
    at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:97)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at com.apache.spark.gettingStarted.SparkSQLOnHBaseTable.createTableAndPutData(SparkSQLOnHBaseTable.java:146)
    at com.apache.spark.gettingStarted.SparkSQLOnHBaseTable.main(SparkSQLOnHBaseTable.java:154)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.types.NativeType
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 14 more

Has anyone run into a problem like this? How did you resolve it?

Here is my code:

// Initialize the Spark context.
SparkConf sconf = new SparkConf().setMaster("local").setAppName("Test");
Configuration conf = HBaseConfiguration.create();
JavaSparkContext jsc = new JavaSparkContext(sconf);

// Verify that HBase is reachable before querying it.
try {
    HBaseAdmin.checkHBaseAvailable(conf);
    System.out.println("HBase is running");
} catch (ServiceException e) {
    System.out.println("HBase is not running");
    e.printStackTrace();
}

SQLContext sqlContext = new SQLContext(jsc);

// Column mapping: SQL column name, type, then HBase column (family:qualifier),
// comma-separated. (The original string was missing the comma after :key.)
String sqlMapping = "KEY_FIELD STRING :key,"
        + "sql_city STRING personal:city,"
        + "sql_name STRING personal:name,"
        + "sql_designation STRING professional:designation,"
        + "sql_salary STRING professional:salary";

HashMap<String, String> colMap = new HashMap<String, String>();
colMap.put("hbase.columns.mapping", sqlMapping);
colMap.put("hbase.table", "emp");

DataFrame df = sqlContext.read().format("org.apache.hadoop.hbase.spark").options(colMap).load();

// Register the DataFrame so SQL text queries can be issued
// directly against the sqlContext object.
df.registerTempTable("temp_emp");

DataFrame result = sqlContext.sql("SELECT count(*) from temp_emp");
System.out.println("df " + df);
System.out.println("result " + result);
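Since the failure is a missing class rather than an HBase connectivity problem, a quick way to confirm a version mismatch is to probe the classpath directly. The sketch below is a hypothetical standalone check (plain JDK, no Spark API calls) that you would run with the same classpath as the failing job; `ClassProbe` and `isOnClasspath` are names invented for this example:

```java
public class ClassProbe {
    // Returns true if the named class can be loaded from the current classpath.
    static boolean isOnClasspath(String name) {
        try {
            Class.forName(name, false, ClassProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class the stack trace complains about: it existed in Spark 1.3.x
        // but is gone from the spark-sql 1.6.x jars, so a 1.6.x classpath
        // should report it as missing.
        System.out.println("NativeType present: "
                + isOnClasspath("org.apache.spark.sql.types.NativeType"));
    }
}
```

If the probe reports the class as missing while hbase-spark still references it, the connector jar was built against an older Spark than the one on your classpath.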

Here are the pom.xml dependencies:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.1.3</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-spark</artifactId>
        <version>2.0.0-SNAPSHOT</version>
    </dependency>
</dependencies>
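One likely contributor to the mismatch is mixing spark-core 1.6.0 with spark-sql 1.6.1 while also pulling in an hbase-spark snapshot built against a different Spark. A minimal sketch of the usual fix is to pin both Spark artifacts to one version via a Maven property (the 1.6.1 value here is an assumption; use whichever Spark release your cluster actually runs, and an hbase-spark build compiled against it):

```xml
<!-- Sketch: pin spark-core and spark-sql to a single release so the
     connector sees one consistent Spark SQL API on the classpath. -->
<properties>
    <spark.version>1.6.1</spark.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>
```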

【Comments】:

Tags: hbase apache-spark-sql


    【Solution 1】:

    NativeType no longer exists (and neither does dataTypes.scala):

    Class not available in package

    It used to exist in dataTypes.scala in Spark 1.3.1.

    You can see here that NativeType was made protected:

    Commit that makes NativeType protected

    You are probably following an outdated example.

    【Discussion】:

    • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes.
    • Thanks MegaTron, I changed them to screenshots.