Posted: 2020-10-15 06:10:54
Problem description:
Hive table properties:

ROW FORMAT SERDE
  'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES (
  'field.delim'='<~^~>')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
TBLPROPERTIES (
  'TRANSLATED_TO_EXTERNAL'='TRUE')
/usr/hdp/3.1.5.0-152/spark2/bin/spark-shell --jars /usr/hdp/3.1.5.0-152/hive/lib/hive-contrib-3.1.0.3.1.5.0-152.jar

spark.read.table("db.table").show
20/06/08 03:04:35 ERROR log: error in initSerDe: java.lang.ClassNotFoundException Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2501)
at org.apache.hadoop.hive.metastore.HiveMetaStoreUtils.getDeserializer(HiveMetaStoreUtils.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStoreUtils.getDeserializer(HiveMetaStoreUtils.java:77)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:302)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:284)
at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:676)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:659)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$7.apply(HiveClientImpl.scala:371)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$7.apply(HiveClientImpl.scala:368)
at scala.Option.map(Option.scala:146)
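A frequently reported cause of this ClassNotFoundException is that `--jars` alone does not always place the contrib jar on the driver classpath in use when Spark's Hive client instantiates the SerDe. A hedged sketch of an alternative invocation that also passes the jar via `spark.driver.extraClassPath` and `spark.executor.extraClassPath` (paths copied from the command above; whether this resolves the error in this HDP setup is not verified):

```shell
/usr/hdp/3.1.5.0-152/spark2/bin/spark-shell \
  --jars /usr/hdp/3.1.5.0-152/hive/lib/hive-contrib-3.1.0.3.1.5.0-152.jar \
  --conf spark.driver.extraClassPath=/usr/hdp/3.1.5.0-152/hive/lib/hive-contrib-3.1.0.3.1.5.0-152.jar \
  --conf spark.executor.extraClassPath=/usr/hdp/3.1.5.0-152/hive/lib/hive-contrib-3.1.0.3.1.5.0-152.jar
```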
Comments:
-
Could you add the code you are using?
-
Thanks for the reply, Mahesh. There is no such code. I have a Hive table with '<~^~>' as the delimiter, created with the table properties given above. I am trying to read it with the command spark.read.table("db.table").show from the Spark shell.
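For context on what the SerDe does once the class is found: MultiDelimitSerDe splits each raw text row on the literal multi-character delimiter from `field.delim`. A minimal plain-Python illustration of that parsing (the sample row is made up; this is not the SerDe's actual code):

```python
# Illustrative only: how a multi-character delimiter such as '<~^~>'
# splits one raw text row into fields. Sample data is hypothetical.
DELIM = "<~^~>"

def split_row(line: str) -> list:
    """Split a raw table row on the literal multi-character delimiter."""
    return line.rstrip("\n").split(DELIM)

row = "1001<~^~>alice<~^~>2020-06-08"
print(split_row(row))  # -> ['1001', 'alice', '2020-06-08']
```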
Tags: apache-spark hive