[Title]: Reading data from Amazon Redshift in Spark 2.4
[Posted]: 2019-09-08 05:14:36
[Question]:

In Spark 2.3 we used to read the data with the databricks spark-redshift connector, initializing spark-shell with the following snippet:

spark-shell --jars RedshiftJDBC42-1.2.10.1009.jar --packages com.databricks:spark-redshift_2.11:3.0.0-preview1,com.databricks:spark-avro_2.11:3.2.0

Then:

val url = "jdbc:redshift://cluster-link?user=username&password=password"
val queryFinal = "select count(*) as cnt from table1"
val df = spark.read.format("com.databricks.spark.redshift")
  .option("url", url)
  .option("tempdir", "s3n://temp-bucket/")
  .option("query", queryFinal)
  .option("forward_spark_s3_credentials", "true")
  .load()
  .cache

We recently upgraded to Spark 2.4, and the same code now fails with the following exception:

java.lang.AbstractMethodError: com.databricks.spark.redshift.RedshiftFileFormat.supportDataType(Lorg/apache/spark/sql/types/DataType;Z)Z
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$$anonfun$verifySchema$1.apply(DataSourceUtils.scala:48)
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$$anonfun$verifySchema$1.apply(DataSourceUtils.scala:47)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$.verifySchema(DataSourceUtils.scala:47)
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$.verifyReadSchema(DataSourceUtils.scala:39)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:400)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
  at com.databricks.spark.redshift.RedshiftRelation.buildScan(RedshiftRelation.scala:168)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:326)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:325)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProjectRaw(DataSourceStrategy.scala:403)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProject(DataSourceStrategy.scala:321)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy.apply(DataSourceStrategy.scala:289)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
  at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
  at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
  at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
  at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
  at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
  at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
  at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3360)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2545)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2759)
  at org.apache.spark.sql.Dataset.getRows(Dataset.scala:255)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:292)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:746)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:705)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:714)

I checked online forums and learned that Spark 2.4 ships a built-in Avro source, because of which, when the databricks one is used, we are unable to deserialize the data.

I tried two approaches:

  1. Setting spark.sql.legacy.replaceDatabricksSparkAvro.enabled to true, as described in https://spark.apache.org/docs/latest/sql-data-sources-avro.html. The exception stayed exactly the same (see the sketch below for how I set the flag).
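
For reference, a minimal sketch of how I set that flag, using the config key from the Avro docs linked above (it can also be passed as --conf at launch):

// At launch: spark-shell --conf spark.sql.legacy.replaceDatabricksSparkAvro.enabled=true ...
// Or at runtime in an existing session:
spark.conf.set("spark.sql.legacy.replaceDatabricksSparkAvro.enabled", "true")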

  2. Connecting through a plain JDBC URL, as described in https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html; here my connection times out (roughly what I ran is sketched below).
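
A sketch of the JDBC attempt; the driver class com.amazon.redshift.jdbc42.Driver is an assumption matching the RedshiftJDBC42 jar above, and cluster-link/username/password are placeholders:

val jdbcDF = spark.read.format("jdbc")
  .option("url", "jdbc:redshift://cluster-link")
  .option("driver", "com.amazon.redshift.jdbc42.Driver") // assumed driver class for the RedshiftJDBC42 jar
  .option("query", "select count(*) as cnt from table1") // the "query" option is available since Spark 2.4
  .option("user", "username")
  .option("password", "password")
  .load()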

Does anyone know of a solution or workaround for this issue? It would be very helpful.

[Comments]:

    Tags: apache-spark pyspark amazon-emr


    [Solution 1]:

    As stated in an issue on the databricks spark-redshift connector, the library is no longer maintained as a separate project, so it does not support Spark 2.4.x.

    If you want to keep using Redshift from Spark 2.4.x, there is an alternative: the Udemy fork. With it, you have to add the Avro dependency ("org.apache.spark" %% "spark-avro", shipped with Spark since version 2.4.0) as "provided" in your dependency file, and add the option --packages org.apache.spark:spark-avro_2.12:2.4.3 to the spark-submit command, as described in the Avro documentation.
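
    A minimal sketch of what that setup could look like; the fork's Maven coordinates and versions below are assumptions, so check the fork's README for the exact artifact names, and use the Scala suffix (_2.11 or _2.12) that matches your Spark build:

    // build.sbt: spark-avro ships with Spark 2.4+, so mark it "provided"
    libraryDependencies += "org.apache.spark" %% "spark-avro" % "2.4.3" % "provided"

    // Launch (hypothetical coordinates for the fork; replace with the real ones):
    // spark-shell --jars RedshiftJDBC42-1.2.10.1009.jar \
    //   --packages io.github.spark-redshift-community:spark-redshift_2.11:4.0.1,org.apache.spark:spark-avro_2.11:2.4.3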

    [Discussion]:
