[Question Title]: SparkR Error while writing dataframe to csv and parquet
[Posted on]: 2023-03-28 01:47:01
[Question]:

I get an error when writing a Spark dataframe to CSV and Parquet. I have already tried installing winutils, but the error persists.

My code:

    INVALID_IMEI <- c("012345678901230", "000000000000000")
    setwd("D:/Revas/Jatim Old")
    fileList <- list.files()
    cdrSchema <- structType(structField("date", "string"),
                            structField("time", "string"),
                            structField("a_number", "string"),
                            structField("b_number", "string"),
                            structField("duration", "integer"),
                            structField("lac_cid", "string"),
                            structField("imei", "string"))
    file <- fileList[1]
    filePath <- paste0("D:/Revas/Jatim Old/", file)
    dataset <- read.df(filePath, source = "csv", schema = cdrSchema,
                       header = "false", delimiter = "|")
    # Drop rows with invalid, NaN, or null IMEIs
    dataset <- filter(dataset, !(dataset$imei %in% INVALID_IMEI))
    dataset <- filter(dataset, !isnan(dataset$imei))
    dataset <- filter(dataset, !isNull(dataset$imei))

To export the dataframe, I tried the following code:

    write.df(dataset, "D:/spark/dataset",mode="overwrite")
    write.parquet(dataset, "D:/spark/dataset",mode="overwrite")

I get the following error:

Error: Error in save : org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:215)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:173)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:173)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:173)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:145)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.comma

[Comments]:

    Tags: rstudio sparkr


    [Solution 1]:

    I have found the likely cause. The problem appears to be the winutils version: previously I was using 2.6, and changing it to 2.8 seems to resolve the issue.
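
    For the fix above to take effect, `HADOOP_HOME` has to point at the directory containing the matching `bin/winutils.exe` before the Spark session starts. A minimal sketch of that setup (the `C:/hadoop` path is an example; substitute your own install location):

        # Assumed install path: C:/hadoop must contain bin/winutils.exe (version matching your Spark build)
        Sys.setenv(HADOOP_HOME = "C:/hadoop")
        Sys.setenv(PATH = paste0(Sys.getenv("PATH"), ";C:/hadoop/bin"))

        library(SparkR)
        sparkR.session(master = "local[*]")

    Restart the R session after changing these variables so the old Spark JVM (launched with the stale `HADOOP_HOME`) is not reused.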

    [Discussion]:
