【Question Title】: Using a CSV file in Scala
【Posted】: 2019-03-08 00:45:52
【Question Description】:

I am trying to run K-means on Apache Spark using Scala. When I use the example from the Spark website at https://spark.apache.org/docs/2.3.0/ml-clustering.html everything works fine, but when I try to use a CSV file I run into this problem:

scala> val censocsv = spark.read.format("csv").option("sep",",").option("inferSchema","true").option("header", "true").load("censodiscapacidad.csv")
2018-10-01 21:58:31 WARN  SizeEstimator:66 - Failed to check whether UseCompressedOops is set; assuming yes
2018-10-01 21:58:49 WARN  ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
censocsv: org.apache.spark.sql.DataFrame = [ANIO: int, DELEGACION: double ... 123 more fields]

scala> val kmeans = new KMeans().setK(2).setSeed(1L)
kmeans: org.apache.spark.ml.clustering.KMeans = kmeans_860c02e56190

scala> val model = kmeans.fit(censocsv)
java.lang.IllegalArgumentException: Field "features" does not exist.
  at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:267)
  at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:267)
  at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
  at scala.collection.AbstractMap.getOrElse(Map.scala:59)
  at org.apache.spark.sql.types.StructType.apply(StructType.scala:266)
  at org.apache.spark.ml.util.SchemaUtils$.checkColumnType(SchemaUtils.scala:40)
  at org.apache.spark.ml.clustering.KMeansParams$class.validateAndTransformSchema(KMeans.scala:93)
  at org.apache.spark.ml.clustering.KMeans.validateAndTransformSchema(KMeans.scala:254)
  at org.apache.spark.ml.clustering.KMeans.transformSchema(KMeans.scala:340)
  at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
  at org.apache.spark.ml.clustering.KMeans.fit(KMeans.scala:305)
  ... 51 elided

scala> val predictions = model.transform(censocsv)
<console>:31: error: not found: value model
       val predictions = model.transform(censocsv)
                         ^

scala> 

【Question Comments】:

  • OK, thanks @BrianMcCutchon

Tags: scala apache-spark k-means


【Solution 1】:

This looks like a duplicate of the SparkML question "Field "features" does not exist."
You need to add a Vector column of features to the DataFrame: Spark's `KMeans` estimator does not consume the raw CSV columns directly; it expects a single vector-typed column (named `features` by default) that you build from the numeric columns you want to cluster on.
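
A minimal sketch of how that could look in the Spark shell, continuing from the `censocsv` DataFrame in the question. The column names passed to the assembler here (`ANIO`, `DELEGACION`) are just the two visible in the question's output; in practice you would list whichever numeric columns of censodiscapacidad.csv you actually want to cluster on:

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.clustering.KMeans

// Choose the numeric input columns to cluster on.
// These two are taken from the schema shown in the question;
// replace them with your real feature columns.
val featureCols = Array("ANIO", "DELEGACION")

// Assemble the chosen columns into the single vector column
// named "features" that KMeans.fit expects by default.
val assembler = new VectorAssembler()
  .setInputCols(featureCols)
  .setOutputCol("features")

val assembled = assembler.transform(censocsv)

// Now fitting no longer fails with 'Field "features" does not exist.'
val kmeans = new KMeans().setK(2).setSeed(1L)
val model = kmeans.fit(assembled)
val predictions = model.transform(assembled)
```

Note that `model.transform` must also be called on the assembled DataFrame (or another DataFrame that has a `features` column), which is why the question's final `model.transform(censocsv)` would still not work as written even if `fit` had succeeded.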

【Comments】:
