【Title】: Task not serializable (Flink)
【Posted】: 2015-09-27 17:02:26
【Question】:

I am trying to run the PageRank Basic example in Flink with a small modification (only in how the input file is read; everything else is the same), and I get the error "Task not serializable". Below is part of the error output:

at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:179)
at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:171)

Below is my code:

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._ // for .as(...) / .toDataSet[T] in this old API
import org.apache.flink.api.java.aggregation.Aggregations.SUM
import org.apache.flink.util.Collector

// Case classes as in Flink's PageRank example (not shown in the original post)
case class Link(sourceId: Long, targetId: Long)
case class Id(pageId: Long)
case class Page(pageId: Long, rank: Double)
case class AdjacencyList(sourceId: Long, targetIds: Array[Long])

object hpdb {

  def main(args: Array[String]) {

    val env = ExecutionEnvironment.getExecutionEnvironment

    val maxIterations = 10000

    val DAMPENING_FACTOR: Double = 0.85

    val EPSILON: Double = 0.0001

    val outpath = "/home/vinoth/bigdata/assign10/pagerank.csv"

    val links = env.readCsvFile[Tuple2[Long, Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
      fieldDelimiter = "\t", includedFields = Array(1, 4))
      .as('sourceId, 'targetId).toDataSet[Link] // source and target ids

    val pages = env.readCsvFile[Tuple1[Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
      fieldDelimiter = "\t", includedFields = Array(1))
      .as('pageId).toDataSet[Id] // page id

    val noOfPages = pages.count()

    val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / noOfPages))

    val adjacencyLists = links
      // initialize lists: ._1 is the source id and ._2 is the target id
      .map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
      // concatenate lists
      .groupBy("sourceId").reduce {
        (l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
      }

    // start iteration

    val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
      // ** the reported error points at this closure **
      currentRanks =>
        val newRanks = currentRanks
          // distribute ranks to target pages
          .join(adjacencyLists).where("pageId").equalTo("sourceId") {
            (page, adjacent, out: Collector[Page]) =>
              for (targetId <- adjacent.targetIds) {
                out.collect(Page(targetId, page.rank / adjacent.targetIds.length))
              }
          }
          // collect ranks and sum them up
          .groupBy("pageId").aggregate(SUM, "rank")
          // apply dampening factor
          // ** the error is also reported here **
          .map { p =>
            Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pages.count()))
          }

        // terminate if no rank update was significant
        val termination = currentRanks.join(newRanks).where("pageId").equalTo("pageId") {
          (current, next, out: Collector[Int]) =>
            // check for significant update
            if (math.abs(current.rank - next.rank) > EPSILON) out.collect(1)
        }

        (newRanks, termination)
    }

    val result = finalRanks

    // emit result
    result.writeAsCsv(outpath, "\n", " ")

    env.execute()
  }
}

Any help pointing me in the right direction would be much appreciated. Thanks.

【Comments】:

    Tags: scala apache-flink


    【Solution 1】:

    The problem is that you reference the DataSet pages from within a MapFunction. This is not possible, because a DataSet is only the logical representation of a data flow and cannot be accessed at runtime.

    To solve the problem, assign the value of pages.count to a variable, e.g. val pagesCount = pages.count, and reference that variable inside your MapFunction.

    What pages.count actually does is trigger the execution of the data flow graph so that the number of elements in pages can be counted; the result is then returned to your program.
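    A minimal sketch of that fix against the code in the question (same case classes and variable names; pagesCount is the only new name, and summedRanks stands in for the result of the question's groupBy/aggregate step):

    // count() triggers execution of the data flow graph and hands a plain
    // Long back to the driver program; a Long serializes without trouble.
    val pagesCount: Long = pages.count()

    // Closures now capture the serializable Long instead of the DataSet:
    val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / pagesCount))

    // Inside iterateWithTermination, the dampening step becomes:
    val dampenedRanks = summedRanks.map { p =>
      Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pagesCount))
    }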

    【Discussion】:
