[Question Title]: Zeppelin twitter streaming example, unable to find the "tweets" temp table
[Posted]: 2017-06-23 12:27:22
[Question Description]:

When following the Zeppelin tutorial to stream tweets and query them with SparkSQL, I get an error saying the "tweets" temp table cannot be found. The exact code and the link I followed are below.

Ref: https://zeppelin.apache.org/docs/0.6.2/quickstart/tutorial.html


import scala.collection.mutable.HashMap
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import java.io.File
import org.apache.log4j.Logger
import org.apache.log4j.Level
import sys.process.stringSeqToProcess

/** Configures the Oauth Credentials for accessing Twitter */
def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {
  val configs = new HashMap[String, String] ++= Seq(
    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> accessToken, "accessTokenSecret" -> accessTokenSecret)
  println("Configuring Twitter OAuth")
  configs.foreach{ case(key, value) =>
    if (value.trim.isEmpty) {
      throw new Exception("Error setting authentication - value for " + key + " not set")
    }
    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
    System.setProperty(fullKey, value.trim)
    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
  }
  println()
}

// Configure Twitter credentials
val apiKey = "xxx"
val apiSecret = "xxx"
val accessToken = "xx-xxx"
val accessTokenSecret = "xxx"

configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)

import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._


@transient val ssc = new StreamingContext(sc, Seconds(2))
@transient val tweets = TwitterUtils.createStream(ssc, None)
@transient val twt = tweets.window(Seconds(60), Seconds(2))

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
case class Tweet(createdAt:Long, text:String)
twt.map(status=>
        Tweet(status.getCreatedAt().getTime()/1000, status.getText())).foreachRDD(rdd=>
        // Below line works only in spark 1.3.0.
        // For spark 1.1.x and spark 1.2.x,
        // use rdd.registerTempTable("tweets") instead.
        rdd.toDF().registerTempTable("tweets")
)

ssc.start()

In the next paragraph I have the SQL select statement:

%sql select createdAt, count(1) from tweets group by createdAt order by createdAt

This throws the following exception:

org.apache.spark.sql.AnalysisException: Table not found: tweets;
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:305)

[Question Discussion]:

    标签: apache-spark apache-spark-sql apache-zeppelin


    [Solution 1]:

    Was able to run the above example by making the following edit. I am not sure whether this change is needed because of a Spark version upgrade (v1.6.3) or some other underlying architectural nuance I may be missing, but in any case:

    Ref: SparkSQL error Table Not Found

    In the second paragraph, instead of invoking it directly via the %sql syntax, try going through sqlContext as follows:

    val my_df = sqlContext.sql("SELECT * FROM tweets LIMIT 5")
    my_df.collect().foreach(println)
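
    A likely reason the %sql paragraph fails is that the question's code creates a fresh `new SQLContext(sc)`, while Zeppelin's %sql interpreter queries the notebook's own pre-bound `sqlContext`; a temp table registered on a different SQLContext instance is not visible to %sql. The sketch below (not verified against this exact setup, and assuming Spark 1.6.x inside Zeppelin where `sc`, `sqlContext`, and the `twt` DStream from the question are already in scope) registers the table on the shared context instead:

    ```scala
    // Reuse Zeppelin's pre-bound sqlContext rather than creating a new
    // SQLContext(sc), so the "tweets" temp table is registered on the same
    // context that the %sql interpreter queries.
    import sqlContext.implicits._

    case class Tweet(createdAt: Long, text: String)

    twt.map(status =>
        Tweet(status.getCreatedAt().getTime() / 1000, status.getText())
      ).foreachRDD(rdd =>
        // Register on the shared context each batch; %sql can now see it.
        rdd.toDF().registerTempTable("tweets")
      )

    // The same query can also be run programmatically on the shared context:
    val counts = sqlContext.sql(
      "SELECT createdAt, count(1) FROM tweets GROUP BY createdAt ORDER BY createdAt")
    counts.show()
    ```
    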
    

    [Discussion]:
