【Question Title】: Pyspark (Dataframes) read file line wise (Convert row to string)
【Posted】: 2018-08-27 23:01:25
【Question Description】:

I need to read a file line by line, split each line into words, and perform operations on those words.

How can I do this?

I wrote the following code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

logFile = "/home/hadoop/spark-2.3.1-bin-hadoop2.7/README.md"  # Should be some file on your system
spark = SparkSession.builder.appName("SimpleApp1").getOrCreate()
logData = spark.read.text(logFile).cache()
logData.printSchema()
logDataLines = logData.collect()

# The line variable below seems to be of type Row. How do I perform similar
# operations on a Row, or how do I convert a Row to a string?

for line in logDataLines:
    words = line.select(explode(split(line,"\s+")))
    for word in words:
        print(word)
    print("----------------------------------")

【Comments】:

Tags: apache-spark pyspark pyspark-sql


【Solution 1】:

I think you should apply a map function to your rows. You can apply anything you like inside a self-created function:

data = spark.read.text("/home/spark/test_it.txt").cache()

def someFunction(row):
    # row[0] holds the line's text (the single "value" column); split it into words
    wordlist = row[0].split(" ")
    result = list()
    for word in wordlist:
        result.append(word.upper())
    return result

# Apply the function to every Row via the underlying RDD
data.rdd.map(someFunction).collect()

Output:

[[u'THIS', u'IS', u'JUST', u'A', u'TEST'], [u'TO', u'UNDERSTAND'], [u'THE', u'PROCESSING']]
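As a point of comparison, the word splitting the question attempts can also stay entirely inside the DataFrame API using split and explode, avoiding the collect(). The following is only a sketch of that alternative (the "word" alias is an arbitrary name), not part of the original answer:

from pyspark.sql.functions import explode, split

# Split each line on whitespace, then flatten the arrays into one word per row
words = data.select(explode(split(data.value, r"\s+")).alias("word"))
words.show(truncate=False)

This keeps the work distributed across executors instead of pulling every line back to the driver.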

【Discussion】:
