[Posted]: 2017-10-28 23:59:33
[Question]:
I would appreciate it if someone could shed some light on the issue in the following code snippet:
lineStr= sc.textFile("/input/words.txt")
print (lineStr.collect())
['this file is created to count the no of texts', 'other wise i am just doing fine', 'lets see the output is there']
wc = lineStr.flatMap(lambda l: l.split(" ")).map(lambda x: (x,1)).reduceByKey(lambda w,c: w+c)
print (wc.glom().collect())
[[('this', 1), ('there', 1), ('i', 1), ('texts', 1), ('just', 1), ('fine', 1), ('is', 2), ('other', 1), ('created', 1), ('count', 1), ('of', 1), ('am', 1), ('no', 1), ('output', 1)], [('lets', 1), ('see', 1), ('the', 2), ('file', 1), ('doing', 1), ('wise', 1), ('to', 1)]]
Now, when I try to filter the dataset above for count values greater than 1 using the following, I get an error:
s = wc.filter(lambda a,b:b>1)
print (s.collect())
Error: vs = list(itertools.islice(iterator, batch))
TypeError: <lambda>() missing 1 required positional argument: 'b'
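The traceback above is the usual symptom of tuple-parameter unpacking having been removed from Python 3 lambdas (PEP 3113): `filter` passes each `(word, count)` pair as a single argument, so `lambda a, b: b > 1` is called with one argument and `b` goes unfilled. A minimal sketch of the fix, using a plain Python list as a stand-in for the RDD contents (the sample pairs are illustrative, not from a real run):

```python
# Hypothetical stand-in for the (word, count) pairs produced by reduceByKey.
pairs = [('is', 2), ('the', 2), ('this', 1), ('file', 1)]

# In Python 3 a lambda receives the tuple as one argument, so index into it
# instead of trying to unpack it in the parameter list:
frequent = list(filter(lambda pair: pair[1] > 1, pairs))
# frequent == [('is', 2), ('the', 2)]
```

The same change applies to the RDD version: `wc.filter(lambda pair: pair[1] > 1)`.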
[Discussion]:
Tags: python apache-spark filter word-count