[Posted: 2021-01-11 22:18:47]
[Problem description]:
This is my first post, and I need a bit of help with a Scala programming task that is not so simple (at least for me).
I am using Scala 2.10 with Spark 3.0.0-preview2.
Imported from a MySQL database, my data looks like this:
95,118.37,118.47,111.725,114.3,1049181,AMP,2020-04-14
96,116.88,117.84,113.11,114.92,827085,AMP,2020-04-13
97,113.64,124.61,113.64,120.47,1608575,AMP,2020-04-09
98,104.48,112.48,102.28,111.69,996230,AMP,2020-04-08
99,109.17,112.23,102.41,103.48,1302910,AMP,2020-04-07
100,42.25,42.25,41.73,41.82,639964,G,2020-08-26
101,41.98,42.15,41.76,42.12,501219,G,2020-08-25
102,41.52,42.015,41.45,41.9,479076,G,2020-08-24
103,41.27,41.46,40.99,41.16,752730,G,2020-08-21
104,41.74,41.965,41.25,41.3,596435,G,2020-08-20
105,42.14,42.21,41.87,41.94,422493,G,2020-08-19
Then, through a map step, the data is reshaped into Tuple2 records like these:
(AMP,(1,156.77,156.915,155.03,155.74,527938,AMP,2020-08-26))
(AMP,(2,159.48,159.88,156.86,156.99,535905,AMP,2020-08-25))
(AMP,(3,155.38,157.75,155.33,157.72,758272,AMP,2020-08-24))
(AMP,(4,155.24,156.79,153.92,154.51,653496,AMP,2020-08-21))
(AMP,(5,155.24,157.39,154.27,155.14,516138,AMP,2020-08-20))
(AMP,(6,156.65,160.06,156.57,156.85,577637,AMP,2020-08-19))
(AMP,(7,158.05,158.35,156.34,156.5,544429,AMP,2020-08-18))
(AMP,(8,159.69,159.82,157.76,157.83,437624,AMP,2020-08-17))
where each record has the type:
org.apache.spark.rdd.RDD[(String, (Int, Double, Double, Double, Double, Int, String, String))]
Next, I need to group all the records by key, so I wrote a `groupByKey` call:
val SA = Simboli.groupByKey
which produces a variable of type:
org.apache.spark.rdd.RDD[(String, Iterable[(Int, Double, Double, Double, Double, Int, String, String)])]
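As a side note, each grouped `Iterable` can be turned into a `Vector` without leaving the RDD, via `mapValues`; a minimal sketch against the `SA` variable (the name `SAVec` is an assumption, not from the original code):

```scala
// Materialize every group as an immutable Vector, keeping the key structure.
// Resulting type: RDD[(String, Vector[(Int, Double, Double, Double, Double, Int, String, String)])]
val SAVec = SA.mapValues(group => group.toVector)
```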
My question now is: can I create a new variable of type Vector or Seq and insert each record of this unusual type into it?
For example, a Vector where each item is a new:
RDD[(String, Iterable[(Int, .....
The only way I have found is to convert this variable step by step:
- take only the first "group":
val SAG: Array[(String, Iterable[(Int, Double, Double, Double, Double, Int, String, String)])] =
  SA.take(1)
Extract the `Iterable` part:
val SAGITB: Array[Iterable[(Int, Double, Double, Double, Double, Int, String, String)]] =
  SAG.map(item => item._2)
Convert each `Iterable` into an `Iterator`:
val SAGITT: Array[Iterator[(Int, Double, Double, Double, Double, Int, String, String)]] =
  SAGITB.map(item => item.iterator)
Extract the values:
val SARDD: Array[(Int, Double, Double, Double, Double, Int, String, String)] =
  SAGITT.map(item => item.next)
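Note that `iterator.next` yields only the first tuple of each group. If the goal is every tuple, the `Iterable` can be flattened directly without going through an `Iterator`; a minimal sketch over `SAG` (the name `allRows` is an assumption):

```scala
// Flatten all groups into one flat Array of tuples.
val allRows: Array[(Int, Double, Double, Double, Double, Int, String, String)] =
  SAG.flatMap { case (_, group) => group }
```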
Finally, I tried to fill a Vector or Seq item by item in a for loop, but I could not make it work. This is my last attempt:
val SV3: Vector[Array[(Int, Double, Double, Double, Double, Int, String, String)]] =
  Vector.empty
for (it <- 0 to 20) {
  println("Riga numero: " + it)
  SV3 :+ SAGITT.map(item => item.next) // result of :+ is discarded here
}
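For what it's worth, `:+` on an immutable `Vector` returns a new `Vector` and never mutates the receiver, so the expression inside the loop is computed and then discarded. A sketch of two common fixes (the names `sv3` and `sv3b` are assumptions):

```scala
// Fix 1: use a var and reassign on each append.
var sv3 = Vector.empty[Array[(Int, Double, Double, Double, Double, Int, String, String)]]
sv3 = sv3 :+ SAGITT.map(item => item.next())

// Fix 2: build the whole Vector in one expression, no loop at all.
val sv3b: Vector[Array[(Int, Double, Double, Double, Double, Int, String, String)]] =
  SAG.map { case (_, group) => group.toArray }.toVector
```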
So my final question is: how can I fill a Vector or Seq with data held in an `Iterable` or `Iterator`? Or, alternatively, how can I extract all the data from the grouped RDD's Iterables, transform it, and put it into a plain Vector?
Thanks a lot!!
[Discussion]:
Tags: scala apache-spark iterator rdd iterable