[Posted]: 2019-04-19 06:31:16
[Problem description]:
My dataframe is as follows:
val employees = sc.parallelize(Array[(String, Int, BigInt)](
("Rafferty", 31, 222222222), ("Jones", 33, 111111111), ("Heisenberg", 33, 222222222), ("Robinson", 34, 111111111), ("Smith", 34, 333333333), ("Williams", 15, 222222222)
)).toDF("LastName", "DepartmentID", "Code")
employees.show()
+----------+------------+---------+
| LastName|DepartmentID| Code|
+----------+------------+---------+
| Rafferty| 31|222222222|
| Jones| 33|111111111|
|Heisenberg| 33|222222222|
| Robinson| 34|111111111|
| Smith| 34|333333333|
| Williams| 15|222222222|
+----------+------------+---------+
I want to create another column, personal_id, by concatenating DepartmentID and Code. Example: Rafferty => 31222222222
So I wrote the following code:
val anotherdf = employees.withColumn("personal_id", $"DepartmentID".cast("String") + $"Code".cast("String"))
+----------+------------+---------+------------+
| LastName|DepartmentID| Code| personal_id|
+----------+------------+---------+------------+
| Rafferty| 31|222222222|2.22222253E8|
| Jones| 33|111111111|1.11111144E8|
|Heisenberg| 33|222222222|2.22222255E8|
| Robinson| 34|111111111|1.11111145E8|
| Smith| 34|333333333|3.33333367E8|
| Williams| 15|222222222|2.22222237E8|
+----------+------------+---------+------------+
But my personal_id comes out as a double.
anotherdf.printSchema
root
|-- LastName: string (nullable = true)
|-- DepartmentID: integer (nullable = false)
|-- Code: decimal(38,0) (nullable = true)
|-- personal_id: double (nullable = true)
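A possible fix (a sketch, assuming Spark's built-in `concat` from `org.apache.spark.sql.functions`): in Spark SQL, `+` between columns is resolved as numeric addition, so both operands are coerced to double even after `cast("String")`; `concat` performs true string concatenation instead.

```scala
import org.apache.spark.sql.functions.concat

// concat joins the string representations instead of adding numerically;
// casting each column to String first makes the intent explicit.
val fixed = employees.withColumn(
  "personal_id",
  concat($"DepartmentID".cast("String"), $"Code".cast("String"))
)
fixed.printSchema() // personal_id should now be string
fixed.show()        // e.g. Rafferty -> 31222222222
```

`concat_ws("", ...)` would work as well, and additionally treats null columns as empty strings rather than nulling the whole result.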
[Discussion]:
Tags: scala apache-spark apache-spark-sql