【Posted】: 2016-10-19 23:39:03
【Question】:
I have a table like this:
+------+------------+
| fruit|fruit_number|
+------+------------+
| apple| 20|
|orange| 33|
| pear| 27|
| melon| 31|
| plum| 8|
|banana| 4|
+------+------------+
I want to compute each row's percentage of the total, but when I sum the percentage column I can't get 100%. Here is the code I wrote in pyspark:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext, HiveContext, Row
from pyspark.sql.types import StringType, IntegerType, StructType, StructField, LongType
from pyspark.sql.functions import sum, mean, col

sqlContext = HiveContext(sc)

rdd = sc.parallelize([('apple', 20),
                      ('orange', 33),
                      ('pear', 27),
                      ('melon', 31),
                      ('plum', 8),
                      ('banana', 4)])
schema = StructType([StructField('fruit', StringType(), True),
                     StructField('fruit_number', IntegerType(), True)])
df = sqlContext.createDataFrame(rdd, schema)
df.registerTempTable('fruit_df_sql')

# total of fruit_number is 123
df_percent = sqlContext.sql("""select fruit, round(fruit_number/123*100, 2) as cnt_percent
                               from fruit_df_sql
                               order by cnt_percent desc""")
df_percent.agg(sum('cnt_percent')).show()
But I get this result:
+----------------+
|sum(cnt_percent)|
+----------------+
| 99.99|
+----------------+
That's not 100%. How do I handle this precision error? Thanks.
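For reference, the 0.01 gap can be reproduced without Spark: rounding each row to 2 decimals independently discards fractional hundredths, so the rounded values need not sum to exactly 100. A minimal sketch (using the counts from the question) showing the loss, plus one common remedy, the largest-remainder method, which hands the missing hundredths back to the rows with the largest discarded remainders:

```python
import math

# Counts taken from the question's table; total is 123.
counts = {'apple': 20, 'orange': 33, 'pear': 27,
          'melon': 31, 'plum': 8, 'banana': 4}
total = sum(counts.values())

# Rounding each row independently to 2 decimals loses hundredths overall.
pct = {k: round(v / total * 100, 2) for k, v in counts.items()}
print(round(sum(pct.values()), 2))  # 99.99, not 100.0

# Largest-remainder method: floor every percentage to 2 decimals,
# then give the leftover hundredths to the rows whose truncated
# fractional remainder was largest.
floored = {k: math.floor(v / total * 10000) / 100 for k, v in counts.items()}
leftover = round((100 - sum(floored.values())) * 100)  # missing hundredths
order = sorted(counts, key=lambda k: (counts[k] / total * 10000) % 1,
               reverse=True)
for k in order[:leftover]:
    floored[k] += 0.01
print(round(sum(floored.values()), 2))  # 100.0
```

The trade-off is that some rows are deliberately shown 0.01 off their true rounded value; the alternative is to keep more decimal places (or exact fractions) and only round for display.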
【Discussion】: