[Question Title]: Sum of column returning all null values in PySpark SQL
[Posted]: 2020-09-03 20:21:26
[Question]:

I'm new to Spark, so this may be a simple question.

I have a SQL query result named sql_left in the following format:

Here is a sample row produced with sql_left.take(1):

[Row(REPORT_ID='2016-30-15/08/2019', Stats Area='2 Metropolitan', Suburb='GREENACRES', Postcode=5086, LGA Name='CITY OF PORT ADELAIDE ENFIELD', Total Units=3, Total Cas=0, Total Fats=0, Total SI=0, Total MI=0, Year=2016, Month='November', Day='Wednesday', Time='01:20 am', Area Speed=50, Position Type='Not Divided', Horizontal Align='Straight road', Vertical Align='Level', Other Feat='Not Applicable', Road Surface='Sealed', Moisture Cond='Dry', Weather Cond='Not Raining', DayNight='Night', Crash Type='Hit Parked Vehicle', Unit Resp=1, Entity Code='Driver Rider', CSEF Severity='1: PDO', Traffic Ctrls='No Control', DUI Involved=None, Drugs Involved=None, ACCLOC_X=1331135.04, ACCLOC_Y=1677256.22, UNIQUE_LOC=13311351677256, REPORT_ID='2016-30-15/08/2019', Unit No=2, No Of Cas=0, Veh Reg State='UNKNOWN', Unit Type='Motor Vehicle - Type Unknown', Veh Year='XXXX', Direction Of Travel='East', Sex=None, Age=None, Lic State=None, Licence Class=None, Licence Type=None, Towing='Unknown', Unit Movement='Parked', Number Occupants='000', Postcode=None, Rollover=None, Fire=None)]

Note: the Age column contains 'XXX', 'NUll', and integer-like values such as 023 and 034.
printSchema shows Age and Total Cas as integers.
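Since Age mixes placeholders like 'XXX' and 'NUll' with zero-padded digit strings, one option is to normalize it before aggregating. The helper below is a hypothetical sketch (not from the original post) that could be registered as a PySpark UDF:

```python
# Hypothetical helper to normalize the mixed Age values described above
# ('XXX', 'NUll', zero-padded strings like '023'). Shown only as a sketch
# of one way to clean the column; it is not part of the original post.
def parse_age(raw):
    """Return the age as an int, or None for placeholders like 'XXX'/'NUll'."""
    if raw is None:
        return None
    text = str(raw).strip()
    if not text.isdigit():   # rejects 'XXX', 'NUll', '', and other non-numeric text
        return None
    return int(text)         # '023' -> 23

# It could then be exposed to Spark SQL, e.g.:
# from pyspark.sql.functions import udf
# from pyspark.sql.types import IntegerType
# spark.udf.register("parse_age", udf(parse_age, IntegerType()))
```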

I have tried the code below to first join the two tables:

sql_left = spark.sql('''
SELECT * 
FROM sql_crash c Left JOIN sql_units u ON c.REPORT_ID=u.REPORT_ID''')
sql_left.createOrReplaceTempView("mytable")

The code below computes the total Cas:

sql_result = spark.sql('''
  SELECT concat_ws(' ', Day, Month, Year, Time) AS Date_Time,
         Age, "Licence Type", "Unit Type", Sex,
         COALESCE(sum("Total Cas"), 0) AS Total_casualities
  FROM mytable
  WHERE Suburb IN ('ADELAIDE','ADELAIDE AIRPORT','NORTH ADELAIDE','PORT ADELAIDE')
  GROUP BY Date_Time, Age, "Licence Type", "Unit Type", Sex
  ORDER BY Total_casualities DESC''')
sql_result.show(20, truncate=False)

The output I get is below, with the sum showing as 0:

+--------------------------------+---+------------+---------+-------+-----------------+
|Date_Time                       |Age|Licence Type|Unit Type|Sex    |Total_casualities|
+--------------------------------+---+------------+---------+-------+-----------------+
|Friday December 2016 02:45 pm   |XXX|Licence Type|Unit Type|Unknown|0.0              |
|Saturday September 2017 06:35 pm|023|Licence Type|Unit Type|Male   |0.0              |
+--------------------------------+---+------------+---------+-------+-----------------+

I have tried multiple options without success. My main problem is that if I use COALESCE(sum("Total Cas"), 0), Total_casualities returns 0.0 for every row. If I drop the COALESCE, the values show as NULL.

Any help is much appreciated.

[Comments]:

  • Sum of a column returning all 0.0 in PySpark SQL. My problem is that Total_casualities returns 0.0 for all rows.

Tags: sql pyspark apache-spark-sql data-science pyspark-dataframes


[Solution 1]:

Instead of wrapping Total Cas in double quotes ("Total Cas"), reference it with backticks.

i.e. `Total Cas`  

Note: column names containing spaces must be quoted with backticks. When you write the name in double quotes, Spark SQL treats it as a string literal, which is why you don't get the sum. The same applies to the other columns (Licence Type and Unit Type): the output shows the literal string instead of the column's values. Hope that clears it up.

sql_result = spark.sql('''
  SELECT concat_ws(' ', Day, Month, Year, Time) AS Date_Time,
         Age, `Licence Type`, `Unit Type`, Sex,
         sum(`Total Cas`) AS Total_casualities
  FROM mytable
  WHERE Suburb IN ('ADELAIDE','ADELAIDE AIRPORT','NORTH ADELAIDE','PORT ADELAIDE')
  GROUP BY Date_Time, Age, `Licence Type`, `Unit Type`, Sex
  ORDER BY Total_casualities DESC''')

[Discussion]:
