[Question Title]: pandas groupby and compare multiple columns and rows
[Posted]: 2019-12-26 22:09:49
[Question]:

I have a CSV with more than 600 columns and thousands of rows. The original file contains many more customers and departments, but this example covers the key parts.

Note: I derived the Site column from the A_Loc1 and B_Loc1 columns to make comparing and grouping rows easier, but it isn't required. If the groupby can be done without it, I'm open to other approaches.

I need to compare dates across different rows and columns based on Cust_ID and Site. For example, confirm that A_Date1 is less than B_Date1, but only for rows with the same Cust_ID and Site values.

So for Cust_ID 100 and Site CA2.2, A_Date1 is 8/1/2015 and B_Date1 is 6/15/2018:

if A_Date1 > B_Date1:
    df['Result'] = "Fail"
else:
    df['Result'] = ""

In the case above no action is needed, because A_Date1 is less than B_Date1.
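For reference, the row-wise if/else above maps onto a single vectorized assignment in pandas. This is a minimal sketch with the two example dates from the question hard-coded:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A_Date1": pd.to_datetime(["8/1/2015", "7/1/2019"]),
    "B_Date1": pd.to_datetime(["6/15/2018", "12/15/2018"]),
})

# "Fail" where A_Date1 > B_Date1, empty string otherwise
df["Result"] = np.where(df["A_Date1"] > df["B_Date1"], "Fail", "")
```

Here the first row passes (8/1/2015 < 6/15/2018) and the second fails (7/1/2019 > 12/15/2018).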

However, for Cust_ID 100 and Site CA2.0, A_Date1 is 7/1/2019 and B_Date1 is 12/15/2018, so the Result column should be "Fail" for the rows where Site is CA2.0.

I'm open to any efficient, flexible way to do this. I also need to perform other comparisons across different rows and columns, but this should get me started.

Expected result:

+----+----------+-----------+-------+-------------+--------+-------------+-------------+-----------+----------+-----------+----------+------------+------------+-----------+------------+----------+-----------+
|    | Result   |   Cust_ID | Dep   |   Order_Num | Site   | Rec_Date1   | Rec_DateX   | A_Date1   | A_Loc1   | A_DateX   | B_Loc1   | B_Date1    | B_Date2    | B_DateX   | C_Date1    | C_Loc1   | C_DateX   |
|----+----------+-----------+-------+-------------+--------+-------------+-------------+-----------+----------+-----------+----------+------------+------------+-----------+------------+----------+-----------|
|  0 |          |       100 | A     |           1 | CA2.2  |             |             | 8/1/2015  | CA2.2    |           |          |            |            |           |            |          |           |
|  1 |          |       100 | A     |           2 | CA2.0  |             |             | 7/1/2019  | CA2.0    | 8/21/2019 |          |            |            |           |            |          |           |
|  2 |          |       100 | B     |           1 | CA2.2  |             |             |           |          |           | CA2.2    | 6/15/2018  | 6/15/2016  | 8/1/2019  |            |          |           |
|  3 | Fail     |       100 | B     |           2 | CA2.0  |             |             |           |          |           | CA2.0    | 12/15/2018 | 12/15/2016 |           |            |          |           |
|  4 | Fail     |       100 | B     |           3 | CA2.0  |             |             |           |          |           | CA2.0    | 12/15/2018 | 12/15/2016 | 8/21/2019 |            |          |           |
|  5 |          |       100 | C     |           1 | CA2.2  |             |             |           |          |           |          |            |            |           | 6/15/2016  | CA2.2    |           |
|  6 |          |       100 | C     |           2 | CA2.0  |             |             |           |          |           |          |            |            |           | 12/15/2017 | CA2.0    | 8/21/2019 |
|  7 |          |       100 | Rec   |             |        | 6/12/2019   | 8/1/2019    |           |          |           |          |            |            |           |            |          |           |
|  8 |          |       200 | A     |           1 | CA2.2  |             |             | 8/1/2015  | CA2.2    |           |          |            |            |           |            |          |           |
|  9 |          |       200 | A     |           2 | CA2.0  |             |             | 7/1/2015  | CA2.0    | 8/21/2019 |          |            |            |           |            |          |           |
| 10 |          |       200 | B     |           1 | CA2.2  |             |             |           |          |           | CA2.2    | 6/15/2018  | 6/15/2016  | 8/1/2019  |            |          |           |
| 11 |          |       200 | B     |           2 | CA2.0  |             |             |           |          |           | CA2.0    | 12/15/2018 | 12/15/2016 |           |            |          |           |
| 12 |          |       200 | B     |           3 | CA2.0  |             |             |           |          |           | CA2.0    | 12/15/2018 | 12/15/2016 | 8/21/2019 |            |          |           |
| 13 |          |       200 | C     |           1 | CA2.2  |             |             |           |          |           |          |            |            |           | 6/15/2016  | CA2.2    |           |
| 14 |          |       200 | C     |           2 | CA2.0  |             |             |           |          |           |          |            |            |           | 12/15/2017 | CA2.0    | 8/21/2019 |
| 15 |          |       200 | Rec   |             |        | 6/12/2019   | 8/1/2019    |           |          |           |          |            |            |           |            |          |           |
+----+----------+-----------+-------+-------------+--------+-------------+-------------+-----------+----------+-----------+----------+------------+------------+-----------+------------+----------+-----------+

My attempts:

# Returns: ValueError: Length of values does not match length of index
df['Result'] = df.loc[df.A_Date1 < df.B_Date1].groupby(['Cust_ID','Site'],as_index=False)

# Returns: ValueError: Length of values does not match length of index
df["Result"] = df.loc[(((df["A_Date1"] != "N/A") 
               & (df["B_Date1"] != "N/A"))
               & (df.A_Date1 < df.B_Date1))].groupby([
               'Cust_ID','Site'],as_index=False)

# Returns: ValueError: unknown type str224
conditions = "(x['A_Date1'].notna()) & (x['B_Date1'].notna()) & (x['A_Date1'] < x['B_Date1'])"
df["Result"] = df.groupby(['Cust_ID','Site']).apply(lambda x: pd.eval(conditions))

# TypeError: incompatible index of inserted column with frame index
df = df[df.Dep != 'Rec']
df['Result'] = df.groupby(['Cust_ID','Site'],as_index = False).apply(lambda x: (x['A_Date1'].notna()) & (x['B_Date1'].notna()) & (x['A_Date1'] < x['B_Date1']))

# This produces FALSE for all rows
grouped_df = df.groupby(['Cust_ID','Site']).apply(lambda x: (x['A_Date1'].notna()) & (x['B_Date1'].notna()) & (x['A_Date1'] < x['B_Date1']))
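The `ValueError`s above happen because the filtered/grouped expression on the right-hand side is shorter than `df`, so it cannot be assigned back as a full-length column. One way around this (a sketch, not the poster's exact data) is to broadcast the per-group `A_Date1` onto every row with `transform`, so the resulting mask has the same length and index as `df`:

```python
import pandas as pd

df = pd.DataFrame({
    "Cust_ID": [100, 100, 100],
    "Site": ["CA2.0", "CA2.0", "CA2.0"],
    "A_Date1": pd.to_datetime(["7/1/2019", None, None]),
    "B_Date1": pd.to_datetime([None, "12/15/2018", "12/15/2018"]),
})

# transform returns a Series aligned with df's index, unlike a filtered groupby
a_filled = df.groupby(["Cust_ID", "Site"])["A_Date1"].transform(
    lambda s: s.ffill().bfill()
)

# Full-length boolean mask, safe to use with df.loc
mask = a_filled.notna() & df["B_Date1"].notna() & (a_filled > df["B_Date1"])
df.loc[mask, "Result"] = "Fail"
```

The key difference from the failing attempts is that `transform` preserves the original index, so `mask` lines up row-for-row with `df`.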

Update:

I've found a solution for these two specific columns (A_Date1 and B_Date1): first convert the columns to datetime, add a Result column, group, and perform the comparison.

However, my original file has about 50 columns that need comparing. Iterating over a list (or dict) of columns to perform these steps would be ideal.

## Solution for A_Date1 and B_Date1
## Convert both date columns to datetime, coercing bad values to NaT
df['A_Date1'] = pd.to_datetime(df['A_Date1'], errors="coerce")
df['B_Date1'] = pd.to_datetime(df['B_Date1'], errors="coerce")

# Add Result column
df.insert(loc=0, column="Result", value=np.nan)

# groupby Cust_ID and Site, then fill A_Date1 forward and back
# (transform keeps the original index, so the result aligns with df)
df['A_Date1'] = df.groupby(['Cust_ID','Site'], sort=False)['A_Date1'].transform(lambda x: x.ffill().bfill())

# Perform comparison
df.loc[(df["A_Date1"].notna() & df["B_Date1"].notna())
       & (df["A_Date1"] > df["B_Date1"]),
       "Result"] = "Fail"
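The steps above can be generalized over a list of (earlier, later) column-name pairs rather than repeated per pair. The pair list below is a hypothetical example; substitute the real ~50 column names:

```python
import pandas as pd

def flag_failures(df, pairs, keys=("Cust_ID", "Site")):
    """Mark Result = "Fail" wherever the 'earlier' date exceeds the 'later' date
    within each (Cust_ID, Site) group. `pairs` is a list of (earlier, later)
    column-name tuples."""
    df = df.copy()
    if "Result" not in df.columns:
        df.insert(0, "Result", pd.NA)
    for early, late in pairs:
        # Coerce both columns to datetime (bad/blank values become NaT)
        df[early] = pd.to_datetime(df[early], errors="coerce")
        df[late] = pd.to_datetime(df[late], errors="coerce")
        # Broadcast the group's date onto every row so the mask aligns with df
        filled = df.groupby(list(keys), sort=False)[early].transform(
            lambda s: s.ffill().bfill()
        )
        bad = filled.notna() & df[late].notna() & (filled > df[late])
        df.loc[bad, "Result"] = "Fail"
    return df

# Hypothetical usage with one pair from the question:
df = pd.DataFrame({
    "Cust_ID": [100, 100],
    "Site": ["CA2.0", "CA2.0"],
    "A_Date1": ["7/1/2019", None],
    "B_Date1": [None, "12/15/2018"],
})
out = flag_failures(df, [("A_Date1", "B_Date1")])
```

Adding another comparison is then just another tuple in `pairs`.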

[Comments]:

  • For Cust_ID 100 and Site CA2.2, how do you get a B_Date1 of 7/1/2019? Isn't that the date for Site CA2.0, or am I misreading?
  • @Dan, thanks. That was a typo; just corrected it.
  • What do you mean by 50 columns?
  • @Ben.T, as I mentioned, I have other comparisons to perform. Rather than writing out every column, I'm looking for an efficient approach.
  • @m8_ got that part, but do you mean the other comparisons are also between dates, and the column names follow a pattern?

Tags: python-3.x pandas group-by compare


[Solution 1]:

Posting this solution in the hope that a more elegant and scalable implementation turns up.

import pandas as pd
import numpy as np

data = [[100,'A','1','','','8/1/2015','CA2.2','','','','','','','',''],
        [100,'A','2','','','7/1/2019','CA2.0','8/21/2019','','','','','','',''],
        [100,'B','1','','','','','','CA2.2','6/15/2018','6/15/2016','8/1/2019','','',''],
        [100,'B','2','','','','','','CA2.0','12/15/2018','12/15/2016','','','',''],       
        [100,'B','3','','','','','','CA2.0','12/15/2018','12/15/2016','8/21/2019','','',''],
        [100,'C','1','','','','','','','','','','6/15/2016','CA2.2',''],
        [100,'C','2','','','','','','','','','','12/15/2017','CA2.0','8/21/2019'],
        [100,'Rec','','6/12/2019','8/1/2019','','','','','','','','','',''],
        [200,'A','1','','','8/1/2015','CA2.2','','','','','','','',''],
        [200,'A','2','','','7/1/2015','CA2.0','8/21/2019','','','','','','',''],
        [200,'B','1','','','','','','CA2.2','6/15/2018','6/15/2016','8/1/2019','','',''],
        [200,'B','2','','','','','','CA2.0','12/15/2018','12/15/2016','','','',''],       
        [200,'B','3','','','','','','CA2.0','12/15/2018','12/15/2016','8/21/2019','','',''],
        [200,'C','1','','','','','','','','','','6/15/2016','CA2.2',''],
        [200,'C','2','','','','','','','','','','12/15/2017','CA2.0','8/21/2019'],
        [200,'Rec','','6/12/2019','8/1/2019','','','','','','','','','','']]

df = pd.DataFrame(data,columns=['Cust_ID','Dep','Order_Num','Rec_Date1',
                                'Rec_DateX','A_Date1','A_Loc1','A_DateX',
                                'B_Loc1','B_Date1','B_Date2','B_DateX',
                                'C_Date1','C_Loc1','C_DateX'])

# replace blank strings with np.NaN
df.replace(r"^\s*$", np.nan, regex=True, inplace=True)

## Convert both date columns to datetime, coercing bad values to NaT
df['A_Date1'] = pd.to_datetime(df['A_Date1'], errors="coerce")
df['B_Date1'] = pd.to_datetime(df['B_Date1'], errors="coerce")


# Add Site and Result column
df.insert(loc=4, column="Site", value=np.nan)
df.insert(loc=0, column="Result", value=np.nan)

# Populate Site column based on related column
df.loc[df["A_Loc1"].notna(), 
       "Site"] = df["A_Loc1"]

df.loc[df["B_Loc1"].notna(), 
       "Site"] = df["B_Loc1"]

df.loc[df["C_Loc1"].notna(), 
       "Site"] = df["C_Loc1"]

# groupby Cust_ID and Site, and fill A_Date1 forward and back
# (transform keeps the original index, so the result aligns with df)
df['A_Date1'] = df.groupby(['Cust_ID','Site'], sort=False)['A_Date1'].transform(lambda x: x.ffill().bfill())

# Perform comparison
df.loc[(df["A_Date1"].notna() & df["B_Date1"].notna())
       & (df["A_Date1"] > df["B_Date1"]),
       "Result"] = "Fail"
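As a possible simplification of the three `Site` assignments above, the first non-null location per row can be taken in one step with a row-wise backfill (a sketch; assumes the same `*_Loc1` columns):

```python
import pandas as pd

df = pd.DataFrame({
    "A_Loc1": ["CA2.2", None, None],
    "B_Loc1": [None, "CA2.0", None],
    "C_Loc1": [None, None, "CA2.2"],
})

# Backfill across the columns, then the first column holds the first
# non-null location for each row
df["Site"] = df[["A_Loc1", "B_Loc1", "C_Loc1"]].bfill(axis=1).iloc[:, 0]
```

This scales to any number of `*_Loc1` columns without adding more `df.loc[...]` lines.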

[Discussion]:
