【Question Title】: Find time interval with groupby and agg
【Posted】: 2020-04-25 12:55:02
【Question】:

df:

          Id    timestamp               data    sig     events1 Start   Peak    Timediff    Datadiff
104513  104754  2012-03-21 16:23:21.923 19.5    1.0     0.0     1.0     0.0     28732.920   0.5
104514  104755  2012-03-21 16:23:22.023 20.0    -1.0    0.0     0.0     1.0     0.100       0.5
104623  104864  2012-03-22 04:27:04.550 19.5    0.0     0.0     0.0     0.0     43423.127   -0.5
104630  104871  2012-03-22 04:27:11.670 19.5    -1.0    0.0     0.0     0.0     7.120       0.0
105147  105388  2012-03-23 06:12:24.523 19.0    -1.0    0.0     1.0     0.0     92712.853   -0.5
105148  105389  2012-03-23 06:12:24.623 18.5    1.0     1.0     0.0     0.0     0.100       -0.5

I want to find the time intervals between rows where `Peak == 1`: the start timestamp is the first occurrence of `Start == 1` within the interval, and the end timestamp is the row where `Peak == 1`. There is only one `Peak == 1` row per interval, but there may be several `Start == 1` rows.

I tried creating groups with df['group'] = df['Peak'].cumsum() and then aggregating with something like df = df.groupby('group').agg({'Start': 'first', 'timestamp': 'first', ...}), but I'm not sure how to specify the start and end timestamps within each group.
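As an aside, the behaviour of that cumsum grouping can be seen on a toy column (values made up for illustration): cumsum increments *on* each Peak==1 row, so that row opens a new group rather than closing the previous one, which is why first/last inside a group do not line up with the interval boundaries directly:

```python
import pandas as pd

# Toy Peak column (hypothetical values, for illustration only).
df = pd.DataFrame({'Peak': [1.0, 0.0, 1.0, 0.0, 1.0]})

# cumsum counts the current row too, so every Peak==1 row
# receives a new group id and carries it forward.
df['group'] = df['Peak'].cumsum()
print(df['group'].tolist())  # [1.0, 1.0, 2.0, 2.0, 3.0]
```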

Expected output:

timestamp1 (i.e. Start==1)   timestamp2 (i.e. Peak==1)   TimeInterval
2012-03-21 16:23:21.923   2012-03-21 16:23:22.023   0.1
                          ...

Edit:

Reproducible example:

        Id      timestamp               Start   Peak
51253   51494   2012-01-27 06:22:08.330 NaN     1.0  # Time intervals are divided by these rows where `Peak==1`.
51254   51495   2012-01-27 06:22:08.430 0.0     0.0
51255   51496   2012-01-27 07:19:06.297 1.0*    0.0
51256   51497   2012-01-27 07:19:06.397 0.0     0.0
51259   51500   2012-01-27 07:32:19.587 0.0     0.0
51260   51501   2012-01-27 07:32:19.687 0.0     1.0  # Time intervals are divided by these rows where `Peak==1`.
51261   51502   2012-01-27 07:32:37.607 0.0     0.0
51262   51503   2012-01-27 07:32:37.707 0.0     0.0
51325   51566   2012-01-27 09:00:23.053 1.0*    0.0
51326   51567   2012-01-27 09:00:23.153 0.0     0.0
51327   51568   2012-01-27 09:00:28.047 0.0     0.0
51328   51569   2012-01-27 09:00:28.147 0.0     1.0  # Time intervals are divided by these rows where `Peak==1`.
51349   51590   2012-01-27 09:06:23.110 0.0     0.0
51350   51591   2012-01-27 09:06:23.210 0.0     0.0
51351   51592   2012-01-27 09:06:33.113 0.0     0.0
51352   51593   2012-01-27 09:06:33.213 0.0     0.0
51389   51630   2012-01-27 10:00:32.037 1.0*    0.0
51390   51631   2012-01-27 10:00:32.137 0.0     0.0
51393   51634   2012-01-27 10:06:00.187 0.0     0.0
51394   51635   2012-01-27 10:06:00.287 0.0     0.0
51535   51776   2012-01-27 10:40:48.693 0.0     0.0  # From here onwards are the additional data where an issue occurred. 
51536   51777   2012-01-27 10:40:48.793 0.0     0.0
51537   51778   2012-01-27 10:40:51.697 0.0     0.0
51538   51779   2012-01-27 10:40:51.797 0.0     0.0
51539   51780   2012-01-27 10:40:53.697 0.0     0.0
51540   51781   2012-01-27 10:40:53.797 1.0*    0.0
51541   51782   2012-01-27 10:40:55.700 0.0     0.0
51542   51783   2012-01-27 10:40:55.800 1.0*    0.0
51543   51784   2012-01-27 10:40:56.703 0.0     0.0
51544   51785   2012-01-27 10:40:56.803 1.0*    0.0
51545   51786   2012-01-27 10:40:58.707 0.0     0.0
51546   51787   2012-01-27 10:40:58.807 0.0     0.0
51547   51788   2012-01-27 10:41:01.770 0.0     0.0
51548   51789   2012-01-27 10:41:01.870 0.0     0.0
51549   51790   2012-01-27 10:41:03.673 0.0     0.0
51550   51791   2012-01-27 10:41:03.773 0.0     0.0
51551   51792   2012-01-27 10:41:05.777 0.0     0.0
51552   51793   2012-01-27 10:41:05.877 1.0*    0.0
51553   51794   2012-01-27 10:41:08.780 0.0     0.0
51554   51795   2012-01-27 10:41:08.880 0.0     0.0
51555   51796   2012-01-27 10:41:09.783 0.0     0.0
51556   51797   2012-01-27 10:41:09.883 1.0*    0.0
51557   51798   2012-01-27 10:41:12.687 0.0     0.0
51558   51799   2012-01-27 10:41:12.787 0.0     0.0
51559   51800   2012-01-27 10:41:15.690 0.0     0.0
51560   51801   2012-01-27 10:41:15.790 0.0     0.0
51561   51802   2012-01-27 10:41:17.693 0.0     0.0
51562   51803   2012-01-27 10:41:17.793 0.0     1.0  # Time intervals are divided by these rows where `Peak==1`.
51567   51808   2012-01-27 10:42:47.810 0.0     0.0

* - refers to the start timestamp of each time interval.

So the expected result is:

timestamp1 (i.e. Start==1)   timestamp2 (i.e. Peak==1)   TimeInterval
2012-01-27 07:19:06.297    2012-01-27 07:32:19.687   00:13:13.390000 (timestamp2 - timestamp1)
2012-01-27 09:00:23.053    2012-01-27 09:00:28.147   00:00:05.094000 (timestamp2 - timestamp1)
                          ...

Update:

Using:


df['timestamp'] = pd.to_datetime(df['timestamp'])

df['group'] = df['Start'].cumsum()
df['group1'] = df['Peak'].iloc[::-1].cumsum()  
df

mask = df['group1'].eq(df.groupby('group')['group1'].transform('first'))
df1 = df[mask & df['group'].gt(0) & df['group1'].gt(0)]  
df1

df2 = (df1.groupby('group').agg(timestamp1=('timestamp','first'),
                                timestamp2=('timestamp','last'))
                           .reset_index(drop=True)) 
df2['TimeInterval'] = df2['timestamp2'].sub(df2['timestamp1']) 
df2 

It returns:

    timestamp1              timestamp2              TimeInterval
0   2012-01-27 07:19:06.297 2012-01-27 07:32:19.687 00:13:13.390000
1   2012-01-27 09:00:23.053 2012-01-27 09:00:28.147 00:00:05.094000
2   2012-01-27 10:00:32.037 2012-01-27 10:40:53.697 00:40:21.660000 # Should be from `10:00:32.037` to `10:41:17.793`.
3   2012-01-27 10:40:53.797 2012-01-27 10:40:55.700 00:00:01.903000
4   2012-01-27 10:40:55.800 2012-01-27 10:40:56.703 00:00:00.903000
5   2012-01-27 10:40:56.803 2012-01-27 10:41:05.777 00:00:08.974000
6   2012-01-27 10:41:05.877 2012-01-27 10:41:09.783 00:00:03.906000
7   2012-01-27 10:41:09.883 2012-01-27 10:41:17.793 00:00:07.910000
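The splitting visible in rows 2-7 comes from grouping on group (the Start counter): every additional Start==1 row before a peak increments the counter and opens a new group. A minimal sketch with made-up flags (no timestamps), contrasting grouping on group with grouping on group1, the Peak counter built from the end:

```python
import pandas as pd

# Made-up flags: two Start==1 rows before a single Peak==1 row.
df = pd.DataFrame({'Start': [1.0, 0.0, 1.0, 0.0, 0.0],
                   'Peak':  [0.0, 0.0, 0.0, 0.0, 1.0]})
df['group'] = df['Start'].cumsum()             # [1, 1, 2, 2, 2]
df['group1'] = df['Peak'].iloc[::-1].cumsum()  # [1, 1, 1, 1, 1]

# Grouping on 'group' splits the single interval at the second Start==1 ...
print(df.groupby('group').size().tolist())   # [2, 3] -> two groups
# ... while grouping on 'group1' keeps all rows up to the peak together.
print(df.groupby('group1').size().tolist())  # [5]    -> one group
```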

【Question Comments】:

  • Can you add the expected output?
  • @jezrael Sure, please see the edited question.

Tags: python pandas numpy


【Solution 1】:

I think you need to create groups by Series.cumsum, and also create groups for Peak counted from the back; then filter the rows whose group1 value matches the first one in each group (via GroupBy.transform with 'first'), excluding the leading and trailing rows where the counters are still 0:

df['timestamp'] = pd.to_datetime(df['timestamp'])

df['group'] = df['Start'].cumsum()
df['group1'] = df['Peak'].iloc[::-1].cumsum()

mask = df['group1'].eq(df.groupby('group')['group1'].transform('first'))
df1 = df[mask & df['group'].gt(0) & df['group1'].gt(0)]

Finally aggregate by GroupBy.agg and subtract the timestamps:

df2 = (df1.groupby('group1', sort=False).agg(timestamp1=('timestamp','first'),
                                             timestamp2=('timestamp','last'))
                                        .reset_index(drop=True)) 
df2['TimeInterval'] = df2['timestamp2'].sub(df2['timestamp1']) 
print (df2)
               timestamp1              timestamp2    TimeInterval
0 2012-01-27 07:19:06.297 2012-01-27 07:32:19.687 00:13:13.390000
1 2012-01-27 09:00:23.053 2012-01-27 09:00:28.147 00:00:05.094000
2 2012-01-27 10:00:32.037 2012-01-27 10:41:17.793 00:40:45.756000
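For verification, the whole pipeline can be run end-to-end on a trimmed copy of the reproducible data above (first two intervals only); the computed intervals match the expected output:

```python
import pandas as pd

# First rows of the reproducible example (first two intervals only).
df = pd.DataFrame({
    'timestamp': ['2012-01-27 06:22:08.330', '2012-01-27 06:22:08.430',
                  '2012-01-27 07:19:06.297', '2012-01-27 07:19:06.397',
                  '2012-01-27 07:32:19.587', '2012-01-27 07:32:19.687',
                  '2012-01-27 07:32:37.607', '2012-01-27 09:00:23.053',
                  '2012-01-27 09:00:23.153', '2012-01-27 09:00:28.047',
                  '2012-01-27 09:00:28.147'],
    'Start': [None, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    'Peak':  [1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0],
})
df['timestamp'] = pd.to_datetime(df['timestamp'])

df['group'] = df['Start'].cumsum()             # Start counter (NaN row stays NaN)
df['group1'] = df['Peak'].iloc[::-1].cumsum()  # Peak counter, counted from the end

# Keep only the rows between the first Start==1 and the next Peak==1.
mask = df['group1'].eq(df.groupby('group')['group1'].transform('first'))
df1 = df[mask & df['group'].gt(0) & df['group1'].gt(0)]

df2 = (df1.groupby('group1', sort=False)
          .agg(timestamp1=('timestamp', 'first'),
               timestamp2=('timestamp', 'last'))
          .reset_index(drop=True))
df2['TimeInterval'] = df2['timestamp2'].sub(df2['timestamp1'])
print(df2['TimeInterval'].astype(str).tolist())
# ['0 days 00:13:13.390000', '0 days 00:00:05.094000']
```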

【Discussion】:

  • May I ask why we group by Start rather than Peak? Since I want to find, for each interval between Peak==1 rows, the time between the first Start==1 point and the Peak==1 point, shouldn't we group by Peak?
  • @nilsinelabore - With only one group the data is hard to test; a minimal, complete, and verifiable example is needed here (meaning copyable data and the change)
  • Hi, I've just added a reproducible example to my question.
  • Very close, but some rows seem wrong. Instead of using only the first Start==1, it seems to use Start==1 several times (but I only need the first Start==1 and the interval from there to Peak==1). Should I use mask = df['group1'].eq(df.groupby('group1')['group'].transform('first')) instead? Please ignore my previous comment.
  • @nilsinelabore - Change the last block to df1.groupby('group1', sort=False)