【Question Title】: Python re-sampling time series data which cannot be indexed
【Posted】: 2018-06-30 22:40:50
【Question】:

The goal is to find out how many trades "occurred" each second (a count) and the total trade volume per second (a sum).

I have time-series data that cannot be indexed (there are multiple entries with the same timestamp — many trades can happen within the same millisecond), so using resample as explained here does not work.

An alternative is to group by time first, as shown here (and then resample to seconds). The problem is that grouping seems to allow only one aggregation across all grouped columns (I can only sum/mean/std etc. everything at once), whereas in this data I need the 'tradeVolume' column aggregated by sum and the 'ask1' column by mean.

So my questions are: 1. How can I group by with a different aggregation for each column? 2. If that is not possible, is there any other way to resample millisecond data down to seconds without a datetime index?

Thanks!

A sample of the time series is here:

SecurityID,dateTime,ask1,ask1Volume,bid1,bid1Volume,ask2,ask2Volume,bid2,bid2Volume,ask3,ask3Volume,bid3,bid3Volume,tradePrice,tradeVolume,isTrade
2318276,2017-11-20 08:00:09.052240,12869.0,1,12868.0,3,12870.0,19,12867.5,2,12872.5,2,12867.0,1,0.0,0,0
2318276,2017-11-20 08:00:09.052260,12869.0,1,12868.0,3,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12861.0,1,1
2318276,2017-11-20 08:00:09.052260,12869.0,1,12868.0,2,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,0
2318276,2017-11-20 08:00:09.052270,12869.0,1,12868.0,2,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,1
2318276,2017-11-20 08:00:09.052270,12869.0,1,12868.0,1,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,0
2318276,2017-11-20 08:00:09.052282,12869.0,1,12868.0,1,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,1
2318276,2017-11-20 08:00:09.052282,12869.0,1,12867.5,2,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12868.0,1,0
2318276,2017-11-20 08:00:09.052291,12869.0,1,12867.5,2,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.5,1,1
2318276,2017-11-20 08:00:09.052291,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.5,1,0
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.5,1,1
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.0,1,1
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12865.5,1,1
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12865.0,1,1
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12864.0,1,1
2318276,2017-11-20 08:00:09.052315,12869.0,1,12861.5,2,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12864.0,1,0
2318276,2017-11-20 08:00:09.052335,12869.0,1,12861.5,2,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12861.5,1,1
2318276,2017-11-20 08:00:09.052335,12869.0,1,12861.5,1,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12861.5,1,0
2318276,2017-11-20 08:00:09.052348,12869.0,1,12861.5,1,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12861.5,1,1
2318276,2017-11-20 08:00:09.052348,12869.0,1,12861.0,1,12870.0,19,12860.0,5,12872.5,2,12859.5,3,12861.5,1,0
2318276,2017-11-20 08:00:09.052357,12869.0,1,12861.0,1,12870.0,19,12860.0,5,12872.5,2,12859.5,3,12861.0,1,1
2318276,2017-11-20 08:00:09.052357,12869.0,1,12860.0,5,12870.0,19,12859.5,3,12872.5,2,12858.0,1,12861.0,1,0

【Question Comments】:

  • Do you mean you want to compute the mean of ask1 and the sum of tradeVolume per group, where each group contains all the trades within one second?
  • Yes. For each identical group (e.g. group: 2017-11-20 08:00:09.052315). Only then can I index the df by dateTime and resample afterwards (or am I wrong about that?)
  • "How to group by with a different aggregation per column" — what about df.groupby.agg(), passing a dict of functions? (Sketched below.)
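A minimal sketch of the agg-dict suggestion from the last comment (the per-second grouping key and the column choices are assumptions based on the question):

import pandas as pd

df = pd.read_csv('data.csv', parse_dates=['dateTime'])
# A different aggregation per column, grouped on the timestamp
# floored to whole seconds.
result = df.groupby(df['dateTime'].dt.floor('s')).agg(
    {'ask1': 'mean', 'tradeVolume': 'sum'})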

Tags: python pandas pandas-groupby


【Solution 1】:

First you need a column holding the time in seconds (since the epoch), then groupby on that column and aggregate the columns you want.

In other words: truncate the timestamps to one-second precision and group on that, then apply the aggregations to get the mean/sum/std you need:

import numpy as np
import pandas as pd

df = pd.read_csv('data.csv')
# Truncate the timestamps to whole seconds; rows that fall within
# the same second then share a groupby key.
df['dateTime'] = df['dateTime'].astype('datetime64[s]')
groups = df.groupby('dateTime')
groups.agg({'ask1': np.mean, 'tradeVolume': np.sum})
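The question also asks how many trades occurred each second; a hedged extension of the same agg call (treating only rows with isTrade == 1 as trades is an assumption about the data):

# Assumption: only rows flagged isTrade == 1 are actual trades.
trades = df[df['isTrade'] == 1]
trades.groupby('dateTime').agg(
    {'ask1': np.mean, 'tradeVolume': [np.sum, 'count']})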

I modified the data to make sure there are actually several different seconds in it:

SecurityID,dateTime,ask1,ask1Volume,bid1,bid1Volume,ask2,ask2Volume,bid2,bid2Volume,ask3,ask3Volume,bid3,bid3Volume,tradePrice,tradeVolume,isTrade
2318276,2017-11-20 08:00:09.052240,12869.0,1,12868.0,3,12870.0,19,12867.5,2,12872.5,2,12867.0,1,0.0,0,0
2318276,2017-11-20 08:00:09.052260,12869.0,1,12868.0,3,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12861.0,1,1
2318276,2017-11-20 08:00:09.052260,12869.0,1,12868.0,2,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,0
2318276,2017-11-20 08:00:09.052270,12869.0,1,12868.0,2,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,1
2318276,2017-11-20 08:00:09.052270,12869.0,1,12868.0,1,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,0
2318276,2017-11-20 08:00:09.052282,12869.0,1,12868.0,1,12870.0,19,12867.5,2,12872.5,2,12867.0,1,12868.0,1,1
2318276,2017-11-20 08:00:09.052282,12869.0,1,12867.5,2,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12868.0,1,0
2318276,2017-11-20 08:00:09.052291,12869.0,1,12867.5,2,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.5,1,1
2318276,2017-11-20 08:00:09.052291,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.5,1,0
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.5,1,1
2318276,2017-11-20 08:00:09.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12867.0,1,1
2318276,2017-11-20 08:00:10.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12865.5,1,1
2318276,2017-11-20 08:00:10.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12865.0,1,1
2318276,2017-11-20 08:00:10.052315,12869.0,1,12867.5,1,12870.0,19,12867.0,1,12872.5,2,12865.5,1,12864.0,1,1
2318276,2017-11-20 08:00:10.052315,12869.0,1,12861.5,2,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12864.0,1,0
2318276,2017-11-20 08:00:10.052335,12869.0,1,12861.5,2,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12861.5,1,1
2318276,2017-11-20 08:00:10.052335,12869.0,1,12861.5,1,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12861.5,1,0
2318276,2017-11-20 08:00:10.052348,12869.0,1,12861.5,1,12870.0,19,12861.0,1,12872.5,2,12860.0,5,12861.5,1,1
2318276,2017-11-20 08:00:10.052348,12869.0,1,12861.0,1,12870.0,19,12860.0,5,12872.5,2,12859.5,3,12861.5,1,0
2318276,2017-11-20 08:00:10.052357,12869.0,1,12861.0,1,12870.0,19,12860.0,5,12872.5,2,12859.5,3,12861.0,1,1
2318276,2017-11-20 08:00:10.052357,12869.0,1,12860.0,5,12870.0,19,12859.5,3,12872.5,2,12858.0,1,12861.0,1,0

and the output (this run is from the earlier epoch-seconds variant, which is why the index shows integer seconds):

In [53]: groups.agg({'ask1': np.mean, 'tradeVolume': np.sum})
Out[53]: 
               ask1  tradeVolume
seconds                         
1511164809  12869.0           10
1511164810  12869.0           10
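If the index comes out as epoch seconds, as in this run, it can be turned back into readable timestamps; a small sketch, assuming agg holds the aggregated frame:

# Convert an integer epoch-seconds index back into datetimes.
agg = groups.agg({'ask1': np.mean, 'tradeVolume': np.sum})
agg.index = pd.to_datetime(agg.index, unit='s')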

Footnote

The OP said the original version (below) was faster, so I ran some timings:

import timeit

import numpy as np
import pandas as pd

def test1(df):
    """This is the fastest and cleanest."""
    df['dateTime'] = df['dateTime'].astype('datetime64[s]')
    groups = df.groupby('dateTime')
    agg = groups.agg({'ask1': np.mean, 'tradeVolume': np.sum})

def test2(df):
    """Totally unnecessary amount of datetime floors."""
    def group_by_second(index_loc):
        # Called once per row: floor each timestamp to the second.
        return df.loc[index_loc, 'dateTime'].floor('S')
    df['dateTime'] = df['dateTime'].astype('datetime64[ns]')
    groups = df.groupby(group_by_second)
    result = groups.agg({'ask1': np.mean, 'tradeVolume': np.sum})

def test3(df):
    """Original version, but the round trip through nanoseconds is unnecessary."""
    df['dateTime'] = df['dateTime'].astype('datetime64[ns]')
    # Integer-divide nanoseconds since the epoch down to whole seconds.
    df['seconds'] = df['dateTime'].apply(lambda v: v.value // 10**9)
    groups = df.groupby('seconds')
    agg = groups.agg({'ask1': np.mean, 'tradeVolume': np.sum})

if __name__ == '__main__':
    print('22 rows')
    df = pd.read_csv('data_small.csv')
    print('test1', timeit.repeat("test1(df.copy())", number=50, globals=globals()))
    print('test2', timeit.repeat("test2(df.copy())", number=50, globals=globals()))
    print('test3', timeit.repeat("test3(df.copy())", number=50, globals=globals()))

    print('220 rows')
    df = pd.read_csv('data.csv')
    print('test1', timeit.repeat("test1(df.copy())", number=50, globals=globals()))
    print('test2', timeit.repeat("test2(df.copy())", number=50, globals=globals()))
    print('test3', timeit.repeat("test3(df.copy())", number=50, globals=globals()))

I tested them on two datasets, the second ten times the size of the first, with these results:

22 rows
test1 [0.08138518501073122, 0.07786444900557399, 0.0775048139039427]
test2 [0.2644687460269779, 0.26298125297762454, 0.2618108610622585]
test3 [0.10624988097697496, 0.1028324980288744, 0.10304366517812014]
220 rows
test1 [0.07999306707642972, 0.07842653687112033, 0.07848454895429313]
test2 [1.9794962559826672, 1.966513831866905, 1.9625889619346708]
test3 [0.12691736104898155, 0.12642419710755348, 0.126510804053396]

So the .astype('datetime64[s]') version is the one to use: it is the fastest and scales best.
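As a side note, resample itself does not require a unique index: with the on= keyword it can group directly on a (duplicated) dateTime column, which sidesteps the indexing problem from the question. A sketch of that alternative, not benchmarked above:

# Assumes dateTime has already been parsed as a datetime column.
per_second = df.resample('1s', on='dateTime').agg(
    {'ask1': np.mean, 'tradeVolume': np.sum})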

【Comments】:

  • Thanks! 1. Is an integer division required here? 2. 1000000000 — is that the number of rows in the file?
  • Integer division means the division returns an integer, i.e. the decimals are discarded and the result is rounded down to the nearest whole second. The .astype(np.datetime64) conversion turns the dateTime column into pandas Timestamps, which represent time at nanosecond precision; dividing by 1000000000 converts nanoseconds to seconds. I have just found a more readable way, see the edit above.
  • 1. The first method finishes in 0.111 seconds, while the second (edited) method takes almost 3 minutes!!! Also, with the first method the seconds come out serialized (as epoch integers) and I need to figure out how to turn them back into some readable format; with the edit the format stays the same.
  • Is that a problem?
  • @Giladbi Today I learned something about pandas datetimes. It turns out the .astype(datetime64) conversion can be done directly at a given precision, and it is also the fastest; see the updated answer above. If you need to keep the dateTime column at sub-second precision, simply assign the converted dateTime to a new column, e.g. seconds, and use that in the groupby (see the sketch below).
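A minimal sketch of the suggestion in the last comment, keeping full precision in dateTime and grouping on a derived seconds column:

# Keep dateTime untouched; group on a second-precision copy.
df['seconds'] = df['dateTime'].astype('datetime64[s]')
agg = df.groupby('seconds').agg({'ask1': np.mean, 'tradeVolume': np.sum})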