【Question Title】: DataFrame append and drop_duplicates problem
【Posted】: 2021-05-08 15:37:48
【Question】:

So, I have a dummy df like this, which I save to a csv:

import pandas as pd
import io

old_data = """date,time,open,high,low,close,volume
2021-05-06,04:08:00,9150090.0,9150090.0,9125001.0,9130000.0,9.015642
2021-05-06,04:09:00,9140000.0,9145000.0,9125012.0,9134068.0,3.121043
2021-05-06,04:10:00,9133882.0,9133882.0,9125002.0,9132999.0,5.536345
2021-05-06,04:11:00,9132999.0,9135013.0,9131000.0,9132999.0,5.880620"""

new_data = """timestamp,open,high,low,close,volume
1620274080000,9150090.0,9150090.0,9125001.0,9130000.0,9.015641820000004
1620274140000,9140000.0,9145000.0,9125012.0,9134068.0,3.121042509999999
1620274200000,9133882.0,9133882.0,9125002.0,9132999.0,5.5363449
1620274260000,9132999.0,9135013.0,9131000.0,9132999.0,5.88062024"""

I try to check whether there is duplicate data between df_old and df_new and, if so, drop it:

raw = pd.read_csv(io.StringIO(new_data), encoding='UTF-8')

# Convert the millisecond epoch timestamp into separate date and time columns
stream = pd.DataFrame(raw, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume'])
stream['timestamp'] = pd.to_datetime(stream['timestamp'], unit='ms')
stream['date'] = stream['timestamp'].dt.date
stream['time'] = stream['timestamp'].dt.time
stream = stream[['date', 'time', 'open', 'high', 'low', 'close', 'volume']]

# Split the stream by date and load the matching old data
for dif_date in stream.date.unique():
    grouped = stream.groupby(stream.date)
    df_new = grouped.get_group(dif_date)
    df_old = pd.read_csv(io.StringIO(old_data), encoding='UTF-8')

# Append the new rows and try to drop the duplicates
df_stream = df_old.append(df_new).reset_index(drop=True)
df_stream = df_stream.drop_duplicates(subset=['time'])
print(df_stream)

>    date        time      open       high       low        close      volume
> 0  2021-05-06  04:08:00  9150090.0  9150090.0  9125001.0  9130000.0  9.015642
> 1  2021-05-06  04:09:00  9140000.0  9145000.0  9125012.0  9134068.0  3.121043
> 2  2021-05-06  04:10:00  9133882.0  9133882.0  9125002.0  9132999.0  5.536345
> 3  2021-05-06  04:11:00  9132999.0  9135013.0  9131000.0  9132999.0  5.880620
> 4  2021-05-06  04:08:00  9150090.0  9150090.0  9125001.0  9130000.0  9.015642
> 5  2021-05-06  04:09:00  9140000.0  9145000.0  9125012.0  9134068.0  3.121043
> 6  2021-05-06  04:10:00  9133882.0  9133882.0  9125002.0  9132999.0  5.536345
> 7  2021-05-06  04:11:00  9132999.0  9135013.0  9131000.0  9132999.0  5.880620

But the result still contains the duplicated rows. How can I fix this, or re-sort the data? https://colab.research.google.com/drive/1vMx9hXKcbz8SDawTnHbzpV6JiRZsEuVP?usp=sharing Thanks in advance.
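For reference, the timestamp conversion used in the snippet above can be checked in isolation; this minimal sketch uses the first epoch value from new_data:

```python
import pandas as pd

# 1620274080000 ms since the Unix epoch is 2021-05-06 04:08:00 UTC,
# matching the first row of old_data
ts = pd.to_datetime(1620274080000, unit='ms')
print(ts)                 # 2021-05-06 04:08:00
print(ts.date(), ts.time())
```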

【Comments】:

  • The time column of df_old is of type <class 'str'>, while the time column of df_new is of type <class 'datetime.time'>, so the values never compare equal and are never dropped. Try print(df_stream.time.apply(type)).
  • Thank you very much, I wasn't aware of the column types; it's solved now.
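The diagnostic suggested in the first comment can be sketched with a stripped-down version of the question's data (column and variable names mirror the question; the frames here are reduced to two columns for brevity):

```python
import datetime
import io
import pandas as pd

# 'time' read from csv text arrives as plain strings
old_data = """date,time
2021-05-06,04:08:00"""
df_old = pd.read_csv(io.StringIO(old_data))

# 'time' derived via .dt.time arrives as datetime.time objects
df_new = pd.DataFrame({'date': [datetime.date(2021, 5, 6)],
                       'time': [datetime.time(4, 8)]})

df_stream = pd.concat([df_old, df_new], ignore_index=True)
print(df_stream['time'].apply(type))
# Row 0 holds <class 'str'>, row 1 holds <class 'datetime.time'>,
# so the two "04:08:00" values never compare equal:
print(df_stream['time'].iloc[0] == df_stream['time'].iloc[1])  # False
```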

Tags: python pandas dataframe stock data-stream


【Answer 1】:

The type of the time column is not consistent, so Python cannot tell whether the rows are equal.

For example, if you run:

df_stream.time.loc[0] == df_stream.time.loc[4]

you get False, because the left-hand side is a string while the right-hand side is a datetime.time object.

You should use astype() to force a single type on the 'time' column.
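A minimal sketch of the suggested fix, using a trimmed-down version of the question's data (pd.concat is used in place of the since-deprecated DataFrame.append; otherwise the steps follow the question):

```python
import io
import pandas as pd

old_data = """date,time,open
2021-05-06,04:08:00,9150090.0
2021-05-06,04:09:00,9140000.0"""

new_data = """timestamp,open
1620274080000,9150090.0
1620274140000,9140000.0"""

df_old = pd.read_csv(io.StringIO(old_data))  # 'time' is str here

stream = pd.read_csv(io.StringIO(new_data))
stream['timestamp'] = pd.to_datetime(stream['timestamp'], unit='ms')
stream['date'] = stream['timestamp'].dt.date
stream['time'] = stream['timestamp'].dt.time  # datetime.time here
df_new = stream[['date', 'time', 'open']]

df_stream = pd.concat([df_old, df_new], ignore_index=True)
# Force a single dtype on the compared column before deduplicating
df_stream['time'] = df_stream['time'].astype(str)
df_stream = df_stream.drop_duplicates(subset=['time'])
print(len(df_stream))  # 2 -- the duplicated rows are now dropped
```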

【Comments】:

  • Thank you very much, I wasn't aware of the column types; it's solved now.