
DataFrame append and drop_duplicates problem

Stack Overflow user
Asked on 2021-05-08 23:37:49
1 answer · 42 views · 0 followers · 0 votes

So, I have a dummy df like this, which I save to csv:

import pandas as pd
import io

old_data = """date,time,open,high,low,close,volume
2021-05-06,04:08:00,9150090.0,9150090.0,9125001.0,9130000.0,9.015642
2021-05-06,04:09:00,9140000.0,9145000.0,9125012.0,9134068.0,3.121043
2021-05-06,04:10:00,9133882.0,9133882.0,9125002.0,9132999.0,5.536345
2021-05-06,04:11:00,9132999.0,9135013.0,9131000.0,9132999.0,5.880620"""

new_data = """timestamp,open,high,low,close,volume
1620274080000,9150090.0,9150090.0,9125001.0,9130000.0,9.015641820000004
1620274140000,9140000.0,9145000.0,9125012.0,9134068.0,3.121042509999999
1620274200000,9133882.0,9133882.0,9125002.0,9132999.0,5.5363449
1620274260000,9132999.0,9135013.0,9131000.0,9132999.0,5.88062024"""

I try to check whether there is any duplicate data between df_old and df_new, and drop it if there is:

raw = pd.read_csv(io.StringIO(new_data), encoding='UTF-8')

stream = pd.DataFrame(raw, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume'])
stream['timestamp'] = pd.to_datetime(stream['timestamp'], unit='ms')
stream['date'] = pd.to_datetime(stream['timestamp']).dt.date
stream['time'] = pd.to_datetime(stream['timestamp']).dt.time
stream = stream[['date', 'time', 'open', 'high', 'low', 'close', 'volume']]

for dif_date in stream.date.unique():
    grouped = stream.groupby(stream.date)
    df_new = grouped.get_group(dif_date)
    df_old = pd.read_csv(io.StringIO(old_data), encoding='UTF-8')

df_stream = df_old.append(df_new).reset_index(drop=True)
df_stream = df_stream.drop_duplicates(subset=['time'])
print(df_stream)

>    date        time      open       high       low        close      volume
> 0  2021-05-06  04:08:00  9150090.0  9150090.0  9125001.0  9130000.0  9.015642
> 1  2021-05-06  04:09:00  9140000.0  9145000.0  9125012.0  9134068.0  3.121043
> 2  2021-05-06  04:10:00  9133882.0  9133882.0  9125002.0  9132999.0  5.536345
> 3  2021-05-06  04:11:00  9132999.0  9135013.0  9131000.0  9132999.0  5.880620
> 4  2021-05-06  04:08:00  9150090.0  9150090.0  9125001.0  9130000.0  9.015642
> 5  2021-05-06  04:09:00  9140000.0  9145000.0  9125012.0  9134068.0  3.121043
> 6  2021-05-06  04:10:00  9133882.0  9133882.0  9125002.0  9132999.0  5.536345
> 7  2021-05-06  04:11:00  9132999.0  9135013.0  9131000.0  9132999.0  5.880620

But the result still contains the duplicate rows. How can I fix this, or re-sort it? https://colab.research.google.com/drive/1vMx9hXKcbz8SDawTnHbzpV6JiRZsEuVP?usp=sharing Thanks in advance.


1 Answer

Stack Overflow user

Answer accepted

Posted on 2021-05-08 23:52:46

The dtype along the time column is not consistent, so Python cannot tell that the rows are equal.

For example, if you run:

df_stream.time.loc[0] == df_stream.time.loc[4]

you will get False, because the left-hand side is a string while the right-hand side is a datetime.time object.
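A minimal demonstration of the mismatch, using the clock time from the example's first row:

```python
import datetime

# The CSV round-trip leaves 'time' as a string, while .dt.time
# produces datetime.time objects; the two never compare equal,
# even when they represent the same clock time.
print("04:08:00" == datetime.time(4, 8, 0))  # False
```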

You should use astype() to force a single type on the 'time' column.
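A sketch of that fix, reusing the question's dummy data (shortened to two rows) with the same column and variable names; pd.concat stands in for DataFrame.append, which was removed in pandas 2.0:

```python
import io

import pandas as pd

# Two rows of the question's old (CSV) data: 'time' is read back as a string
old_data = """date,time,open,high,low,close,volume
2021-05-06,04:08:00,9150090.0,9150090.0,9125001.0,9130000.0,9.015642
2021-05-06,04:09:00,9140000.0,9145000.0,9125012.0,9134068.0,3.121043"""

# The same rows from the new feed, keyed by a millisecond timestamp
new_data = """timestamp,open,high,low,close,volume
1620274080000,9150090.0,9150090.0,9125001.0,9130000.0,9.015641820000004
1620274140000,9140000.0,9145000.0,9125012.0,9134068.0,3.121042509999999"""

df_old = pd.read_csv(io.StringIO(old_data))

stream = pd.read_csv(io.StringIO(new_data))
stream['timestamp'] = pd.to_datetime(stream['timestamp'], unit='ms')
stream['date'] = stream['timestamp'].dt.date
stream['time'] = stream['timestamp'].dt.time
df_new = stream[['date', 'time', 'open', 'high', 'low', 'close', 'volume']]

# Cast 'time' to str on BOTH sides so identical clock times compare equal
df_old['time'] = df_old['time'].astype(str)
df_new = df_new.assign(time=df_new['time'].astype(str))

# pd.concat replaces the removed DataFrame.append
df_stream = pd.concat([df_old, df_new], ignore_index=True)
df_stream = df_stream.drop_duplicates(subset=['time']).reset_index(drop=True)
print(df_stream)  # 2 rows: the rows repeated in df_new are dropped
```

With both columns cast to str, drop_duplicates now recognizes the repeated timestamps and keeps only the first occurrence of each.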

Votes: 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/67449300