
How to make pandas str.contains searches faster

Stack Overflow user
Asked on 2016-06-18 06:31:26
3 answers · 6.3K views · 0 followers · 14 votes

I am searching for a substring, or several substrings, in a DataFrame with 4 million rows.

```python
df[df.col.str.contains('Donald', case=True, na=False)]
```

```python
df[df.col.str.contains('Donald|Trump|Dump', case=True, na=False)]
```

The DataFrame (df) looks like this (with 4 million rows of strings):

```python
import pandas as pd

df = pd.DataFrame({'col': ["very definition of the American success story, continually setting the standards of excellence in business, real estate and entertainment.",
                       "The myriad vulgarities of Donald Trump—examples of which are retailed daily on Web sites and front pages these days—are not news to those of us who have",
                       "While a fearful nation watched the terrorists attack again, striking the cafés of Paris and the conference rooms of San Bernardino"]})
```

Any tips to make this string search faster? For example: sorting the data first, some kind of indexing, renaming the column to a number, dropping na=False from the query, and so on? Even a speedup of milliseconds would help!


3 Answers

Stack Overflow user

Accepted answer

Posted on 2016-06-18 06:39:47

If the number of substrings is small, it may be faster to search for them one at a time, because then you can pass the regex=False argument to contains, which speeds it up.

On a sample DataFrame of about 6000 rows that I tested with two sample substrings, blah.contains("foo", regex=False) | blah.contains("bar", regex=False) was about twice as fast as blah.contains("foo|bar"). You'd have to test it with your own data to see how it scales.
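A minimal sketch of the two approaches from this answer, using the column name `col` from the question (the three short sample rows here are made up for illustration; actual timings depend on your data):

```python
import pandas as pd

df = pd.DataFrame({'col': ["Donald Trump", "Dump truck", "Paris cafés"]})

# One combined regex: a single pass, but with regex-engine overhead per row
mask_regex = df['col'].str.contains('Donald|Trump|Dump', case=True, na=False)

# One plain substring scan per term (regex=False), combined with bitwise OR
mask_plain = (df['col'].str.contains('Donald', regex=False, na=False)
              | df['col'].str.contains('Trump', regex=False, na=False)
              | df['col'].str.contains('Dump', regex=False, na=False))

# Both produce the same boolean mask; only the speed differs
assert mask_regex.equals(mask_plain)
```

Either mask can then be used to filter, e.g. `df[mask_plain]`.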

Votes: 14

Stack Overflow user

Posted on 2016-06-18 07:17:50

You can convert it to a list. Searching in a list, rather than applying a string method to a Series, appears to be considerably faster.

Sample code:

```python
import pandas as pd

df = pd.DataFrame({'col': ["very definition of the American success story, continually setting the standards of excellence in business, real estate and entertainment.",
                       "The myriad vulgarities of Donald Trump—examples of which are retailed daily on Web sites and front pages these days—are not news to those of us who have",
                       "While a fearful nation watched the terrorists attack again, striking the cafés of Paris and the conference rooms of San Bernardino"]})


def first_way():
    df["new"] = df["col"].str.contains('Donald', case=True, na=False)

print("First_way: ")
# %timeit is an IPython magic; run this in IPython or Jupyter
%timeit for x in range(10): first_way()
print(df)

df = pd.DataFrame({'col': ["very definition of the American success story, continually setting the standards of excellence in business, real estate and entertainment.",
                       "The myriad vulgarities of Donald Trump—examples of which are retailed daily on Web sites and front pages these days—are not news to those of us who have",
                       "While a fearful nation watched the terrorists attack again, striking the cafés of Paris and the conference rooms of San Bernardino"]})


def second_way():
    listed = df["col"].tolist()
    df["new"] = ["Donald" in n for n in listed]

print("Second way: ")
%timeit for x in range(10): second_way()
print(df)
```

Results:

```
First_way: 
100 loops, best of 3: 2.77 ms per loop
                                                 col    new
0  very definition of the American success story,...  False
1  The myriad vulgarities of Donald Trump—example...   True
2  While a fearful nation watched the terrorists ...  False
Second way: 
1000 loops, best of 3: 1.79 ms per loop
                                                 col    new
0  very definition of the American success story,...  False
1  The myriad vulgarities of Donald Trump—example...   True
2  While a fearful nation watched the terrorists ...  False
```
Votes: 2

Stack Overflow user

Posted on 2019-05-18 09:56:20

BrenBarn's answer above helped me solve my problem. Just writing down what my problem was and how it got solved below. Hope it helps someone :)

The data I have is about 2000 rows, mostly SMS text messages. Previously I was using a case-insensitive regex, like this:

```python
import re

reg_exp = ''.join(['(?=.*%s)' % (i) for i in search_list])
series_to_search = data_new.iloc[:,title_column_index] + ' : ' + data_new.iloc[:,description_column_index]
data_new = data_new[series_to_search.str.contains(reg_exp, flags=re.IGNORECASE)]
```

For a search list containing "exception" and "VE20", this code took 58.710898 seconds.

When I replaced it with a simple for loop, it took only 0.055304 seconds, a 1,061.60x improvement.

```python
for search in search_list:
    series_to_search = data_new.iloc[:,title_column_index] + ' : ' + data_new.iloc[:,description_column_index]
    data_new = data_new[series_to_search.str.lower().str.contains(search.lower())]
```
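The snippet above refers to names defined elsewhere in the poster's code (`data_new`, `search_list`, the column indices). A self-contained sketch of the same loop-and-narrow technique, with a small made-up DataFrame standing in for the poster's ~2000 rows of SMS text:

```python
import pandas as pd

# Hypothetical stand-in data; column layout mirrors the poster's setup
data_new = pd.DataFrame({
    'title': ['VE20 raised', 'all good', 'Exception seen'],
    'description': ['unhandled exception', 'nothing to report', 've20 code path'],
})
title_column_index, description_column_index = 0, 1
search_list = ['exception', 'VE20']

# One cheap plain-substring filter per term; each pass narrows data_new,
# so later passes scan fewer rows
for search in search_list:
    series_to_search = (data_new.iloc[:, title_column_index] + ' : '
                        + data_new.iloc[:, description_column_index])
    data_new = data_new[series_to_search.str.lower().str.contains(search.lower())]

# Only rows whose combined title+description contain every term survive
print(data_new)
```

The boolean mask aligns with `data_new` by index, so recomputing `series_to_search` on the narrowed frame inside the loop is safe.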
Votes: 1
Original content provided by Stack Overflow; translation supported by Tencent Cloud's engine.
Original link: https://stackoverflow.com/questions/37894003