I am trying to reindex with the Elasticsearch client helper, https://elasticsearch-py.readthedocs.org/en/master/helpers.html#elasticsearch.helpers.reindex, but I keep getting the following exception: elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeout
The stack trace of the error is:
Traceback (most recent call last):
File "~/es_test.py", line 33, in <module>
main()
File "~/es_test.py", line 30, in main
target_index='users-2')
File "~/ENV/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 306, in reindex
chunk_size=chunk_size, **kwargs)
File "~/ENV/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 182, in bulk
for ok, item in streaming_bulk(client, actions, **kwargs):
File "~/ENV/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 124, in streaming_bulk
raise e
elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeout(HTTPSConnectionPool(host='myhost', port=9243): Read timed out. (read timeout=10))
Is there a way to prevent this exception, other than increasing the timeout?
Edit: the Python code
from elasticsearch import Elasticsearch, RequestsHttpConnection, helpers
from requests.auth import HTTPBasicAuth

es = Elasticsearch(connection_class=RequestsHttpConnection,
                   host='myhost',
                   port=9243,
                   http_auth=HTTPBasicAuth(username, password),
                   use_ssl=True,
                   verify_certs=True,
                   timeout=600)
helpers.reindex(es, source_index=old_index, target_index=new_index)

Posted on 2016-09-20 18:48:20
I struggled with this problem for a couple of days. Changing the request_timeout parameter to 30 (i.e. 30 seconds) did not work. In the end I had to edit the streaming_bulk and reindex APIs in elasticsearch.py.
Change the chunk_size parameter from the default 500 (which processes 500 documents per batch) to a smaller number of documents per batch. I changed mine to 50 and that works fine for me; the read timeout errors are gone.
def streaming_bulk(client, actions, chunk_size=50, raise_on_error=True,
                   expand_action_callback=expand_action, raise_on_exception=True,
                   **kwargs):

def reindex(client, source_index, target_index, query=None, target_client=None,
            chunk_size=50, scroll='5m', scan_kwargs={}, bulk_kwargs={}):
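If you would rather not patch the installed library, the smaller batch size can usually be passed straight through the helper instead. A minimal sketch, assuming the installed helpers.reindex accepts the chunk_size argument shown in the signature above (host, credentials and index names are placeholders taken from the question):

from elasticsearch import Elasticsearch, RequestsHttpConnection, helpers
from requests.auth import HTTPBasicAuth

es = Elasticsearch(connection_class=RequestsHttpConnection,
                   host='myhost', port=9243,
                   http_auth=HTTPBasicAuth('user', 'secret'),
                   use_ssl=True, verify_certs=True,
                   timeout=600)

# Smaller batches mean each bulk request stays well within the read timeout.
helpers.reindex(es, source_index='users-1', target_index='users-2', chunk_size=50)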
Posted on 2016-04-13 14:30:35
This may be caused by an OutOfMemoryError for Java heap space, which means you are not giving Elasticsearch enough memory for what you are trying to do. Try looking in your /var/log/elasticsearch to see if there is an exception like that.
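One quick way to check for heap pressure from the client side is the nodes stats API. A small sketch using the Python client (connection settings mirror the question; credentials are placeholders):

from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests.auth import HTTPBasicAuth

es = Elasticsearch(connection_class=RequestsHttpConnection,
                   host='myhost', port=9243,
                   http_auth=HTTPBasicAuth('user', 'secret'),
                   use_ssl=True, verify_certs=True)

# GET _nodes/stats/jvm: a node running close to 100% heap spends long stretches
# in garbage collection, which shows up client-side as read timeouts.
for node in es.nodes.stats(metric='jvm')['nodes'].values():
    mem = node['jvm']['mem']
    print(node['name'], mem['heap_used_percent'], '% heap used')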
https://stackoverflow.com/questions/31576270