I want to do some bulk inserts into Couchbase. I tried searching for examples on SO and Google, but could not find any clue. Someone here mentioned that it is not possible.
But I guess that question was asked three years ago. I searched again, and if I understand the links given below correctly, it is possible to insert documents in bulk.
https://developer.couchbase.com/documentation/server/current/sdk/batching-operations.html
https://pythonhosted.org/couchbase/api/couchbase.html#batch-operation-pipeline
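If I understand those pages correctly, the batching API boils down to a single multi-operation call per batch. A minimal sketch of what they describe (the bucket name and keys here are just illustrative):

from couchbase.bucket import Bucket

cb = Bucket('couchbase://localhost/default')
# upsert_multi takes a dict of {document_id: value} and sends the
# whole set of mutations to the server as one batched operation
cb.upsert_multi({'doc1': {'x': 1},
                 'doc2': {'x': 2}})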
Below is the code in which I am hoping to achieve bulk inserts in Couchbase:
import time
import csv
import calendar
import datetime

from couchbase.bucket import Bucket
from couchbase.exceptions import CouchbaseError

cb = Bucket('couchbase://localhost/bulk-load')

with open('/home/royshah/Desktop/bulk_try/roy.csv') as csvfile:
    lines = csvfile.readlines()[4:]

for k, line in enumerate(lines):
    data_tmp = line.strip().split(',')
    strDate = data_tmp[0].replace("\"", "")
    timerecord = datetime.datetime.strptime(strDate,
                                            '%Y-%m-%d %H:%M:%S.%f')
    microsecs = timerecord.microsecond
    strDate = "\"" + strDate + "\""
    ts = calendar.timegm(timerecord.timetuple()) * 1000000 + microsecs
    datastore = [ts] + data_tmp[1:]
    # I am making key-values on the fly from the csv file
    stre = {'col1': datastore[1],
            'col2': datastore[2],
            'col3': datastore[3],
            'col4': datastore[4],
            'col5': datastore[5],
            'col6': datastore[6]}
    # datastore[0] is used as the document id and stre as the value
    # to be inserted for the respective id
    cb.upsert(str(datastore[0]), stre)
cb.upsert(str(datastore[0]), stre) is doing a single insert per row, and I want to turn it into a bulk insert to make it faster. I don't know how to do bulk inserts in Couchbase. I found this example, but am not sure how to implement it:
https://developer.couchbase.com/documentation/server/current/sdk/batching-operations.html
If someone could point out some examples of bulk loading in Couchbase, or help me figure out how to do bulk inserts with my code, I would really appreciate it. Thanks for any ideas or help.
Posted on 2017-02-04 09:51:02
I tried to adapt the example from the documentation to your use case. You may have to change a detail or two, but you should get the idea.
import calendar
import datetime

from couchbase.bucket import Bucket
from couchbase.exceptions import CouchbaseTransientError

cb = Bucket('couchbase://localhost/bulk-load')

BYTES_PER_BATCH = 1024 * 256  # 256K

batches = []
cur_batch = {}
cur_size = 0
batches.append(cur_batch)

with open('/home/royshah/Desktop/bulk_try/roy.csv') as csvfile:
    lines = csvfile.readlines()[4:]

for line in lines:
    # Format your data
    data_tmp = line.strip().split(',')
    strDate = data_tmp[0].replace("\"", "")
    timerecord = datetime.datetime.strptime(strDate,
                                            '%Y-%m-%d %H:%M:%S.%f')
    microsecs = timerecord.microsecond
    strDate = "\"" + strDate + "\""
    timestamp = calendar.timegm(timerecord.timetuple()) * 1000000 + microsecs

    # Build kv
    datastore = [timestamp] + data_tmp[1:]
    value = {'col1': datastore[1],  # key-values made on the fly from the csv file
             'col2': datastore[2],
             'col3': datastore[3],
             'col4': datastore[4],
             'col5': datastore[5],
             'col6': datastore[6]}
    key = str(datastore[0])
    cur_batch[key] = value

    # Rough size estimate; start a new batch once the limit is exceeded
    cur_size += len(key) + len(str(value)) + 24
    if cur_size > BYTES_PER_BATCH:
        cur_batch = {}
        batches.append(cur_batch)
        cur_size = 0

print("Have {} batches".format(len(batches)))

num_completed = 0
while batches:
    batch = batches[-1]
    try:
        cb.upsert_multi(batch)
        num_completed += len(batch)
        batches.pop()
    except CouchbaseTransientError as e:
        print(e)
        # Keep what succeeded, and re-queue only the failed items
        ok, fail = e.split_results()
        new_batch = {}
        for key in fail:
            new_batch[key] = batch[key]
        batches.pop()
        batches.append(new_batch)
        num_completed += len(ok)
        print("Retrying {}/{} items".format(len(new_batch), len(batch)))
https://stackoverflow.com/questions/41967628