This is the same question as the one below, except that I am using docplex.
cplex.linear_constraints.add too slow for large models
How do I add constraints using indices in docplex?
My code looks like this:
x = lm.binary_var_dict(range(n), name="x")
xv = [ax for i, ax in x.items()]
for i in range(l):
    Bx = {xv[j]: B[i, j] for j in range(n)}
    Bx = lm.linear_expr(Bx)
    lm.add_constraint(Bx == 1)
Posted 2019-10-10 14:05:20
Could you try adding the constraints in batches?
Adding constraints to the model in batches with Model.add_constraints() is usually more efficient. Try grouping the constraints in a list or a generator comprehension (both work).
Example:
m.add_constraints((m.dotf(ys, lambda j_: i + (i + j_) % 3) >= i for i in rsize),
                  ("ct_%d" % i for i in rsize))
Posted 2019-10-10 15:29:59
There are many alternative ways to create constraints. For example, you can use the functions sum or scal_prod, and you can create the constraints in batches or one at a time. Here is a small test script that exercises the different variants:
from docplex.mp.model import Model
import time

n = 1000
l = n
B = {(i, j): i * n + j for i in range(l) for j in range(n)}

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]
    start = time.time()
    for i in range(l):
        Bx = {xv[j]: B[i, j] for j in range(n)}
        Bx = m.linear_expr(Bx)
        m.add_constraint(Bx == 1)
    elapsed1 = time.time() - start
    print('Original: %.2f' % elapsed1)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]
    start = time.time()
    m.add_constraints(m.linear_expr({xv[j]: B[i, j] for j in range(n)}) == 1 for i in range(l))
    elapsed2 = time.time() - start
    print('Original batched: %.2f' % elapsed2)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]
    start = time.time()
    for i in range(l):
        m.add_constraint(m.sum(B[i, j] * xv[j] for j in range(n)) == 1)
    elapsed3 = time.time() - start
    print('Sum: %.2f' % elapsed3)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]
    start = time.time()
    m.add_constraints(m.sum(B[i, j] * xv[j] for j in range(n)) == 1 for i in range(l))
    elapsed4 = time.time() - start
    print('Sum batched: %.2f' % elapsed4)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]
    start = time.time()
    for i in range(l):
        m.add_constraint(m.scal_prod([xv[j] for j in range(n)],
                                     [B[i, j] for j in range(n)]) == 1)
    elapsed5 = time.time() - start
    print('scal_prod: %.2f' % elapsed5)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]
    start = time.time()
    m.add_constraints(m.scal_prod([xv[j] for j in range(n)],
                                  [B[i, j] for j in range(n)]) == 1 for i in range(l))
    elapsed6 = time.time() - start
    print('scal_prod batched: %.2f' % elapsed6)
On my machine this prints
Original: 1.86
Original batched: 1.82
Sum: 2.84
Sum batched: 2.81
scal_prod: 1.55
scal_prod batched: 1.50
So batching does not make much of a difference here, but scal_prod is faster than linear_expr.
https://stackoverflow.com/questions/58290207