While working on a gradient descent implementation, I ran into an interesting problem: I can't seem to use **kwargs effectively. My function looks like this:
def gradient_descent(g, x, y, alpha, max_its, w, **kwargs):
    # switch for verbose
    verbose = True
    if 'verbose' in kwargs:
        verbose = kwargs['verbose']
    # determine num train and batch size
    num_train = y.size()[1]
    batch_size = num_train
    if 'batch_size' in kwargs:
        batch_size = kwargs['batch_size']
The error looks like this:
TypeError                                 Traceback (most recent call last)
<ipython-input-12-f71adb8a241b> in <module>()
      3 w_train = Variable(torch.Tensor(w_init), requires_grad=True)
      4 g = softmax; alpha_choice = 10**(-1); max_its = 100; num_pts = y.size; batch_size = 10;
----> 5 weight_hist_2,train_hist_2 = gradient_descent(g,x_train,y_train,alpha_choice,max_its,w_train,num_pts,batch_size,verbose = False)

TypeError: gradient_descent() takes 6 positional arguments but 8 were given
Is there something about writing this function that I'm not seeing?
Posted on 2018-05-23 15:51:26
Your function signature doesn't match the number of arguments you call it with:
gradient_descent(g, x, y, alpha, max_its, w, **kwargs) has 6 positional parameters: g, x, y, alpha, max_its, w. But in your call:
gradient_descent(g, x_train, y_train, alpha_choice, max_its, w_train, num_pts, batch_size, verbose=False) you pass it 8: g, x_train, y_train, alpha_choice, max_its, w_train, num_pts, batch_size. The extra two are not picked up by **kwargs, because **kwargs only collects keyword arguments, never positional ones.
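The mismatch can be reproduced with a minimal standalone function (`f` here is a hypothetical example, not the original gradient_descent): even with **kwargs in the signature, the number of positional parameters stays fixed, and extra positional arguments raise the same TypeError.

```python
# A function with **kwargs still has exactly 2 positional parameters.
def f(a, b, **kwargs):
    return a + b + kwargs.get('c', 0)

print(f(1, 2, c=3))   # c=3 is collected into kwargs -> prints 6

try:
    f(1, 2, 3)        # 3 is positional; nothing can absorb it
except TypeError as e:
    print(e)          # f() takes 2 positional arguments but 3 were given
```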
I'm guessing you want num_pts to be the batch_size argument, in which case the call would look like this:
weight_hist_2, train_hist_2 = gradient_descent(
    g,
    x_train,
    y_train,
    alpha_choice,
    max_its,
    w_train,
    batch_size=num_pts,
    verbose=False)

https://stackoverflow.com/questions/50492607