I'm running into a problem implementing backprop when using the ReLU activation function. My model has two hidden layers, both with 10 nodes, and a single node in the output layer (so 3 weight matrices and 3 bias vectors). The model does not work with this broken backward_prop function. However, the function does handle backprop with a sigmoid activation function (included as comments in the function), so I believe I've messed up the ReLU derivative.
Can someone push me in the right direction?
# The derivative of the ReLU function is 1 if z > 0, and 0 if z <= 0
def relu_deriv(z):
    z[z > 0] = 1
    z[z <= 0] = 0
    return z
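For reference, this is what relu_deriv gives on a small test array. Note that because the boolean-mask assignments happen in place, the array passed in is overwritten as a side effect:

import numpy as np

z = np.array([[-2.0, 0.0, 3.0]])
print(relu_deriv(z))  # [[0. 0. 1.]]
print(z)              # [[0. 0. 1.]] -- the original array was overwritten in place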
# Handles a single backward pass through the neural network
def backward_prop(X, y, c, p):
    """
    cache (c): includes activations (A) and linear transformations (Z)
    params (p): includes weights (W) and biases (b)
    """
    m = X.shape[1]  # Number of training examples
    dZ3 = c['A3'] - y
    dW3 = 1/m * np.dot(dZ3, c['A2'].T)
    db3 = 1/m * np.sum(dZ3, keepdims=True, axis=1)
    dZ2 = np.dot(p['W3'].T, dZ3) * relu_deriv(c['A2'])  # sigmoid: replace relu_deriv w/ (1 - np.power(c['A2'], 2))
    dW2 = 1/m * np.dot(dZ2, c['A1'].T)
    db2 = 1/m * np.sum(dZ2, keepdims=True, axis=1)
    dZ1 = np.dot(p['W2'].T, dZ2) * relu_deriv(c['A1'])  # sigmoid: replace relu_deriv w/ (1 - np.power(c['A1'], 2))
    dW1 = 1/m * np.dot(dZ1, X.T)
    db1 = 1/m * np.sum(dZ1, keepdims=True, axis=1)
    grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2, "dW3": dW3, "db3": db3}
    return grads

Posted 2018-10-15 20:59:10
Is your code throwing an error, or are you having trouble with training? Could you clarify?
Alternatively, if you are doing binary classification, could you try using sigmoid only for the output activation and ReLU for the others?
Please describe the specific situation.
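For illustration, here is a minimal sketch of a forward pass with that arrangement (ReLU on both hidden layers, sigmoid only at the output), reusing the question's cache and parameter naming; the exact cache keys and shapes are an assumption on my part:

import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_prop(X, p):
    # ReLU on both hidden layers, sigmoid on the single output unit
    Z1 = np.dot(p['W1'], X) + p['b1']
    A1 = relu(Z1)
    Z2 = np.dot(p['W2'], A1) + p['b2']
    A2 = relu(Z2)
    Z3 = np.dot(p['W3'], A2) + p['b3']
    A3 = sigmoid(Z3)
    cache = {'Z1': Z1, 'A1': A1, 'Z2': Z2, 'A2': A2, 'Z3': Z3, 'A3': A3}
    return A3, cache

With a binary cross-entropy loss on the sigmoid output, the output-layer error simplifies to dZ3 = A3 - y, which matches the first gradient line of backward_prop above.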
Edit in reply:
Can you try this?
def dReLU(x):
    return 1. * (x > 0)

What I am referring to: https://gist.github.com/yusugomori/cf7bce19b8e16d57488a
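To illustrate how it would slot in (a sketch on my part, assuming the cache c also stores the pre-activations under 'Z1' and 'Z2'): because dReLU builds a new 0/1 array instead of writing into its argument, the cached values stay untouched, and the two hidden-layer lines inside backward_prop would become:

dZ2 = np.dot(p['W3'].T, dZ3) * dReLU(c['Z2'])  # dReLU returns a new array; c['Z2'] is not modified
dZ1 = np.dot(p['W2'].T, dZ2) * dReLU(c['Z1'])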
https://stackoverflow.com/questions/52789826