I tried to implement logistic regression in Python using only numpy, but the results are unsatisfactory. The predictions seem wrong and the loss does not improve, so something is probably wrong with the code. Does anyone know how to fix this? Thanks a lot!

Here is the algorithm:
import numpy as np

# training data and labels
X = np.concatenate((np.random.normal(0.25, 0.1, 50), np.random.normal(0.75, 0.1, 50)), axis=None)
Y = np.concatenate((np.zeros((50,), dtype=np.int32), np.ones((50,), dtype=np.int32)), axis=None)

def logistic_sigmoid(a):
    return 1 / (1 + np.exp(-a))

# forward pass
def forward_pass(w, x):
    return logistic_sigmoid(w * x)

# gradient computation
def backward_pass(x, y, y_real):
    return np.sum((y - y_real) * x)

# computing loss
def loss(y, y_real):
    return -np.sum(y_real * np.log(y) + (1 - y_real) * np.log(1 - y))

# training
def train():
    w = 0.0
    learning_rate = 0.01
    i = 200
    test_number = 0.3
    for epoch in range(i):
        y = forward_pass(w, X)
        gradient = backward_pass(X, y, Y)
        w = w - learning_rate * gradient
        print(f'epoch {epoch + 1}, x = {test_number}, y = {forward_pass(w, test_number):.3f}, loss = {loss(y, Y):.3f}')
train()

Posted on 2021-02-05 07:46:43
At first glance, you are missing the intercept term (usually called b_0, or bias) and its gradient update. Also, in backward_pass and in the loss computation you are not dividing by the number of data samples.

You can see two examples of how to implement it from scratch here:
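A minimal sketch of those two fixes applied to the question's code: a bias term b with its own gradient update, and the gradient and loss averaged over the n samples. The seed, learning rate, and epoch count here are illustrative choices, not from the original post; averaging shrinks the gradients, so a larger learning rate than the original 0.01 is used.

```python
import numpy as np

np.random.seed(0)  # assumed seed, only for reproducibility
X = np.concatenate((np.random.normal(0.25, 0.1, 50), np.random.normal(0.75, 0.1, 50)), axis=None)
Y = np.concatenate((np.zeros(50, dtype=np.int32), np.ones(50, dtype=np.int32)), axis=None)

def logistic_sigmoid(a):
    return 1 / (1 + np.exp(-a))

def forward_pass(w, b, x):
    # include the intercept (bias) term b
    return logistic_sigmoid(w * x + b)

def backward_pass(x, y, y_real):
    # average both gradients over the number of samples
    n = len(y_real)
    grad_w = np.sum((y - y_real) * x) / n
    grad_b = np.sum(y - y_real) / n
    return grad_w, grad_b

def loss(y, y_real):
    # mean cross-entropy instead of the raw sum
    return -np.mean(y_real * np.log(y) + (1 - y_real) * np.log(1 - y))

def train():
    w, b = 0.0, 0.0
    learning_rate = 0.5   # illustrative; averaged gradients are small
    for epoch in range(2000):
        y = forward_pass(w, b, X)
        grad_w, grad_b = backward_pass(X, y, Y)
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    return w, b

w, b = train()
```

With the two class means at 0.25 and 0.75, the learned boundary should sit near x = 0.5, so forward_pass(w, b, 0.25) comes out below 0.5 and forward_pass(w, b, 0.75) above it.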
1:Example based on Andrew Ng explanations in the Machine Learning course in Coursera
2:Implementation of Jason Brownlee from Machine Learning mastery website
https://stackoverflow.com/questions/66051281