
Logistic regression cost = nan

Stack Overflow user
Asked on 2020-01-22 06:35:40
Answers: 2 · Views: 306 · Followers: 0 · Votes: 0

I am trying to implement a logistic regression model, but I keep getting 'nan' as the cost. I have tried it with several datasets and it gives the same result. Different sources give slightly different implementations of gradient descent, so I am not sure whether the gradient implementation here is correct. Here is the full code:

Code (Python):
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split

class LogisticRegression:
    def __init__(self, lr=0.001, n_iter=8000):
        self.lr = lr
        self.n_iter = n_iter
        self.weights = None

    """
    z is dot product of features and weights, which is then mapped to discrete values, such as between 0 and 1
    """
    def sigmoid(self, z):
        return 1.0/(1+np.exp(-z))

    def predict(self, x_features, weights):
        """Returns 1d array of probabilities that the class label == 1"""
        z = np.dot(x_features, weights)
        return self.sigmoid(z)

    def cost(self, x_features, labels, weights):
        """
        Using Mean Absolute Error

        Cost = (labels*log(predictions) + (1-labels)*log(1-predictions) ) / len(labels) 
        """
        observation = len(labels)
        predictions = self.predict(x_features, weights)
        #take the error when label = 1
        class1_cost = -labels*np.log(predictions)
        #take the error when label = 0
        class2_cost = (1-labels)*np.log(1-predictions)
        #take sum of both the cost
        cost = class1_cost+class2_cost
        #take the average cost
        cost = cost.sum()/observation
        return cost

    def update_weight(self, x_features, labels, weights):
        """
        Vectorized Gradient Descent
        """
        N = len(x_features)
        #get predictions (approximation of y)
        predictions = self.predict(x_features, weights)
        gradient = np.dot(x_features.T, predictions-labels)
        #take the average cost of derivative for each feature
        gradient /= N
        #multiply gradients by our learning rate
        gradient *= self.lr
        #subtract from our weights to minimize cost
        weights -= gradient
        return weights

    def give_predictions(self, x_features, weights):
        linear_model_prediction =  self.predict(x_features, weights)
        y_predicted_cls = [1 if i>0.5 else 0 for i in linear_model_prediction]
        return y_predicted_cls

    def train(self, features, labels):
        n_samples, n_features = features.shape
        self.weights = np.zeros((n_features,1)) #initialize the weight matrix
        cost_history = []
        for i in range(self.n_iter):
            self.weights = self.update_weight(features, labels, self.weights)
            #calculate error for auditing purposes
            cost = self.cost(features, labels, self.weights)
            cost_history.append(cost)
            #Log process
            if i%1000 == 0:
                print("iter: {}, cost: {}".format(str(i),str(cost)))

        return self.weights, cost_history

def generate_data():
    bc = datasets.load_breast_cancer()
    x_features, labels = bc.data, bc.target

    x_train, x_test, y_train, y_test = train_test_split(x_features, labels, test_size=0.2, random_state=1234)
    return x_train, x_test, y_train, y_test

x_train, x_test, y_train, y_test = generate_data()

model = LogisticRegression()
model.train(x_train, y_train)

2 Answers

Stack Overflow user

Answered on 2020-01-23 10:05:04

I had to apply feature scaling to x_train before training the model. I used sklearn's StandardScaler:

Code (Python):
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
x_train = sc_X.fit_transform(x_train)
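
For what it's worth, scaling presumably removes the nan because the raw breast-cancer features are large, so the sigmoid saturates to exactly 0 or 1 in floating point and np.log(0) turns the cost into -inf/nan. The same fitted scaler would normally also be applied to the test set before predicting; a minimal sketch, assuming the x_test variable from the question's generate_data():

Code (Python):
# Reuse the scaler fitted on x_train; transform (not fit_transform)
# applies the training mean/std to the test features.
x_test = sc_X.transform(x_test)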
Votes: 1

Stack Overflow user

Answered on 2020-01-22 08:11:09

Your cost function seems correct, but you need to pass 'y' as a vector of 0s and 1s (one-hot encoding).
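
A minimal sketch of what that could look like with the question's code, under the assumption that the point is the shape of the labels rather than their encoding: train() initializes self.weights as an (n_features, 1) column, so predictions come out as (n_samples, 1), while y_train from load_breast_cancer is a flat (n_samples,) array of 0/1 labels, and predictions - labels would broadcast into an (n, n) matrix.

Code (Python):
# Hypothetical fix sketch: y_train is already binary (0/1); give it the same
# (n_samples, 1) column shape as the model's predictions.
y_train = y_train.reshape(-1, 1)

model = LogisticRegression()
model.train(x_train, y_train)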

Votes: 0
Original question:

https://stackoverflow.com/questions/59850219
