
All TensorFlow predictions are coming out True

Stack Overflow user
Asked 2017-07-07 02:18:50
2 answers · Viewed 69 times · 0 followers · Score: 0

In my TensorFlow application, all of the predictions are coming out True. I am trying to adapt the MNIST example to my problem, but I am worried the technique is the wrong fit, since it is designed for multiple classes and I have binary classification.

# In[1]:


import tensorflow as tf
import numpy


# In[2]:


X = tf.placeholder(tf.float32, [None, 3], "training-data")
W1 = tf.Variable(tf.truncated_normal([3, 2]), "W")
b1 = tf.Variable(tf.zeros([2]), "B") # number of neurons

W2 = tf.Variable(tf.truncated_normal([2, 1]), "W1")
b2 = tf.Variable(tf.zeros([1]), "B1") # number of neurons


# In[9]:


Y = tf.nn.sigmoid(tf.matmul(X, W1) + b1)
Y1 = tf.nn.softmax(tf.matmul(Y, W2) + b2)
Y_ = tf.placeholder(tf.float32, [None, 1], "labels") # labels

#cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y1)) # error function
cross_entropy = tf.reduce_sum(tf.abs(Y1 - Y_))
is_correct = tf.equal(Y1, Y_)
# All the predictions are coming out True ?!?
accuracy = tf.reduce_sum(tf.cast(is_correct, tf.int32)) / tf.size(is_correct)

print("X", X)
print("Y", Y)
print("Y1", Y1)
print("Y_", Y_)
print("cross-entropy", cross_entropy)
print("is-correct", is_correct)
print("accuracy", accuracy)


# In[10]:


optimizer = tf.train.GradientDescentOptimizer(0.005)
train_step = optimizer.minimize(cross_entropy)


# In[11]:


def load(filename):
    filename_queue = tf.train.string_input_producer([filename])
    key, value = tf.TextLineReader(skip_header_lines=1).read(filename_queue)
    col1, col2, col3, col4, col5 = tf.decode_csv(records = value, record_defaults=[[1.0], [1.0], [1.0], [1.0], [1.0]])
    batch_size=100
    # A tensor for each column of the CSV
    load_time, is_east, is_west, is_europe, labels = tf.train.shuffle_batch([col1, col2, col3, col4, col5], batch_size=batch_size, capacity=batch_size*50, min_after_dequeue=batch_size)
    #features = tf.stack([load_time, is_east, is_west, is_europe], 1)
    features = tf.stack([is_east, is_west, is_europe], 1)
    return features, tf.reshape(labels, [-1, 1])


# In[12]:


features, labels = load("/Users/andrew.ehrlich/Desktop/labelled_clicks.csv")


# In[13]:


# Run!

test_features = numpy.loadtxt(open("/Users/andrew.ehrlich/Desktop/labelled_clicks_test.csv", "rb"), delimiter=",", skiprows=1, usecols = [1,2,3])
test_labels = numpy.loadtxt(open("/Users/andrew.ehrlich/Desktop/labelled_clicks_test.csv", "rb"), delimiter=",", skiprows=1, usecols = [4], ndmin = 2)

summ = tf.reduce_sum(test_labels)
size = tf.size(test_labels)

with tf.Session() as sess:
    file_writer = tf.summary.FileWriter('/Users/andrew.ehrlich/tf.log', sess.graph)
    init = tf.global_variables_initializer()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    sess.run(init)
    for i in range(1000):

        ran_features = sess.run(features)
        ran_labels = sess.run(labels)

        train_data = {X: ran_features, Y_: ran_labels}
        sess.run(train_step, feed_dict=train_data) # I guess this updates the tensors behind train_step (W and b)

        if (i % 100 == 0):
            train_acc, train_ent = sess.run([accuracy, cross_entropy], feed_dict=train_data)

            test_data = {X: test_features, Y_: test_labels}
            test_acc, test_ent = sess.run([accuracy, cross_entropy], feed_dict=test_data)

            size = sess.run(tf.size(ran_labels))
            print("batch size: %d [TRAIN - acc:%1.4f ent: %10.4f]    [TEST - acc:%1.4f ent: %10.4f]" % (size, train_acc, train_ent, test_acc, test_ent))


# In[ ]:

Output:

batch size: 100 [TRAIN - acc:0.4100 ent:    59.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.5300 ent:    47.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.5900 ent:    41.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.4700 ent:    53.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.5200 ent:    48.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.6000 ent:    40.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.5500 ent:    45.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.6100 ent:    39.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.4100 ent:    59.0000]    [TEST - acc:0.4787 ent:  9423.0000]
batch size: 100 [TRAIN - acc:0.5300 ent:    47.0000]    [TEST - acc:0.4787 ent:  9423.0000]

The test accuracy never changes because the equality check against Y_ always comes out True for the positive examples, so the "accuracy" is just a number reflecting the count of positive labels in the test set. Please let me know any feedback! I appreciate it!
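To see the mechanism behind the stuck accuracy: a softmax computed over a single logit is identically 1.0, so comparing Y1 to binary labels with equality matches exactly the positive examples. A minimal NumPy sketch of that property (the softmax helper is hand-rolled here for illustration, not TensorFlow's implementation):

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Any single-logit input: softmax over an axis of size 1 is always 1.0
logits = np.array([[-3.2], [0.0], [7.5]])
y1 = softmax(logits)
print(y1.ravel())  # [1. 1. 1.]

# With labels in {0, 1}, equality holds exactly when the label is 1,
# so "accuracy" collapses to the fraction of positive labels
labels = np.array([[1.0], [0.0], [1.0]])
acc = np.mean(y1 == labels)
print(acc)  # 2/3, the positive-label rate
```

This matches the observed behaviour: the reported test accuracy of 0.4787 is simply the share of positive labels in the test set.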


2 Answers

Stack Overflow user

Answered 2017-07-07 02:35:30

When you apply softmax on the last layer and then compute the cross entropy, combine the two into the numerically stable tf.nn.softmax_cross_entropy_with_logits. Once you see the loss decreasing but the accuracy still poor, you can increase the capacity of the network by adding more layers.

Make the following changes:

Y1 = tf.matmul(Y, W2) + b2  # raw logits; the loss op applies softmax internally
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Y1, labels=Y_))
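Why does fusing the softmax and the cross entropy help numerically? Computed separately, a large logit overflows exp(), and the subsequent log() then sees inf or 0. The fused form works in log space using the log-sum-exp trick. A rough NumPy sketch of what the fused op computes (an illustration of the idea, not TensorFlow's actual kernel):

```python
import numpy as np

def fused_softmax_xent(logits, labels):
    # log-softmax via the log-sum-exp trick: shifting by the row max
    # keeps exp() from overflowing even for very large logits
    z = logits - np.max(logits, axis=-1, keepdims=True)
    log_softmax = z - np.log(np.sum(np.exp(z), axis=-1, keepdims=True))
    return -np.sum(labels * log_softmax, axis=-1)

logits = np.array([[2.0, 1000.0]])   # a naive exp() would overflow here
labels = np.array([[0.0, 1.0]])
loss = fused_softmax_xent(logits, labels)
print(loss)  # finite and near 0: the correct class dominates
```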
Score: 1

Stack Overflow user

Answered 2017-07-07 04:58:49

You have only one output, which I assume you want to be 0 or 1, meaning false or true. If that is the case, one of the problems is the softmax function. Softmax is suited to multiple classes because one of its properties is that it makes the outputs across all classes sum to 1, which makes the result easy to interpret as a probability distribution. In your case, because of this property, it simply forces your single output to 1.

There are two basic ways to fix it. Either drop the softmax from your output layer, possibly switching to a different loss function (mean squared error might work), or have your output layer return two values, keep the softmax, and interpret the resulting pair of numbers as a one-hot encoding, where the actual values are a measure of the network's confidence.
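The second suggestion, a two-unit output with one-hot labels, can be sketched in NumPy as follows (the labels and logits here are illustrative stand-ins, not the asker's actual data):

```python
import numpy as np

def softmax(z):
    # stable softmax over the last axis
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Binary labels as one-hot vectors: 0 -> [1, 0], 1 -> [0, 1]
labels = np.array([0, 1, 1, 0])
one_hot = np.eye(2)[labels]

# Two logits per example; softmax now yields a genuine distribution
logits = np.array([[2.0, -1.0], [0.5, 3.0], [-2.0, 0.1], [1.0, 1.5]])
probs = softmax(logits)
preds = np.argmax(probs, axis=-1)   # predicted class per example
accuracy = np.mean(preds == labels)
print(preds)      # [0 1 1 1]
print(accuracy)   # 0.75
```

With two outputs, argmax gives a well-defined predicted class, so accuracy is a meaningful comparison instead of a float equality check.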

Score: 0
Original page content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/44956256