I am new to tensorflow, so this is how my code unfolds!
import tensorflow as tf
import tensorflow.contrib.learn as learn
mnist = learn.datasets.mnist.read_data_sets('MNIST-data',one_hot=True)
import numpy as np
M = tf.Variable(tf.zeros([784,10]))
B = tf.Variable(tf.zeros([10]))
image_holder = tf.placeholder(tf.float32,[None,784])
label_holder = tf.placeholder(tf.float32,[None,10])
predicted_value = tf.add(tf.matmul(image_holder,M),B)
loss= tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predicted_value , labels=label_holder))
learning_rate = 0.01
num_epochs = 1000
batch_size = 100
num_batches = int(mnist.train.num_examples/batch_size)
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
    sess.run(init)
    for _ in range(num_epochs):
        for each_batch in range(num_batches):
            current_image, current_image_label = mnist.train.next_batch(batch_size)
            optimizer_value,loss = sess.run([optimizer,loss],feed_dict={image_holder:current_image,label_holder:current_image_label})
            print ("The loss value is {} \n".format(loss))
But the problem I am running into is this strange error:
'numpy.dtype' object has no attribute 'base_dtype'. I don't see what is wrong with the code; it looks perfectly correct to me. Any help with this?
Posted on 2019-04-10 07:18:51
First of all: when you call sess.run(variable), make sure you do not assign the result back to the same name. That is, do not write variable = sess.run(variable), because that overwrites the graph op with the fetched value. That is exactly the mistake in your last line. With that fixed, a working version of the code could look like this:
M = tf.Variable(tf.zeros([784,10]), dtype=tf.float32)
B = tf.Variable(tf.zeros([10]), dtype=tf.float32)
image_holder = tf.placeholder(tf.float32,[None,784])
label_holder = tf.placeholder(tf.float32,[None,10])
predicted_value = tf.add(tf.matmul(image_holder,M),B)
loss= tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predicted_value , labels=label_holder))
learning_rate = 0.01
num_epochs = 1000
batch_size = 100
num_batches = int(mnist.train.num_examples/batch_size)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(num_epochs):
        for each_batch in range(num_batches):
            current_image, current_image_label = mnist.train.next_batch(batch_size)
            optimizer_value,loss_value = sess.run([train_op,loss],feed_dict={image_holder:current_image,label_holder:current_image_label})
            print ("The loss value is {} \n".format(loss_value))
Hope this helps.
Posted on 2019-04-10 22:44:06
To be more explicit: on the first pass you overwrite the Python name loss, which pointed at the loss op, with the fetched numpy value. So on the second pass of the for loop, the session is handed a numpy value in place of the original loss op, and the run call fails.
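To see the same failure in isolation, here is a minimal, hypothetical sketch (the placeholder x and the name total are illustrative, not taken from the question) of how rebinding an op's Python name to its fetched result breaks the next sess.run call:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None])
total = tf.reduce_sum(x)  # `total` names a graph op

with tf.Session() as sess:
    for step in range(2):
        # Bug: the fetched numpy result replaces the op under the same name,
        # so on the second pass a numpy value is used as the fetch and
        # sess.run raises an error (the exact message depends on the TF 1.x version).
        total = sess.run(total, feed_dict={x: [1.0, 2.0]})
        # Correct pattern: keep the op and its value under different names, e.g.
        # total_value = sess.run(total, feed_dict={x: [1.0, 2.0]})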
https://stackoverflow.com/questions/55605853