I am trying to create an input pipeline for image classification in TensorFlow, so I want to produce batches of images together with their corresponding labels. The TensorFlow documentation suggests using tf.train.batch to batch the inputs:
train_batch, train_label_batch = tf.train.batch(
    [train_image, train_image_label],
    batch_size=batch_size,
    num_threads=1,
    capacity=10*batch_size,
    enqueue_many=False,
    shapes=[[224,224,3], [len(labels),]],
    allow_smaller_final_batch=True
)

However, I am wondering whether there will be a problem if I feed the graph like this:
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=train_label_batch, logits=Model(train_batch)))

The question is: do the ops inside the cost function dequeue an image together with its corresponding label, or do they dequeue them separately, which would mean training on mismatched images and labels?
Posted on 2017-11-16 03:33:58
To keep images and labels in sync, you need to consider a few things.

Suppose we need a function that provides images and labels.
def _get_test_images(_train=False):
    """
    Gets the test images and labels as a batch

    Inputs:
    ======
    _train      : Boolean if images are from training set
    random_crop : Boolean if random cropping is allowed
    random_flip : Boolean if random horizontal flip is allowed
    distortion  : Boolean if distortions are allowed

    Outputs:
    ========
    images_batch : Batch of images containing BATCH_SIZE images at a time
    label_batch  : Batch of labels corresponding to the images in images_batch
    idx          : Batch of indexes of images
    """
    # get images and labels
    _, _img_names, _img_class, index = _get_list(_train=_train)

    # the total number of distinct images used for training equals the number
    # of names fed into tf.train.slice_input_producer as _img_names
    img_path, label, idx = tf.train.slice_input_producer(
        [_img_names, _img_class, index], shuffle=False)
    img_path = tf.cast(img_path, dtype=tf.string)

    # read the file
    image_file = tf.read_file(img_path)

    # decode jpeg/png/bmp
    # tf.image.decode_image does not return a static shape,
    # so resizing would fail; use tf.image.decode_jpeg instead
    image = tf.image.decode_jpeg(image_file)

    # image preprocessing
    image = tf.image.resize_images(image, [IMG_DIM, IMG_DIM])
    float_image = tf.cast(image, dtype=tf.float32)

    # subtract the mean and divide by the standard deviation
    float_image = tf.image.per_image_standardization(float_image)

    # set the shape
    float_image.set_shape(IMG_SIZE)
    labels_original = tf.cast(label, dtype=tf.int32)
    img_index = tf.cast(idx, dtype=tf.int32)

    # parameters for the queue
    batch_size = BATCH_SIZE
    min_fraction_of_examples_in_queue = 0.3
    num_preprocess_threads = 1
    num_examples_per_epoch = MAX_TEST_EXAMPLE
    min_queue_examples = int(num_examples_per_epoch *
                             min_fraction_of_examples_in_queue)

    images_batch, label_batch, idx = tf.train.batch(
        [float_image, labels_original, img_index],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + 3 * batch_size)

    # display the training images in the visualizer
    tf.summary.image('images', images_batch)

    return images_batch, label_batch, idx

Here, tf.train.slice_input_producer([_img_names, _img_class, index], shuffle=False) is the interesting part: if you set shuffle=True, it will shuffle all three arrays in unison.
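The "shuffle in unison" behaviour can be sketched in plain Python/NumPy (illustrative names only; this is not TensorFlow code): one shared permutation reorders all three arrays together, so each name keeps its class and index.

```python
import numpy as np

# Sketch of what shuffle=True does in tf.train.slice_input_producer:
# a single permutation reorders all slices together, so pairings survive.
rng = np.random.default_rng(0)
names = np.array(["0001.jpg", "0002.jpg", "0003.jpg", "0004.jpg"])
classes = np.array([1, 2, 3, 4])
indexes = np.array([0, 1, 2, 3])

perm = rng.permutation(len(names))  # one shared permutation
names_s, classes_s, indexes_s = names[perm], classes[perm], indexes[perm]

# every shuffled name is still aligned with its original class and index
for n, c, i in zip(names_s, classes_s, indexes_s):
    assert names[i] == n and classes[i] == c
```

If each array were shuffled with its own permutation instead, the image-label pairing would be destroyed, which is exactly the failure mode the question worries about.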
The second thing is num_preprocess_threads. As long as you use a single thread for the dequeue operation, batches come out in a deterministic order. Multiple threads, however, will shuffle the examples randomly; for example, for image 0001.jpg whose true label is 1, you might get 2 or 4 instead. Once dequeued, each example is in tensor form, and tf.nn.softmax_cross_entropy_with_logits should have no problem with such tensors.
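The pairing guarantee itself can be modelled with a plain-Python queue (a sketch of the idea, not of TensorFlow internals): each queue element is a whole (image, label) tuple, so multiple consumer threads may change the order of examples but can never split a pair.

```python
import queue
import threading

# Each element is a complete (image, label) tuple; dequeuing is atomic.
q = queue.Queue()
for i in range(100):
    q.put((f"img_{i}", i))  # label i belongs to img_i

results = []
lock = threading.Lock()

def worker():
    # drain the queue; tuples come out whole, never half an example
    while True:
        try:
            img, lbl = q.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append((img, lbl))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# order may vary across runs, but every image keeps its own label
assert len(results) == 100
assert all(img == f"img_{lbl}" for img, lbl in results)
```

The analogy to tf.train.batch: the example order at the batch boundary can become nondeterministic with several threads, but the (image, label) association within each example is preserved because the tuple is enqueued and dequeued as one unit.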
https://stackoverflow.com/questions/45678931