I am basically running the code from Chapter 11 of François Chollet's "Deep Learning with Python", a binary sentiment classifier: each sentence is labeled 0 or 1. After running the model as in the book, I tried to make a prediction on one of the "validation" sentences. The full code is a public Kaggle notebook, available here: https://www.kaggle.com/louisbunuel/deep-learning-with-python. It is adapted from this notebook: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/chapter11_part02_sequence-models.ipynb
The only thing I added is "extracting" a tokenized sentence from the tokenized TensorFlow dataset so that I could see an example of the output. I expected a single number between 0 and 1 (a probability), but I got an array of numbers between 0 and 1, one per word in the sentence. In other words, the model does not seem to assign a label to each sentence, but rather a label to each word.
Can anyone explain what I am doing wrong? Is it the way I "extract" the sentence from the TensorFlow dataset? This is the code from the book/GitHub notebook:
!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
!rm -r aclImdb/train/unsup
import os, pathlib, shutil, random
from tensorflow import keras
batch_size = 32
base_dir = pathlib.Path("aclImdb")
val_dir = base_dir / "val"
train_dir = base_dir / "train"
for category in ("neg", "pos"):
    os.makedirs(val_dir / category)
    files = os.listdir(train_dir / category)
    random.Random(1337).shuffle(files)
    num_val_samples = int(0.2 * len(files))
    val_files = files[-num_val_samples:]
    for fname in val_files:
        shutil.move(train_dir / category / fname,
                    val_dir / category / fname)
train_ds = keras.utils.text_dataset_from_directory(
    "aclImdb/train", batch_size=batch_size
)
val_ds = keras.utils.text_dataset_from_directory(
    "aclImdb/val", batch_size=batch_size
)
test_ds = keras.utils.text_dataset_from_directory(
    "aclImdb/test", batch_size=batch_size
)
text_only_train_ds = train_ds.map(lambda x, y: x)

Preparing the integer-sequence datasets:
from tensorflow.keras import layers
max_length = 600
max_tokens = 20000
text_vectorization = layers.TextVectorization(
    max_tokens=max_tokens,
    output_mode="int",
    output_sequence_length=max_length,
)
text_vectorization.adapt(text_only_train_ds)
int_train_ds = train_ds.map(
    lambda x, y: (text_vectorization(x), y),
    num_parallel_calls=4)
int_val_ds = val_ds.map(
    lambda x, y: (text_vectorization(x), y),
    num_parallel_calls=4)
int_test_ds = test_ds.map(
    lambda x, y: (text_vectorization(x), y),
    num_parallel_calls=4)
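As an aside (not part of the original question), here is a minimal sketch of what `TextVectorization` produces, using a made-up toy vocabulary and a sequence length of 10 instead of 600; it assumes TensorFlow is installed:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy vectorizer mirroring the settings above, but tiny.
vec = layers.TextVectorization(
    max_tokens=100,
    output_mode="int",
    output_sequence_length=10,
)
vec.adapt(["the movie was great", "the movie was terrible"])

# A batch of one string becomes a batch of one integer sequence,
# padded/truncated to the fixed sequence length.
out = vec(tf.constant(["the movie was great"]))
print(out.shape)  # (1, 10)
```

Note that the output always keeps a leading batch axis, which matters for the prediction issue below.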
embedding_layer = layers.Embedding(input_dim=max_tokens, output_dim=256)
inputs = keras.Input(shape=(None,), dtype="int64")
embedded = layers.Embedding(input_dim=max_tokens, output_dim=256)(inputs)
x = layers.Bidirectional(layers.LSTM(32))(embedded)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
callbacks = [
    keras.callbacks.ModelCheckpoint("embeddings_bidir_gru.keras",
                                    save_best_only=True)
]
model.fit(int_train_ds, validation_data=int_val_ds, epochs=2, callbacks=callbacks)
model = keras.models.load_model("embeddings_bidir_gru.keras")
print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")

My "addition" to the code is the following part. After training the model, I pull out a sentence like this:
ds = int_val_ds.take(1)  # int_val_ds is the dataset already vectorized into integers
for sentence, label in ds:  # each element is a (sentences, labels) batch
    print(sentence.shape, label)
>> (32, 600) tf.Tensor([1 1 1 0 1 0 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 0 0 0 0 0 1 0 0 0], shape=(32,), dtype=int32)

So this is a batch of 32 sentences with their 32 corresponding labels. If I look at the shape of one element:
sentence[2].shape
>> TensorShape([600])

And if I type:
model.predict(sentence[2])
>> array([[0.49958456],
[0.50042397],
[0.50184965],
[0.4992085 ],...
       [0.50077164]], dtype=float32)

It has 600 elements. I expected a single number between 0 and 1. What is going wrong?
Posted on 2022-01-29 04:56:59
Paraphrasing the comment above: Keras `predict` treats the first axis as the batch axis, so a tensor of shape (600,) is read as 600 one-token samples rather than as one 600-token sentence. Reshape the sentence into a batch of one before predicting:

model.predict(tf.reshape(sentence[2], [1, 600]))

Source: https://stackoverflow.com/questions/70825749
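To make the batch-axis behavior concrete, here is a hedged sketch (assuming TensorFlow is installed; the tiny model below is a stand-in for the book's bidirectional LSTM, not the trained one):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

max_tokens, max_length = 20000, 600

# Tiny stand-in for the book's model: embedding -> BiLSTM -> sigmoid.
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=max_tokens, output_dim=8)(inputs)
x = layers.Bidirectional(layers.LSTM(4))(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

# One vectorized sentence, analogous to sentence[2]: shape (600,).
sentence = np.random.randint(0, max_tokens, size=(max_length,))

# With an explicit batch dimension, the model sees one 600-token
# sequence and returns a single probability.
right = model.predict(sentence.reshape(1, max_length), verbose=0)
print(right.shape)  # (1, 1)

# Without the batch dimension, the asker's TF version read the 600
# integers as 600 one-token sequences and returned 600 probabilities
# (shape (600, 1)); newer versions may instead raise a shape error.
```

`tf.reshape(sentence[2], [1, 600])`, `sentence[2][tf.newaxis, :]`, or slicing a one-element batch with `sentence[2:3]` all achieve the same thing: restoring the batch axis.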