I'm trying to fit a simple histogram model with custom weights and no input. It should fit the histogram of data generated by:
train_data = [max(0,int(np.round(np.random.randn()*2+5))) for i in range(1000)]
The model is defined by:
d = 15
class hist_model(tf.keras.Model):
    def __init__(self):
        super(hist_model, self).__init__()
        self._theta = self.add_weight(shape=[1, d], initializer='zero', trainable=True)

    def call(self, x):
        return self._theta
The problem I'm running into is that training with model.fit doesn't work: the model's weights don't change at all during training. I tried:
model = hist_model()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss="sparse_categorical_crossentropy")
history = model.fit(train_data,train_data,verbose=2,batch_size=1,epochs=10)
model.summary()
This returns:
Epoch 1/3
1000/1000 - 1s - loss: 2.7080
Epoch 2/3
1000/1000 - 1s - loss: 2.7080
Epoch 3/3
1000/1000 - 1s - loss: 2.7080
Model: "hist_model_17"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
Total params: 15
Trainable params: 15
Non-trainable params: 0
_________________________________________________________________
I tried writing a custom training loop for the same model, and that works fine. Here is the custom training code:
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
for epoch in range(3):
    running_loss = 0
    for data in train_data:
        with tf.GradientTape() as tape:
            loss_value = loss_fn(data, model(data))
        running_loss += loss_value.numpy()
        grad = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grad, model.trainable_weights))
    print(f'Epoch {epoch} loss: {loss_value}')
I still don't understand why the fit method doesn't work. What am I missing? Thanks!
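As a sanity check, one quick way to confirm the weights really stay frozen is to snapshot them before and after a call to fit (a minimal snippet, assuming the model compiled above is in scope):

before = model.get_weights()[0].copy()
model.fit(train_data, train_data, verbose=0, batch_size=1, epochs=1)
after = model.get_weights()[0]
print((before == after).all())  # prints True: fit applied no update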
Posted 2022-02-01 15:24:04
The difference between the two approaches is likely the loss function. Try running:
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
because the from_logits argument is set to False by default. That means the loss assumes your model's output already encodes a probability distribution.
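This also explains why training stalls entirely rather than just converging differently: with from_logits=False, the all-zero outputs appear to be clipped to a tiny epsilon before the log is taken, and the clipping has zero gradient, so no update ever reaches the weights. A minimal standalone sketch comparing the two settings on the same all-zero weights:

import tensorflow as tf

y = tf.constant([5])                    # one label in [0, 15)
theta = tf.Variable(tf.zeros([1, 15]))  # same all-zero weights as the model

for from_logits in (False, True):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=from_logits)
    with tf.GradientTape() as tape:
        loss = loss_fn(y, theta)
    grad = tape.gradient(loss, theta)
    # Both settings report roughly log(15) ~ 2.708 here, but the gradient is
    # all zeros for from_logits=False and non-zero for from_logits=True.
    print(from_logits, loss.numpy(), tf.reduce_sum(tf.abs(grad)).numpy())

Note the difference in the loss now when using from_logits=True: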
import numpy as np
import tensorflow as tf
d = 15
class hist_model(tf.keras.Model):
    def __init__(self):
        super(hist_model, self).__init__()
        self._theta = self.add_weight(shape=[1, d], initializer='zero', trainable=True)

    def call(self, x):
        return self._theta
train_data = [max(0,int(np.round(np.random.randn()*2+5))) for i in range(15)]
model = hist_model()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
history = model.fit(train_data, train_data, verbose=2, batch_size=1, epochs=10)
Epoch 1/10
15/15 - 0s - loss: 2.7021 - 247ms/epoch - 16ms/step
Epoch 2/10
15/15 - 0s - loss: 2.6812 - 14ms/epoch - 915us/step
Epoch 3/10
15/15 - 0s - loss: 2.6607 - 15ms/epoch - 1ms/step
Epoch 4/10
15/15 - 0s - loss: 2.6406 - 14ms/epoch - 955us/step
Epoch 5/10
15/15 - 0s - loss: 2.6209 - 19ms/epoch - 1ms/step
Epoch 6/10
15/15 - 0s - loss: 2.6017 - 18ms/epoch - 1ms/step
Epoch 7/10
15/15 - 0s - loss: 2.5829 - 15ms/epoch - 999us/step
Epoch 8/10
15/15 - 0s - loss: 2.5645 - 15ms/epoch - 1ms/step
Epoch 9/10
15/15 - 0s - loss: 2.5464 - 27ms/epoch - 2ms/step
Epoch 10/10
15/15 - 0s - loss: 2.5288 - 20ms/epoch - 1ms/step
I think the reduction method used could also have an effect. For more details, check the documentation.
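To illustrate that last point: the reduction argument controls how per-sample losses are aggregated, which changes the magnitude of the reported loss. A minimal sketch comparing the default SUM_OVER_BATCH_SIZE against SUM:

import tensorflow as tf

y_true = tf.constant([1, 2])
y_pred = tf.zeros([2, 15])  # two samples, 15 classes, all-zero logits

for reduction in (tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE,  # the default
                  tf.keras.losses.Reduction.SUM):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=reduction)
    # SUM_OVER_BATCH_SIZE averages over the batch (~2.708 here);
    # SUM adds the per-sample losses up (~5.416 here).
    print(reduction, loss_fn(y_true, y_pred).numpy())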
https://stackoverflow.com/questions/70942576