
Keras fit does not train the model weights

Stack Overflow user
Asked on 2022-02-01 14:48:51
1 answer · 78 views · 0 followers · 0 votes

I am trying to fit a simple histogram model with custom weights and no inputs. It should fit a histogram of the data generated by:

```python
train_data = [max(0, int(np.round(np.random.randn()*2 + 5))) for i in range(1000)]
```
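For reference, the target distribution is just the normalized histogram of `train_data`, which can be computed directly (a minimal numpy sketch; the fixed seed is an assumption added here for reproducibility only):

```python
import numpy as np

np.random.seed(0)  # fixed seed, only so the sketch is reproducible
train_data = [max(0, int(np.round(np.random.randn()*2 + 5))) for i in range(1000)]

# Empirical distribution over the d = 15 bins that the model's logits
# should converge toward after training.
counts = np.bincount(train_data, minlength=15)
empirical = counts / counts.sum()
print(empirical.round(3))
```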

The model is defined by:

```python
import numpy as np
import tensorflow as tf

d = 15

class hist_model(tf.keras.Model):
    def __init__(self):
        super(hist_model, self).__init__()
        # A single trainable vector of d logits; the model ignores its input.
        self._theta = self.add_weight(shape=[1, d], initializer='zeros', trainable=True)

    def call(self, x):
        return self._theta
```

The problem I ran into is that training with model.fit does not work: the model's weights do not change at all during training. I tried:

```python
model = hist_model()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss="sparse_categorical_crossentropy")
history = model.fit(train_data, train_data, verbose=2, batch_size=1, epochs=10)
model.summary()
```

This returns:

```
Epoch 1/3
1000/1000 - 1s - loss: 2.7080
Epoch 2/3
1000/1000 - 1s - loss: 2.7080
Epoch 3/3
1000/1000 - 1s - loss: 2.7080
Model: "hist_model_17"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
Total params: 15
Trainable params: 15
Non-trainable params: 0
_________________________________________________________________
```

I tried writing a custom training loop for the same model, and it works fine. Here is the custom training code:

```python
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
for epoch in range(3):
    running_loss = 0
    for data in train_data:
        with tf.GradientTape() as tape:
            loss_value = loss_fn(data, model(data))
        running_loss += loss_value.numpy()
        grad = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grad, model.trainable_weights))
    print(f'Epoch {epoch} loss: {loss_value}')
```

I still don't understand why the fit method doesn't work. What am I missing? Thanks!


1 Answer

Stack Overflow user

Accepted answer

Posted on 2022-02-01 15:24:04

The difference between the two approaches is probably the loss function. Try running:

```python
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

because the from_logits argument defaults to False, which means the loss assumes your model's output already encodes a probability distribution. Note the difference in the loss now when using from_logits=True:
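The constant loss value in the question can also be reproduced by hand: with all-zero logits, softmax yields a uniform distribution over the d = 15 bins, so the cross-entropy of any label is log(15) ≈ 2.708 (a minimal numpy sketch, independent of Keras):

```python
import numpy as np

d = 15
theta = np.zeros(d)  # the model's initial logits, never updated in the question

# Interpreted as logits: softmax(0, ..., 0) is the uniform distribution.
probs = np.exp(theta) / np.exp(theta).sum()
loss = -np.log(probs[5])  # cross-entropy for any label, e.g. label 5

print(loss)  # ≈ 2.708, matching the constant loss in the training log
```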

```python
import numpy as np
import tensorflow as tf

d = 15

class hist_model(tf.keras.Model):
    def __init__(self):
        super(hist_model, self).__init__()
        self._theta = self.add_weight(shape=[1, d], initializer='zeros', trainable=True)

    def call(self, x):
        return self._theta

train_data = [max(0, int(np.round(np.random.randn()*2 + 5))) for i in range(15)]

model = hist_model()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

history = model.fit(train_data, train_data, verbose=2, batch_size=1, epochs=10)
```
```
Epoch 1/10
15/15 - 0s - loss: 2.7021 - 247ms/epoch - 16ms/step
Epoch 2/10
15/15 - 0s - loss: 2.6812 - 14ms/epoch - 915us/step
Epoch 3/10
15/15 - 0s - loss: 2.6607 - 15ms/epoch - 1ms/step
Epoch 4/10
15/15 - 0s - loss: 2.6406 - 14ms/epoch - 955us/step
Epoch 5/10
15/15 - 0s - loss: 2.6209 - 19ms/epoch - 1ms/step
Epoch 6/10
15/15 - 0s - loss: 2.6017 - 18ms/epoch - 1ms/step
Epoch 7/10
15/15 - 0s - loss: 2.5829 - 15ms/epoch - 999us/step
Epoch 8/10
15/15 - 0s - loss: 2.5645 - 15ms/epoch - 1ms/step
Epoch 9/10
15/15 - 0s - loss: 2.5464 - 27ms/epoch - 2ms/step
Epoch 10/10
15/15 - 0s - loss: 2.5288 - 20ms/epoch - 1ms/step
```

I think the reduction method used could also have an effect. Check the documentation for more details.
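The reduction point can be illustrated without Keras: the default Reduction.SUM_OVER_BATCH_SIZE averages the per-sample losses, while Reduction.SUM adds them up, which effectively scales the gradient by the batch size (a hedged numpy sketch of the two behaviours, using the uniform-prediction loss from above):

```python
import numpy as np

# Two samples, each with the uniform-prediction cross-entropy log(15).
per_sample = np.array([np.log(15.0), np.log(15.0)])

# Keras Reduction.SUM_OVER_BATCH_SIZE (the default): mean over the batch.
mean_loss = per_sample.mean()
# Keras Reduction.SUM: total over the batch; gradients scale with batch size.
sum_loss = per_sample.sum()

print(mean_loss, sum_loss)
```

With batch_size=1, as in the question, the two reductions coincide, so in this particular case the choice of reduction would not explain the difference.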

Votes: 1
Original content provided by Stack Overflow; translation supported by Tencent Cloud Xiaowei's IT-domain engine.
Original link: https://stackoverflow.com/questions/70942576