
How can I freeze a layer's weights for a certain number of iterations?

Stack Overflow user
Asked on 2020-02-19 00:23:51
1 answer · 440 views · 0 followers · 2 votes

I'm trying to jointly train two MLPs, each of which predicts a different real-valued variable. I want to minimize the loss over both outputs, but I'd like to keep one of them fixed for a number of "warm-up" iterations.

I'm new to TensorFlow, but essentially I'm looking for something similar to this PyTorch code:

Code language: python
def loss(self, *args, **kwargs) -> torch.Tensor:
    # Extract data
    data, target, probability = args

    # Iterate through each model and sum nll
    nll = []
    for index in range(self.num_models):
        # Extract mean and variance from prediction
        if self._current_it < self.warm_start_it:
            predictive_mean = self.mean[index](data)
            with torch.no_grad():
                predictive_variance = softplus(self.variance[index](data))
        else:
            with torch.no_grad():
                predictive_mean = self.mean[index](data)
            predictive_variance = softplus(self.variance[index](data))

        # Calculate the loss
        nll.append(self.calculate_nll(target, predictive_mean, predictive_variance))

    mean_nll = torch.stack(nll).mean()

    # Update current iteration
    if self.training:
        self._current_it += 1

    return mean_nll

I figured I could do something similar in my model's call() function, for example:

Code language: python
def call(self, step, inputs, training=None, mask=None):

    if step < self.warmup:
        with tf.GradientTape() as t:
            mean_predictions = self.mean(inputs)
        var_predictions = self.variance(inputs)
    else:
        mean_predictions = self.mean(inputs)
        with tf.GradientTape() as t:
            var_predictions = self.variance(inputs)
    return mean_predictions, var_predictions

Is this the right way to achieve the equivalent of the PyTorch code above?


1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-02-19 05:59:58

I ended up doing the following.

In the main loop:

Code language: python
mlp = UncertaintyMLP(805, 1)
loss_fn = GaussianNLL()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
epochs = 1000
for epoch in range(epochs):
    for step, (x_batch, y_batch) in enumerate(train_dataset):

        # After the warm-up phase, freeze the mean network and train the variance network
        if epoch > mlp.warmup:
            for layer in mlp.mean.layers:
                layer.trainable = False
            for layer in mlp.variance.layers:
                layer.trainable = True
        with tf.GradientTape() as tape:
            output = mlp(step, x_batch)
            loss = loss_fn(y_batch, output)
        grads = tape.gradient(loss, mlp.trainable_weights)
        optimizer.apply_gradients(zip(grads, mlp.trainable_weights))

In the model class:

Code language: python
def call(self, step, inputs, training=None, mask=None):
    mean_predictions = self.mean(inputs)
    var_predictions = tf.math.softplus(self.variance(inputs))
    return mean_predictions, var_predictions
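
As an aside, here is a minimal alternative sketch for the main loop above (assuming mlp.mean and mlp.variance are Keras sub-models, as in the question): rather than toggling layer.trainable, you can differentiate the loss only with respect to the head you want to update in each phase, mirroring the phase logic of the PyTorch snippet.

Code language: python
with tf.GradientTape() as tape:
    output = mlp(step, x_batch)
    loss = loss_fn(y_batch, output)

# Pick the variables to update in this phase
if epoch <= mlp.warmup:
    variables = mlp.mean.trainable_weights       # warm-up: update only the mean head
else:
    variables = mlp.variance.trainable_weights   # afterwards: update only the variance head

grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))

This avoids mutating layer.trainable inside the training loop; tape.gradient simply returns gradients for the selected variables only.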

However, I'm still curious what the TensorFlow equivalent of PyTorch's torch.no_grad() would be, if there is one.
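
For what it's worth, the closest TensorFlow analogue is tf.stop_gradient, which passes a tensor through unchanged in the forward pass but blocks gradients from flowing back through it. A minimal sketch of the call() above using it (warmup and the mean/variance sub-models are taken from the question's code):

Code language: python
def call(self, step, inputs, training=None, mask=None):
    mean_predictions = self.mean(inputs)
    var_predictions = tf.math.softplus(self.variance(inputs))
    if step < self.warmup:
        # Warm-up: block gradients through the variance head,
        # analogous to wrapping its forward pass in torch.no_grad()
        var_predictions = tf.stop_gradient(var_predictions)
    else:
        # After warm-up: block gradients through the mean head instead
        mean_predictions = tf.stop_gradient(mean_predictions)
    return mean_predictions, var_predictions

Note one difference: torch.no_grad() also skips recording the computation graph, while tf.stop_gradient only cuts the backward pass; the forward computation is still traced by the surrounding GradientTape.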

0 votes
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/60285334
