
Loss function with derivatives in TensorFlow 2

Stack Overflow user
Asked on 2020-11-24 22:19:13
1 answer · 757 views · 0 followers · Score: 2

I am using a TF2 (2.3.0) neural network to approximate the function y that solves the ODE y' + 3y = 0.
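As a side note (my addition, not part of the original question): this ODE has the closed-form solution y(x) = C·exp(-3x), which is easy to verify with a finite-difference check:

```python
import math

# The ODE y' + 3y = 0 has the analytic solution y(x) = C * exp(-3x).
# Quick finite-difference sanity check (illustrative only):
def y(x, C=1.0):
    return C * math.exp(-3.0 * x)

h = 1e-6
for x in [-1.0, 0.0, 0.5, 2.0]:
    dy_dx = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    residual = dy_dx + 3 * y(x)              # ~0 if y solves the ODE
    assert abs(residual) < 1e-4
```

This is useful later as a reference against which a trained model can be compared.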

I defined a custom loss class and function, in which I try to differentiate the single output with respect to the single input so that the equation holds, under the condition that y_true is zero:

```python
from tensorflow.keras.losses import Loss
import tensorflow as tf

class CustomLossOde(Loss):
    def __init__(self, x, model, name='ode_loss'):
        super().__init__(name=name)
        self.x = x
        self.model = model

    def call(self, y_true, y_pred):
        with tf.GradientTape() as tape:
            tape.watch(self.x)
            y_p = self.model(self.x)

        dy_dx = tape.gradient(y_p, self.x)
        loss = tf.math.reduce_mean(tf.square(dy_dx + 3 * y_pred - y_true))
        return loss
```

But when running the following NN:

```python
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras import Input
from custom_loss_ode import CustomLossOde


num_samples = 1024
x_train = 4 * (tf.random.uniform((num_samples, )) - 0.5)
y_train = tf.zeros((num_samples, ))
inputs = Input(shape=(1,))
x = Dense(16, 'tanh')(inputs)
x = Dense(8, 'tanh')(x)
x = Dense(4)(x)
y = Dense(1)(x)
model = Model(inputs=inputs, outputs=y)
loss = CustomLossOde(model.input, model)
model.compile(optimizer=Adam(learning_rate=0.01, beta_1=0.9, beta_2=0.99), loss=loss)
model.run_eagerly = True
model.fit(x_train, y_train, batch_size=16, epochs=30)
```

I get a loss of 0 from the very first epoch, which doesn't make any sense.

I printed both y_true and y_pred inside the function and they look fine, so I suspect the problem is in the gradient, which I did not manage to print. Any help is appreciated.


1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-11-25 14:07:23

Defining a custom loss with the high-level Keras API is a bit difficult in this case. Instead, I would write the training loop from scratch, since that allows much finer-grained control over what you can do.

I took inspiration from these two guides:

Basically, I use the fact that multiple tapes can interact seamlessly: one computes the loss function, and the other computes the gradients to be propagated by the optimizer.
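To illustrate just the nested-tape mechanics in isolation (a minimal sketch of mine, using f(x) = x**3 in place of the model, which is not from the original answer):

```python
import tensorflow as tf

# Inner tape: differentiate the "model" output with respect to x.
# Outer tape: differentiate the resulting loss for the optimizer step.
x = tf.Variable(2.0)
with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        y = x ** 3                         # stand-in for model(x)
    dy_dx = inner_tape.gradient(y, x)      # 3*x**2 = 12.0 at x = 2
    loss = dy_dx ** 2                      # loss built from the derivative
grad = outer_tape.gradient(loss, x)        # d(9*x**4)/dx = 36*x**3 = 288.0
print(float(dy_dx), float(grad))
```

Because `inner_tape.gradient` is called inside the outer tape's context, the outer tape records the gradient computation itself, so the loss can depend on a derivative and still be differentiated with respect to the trainable variables.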

```python
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras import Input

num_samples = 1024
x_train = 4 * (tf.random.uniform((num_samples, )) - 0.5)
y_train = tf.zeros((num_samples, ))
inputs = Input(shape=(1,))
x = Dense(16, 'tanh')(inputs)
x = Dense(8, 'tanh')(x)
x = Dense(4)(x)
y = Dense(1)(x)
model = Model(inputs=inputs, outputs=y)

# using the high-level tf.data API for data handling
x_train = tf.reshape(x_train, (-1, 1))
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(1)

opt = Adam(learning_rate=0.01, beta_1=0.9, beta_2=0.99)
for step, (x, y_true) in enumerate(dataset):
    # we need to convert x to a variable if we want the tape to be
    # able to compute the gradient with respect to x
    x_variable = tf.Variable(x)
    with tf.GradientTape() as model_tape:
        with tf.GradientTape() as loss_tape:
            loss_tape.watch(x_variable)
            y_pred = model(x_variable)
        dy_dx = loss_tape.gradient(y_pred, x_variable)
        loss = tf.math.reduce_mean(tf.square(dy_dx + 3 * y_pred - y_true))
    grad = model_tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grad, model.trainable_variables))
    if step % 20 == 0:
        print(f"Step {step}: loss={loss.numpy()}")
```
Score: 2
Original content provided by Stack Overflow; translation supported by Tencent Cloud.
Original link: https://stackoverflow.com/questions/64995683
