
VAE with discriminator compile issue

Stack Overflow user
Asked on 2020-02-12 10:58:29
1 answer · 146 views · 0 followers · 0 votes

Unlike a plain generative model, this VAE takes an RGB image as input. If I compile self.combined with add_loss, the loss swings between 15000 and -22000. Compiling with mse works fine.

    def __init__(self,type = 'landmark'):

        self.latent_dim = 128
        self.input_shape = (128,128,3)
        self.batch_size = 1
        self.original_dim = self.latent_dim*self.latent_dim
        patch = int(self.input_shape[0] / 2**4)
        self.disc_patch = (patch, patch, 1)

        optimizer = tf.keras.optimizers.Adam(0.0002, 0.5)

        pd = patch_discriminator(type)
        self.discriminator = pd.discriminator()
        self.discriminator.compile(loss = 'binary_crossentropy',optimizer = optimizer)
        self.discriminator.trainable = False

        vae = VAE(self.latent_dim,type = type)
        encoder = vae.inference_net()
        decoder = vae.generative_net()

        if type == 'image':
            self.orig_out = tf.random.normal(shape = (self.batch_size,128,128,3))
        else:
            self.orig_out = tf.random.normal(shape = (self.batch_size,128,128,1))

        vae_input = tf.keras.layers.Input(shape = self.input_shape)
        self.encoder_out = encoder(vae_input)
        self.decoder_out = decoder(self.encoder_out[2])

        self.generator = tf.keras.Model(vae_input,self.decoder_out)
        vae_loss = self.compute_loss()
        self.generator.add_loss(vae_loss)
        self.generator.compile(optimizer = optimizer)

        valid = self.discriminator([self.decoder_out,self.decoder_out])
        self.combined = tf.keras.Model(vae_input,valid)
        self.combined.add_loss(vae_loss)
        self.combined.compile(optimizer = optimizer)
        # self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

        self.dl = DataLoader()

compute_loss computes the VAE loss, including the KL term. Initially, self.orig_out is set to a random normal tensor; it is updated in the training loop below.

    def compute_loss(self):
        bce = tf.keras.losses.BinaryCrossentropy()
        reconstruction_loss = bce(self.decoder_out,self.orig_out)
        reconstruction_loss = self.original_dim*reconstruction_loss
        z_mean = self.encoder_out[0]
        z_log_var = self.encoder_out[1]
        kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
        kl_loss = K.sum(kl_loss, axis=-1)
        kl_loss *= -0.5
        vae_loss = K.mean(reconstruction_loss + kl_loss)
        return vae_loss
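As a quick sanity check on the KL term used in compute_loss, the same formula can be written with plain tf ops and evaluated at z_mean = z_log_var = 0, where the KL divergence of N(0, 1) from N(0, 1) must be exactly zero:

```python
import tensorflow as tf

# Same KL formula as compute_loss above, written with tf ops:
def kl_term(z_mean, z_log_var):
    kl = 1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
    return -0.5 * tf.reduce_sum(kl, axis=-1)

# KL(N(0,1) || N(0,1)) is exactly 0, so zeros should give 0.
zeros = tf.zeros((1, 128))
print(float(kl_term(zeros, zeros)[0]))  # 0.0
```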

Training loop:

    def train(self,batch_size = 1,epochs = 10):
        start_time = datetime.datetime.now()
        valid = np.ones((batch_size,) + self.disc_patch)
        fake = np.zeros((batch_size,) + self.disc_patch)
        threshold = epochs//10

        for epoch in range(epochs):
            for batch_i,(imA,imB,n_batches) in enumerate(self.dl.load_batch(target='landmark',batch_size=batch_size)):
                self.orig_out = tf.convert_to_tensor(imB, dtype=tf.float32)
                fakeA = self.generator.predict(imA)

                d_real_loss = self.discriminator.train_on_batch([imB,imB],valid)
                d_fake_loss = self.discriminator.train_on_batch([imB,fakeA],fake)
                d_loss = 0.5*np.add(d_real_loss,d_fake_loss)

                combined_loss = self.combined.train_on_batch(imA)
                #combined_loss = self.combined.train_on_batch(imA,valid)

                elapsed_time = datetime.datetime.now() - start_time


                print (f"[Epoch {epoch}/{epochs}] [Batch {batch_i}/{n_batches}] [D loss: {d_loss}] [G loss: {combined_loss}] time: {elapsed_time}")

If self.combined is compiled with the KL loss via add_loss(), the outputs cannot be passed during train_on_batch, as shown above. As a result, the generator does not learn and produces random output. How can I compile the VAE together with the discriminator using the KL loss?
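For reference, the following minimal sketch (a hypothetical toy model, not the poster's networks) shows the add_loss pattern being described: once a loss is attached via add_loss, compile() takes no loss argument and train_on_batch() receives inputs only. Note also that compute_loss is called once in __init__, so the self.orig_out tensor it captures is likely baked into the graph, and reassigning self.orig_out later in train() would then not update the compiled loss.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model: a loss attached via add_loss depends only on
# tensors inside the model, so no targets are involved in training.
class ActivityLoss(tf.keras.layers.Layer):
    def call(self, x):
        self.add_loss(tf.reduce_mean(tf.square(x)))  # loss from tensors, not targets
        return x

inp = tf.keras.layers.Input(shape=(4,))
out = ActivityLoss()(tf.keras.layers.Dense(4)(inp))
model = tf.keras.Model(inp, out)
model.compile(optimizer='adam')  # no loss= argument

x = np.random.rand(2, 4).astype('float32')
loss = model.train_on_batch(x)  # inputs only; no y is passed
print(loss)
```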


1 Answer

Stack Overflow user

Accepted answer

Answered on 2020-04-08 00:36:16

I don't know whether this is the right answer, but a VAE can be modeled more easily in TensorFlow with a custom training loop.

You can follow this link; it may contain information relevant to your problem.
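The custom-training-loop approach suggested here can be sketched roughly as follows. This is a minimal sketch with hypothetical toy stand-ins for the poster's encoder and decoder (shapes and sizes are illustrative only), computing the reconstruction and KL losses explicitly under tf.GradientTape so that changing targets per batch poses no problem:

```python
import tensorflow as tf

# Hypothetical toy encoder/decoder; the poster's models would be used instead.
latent_dim = 8
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2 * latent_dim),   # concatenated z_mean and z_log_var
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='sigmoid'),
    tf.keras.layers.Reshape((8, 8, 1)),
])
optimizer = tf.keras.optimizers.Adam(1e-3)
bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(x, target):
    with tf.GradientTape() as tape:
        z_mean, z_log_var = tf.split(encoder(x), 2, axis=-1)
        eps = tf.random.normal(tf.shape(z_mean))
        z = z_mean + tf.exp(0.5 * z_log_var) * eps        # reparameterization trick
        recon = decoder(z)
        recon_loss = bce(target, recon)                   # note: y_true comes first
        kl_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(
            1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
        loss = recon_loss + kl_loss    # an adversarial term could be added here
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

x = tf.random.uniform((2, 8, 8, 1))
print(float(train_step(x, x)))
```

One detail worth noting: tf.keras.losses.BinaryCrossentropy expects arguments in (y_true, y_pred) order, which is reversed in the compute_loss shown in the question.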

Votes: 0
Content originally from Stack Overflow.
Original link: https://stackoverflow.com/questions/60186806
