
Pyro: changing the AutoDiagonalNormal settings

Stack Overflow user
Asked on 2019-01-18 03:34:07
1 answer · 262 views · 0 followers · 1 vote

I am using pyro-ppl 3.0 for probabilistic programming. After going through the Bayesian regression tutorial, I used an AutoGuide and pyro.random_module to convert an ordinary feed-forward network into a Bayesian network.

# imports are not shown in the original post; these are the ones this snippet
# needs, following the Pyro Bayesian regression tutorial
import torch
import torch.nn as nn

import pyro
from pyro.distributions import Normal, Uniform
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from pyro.contrib.autoguide import AutoDiagonalNormal  # pyro.infer.autoguide in Pyro >= 1.0

# linear regression
class RegressionModel(nn.Module):
    def __init__(self, p):
        # p number of feature
        super(RegressionModel, self).__init__()
        self.linear1 = nn.Linear(p, 2)
        self.linear2 = nn.Linear(2, 1)
        self.softplus = nn.Softplus()

    def forward(self, x):
        x = self.softplus(self.linear1(x))
        return self.linear2(x)

# regression_model is used below but not instantiated in the original post;
# p = 2 input features is implied by the (2, 2) weight prior
regression_model = RegressionModel(2)

# model
def model(x_data, y_data):
    # weight and bias prior
    w1_prior = Normal(torch.zeros(2,2), torch.ones(2,2)).to_event(2)
    b1_prior = Normal(torch.ones(2)*8, torch.ones(2)*1000).to_event(1)
    w2_prior = Normal(torch.zeros(1,2), torch.ones(1,2)).to_event(1)
    b2_prior = Normal(torch.ones(1)*3, torch.ones(1)*500).to_event(1)

    priors = {'linear1.weight': w1_prior, 'linear1.bias': b1_prior,
             'linear2.weight': w2_prior, 'linear2.bias': b2_prior}

    scale = pyro.sample("sigma", Uniform(0., 10.))

    # lift module parameters to random variables sampled from the priors
    lifted_module = pyro.random_module("module", regression_model, priors)
    # sample a nn (which also samples w and b)
    lifted_reg_model = lifted_module()
    with pyro.plate("map", len(x_data)):
        # run the nn forward on data
        prediction_mean = lifted_reg_model(x_data).squeeze(-1)
        # condition on the observed data
        pyro.sample("obs",
                    Normal(prediction_mean, scale),
                    obs=y_data)
        return prediction_mean

guide = AutoDiagonalNormal(model)

#================

#================

# inference
optim = Adam({"lr": 0.03})
svi = SVI(model, guide, optim, loss=Trace_ELBO(), num_samples=1000)

num_iterations = 1000  # not given in the original post; value assumed

def train():
    pyro.clear_param_store()
    for j in range(num_iterations):
        # calculate the loss and take a gradient step
        loss = svi.step(x_data, y_data)
        if j % 100 == 0:
            print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data)))

train()

for name, value in pyro.get_param_store().items():
    print(name, pyro.param(name))

The result is as follows:

auto_loc tensor([-2.1585, -0.9799, -0.0378, -0.5000, -1.0241, 2.6091, -1.3760, 1.6920, 0.2553, 4.5768], requires_grad=True)
auto_scale tensor([0.1432, 0.1017, 0.0368, 0.7588, 0.4160, 0.0624, 0.6657, 0.0431, 0.2972, 0.0901], grad_fn=...)

The number of latent variables is automatically set to 10, and I want to change this number. As mentioned in the tutorial, I added

from torch.distributions import constraints  # needed for constraints.positive
latent_dim = 5
pyro.param("auto_loc", torch.randn(latent_dim))
pyro.param("auto_scale", torch.ones(latent_dim),
           constraint=constraints.positive)

between the #================ markers mentioned above.

But the result is still the same; the number does not change. So how should AutoDiagonalNormal be configured to change the number of latent variables?


1 Answer

Stack Overflow user

Answered on 2020-10-15 19:30:51

I think you just need a pyro.clear_param_store() between training setups. I believe what is happening is that you trained with latent_dim=5, and then when you set latent_dim=10 the old parameters were still in Pyro's global parameter store. Note that the torch.randn(latent_dim) argument to the pyro.param() statement is only used for initialization; it is ignored if the parameter has already been initialized (i.e., it can already be found in the global parameter store).
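Below is a minimal sketch (not part of the original answer) that illustrates the parameter-store behaviour described above, using only pyro.param() and pyro.clear_param_store(); the parameter name auto_loc and the sizes follow the question.

import torch
import pyro

pyro.clear_param_store()

# First registration: the initial value torch.randn(10) is used.
pyro.param("auto_loc", torch.randn(10))
print(pyro.param("auto_loc").shape)   # torch.Size([10])

# Registering the same name again: the new initial tensor is ignored and
# the stored 10-dimensional value is returned unchanged.
pyro.param("auto_loc", torch.randn(5))
print(pyro.param("auto_loc").shape)   # still torch.Size([10])

# Clearing the store first lets the new initialization take effect.
pyro.clear_param_store()
pyro.param("auto_loc", torch.randn(5))
print(pyro.param("auto_loc").shape)   # torch.Size([5])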

Votes: 0
Original content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/54243064