I am following the multi-output kernel notebook from the GPflow 2.5.2 documentation. I am trying to replace the SVGP model with a VGP or GPR model, since I have very little data and don't need the sparse approximation.
I am using the SharedIndependent multi-output kernel.
With both models I get ValueErrors about mismatched dimensions in a matrix multiplication. I suspect I need to change the format of the input data, but I don't know how, so for now I pass it in the same format as for the SVGP model.
Error message for the VGP model:
ValueError: Dimensions must be equal, but are 2 and 100 for '{{node MatMul}} = BatchMatMulV2[T=DT_DOUBLE, adj_x=false, adj_y=false](Cholesky, MatMul/identity_CONSTRUCTED_AT_top_level/forward/ReadVariableOp)' with input shapes: [100,2,2], [100,2].
Error message for the GPR model:
ValueError: Dimensions must be equal, but are 2 and 100 for '{{node triangular_solve/MatrixTriangularSolve}} = MatrixTriangularSolve[T=DT_DOUBLE, adjoint=false, lower=true](Cholesky, sub)' with input shapes: [100,2,2], [100,2].
After initializing the VGP model as shown below, I also tried setting the q_mu and q_sqrt values as suggested here (did not work):
m.q_mu = np.zeros((len(x_train) * len(y_train.T), 1), dtype=gpflow.config.default_float())
m.q_sqrt = np.expand_dims(np.identity(len(x_train) * len(y_train.T), dtype=gpflow.config.default_float()), axis=0)
The code is below:
import gpflow as gpf
import numpy as np
from gpflow.ci_utils import ci_niter
def generate_data(N=100):
    X1 = np.random.rand(N, 1)
    Y1 = np.sin(6 * X1) + np.random.randn(*X1.shape) * 0.03 + 2
    Y2 = np.sin(5 * X1 + 0.7) + np.random.randn(*X1.shape) * 0.1 + 0.5
    return X1, np.concatenate((Y1, Y2), axis=1)
N=100
M=15
P=2
data = (X, Y) = generate_data(N)
# create multi-output kernel
kernel = gpf.kernels.SharedIndependent(
    gpf.kernels.Matern52(active_dims=list(range(X.shape[1]))), output_dim=P
)
# initialization of inducing input locations (M random points from the training inputs)
Zinit = np.linspace(0, 1, M)[:, None]
Z = Zinit.copy()
# create multi-output inducing variables from Z
iv = gpf.inducing_variables.SharedIndependentInducingVariables(
    gpf.inducing_variables.InducingPoints(Z)
)
m = gpf.models.SVGP(kernel, gpf.likelihoods.Gaussian(), inducing_variable=iv, num_latent_gps=P)
optimizer = gpf.optimizers.Scipy()
optimizer.minimize(
    m.training_loss_closure(data),
    variables=m.trainable_variables,
    method="l-bfgs-b",
    options={"iprint": 0, "maxiter": ci_niter(2000)},
)
# implementation of VGP
m = gpf.models.VGP(data, kernel, gpf.likelihoods.Gaussian(), num_latent_gps=P)
optimizer = gpf.optimizers.Scipy()
optimizer.minimize(
    m.training_loss,
    variables=m.trainable_variables,
    method="l-bfgs-b",
    options={"iprint": 0, "maxiter": ci_niter(2000)},
)
## implementation of gpflow.models.GPR
m = gpf.models.GPR(data, kernel)
optimizer = gpf.optimizers.Scipy()
optimizer.minimize(
    m.training_loss,
    variables=m.trainable_variables,
    method="l-bfgs-b",
    options={"iprint": 0, "maxiter": ci_niter(2000)},
)
Posted 2022-10-06 09:10:32
Unfortunately, at the moment only SVGP supports multi-output kernels.
Supporting them in more models is a commonly requested feature, but it is a surprisingly large amount of work, so it has never been done.
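This is consistent with the shape errors above: the multi-output kernel produces a batched covariance (e.g. a Cholesky factor of shape [100, 2, 2]) that the single-output code path in VGP/GPR tries to multiply with a [100, 2] tensor. A plain-numpy sketch of the same shape clash (illustrative only, not GPflow internals):

```python
import numpy as np

# A batch of 100 lower-triangular 2x2 factors, like the multi-output
# kernel's Cholesky, and a [100, 2] tensor from the single-output path.
L = np.tile(np.eye(2), (100, 1, 1))   # shape [100, 2, 2]
v = np.ones((100, 2))                 # shape [100, 2]

try:
    L @ v  # matmul treats v as one [100, 2] matrix: inner dims 2 vs 100
except ValueError:
    print("shape mismatch, as in the GPflow errors")

# What the batched op would need: a batch of [2, 1] column vectors.
out = L @ v[..., None]
print(out.shape)  # (100, 2, 1)
```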
The good news is that the simpler models do support broadcasting a single kernel over multiple output dimensions, which does exactly the same thing as the SharedIndependent multi-output kernel. Just plug in a single-output kernel and it will broadcast. For example:
import gpflow as gpf
import numpy as np
D = 1
P = 2
def generate_data(N=100):
    X1 = np.random.rand(N, D)
    Y1 = np.sin(6 * X1) + np.random.randn(*X1.shape) * 0.03 + 2
    Y2 = np.sin(5 * X1 + 0.7) + np.random.randn(*X1.shape) * 0.1 + 0.5
    return X1, np.concatenate((Y1, Y2), axis=1)
N = 100
M = 15
data = generate_data(N)
train_data = (data[0][:70], data[1][:70])
test_data = (data[0][70:], data[1][70:])
# create multi-output kernel
kernel = gpf.kernels.SharedIndependent(gpf.kernels.Matern52(), output_dim=P)
# initialization of inducing input locations (M random points from the training inputs)
Zinit = np.linspace(0, 1, M)[:, None]
Z = Zinit.copy()
# create multi-output inducing variables from Z
iv = gpf.inducing_variables.SharedIndependentInducingVariables(
    gpf.inducing_variables.InducingPoints(Z)
)
m = gpf.models.SVGP(
    kernel, gpf.likelihoods.Gaussian(), inducing_variable=iv, num_latent_gps=P
)
optimizer = gpf.optimizers.Scipy()
optimizer.minimize(
    m.training_loss_closure(train_data),
    variables=m.trainable_variables,
    method="l-bfgs-b",
    options={"iprint": 0, "maxiter": 2000},
)
print("svgp", np.mean(m.predict_log_density(test_data)))
# implementation of VGP
m = gpf.models.VGP(
    train_data, gpf.kernels.Matern52(), gpf.likelihoods.Gaussian(), num_latent_gps=P
)
optimizer = gpf.optimizers.Scipy()
optimizer.minimize(
    m.training_loss,
    variables=m.trainable_variables,
    method="l-bfgs-b",
    options={"iprint": 0, "maxiter": 2000},
)
print("vgp", np.mean(m.predict_log_density(test_data)))
## implementation of gpflow.models.GPR
m = gpf.models.GPR(
    train_data,
    gpf.kernels.Matern52(),
)
optimizer = gpf.optimizers.Scipy()
optimizer.minimize(
m.training_loss,
variables=m.trainable_variables,
method="l-bfgs-b",
options={"iprint": 0, "maxiter": 2000},
)
print("gpr", np.mean(m.predict_log_density(test_data)))
https://stackoverflow.com/questions/73963058
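The equivalence the answer relies on can be sketched without GPflow: SharedIndependent(k, output_dim=P) evaluates the same base kernel for every output, so the covariance is just the base kernel's [N, N] matrix repeated P times. A numpy sketch with a hand-rolled Matern-5/2 (the function and its parameters are illustrative, not GPflow's API):

```python
import numpy as np

def matern52(X, X2, variance=1.0, lengthscale=1.0):
    # Hand-rolled Matern-5/2 on 1-D inputs, standing in for a GP kernel.
    r = np.abs(X[:, None, 0] - X2[None, :, 0]) / lengthscale
    s = np.sqrt(5.0) * r
    return variance * (1.0 + s + s**2 / 3.0) * np.exp(-s)

P, N = 2, 6
X = np.random.default_rng(0).random((N, 1))
K = matern52(X, X)                        # single-output covariance, [N, N]
K_shared = np.broadcast_to(K, (P, N, N))  # what "shared independent" amounts to
# Every output dimension sees the identical covariance matrix.
assert all(np.allclose(K_shared[p], K) for p in range(P))
```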