I am looking for an NMF implementation with a Python interface that handles both missing data and zeros.
I don't want to impute the missing values before starting the factorization; I want them to be ignored in the function being minimized.
It seems that scikit-learn, nimfa, graphlab and mahout do not offer such an option.
Thanks!
Posted on 2014-07-06 22:05:00
Using this Matlab to python code conversion sheet, I was able to rewrite NMF from the Matlab toolbox library.
I had to factor a 40k × 1k matrix with a sparsity of 0.7%. With 500 latent features, 100 iterations took my machine 20 minutes.
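The multiplicative updates in the code below (see the Matlab comments) minimise the standard masked Frobenius objective, so missing entries simply do not contribute to the loss:

minimize ‖M ⊙ (X − A×Y)‖²_F, such that A ≥ 0 and Y ≥ 0

where M is the binary mask of observed entries and ⊙ denotes element-wise multiplication.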
Here is the method:
import numpy as np
from scipy import linalg
from numpy import dot

def nmf(X, latent_features, max_iter=100, error_limit=1e-6, fit_error_limit=1e-6):
    """
    Decompose X to A*Y
    """
    eps = 1e-5
    print('Starting NMF decomposition with {} latent features and {} iterations.'.format(latent_features, max_iter))
    X = X.toarray()  # I am passing in a scipy sparse matrix
    # mask: 1 where an entry is observed (non-zero), 0 where it is missing
    mask = np.sign(X)

    # initial matrices. A is random [0,1] and Y is A\X.
    rows, columns = X.shape
    A = np.random.rand(rows, latent_features)
    A = np.maximum(A, eps)

    Y = linalg.lstsq(A, X)[0]
    Y = np.maximum(Y, eps)

    masked_X = mask * X
    X_est_prev = dot(A, Y)
    for i in range(1, max_iter + 1):
        # ===== updates =====
        # Matlab: A=A.*(((W.*X)*Y')./((W.*(A*Y))*Y'));
        top = dot(masked_X, Y.T)
        bottom = dot(mask * dot(A, Y), Y.T) + eps
        A *= top / bottom
        A = np.maximum(A, eps)
        # print('A', np.round(A, 2))

        # Matlab: Y=Y.*((A'*(W.*X))./(A'*(W.*(A*Y))));
        top = dot(A.T, masked_X)
        bottom = dot(A.T, mask * dot(A, Y)) + eps
        Y *= top / bottom
        Y = np.maximum(Y, eps)
        # print('Y', np.round(Y, 2))

        # ==== evaluation ====
        if i % 5 == 0 or i == 1 or i == max_iter:
            print('Iteration {}:'.format(i), end=' ')
            X_est = dot(A, Y)
            err = mask * (X_est_prev - X_est)
            fit_residual = np.sqrt(np.sum(err ** 2))
            X_est_prev = X_est

            curRes = linalg.norm(mask * (X - X_est), ord='fro')
            print('fit residual', np.round(fit_residual, 4), end=' ')
            print('total residual', np.round(curRes, 4))
            if curRes < error_limit or fit_residual < fit_error_limit:
                break

    return A, Y

Here I pass a SciPy sparse matrix as input, and the toarray() method turns the missing values into 0; the mask is therefore created with the numpy.sign() function. If you have NaN values instead, you can get the same mask with the numpy.isnan() function.
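For completeness, here is a small usage sketch of the function above; the toy matrix and parameter values are illustrative only:

import numpy as np
from scipy import sparse

# Toy example: a small sparse matrix in which zeros mark the missing entries.
X = sparse.random(200, 50, density=0.1, format='csr', random_state=0)

A, Y = nmf(X, latent_features=10, max_iter=50)
X_hat = np.dot(A, Y)  # dense reconstruction, including predictions for the missing cells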
Posted on 2017-04-11 12:31:10
Scipy has a method for solving the non-negative least squares problem (NNLS). In this answer, I reproduce my blogpost on using scipy's NNLS for non-negative matrix factorization. You may also be interested in my other blog posts, which use autograd, Tensorflow and CVXPY for NNMF.
Goal: Our goal is, given a matrix A, to decompose it into two non-negative factors, as follows:
A (M×N) ≈ W (M×K) × H (K×N), such that W (M×K) ≥ 0 and H (K×N) ≥ 0

Overview
Our solution consists of two steps. First, we fix W and learn H, given A. Next, we fix H and learn W, given A. We repeat this procedure iteratively. Fixing one variable and learning the other (in this setting) is known as alternating least squares, since the problem reduces to a least squares problem. However, it is important to note that since we want to constrain W and H to be non-negative, we use NNLS instead of ordinary least squares.
Step 1: Learn H, given A and W

(Figure from the blog post: a column-wise view of A ≈ W × H.)

Using the figure above, we can learn each column of H using the corresponding column of A and the matrix W:

H[:,j] = NNLS(W, A[:,j])

Handling missing entries in A

In a collaborative filtering problem, A is usually a user-item matrix with many missing entries; these correspond to items that a user has not rated. We can modify our formulation to account for these missing entries. Assuming that M′ ≤ M entries of A[:,j] have observed data, we now modify the equation above as:

H[:,j] = NNLS(W[mask], A[:,j][mask])

where the mask is found by considering only the M′ observed entries.
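As a standalone illustration of this masked per-column solve (the toy data below is made up; the full alternating loop appears in the code example later):

import numpy as np
from scipy.optimize import nnls

# Toy data: one column a_j of A, with a missing entry marked as NaN.
W = np.abs(np.random.rand(5, 2))
a_j = np.array([0.3, np.nan, 0.7, 0.1, 0.9])

mask = ~np.isnan(a_j)               # keep only the observed rows
h_j, _ = nnls(W[mask], a_j[mask])   # non-negative least squares on observed rows only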
Step 2: Learn W, given A and H

Symmetrically, each row of W can be learned from the observed entries of the corresponding row of A and the matrix H:

W[j,:] = NNLS(Hᵀ[mask], A[j,:][mask])
Code example
Imports
import numpy as np
import pandas as pd
Creating the matrix to be factorised
M, N = 20, 10
np.random.seed(0)
A_orig = np.abs(np.random.uniform(low=0.0, high=1.0, size=(M,N)))
print(pd.DataFrame(A_orig).head())
0 1 2 3 4 5 6 7 8 9
0 0.548814 0.715189 0.602763 0.544883 0.423655 0.645894 0.437587 0.891773 0.963663 0.383442
1 0.791725 0.528895 0.568045 0.925597 0.071036 0.087129 0.020218 0.832620 0.778157 0.870012
2 0.978618 0.799159 0.461479 0.780529 0.118274 0.639921 0.143353 0.944669 0.521848 0.414662
3 0.264556 0.774234 0.456150 0.568434 0.018790 0.617635 0.612096 0.616934 0.943748 0.681820
4 0.359508 0.437032 0.697631 0.060225 0.666767 0.670638 0.210383 0.128926 0.315428 0.363711
Masking a few entries
A = A_orig.copy()
A[0, 0] = np.nan
A[3, 1] = np.nan
A[6, 3] = np.nan
A_df = pd.DataFrame(A)
print(A_df.head())
0 1 2 3 4 5 6 7 8 9
0 NaN 0.715189 0.602763 0.544883 0.423655 0.645894 0.437587 0.891773 0.963663 0.383442
1 0.791725 0.528895 0.568045 0.925597 0.071036 0.087129 0.020218 0.832620 0.778157 0.870012
2 0.978618 0.799159 0.461479 0.780529 0.118274 0.639921 0.143353 0.944669 0.521848 0.414662
3 0.264556 NaN 0.456150 0.568434 0.018790 0.617635 0.612096 0.616934 0.943748 0.681820
4 0.359508 0.437032 0.697631 0.060225 0.666767 0.670638 0.210383 0.128926 0.315428 0.363711
Defining the matrices W and H
K = 4
W = np.abs(np.random.uniform(low=0, high=1, size=(M, K)))
H = np.abs(np.random.uniform(low=0, high=1, size=(K, N)))
W = np.divide(W, K*W.max())
H = np.divide(H, K*H.max())
pd.DataFrame(W).head()
0 1 2 3
0 0.078709 0.175784 0.095359 0.045339
1 0.006230 0.016976 0.171505 0.114531
2 0.135453 0.226355 0.250000 0.054753
3 0.167387 0.066473 0.005213 0.191444
4 0.080785 0.096801 0.148514 0.209789
pd.DataFrame(H).head()
0 1 2 3 4 5 6 7 8 9
0 0.074611 0.216164 0.157328 0.003370 0.088415 0.037721 0.250000 0.121806 0.126649 0.162827
1 0.093851 0.034858 0.209333 0.048340 0.130195 0.057117 0.024914 0.219537 0.247731 0.244654
2 0.230833 0.197093 0.084828 0.020651 0.103694 0.059133 0.033735 0.013604 0.184756 0.002910
3 0.196210 0.037417 0.020248 0.022815 0.171121 0.062477 0.107081 0.141921 0.219119 0.185125
Defining the cost that we want to minimise
def cost(A, W, H):
    from numpy import linalg
    WH = np.dot(W, H)
    A_WH = A - WH
    return linalg.norm(A_WH, 'fro')

However, since A has missing entries, we have to define the cost in terms of the entries that are present in A:
def cost(A, W, H):
    from numpy import linalg
    mask = pd.DataFrame(A).notnull().values
    WH = np.dot(W, H)
    WH_mask = WH[mask]
    A_mask = A[mask]
    A_WH_mask = A_mask - WH_mask
    # Since A_WH_mask is now a vector, we use the L2 norm instead of the Frobenius norm for matrices
    return linalg.norm(A_WH_mask, 2)

Let's see the cost for the initial, randomly assigned values of W and H:
cost(A, W, H)
7.3719938519859509
Alternating NNLS procedure
num_iter = 1000
num_display_cost = max(int(num_iter/10), 1)

from scipy.optimize import nnls

for i in range(num_iter):
    if i % 2 == 0:
        # Learn H, given A and W
        for j in range(N):
            mask_rows = pd.Series(A[:,j]).notnull()
            H[:,j] = nnls(W[mask_rows], A[:,j][mask_rows])[0]
    else:
        # Learn W, given A and H
        for j in range(M):
            mask_rows = pd.Series(A[j,:]).notnull()
            W[j,:] = nnls(H.transpose()[mask_rows], A[j,:][mask_rows])[0]
    WH = np.dot(W, H)
    c = cost(A, W, H)
    if i % num_display_cost == 0:
        print(i, c)
0 4.03939072472
100 2.38059096458
200 2.35814781954
300 2.35717011529
400 2.35711130357
500 2.3571079918
600 2.35710729854
700 2.35710713129
800 2.35710709085
900 2.35710708109
A_pred = pd.DataFrame(np.dot(W, H))
A_pred.head()
0 1 2 3 4 5 6 7 8 9
0 0.564235 0.677712 0.558999 0.631337 0.536069 0.621925 0.629133 0.656010 0.839802 0.545072
1 0.788734 0.539729 0.517534 1.041272 0.119894 0.448402 0.172808 0.658696 0.493093 0.825311
2 0.749886 0.575154 0.558981 0.931156 0.270149 0.502035 0.287008 0.656178 0.588916 0.741519
3 0.377419 0.743081 0.370408 0.637094 0.071684 0.529433 0.767696 0.628507 0.832910 0.605742
4 0.458661 0.327143 0.610012 0.233134 0.685559 0.377750 0.281483 0.269960 0.468756 0.114950
Let's look at the values of the masked entries.
A_pred.values[~pd.DataFrame(A).notnull().values]
array([ 0.56423481, 0.74308143, 0.10283106])
Original values were:
A_orig[~pd.DataFrame(A).notnull().values]
array([ 0.5488135 , 0.77423369, 0.13818295])发布于 2021-08-13 08:17:35
Here is a fairly fast and simple solution, based on gradient descent using a quasi-Newton method:
import numpy.random as rd
from autograd_minimize import minimize
import numpy as np
import tensorflow as tf

def low_rank_matrix_factorization(X: np.array, rank: int = 5, l2_reg: float = 0.,
                                  positive: bool = False,
                                  opt_kwargs: dict = {'method': 'L-BFGS-B', 'tol': 1e-12}) -> np.array:
    """Factorises a matrix X into the product of two matrices.
    The matrix can have nans.

    :param X: Input matrix
    :type X: np.array
    :param rank: rank of matrix decomposition, defaults to 5
    :type rank: int, optional
    :param l2_reg: l2 regularisation for the submatrices, defaults to 0.
    :type l2_reg: float, optional
    :param positive: if true, the matrices must have positive coefficients, defaults to False
    :type positive: bool, optional
    :param opt_kwargs: parameters for the optimizer, defaults to {'method': 'L-BFGS-B', 'tol': 1e-12}
    :type opt_kwargs: dict, optional
    :return: completed matrix
    :rtype: np.array
    """
    # Binary mask of observed entries; NaNs are replaced by 0 so they drop out of the loss.
    mask = tf.constant(~np.isnan(X), dtype=tf.float32)
    X_ = np.nan_to_num(X.copy(), 0)
    X_t = tf.constant(X_, dtype=tf.float32)
    Npos = tf.reduce_sum(mask)

    def model(U=None, V=None):
        # Masked mean squared error plus l2 regularisation of the factors.
        return tf.reduce_sum(((U @ V) - X_t)**2 * mask) / Npos + \
               (tf.reduce_mean(U**2) + tf.reduce_mean(V**2)) * l2_reg

    x0 = {'U': rd.random((X_t.shape[0], rank)),
          'V': rd.random((rank, X_t.shape[1]))}

    if positive:
        opt_kwargs['bounds'] = {'U': (0, None), 'V': (0, None)}

    res = minimize(model, x0, **opt_kwargs, backend='tf')

    # Keep the observed entries; fill the missing ones with the low-rank prediction.
    return np.where(np.isnan(X), res.x['U'] @ res.x['V'], X)

https://stackoverflow.com/questions/22767695
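A short usage sketch (the input matrix below is synthetic, chosen only to illustrate the call):

import numpy as np

np.random.seed(0)
# Synthetic low-rank matrix with roughly 20% of its entries removed.
X = np.random.rand(30, 3) @ np.random.rand(3, 15)
X[np.random.rand(*X.shape) < 0.2] = np.nan

# positive=True constrains both factors to be non-negative (i.e. an NMF).
X_completed = low_rank_matrix_factorization(X, rank=3, positive=True)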