I am trying to use MATLAB's princomp for dimensionality reduction, but I am not sure I am doing it right.
Below is my test code; in particular, I am not sure the projection step is correct:
A = rand(4,3)
AMean = mean(A)
[n m] = size(A)
Ac = (A - repmat(AMean,[n 1]))
pc = princomp(A) %columns of pc are the principal components
k = 2; %Number of first principal components
A_pca = Ac * pc(:,1:k) %projection onto the first k components (columns of pc)
reconstructedA = A_pca * pc(:,1:k)'
err = reconstructedA - Ac %reconstruction error ('error' would shadow a built-in)
My code for face recognition with the ORL dataset:
%load orl_data 400x768 double matrix (400 images 768 features)
%make labels
orl_label = [];
for i = 1:40
orl_label = [orl_label;ones(10,1)*i];
end
n = size(orl_data,1);
k = randperm(n);
s = round(0.25*n); %Take 25% for train
%Raw pixels
%Split on test and train sets
data_tr = orl_data(k(1:s),:);
label_tr = orl_label(k(1:s),:);
data_te = orl_data(k(s+1:end),:);
label_te = orl_label(k(s+1:end),:);
tic
[nn_ind, estimated_label] = EuclDistClassifier(data_tr,label_tr,data_te);
toc
rate = sum(estimated_label == label_te)/size(label_te,1)
%Using PCA
tic
pc = princomp(data_tr);
toc
mean_face = mean(data_tr);
pc_n = 100;
f_pc = pc(:,1:pc_n); %first pc_n principal components (columns of pc)
data_pca_tr = (data_tr - repmat(mean_face, [s,1])) * f_pc;
data_pca_te = (data_te - repmat(mean_face, [n-s,1])) * f_pc;
tic
[nn_ind, estimated_label] = EuclDistClassifier(data_pca_tr,label_tr,data_pca_te);
toc
rate = sum(estimated_label == label_te)/size(label_te,1)
With enough principal components it gives me the same recognition rate as the raw pixels. With only a small number of principal components, the PCA rate is lower.
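A common heuristic for choosing the number of components (a sketch, reusing the question's data_tr) is to keep the smallest number that explains a fixed fraction of the variance, using princomp's third output, latent:

```matlab
% latent holds the variance of each principal component in decreasing
% order; pick the smallest pc_n explaining e.g. 95% of the variance.
[pc, score, latent] = princomp(data_tr);
explained = cumsum(latent) / sum(latent);
pc_n = find(explained >= 0.95, 1)
```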
Here are my questions:
Is the princomp function the best way to compute the first k principal components in MATLAB? I also tried a GPU version, simply using gpuArray:
%Test using GPU
tic
A_cpu = rand(30000,32*24);
A = gpuArray(A_cpu);
AMean = mean(A);
[n m] = size(A)
pc = princomp(A);
k = 100;
A_pca = (A - repmat(AMean,[n 1])) * pc(:,1:k);
A_pca_cpu = gather(A_pca);
toc
clear;
tic
A = rand(30000,32*24);
AMean = mean(A);
[n m] = size(A)
pc = princomp(A);
k = 100;
A_pca = (A - repmat(AMean,[n 1])) * pc(:,1:k);
toc
clear;
It runs faster, but it does not work for large matrices. Or maybe I am doing something wrong?
If I use a large matrix, it gives me:
Out of memory on device (error using gpuArray).
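One possible workaround (an untested sketch; the chunk size of 5000 is arbitrary): compute the components on the CPU and move only one block of rows to the GPU at a time, so the full matrix never has to fit in device memory.

```matlab
% Project the data in row chunks so only one chunk lives on the GPU.
A = rand(30000, 32*24);
AMean = mean(A);
pc = princomp(A);
k = 100;
chunk = 5000;
A_pca = zeros(size(A,1), k);
pc_gpu = gpuArray(pc(:,1:k));    % keep the projection matrix on the GPU
for i = 1:chunk:size(A,1)
    rows = i:min(i+chunk-1, size(A,1));
    block = gpuArray(bsxfun(@minus, A(rows,:), AMean));
    A_pca(rows,:) = gather(block * pc_gpu);
end
```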
Posted on 2013-07-07 21:34:43
"Is the princomp function the best way to compute the first k principal components in MATLAB?"
It computes a full SVD, so it will be slow on large datasets. You can speed it up significantly by specifying the number of dimensions you need up front and computing a partial SVD. The MATLAB function for a partial SVD is svds.
If svds is not fast enough for you, there is a more modern implementation here:
http://cims.nyu.edu/~tygert/software.html (MATLAB version: http://code.google.com/p/framelet-mri/source/browse/pca.m )
(See http://cims.nyu.edu/~tygert/blanczos.pdf for a description of the algorithm.)
You can control the error of your approximation by increasing the number of computed singular vectors; precise bounds are given in the linked paper. Here is an example:
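For example (a sketch, reusing the question's variable names): the right singular vectors of the centered data are the principal components, so asking svds for k singular triples yields only the first k components.

```matlab
% Partial SVD: the columns of V are the first k principal components,
% and U*S is the centered data projected onto them.
k = 100;
Xc = bsxfun(@minus, data_tr, mean(data_tr));  % center the data
[U, S, V] = svds(Xc, k);   % only the k largest singular triples
data_pca_tr = U * S;       % same result as Xc * V
```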
>> A = rand(40,30); %random rank-30 matrix
>> [U,S,V] = pca(A,2); %compute a rank-2 approximation to A
>> norm(A-U*S*V',2)/norm(A,2) %relative error
ans =
0.1636
>> [U,S,V] = pca(A,25); %compute a rank-25 approximation to A
>> norm(A-U*S*V',2)/norm(A,2) %relative error
ans =
0.0410
When you have very large data and sparse matrices, a full SVD is often impossible to compute, because the factors will never be sparse. In that case you must compute a partial SVD to fit in memory. Example:
>> A = sprandn(5000,5000,10000);
>> tic;[U,S,V]=pca(A,2);toc;
no pivots
Elapsed time is 124.282113 seconds.
>> tic;[U,S,V]=svd(A);toc;
??? Error using ==> svd
Use svds for sparse singular values and vectors.
>> tic;[U,S,V]=princomp(A);toc;
??? Error using ==> svd
Use svds for sparse singular values and vectors.
Error in ==> princomp at 86
[U,sigma,coeff] = svd(x0,econFlag); % put in 1/sqrt(n-1) later
>> tic;pc=princomp(A);toc;
??? Error using ==> eig
Use eigs for sparse eigenvalues and vectors.
Error in ==> princomp at 69
[coeff,~] = eig(x0'*x0);
Source: https://stackoverflow.com/questions/15988655