
sklearn permutation importance yields non-zero values for features with zero coefficients in the model
Stack Overflow user
Asked on 2022-05-13 08:40:48
1 answer · 41 views · 0 followers · 2 votes

I am confused by sklearn's permutation_importance function. I fitted a pipeline with regularised logistic regression, which resulted in several feature coefficients being 0. However, when I compute the permutation importance of the features on the test set, some of these zero-coefficient features get non-zero importance values. How is that possible if they play no role in the classifier?

Here is some example code and data:

import numpy as np    
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold
import scipy.stats as stats
from scipy.stats import loguniform  # sklearn.utils.fixes.loguniform has been removed in newer scikit-learn releases
from sklearn.preprocessing import StandardScaler
from sklearn.impute import KNNImputer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance


# create example data with missings
X, y = make_classification(n_samples = 500,
                           n_features = 100,
                           n_informative = 25,
                           n_redundant = 75,
                           random_state = 0)
c = 10000 # number of missings
X.ravel()[np.random.choice(X.size, c, replace = False)] = np.nan # introduce random missings
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.2, random_state = 0)

folds = 5
repeats = 5
n_iter = 25
rskfold = RepeatedStratifiedKFold(n_splits = folds, n_repeats = repeats, random_state = 1897)

scl = StandardScaler()
imp = KNNImputer(n_neighbors = 5, weights = 'uniform')
sgdc = SGDClassifier(loss = 'log_loss', penalty = 'elasticnet', class_weight = 'balanced', random_state = 0)  # loss was named 'log' before scikit-learn 1.1

pipe = Pipeline([('scaler', scl),
                 ('imputer', imp),
                 ('clf', sgdc)])
param_rand = {'clf__l1_ratio': stats.uniform(0, 1),
              'clf__alpha': loguniform(0.001, 1)}

m = RandomizedSearchCV(pipe, param_rand, n_iter = n_iter, cv = rskfold, scoring = 'accuracy', random_state = 0, verbose = 1, n_jobs = -1)
m.fit(Xtrain, ytrain)

coefs = m.best_estimator_.named_steps['clf'].coef_
print('Number of non-zero feature coefficients in classifier:')
print(np.sum(coefs != 0))

imps = permutation_importance(m, Xtest, ytest, n_repeats = 25, random_state = 0, n_jobs = -1)

print('Number of non-zero feature importances after permutations:')
print(np.sum(imps['importances_mean'] != 0))

You will see that the second printed number does not match the first.

Any help is much appreciated!


1 Answer

Stack Overflow user

Accepted answer

Posted on 2022-05-13 14:20:45

Because you have a KNNImputer. Features whose coefficients are zero in the model still influence how the other columns are imputed, so permuting them changes the imputed values of the whole matrix and therefore the predictions, which gives them non-zero permutation importance.
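The effect can be demonstrated in isolation, without the asker's pipeline. Below is a minimal sketch on synthetic data (array shapes and missing-value rate are arbitrary choices): after permuting only column 0 of the test matrix, the KNN-imputed values of the *other* columns change, because column 0 participates in the nearest-neighbour distances used for imputation.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Training data with ~20% missing values
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.2] = np.nan
imp = KNNImputer(n_neighbors=5).fit(X)

# Test data, also with missing values
Xtest = rng.normal(size=(50, 5))
Xtest[rng.random(Xtest.shape) < 0.2] = np.nan

baseline = imp.transform(Xtest)

# Permute column 0 only, exactly as permutation_importance does, then re-impute
Xperm = Xtest.copy()
Xperm[:, 0] = rng.permutation(Xperm[:, 0])
permuted = imp.transform(Xperm)

# The imputed values in columns 1..4 differ, even though column 0 itself
# was the only one shuffled: its values enter the neighbour distances
changed = bool(np.any(baseline[:, 1:] != permuted[:, 1:]))
print(changed)
```

So even if a downstream classifier assigns column 0 a zero coefficient, shuffling it perturbs the imputed inputs to the classifier, and the permutation importance of that column can be non-zero.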

1 vote
Original page content provided by Stack Overflow; translation supported by Tencent Cloud.
Original link:

https://stackoverflow.com/questions/72226716
