I am trying to generate feature importance plots for a random forest using cross-validation folds. The implementation is straightforward when using only the feature (X) and target (y) data, for example:
rfc = RandomForestClassifier()
rfc.fit(X, y)
importances = pd.DataFrame({'FEATURE': X.columns, 'IMPORTANCE': np.round(rfc.feature_importances_, 3)})
importances = importances.sort_values('IMPORTANCE',ascending=False).set_index('FEATURE')
print(importances)
importances.plot.bar()
plt.show()

This produces a bar chart of the sorted feature importances.
However, how can I adapt this code to create a similar plot for each cross-validation fold (k-fold) that I create?
The code I have right now is:
# Empty list storage to collect all results for displaying as plots
mylist = []

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

kf = KFold(n_splits=3)
for train, test in kf.split(X, y):
    train_data = np.array(X)[train]
    test_data = np.array(y)[test]
    for rfc = RandomForestClassifier():
        rfc.fit(train_data, test_data)

For example, the code above uses a cross-validation technique to create 3 folds, and my goal is to create a feature importance plot for each fold, producing 3 plots in total. At the moment it gives me a loop error.
I am not sure what the most effective way is to pass each created k-fold through a random forest separately and generate a feature importance plot for each fold.
Posted on 2018-08-15 14:36:24
Here is the code that worked for me:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
from sklearn.datasets import make_classification

# classification dataset
data_x, data_y = make_classification(n_features=9)
# feature names must be declared outside the loop
feature_names = ['F{}'.format(i) for i in range(data_x.shape[1])]

kf = KFold(n_splits=10)
rfc = RandomForestClassifier()
count = 1
# test data is not needed for fitting
for train, _ in kf.split(data_x, data_y):
    rfc.fit(data_x[train, :], data_y[train])
    # sort the feature index by importance score in descending order
    importances_index_desc = np.argsort(rfc.feature_importances_)[::-1]
    feature_labels = [feature_names[i] for i in importances_index_desc]
    # plot
    plt.figure()
    plt.bar(feature_labels, rfc.feature_importances_[importances_index_desc])
    plt.xticks(rotation='vertical')
    plt.ylabel('Importance')
    plt.xlabel('Features')
    plt.title('Fold {}'.format(count))
    count = count + 1
plt.show()
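As a side note (not part of the answer above): if a single summary across folds is wanted instead of one plot per fold, the per-fold importance vectors can be stacked and averaged. A minimal sketch under the same make_classification setup, with hypothetical variable names:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_features=9, random_state=0)
kf = KFold(n_splits=10)
rfc = RandomForestClassifier(random_state=0)

# collect one importance vector per fold
fold_importances = []
for train, _ in kf.split(X, y):
    rfc.fit(X[train, :], y[train])
    fold_importances.append(rfc.feature_importances_)

fold_importances = np.array(fold_importances)  # shape: (n_folds, n_features)
mean_importance = fold_importances.mean(axis=0)
std_importance = fold_importances.std(axis=0)
```

The mean can then be drawn as a single bar chart, with the standard deviation as error bars, to show how stable each feature's importance is across folds.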

Posted on 2018-08-12 23:29:59
One reason for the error is the line rfc.fit(train_data, test_data). The second argument should be the training labels, not the test data.
As for the plotting, you could try something like the code below. I assume you know that in this case the k-fold CV is only used to select different training sets; the test data is ignored because no predictions are made:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
from sklearn.datasets import make_classification

# dummy classification dataset
X, y = make_classification(n_features=10)
# dummy feature names
feature_names = ['F{}'.format(i) for i in range(X.shape[1])]

kf = KFold(n_splits=3)
rfc = RandomForestClassifier()
count = 1
# test data is not needed for fitting
for train, _ in kf.split(X, y):
    rfc.fit(X[train, :], y[train])
    # sort the feature index by importance score in descending order
    importances_index_desc = np.argsort(rfc.feature_importances_)[::-1]
    feature_labels = [feature_names[i] for i in importances_index_desc]
    # plot
    plt.figure()
    plt.bar(feature_labels, rfc.feature_importances_[importances_index_desc])
    plt.xticks(rotation='vertical')
    plt.ylabel('Importance')
    plt.xlabel('Features')
    plt.title('Fold {}'.format(count))
    count = count + 1
plt.show()

https://stackoverflow.com/questions/51798540
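A possible alternative not mentioned in either answer: scikit-learn's cross_validate can return the fitted estimator from each fold via return_estimator=True (available from scikit-learn 0.20 onward), which avoids writing the KFold loop by hand. A sketch:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_features=10, random_state=0)

# return_estimator=True keeps the fitted model from each of the 3 folds
cv_results = cross_validate(RandomForestClassifier(random_state=0), X, y,
                            cv=3, return_estimator=True)

# each fitted model carries its own feature_importances_
for fold, model in enumerate(cv_results['estimator'], start=1):
    print('Fold {}:'.format(fold), np.round(model.feature_importances_, 3))
```

Each entry of cv_results['estimator'] can then be plotted exactly as in the loop body above.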