
Unable to tune hyperparameters for CatBoostRegressor

Data Science user
Asked on 2021-08-26 08:06:35
1 answer · 495 views · 0 followers · 1 vote

I am trying to fit a CatBoostRegressor in my pipeline. When I run K-fold CV for the baseline model, everything works fine. But when I tune hyperparameters with Optuna, something very strange happens: it runs the first trial and then throws the following error:

[I 2021-08-26 08:00:56,865] Trial 0 finished with value: 0.7219653113910736 and parameters: 
{'model__depth': 2, 'model__iterations': 1715, 'model__subsample': 0.5627211605250965, 
'model__learning_rate': 0.15601805222619286}. Best is trial 0 with value: 0.7219653113910736. 
[W 2021-08-26 08:00:56,869] 

Trial 1 failed because of the following error: CatBoostError("You 
can't change params of fitted model.")
Traceback (most recent call last):

I used a similar approach with XGBRegressor and LGBM, and both worked fine. So why am I getting this error with CatBoost?

Here is my Optuna code:

import optuna
import numpy as np  # needed for np.sqrt below
from sklearn.metrics import mean_squared_error

def objective(trial):

    model__depth = trial.suggest_int('model__depth', 2, 10)
    model__iterations = trial.suggest_int('model__iterations', 100, 2000)
    model__subsample = trial.suggest_float('model__subsample', 0.0, 1.0)
    model__learning_rate = trial.suggest_float('model__learning_rate', 0.001, 0.3, log = True)

    params = {'model__depth' : model__depth,
              'model__iterations' : model__iterations,
              'model__subsample' : model__subsample, 
              'model__learning_rate' : model__learning_rate}

    pipe.set_params(**params)
    pipe.fit(train_x, train_y)
    pred = pipe.predict(test_x)

    return np.sqrt(mean_squared_error(test_y, pred))

cbr_study = optuna.create_study(direction = 'minimize')
cbr_study.optimize(objective, n_trials = 10)
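The error message suggests that once the CatBoost estimator inside the pipeline has been fitted (in Trial 0), CatBoost rejects later `set_params` calls on that same fitted instance. One workaround, not from the original post, is to clone the pipeline inside the objective so that every trial fits a fresh, unfitted copy. In this sketch, `DecisionTreeRegressor` stands in for `CatBoostRegressor` (and the random data for the real dataset) so it runs without catboost installed; the pattern is the same:

```python
import numpy as np
from sklearn.base import clone
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Toy data standing in for train_x/train_y/test_x/test_y
rng = np.random.default_rng(0)
train_x = rng.normal(size=(80, 3)); train_y = rng.normal(size=80)
test_x = rng.normal(size=(20, 3));  test_y = rng.normal(size=20)

base_pipe = Pipeline([('scale', StandardScaler()),
                      ('model', DecisionTreeRegressor(random_state=0))])

def objective_like(params):
    pipe = clone(base_pipe)   # fresh, unfitted estimator for every trial
    pipe.set_params(**params)
    pipe.fit(train_x, train_y)
    pred = pipe.predict(test_x)
    return np.sqrt(mean_squared_error(test_y, pred))

rmse = objective_like({'model__max_depth': 3})
```

Inside an Optuna `objective(trial)`, the only change from the question's code would be cloning `pipe` before `set_params`, so the fitted model from the previous trial is never reused.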

1 Answer

Data Science user

Answered on 2021-08-27 10:04:17

This appears to be a known CatBoost problem; there is at least one (now closed) issue about it on GitHub. It may be worth opening a new issue to make the developers aware of it.

In the past, I have tuned CatBoost using `BayesianOptimization` from `bayes_opt` (which, as the package name suggests, performs Bayesian optimization). The main part of the code is below; a complete example can be found here.

import catboost as cb
from bayes_opt import BayesianOptimization

# xtrain, ytrain, cat_features, esrounds and brounds are defined elsewhere
def cbfunc(border_count, l2_leaf_reg, depth, learning_rate):
    params = {
        'eval_metric':'MAE', # using MAE here, could also be RMSE or MSE
        'early_stopping_rounds': esrounds,
        'num_boost_round': brounds,
        'use_best_model': True,
        'task_type': "GPU"
    }

    params['border_count'] = int(round(border_count))  # CatBoost expects an int
    params['l2_leaf_reg'] = l2_leaf_reg
    params['depth'] = int(round(depth))                # CatBoost expects an int
    params['learning_rate'] = learning_rate

    # Cross validation   
    cv_results = cb.cv(cb.Pool(xtrain, ytrain, cat_features=cat_features),
                       params=params, fold_count=3, inverted=False,
                       partition_random_seed=5, shuffle=True,
                       logging_level='Silent')
    # bayes_opt MAXIMISES: In order to minimise MAE, I use 1/MAE as target value
    return 1/cv_results['test-MAE-mean'].min()

pbounds = { 
        'border_count': (1,255),      # int. 1-255
        'l2_leaf_reg': (0,20),        # any positive value
        'depth': (1,16),              # int. up to  16
        'learning_rate': (0.01,0.2),
    }

optimizer = BayesianOptimization(
    f=cbfunc,
    pbounds=pbounds,
    verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
    random_state=5
)

optimizer.maximize(
    init_points=2,
    n_iter=500,
)
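Since `bayes_opt` treats every parameter as continuous, the best values it reports come back as floats and need the same rounding that `cbfunc` applies before they can be used to fit a final model. A minimal sketch of reading the result (the dict below stands in for an actual `optimizer.max`, whose shape is `{'target': ..., 'params': {...}}`; the values are hypothetical, since real ones depend on the run):

```python
# Hypothetical stand-in for optimizer.max after optimizer.maximize(...)
best = {'target': 0.52,
        'params': {'border_count': 128.7, 'l2_leaf_reg': 3.2,
                   'depth': 6.4, 'learning_rate': 0.07}}

final_params = dict(best['params'])
for int_param in ('border_count', 'depth'):   # CatBoost expects integers here
    final_params[int_param] = int(round(final_params[int_param]))

# cbfunc maximised 1/MAE, so the best cross-validated MAE is 1/target
best_mae = 1 / best['target']
```

`final_params` can then be merged with the fixed settings (eval metric, early stopping, etc.) to train the final CatBoost model.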
0 votes
Original content provided by Data Science Stack Exchange.
Original link:

https://datascience.stackexchange.com/questions/100529
