
Getting a mixed-precision error when running a cell trying to fine-tune a Wav2Vec2 model on medical vocabulary

Stack Overflow user
Asked on 2021-06-16 16:57:17
1 answer · 694 views · 0 followers · score 2

Here is the code in my notebook cell:

from transformers import TrainingArguments

training_args = TrainingArguments(
  output_dir="wav2vec2-medical",
  group_by_length=True,
  per_device_train_batch_size=32,
  evaluation_strategy="steps",
  num_train_epochs=30,
  fp16=True,
  save_steps=500,
  eval_steps=500,
  logging_steps=500,
  learning_rate=1e-4,
  weight_decay=0.005,
  warmup_steps=1000,
  save_total_limit=2,
)

This is the error I'm getting; I don't know what to do about it.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-26-f9014a6221db> in <module>
      1 from transformers import TrainingArguments
      2 
----> 3 training_args = TrainingArguments(
      4   # output_dir="/content/gdrive/MyDrive/wav2vec2-base-timit-demo",
      5   output_dir="./wav2vec2-medical",

~/Library/Python/3.8/lib/python/site-packages/transformers/training_args.py in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, logging_dir, logging_strategy, logging_first_step, logging_steps, save_strategy, save_steps, save_total_limit, no_cuda, seed, fp16, fp16_opt_level, fp16_backend, fp16_full_eval, local_rank, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, sharded_ddp, deepspeed, label_smoothing_factor, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, dataloader_pin_memory, skip_memory_metrics, use_legacy_prediction_loop, push_to_hub, resume_from_checkpoint, mp_parameters)

~/Library/Python/3.8/lib/python/site-packages/transformers/training_args.py in __post_init__(self)
    609 
    610         if is_torch_available() and self.device.type != "cuda" and (self.fp16 or self.fp16_full_eval):
--> 611             raise ValueError(
    612                 "Mixed precision training with AMP or APEX (`--fp16`) and FP16 evaluation can only be used on CUDA devices."
    613             )

ValueError: Mixed precision training with AMP or APEX (`--fp16`) and FP16 evaluation can only be used on CUDA devices.

I tried running it in a Jupyter notebook on my local machine, and also on Google Colab, but I still get the same error.
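A quick way to see why the error appears on both the local machine and Colab is to check whether PyTorch actually sees a CUDA device (on Colab this requires selecting a GPU runtime under Runtime → Change runtime type). A minimal sketch, written defensively in case torch itself is not installed:

```python
import importlib.util

def cuda_available() -> bool:
    """Return True only if torch is installed and sees a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()

# On a machine with no visible CUDA GPU this returns False, which is
# exactly the condition under which fp16=True raises the ValueError above.
print("CUDA available:", cuda_available())
```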


1 Answer

Stack Overflow user
Answered on 2022-06-09 21:03:24

You should either remove fp16=True or run on a GPU; mixed precision (fp16) is a GPU-only argument.
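One way to apply that advice without keeping two configurations is to enable fp16 only when a CUDA device is actually visible. A minimal sketch; the fp16_allowed helper below is hypothetical (not part of transformers) and simply mirrors the check that TrainingArguments.__post_init__ performs:

```python
def fp16_allowed(device_type: str, fp16: bool) -> bool:
    """Hypothetical helper mirroring the transformers check:
    mixed precision (fp16) is only permitted on CUDA devices."""
    return fp16 and device_type == "cuda"

# On a CPU-only machine fp16 must stay off, otherwise TrainingArguments
# raises the ValueError shown in the question:
assert fp16_allowed("cpu", True) is False
assert fp16_allowed("cuda", True) is True

# In practice, pass the device check straight into TrainingArguments:
#   training_args = TrainingArguments(..., fp16=torch.cuda.is_available())
```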

Score 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain engine.
Original link:

https://stackoverflow.com/questions/68007097
