I am using two packages, `datasets` and `rouge_score`, to compute ROUGE-1 scores. However, the precision and recall they report are different. I would like to know which package produces the correct scores.
from rouge_score import rouge_scorer
import datasets
hyp = ['I have no car.']
ref = ['I want to buy a car.']
scorer1 = datasets.load_metric('rouge')
scorer2 = rouge_scorer.RougeScorer(['rouge1'])
results = {'precision_rouge_score': [], 'recall_rouge_score': [], 'fmeasure_rouge_score': [], \
'precision_datasets': [], 'recall_datasets': [], 'fmeasure_datasets': []}
for (h, r) in zip(hyp, ref):
    precision, recall, fmeasure = scorer2.score(h, r)['rouge1']
    results['precision_rouge_score'].append(precision)
    results['recall_rouge_score'].append(recall)
    results['fmeasure_rouge_score'].append(fmeasure)
    output = scorer1.compute(predictions=[h], references=[r])
    results['precision_datasets'].append(output['rouge1'].mid.precision)
    results['recall_datasets'].append(output['rouge1'].mid.recall)
    results['fmeasure_datasets'].append(output['rouge1'].mid.fmeasure)
print('results: ', results)

The results are as follows:
{'precision_rouge_score': [0.3333333333333333], 'recall_rouge_score': [0.5],
'fmeasure_rouge_score': [0.4],
'precision_datasets': [0.5], 'recall_datasets': [0.3333333333333333],
'fmeasure_datasets': [0.4]}

Posted on 2022-06-26 02:30:42
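One detail worth noting about these numbers (my own observation, not from either package's documentation): the two packages agree on the F-measure (0.4) while reporting precision and recall swapped relative to each other. That is exactly what you would expect if one of them is treating the hypothesis as the reference and vice versa, because the harmonic mean is symmetric in its two arguments:

```python
# The harmonic mean (F1) is symmetric: swapping precision and recall
# leaves it unchanged, which is why both packages report F1 = 0.4
# even though their precision/recall values are transposed.
def f1(p, r):
    return 2 * p * r / (p + r)

print(f1(0.5, 1 / 3))  # datasets' precision/recall -> 0.4
print(f1(1 / 3, 0.5))  # rouge_score's precision/recall -> also 0.4
```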
According to the original paper, https://aclanthology.org/W04-1013.pdf, I see this formula:
ROUGE-N = (Σ_{S ∈ References} Σ_{gram_n ∈ S} Count_match(gram_n)) / (Σ_{S ∈ References} Σ_{gram_n ∈ S} Count(gram_n))
So, for the two sentences above (hyp: "I have no car." vs. ref: "I want to buy a car."), rouge1-recall = 2 (I, car) / 6 (I, want, to, buy, a, car) = 0.333333. It seems the `datasets` package is correct.
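The arithmetic above can be checked by hand with plain Python, without either package. This is a minimal sketch; `rouge1_by_hand` is a hypothetical helper name, and its regex tokenizer is a simplification of the official ROUGE tokenizer (sufficient for this example):

```python
import re
from collections import Counter

def rouge1_by_hand(hyp, ref):
    # Lowercase and keep alphanumeric tokens only (a simplified
    # stand-in for the official ROUGE tokenizer).
    hyp_tokens = re.findall(r"[a-z0-9]+", hyp.lower())
    ref_tokens = re.findall(r"[a-z0-9]+", ref.lower())
    # Clipped unigram overlap: each token counts at most as many
    # times as it appears in the other sentence.
    overlap = sum((Counter(hyp_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(hyp_tokens)   # overlap / hypothesis length
    recall = overlap / len(ref_tokens)      # overlap / reference length
    fmeasure = (2 * precision * recall / (precision + recall)
                if overlap else 0.0)
    return precision, recall, fmeasure

print(rouge1_by_hand('I have no car.', 'I want to buy a car.'))
# precision = 2/4 = 0.5, recall = 2/6 ≈ 0.333, F1 = 0.4
```

These values (precision 0.5, recall 0.333) match the `datasets` output above, which supports the conclusion that its assignment of precision and recall is the correct one.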
https://stackoverflow.com/questions/72758470