
TensorFlow: Computing Precision, Recall, F1 Score

Stack Overflow user
Asked on 2022-01-05 08:20:05
2 answers · 3.6K views · 0 followers · Votes: 1

I built a BERT model with Hugging Face (based on bert-base-multilingual-cased) and want to evaluate it with precision, recall, and F1 score, since accuracy is not always the best evaluation metric.

Here is the example notebook that I modified for my use case.

Creating the training/test data:

from transformers import BertTokenizer, TFBertModel, TFBertForSequenceClassification

TEST_SPLIT = 0.1
BATCH_SIZE = 2

train_size = int(len(x) * (1-TEST_SPLIT))

tfdataset = tfdataset.shuffle(len(x))
tfdataset_train = tfdataset.take(train_size)
tfdataset_test = tfdataset.skip(train_size)

tfdataset_train = tfdataset_train.batch(BATCH_SIZE)
tfdataset_test = tfdataset_test.batch(BATCH_SIZE)

Building the model:

# optimizers and losses come from tf.keras
from tensorflow.keras import optimizers, losses

MODEL_NAME = 'bert-base-multilingual-cased'
N_EPOCHS = 2

model = TFBertForSequenceClassification.from_pretrained(MODEL_NAME)
optimizer = optimizers.Adam(learning_rate=3e-5)
loss = losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

model.fit(tfdataset_train, batch_size=BATCH_SIZE, epochs=N_EPOCHS)

Sample output:

All model checkpoint layers were used when initializing TFBertForSequenceClassification.

Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-multilingual-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch 1/2
415/415 [==============================] - 741s 2s/step - loss: 0.6652 - accuracy: 0.6321
Epoch 2/2
415/415 [==============================] - 717s 2s/step - loss: 0.6619 - accuracy: 0.6429
<keras.callbacks.History at 0x7fc970d72750>

Evaluation:

benchmarks = model.evaluate(tfdataset_test, return_dict=True, batch_size=BATCH_SIZE)
print(benchmarks)

Sample output:

93/93 [==============================] - 42s 404ms/step - loss: 0.6536 - accuracy: 0.6108
{'loss': 0.6535539627075195, 'accuracy': 0.6108108162879944}

With this I get the accuracy score. But I would like a classification report with all of the metrics mentioned above.

Does anyone know how to do this with a tf.data dataset like this?

Thanks in advance!
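One way to get such a report from a batched tf.data dataset is to collect the labels and predicted classes into NumPy arrays and hand them to scikit-learn's `classification_report` (a sketch; the collection step is shown as a hedged comment since `tfdataset_test` and `model` come from the post, and the arrays below are hypothetical stand-ins):

```python
import numpy as np
from sklearn.metrics import classification_report

# With the real objects from the post, the arrays could be collected
# roughly like this (assumption, not code from the original notebook):
#   y_true = np.concatenate([y.numpy() for _, y in tfdataset_test])
#   logits = model.predict(tfdataset_test).logits
#   y_pred = np.argmax(logits, axis=-1)

# Hypothetical stand-in arrays:
y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])

# Prints per-class precision, recall, F1, and support.
print(classification_report(y_true, y_pred))
```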


2 Answers

Stack Overflow user

Answered on 2022-01-07 20:36:58

This worked for me (found here):

from keras import backend as K

def recall_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_m(y_true, y_pred):
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))

# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['acc', f1_m, precision_m, recall_m])
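As a standalone sanity check, the same formulas can be restated in NumPy and evaluated on tiny arrays (this is a sketch, not the Keras-backend functions used in training; note also that Keras averages such custom metrics per batch, so the reported values only approximate the global scores):

```python
import numpy as np

# NumPy restatement of the Keras-backend formulas above.
def precision_np(y_true, y_pred):
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))
    return tp / (predicted_positives + 1e-7)

def recall_np(y_true, y_pred):
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    possible_positives = np.sum(np.round(np.clip(y_true, 0, 1)))
    return tp / (possible_positives + 1e-7)

y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.4, 0.8])  # predicted probabilities

print(precision_np(y_true, y_pred))  # ≈ 1.0: both rounded positives are correct
print(recall_np(y_true, y_pred))     # ≈ 0.667: 2 of 3 true positives found
```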
Votes: 1

Stack Overflow user

Answered on 2022-01-06 14:10:39

The easiest way is to use the metrics from the main tf package together with those from the tensorflow-addons package:

# pip install tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa

....

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=[tf.keras.metrics.Accuracy(),
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall(),
                       tfa.metrics.F1Score(num_classes=nb_classes,
                                           average='macro',
                                           threshold=0.5)])
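An alternative to streaming metrics at compile time is to compute everything once after prediction with scikit-learn's `precision_recall_fscore_support` (a sketch with hypothetical label arrays; with the real model you would argmax the logits from `model.predict(tfdataset_test)`):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical predicted classes and true labels, standing in for
# argmax'ed model logits and the labels from tfdataset_test.
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])

# Macro-averaged precision, recall, and F1 over the classes.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='macro')
print(precision, recall, f1)
```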
Votes: 0
Original content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/70589698
