Environment: Google
I am tokenizing text with TensorFlow Text:
import tensorflow as tf
import tensorflow_text as tf_text  # requires the tensorflow-text package

docs = tf.data.Dataset.from_tensor_slices([['Never tell me the odds.'], ["It's a trap!"]])
tokenizer = tf_text.WhitespaceTokenizer()  # assumed tokenizer; the original line was truncated
tokenized_docs = docs.map(lambda x: tokenizer.tokenize(x))
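For reference, a whitespace tokenizer simply splits each string on runs of whitespace. A minimal pure-Python sketch of that behavior (the `whitespace_tokenize` helper is hypothetical, not part of the TF Text API) shows what token lists to expect for the two documents above:

```python
def whitespace_tokenize(doc):
    # Split on runs of whitespace; punctuation stays attached to its word,
    # which mirrors simple whitespace tokenization.
    return doc.split()

docs = ['Never tell me the odds.', "It's a trap!"]
tokens = [whitespace_tokenize(d) for d in docs]
print(tokens)
# → [['Never', 'tell', 'me', 'the', 'odds.'], ["It's", 'a', 'trap!']]
```

Note that "odds." and "trap!" keep their punctuation; a subword or regex-based tokenizer would split those differently.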