I have a token filter and analyzer as shown below. However, I cannot get the original token to be preserved. For example, if I run _analyze on the word saint-louis, I only get back saintlouis, whereas I expect both saintlouis and saint-louis, since I have preserve_original set to true. The ES version I am using is 6.3.2 and the Lucene version is 7.3.1.
"analysis": {
  "filter": {
    "hyphenFilter": {
      "pattern": "-",
      "type": "pattern_replace",
      "preserve_original": "true",
      "replacement": ""
    }
  },
  "analyzer": {
    "whitespace_lowercase": {
      "filter": [
        "lowercase",
        "asciifolding",
        "hyphenFilter"
      ],
      "type": "custom",
      "tokenizer": "whitespace"
    }
  }
}

Posted on 2020-03-03 03:23:43
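For reference, the problem can be reproduced by creating a test index with the settings above and running the Analyze API against it (the index name test_index below is my assumption; any name works):

```console
PUT test_index
{
  "settings": {
    "analysis": {
      "filter": {
        "hyphenFilter": {
          "pattern": "-",
          "type": "pattern_replace",
          "preserve_original": "true",
          "replacement": ""
        }
      },
      "analyzer": {
        "whitespace_lowercase": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["lowercase", "asciifolding", "hyphenFilter"]
        }
      }
    }
  }
}

POST test_index/_analyze
{ "analyzer": "whitespace_lowercase", "text": "saint-louis" }
```

As the question describes, this returns only the single token saintlouis; the preserve_original flag has no effect on pattern_replace in this version.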
So it looks like the pattern_replace token filter does not support preserve_original, at least not in the version I am using.
I worked around it as follows:
Index definition
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "whitespace",
          "type": "custom",
          "filter": [
            "lowercase",
            "hyphen_filter"
          ]
        }
      },
      "filter": {
        "hyphen_filter": {
          "type": "word_delimiter",
          "preserve_original": "true",
          "catenate_words": "true"
        }
      }
    }
  }
}

For example, this tokenizes a word like anti-spam into antispam (hyphen removed), anti-spam (the original, preserved), anti, and spam.
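To actually apply the workaround analyzer at index time, it has to be referenced from a field mapping. A minimal sketch for ES 6.x, assuming the index above was created as my_index (the mapping type _doc and field name title are my assumptions, not from the original post):

```console
PUT my_index/_mapping/_doc
{
  "properties": {
    "title": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}
```

Documents indexed into title will then produce all four token variants shown below, so queries for either anti-spam or antispam will match.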
Analyze API call to see the generated tokens
POST /_analyze
{ "text": "anti-spam", "analyzer": "my_analyzer" }
Output of the Analyze API, i.e. the generated tokens:
{
  "tokens": [
    {
      "token": "anti-spam",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "anti",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "antispam",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 0
    },
    {
      "token": "spam",
      "start_offset": 5,
      "end_offset": 9,
      "type": "word",
      "position": 1
    }
  ]
}

https://stackoverflow.com/questions/60441801