To prevent duplicate data coming in from Logstash without losing any of it, I added a projectsRowId string column to use as the document_id. However, it is not being evaluated. In my case I am trying to set the document id with document_id => "%{[document][projectsRowId]}", but for some reason it is not evaluated in Elasticsearch, even though I added ROW_NUMBER() OVER (ORDER BY a.created_at) as projectsRowId to create a unique id.
[
  {
    "_index" : "projectsv3",
    "_type" : "_doc",
    "_id" : "%{[document][projectsRowId]}",
    "_score" : 1.0,
    "_source" : {...single record}
  }
]
I don't know why the document id is not being applied. I am using Elasticsearch 7, and ECS compatibility is also disabled. I have tried other approaches as well, such as the fingerprint filter, and I also tried setting the document id with document_id => "%{projectsRowId}", but in all cases it is not evaluated.
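For reference, the fingerprint approach mentioned above would look roughly like the sketch below. The source field name, SHA1 method, and metadata target are assumptions, not the exact config that was tried; note that the same field-name caveat applies to whatever name the jdbc input actually emits:

```conf
filter {
  fingerprint {
    # Hash the unique column; storing the hash under @metadata
    # keeps it out of the indexed document
    source => ["projectsRowId"]
    target => "[@metadata][fingerprint]"
    method => "SHA1"
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "projectsv3"
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```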
input {
  jdbc {
    jdbc_driver_library => "C:\\ElasticStack\\mysql-connector-java-8.0.24\\mysql-connector-java-8.0.24.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # mysql jdbc connection string to our database, mydb
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/corrabla_sercweb"
    # The user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    clean_run => true
    # use_column_value => true
    # tracking_column => "%{[@metadata][fingerprint]}"
    # tracking_column_type => "numeric"
    # our query to fetch people details
    statement => "select ROW_NUMBER() OVER (
        ORDER BY a.created_at
      ) as projectsRowId , (a.created_at), tr.report_number as 'tech_report_number', tr.file_s3 as 'tech_report_file_name', tr.abstract as 'tech_report_abstract' , c.prefix as 'piPrefix' , c.first_name as 'piFirstName', c.middle_name as 'piMiddleName' ,c.last_name as 'piLastName', b.person_id, d.prefix as 'coPiPrefix' "
    # use_column_value => true
    # tracking_column => id
    # tracking_column_type => "numeric"
  }
}
output {
  elasticsearch {
    action => "create"
    hosts => "http://127.0.0.1:9200"
    index => "projectsv3"
    doc_as_upsert => true
    document_id => "%{[document][projectsRowId]}"
  }
}

Posted on 2022-02-11 19:03:33
By default, the jdbc input folds column names to lowercase, so your event will have a field called projectsrowid, not projectsRowId. If you set lowercase_column_names => false on the input, then document_id => "%{projectsRowId}" will work.
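Putting that together, a corrected version of the input and output sections might look like the sketch below. The statement is abbreviated with ...; everything except lowercase_column_names and the document_id reference is taken from the question:

```conf
input {
  jdbc {
    jdbc_driver_library => "C:\\ElasticStack\\mysql-connector-java-8.0.24\\mysql-connector-java-8.0.24.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/corrabla_sercweb"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    # Keep column names exactly as the query returns them,
    # so projectsRowId is not folded to projectsrowid
    lowercase_column_names => false
    statement => "select ROW_NUMBER() OVER (ORDER BY a.created_at) as projectsRowId, ..."
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "projectsv3"
    # Top-level field reference; the jdbc input places columns at the
    # top level of the event, not under a [document] object
    document_id => "%{projectsRowId}"
  }
}
```

Two details from the original output block are dropped here on purpose: doc_as_upsert only takes effect with action => "update", and with action => "create" a second run would fail on existing ids, so with the default index action a stable document_id alone is enough to make repeated runs overwrite instead of duplicate.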
https://stackoverflow.com/questions/71085144