No matter what I do, I cannot get the indexing rate above 10,000 events per second, even though a single Logstash instance pulls roughly 13,000 events per second from Kafka. I run 3 Logstash instances on different machines, all reading from the same Kafka topic.
I have set up an ELK cluster with these 3 Logstash instances reading from Kafka and sending data to my Elasticsearch cluster.
My cluster consists of 3 Logstash nodes, 3 Elasticsearch master nodes, 3 Elasticsearch client nodes, and 50 Elasticsearch data nodes.
Logstash 2.0.4
Elastic Search 5.0.2
Kibana 5.0.2

All Citrix VMs have the same configuration:
Red Hat Linux 7, Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 6 cores, 32 GB RAM, 2 TB spinning disk
Logstash output configuration:
output {
elasticsearch {
hosts => ["dataNode1:9200", "dataNode2:9200", ... up to "dataNode50:9200"]
index => "logstash-applogs-%{+YYYY.MM.dd}-1"
workers => 6
user => "uname"
password => "pwd"
}
}

The elasticsearch.yml file on the Elasticsearch data nodes:
cluster.name: my-cluster-name
node.name: node46-data-46
node.master: false
node.data: true
bootstrap.memory_lock: true
path.data: /apps/dataES1/data
path.logs: /apps/dataES1/logs
discovery.zen.ping.unicast.hosts: ["master1","master2","master3"]
network.host: hostname
http.port: 9200
The only change that I made in my **jvm.options** file is
-Xms15g
-Xmx15g

The system configuration changes I made are as follows:
vm.max_map_count=262144
In /etc/security/limits.conf I added:
elastic soft nofile 65536
elastic hard nofile 65536
elastic soft memlock unlimited
elastic hard memlock unlimited
elastic soft nproc 65536
elastic hard nproc unlimited

Indexing rate:


iotop on one of the active data nodes:
$ iotop -o
Total DISK READ : 0.00 B/s | Total DISK WRITE : 243.29 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 357.09 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
5199 be/3 root 0.00 B/s 3.92 K/s 0.00 % 1.05 % [jbd2/xvdb1-8]
14079 be/4 elkadmin 0.00 B/s 51.01 K/s 0.00 % 0.53 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13936 be/4 elkadmin 0.00 B/s 51.01 K/s 0.00 % 0.39 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13857 be/4 elkadmin 0.00 B/s 58.86 K/s 0.00 % 0.34 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13960 be/4 elkadmin 0.00 B/s 35.32 K/s 0.00 % 0.33 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13964 be/4 elkadmin 0.00 B/s 31.39 K/s 0.00 % 0.27 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
14078 be/4 elkadmin 0.00 B/s 11.77 K/s 0.00 % 0.00 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch

Index details:
index shard prirep state docs store
logstash-applogs-2017.01.23-3 11 r STARTED 30528186 35gb
logstash-applogs-2017.01.23-3 11 p STARTED 30528186 30.3gb
logstash-applogs-2017.01.23-3 9 p STARTED 30530585 35.2gb
logstash-applogs-2017.01.23-3 9 r STARTED 30530585 30.5gb
logstash-applogs-2017.01.23-3 1 r STARTED 30526639 30.4gb
logstash-applogs-2017.01.23-3 1 p STARTED 30526668 30.5gb
logstash-applogs-2017.01.23-3 14 p STARTED 30539209 35.5gb
logstash-applogs-2017.01.23-3 14 r STARTED 30539209 35gb
logstash-applogs-2017.01.23-3 12 p STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 12 r STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 15 p STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 15 r STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 19 r STARTED 30533725 35.3gb
logstash-applogs-2017.01.23-3 19 p STARTED 30533725 36.4gb
logstash-applogs-2017.01.23-3 18 r STARTED 30525190 30.2gb
logstash-applogs-2017.01.23-3 18 p STARTED 30525190 30.3gb
logstash-applogs-2017.01.23-3 8 p STARTED 30526785 35.8gb
logstash-applogs-2017.01.23-3 8 r STARTED 30526785 35.3gb
logstash-applogs-2017.01.23-3 3 p STARTED 30526960 30.4gb
logstash-applogs-2017.01.23-3 3 r STARTED 30526960 30.2gb
logstash-applogs-2017.01.23-3 5 p STARTED 30522469 35.3gb
logstash-applogs-2017.01.23-3 5 r STARTED 30522469 30.8gb
logstash-applogs-2017.01.23-3 6 p STARTED 30539580 30.9gb
logstash-applogs-2017.01.23-3 6 r STARTED 30539580 30.3gb
logstash-applogs-2017.01.23-3 7 p STARTED 30535488 30.3gb
logstash-applogs-2017.01.23-3 7 r STARTED 30535488 30.4gb
logstash-applogs-2017.01.23-3 2 p STARTED 30524575 35.2gb
logstash-applogs-2017.01.23-3 2 r STARTED 30524575 35.3gb
logstash-applogs-2017.01.23-3 10 p STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 10 r STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 16 p STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 16 r STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 4 r STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 4 p STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 17 r STARTED 30528132 30.2gb
logstash-applogs-2017.01.23-3 17 p STARTED 30528132 30.4gb
logstash-applogs-2017.01.23-3 13 r STARTED 30521873 30.3gb
logstash-applogs-2017.01.23-3 13 p STARTED 30521873 30.4gb
logstash-applogs-2017.01.23-3 0 r STARTED 30520172 30.4gb
logstash-applogs-2017.01.23-3 0 p STARTED 30520172 30.5gb

I tested the data coming into Logstash by dumping it to a file: I got a 290 MB file containing 377,822 lines in 30 seconds. So Kafka is not the problem; at any given time my 3 Logstash servers together receive about 35,000 events per second, yet Elasticsearch only manages to index about 10,000 events per second.
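The arithmetic behind those throughput figures can be checked directly (a quick sanity check on the numbers quoted above, not part of the original post):

```python
# Sanity-check the throughput figures: 377,822 lines captured in 30 seconds
# on one Logstash instance, with 3 identical instances reading the same topic.
lines = 377_822
seconds = 30
logstash_nodes = 3

per_node = lines / seconds             # events/s on one Logstash instance
aggregate = per_node * logstash_nodes  # combined ingest rate across all three

print(f"per node:  {per_node:,.0f} events/s")   # ~12,594
print(f"combined:  {aggregate:,.0f} events/s")  # ~37,782
```

So the three Logstash servers together deliver roughly 35,000-38,000 events per second, well above the observed 10,000 events per second indexing ceiling.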
Can someone help me figure out this problem?
Edit: I tried bulk request sizes of the default 125, then 500, 1000, and 10000, but I still saw no improvement in indexing speed.
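For context, each bulk request Logstash sends is an NDJSON payload against Elasticsearch's `_bulk` API, and the batch size controls how many documents travel per HTTP request. A minimal sketch of how documents get chunked into such payloads (hypothetical helper and sample documents, not the author's code or Logstash internals):

```python
import json

def build_bulk_payload(docs, index, batch_size):
    """Chunk documents into _bulk NDJSON payloads of at most batch_size docs each."""
    for start in range(0, len(docs), batch_size):
        lines = []
        for doc in docs[start:start + batch_size]:
            lines.append(json.dumps({"index": {"_index": index}}))  # action line
            lines.append(json.dumps(doc))                           # source line
        # _bulk request bodies must end with a trailing newline
        yield "\n".join(lines) + "\n"

# Example: 1000 documents with a batch size of 500 -> 2 requests
docs = [{"msg": f"log line {i}"} for i in range(1000)]
payloads = list(build_bulk_payload(docs, "logstash-applogs-2017.01.23-1", 500))
print(len(payloads))            # 2
print(payloads[0].count("\n"))  # 1000 (500 action lines + 500 source lines)
```

Larger batches mean fewer HTTP round trips but bigger payloads per request; since neither small nor very large batches helped here, the bottleneck was evidently elsewhere.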
Posted on 2018-04-03 10:22:26
I improved the indexing rate by moving to larger data node machines.
Data nodes: VMware VMs with the following configuration:
14 CPU @ 2.60GHz
64 GB RAM, with 31 GB dedicated to Elasticsearch.

The fastest disks I could use were SAN over Fibre Channel, since I could not get any SSDs or local disks.
I achieved a maximum indexing rate of 100,000 events per second, with each document around 2 to 5 KB in size.
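At those numbers the byte-level write load is substantial, which is why disk speed mattered so much. A back-of-the-envelope calculation, assuming the 2-5 KB document sizes stated above:

```python
# Rough ingest bandwidth implied by 100,000 events/s at 2-5 KB per document,
# before accounting for replica shards, which double the bytes written.
events_per_sec = 100_000
doc_kb_min, doc_kb_max = 2, 5

mb_per_sec_min = events_per_sec * doc_kb_min / 1024  # ~195 MB/s
mb_per_sec_max = events_per_sec * doc_kb_max / 1024  # ~488 MB/s
print(f"primary-only ingest: {mb_per_sec_min:.0f}-{mb_per_sec_max:.0f} MB/s")
```

Sustaining hundreds of MB/s of writes is far beyond what the original 2 TB spinning disks could deliver, consistent with the move to faster SAN storage being the fix.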
https://stackoverflow.com/questions/41880792