
Highly available Graylog (MongoDB, Elasticsearch) logging system with two data centers
Stack Overflow user
Asked 2016-12-13 16:32:17
1 answer · 1.2K views · Score: 0

I need to configure a highly available Graylog2 cluster split across 2 data centers. If the first data center goes completely down, the second must keep running, and vice versa. (The load balancer sits off-site, in front of the frontends.)

For example, each data center could run 1 Elasticsearch, 1 Graylog, and 2 MongoDB instances. In total that gives 2 Elasticsearch, 2 Graylog, and 4 MongoDB instances.

As I read in the MongoDB documentation, I need an odd number of voters. So assume only 3 voters (2 in the first data center, 1 in the second).

With some configuration, Elasticsearch works as expected. But MongoDB does not :(

So, is a highly available configuration across 2 data centers possible, one that survives the complete failure of either data center?

Finally, I want to share my configuration. Note: my current setup has only 2 MongoDB instances.

Thanks..

Elasticsearch, first node:

  cluster.name: graylog
  node.name: graylog-1
  network.host: 0.0.0.0
  http.port: 9200
  discovery.zen.ping.multicast.enabled: false
  discovery.zen.ping.unicast.hosts: ["10.0.0.2"]
  discovery.zen.minimum_master_nodes: 1
  index.number_of_replicas: 2

Elasticsearch, second node:

  cluster.name: graylog
  node.name: graylog-2
  network.host: 0.0.0.0
  http.port: 9200
  discovery.zen.ping.multicast.enabled: false
  discovery.zen.ping.unicast.hosts: ["10.0.0.1"]
  discovery.zen.minimum_master_nodes: 1

MongoDB 1 and 2 (rs.conf()):

  {
        "_id" : "rs0",
        "version" : 4,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.0.0.1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "10.0.0.2:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("****")
        }
  }
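
To get the odd voter count mentioned above with this two-member set, one option is a third voting member. As a sketch (the host 10.0.0.3 is hypothetical; note that whichever data center holds the extra voter is the side that keeps a majority if the link between them fails):

```javascript
// In the mongo shell, connected to the current PRIMARY.
// Adds a voting arbiter: it stores no data, only breaks election ties.
rs.addArb("10.0.0.3:27017")

// Verify: the set should now report 3 voting members.
rs.conf().members.length   // 3
```

With only two data centers there is no placement of 3 voters that survives the loss of either one, which is exactly the limitation the answer below points at.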

Graylog, first node:

  is_master = true
  node_id_file = /etc/graylog/server/node-id
  password_secret = ***
  root_password_sha2 = ***
  plugin_dir = /usr/share/graylog-server/plugin
  rest_listen_uri = http://10.0.0.1:9000/api/
  web_listen_uri = http://10.0.0.1:9000/
  rotation_strategy = count
  elasticsearch_max_docs_per_index = 20000000
  elasticsearch_max_number_of_indices = 20
  retention_strategy = delete
  elasticsearch_shards = 2
  elasticsearch_replicas = 1
  elasticsearch_index_prefix = graylog
  allow_leading_wildcard_searches = false
  allow_highlighting = false
  elasticsearch_discovery_zen_ping_unicast_hosts = 10.0.0.1:9300, 10.0.0.2:9300
  elasticsearch_network_host = 0.0.0.0
  elasticsearch_analyzer = standard
  output_batch_size = 500
  output_flush_interval = 1
  output_fault_count_threshold = 5
  output_fault_penalty_seconds = 30
  processbuffer_processors = 5
  outputbuffer_processors = 3
  processor_wait_strategy = blocking
  ring_size = 65536
  inputbuffer_ring_size = 65536
  inputbuffer_processors = 2
  inputbuffer_wait_strategy = blocking
  message_journal_enabled = true
  message_journal_dir = /var/lib/graylog-server/journal
  lb_recognition_period_seconds = 3
  mongodb_uri = mongodb://10.0.0.1,10.0.0.2/graylog
  mongodb_max_connections = 1000
  mongodb_threads_allowed_to_block_multiplier = 5
  content_packs_dir = /usr/share/graylog-server/contentpacks
  content_packs_auto_load = grok-patterns.json
  proxied_requests_thread_pool_size = 32

Graylog, second node:

  is_master = false
  node_id_file = /etc/graylog/server/node-id
  password_secret = ***
  root_password_sha2 = ***
  plugin_dir = /usr/share/graylog-server/plugin
  rest_listen_uri = http://10.0.0.2:9000/api/
  web_listen_uri = http://10.0.0.2:9000/
  rotation_strategy = count
  elasticsearch_max_docs_per_index = 20000000
  elasticsearch_max_number_of_indices = 20
  retention_strategy = delete
  elasticsearch_shards = 2
  elasticsearch_replicas = 1
  elasticsearch_index_prefix = graylog
  allow_leading_wildcard_searches = false
  allow_highlighting = false
  elasticsearch_discovery_zen_ping_unicast_hosts = 10.0.0.1:9300, 10.0.0.2:9300
  elasticsearch_transport_tcp_port = 9350
  elasticsearch_network_host = 0.0.0.0
  elasticsearch_analyzer = standard
  output_batch_size = 500
  output_flush_interval = 1
  output_fault_count_threshold = 5
  output_fault_penalty_seconds = 30
  processbuffer_processors = 5
  outputbuffer_processors = 3
  processor_wait_strategy = blocking
  ring_size = 65536
  inputbuffer_ring_size = 65536
  inputbuffer_processors = 2
  inputbuffer_wait_strategy = blocking
  message_journal_enabled = true
  message_journal_dir = /var/lib/graylog-server/journal
  lb_recognition_period_seconds = 3
  mongodb_uri = mongodb://10.0.0.1,10.0.0.2/graylog
  mongodb_max_connections = 1000
  mongodb_threads_allowed_to_block_multiplier = 5
  content_packs_dir = /usr/share/graylog-server/contentpacks
  content_packs_auto_load = grok-patterns.json
  proxied_requests_thread_pool_size = 32
1 Answer

Stack Overflow user

Answered 2016-12-13 17:45:17

There are quite a few misconceptions in your configuration files.

For example, in your Elasticsearch configuration you wrote:

discovery.zen.minimum_master_nodes: 2

How is that supposed to work if one of the two ES nodes goes down?
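
For context, the quorum rule from the Elasticsearch zen discovery documentation is (master_eligible_nodes / 2) + 1, which makes a two-node cluster a dead end either way. A sketch of the arithmetic as config comments:

```yaml
# quorum = (master_eligible_nodes / 2) + 1
#   2 nodes, setting = 2: if either node fails, no master can be elected
#   2 nodes, setting = 1: each side may elect itself after a partition (split brain)
#   3 nodes, setting = 2: any single node may fail safely
discovery.zen.minimum_master_nodes: 2
```

So a third master-eligible node is the usual way out, not any value of the setting itself.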

In your Graylog configuration you wrote:

elasticsearch_shards = 2
elasticsearch_replicas = 1

How is that supposed to work if one of the two ES nodes goes down?
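
For what it's worth, `elasticsearch_replicas = 1` does put a full copy of every shard on each of the two data nodes, since Elasticsearch never co-locates a primary with its own replica. A sketch of the layout implied by the question's settings (shard names illustrative):

```yaml
# graylog_0 index, as created by the settings above:
#   elasticsearch_shards   = 2  ->  primaries P0, P1
#   elasticsearch_replicas = 1  ->  replicas  R0, R1
# Resulting placement across the two nodes:
#   graylog-1: P0, R1
#   graylog-2: P1, R0
```

So the data itself could survive one node's loss; the master-election quorum is what breaks first.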

In short: building a highly available cluster with autonomous parts across two different data centers (over a WAN) is not trivial.

I'd suggest a different architecture instead, e.g. using RabbitMQ or Apache Kafka to buffer the log messages and having Graylog (running in one data center) pull the messages from there.
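
The buffering idea can be sketched at the message level. The snippet below only builds a GELF 1.1 payload, the JSON format Graylog ingests; actually publishing it to a Kafka or RabbitMQ topic and consuming it with a Graylog input is left out, and the host and message values are made up for illustration:

```python
import json
import time

def build_gelf(host: str, short_message: str, level: int = 6) -> str:
    """Build a minimal GELF 1.1 payload as a JSON string.

    Only the mandatory fields plus a syslog severity (6 = informational)
    are included; custom fields would need a leading underscore.
    """
    payload = {
        "version": "1.1",          # required by the GELF spec
        "host": host,              # origin of the message
        "short_message": short_message,
        "timestamp": time.time(),  # seconds since epoch, may be fractional
        "level": level,            # syslog severity
    }
    return json.dumps(payload)

# A producer in either data center would publish this string to a queue
# topic; the single Graylog cluster then consumes it at its own pace,
# so a data-center outage only delays delivery instead of losing logs.
msg = build_gelf("app-dc1-01", "user login ok")
```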

Score: 0
Original page content provided by Stack Overflow. Source:

https://stackoverflow.com/questions/41116636
