
How to set up a Kafka cluster using the Confluent Docker images

Stack Overflow user
Asked on 2019-06-06 06:20:07
1 answer · 2.4K views · 0 followers · 1 vote

I am trying to set up a 3-node Kafka cluster using the Confluent Docker images.

https://hub.docker.com/r/confluentinc/cp-kafka

https://hub.docker.com/r/confluentinc/cp-zookeeper

docker run -d --restart always --name zk-1 -e zk_id=1 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name zk-2 -e zk_id=2 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name zk-3 -e zk_id=3 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name kafka-1 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip1:9092 -p 9092:9092 confluentinc/cp-kafka:5.2.1-1
docker run -d --restart always --name kafka-2 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip2:9092 confluentinc/cp-kafka:5.2.1-1
docker run -d --restart always --name kafka-3 -e KAFKA_BROKER_ID=3 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip3:9092 confluentinc/cp-kafka:5.2.1-1

However, when the cluster forms, I see only 1 or 2 brokers, not all 3 that I created.

docker run -it --net host confluentinc/cp-kafkacat kafkacat -b localhost:9092 -L

Do I need to set the bootstrap.servers property when forming the cluster? If so, I did not see it mentioned in the Confluent documentation.

Logs

[2019-06-06 06:12:06,788] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,793] INFO [ExpirationReaper-1-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,801] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-06-06 06:12:06,850] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,868] INFO Stat of the created znode at /brokers/ids/1 is: 565,565,1559801526862,1559801526862,1,0,0,72057595854848009,196,0,565
 (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,870] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(ip1,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 565 (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,871] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-06-06 06:12:06,934] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2019-06-06 06:12:06,943] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,949] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,958] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,960] DEBUG [Controller id=1] Broker 2 has been elected as the controller, so stopping the election process. (kafka.controller.KafkaController)
[2019-06-06 06:12:06,966] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-06-06 06:12:06,966] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-06-06 06:12:06,970] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-06-06 06:12:06,983] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:9000,blockEndProducerId:9999) by writing to Zk with path version 10 (kafka.coordinator.transaction.ProducerIdManager)
[2019-06-06 06:12:07,004] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-06-06 06:12:07,005] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-06-06 06:12:07,007] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-06-06 06:12:07,031] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-06-06 06:12:07,044] INFO [SocketServer brokerId=1] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2019-06-06 06:12:07,048] INFO Kafka version: 2.2.0-cp2 (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,068] INFO Kafka commitId: 00d486623990ed9d (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,072] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2019-06-06 06:12:07,078] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,082] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,082] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,108] TRACE [Broker id=1] Cached leader info PartitionState(controllerEpoch=5, leader=2, leaderEpoch=1, isr=[2], zkVersion=1, replicas=[2], offlineReplicas=[]) for partition __confluent.support.metrics-0 in response to UpdateMetadata request sent by controller 2 epoch 6 with correlation id 0 (state.change.logger)
[2019-06-06 06:12:07,322] WARN The replication factor of topic __confluent.support.metrics is 1, which is less than the desired replication factor of 3.  If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2019-06-06 06:12:07,334] INFO ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [PLAINTEXT://ip1:9092, PLAINTEXT://ip2:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 10000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-06-06 06:12:07,366] INFO Kafka version: 2.2.0-cp2 (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,366] INFO Kafka commitId: 00d486623990ed9d (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,399] INFO Cluster ID: 0DmmTlnXQMGD52urD7rxuA (org.apache.kafka.clients.Metadata)
[2019-06-06 06:12:07,449] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-06-06 06:12:07,459] INFO Successfully submitted metrics to Kafka topic __confluent.support.metrics (io.confluent.support.metrics.submitters.KafkaSubmitter)
[2019-06-06 06:12:08,470] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)

As you can see, bootstrap.servers is populated automatically with ip1 and ip2, while ip3 is missing.
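One way to confirm which brokers actually joined is to list their registrations in ZooKeeper directly (a diagnostic sketch; it assumes the zk-1 container from the commands above is running on the host where you execute it, and relies on the zookeeper-shell tool that ships in the cp-zookeeper image):

```shell
# List the broker IDs registered in ZooKeeper. A healthy 3-node cluster
# should show [1, 2, 3]; any ID missing here never joined the cluster.
docker exec zk-1 zookeeper-shell localhost:2181 ls /brokers/ids
```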


1 Answer

Stack Overflow user

Answered on 2019-06-06 16:20:49

First of all, this setup already exists as a Compose file:

https://github.com/confluentinc/cp-docker-images/blob/5.1.2-post/examples/kafka-cluster/docker-compose.yml

However, I believe it assumes your host is Linux, since that is the only place where network: host works as expected.

In any case, a few notes:

  • For example, ip1 does not exist. It needs to be zk-1, the container's hostname.
  • Start simpler. Get a single broker running; you can run many brokers against one ZooKeeper.
  • You cannot map the same port twice on the same host. Look at each use of -p 2181:2181 -p 2888:2888 -p 3888:3888 (2888 and 3888 don't actually need to be exposed on the host), and similarly each use of -p 9092:9092.
  • You shouldn't need to run kafkacat with --net host if you add all the containers to the same network (as Docker Compose does).
  • Running multiple brokers on a single host doesn't really add many "benefits".
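Putting the notes above together, a minimal single-host sketch might look like the following. This is an assumption-laden illustration, not the official recipe: it assumes Docker is available, uses one ZooKeeper node (per the "start simpler" note), reuses the container names zk-1 and kafka-1..3 from the question, and the network name kafka-net is made up for this example.

```shell
# A user-defined bridge network lets containers resolve each other by
# name, so no ip1/ip2/ip3 placeholders are needed.
docker network create kafka-net

# Single ZooKeeper node; no server list or peer ports needed for one node.
docker run -d --name zk-1 --net kafka-net \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:5.2.1-1

# Three brokers: same ZooKeeper, distinct broker IDs, distinct *host*
# ports (9092-9094), and advertised listeners matching each container's
# DNS name on kafka-net. Note: clients using the advertised kafka-N:9092
# addresses must also run on this Docker network.
for i in 1 2 3; do
  docker run -d --name kafka-$i --net kafka-net \
    -e KAFKA_BROKER_ID=$i \
    -e KAFKA_ZOOKEEPER_CONNECT=zk-1:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-$i:9092 \
    -p $((9091 + i)):9092 \
    confluentinc/cp-kafka:5.2.1-1
done
```

With everything on one network, a metadata check can run as another container on kafka-net, e.g. pointing kafkacat at kafka-1:9092, instead of relying on --net host.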
0 votes
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/56471857
