
Hyperledger Fabric orderers won't connect to Kafka brokers from another machine

Stack Overflow user
Asked on 2018-05-09 02:05:43
1 answer · 1.6K views · 0 votes

I have a Hyperledger Fabric network set up on AWS, with the various components on different EC2 machines. I can move anything wherever I want, except for the Kafka brokers and the orderers. They seem to need to sit on the same machine.

If I spin up my ZooKeeper nodes, Kafka brokers, and orderers with a single compose file on one EC2 machine, everything works fine. So I then tried spinning up ZooKeeper and the Kafka brokers on one EC2 machine, giving them time to settle, and then spinning up my orderers on a different EC2 machine.

When the orderers spin up, they do seem to connect to the brokers at first: the broker logs show topic creation, partitions, and so on for testchainid (the dummy channel that each orderer creates).

[2018-05-09 01:48:10,978] INFO Topic creation {"version":1,"partitions":{"0":[2,3,0]}} (kafka.admin.AdminUtils$)
[2018-05-09 01:48:10,988] INFO [KafkaApi-0] Auto creation of topic testchainid with 1 partitions and replication factor 3 is successful (kafka.server.KafkaApis)
[2018-05-09 01:48:10,999] INFO Topic creation {"version":1,"partitions":{"0":[2,3,0]}} (kafka.admin.AdminUtils$)
[2018-05-09 01:48:11,059] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-05-09 01:48:11,062] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-05-09 01:48:11,152] INFO Loading producer state from offset 0 for partition testchainid-0 with message format version 2 (kafka.log.Log)
[2018-05-09 01:48:11,159] INFO Completed load of log testchainid-0 with 1 log segments, log start offset 0 and log end offset 0 in 49 ms (kafka.log.Log)
[2018-05-09 01:48:11,164] INFO Created log for partition [testchainid,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 103809024, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-05-09 01:48:11,165] INFO [Partition testchainid-0 broker=0] No checkpointed highwatermark is found for partition testchainid-0 (kafka.cluster.Partition)
[2018-05-09 01:48:11,165] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-05-09 01:48:11,167] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions testchainid-0 (kafka.server.ReplicaFetcherManager)
[2018-05-09 01:48:11,241] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2018-05-09 01:48:11,242] INFO [ReplicaFetcherManager on broker 0] Added fetcher for partitions List([testchainid-0, initOffset 0 to broker BrokerEndPoint(2,0fb5eecc47b8,9092)] ) (kafka.server.ReplicaFetcherManager)
[2018-05-09 01:48:11,306] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in testchainid-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
[2018-05-09 01:48:11,311] INFO Truncating testchainid-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)

But they don't seem to be able to stay connected.

Here is the log dump from my orderer while it tries to connect to a broker:

2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka] try -> DEBU 0f9 [channel: testchainid] Attempting to post the CONNECT message...
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 0fa producer/leader/testchainid/0 state change to [retrying-1]
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 0fb producer/leader/testchainid/0 abandoning broker 2
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka/sarama] shutdown -> DEBU 0fc producer/broker/2 shut down
2018-05-09 01:49:12.583 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0fd client/metadata fetching metadata for [testchainid] from broker kafka0.hyperfabric.xyz:9092
2018-05-09 01:49:12.778 UTC [orderer/consensus/kafka/sarama] Validate -> DEBU 0fe ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] run -> DEBU 0ff producer/broker/2 starting up
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] run -> DEBU 100 producer/broker/2 state change to [open] on testchainid/0
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] dispatch -> DEBU 101 producer/leader/testchainid/0 selected broker 2
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 102 producer/leader/testchainid/0 state change to [flushing-1]
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 103 producer/leader/testchainid/0 state change to [normal]
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 104 Failed to connect to broker 0fb5eecc47b8:9092: dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] handleError -> DEBU 105 producer/broker/2 state change to [closing] because dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 106 producer/leader/testchainid/0 state change to [retrying-2]
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 107 producer/leader/testchainid/0 abandoning broker 2
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] shutdown -> DEBU 108 producer/broker/2 shut down
2018-05-09 01:49:12.890 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 109 client/metadata fetching metadata for [testchainid] from broker kafka0.hyperfabric.xyz:9092
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] Validate -> DEBU 10a ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] run -> DEBU 10b producer/broker/2 starting up
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] run -> DEBU 10c producer/broker/2 state change to [open] on testchainid/0
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] dispatch -> DEBU 10d producer/leader/testchainid/0 selected broker 2
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 10e producer/leader/testchainid/0 state change to [flushing-2]
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 10f producer/leader/testchainid/0 state change to [normal]
2018-05-09 01:49:13.091 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 110 Failed to connect to broker 0fb5eecc47b8:9092: dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] handleError -> DEBU 111 producer/broker/2 state change to [closing] because dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 112 producer/leader/testchainid/0 state change to [retrying-3]
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 113 producer/leader/testchainid/0 abandoning broker 2
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] shutdown -> DEBU 114 producer/broker/2 shut down
2018-05-09 01:49:13.192 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 115 client/metadata fetching metadata for [testchainid] from broker kafka0.hyperfabric.xyz:9092
2018-05-09 01:49:13.387 UTC [orderer/consensus/kafka/sarama] Validate -> DEBU 116 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] run -> DEBU 117 producer/broker/2 starting up
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] run -> DEBU 118 producer/broker/2 state change to [open] on testchainid/0
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] dispatch -> DEBU 119 producer/leader/testchainid/0 selected broker 2
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 11a producer/leader/testchainid/0 state change to [flushing-3]
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 11b producer/leader/testchainid/0 state change to [normal]
2018-05-09 01:49:13.427 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 11c Failed to connect to broker 0fb5eecc47b8:9092: dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.427 UTC [orderer/consensus/kafka/sarama] handleError -> DEBU 11d producer/broker/2 state change to [closing] because dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.427 UTC [orderer/consensus/kafka] try -> DEBU 11e [channel: testchainid] Need to retry because process failed = dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host

I also noticed that it mentions the IP and port 127.0.0.11:53. I did a quick search; is that related to Docker Swarm? Do I need to use Docker Swarm to make this work? So far nothing else has needed it, which I've liked: everything else just connects by IP address.
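As a minimal check of the failure mode in the log above, one can try to resolve the advertised name from the orderer's machine (the hostname `0fb5eecc47b8` is taken from the error messages; it is the broker container's auto-generated hostname):

```shell
# Try to resolve the hostname the broker advertised in its metadata.
# From the orderer's machine this lookup fails, because "0fb5eecc47b8"
# is a container hostname that only exists on the broker's Docker host.
getent hosts 0fb5eecc47b8 || echo "cannot resolve 0fb5eecc47b8"
```

This mirrors the `no such host` errors: the orderer reaches the bootstrap broker by its public DNS name, but the metadata it gets back points at names only resolvable inside the broker's own Docker network.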

I don't know what the actual problem is, or where to start looking for an answer. Any help would be greatly appreciated.

- Update

I did some more experimenting, and I can move ZooKeeper onto another machine as well. Now it's only the Kafka brokers and the orderers that refuse to talk to each other unless they're on the same machine.

I have a feeling I haven't specified the Kafka broker addresses somewhere, which is why the logs show 127.0.0.11:53 instead of their actual addresses. Currently I only specify the addresses in the configtx file, which is used for the genesis block. Does it need to be set somewhere else as well? How does an orderer find out where the Kafka brokers are?

My configtx:

Kafka:
    # Brokers: A list of Kafka brokers to which the orderer connects
    # NOTE: Use IP:port notation
    Brokers:
        - kafka0.hyperfabric.xyz:9092
        - kafka1.hyperfabric.xyz:10092
        - kafka2.hyperfabric.xyz:11092
        - kafka3.hyperfabric.xyz:12092

1 Answer

Stack Overflow user

Accepted answer

Answered on 2018-05-10 18:27:49

The way Kafka works, the client connects to a broker from an initial list (supplied, in Fabric's case, via the channel configuration). That broker then returns a list of information about the other brokers that exist, which partitions they lead, and so on. The client then uses this second list to select the correct broker to connect to.

What you are seeing is that this "second list" uses hostnames that cannot be resolved, because the brokers are running in containers on another machine. To fix this, you can have the Kafka brokers advertise a hostname different from their container-local one. This is the advertised.host.name setting documented in the Kafka configuration docs, and there is also an advertised port you can override. For the images supplied with Fabric, you can set the KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_PORT environment variables to apply this override.
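As a sketch of that override, assuming the brokers are launched from a compose file and that kafka0.hyperfabric.xyz resolves to the first broker's EC2 machine (the service name, image tag, and network details here are illustrative, not taken from the question):

```yaml
# Illustrative compose fragment: make broker 0 advertise its external
# DNS name and host-mapped port instead of its container hostname.
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_BROKER_ID=0
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper.hyperfabric.xyz:2181
    - KAFKA_ADVERTISED_HOST_NAME=kafka0.hyperfabric.xyz
    - KAFKA_ADVERTISED_PORT=9092
  ports:
    - "9092:9092"
```

With this in place, the metadata the broker hands back to the orderer contains kafka0.hyperfabric.xyz:9092 rather than the container ID, so the orderer on the other EC2 machine can resolve and dial it.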

Set these to an externally resolvable name, and your problem should be fixed.

Votes: 1
Original content provided by Stack Overflow; translation supported by Tencent Cloud's IT translation engine.
Original link: https://stackoverflow.com/questions/50244517
