I am creating a stream in which the source (producer) generates roughly 12 million records over about 8 minutes, and the processor (consumer) starts consuming them. However, at around the 4-minute mark the following appears in the application's log, and it stops receiving anything beyond that point:
2018-07-11 21:59:18,811 24043857 [kafka-coordinator-heartbeat-thread | cdSomeApp] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=cdSomeApp] Marking the coordinator 10.16.17.59:9092 (id: 2147483644 rack: null) dead
2018-07-11 21:59:18,815 24043861 [cdSomeApp.cd-source.container-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=cdSomeApp] Discovered group coordinator 10.16.17.59:9092 (id: 2147483644 rack: null)
2018-07-11 21:59:18,815 24043861 [cdSomeApp.cd-source.container-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=cdSomeApp] Marking the coordinator 10.16.17.59:9092 (id: 2147483644 rack: null) dead
2018-07-11 21:59:18,930 24043976 [cdSomeApp.cd-source.container-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=cdSomeApp] Discovered group coordinator 10.16.17.59:9092 (id: 2147483644 rack: null)
2018-07-11 21:59:18,933 24043979 [cdSomeApp.cd-source.container-0-C-1] ERROR o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-2, groupId=cdSomeApp] Offset commit failed on partition cdSomeApp.cd-source-0 at offset 140802810: The coordinator is not aware of this member.
2018-07-11 21:59:18,937 24043983 [cdSomeApp.cd-source.container-0-C-1] ERROR o.s.k.listener.LoggingErrorHandler - Error while processing: null
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:787)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:735)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:814)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:794)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:507)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:353)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:268)

From what I can see, the Kafka configuration defaults should work fine here, but if anyone knows better, please advise.
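The `CommitFailedException` message itself names the two standard remedies: increase `max.poll.interval.ms` so the consumer has more time between polls, or reduce `max.poll.records` so each batch processes faster. As a sketch, these standard Kafka consumer properties could be passed through the Spring Cloud Stream Kafka binder configuration (the values shown are illustrative, and the exact property prefix may differ between binder versions):

```
# Standard Kafka consumer properties, passed through the
# Spring Cloud Stream Kafka binder (prefix may vary by version).
# Fewer records per poll() -> less processing time per batch:
spring.cloud.stream.kafka.binder.configuration.max.poll.records=100
# More time allowed between polls before a rebalance is triggered:
spring.cloud.stream.kafka.binder.configuration.max.poll.interval.ms=600000
```

Either change alone may be enough; tuning both trades batch throughput against rebalance sensitivity.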
Thanks!
Posted on 2018-07-12 21:32:24
The report doesn't include any version information. It would be better if you could edit the post with the Spring Cloud Stream version (which App Starters release — i.e., which bit.ly URL are you using?), along with the Spring Boot, SCDF, and Kafka broker versions.
That said, we have had similar reports with Spring Cloud Stream's Chelsea release-train running against Kafka 0.9. Here are some details and the outcome.
If you are using that version combination, you must upgrade to the Ditmars (1.3.x) or the latest Elmhurst (2.0.x) release. We also have up-to-date bit.ly links for these releases on the App Starters project site.
https://stackoverflow.com/questions/51306663