
Flink流- kafka.internals.Handover$ClosedException

Stack Overflow user
Asked on 2021-07-06 12:14:52
1 answer · 546 views · 0 following · 0 votes

I am using Apache Flink v1.12.3.

I recently ran into the error below and I am not sure what it actually means. Is the error related to Kafka or to Flink?

Error log:

2021-07-02 21:32:50,149 WARN  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator [] - [Consumer clientId=consumer-myGroup-24, groupId=myGroup] Offset commit failed on partition my_topic-14 at offset 11328862986: The request timed out.
// ...
2021-07-02 21:32:50,150 INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - [Consumer clientId=consumer-myGroup-24, groupId=myGroup] Group coordinator 1.2.3.4:9092 (id: 2147483616 rack: null) is unavailable or invalid, will attempt rediscovery

// ...

2021-07-02 21:33:20,553 INFO  org.apache.kafka.clients.FetchSessionHandler                 [] - [Consumer clientId=consumer-myGroup-21, groupId=myGroup] Error sending fetch request (sessionId=1923351260, epoch=9902) to node 29: {}.

// ...

2021-07-02 21:33:19,457 INFO  org.apache.kafka.clients.FetchSessionHandler                 [] - [Consumer clientId=consumer-myGroup-15, groupId=myGroup] Error sending fetch request (sessionId=1427185435, epoch=10820) to node 29: {}.
org.apache.kafka.common.errors.DisconnectException: null

// ...

2021-07-02 21:34:10,157 WARN  org.apache.flink.runtime.taskmanager.Task                    [] - Source: my_topic_stream (4/15)#0 (2e2051d41edd606a093625783d844ba1) switched from RUNNING to FAILED.
org.apache.flink.streaming.connectors.kafka.internals.Handover$ClosedException: null
    at org.apache.flink.streaming.connectors.kafka.internals.Handover.close(Handover.java:177) ~[blob_p-a7919582483974414f9c0d4744bab53199b880d7-d9edc9d0741b403b3931269bf42a4f6b:?]
    at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.cancel(KafkaFetcher.java:164) ~[blob_p-a7919582483974414f9c0d4744bab53199b880d7-d9edc9d0741b403b3931269bf42a4f6b:?]
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.cancel(FlinkKafkaConsumerBase.java:945) ~[blob_p-a7919582483974414f9c0d4744bab53199b880d7-d9edc9d0741b403b3931269bf42a4f6b:?]
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.lambda$createAndStartDiscoveryLoop$2(FlinkKafkaConsumerBase.java:913) ~[blob_p-a7919582483974414f9c0d4744bab53199b880d7-d9edc9d0741b403b3931269bf42a4f6b:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

1 Answer

Stack Overflow user

Answer accepted

Posted on 2021-07-07 05:08:12

This is a Kafka issue. The Kafka consumer client threw an error (a timeout) while committing offsets to the Kafka cluster. One possible cause is that the Kafka cluster was busy and could not respond in time. This error then caused the task manager running the Kafka consumer to fail.

When creating the source stream from Kafka, try adding parameters to the consumer properties. A likely candidate is request.timeout.ms: set it to a larger value and try again.
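The suggestion above can be sketched as follows. This is a minimal sketch, not a definitive fix: the broker address, group id, and topic name are taken from the question's log as placeholders, and the 60000 ms timeout is an arbitrary example value. The FlinkKafkaConsumer line is shown as a comment because it requires the Flink Kafka connector on the classpath.

```java
import java.util.Properties;

public class KafkaSourceProps {

    // Build consumer properties with a longer request timeout.
    // Broker address and group id below are placeholders taken from the log.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "1.2.3.4:9092");
        props.setProperty("group.id", "myGroup");
        // Raise the client-side request timeout (Kafka's default is 30000 ms)
        // so offset commits against a busy cluster are less likely to time out.
        props.setProperty("request.timeout.ms", "60000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps();
        System.out.println(props.getProperty("request.timeout.ms"));
        // The properties would then be passed to the Kafka source, e.g.:
        // new FlinkKafkaConsumer<>("my_topic", new SimpleStringSchema(), props);
    }
}
```

Whether 60000 ms is appropriate depends on how loaded the cluster is; the key point is simply that the property is set on the Properties object handed to the Kafka source.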


Votes: 1
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain engine.
Original link:

https://stackoverflow.com/questions/68270362
