We use Alpakka Kafka streams to consume events from Kafka. Here is how the stream is defined:
ConsumerSettings<GenericKafkaKey, GenericKafkaMessage> consumerSettings =
    ConsumerSettings
        .create(actorSystem, new KafkaJacksonSerializer<>(GenericKafkaKey.class),
            new KafkaJacksonSerializer<>(GenericKafkaMessage.class))
        .withBootstrapServers(servers).withGroupId(groupId)
        .withClientId(clientId).withProperties(clientConfigs.defaultConsumerConfig());

CommitterSettings committerSettings = CommitterSettings.create(actorSystem)
    .withMaxBatch(20)
    .withMaxInterval(Duration.ofSeconds(30));

Consumer.DrainingControl<Done> control =
    Consumer.committableSource(consumerSettings, Subscriptions.topics(topics))
        .mapAsync(props.getMessageParallelism(), msg ->
            CompletableFuture.supplyAsync(() -> consumeMessage(msg), actorSystem.dispatcher())
                .thenCompose(param -> CompletableFuture.supplyAsync(() -> msg.committableOffset())))
        .toMat(Committer.sink(committerSettings), Keep.both())
        .mapMaterializedValue(Consumer::createDrainingControl)
        .run(materializer);

And here is the snippet that shuts the stream down:
CompletionStage<Done> completionStage = control.drainAndShutdown(actorSystem.dispatcher());
completionStage.toCompletableFuture().join();

I have also tried calling get on the CompletableFuture, but neither join nor get ever returns. Has anyone else run into a similar problem? Am I doing something wrong?
Posted on 2020-05-23 21:57:11
If you want to control the stream's termination from outside the stream, you need a KillSwitch: https://doc.akka.io/docs/akka/current/stream/stream-dynamic.html
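A minimal sketch of that idea, not tied to the Kafka source in the question: the actor system name and the Source.repeat stage are placeholders standing in for the real source.

```java
import java.util.concurrent.CompletionStage;

import akka.Done;
import akka.actor.ActorSystem;
import akka.japi.Pair;
import akka.stream.KillSwitches;
import akka.stream.UniqueKillSwitch;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

public class KillSwitchSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("demo");

    // Materialize both the kill switch and the sink's completion future.
    Pair<UniqueKillSwitch, CompletionStage<Done>> stream =
        Source.repeat("event")                           // stand-in for the Kafka source
            .viaMat(KillSwitches.single(), Keep.right()) // inject an external stop handle
            .toMat(Sink.ignore(), Keep.both())
            .run(system);                                // Akka 2.6 style; pass a Materializer on 2.5

    stream.first().shutdown();                    // complete the stream from the outside
    stream.second().toCompletableFuture().join(); // returns once the switch has closed the stream
    system.terminate();
  }
}
```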
Posted on 2020-06-01 16:00:09
Your usage looks correct; I can't spot anything that would prevent draining.
The setting most commonly missed with the Alpakka Kafka consumer is stop-timeout, which defaults to 30 seconds. When you use DrainingControl, it is safe to set it to 0 seconds.
See https://doc.akka.io/docs/alpakka-kafka/current/consumer.html#draining-control
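Applied to the settings from the question, that would look roughly like this; the surrounding builder calls are copied from the original snippet, only the final withStopTimeout line is new.

```java
ConsumerSettings<GenericKafkaKey, GenericKafkaMessage> consumerSettings =
    ConsumerSettings
        .create(actorSystem, new KafkaJacksonSerializer<>(GenericKafkaKey.class),
            new KafkaJacksonSerializer<>(GenericKafkaMessage.class))
        .withBootstrapServers(servers).withGroupId(groupId)
        .withClientId(clientId).withProperties(clientConfigs.defaultConsumerConfig())
        // Safe with DrainingControl: drainAndShutdown already waits for in-flight
        // commits, so the consumer does not need the default 30 s grace period.
        .withStopTimeout(Duration.ZERO);
```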
https://stackoverflow.com/questions/61947594