
Kafka consumer printing too many debug statements

Stack Overflow user
Asked on 2018-09-18 15:06:15
1 answer · 2.3K views · 0 followers · score 1

I am facing issues related to the volume of logs produced by a service running in a K8s cluster.

The problem is similar to the one described here, but I have not been able to solve it. My project uses Akka and Log4j2, and even after following the suggestions reported in that earlier post I don't know how to fix it.

This is my log4j2 configuration, followed by the application.conf used for Akka.

<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{DEFAULT} [%t] %-5level %logger{1}.%method - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
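As an aside: if Log4j2 were actually the framework receiving these events, the Kafka client could be quieted without lowering the root level, by adding a named logger inside the `<Loggers>` section above. A minimal sketch (the logger name is assumed from the `o.a.k.` package prefix visible in the log output further down):

```xml
<!-- Raises the threshold for org.apache.kafka.* loggers only;
     goes inside the <Loggers> section of the configuration above -->
<Logger name="org.apache.kafka" level="warn"/>
```

This only helps if the offending log events reach Log4j2 at all; as the answer below this question explains, they may be going through a different binding entirely.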

The Akka configuration, in turn, is:

akka {

  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  loglevel = "ERROR"

  # Log level for the very basic logger activated during ActorSystem startup.
  # This logger prints the log messages to stdout (System.out).
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  stdout-loglevel = "ERROR"

  # Log the complete configuration at INFO level when the actor system is started.
  # This is useful when you are uncertain of what configuration is used.
  log-config-on-start = off



    # Properties for akka.kafka.ConsumerSettings can be
    # defined in this section or a configuration section with
    # the same layout. 

      kafka.consumer {
          # Tuning property of scheduled polls.
          poll-interval = 500ms

          # Tuning property of the `KafkaConsumer.poll` parameter.
          # Note that non-zero value means that the thread that
          # is executing the stage will be blocked.
          poll-timeout = 500ms

          # The stage will await outstanding offset commit requests before
          # shutting down, but if that takes longer than this timeout it will
          # stop forcefully.
          stop-timeout = 30s

          # How long to wait for `KafkaConsumer.close`
          close-timeout = 20s

          # If offset commit requests are not completed within this timeout
          # the returned Future is completed with `CommitTimeoutException`.
          commit-timeout = 15s

          # If commits take longer than this time a warning is logged
          commit-time-warning = 1s

          # If for any reason `KafkaConsumer.poll` blocks for longer than the configured
          # poll-timeout then it is forcefully woken up with `KafkaConsumer.wakeup`.
          # The KafkaConsumerActor will throw
          # `org.apache.kafka.common.errors.WakeupException` which will be ignored
          # until `max-wakeups` limit gets exceeded.
          wakeup-timeout = 6s

          # After exceeding the maximum number of wakeups the consumer will stop and the stage will fail.
          # Setting it to 0 will let it ignore the wakeups and try to get the polling done forever.
          max-wakeups = 10

          # If set to a finite duration, the consumer will re-send the last committed offsets periodically
          # for all assigned partitions. See https://issues.apache.org/jira/browse/KAFKA-4682.
          commit-refresh-interval = infinite

          # If enabled, log stack traces before waking up the KafkaConsumer to give
          # some indication why the KafkaConsumer is not honouring the `poll-timeout`
          wakeup-debug = true

          # Fully qualified config path which holds the dispatcher configuration
          # to be used by the KafkaConsumerActor. Some blocking may occur.
          #use-dispatcher = "akka.kafka.default-dispatcher"


          # Time to wait for pending requests when a partition is closed
          wait-close-partition = 500ms
      }

}

But I still always see logs like the following:

17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-1
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-0
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-2
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-0] to broker eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 0 rack: null)
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-1] to broker eric-data-message-bus-kf-1.eric-data-message-bus-kf.default:9092 (id: 1 rack: null)
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-2] to broker eric-data-message-bus-kf-2.eric-data-message-bus-kf.default:9092 (id: 2 rack: null)
17:00:25.427 [kafka-coordinator-heartbeat-thread | GroupIDTest] DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending Heartbeat request for group GroupIDTest to coordinator eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 2147483647 rack: null)
17:00:25.428 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received successful Heartbeat response for group GroupIDTest
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-1
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-0
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-2
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-0] to broker eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 0 rack: null)
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-1] to broker eric-data-message-bus-kf-1.eric-data-message-bus-kf.default:9092 (id: 1 rack: null)
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-2] to broker eric-data-message-bus-kf-2.eric-data-message-bus-kf.default:9092 (id: 2 rack: null)

Any suggestions?


1 Answer

Stack Overflow user

Answered on 2018-09-19 05:37:37

The Kafka library actually uses slf4j-log4j12 internally, which in turn uses log4j (1.x) as the underlying logging framework.

So you need to exclude slf4j-log4j12 from the kafka_2.10/kafka_2.11, kafka-client, and zookeeper artifacts (and anywhere else it appears in your project's pom or sbt file), declare the slf4j-log4j12 dependency explicitly in your pom or sbt, and put your log4j.xml in the src/main/resources folder with the level set to info. That way you get rid of all the debug statements.

Example in pom.xml:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

In build.sbt:

Add exclude("org.slf4j", "slf4j-log4j12") to each libraryDependencies entry.
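For illustration, a hedged sketch of what that could look like in build.sbt, using the artifact coordinates from the pom example above (adjust names and versions to your actual build):

```scala
// Exclude the slf4j-log4j12 binding pulled in transitively by Kafka,
// then declare the desired binding explicitly at the version you want.
libraryDependencies ++= Seq(
  ("org.apache.kafka" %% "kafka" % "1.0.0")
    .exclude("org.slf4j", "slf4j-log4j12")
    .exclude("log4j", "log4j"),
  "org.slf4j" % "slf4j-log4j12" % "1.7.5"
)
```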

Score 0
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/52380761