Kafka logs growing too large

Stack Overflow user
Asked on 2020-12-05 21:47:58
1 answer · 44 views · 0 followers · 0 votes

I can see that the Kafka logs are growing very fast and flooding the filesystem.

How can I change Kafka's settings so that it writes less to the logs and rotates them frequently?

The files are located at /opt/kafka/kafka_2.12-2.2.2/logs, and their sizes are:

5.9G    server.log.2020-11-24-14
5.9G    server.log.2020-11-24-15
5.9G    server.log.2020-11-24-16
5.7G    server.log.2020-11-24-17

Sample log entries from the files above:

[2020-11-24 14:59:59,999] WARN Exception when following the leader (org.apache.zookeeper.server.quorum.Learner)
java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:326)
        at org.apache.zookeeper.common.AtomicFileOutputStream.write(AtomicFileOutputStream.java:74)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
        at java.io.BufferedWriter.flush(BufferedWriter.java:254)
        at org.apache.zookeeper.server.quorum.QuorumPeer.writeLongToFile(QuorumPeer.java:1391)
        at org.apache.zookeeper.server.quorum.QuorumPeer.setCurrentEpoch(QuorumPeer.java:1426)
        at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:454)
        at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:981)
[2020-11-24 14:59:59,999] INFO shutdown called (org.apache.zookeeper.server.quorum.Learner)
java.lang.Exception: shutdown Follower
        at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:169)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:985)
[2020-11-24 14:59:59,999] INFO Shutting down (org.apache.zookeeper.server.quorum.FollowerZooKeeperServer)
[2020-11-24 14:59:59,999] INFO LOOKING (org.apache.zookeeper.server.quorum.QuorumPeer)
[2020-11-24 14:59:59,999] INFO New election. My id =  1, proposed zxid=0x1000001d2 (org.apache.zookeeper.server.quorum.FastLeaderElection)
[2020-11-24 14:59:59,999] INFO Notification: 1 (message format version), 1 (n.leader), 0x1000001d2 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state) (org.apache.zookeeper.server.quorum.FastLeaderElection)

It also writes to /opt/kafka/kafka_2.12-2.2.2/kafka.log:

[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,110] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 17:01:09,528] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 17:11:09,528] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

Kafka is used with the Elastic Stack.

Here is the relevant entry from the server.properties file:

# A comma seperated list of directories under which to store log files
log.dirs=/var/log/kafka

Its log files are as follows:

/var/log/kafka
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 heartbeat-1
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-12
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 auditbeat-0
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 apm-2
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-28
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 filebeat-2
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-38
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-44
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-6
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-16
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 metricbeat-0
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-22
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-32
-rw-r--r-- 1 kafka users  747 Dec  5 18:02 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka users    4 Dec  5 18:02 log-start-offset-checkpoint
-rw-r--r-- 1 kafka users  749 Dec  5 18:03 replication-offset-checkpoint

No DEBUG-level logging is enabled in any of the files under the /opt/kafka/kafka_2.12-2.2.2/config path.

How do I make sure it does not generate such huge files in /opt/kafka/kafka_2.12-2.2.2/logs, and how do I rotate them periodically with compression?
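The hourly server.log.* files shown above are produced by Kafka's log4j appender, which rotates but never deletes them. Until the appender itself is reconfigured, an interim cleanup job along these lines can compress and prune them. This is a hedged sketch, not an official Kafka tool: it uses a temp directory so it can run anywhere, and the `touch -d` demo line requires GNU coreutils; on the broker, point LOG_DIR at /opt/kafka/kafka_2.12-2.2.2/logs instead.

```shell
# Hedged sketch: compress rotated Kafka app logs older than a day and delete
# compressed ones older than two days. On the broker, set
# LOG_DIR=/opt/kafka/kafka_2.12-2.2.2/logs; here a temp dir is used for safety.
LOG_DIR=$(mktemp -d)
touch -d '25 hours ago' "$LOG_DIR/server.log.2020-11-24-14"   # fake an old log for the demo
# Compress uncompressed server.log.* files older than 24 hours (1440 minutes).
find "$LOG_DIR" -name 'server.log.*' ! -name '*.gz' -mmin +1440 -exec gzip {} \;
# Delete compressed logs older than two days.
find "$LOG_DIR" -name 'server.log.*.gz' -mtime +2 -delete
ls "$LOG_DIR"
```

Run from cron (e.g. hourly) this caps how far the application logs can grow, independently of the log4j configuration.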

Thanks,


1 Answer

Stack Overflow user

Answered on 2020-12-05 22:31:38

log.dirs is the actual broker storage, not the process logs, so it should not be under /var/log alongside other process logs.

Almost 6 GB per day is not unreasonable, but you can modify the log4j.properties file so that the rolling file appender keeps only one to two days of logs.
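Kafka 2.2's shipped config/log4j.properties uses a log4j 1.x DailyRollingFileAppender with an hourly date pattern for server.log, which matches the hourly files in the question and never deletes old ones. A hedged sketch of the change the answer describes, using standard log4j 1.x property names (verify against the file your distribution actually ships): switch kafkaAppender to a size-bounded RollingFileAppender, which deletes the oldest backup automatically.

```properties
# Hedged sketch of config/log4j.properties changes; property names are
# standard log4j 1.x, but compare against your Kafka version's defaults.
# RollingFileAppender caps total usage at MaxFileSize * (MaxBackupIndex + 1).
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Raising the root logger level from the default INFO also cuts log volume:
log4j.rootLogger=WARN, stdout, kafkaAppender
```

Note that log4j 1.x's DailyRollingFileAppender has no retention setting at all, which is why switching appender types (or an external cleanup job) is needed rather than just tuning it.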

In general, as with any Linux administration task, you should have separate disk volumes for /var/log and OS storage, and a dedicated disk for server data, e.g. mounted at /kafka.

Votes: 0
Original content provided by Stack Overflow; translation by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/65157595
