I'm trying to set up a flume agent that takes its source data from a syslog server. Basically, I have a syslog server on one machine (server1) that receives syslog events and forwards all messages to a different machine (server2), where a flume agent is installed; from there, all the data ultimately sinks into a Kafka cluster.
The flume configuration is as follows.
# For each one of the sources, the type is defined
agent.sources.syslogSrc.type = syslogudp
agent.sources.syslogSrc.port = 9090
agent.sources.syslogSrc.host = server2
# The channel can be defined as follows.
agent.sources.syslogSrc.channels = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
# config for kafka sink
agent.sinks.kafkaSink.channel = memoryChannel
agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.kafka.topic = flume
agent.sinks.kafkaSink.kafka.bootstrap.servers = <kafka.broker.list>:9092
agent.sinks.kafkaSink.kafka.flumeBatchSize = 20
agent.sinks.kafkaSink.kafka.producer.acks = 1
agent.sinks.kafkaSink.kafka.producer.linger.ms = 1
agent.sinks.kafkaSink.kafka.producer.compression.type = snappy
However, somehow the syslog messages are not making it into the flume agent.
Any advice would be appreciated.
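One quick way to check whether the syslogudp source is reachable at all is to hand-craft a UDP syslog datagram and send it to the agent's port. The sketch below is an assumption-laden illustration, not part of the original setup: it uses a loopback socket as a stand-in for the flume source; in the real setup you would target the host and port from the config above (`server2`/`9090` per the question's configuration).

```python
import socket

def build_syslog_msg(facility: int, severity: int, text: str) -> bytes:
    """Encode a minimal RFC 3164 style syslog datagram: <PRI>text,
    where PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    return f"<{pri}>{text}".encode()

if __name__ == "__main__":
    # Loopback stand-in for the flume syslogudp source. In the real test,
    # replace this with sendto((<agent-host>, 9090), ...) and watch the
    # flume agent's log for the received event.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))          # ephemeral port
    addr = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # facility 1 (user-level), severity 6 (informational) -> PRI 14
    sender.sendto(build_syslog_msg(1, 6, "Feb 23 08:49:58 testhost app: hello"), addr)

    data, _ = receiver.recvfrom(1024)
    print(data.decode())   # -> <14>Feb 23 08:49:58 testhost app: hello
    sender.close()
    receiver.close()
```

If a datagram sent this way never shows up in the agent, the bind host/port in the source config is the first thing to suspect.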
Posted on 2017-02-23 08:49:58
"I have already set up a syslog server on the so-called syslog server (server1)."
The syslogudp source must bind to the server1 host:
agent.sources.syslogSrc.host = server1
"then forwards all messages to a different server (server2)"
The "different server" refers to the Sink:
agent.sinks.kafkaSink.kafka.bootstrap.servers = server2:9092
A flume agent simply hosts these components (Source, Channel, Sink) to facilitate the flow of events.
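Putting the answer's two fixes together, the affected lines of the agent config would look like the sketch below (the server names are the question's own; everything else in the original config stays as posted):

```
# source binds where the syslog messages are produced/forwarded from
agent.sources.syslogSrc.type = syslogudp
agent.sources.syslogSrc.host = server1
agent.sources.syslogSrc.port = 9090

# sink points at the broker on the "different server"
agent.sinks.kafkaSink.kafka.bootstrap.servers = server2:9092
```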
https://stackoverflow.com/questions/42410582