--topic kafka-2: specifies the topic name. (2) Describe the topic: sh kafka-topics.sh --describe --zookeeper 192.168.11.59:2181 --topic kafka-2. Sample output: Topic:kafka-2 PartitionCount:2 ReplicationFactor:2 Configs: Topic: kafka-2 Partition: 0 Leader: 1 Replicas: , and messages will be sent to the kafka-2 topic. The consumer group defaults to group 0; it can also be specified explicitly, e.g.: sh kafka-console-consumer.sh --bootstrap-server 192.168.31.249:9092 --topic kafka
The author used virtual machines; the three hosts are configured as follows: hostname kafka-1, IP 192.168.1.42, 4 CPUs, 4 GB RAM, 128 GB storage; hostname kafka-2, IP 192.168.1.41, 4 CPUs, 4 GB RAM; 192.168.1.47 server.3 3. So, run the following on kafka-1: echo "1" > /usr/local/zookeeper/data/myid # myid on host kafka-1. On kafka-2 run: echo "2" > /usr/local/zookeeper/data/myid # myid on host kafka-2. On kafka-3 run: echo "3" > /usr/local/zookeeper/data/myid, then start ZooKeeper with /usr/local/zookeeper/bin/zkServer.sh start. After all nodes are started, check the result with /usr/local/zookeeper/bin/zkServer.sh status. Result on kafka-1: kafka 192.168.1.42:9092 # added zookeeper.connect=192.168.1.41:2181,192.168.1.42:2181,192.168.1.47:2181 # added. On kafka
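The per-host myid steps above can be sketched as a single loop. The data directory below is a temporary stand-in for /usr/local/zookeeper/data so the sketch runs anywhere; on real hosts each machine writes only its own myid file.

```shell
# Sketch: each ZooKeeper node needs a myid file whose content matches
# its server.N entry in zoo.cfg. Here all three nodes' data dirs are
# simulated under one temp directory purely for illustration.
ZK_DATA="$(mktemp -d)"
for id in 1 2 3; do
  mkdir -p "$ZK_DATA/kafka-$id"
  echo "$id" > "$ZK_DATA/kafka-$id/myid"
done
cat "$ZK_DATA/kafka-2/myid"   # → 2
```

On the actual cluster the loop body collapses to the three echo commands shown above, run once per machine.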
VM information: three virtual machines in total; ZooKeeper and Kafka share the same VMs. The three VMs: hostname: kafka-1, IP: 10.0.0.1, ID: 1; hostname: kafka $ cat /etc/hosts ...... 10.0.0.1 kafka-1 10.0.0.2 kafka-2 10.0.0.3 kafka-3 1. 3888 #server.2=zookeeper2:2888:3888 #server.3=zookeeper3:2888:3888 server.1=kafka-1:2888:3888 server.2=kafka $ cat /etc/zookeeper/conf/myid # on kafka-1 1 $ cat /etc/zookeeper/conf/myid # on kafka-2 2 $ cat /etc kafka.security.auth.SimpleAclAuthorizer allow.everyone.if.no.acl.found=true zookeeper.connect=kafka-1:2181,kafka
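For reference, a minimal zoo.cfg consistent with the server.N lines above might look like the following; the dataDir path and timing values are assumptions, not taken from the original article.

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# one line per ensemble member; N must match that host's myid
server.1=kafka-1:2888:3888
server.2=kafka-2:2888:3888
server.3=kafka-3:2888:3888
```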
kafka_topic defaults to maxwell. In the test script above it is set to lpc_maxwell, which passes the value statically; it can also be passed dynamically as namespace_%{database}_%{table}. The dynamic-parameter script is as follows: "" test script kafka 3. If the filter parameter is --filter='exclude: *.*, include: lpc.*', the topic kafka-2 is not created automatically when the test script runs; it is only auto-created once the monitored data actually changes.
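To illustrate the dynamic form, the sketch below mimics how a template like namespace_%{database}_%{table} expands per change event. expand_topic is a hypothetical helper written here for illustration only; Maxwell itself performs this substitution internally.

```shell
# Hypothetical helper mimicking Maxwell's %{database}/%{table}
# placeholder expansion for the kafka_topic option.
expand_topic() {
  # $1 = template, $2 = database, $3 = table
  printf '%s\n' "$1" | sed -e "s/%{database}/$2/" -e "s/%{table}/$3/"
}

expand_topic 'namespace_%{database}_%{table}' lpc orders
# → namespace_lpc_orders
```

So a change captured from database lpc, table orders, would be routed to the topic namespace_lpc_orders.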
Address = 1.1.1.2; software = jdk-1.8, zookeeper-3.5 (port 2181), kafka-2.0.0 (port 9092); hostname = kafka. Kafka versions have a supported range; see the official filebeat-kafka configuration docs. 3. Different ELK versions also require different JDK versions, so check the release notes. II. Deployment and configuration. Configure the Kafka cluster on the servers (kafka-1, kafka
=192.168.196.128:2181,192.168.196.131:2181,192.168.196.132:2181 offsets.topic.replication.factor=3 On kafka
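Putting these fragments together, each broker's server.properties on this cluster might contain something like the following sketch; broker.id and the listener address vary per host, and log.dirs is an assumed path.

```properties
broker.id=1
listeners=PLAINTEXT://192.168.196.128:9092
log.dirs=/var/lib/kafka/logs
zookeeper.connect=192.168.196.128:2181,192.168.196.131:2181,192.168.196.132:2181
# replicate the internal consumer-offsets topic across all three brokers
offsets.topic.replication.factor=3
```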
Suppose we have three servers; add the following mappings to the /etc/hosts file on all three: 192.168.110.92 kafka-0 192.168.110.93 kafka-1 192.168.110.94 kafka Hostname User 192.168.110.92 kafka-0 kafka 192.168.110.93 kafka-1 kafka 192.168.110.94 kafka /kafka-console-producer.sh --topic test --bootstrap-server kafka-0:9092,kafka-1:9092,kafka-2:9092 //2 kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server kafka-0:9092,kafka-1:9092,kafka /kafka-topics.sh --alter \ --bootstrap-server kafka-0:9092,kafka-1:9092,kafka-2:9092 \ --topic test2
Not well suited to storing large volumes of state, especially when the key space is high-cardinality or the per-key value state is large. object StateOperator { private val brokers = "kafka-1:9092,kafka
)\b|^Caused by:' negate: false match: after output.kafka: enabled: true hosts: ["kafka-1:9092","kafka /kafka-console-producer.sh --broker-list kafka-1:9092,kafka-2:9092 --topic app.log Consume messages from the topic. $ vim logstash.conf input { kafka { type => "kafka" bootstrap_servers => "kafka-1:9092,kafka
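A cleaner, self-contained version of the Filebeat fragment above might look like this sketch; the log path, regex, and topic name are assumptions, while the multiline.* and output.kafka keys follow Filebeat's documented option names.

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
    # join Java stack-trace continuation lines into one event
    multiline.pattern: '^\s+(at|\.{3})\b|^Caused by:'
    multiline.negate: false
    multiline.match: after

output.kafka:
  enabled: true
  hosts: ["kafka-1:9092", "kafka-2:9092", "kafka-3:9092"]
  topic: "app.log"
  required_acks: 1
  compression: gzip
```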
github.com/yahoo/kafka-manager II. Deployment 2.1 Initialize the environment: initialize the system, disable the firewall, and set the hostnames and IP addresses. Name HOSTNAME IP 1 kafka-1 172.17.10.207 2 kafka
172.16.0.7 10.0.0.33 kafka-1 1/1 Running 0 1h 172.16.2.5 10.0.0.40 kafka
.type = org.apache.flume.sink.kafka.KafkaSink a1.sinks.sink1.kafka.bootstrap.servers = kafka-1:9093,kafka org.apache.flume.channel.kafka.KafkaChannel a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9092,kafka org.apache.flume.channel.kafka.KafkaChannel a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka
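For context, a complete minimal Flume agent built around the KafkaSink fragment above might look like this; the source type, directory, and topic name are assumptions added for illustration.

```properties
# a1: source reads from a local spooling directory, a memory channel
# buffers events, and the sink publishes them to Kafka
a1.sources = src1
a1.channels = channel1
a1.sinks = sink1

a1.sources.src1.type = spooldir
a1.sources.src1.spoolDir = /var/log/flume-in
a1.sources.src1.channels = channel1

a1.channels.channel1.type = memory
a1.channels.channel1.capacity = 10000

a1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sink1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
a1.sinks.sink1.kafka.topic = app-events
a1.sinks.sink1.channel = channel1
```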
0 1/1 Running 0 25m kafka-1 1/1 Running 0 11m kafka
max_elapsed_time: 0 # retry indefinitely # optional: raw fan-out to Kafka for offline ETL/replay kafka/traces: brokers: ["kafka-1:9092","kafka
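The fragment above resembles an OpenTelemetry Collector kafka exporter. A fuller sketch, assuming the standard Collector config layout (the topic name and pipeline wiring are assumptions), might be:

```yaml
exporters:
  kafka/traces:
    brokers: ["kafka-1:9092", "kafka-2:9092", "kafka-3:9092"]
    topic: otlp-traces          # assumed topic name
    encoding: otlp_proto
    retry_on_failure:
      enabled: true
      max_elapsed_time: 0       # 0 = retry indefinitely

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [kafka/traces]
```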
org.apache.flume.channel.kafka.KafkaChannel a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9092,kafka
kafka-1 host: "local-168-182-111" path: "/opt/bigdata/servers/kafka/data/data1" - name: kafka
Lightweight: Benthos (recommended for a single team running its own operations). Example: logs_raw → PG (detail rows plus TopK pre-aggregated writes) input: kafka: addresses: [ "kafka-1:9092","kafka
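A fuller sketch of the Benthos pipeline described above (logs_raw consumed from Kafka, detail rows inserted into Postgres); the DSN, table, columns, and consumer group are assumptions, and the TopK pre-aggregation branch is omitted for brevity.

```yaml
input:
  kafka:
    addresses: ["kafka-1:9092", "kafka-2:9092", "kafka-3:9092"]
    topics: ["logs_raw"]
    consumer_group: logs_to_pg

output:
  sql_insert:
    driver: postgres
    dsn: postgres://user:pass@pg-host:5432/logs?sslmode=disable
    table: logs_detail
    columns: [ts, level, message]
    args_mapping: root = [ this.ts, this.level, this.message ]
```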
<none> kafka-1 1/1 Running 0 34s 172.10.2.47 node2 <none> <none> kafka
name: kafka-service-2 app: kafka-service-2 spec: containers: - name: kafka