I have installed Confluent Platform on an Ubuntu 16.04 machine. Initially I configured ZooKeeper, Kafka, and KSQL, and then started Confluent Platform. I can see the output below.
root@DESKTOP-DIB3097:/opt/kafkafull/confluent-5.1.0/bin# ./confluent start
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.HUlCltYT
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]

Now that everything has started, when I check the status of Confluent Platform I notice that Schema Registry, Connect, and Control Center are down.
I checked the Schema Registry logs and found the following:
ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:137)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
... 5 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:422)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:275)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:135)
... 6 more
Caused by: java.util.concurrent.TimeoutException: Timeout after waiting for 60000 ms.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:78)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:417)
... 8 more

Posted on 2022-03-20 20:27:24
In $CONFLUENT_HOME/etc/kafka you will see server.properties. Uncomment the following lines and update them as shown:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

In $CONFLUENT_HOME/etc/schema-registry you will see schema-registry.properties; open it and update it as follows:

listeners=http://0.0.0.0:9092

Posted on 2019-01-31 07:51:51
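The key distinction in the fix above is that listeners may bind to the wildcard address 0.0.0.0, but advertised.listeners must name a host that clients can actually reach. A minimal sketch (plain Python, not Confluent tooling) of parsing a server.properties-style fragment and extracting the advertised hosts for inspection:

```python
def parse_properties(text):
    """Parse simple key=value lines, skipping comments and blank lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def advertised_hosts(props):
    """Pull host names out of advertised.listeners, e.g.
    'PLAINTEXT://localhost:9092' -> ['localhost']."""
    hosts = []
    for entry in props.get("advertised.listeners", "").split(","):
        if "://" in entry:
            hostport = entry.split("://", 1)[1]
            hosts.append(hostport.rsplit(":", 1)[0])
    return hosts

sample = """
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092
"""
print(advertised_hosts(parse_properties(sample)))  # ['localhost']
```

If the printed host is localhost, only clients on the broker's own machine can connect, which is consistent with the second answer's advice to use the machine's real IP address instead.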
I think I found the answer.

In the Kafka configuration file, add the property host.name=host_ip_address, which will act as the Kafka host. Then, in every configuration file where a Kafka bootstrap property appears, change it to the corresponding hostname or IP address, like this:

bootstrap.servers=192.168.0.193:9092

Example: in the Schema Registry configuration, I changed the following property from localhost to the corresponding IP address:

kafkastore.bootstrap.servers=PLAINTEXT://192.168.0.193:9092

In the other files, check that the property bootstrap.servers=192.168.0.193:9092 is referenced correctly, and check that the Schema Registry configuration file is referenced correctly.

(You can actually check and compare the configuration files under the /tmp/confluent Kafka logs.)

After changing all the configuration files, the services are now up and running.
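The manual check described above, scanning every config file for bootstrap properties that still point at localhost, can be sketched as a small script. This is a hypothetical helper (not part of Confluent), assuming the files use the usual key=value properties format:

```python
import re

def localhost_bootstrap_keys(text):
    """Return keys such as bootstrap.servers or
    kafkastore.bootstrap.servers whose value still references
    localhost or 127.0.0.1 and therefore needs updating."""
    bad = []
    for line in text.splitlines():
        m = re.match(r"\s*([\w.]*bootstrap\.servers)\s*=\s*(.+)", line)
        if m and re.search(r"localhost|127\.0\.0\.1", m.group(2)):
            bad.append(m.group(1))
    return bad

schema_registry_cfg = "kafkastore.bootstrap.servers=PLAINTEXT://192.168.0.193:9092\n"
connect_cfg = "bootstrap.servers=localhost:9092\n"

print(localhost_bootstrap_keys(schema_registry_cfg))  # []
print(localhost_bootstrap_keys(connect_cfg))          # ['bootstrap.servers']
```

An empty result for a file means its bootstrap entries already use a reachable address; a non-empty result lists the keys that would reproduce the Schema Registry timeout above.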
https://stackoverflow.com/questions/54441162