
Kafka broker fails to start in Docker

Stack Overflow user
Asked on 2021-10-06 10:03:41

I have a fairly standard compose file. The first time I run it, all containers come up fine. But after running docker-compose -f kafka-compose.yml down and starting it again, I get the following error:

broker | [2021-10-06 09:57:13,398] ERROR Error while creating ephemeral at /brokers/ids/1, node already exists and owner '72075955082625025' does not match current session '72075962265632769' (kafka.zk.KafkaZkClient$CheckedEphemeral)

I can't find server.properties in the broker container. Could that be the reason? What would need to change?

As far as I understand, this could be because not all settings are persisted in the mounted folders, so something gets re-initialized on startup. But which one?
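The session mismatch in the error can be verified against ZooKeeper directly; a hedged sketch, assuming the stack from the compose file below is up and uses the container names it defines:

```shell
# zookeeper-shell is bundled in the confluentinc/cp-zookeeper image; the
# container name "zookeeper" comes from the compose file below.
docker exec zookeeper zookeeper-shell localhost:2181 stat /brokers/ids/1
# The "ephemeralOwner" field in the output is the ZooKeeper session that
# still owns the znode; the broker refuses to re-register because its new
# session ID differs from that owner.
```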

Here is my docker-compose file:

version: '3.3'


networks:
  default-dev-network:
    external: true

services:

  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes: 
      - $PWD/kafka-data/zookeeper/var-lib/data:/var/lib/zookeeper/data
      - $PWD/kafka-data/zookeeper/var-lib/log:/var/lib/zookeeper/log
      - $PWD/kafka-data/zookeeper/etc-kafka:/etc/kafka
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - default-dev-network

  broker:
    image: confluentinc/cp-kafka:6.2.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
      - "9101:9101"
    volumes:
      - $PWD/kafka-data/kafka/data:/var/lib/kafka/data
      - $PWD/kafka-data/kafka-home:/etc/kafka

    # entrypoint: sh -c 'sleep 30 && /etc/confluent/docker/run'

    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_LOG4J_LOGGERS: "org.apache.zookeeper=ERROR,\
org.apache.kafka=ERROR,\
kafka=ERROR,\
kafka.cluster=ERROR,\
kafka.controller=ERROR,\
kafka.coordinator=ERROR,\
kafka.log=ERROR,\
kafka.server=ERROR,\
kafka.zookeeper=ERROR,\
state.change.logger=ERROR"
      # KAFKA_LOG4J_LOGGERS: "kafka.controller=ERROR, kafka.coordinator=ERROR, state.change.logger=ERROR"
      KAFKA_LOG4J_ROOT_LOGLEVEL: ERROR
      KAFKA_TOOLS_LOG4J_LOGLEVEL: ERROR
    networks:
      - default-dev-network

  schema-registry:
    image: confluentinc/cp-schema-registry:6.2.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
    networks:
      - default-dev-network

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.2.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
    networks:
      - default-dev-network

1 Answer

Stack Overflow user
Answered on 2021-10-06 13:18:50

Yes, this error is common if you don't remove the volume data between container restarts.

"I can't find server.properties in the broker container"

It is there:

...
Status: Downloaded newer image for confluentinc/cp-kafka:6.2.0
sh-4.4$ ls /etc/kafka/
connect-console-sink.properties    connect-mirror-maker.properties  secrets
connect-console-source.properties  connect-standalone.properties    server.properties
connect-distributed.properties     consumer.properties              tools-log4j.properties
connect-file-sink.properties       kraft                            trogdor.conf
connect-file-source.properties     log4j.properties                 zookeeper.properties
connect-log4j.properties           producer.properties
sh-4.4$ ls /etc/kafka/server.properties
/etc/kafka/server.properties

"Not all settings are persisted in the mounted folders, so something gets re-initialized on startup. But which one?"

They are, but the error you are getting comes from the ZooKeeper mount, not from Kafka's volume data.
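In other words, the stale state lives under the mounted ZooKeeper directories. A minimal reset sketch, assuming the bind-mount paths from the compose file above; note this wipes all topics and offsets, so it is only appropriate for a dev stack:

```shell
docker-compose -f kafka-compose.yml down

# Clear ZooKeeper's persisted state, which includes the stale
# /brokers/ids/1 ephemeral node from the previous session.
rm -rf ./kafka-data/zookeeper/var-lib/data ./kafka-data/zookeeper/var-lib/log

# Clear the broker's data dir too, so its stored cluster ID does not
# conflict with the freshly initialized ZooKeeper.
rm -rf ./kafka-data/kafka/data

docker-compose -f kafka-compose.yml up -d
```

Alternatively, simply waiting out the ZooKeeper session timeout before restarting (zookeeper.session.timeout.ms, 18 seconds by default in Kafka 2.5+) lets the old ephemeral node expire on its own, after which the broker can re-register without clearing any data.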

Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/69463627
