I ran the following command to create the Kafka cluster:

sudo docker compose up kafka-cluster

A day ago this worked and I could reach the Landoop portal, but after shutting my system down and repeating the same steps, I can no longer access Landoop at http://127.0.0.1:3030. I am using Ubuntu 20.04, and the terminal produced the following logs:
[sudo] password for pc-11:
[+] Running 1/0
⠿ Container code-kafka-cluster-1 Created 0.0s
Attaching to code-kafka-cluster-1
code-kafka-cluster-1 | Setting advertised host to 127.0.0.1.
code-kafka-cluster-1 | Starting services.
code-kafka-cluster-1 | This is landoop’s fast-data-dev. Kafka 0.11.0.0, Confluent OSS 3.3.0.
code-kafka-cluster-1 | You may visit http://127.0.0.1:3030 in about a minute.
code-kafka-cluster-1 | 2022-07-14 08:48:34,716 CRIT Supervisor running as root (no user in config file)
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/01-zookeeper.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/02-broker.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/03-schema-registry.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/04-rest-proxy.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/05-connect-distributed.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/06-caddy.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/07-smoke-tests.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/08-logs-to-kafka.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,729 WARN Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
code-kafka-cluster-1 | 2022-07-14 08:48:34,731 INFO supervisord started with pid 7
code-kafka-cluster-1 | 2022-07-14 08:48:35,735 INFO spawned: 'sample-data' with pid 91
code-kafka-cluster-1 | 2022-07-14 08:48:35,753 INFO spawned: 'zookeeper' with pid 93
code-kafka-cluster-1 | 2022-07-14 08:48:35,766 INFO spawned: 'caddy' with pid 94
code-kafka-cluster-1 | 2022-07-14 08:48:35,770 INFO spawned: 'broker' with pid 95
code-kafka-cluster-1 | 2022-07-14 08:48:35,773 INFO spawned: 'smoke-tests' with pid 97
code-kafka-cluster-1 | 2022-07-14 08:48:35,776 INFO spawned: 'connect-distributed' with pid 98
code-kafka-cluster-1 | 2022-07-14 08:48:35,779 INFO spawned: 'logs-to-kafka' with pid 99
code-kafka-cluster-1 | 2022-07-14 08:48:35,782 INFO spawned: 'schema-registry' with pid 100
code-kafka-cluster-1 | 2022-07-14 08:48:35,785 INFO spawned: 'rest-proxy' with pid 101
code-kafka-cluster-1 | 2022-07-14 08:48:36,262 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:37,264 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,264 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,266 INFO spawned: 'caddy' with pid 381
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,267 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,268 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:48:37,280 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:39,285 INFO spawned: 'caddy' with pid 389
code-kafka-cluster-1 | 2022-07-14 08:48:39,348 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:42,444 INFO spawned: 'caddy' with pid 403
code-kafka-cluster-1 | 2022-07-14 08:48:42,450 INFO exited: caddy (exit status 2; not expected)
code-kafka-cluster-1 | 2022-07-14 08:48:42,508 INFO gave up: caddy entered FATAL state, too many start retries too quickly
code-kafka-cluster-1 | 2022-07-14 08:49:04,090 INFO exited: schema-registry (exit status 1; not expected)
code-kafka-cluster-1 | 2022-07-14 08:49:04,099 INFO spawned: 'schema-registry' with pid 485
code-kafka-cluster-1 | 2022-07-14 08:49:05,124 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
code-kafka-cluster-1 | 2022-07-14 08:49:35,818 INFO exited: smoke-tests (exit status 0; expected)
code-kafka-cluster-1 | 2022-07-14 08:51:35,933 INFO exited: logs-to-kafka (exit status 0; expected)
code-kafka-cluster-1 | 2022-07-14 08:52:53,146 INFO exited: sample-data (exit status 0; expected)

Posted on 2022-07-15 07:54:42
I figured out the solution. The old fast-data-dev image tag is no longer maintained, so the fix is to change the image in docker-compose.yml: I replaced landoop/fast-data-dev:cp3.3.0 with landoop/fast-data-dev:latest. The final docker-compose.yml looks like this:
version: '2'
services:
  # This is our Kafka cluster.
  kafka-cluster:
    image: landoop/fast-data-dev:latest
    environment:
      ADV_HOST: 127.0.0.1   # Change to 192.168.99.100 if using Docker Toolbox
      RUNTESTS: 0           # Disable running tests so the cluster starts faster
    ports:
      - 2181:2181           # Zookeeper
      - 3030:3030           # Landoop UI
      - 8081-8083:8081-8083 # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585 # JMX ports
      - 9092:9092           # Kafka broker
  # We will use Elasticsearch as one of our sinks.
  # This configuration allows you to start Elasticsearch.
  elasticsearch:
    image: itzg/elasticsearch:2.4.3
    environment:
      PLUGINS: appbaseio/dejavu
      OPTS: -Dindex.number_of_shards=1 -Dindex.number_of_replicas=0
    ports:
      - "9200:9200"
  # We will use Postgres as one of our sinks.
  # This configuration allows you to start Postgres.
  postgres:
    image: postgres:9.5-alpine
    environment:
      POSTGRES_USER: postgres     # define credentials
      POSTGRES_PASSWORD: postgres # define credentials
      POSTGRES_DB: postgres       # define database
    ports:
      - 5432:5432                 # Postgres port

After updating the image to the latest tag, I can reach Landoop at 127.0.0.1:3030 again.
I can also still access Landoop even after shutting the cluster down and bringing it back up.
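As a quick sanity check after the change, a small TCP probe (a hypothetical helper, not part of fast-data-dev) can confirm that something is actually listening on port 3030 before you open the browser:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # The compose file above maps the Landoop UI to 3030 on localhost.
    print("Landoop UI reachable:", port_open("127.0.0.1", 3030))
```

If this prints False a minute or more after the container logs "You may visit http://127.0.0.1:3030", the caddy "entered FATAL state" lines from the original logs are the first thing to look for, since caddy is the process serving that port.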
https://stackoverflow.com/questions/72978800