Edit /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg:

vim /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg

Contents:

clientPort=2183
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data

Edit /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg in the same way. Then check server 2:

/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status

We find it has stopped running.

4. Now let's start server 3 as well:

/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh start

Will server 2 become the new leader again?
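Which role each node holds can be read from the `Mode:` line that `zkServer.sh status` prints. A minimal sketch of parsing that output; the helper `zk_mode` and the captured sample text are my own illustration, not part of ZooKeeper:

```shell
#!/bin/sh
# Extract the role from captured `zkServer.sh status` output.
# ZooKeeper prints a line such as "Mode: leader" or "Mode: follower".
zk_mode() {
  # $1: the captured output of zkServer.sh status
  printf '%s\n' "$1" | sed -n 's/^Mode: //p'
}

# In real use you would capture the output first, e.g.:
#   out=$(/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status 2>&1)
sample="ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
Mode: leader"
zk_mode "$sample"   # prints: leader
```

Running this against each of the three instances after every stop/start step makes the leader handoff easy to observe.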
Map the container directories to matching directories on the host:

-v /root/zookeeper01/datalog:/datalog \   # map the datalog directory to the host
zookeeper:3.4.14

docker run -d --name zookeeper

Restart the three containers, or stop and then start them:

# restart the containers
docker restart zookeeper-1 zookeeper-2 zookeeper-3
# stop the containers
docker stop zookeeper-1 zookeeper-2 zookeeper-3
# start the containers
docker start zookeeper-1 zookeeper-2 zookeeper-3

7.
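The three `docker run` invocations differ only in the container name, host port, and host directory, so they can be generated. A dry-run sketch that only prints the commands; the naming scheme (`zookeeper-1..3`, host dirs `/root/zookeeper01..03`, host ports 2181-2183) follows the fragment above and should be adjusted to your environment:

```shell
#!/bin/sh
# Build the docker run command for ensemble member $1 (dry run: we
# print the command instead of executing it).
zk_docker_cmd() {
  i=$1
  hostport=$((2180 + i))
  printf 'docker run -d --name zookeeper-%s -p %s:2181 -v /root/zookeeper0%s/data:/data -v /root/zookeeper0%s/datalog:/datalog zookeeper:3.4.14\n' \
    "$i" "$hostport" "$i" "$i"
}

for i in 1 2 3; do
  zk_docker_cmd "$i"
done
```

Piping the output to `sh` would actually create the containers.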
        - name: ZOO_SERVER_ID
          value: "1"
        - name: ZOO_SERVERS
          value: 0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-2
  namespace: rcmd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
    spec:
      containers:
        - name: zookeeper
---
metadata:
  name: zookeeper-2
  namespace: rcmd
  labels:
    app: zookeeper-2
spec:
  ports:
    - name: client
      port: 2181
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data

Edit /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg:

vim /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg

Start server 1:

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh start

Stop server 1:

/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh stop

Start server 3:

/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh start

Let's look at the result, then start server 2:

/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh start
The environment for the third instance mirrors the first, with the ids swapped:

        - name: ZOO_SERVER_ID
          value: "3"
        - name: ZOO_SERVERS
          value: zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-2:
    image: zookeeper
    restart: always
    hostname: zookeeper-2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  kafka-1:
    ports:
      - 9092:9092
      - 19999:9999
    expose:
      - 19092
    links:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
Host plan:

codis-ha1                     192.168.88.112
codis-ha2                     192.168.88.113
zookeeper-1 (codis-proxy-1)   192.168.88.114

hostname: zookeeper-1, apps: zookeeper1, codis_proxy_1, ports: 2181, 19000

Configure codis_proxy (on the zookeeper-1, zookeeper-2, and zookeeper-3 machines).

Configure codis_proxy_1 (on the zookeeper-1 machine): cd /

Edit start_proxy.sh and start the codis-proxy service (configure on zookeeper-1, zookeeper-2, and zookeeper-3). On zookeeper-1 (on the others it is codis_proxy
Create the zookeeper-cluster directory and copy the unpacked ZooKeeper into these three directories:

/usr/local/zookeeper-cluster/zookeeper-1
/usr/local/zookeeper-cluster/zookeeper-2
/usr/local/zookeeper-cluster/zookeeper-3

[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-1

Edit /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg:

clientPort=2181
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data

Edit /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg:

clientPort=2182
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data

Edit /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg in the same way.
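Since the three zoo.cfg files differ only in clientPort and dataDir, they can be generated in a loop. A sketch that writes them into a scratch directory so it is safe to run anywhere; swap the scratch dir for /usr/local/zookeeper-cluster (and the tick/limit values for your own) in real use:

```shell
#!/bin/sh
# Generate zoo.cfg for three local instances. Only clientPort and
# dataDir vary between them; the server.N lines are shared.
base=$(mktemp -d)
for i in 1 2 3; do
  mkdir -p "$base/zookeeper-$i/conf" "$base/zookeeper-$i/data"
  cat > "$base/zookeeper-$i/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
clientPort=$((2180 + i))
dataDir=$base/zookeeper-$i/data
server.1=localhost:2881:3881
server.2=localhost:2882:3882
server.3=localhost:2883:3883
EOF
done
grep clientPort "$base"/zookeeper-*/conf/zoo.cfg
```

The final `grep` shows the three distinct client ports as a quick sanity check.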
metadata:
  name: zookeeper-deployment-2
  namespace: ms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
      name: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
        name: zookeeper-2
    spec:
      containers:
        - name: zoo2
          image: uhub.service.ucloud.cn/metersphere
  selector:
    app: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo2
  labels:
    app: zookeeper-2
spec:
  ports:
    - port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-2
This article simulates a 3-node zk cluster on a single machine.

1.1 Download and unpack

Unpack into 3 directories (simulating 3 zk servers):

/home/hadoop/zookeeper-1
/home/hadoop/zookeeper-2
/home/hadoop/zookeeper-3

server.1=localhost:2287:3387
server.2=localhost:2288:3388
server.3=localhost:2289:3389

1.3 Start and verify

/home/hadoop/zookeeper-1/bin/zkServer.sh start
/home/hadoop/zookeeper-2/bin/zkServer.sh start
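A point worth keeping in mind when simulating a cluster this way: a ZooKeeper ensemble stays available only while a strict majority (quorum) of servers is up, which is why 3 is the smallest useful ensemble size. A one-line sketch of the arithmetic:

```shell
#!/bin/sh
# Quorum for an n-server ensemble: floor(n/2) + 1.
# A 3-node ensemble tolerates 1 failure; a 5-node ensemble tolerates 2.
quorum() {
  echo $(( $1 / 2 + 1 ))
}

quorum 3   # prints: 2
quorum 5   # prints: 3
```

So in this 3-node simulation, stopping one zkServer leaves the cluster serving; stopping two does not.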
# cp conf/zoo-1.cfg conf/zoo-2.cfg
# cp conf/zoo-1.cfg conf/zoo-3.cfg
# vim conf/zoo-2.cfg
dataDir=/tmp/zookeeper-2
clientPort=2182
# vim conf/zoo-3.cfg
dataDir=/tmp/zookeeper-3
clientPort=2183

Step6: Set the server ID. Create three directories, /tmp/zookeeper-1, /tmp/zookeeper-2, and /tmp/zookeeper-3, and in each create a file named myid containing that instance's server id (1, 2, or 3):

# cd /tmp/zookeeper-1
# vim myid
1
# cd /tmp/zookeeper-2
# vim myid
2
# cd /tmp/zookeeper-3
# vim myid
3

Step7: Start the three zookeeper instances:

# bin/zkServer.sh
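The myid files from Step6 can be written in a loop instead of by hand; each file holds only the digit that must match the corresponding server.N line in zoo.cfg. A sketch using a scratch directory (substitute /tmp for the scratch dir to match the steps above):

```shell
#!/bin/sh
# Write the per-instance myid files. The number in dataDir/myid tells
# each server which server.N entry in zoo.cfg is its own.
base=$(mktemp -d)
for i in 1 2 3; do
  mkdir -p "$base/zookeeper-$i"
  echo "$i" > "$base/zookeeper-$i/myid"
done
cat "$base"/zookeeper-*/myid
```

A mismatched or missing myid is one of the most common reasons an instance refuses to join the ensemble.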
<value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181</value>
</property>

<!-- zookeeper cluster info -->
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181</value>
</property>

metricsProvider.exportJvmInfo=true

Enable the 4-letter command mntr:

4lw.commands.whitelist=mntr

Add the nodes:

server.1=zookeeper-1:2888:3888
server.2=zookeeper-2:2888:3888
server.3=zookeeper-3:2888:3888

Management-node/resource-node hosts:

10.0.46.252 zookeeper-1 namenode-1 journalnode-1 resourcemanager-1
10.0.36.169 zookeeper-2
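Once mntr is whitelisted, `echo mntr | nc zookeeper-1 2181` returns one key/value pair per line (tab-separated), from which the server role and counters can be read. A sketch that parses a captured sample instead of a live socket; the helper name `mntr_get` and the sample values are mine:

```shell
#!/bin/sh
# Pull one key out of mntr output. mntr prints lines like:
#   zk_server_state<TAB>leader
mntr_get() {
  # $1: key to look up; stdin: mntr output
  awk -v k="$1" '$1 == k { print $2 }'
}

# Captured sample (live usage: echo mntr | nc zookeeper-1 2181)
sample=$(printf 'zk_version\t3.6.3\nzk_server_state\tleader\nzk_znode_count\t5')
printf '%s\n' "$sample" | mntr_get zk_server_state   # prints: leader
```

This is handy for scripting health checks, since mntr works without a full client.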
NAME          READY   STATUS    RESTARTS   AGE
zookeeper-0   1/1     Running   0          2m
zookeeper-1   1/1     Running   0          1m
zookeeper-2   1/1     Running   0          1m
- name: ZOO_SERVERS
  value: "server.1=zookeeper-0:2888:3888 server.2=zookeeper-1:2888:3888 server.3=zookeeper-2:2888:3888"
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>hadoop.zk.address</name>
  <value>zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181</value>
</property>
- host: "local-168-182-111"
  path: "/opt/bigdata/servers/zookeeper/data/data1"
- name: zookeeper

Check each instance with zkServer.sh status:

kubectl exec -it zookeeper-1 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-2 -n zookeeper -- zkServer.sh status
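The per-pod status checks differ only in the pod name, so they can be generated. A dry-run sketch that prints the kubectl commands rather than executing them; pod names `zookeeper-1..3` and the `zookeeper` namespace follow the text above and may differ in your cluster:

```shell
#!/bin/sh
# Build the status-check command for ensemble member $1 (dry run).
zk_status_cmd() {
  printf 'kubectl exec -it zookeeper-%s -n zookeeper -- zkServer.sh status\n' "$1"
}

for i in 1 2 3; do
  zk_status_cmd "$i"
done
```

Piping the output to `sh` runs the checks for real; exactly one pod should report `Mode: leader` and the rest `Mode: follower`.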