namenode/datanode/nodemanager
hadoop-2 172.20.2.204 secondarynamenode/datanode/nodemanager
hadoop-3

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-3:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop-3:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop-3:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop-3:8088</value>
  </property>
</configuration>
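The four ResourceManager endpoints above all point at hadoop-3. As a hedged sketch, the same block can be materialized with a heredoc and sanity-checked for matched tags; CONF_DIR here is an assumption for illustration, not the real install path (a 2.7.3 install would typically use /usr/local/hadoop-2.7.3/etc/hadoop):

```shell
# Sketch: write the four ResourceManager addresses into yarn-site.xml.
# CONF_DIR is a demo path, not the real Hadoop config directory.
CONF_DIR=${CONF_DIR:-/tmp/hadoop-conf}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-3:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop-3:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop-3:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop-3:8088</value>
  </property>
</configuration>
EOF
# Cheap well-formedness check: every <property> needs a closing tag.
opens=$(grep -c '<property>' "$CONF_DIR/yarn-site.xml")
closes=$(grep -c '</property>' "$CONF_DIR/yarn-site.xml")
echo "properties: $opens open / $closes close"
```

This only counts tags; on a real cluster you would also confirm the four ports (8032, 8030, 8033, 8088) are reachable from the workers.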
First deploy a single machine and build an image from it, then use that image to create the other EC2 instances, and finally finish the configuration and start the services.
hadoop-2.7.3.tar.gz

Server environment
The servers run CentOS 7.x, 64-bit.
# hadoop-1 192.168.1.101
# hadoop-2 192.168.1.102
# hadoop-3 192.168.1.103

Set the hostname on each machine (so the nodes can reach each other by hostname):
> hostnamectl set-hostname hadoop-1
> hostnamectl set-hostname hadoop-2
> hostnamectl set-hostname hadoop-3

> vim /etc/hosts
192.168.1.101 hadoop-1
192.168.1.102 hadoop-2
192.168.1.103 hadoop-3

Reboot the servers so the changes take effect; after the reboot you will see the new hostnames in place. Then use ping to check that the nodes can reach one another:
> ping hadoop-1
> ping hadoop-2
> ping hadoop-3

> vim /usr/local/hadoop-2.7.3/etc/hadoop/slaves
hadoop-2
hadoop-3

2. Change the file core-site.xml to the following configuration:
> vim
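The /etc/hosts entries and the slaves file describe the same node list, so it is easy for them to drift apart. A minimal sketch, assuming the convention above that hadoop-1 is the NameNode and every other node is a worker (the /tmp paths are demo stand-ins for /etc/hosts and the real slaves file):

```shell
# Sketch: derive both files from one node list so they cannot disagree.
# /tmp paths are for illustration only.
NODES='192.168.1.101 hadoop-1
192.168.1.102 hadoop-2
192.168.1.103 hadoop-3'

HOSTS_FILE=/tmp/hosts.demo
SLAVES_FILE=/tmp/slaves.demo
printf '%s\n' "$NODES" > "$HOSTS_FILE"
# Workers are every node except the NameNode (hadoop-1).
awk '$2 != "hadoop-1" {print $2}' "$HOSTS_FILE" > "$SLAVES_FILE"
cat "$SLAVES_FILE"   # prints hadoop-2 and hadoop-3
```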
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop-1,hadoop-2,hadoop-3</value>
  </property>
</configuration>
EOF

Edit regionservers:
cat > /usr/local/hbase/conf/regionservers <<EOF
hadoop-2
hadoop-3
EOF
Server inventory
$ hadoop-1 192.168.1.101 NameNode DataNode
$ hadoop-2 192.168.1.102 DataNode
$ hadoop-3 192.168.1.103 DataNode

zoo.cfg:
clientPort=2181
initLimit=10
syncLimit=5
server.1=hadoop-1:2888:3888
server.2=hadoop-2:2888:3888
server.3=hadoop-3:2888:3888

hbase-site.xml:
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop-1:2181,hadoop-2:2181,hadoop-3:2181</value>
  </property>
</configuration>

Add the worker nodes:
> vim /usr/local/hbase-1.3.1/conf/regionservers
hadoop-2
hadoop-3
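One step the server.N lines imply but do not show: each ZooKeeper host must have a myid file in its data directory whose number matches its server.N entry in zoo.cfg, or the ensemble will not form. A hedged sketch; DATA_DIR and the HOST override are assumptions for illustration (a real node would use its zoo.cfg dataDir and $(hostname)):

```shell
# Sketch: write the myid that matches this host's server.N line.
# DATA_DIR is a demo path; HOST would normally be $(hostname).
DATA_DIR=${DATA_DIR:-/tmp/zookeeper-data}
HOST=${HOST:-hadoop-2}
mkdir -p "$DATA_DIR"
case "$HOST" in
  hadoop-1) echo 1 > "$DATA_DIR/myid" ;;   # server.1=hadoop-1:2888:3888
  hadoop-2) echo 2 > "$DATA_DIR/myid" ;;   # server.2=hadoop-2:2888:3888
  hadoop-3) echo 3 > "$DATA_DIR/myid" ;;   # server.3=hadoop-3:2888:3888
  *) echo "unknown host: $HOST" >&2; exit 1 ;;
esac
cat "$DATA_DIR/myid"
```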
1 172.20.2.222 cm-server centos7.3
2 172.20.2.203 hadoop-1 centos7.3
3 172.20.2.204 hadoop-2 centos7.3
4 172.20.2.205 hadoop-3 centos7.3

Add the hosts mapping on every node:
172.20.2.222 cm-server
172.20.2.203 hadoop-1
172.20.2.204 hadoop-2
172.20.2.205 hadoop-3
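The same four-line mapping has to land on every node. A minimal sketch that builds the block once and appends it locally; the commented ssh loop shows how it would be pushed cluster-wide, assuming passwordless root ssh (the /tmp target is a demo stand-in for /etc/hosts):

```shell
# Sketch: one authoritative hosts block, appended to every node.
HOSTS_BLOCK='172.20.2.222 cm-server
172.20.2.203 hadoop-1
172.20.2.204 hadoop-2
172.20.2.205 hadoop-3'

TARGET=${TARGET:-/tmp/etc-hosts.demo}   # demo stand-in for /etc/hosts
printf '%s\n' "$HOSTS_BLOCK" >> "$TARGET"

# On the real cluster (assumes passwordless root ssh):
# for h in cm-server hadoop-1 hadoop-2 hadoop-3; do
#   ssh root@"$h" "printf '%s\n' '$HOSTS_BLOCK' >> /etc/hosts"
# done
```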
(3). Deploy Flink 1.13.1 with Hadoop. For the production Hadoop deployment, see:
hadoop-3: building a production-grade hadoop-flink cluster on AWS the native way
hadoop-4: hadoop-flink
localhost
192.168.56.11 hadoop-1 debian1
192.168.56.12 hadoop-2 debian2
192.168.56.13 hadoop-3
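A duplicate IP or hostname in such a fragment silently breaks name resolution on some nodes. A small sketch of a sanity check over a demo copy of the mapping (file path and the demo contents are illustrative only):

```shell
# Sketch: every IP (column 1) and hostname (column 2) should be unique.
cat > /tmp/hosts.check <<'EOF'
192.168.56.11 hadoop-1
192.168.56.12 hadoop-2
192.168.56.13 hadoop-3
EOF

for col in 1 2; do
  dups=$(awk -v c="$col" '{print $c}' /tmp/hosts.check | sort | uniq -d)
  if [ -z "$dups" ]; then
    echo "column $col: no duplicates"
  else
    echo "column $col: DUPLICATES: $dups"
  fi
done
```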
No.  IP            Hostname       OS version
1    172.20.2.222  ambari-server  centos7.3
2    172.20.2.203  hadoop-1       centos7.3
3    172.20.2.204  hadoop-2       centos7.3
4    172.20.2.205  hadoop-3       centos7.3

hosts mapping:
172.20.2.222 ambari-server
172.20.2.203 hadoop-1
172.20.2.204 hadoop-2
172.20.2.205 hadoop-3
separated; change all of the machine names to the following form:
IP address     machine name (hostname)
192.168.1.101  hadoop-1
192.168.1.102  hadoop-2
192.168.1.103  hadoop-3
Edit each server's hosts file to associate it with the Hadoop nodes:
> vim /etc/hosts
192.168.1.101 hadoop-1
192.168.1.102 hadoop-2
192.168.1.103 hadoop-3
Sync to hadoop103:
[atguigu@hadoop102 module]$ rsync -av hadoop-3.1.3/ atguigu@hadoop103:/opt/module/hadoop-3.1.3/
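That rsync copies one directory to one peer; to push the same path to several nodes it is common to wrap it in a small helper. A hedged sketch, not the tutorial's actual script: it reuses the hadoop102/hadoop103 hostnames and the atguigu user from the command above, and prints the rsync commands instead of running them, so it is safe to try without the cluster:

```shell
# Sketch of an xsync-style helper: show the rsync that would push a
# path to each node, skipping the node we are running on.
xsync() {
  local path=$1
  local dest_dir
  dest_dir=$(dirname "$(readlink -f "$path")")
  for host in hadoop102 hadoop103; do
    [ "$host" = "$(hostname)" ] && continue   # don't sync to ourselves
    # Dry run: echo the command; drop the echo to actually transfer.
    echo rsync -av "$path" "atguigu@$host:$dest_dir/"
  done
}

mkdir -p /tmp/xsync-demo
xsync /tmp/xsync-demo
```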
rbtnode1  # here 172.19.0.24 is the nginx address
Linux:
vim /etc/hosts
# add the following three lines at the end of the file
172.19.0.28 hadoop