
No NameNode, DataNode, or Secondary NameNode to stop

Stack Overflow user
Asked on 2015-11-18 05:32:44
4 answers · 13.4K views · 0 followers · 3 votes

I installed Hadoop on Ubuntu 12.04 by following the steps in the link below.

cluster.php

Everything installed successfully, but when I run start-all.sh only some of the services come up.

wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:

hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
hduse@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
Starting secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps
7545 ResourceManager
7885 NodeManager

Once I stop the services by running the stop-all.sh script:

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
hduse@localhost's password: 
localhost: no namenode to stop
hduse@localhost's password: 
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hduse@localhost's password: 
localhost: stopping nodemanager
no proxyserver to stop

My configuration files:

  1. Edit the .bashrc file (vi ~/.bashrc):

     #HADOOP VARIABLES START
     export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
     export HADOOP_INSTALL=/usr/local/hadoop
     export PATH=$PATH:$HADOOP_INSTALL/bin
     export PATH=$PATH:$HADOOP_INSTALL/sbin
     export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
     export HADOOP_COMMON_HOME=$HADOOP_INSTALL
     export HADOOP_HDFS_HOME=$HADOOP_INSTALL
     export YARN_HOME=$HADOOP_INSTALL
     export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
     export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
     #HADOOP VARIABLES END
  2. hdfs-site.xml (vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml):

     dfs.replication — default block replication. The actual number of replications can be specified when the file is created; if replication is not specified at create time, the default is used.
     dfs.namenode.name.dir = file:/usr/local/hadoop_store/hdfs/namenode
     dfs.datanode.data.dir = file:/usr/local/hadoop_store/hdfs/datanode
  3. hadoop-env.sh (vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh):

     export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
     for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
       if [ "$HADOOP_CLASSPATH" ]; then
         export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
       else
         export HADOOP_CLASSPATH=$f
       fi
     done
     export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
     export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
     export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
     export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
     export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
     # Applies to multiple commands (fs, dfs, fsck, ...)
     export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
     export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
     export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
     export HADOOP_PID_DIR=${HADOOP_PID_DIR}
     export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
     # A string representing this instance of hadoop. $USER by default.
     export HADOOP_IDENT_STRING=$USER
  4. core-site.xml (vi /usr/local/hadoop/etc/hadoop/core-site.xml):

     hadoop.tmp.dir = /app/hadoop/tmp — a base for other temporary directories.
     fs.default.name = hdfs://localhost:54310 — the name of the default file system: a URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property naming the FileSystem implementation class, and the URI's authority is used to determine the host, port, etc. for a file system.
  5. mapred-site.xml (vi /usr/local/hadoop/etc/hadoop/mapred-site.xml):

     mapred.job.tracker = localhost:54311 — the host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.

     Java versions:

     $ javac -version
     javac 1.8.0_66
     $ java -version
     java version "1.8.0_66"
     Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
     Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
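For reference, the storage paths listed under hdfs-site.xml above normally sit inside the usual `<property>` layout. This is a sketch of the XML shape only, written to /tmp so nothing in the real install is touched (the real file lives under /usr/local/hadoop/etc/hadoop):

```shell
# Sketch: the flattened hdfs-site.xml entries above, in their usual XML shape.
cat > /tmp/hdfs-site-sketch.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>
EOF
grep -c '<property>' /tmp/hdfs-site-sketch.xml
```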

I am new to Hadoop and cannot figure this problem out. Where can I find the log files for the JobTracker and NameNode so I can trace the services?


4 Answers

Stack Overflow user

Answered on 2015-11-18 16:37:54

If it is not an ssh issue, try the following steps:

  1. Remove everything from the temp directory (rm -rf /app/hadoop/tmp) and format the namenode server with bin/hadoop namenode -format. Start the namenode and datanode with bin/start-dfs.sh. Type jps on the command line to check whether the nodes are running.
  2. Check that the hduser has permission to write to the hadoop_store/hdfs/namenode and datanode directories with ls -ld <directory>. You can change the permissions with sudo chmod 777 /hadoop_store/hdfs/namenode/.
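Put together, the steps above look roughly like the following shell session. Paths are taken from the question's config; `hdfs namenode -format` is the Hadoop 2.x spelling of the format command, and note that formatting erases any existing HDFS data:

```shell
# 1. Empty the temp dir and re-format the namenode (destroys existing HDFS data)
rm -rf /app/hadoop/tmp/*
/usr/local/hadoop/bin/hdfs namenode -format

# Start HDFS and confirm the daemons are up
/usr/local/hadoop/sbin/start-dfs.sh
jps    # should now list NameNode, DataNode and SecondaryNameNode

# 2. Check ownership/permissions of the storage dirs; loosen them if needed
ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode
sudo chmod -R 777 /usr/local/hadoop_store/hdfs
```

chmod 777 is a blunt diagnostic fix; once things work, `sudo chown -R hduse /usr/local/hadoop_store/hdfs` with tighter permissions is the cleaner option.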
Votes: 4

Stack Overflow user

Answered on 2015-11-18 14:27:23

If you look closely at the output of the start-all.sh command, you can easily see the log file paths. Each service reports where it is logging just after it tries to start:

localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out
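The .log files sit next to those .out files, and the names follow a fixed `<prefix>-<user>-<daemon>-<hostname>` pattern, so you can reconstruct them from the start-up lines. A small sketch, with the user and host taken from the question:

```shell
# Hadoop 2.x daemon logs: <prefix>-<user>-<daemon>-<hostname>.log / .out
LOG_DIR=/usr/local/hadoop/logs
HUSER=hduse
HOST=wanderer-Lenovo-IdeaPad-S510p

for daemon in namenode datanode secondarynamenode; do
  echo "hadoop-$HUSER-$daemon-$HOST.log"
done
# e.g. follow the namenode log while it starts:
#   tail -f "$LOG_DIR/hadoop-$HUSER-namenode-$HOST.log"
```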
Votes: 1

Stack Overflow user

Answered on 2015-11-18 09:51:59

You have to set up passwordless authentication for ssh. The hduse user should be able to log in to localhost over ssh without a password.
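A common way to set that up is sketched below (standard OpenSSH steps; run as the hduse user, after which start-all.sh should not prompt for passwords):

```shell
# Generate a passphrase-less key pair if one does not exist yet,
# and authorize it for logins to this machine.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"

# Verify; this should print "ok" without asking for a password:
#   ssh localhost 'echo ok'
```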

Votes: 0
The original content of this page was provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/33772495
