
HBase HDFS integration - HBase Master not starting

Stack Overflow user
Asked on 2015-02-03 09:32:53
1 answer · 2.6K views · 0 followers · Score 5

I have configured a two-node cluster of Linux distributions running side by side on VirtualBox.

The contents of the /etc/hosts file on both Linux machines are shown below:

hduser@ubuntu-master:~$ cat /etc/hosts
192.168.56.103  Ubuntu-Master master
192.168.56.102  LinuxMint-Slave slave
10.33.136.219   inkod2lp00100.techmahindra.com inkod2lp00100


hduser@LinuxMint-Slave ~ $ cat /etc/hosts
192.168.56.103  Ubuntu-Master master
192.168.56.102  LinuxMint-Slave slave
10.33.136.219   inkod2lp00100.techmahindra.com inkod2lp00100
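A common reason HDFS DataNodes end up unreachable on setups like this is a `127.0.1.1 <hostname>` alias in /etc/hosts (Ubuntu and Mint add one by default), which makes a node advertise the loopback address instead of its 192.168.56.x LAN IP. As a quick sanity check, a sketch of what to look for (using an illustrative copy of the hosts entries above at the hypothetical path /tmp/hosts.sample; on a real node, check /etc/hosts itself):

```shell
# Check a hosts file for a 127.0.1.1 alias of the machine's own hostname,
# a common reason DataNodes register with the loopback address.
cat > /tmp/hosts.sample <<'EOF'
192.168.56.103  Ubuntu-Master master
192.168.56.102  LinuxMint-Slave slave
EOF

if grep -q '^127\.0\.1\.1' /tmp/hosts.sample; then
  echo "loopback alias found: this node may advertise 127.0.1.1"
else
  echo "no loopback alias"
fi
```

If the alias is present, removing it (or rebinding the hostname to the LAN address) and restarting the daemons is usually enough for the DataNodes to register with the correct IP.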

The contents of hbase-site.xml (location: /usr/local/hbase/conf) on both machines are as follows:

hduser@ubuntu-master:~$ cat /usr/local/hbase/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>hbase.tmp.dir</name>
  <value>file:///usr/local/hbase/hbasetmp/hbase-${user.name}</value>
</property>

<property>
    <name>hbase.master</name>
    <value>Ubuntu-Master:16000</value>
    <description>The host and port that the HBase master runs at.</description>
</property>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://Ubuntu-Master:54310/hbase</value>
  </property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>Ubuntu-Master,LinuxMint-Slave</value>
</property>

   <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>file:///usr/local/hbase/zookeeperdata</value>
  </property>

<property>
     <name>hbase.zookeeper.property.clientPort</name>
   <value>2222</value>
 </property>
</configuration>

hduser@LinuxMint-Slave ~ $ cat /usr/local/hbase/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

<property>
  <name>hbase.tmp.dir</name>
  <value>file:///usr/local/hbase/hbasetmp/hbase-${user.name}</value>
</property>

<property>
  <name>hbase.master</name>
  <value>Ubuntu-Master:16000</value>
  <description>The host and port that the HBase master runs at.</description>
</property>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://Ubuntu-Master:54310/hbase</value>
  </property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>Ubuntu-Master,LinuxMint-Slave</value>
</property>


  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>file:///usr/local/hbase/zookeeperdata</value>
  </property>

<property>
     <name>hbase.zookeeper.property.clientPort</name>
   <value>2222</value>
 </property>

</configuration>
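The hbase.rootdir above points at hdfs://Ubuntu-Master:54310, so Hadoop's core-site.xml must define that same filesystem authority, or the HMaster will try to write to the wrong HDFS. The fragment below is an assumption about what that file should contain (it is not shown in the question); check your own core-site.xml rather than copying it verbatim:

```xml
<!-- core-site.xml on both nodes (assumed, not from the question).
     fs.defaultFS must use the same host and port as hbase.rootdir. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://Ubuntu-Master:54310</value>
</property>
```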

However, whenever I start the HBase services on the master node, the HMaster does not stay up: it fails shortly after the initial start.

Please check the service status:

hduser@ubuntu-master:~$ jps

3793 SecondaryNameNode
5332 HQuorumPeer
4006 ResourceManager
4134 NodeManager
4883 JobHistoryServer
6286 Jps
3512 NameNode
3637 DataNode
5535 HRegionServer

hduser@LinuxMint-Slave ~ $ jps
2504 DataNode
3175 HQuorumPeer
2651 NodeManager
3681 Jps
3291 HRegionServer

Here is the log file of the HMaster service:

2015-02-03 12:21:14,168 WARN  [Thread-12] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471) ………………….

……………………..
2015-02-03 12:21:14,185 DEBUG [master:Ubuntu-Master:60000] util.FSUtils: Unable to create version file at hdfs://Ubuntu-Master:54310/hbase, retrying
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)

…………………………………………

2015-02-03 12:21:24,285 WARN  [Thread-15] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)

……………………………………………………….
2015-02-03 12:21:24,286 DEBUG [master:Ubuntu-Master:60000] util.FSUtils: Unable to create version file at hdfs://Ubuntu-Master:54310/hbase, retrying
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
……………………………………………..

2015-02-03 12:21:34,312 WARN  [Thread-17] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
……………………………………………………………….
2015-02-03 12:21:44,333 FATAL [master:Ubuntu-Master:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
…………………………………….

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
2015-02-03 12:21:44,334 INFO  [master:Ubuntu-Master:60000] master.HMaster: Aborting
2015-02-03 12:21:44,334 DEBUG [master:Ubuntu-Master:60000] master.HMaster: Stopping service threads
2015-02-03 12:21:44,335 INFO  [master:Ubuntu-Master:60000] ipc.RpcServer: Stopping server on 60000
2015-02-03 12:21:44,335 INFO  [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2015-02-03 12:21:44,339 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-02-03 12:21:44,339 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-02-03 12:21:44,339 INFO  [master:Ubuntu-Master:60000] master.HMaster: Stopping infoServer
2015-02-03 12:21:44,364 INFO  [master:Ubuntu-Master:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2015-02-03 12:21:44,508 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-02-03 12:21:44,509 INFO  [master:Ubuntu-Master:60000] zookeeper.ZooKeeper: Session: 0x14b4e1d0a040002 closed
2015-02-03 12:21:44,510 INFO  [master:Ubuntu-Master:60000] master.HMaster: HMaster main thread exiting
2015-02-03 12:21:44,510 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2803)
2015-02-03 12:21:44,515 ERROR [Thread-5] hdfs.DFSClient: Failed to close file /hbase/.tmp/hbase.version
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)

1 Answer

Stack Overflow user

Answered on 2016-08-24 14:37:42

I recently had the same problem.

The way to overcome it is simple, but dangerous:

you will definitely lose all the data on HDFS.

You should do the following:

  1. Stop all hadoop services: stop-hbase.sh && stop-yarn.sh && stop-dfs.sh
  2. Delete all HDFS data on both the master and the slave. You can find the paths in hadoop's /etc/hadoop/hdfs-site.xml; in my case the folders I had to delete were /home/hadoop/hadoopdata/hdfs/namenode and /home/hadoop/hadoopdata/hdfs/datanode

Alternatively, you can simply delete the /home/hadoop/hadoopdata directory on both servers.

Here is the part of the configuration file you may need to look for:

    <property>
        <name>dfs.name.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
    </property>

    <property>
        <name>dfs.data.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
    </property>
  3. On the master, run: hadoop namenode -format (the namenode part may differ for you).
  4. On the slave, run: hadoop datanode -format (again, the datanode part may differ for you).
  5. Start hadoop and the other services: start-dfs.sh && start-yarn.sh && start-hbase.sh
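The steps above can be sketched as a single script. This is a sketch under the answer's assumptions (the stock start/stop scripts are on $PATH and the data directories live under /home/hadoop/hadoopdata, as in the answer); the DRY_RUN guard only prints each command, so nothing destructive runs until you set DRY_RUN=0 on the actual cluster:

```shell
# Dry-run sketch of the reset procedure; set DRY_RUN=0 to really execute.
# Paths and script names follow the answer's setup - adjust for yours.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run stop-hbase.sh                                  # stop HBase before HDFS
run stop-yarn.sh
run stop-dfs.sh
run rm -rf /home/hadoop/hadoopdata/hdfs/namenode \
           /home/hadoop/hadoopdata/hdfs/datanode   # DESTROYS all HDFS data
run hdfs namenode -format                          # on the master only
run start-dfs.sh
run start-yarn.sh
run start-hbase.sh
```

Note that stock Hadoop has no `datanode -format` subcommand; deleting the DataNode's data directory and restarting has the same effect, since the DataNode then re-registers against the freshly formatted NameNode, which is why the sketch omits that step.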
Score: 2
The original page content is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/28295521