
org.apache.hadoop.ipc.RemoteException error

Stack Overflow user
Asked on 2015-06-24 13:54:59
Answers: 2 · Views: 9.9K · Followers: 0 · Votes: 0

I want to copy some files from a Windows machine to Hadoop, which is running in single-node mode on Ubuntu 14.04.02. Here is the code used for this purpose:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration configuration = new Configuration();
configuration.addResource(new Path("/core-site.xml"));
configuration.addResource(new Path("/mapred-site.xml"));
FileSystem hdfs = FileSystem.get(configuration);

Path homeDirectory = hdfs.getHomeDirectory();
System.out.println("Home directory\t\t: " + homeDirectory);
Path workingDirectory = hdfs.getWorkingDirectory();
System.out.println("Working directory\t: " + workingDirectory);
Path dataFolderPath = new Path("/ali");
dataFolderPath = Path.mergePaths(workingDirectory, dataFolderPath);
System.out.println("Data Folder Path\t: " + dataFolderPath);

if(hdfs.exists(dataFolderPath)){
    System.out.println("Data Folder Path exists.\nExisting path deleting...");
    hdfs.delete(dataFolderPath, true);
}
System.out.println("Data Folder Path creating...");

Path localFilePath = new Path("D:\\text.txt");
Path hdfsFilePath = new Path(dataFolderPath + "/text.txt");

System.out.println("Copying \'" + localFilePath + "\' to \'" + hdfsFilePath + "\'...");

hdfs.copyFromLocalFile(localFilePath, hdfsFilePath);

System.out.println("All completed");

This is the console log I get:

Home directory      : hdfs://10.0.0.14:9000/user/ademir
Working directory   : hdfs://10.0.0.14:9000/user/ademir
Data Folder Path    : hdfs://10.0.0.14:9000/user/ademir/ali
Data Folder Path exists.
Existing path deleting...
Data Folder Path creating...
Copying 'D:/text.txt' to 'hdfs://10.0.0.14:9000/user/ademir/ali/text.txt'...
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/ademir/ali/text.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

When I run this on the machine where Hadoop is running, it completes without any problem, but from a Windows machine on the local network I get the output above.

What is wrong with this implementation? What is the root cause of the problem, and how can I fix it?

Thanks for your help.

Note: The Hadoop version is 2.6.0. Also, I am a beginner with Hadoop.


2 Answers

Stack Overflow user

Posted on 2015-06-24 14:48:26

This link provides more possible answers: HDFS error: could only be replicated to 0 nodes, instead of 1

In particular this answer: "This is your problem: the client cannot communicate with the datanode, because the IP the client receives for the datanode is an internal IP rather than a public one. Take a look at this."

As you can see, your datanode is also marked as excluded in this operation.
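
A commonly suggested client-side workaround, as a sketch only and not verified against this setup, is to have the client address datanodes by hostname instead of the internal IP the namenode reports. This assumes the datanode's hostname is resolvable from the Windows machine (for example via its hosts file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration configuration = new Configuration();
// Have the client connect to datanodes by hostname instead of the internal IP
// reported by the namenode, so a client outside the cluster's network can
// resolve the datanode through its own hosts file.
configuration.setBoolean("dfs.client.use.datanode.hostname", true);
FileSystem hdfs = FileSystem.get(configuration);

With this set, the Windows machine still needs a hosts-file entry mapping the datanode's hostname to an address it can actually reach.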

Votes: 1

Stack Overflow user

Posted on 2015-06-24 14:14:48

There is a similar question here: HDFS error: could only be replicated to 0 nodes, instead of 1. See if it helps.

Also check the hosts file and verify that both the datanode and the namenode are reachable from the Windows machine, i.e. that the IP:port combinations are accessible (a quick check is sketched below). Note that Hadoop copies data directly to the datanodes, not through the namenode.
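
For that reachability check, here is a minimal sketch. It assumes the namenode address from the log above (10.0.0.14:9000) and the default Hadoop 2.6 datanode data-transfer port 50010 (dfs.datanode.address); your configuration may differ:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class HdfsPortCheck {
    public static void main(String[] args) {
        // 9000 = namenode RPC port from the log above; 50010 = default
        // datanode data-transfer port in Hadoop 2.6.
        check("10.0.0.14", 9000);
        check("10.0.0.14", 50010);
    }

    static void check(String host, int port) {
        try (Socket socket = new Socket()) {
            // Fail fast with a 5-second connect timeout.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println(host + ":" + port + " is reachable");
        } catch (IOException e) {
            System.out.println(host + ":" + port + " is NOT reachable: " + e.getMessage());
        }
    }
}

If the namenode port is reachable but the datanode port is not, that matches the error above: the namenode accepts the request, but the client cannot stream the block to any datanode.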

Votes: 0
The original content of this page is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/31018638
