I am trying to configure the NFS gateway to access HDFS data, following http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html.
In short, based on the link above, I followed these steps:
sudo service rpcbind start // this will start the portmapper and NFS daemons
sudo netstat -taupen | grep 111 // this confirms that the portmapper is listening on port 111
rpcinfo -p ubuntu // lists the programs listening for RPC clients
sudo service nfs-kernel-server start // this will start mountd
rpcinfo -p ubuntu // this should now show mountd
sudo service rpcbind stop // this stops the system's portmapper
sudo netstat -taupen | grep 111 // make sure no other program is still using port 111; if one is, use "kill -9 <pid>"
sudo ./hadoop-daemon.sh start portmap // start portmap using the Hadoop-provided daemon
sudo ./hadoop-daemon.sh start nfs3
sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.125.156:/ /var/hdnfs
mount.nfs: **requested NFS version or transport protocol is not supported**

The error above went away (make sure the system NFS server is stopped with "service nfs-kernel-server stop"), and now I am seeing the following exception from nfs3:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: srini is not allowed to impersonate root
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.getFileLinkInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy14.getFileLinkInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileLinkInfo(ClientNamenodeProtocolTranslatorPB.java:712)
at org.apache.hadoop.hdfs.DFSClient.getFileLinkInfo(DFSClient.java:1796)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileStatus(Nfs3Utils.java:58)
at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileAttr(Nfs3Utils.java:79)
at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.fsinfo(RpcProgramNfs3.java:1723)
at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:1963)
at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:162)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:281)
at org.apache.hadoop.oncrpc.RpcUtil$RpcMessageParserStage.messageReceived(RpcUtil.java:132)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
2014-06-11 13:51:14,035 WARN org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: srini is not allowed to impersonate root

Posted on 2014-07-21 22:15:43
I think this is the result of a fix added in response to https://issues.apache.org/jira/browse/HDFS-5804.
Specifically, note Daryn's comment dated 14/Jan/27, where he says he would like to get rid of the different code paths based on isSecurityEnabled(). I believe this means the old default behavior was removed, and the new behavior now requires a certain amount of configuration. Essentially, the code was fixed to support security, but the configuration/documentation needed for the old, default insecure behavior was never updated to reflect the change. Open source, but closed documentation.
I think two new pieces of information are needed to make this work. Note the documentation step you followed that adds the 'nfsserver' proxyuser details to core-site.xml (note: I believe this belongs on the Hadoop server side, specifically the NameNode, not on the client where the NFS server runs, although by the time I had things working I had set it everywhere). I followed that step, but changed the value of both settings to * (star), so that 'nfsserver' can impersonate anyone when connecting from anywhere. Specifically, nfsserver needs to be able to impersonate root to get past the problem we both hit.
<property>
<name>hadoop.proxyuser.nfsserver.groups</name>
<value>*</value>
<description>
The 'nfsserver' user is allowed to proxy all members of the 'nfs-users1' and
'nfs-users2' groups. Set this to '*' to allow nfsserver user to proxy any group.
</description>
</property>
<property>
<name>hadoop.proxyuser.nfsserver.hosts</name>
<value>*</value>
<description>
This is the host where the nfs gateway is running. Set this to '*' to allow
requests from any hosts to be proxied.
</description>
</property>

This brings us to the second key piece of information needed to fix the problem: you must run the nfs3 server as the userid "nfsserver", rather than the way the documentation casually describes it:
This command doesn't require root privileges. However, ensure that the user starting the Hadoop cluster and the user starting the NFS gateway are the same.

hadoop nfs3 OR hadoop-daemon.sh start nfs3

Note, if the hadoop-daemon.sh script starts the NFS gateway, its log can be found in the hadoop log folder.
I believe this is the second change introduced as part of JIRA 5804. Most likely, in the past you were expected to run nfs3 as hdfs, and on an insecure cluster no impersonation took place. Now impersonation appears to be the default, and the only impersonation user that gets configured is literally "nfsserver", which means you need to provide a user named "nfsserver".
Finally, after adding the configuration mentioned above, you need to create the nfsserver user:
# create a system user named nfsserver with hadoop as its default group
sudo useradd -r -g hadoop nfsserver

Finally, start the nfs3 service as that user (in addition to portmap, which you already started):
sudo -u nfsserver hadoop-daemon.sh start nfs3
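Once portmap and nfs3 are both running, a quick sanity check before retrying the mount (a minimal sketch; 192.168.125.156 and /var/hdnfs are the gateway address and mount point from the question above) might look like this:

rpcinfo -p 192.168.125.156    # should now list portmapper, mountd and nfs programs
showmount -e 192.168.125.156  # the gateway exports the HDFS root "/" by default
sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.125.156:/ /var/hdnfs
ls /var/hdnfs                 # should list the top-level HDFS directories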
Posted on 2014-07-22 06:21:47

Thanks for finding the inconsistency in the documentation.
With https://issues.apache.org/jira/browse/HDFS-5804, the user does not have to start the NFS gateway as the same user who starts HDFS. (I will update the user guide soon.)
Regardless of whether the HDFS cluster is secure or not, the following two properties should always be specified: hadoop.proxyuser.nfsserver.groups and hadoop.proxyuser.nfsserver.hosts. As the user guide points out, "nfsserver" should be replaced by the user starting the NFS gateway.
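For example (a minimal sketch, assuming the gateway is started by the user srini from the error above; substitute whatever user actually starts your gateway), the two core-site.xml properties would become:

<property>
  <name>hadoop.proxyuser.srini.groups</name>
  <value>*</value>
  <description>Groups the gateway user 'srini' is allowed to proxy; '*' means any group.</description>
</property>
<property>
  <name>hadoop.proxyuser.srini.hosts</name>
  <value>*</value>
  <description>Hosts the gateway user 'srini' may proxy from; '*' means any host.</description>
</property>

If you change these on a running NameNode, the new proxyuser settings typically take effect only after a NameNode restart (or, on 2.x releases, after running hdfs dfsadmin -refreshSuperUserGroupsConfiguration).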
For a secure HDFS cluster, it doesn't matter who starts the NFS gateway. It's all about the user in the keytab: in the two properties above, "nfsserver" should be replaced by the user in the keytab.
BTW, if you also post the question to the Apache user mailing list, it may get answered faster.
I've created a JIRA to track the documentation fix: https://issues.apache.org/jira/browse/HDFS-6732
Please take a look and comment on the JIRA if the new documentation is still misleading. Thanks!
https://stackoverflow.com/questions/24134012