
Unable to import data using Sqoop: exitCode=255

Stack Overflow user
Asked on 2016-07-27 17:10:26
3 answers · 1.7K views · 0 followers · 0 votes

I am a newbie with Hadoop and Spark. I have set up a Hadoop/Spark cluster (1 namenode, 2 datanodes). Now I am trying to use Sqoop to import data from a DB (MySQL) into HDFS, but it always fails.

16/07/27 16:50:04 INFO mapreduce.Job: Running job: job_1469629483256_0004
16/07/27 16:50:11 INFO mapreduce.Job: Job job_1469629483256_0004 running in uber mode : false
16/07/27 16:50:11 INFO mapreduce.Job:  map 0% reduce 0%
16/07/27 16:50:13 INFO ipc.Client: Retrying connect to server: datanode1_hostname/172.31.58.123:59676. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/27 16:50:14 INFO ipc.Client: Retrying connect to server: datanode1_hostname/172.31.58.123:59676. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/27 16:50:15 INFO ipc.Client: Retrying connect to server: datanode1_hostname/172.31.58.123:59676. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/27 16:50:18 INFO mapreduce.Job: Job job_1469629483256_0004 failed with state FAILED due to: Application application_1469629483256_0004 failed 2 times due to AM Container for appattempt_1469629483256_0004_000002 exited with  exitCode: 255
For more detailed output, check application tracking page:http://ip-172-31-55-182.ec2.internal:8088/cluster/app/application_1469629483256_0004Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1469629483256_0004_02_000001
Exit code: 255
Stack trace: ExitCodeException exitCode=255: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
        at org.apache.hadoop.util.Shell.run(Shell.java:456)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 255
Failing this attempt. Failing the application.
16/07/27 16:50:18 INFO mapreduce.Job: Counters: 0
16/07/27 16:50:18 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
16/07/27 16:50:18 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 16.2369 seconds (0 bytes/sec)
16/07/27 16:50:18 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/07/27 16:50:18 INFO mapreduce.ImportJobBase: Retrieved 0 records.
16/07/27 16:50:18 ERROR tool.ImportTool: Error during import: Import job failed!

I am able to write to HDFS manually:

hdfs dfs -put <local file path> <hdfs path>

But when I run the sqoop import command:

sqoop import --connect jdbc:mysql://<host>/<db_name> --username <USERNAME> --password <PASSWORD> --table <TABLE_NAME> --enclosed-by '\"' --fields-terminated-by , --escaped-by \\ -m 1 --target-dir <hdfs location>

Can someone tell me what I am doing wrong?

Here is a list of things I have already tried:

  1. Shut down the cluster, formatted HDFS, and restarted the cluster (didn't help)
  2. Made sure HDFS is not in safe mode
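
The checks above can be re-run from the command line before retrying the import. A minimal sketch, assuming the `hdfs` and `yarn` CLIs are on the PATH (it falls back to a message when they are not):

```shell
#!/bin/sh
# Re-check safe mode and node health before re-running the Sqoop job.
# Falls back gracefully when the Hadoop CLIs are not on the PATH.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -safemode get          # expect "Safe mode is OFF"
  hdfs dfsadmin -report | head -n 20   # summary of registered datanodes
else
  echo "hdfs not on PATH"
fi
if command -v yarn >/dev/null 2>&1; then
  yarn node -list                      # NodeManagers should be RUNNING
else
  echo "yarn not on PATH"
fi
```

If `yarn node -list` shows fewer nodes than expected, the AM container has nowhere healthy to launch, which would produce exactly this kind of exit-255 failure.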

All nodes have this in /etc/hosts:

127.0.0.1 localhost
172.31.55.182 namenode_hostname
172.31.58.123 datanode1_hostname
172.31.58.122 datanode2_hostname

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
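
Given the "Retrying connect to server: datanode1_hostname" lines in the log, it is worth confirming that every node resolves each of these hostnames to the expected address. A minimal sketch (it uses `localhost` as the runnable stand-in; on the cluster you would loop over `namenode_hostname`, `datanode1_hostname`, and `datanode2_hostname`):

```shell
# Resolve each hostname through the system resolver (which consults
# /etc/hosts) and print the address it maps to; an empty result means
# the entry is missing or misspelled.
for h in localhost; do   # on the cluster: namenode_hostname datanode1_hostname datanode2_hostname
  addr=$(getent hosts "$h" | awk '{print $1; exit}')
  echo "$h -> ${addr:-NOT RESOLVED}"
done
```

Running this on every node catches the case where one node's /etc/hosts differs from the others.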

Configuration files:

$HADOOP_CONF_DIR/core-site.xml (all nodes):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ip-172-31-55-182.ec2.internal:9000</value>
  </property>
</configuration>
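
One way to confirm that this value is actually in effect (and not shadowed by a stale copy in another configuration directory) is `hdfs getconf`. A guarded sketch:

```shell
# Print the effective fs.defaultFS as Hadoop sees it; if it differs from
# hdfs://ip-172-31-55-182.ec2.internal:9000, the wrong config dir is being read.
if command -v hdfs >/dev/null 2>&1; then
  hdfs getconf -confKey fs.defaultFS
else
  echo "hdfs not on PATH"
fi
```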

$HADOOP_CONF_DIR/yarn-site.xml (all nodes):

<configuration>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ip-172-31-55-182.ec2.internal</value>
  </property>

</configuration>

$HADOOP_CONF_DIR/mapred-site.xml (all nodes):

<configuration>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>ip-172-31-55-182.ec2.internal:54311</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

NameNode-specific configuration

$HADOOP_CONF_DIR/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///mnt/hadoop_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
  </property>
  <property>
    <name>dfs.datanode.https.address</name>
    <value>0.0.0.0:50475</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
  </property>
</configuration>

$HADOOP_CONF_DIR/masters: ip-172-31-55-182.ec2.internal

$HADOOP_CONF_DIR/slaves:

ip-172-31-58-123.ec2.internal
ip-172-31-58-122.ec2.internal

DataNode-specific configuration

$HADOOP_CONF_DIR/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///mnt/hadoop_data/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
  </property>
  <property>
    <name>dfs.datanode.https.address</name>
    <value>0.0.0.0:50475</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
  </property>
</configuration>

3 answers

Stack Overflow user

Accepted answer

Posted on 2016-08-01 12:01:58

I am going to kill this cluster and start from scratch.

0 votes

Stack Overflow user

Posted on 2016-07-27 18:40:42

Where are you trying to import the data from? I mean, from which machine, the namenode or a datanode, are you trying to connect? Check the masters and slaves files.

Try pinging the IP addresses from the different servers and check whether they show up.
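
A sketch of that check, to be run from each node in turn (127.0.0.1 is the runnable stand-in here; substitute the other nodes' addresses from /etc/hosts):

```shell
# Ping each peer once with a 1-second timeout and report up/down.
for ip in 127.0.0.1; do   # on the cluster: 172.31.55.182 172.31.58.123 172.31.58.122
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip is up"
  else
    echo "$ip is down or unreachable"
  fi
done
```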

0 votes

Stack Overflow user

Posted on 2016-07-28 12:18:39

Make these changes, restart the cluster, and then try again:

Edit the parts indicated in the comments (#) below, then remove the comments.

/etc/hosts file on the client node:

127.0.0.1 localhost yourcomputername  #get computername by "hostname -f" command and replace here
172.31.55.182 namenode_hostname ip-172-31-55-182.ec2.internal
172.31.58.123 datanode1_hostname ip-172-31-58-123.ec2.internal
172.31.58.122 datanode2_hostname ip-172-31-58-122.ec2.internal
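
To see what should replace `yourcomputername` above, print the machine's own name as the comment suggests:

```shell
# "hostname -f" prints the fully qualified name; fall back to plain
# "hostname" if -f is not supported on this system.
hostname -f 2>/dev/null || hostname
```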

/etc/hosts file on the cluster nodes:

198.22.23.212 yourcomputername  # change to the public IP of the client node; set the computer name to match the client node
172.31.55.182 namenode_hostname ip-172-31-55-182.ec2.internal
172.31.58.123 datanode1_hostname ip-172-31-58-123.ec2.internal
172.31.58.122 datanode2_hostname ip-172-31-58-122.ec2.internal
0 votes
Original page content provided by Stack Overflow; translation supported by Tencent Cloud.
Original link: https://stackoverflow.com/questions/38619284
