I want to launch an AWS EC2 instance using the spark-ec2 script. I get this error:
Initializing spark
--2016-11-18 22:33:06-- http://s3.amazonaws.com/spark-related-packages/spark-1.6.3-bin-hadoop1.tgz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.1.3
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.1.3|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2016-11-18 22:33:06 ERROR 404: Not Found.
ERROR: Unknown Spark version

My locally installed Spark comes from spark-1.6.3-bin-hadoop2.6.tgz, so the installer should not be trying to fetch spark-1.6.3-bin-hadoop1.tgz. In init.sh, when HADOOP_MAJOR_VERSION == 1:
if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
wget http://s3.amazonaws.com/spark-related-packages/spark-$SPARK_VERSION-bin-hadoop1.tgz
elif [[ "$HADOOP_MAJOR_VERSION" == "2" ]]; then
wget http://s3.amazonaws.com/spark-related-packages/spark-$SPARK_VERSION-bin-cdh4.tgz
else
wget http://s3.amazonaws.com/spark-related-packages/spark-$SPARK_VERSION-bin-hadoop2.4.tgz
fi
if [ $? != 0 ]; then
echo "ERROR: Unknown Spark version"
return -1问题是:
-- There is no hadoop1 build of Spark at http://s3.amazonaws.com/spark-related-packages, which is the root cause of the failed Spark installation (see the quick check after this list).
-- HADOOP_MAJOR_VERSION appears to be set to 1 during the install, even though I am installing Hadoop 2.x, which triggers the failure above.
-- spark_ec2.py pulls the latest spark-ec2 from GitHub during the install, so I don't see a possible local fix. I'm not confident enough to fork the script on GitHub and hack it directly.
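For reference, a quick way to confirm what the error log above implies (a hedged sketch; the expected responses reflect the bucket contents as of late 2016, not a verified inventory):

# The hadoop1 tarball should return 404 Not Found, while the hadoop2.6
# tarball (the build installed locally) should return 200 OK.
curl -I http://s3.amazonaws.com/spark-related-packages/spark-1.6.3-bin-hadoop1.tgz
curl -I http://s3.amazonaws.com/spark-related-packages/spark-1.6.3-bin-hadoop2.6.tgz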
Is there any way to work around this?
Posted on 2016-11-20 17:53:02
Resolved by including this option when invoking the spark-ec2 script locally:

--hadoop-major-version=2
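For context, a minimal sketch of a full launch invocation with that option; the key pair, identity file, region, slave count, and cluster name below are placeholders, not from the original post:

# Hypothetical example invocation; substitute your own key pair,
# identity file, region, and cluster name.
./spark-ec2 \
  --key-pair=my-keypair \
  --identity-file=/path/to/my-keypair.pem \
  --region=us-east-1 \
  --slaves=2 \
  --hadoop-major-version=2 \
  launch my-spark-cluster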
https://stackoverflow.com/questions/40691022