I created a simple word-count program as a JAR file, which I tested and which works fine. However, when I try to run the same JAR on my Kubernetes cluster, it throws an error. Below is my spark-submit command along with the error it produces.

spark-submit --master k8s://https://192.168.99.101:8443 --deploy-mode cluster --name WordCount --class com.sample.WordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=debuggerrr/spark-new:spark-new local:///C:/Users/siddh/OneDrive/Desktop/WordCountSample/target/WordCountSample-0.0.1-SNAPSHOT.jar local:///C:/Users/siddh/OneDrive/Desktop/initialData.txt

The last local argument is the data file that the word-count program runs against to produce its result.
Here is the error:
status: [ContainerStatus(containerID=null, image=gcr.io/spark-operator/spark:v2.4.5, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=Back-off pulling image "gcr.io/spark-operator/spark:v2.4.5", reason=ImagePullBackOff, additionalProperties={}), additionalProperties={}), additionalProperties={started=false})]
20/02/11 22:48:13 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: wordcount-1581441237366-driver
namespace: default
labels: spark-app-selector -> spark-386c19d289a54e2da1733376821985b1, spark-role -> driver
pod uid: a9e74d13-cf77-4de0-a16d-a71a21118ef8
creation time: 2020-02-11T17:13:59Z
service account name: default
volumes: spark-local-dir-1, spark-conf-volume, default-token-wbvkb
node name: minikube
start time: 2020-02-11T17:13:59Z
container images: gcr.io/spark-operator/spark:v2.4.5
phase: Running
status: [ContainerStatus(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, image=gcr.io/spark-operator/spark:v2.4.5, imageID=docker-pullable://gcr.io/spark-operator/spark@sha256:0d2c7d9d66fb83a0311442f0d2830280dcaba601244d1d8c1704d72f5806cc4c, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState(running=ContainerStateRunning(startedAt=2020-02-11T17:18:11Z, additionalProperties={}), terminated=null, waiting=null, additionalProperties={}), additionalProperties={started=true})]
20/02/11 22:48:19 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: wordcount-1581441237366-driver
namespace: default
labels: spark-app-selector -> spark-386c19d289a54e2da1733376821985b1, spark-role -> driver
pod uid: a9e74d13-cf77-4de0-a16d-a71a21118ef8
creation time: 2020-02-11T17:13:59Z
service account name: default
volumes: spark-local-dir-1, spark-conf-volume, default-token-wbvkb
node name: minikube
start time: 2020-02-11T17:13:59Z
container images: gcr.io/spark-operator/spark:v2.4.5
phase: Failed
status: [ContainerStatus(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, image=gcr.io/spark-operator/spark:v2.4.5, imageID=docker-pullable://gcr.io/spark-operator/spark@sha256:0d2c7d9d66fb83a0311442f0d2830280dcaba601244d1d8c1704d72f5806cc4c, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, exitCode=1, finishedAt=2020-02-11T17:18:18Z, message=null, reason=Error, signal=null, startedAt=2020-02-11T17:18:11Z, additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={started=false})]
20/02/11 22:48:21 INFO LoggingPodStatusWatcherImpl: Container final statuses:
Container name: spark-kubernetes-driver
Container image: gcr.io/spark-operator/spark:v2.4.5
Container state: Terminated
Exit code: 1
20/02/11 22:48:21 INFO Client: Application WordCount finished.
20/02/11 22:48:23 INFO ShutdownHookManager: Shutdown hook called
20/02/11 22:48:23 INFO ShutdownHookManager: Deleting directory C:\Users\siddh\AppData\Local\Temp\spark-1a3ee936-d430-4f9d-976c-3305617678df

How do I resolve this error? And how do I pass local files to the job?
Note: the JAR file and the data file are on my desktop, not inside the Docker image.
Posted on 2020-02-11 19:52:11
Unfortunately, passing local files to a job on Kubernetes is not yet officially supported. There was a solution in a Spark fork that required deploying a Resource Staging Server into the cluster, but it was never included in a released build.
Why is this hard to support? Think about the network communication that would be needed between your machine and Spark running inside Kubernetes: for Spark to fetch your local jars, the cluster would have to reach your machine (you would likely need to run a web server locally and expose its endpoint), and conversely, to push the jar from your machine into the cluster, your spark-submit script would need access to the Spark pods (this could be done with a Kubernetes Ingress, and requires wiring up several components).
Solution: Spark lets you store your artifacts (jars) in an HTTP-accessible location, as well as in HDFS-compatible storage systems. Please refer to the official documentation.
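As a sketch of that approach: if the jar were uploaded to an HTTP server reachable from inside the cluster, the submit command could reference it by URL instead of a local:// path. The host and path below are hypothetical, not from the question:

```shell
# Hypothetical: the jar is served over HTTP from a host the driver pod
# can reach, so Spark downloads it instead of looking on the client machine.
spark-submit \
  --master k8s://https://192.168.99.101:8443 \
  --deploy-mode cluster \
  --name WordCount \
  --class com.sample.WordCount \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=debuggerrr/spark-new:spark-new \
  https://artifact-host.example.com/jars/WordCountSample-0.0.1-SNAPSHOT.jar
```

The data file would need the same treatment: either host it somewhere the pods can fetch it, or bake it into the container image.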
Hope this helps.
Posted on 2020-02-28 06:41:04
Download the prebuilt Spark package spark-2.4.4-bin-hadoop2.7.tgz, and put your jar in the examples folder:
tree -L 1
.
├── LICENSE
├── NOTICE
├── R
├── README.md
├── RELEASE
├── bin
├── conf
├── data
├── examples <---
├── jars
├── kubernetes
├── licenses
├── monitoring
├── python
├── sbin
└── yarn

Then build a Docker image:
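Spelled out, the download-and-copy steps might look like the following sketch (the desktop path is copied from the question; adjust it for your machine):

```shell
# Download and unpack the prebuilt Spark distribution.
wget https://archive.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
tar -xzf spark-2.4.4-bin-hadoop2.7.tgz
cd spark-2.4.4-bin-hadoop2.7

# Copy the application jar into the examples folder so it gets baked
# into the Docker image built from this directory.
cp /c/Users/siddh/OneDrive/Desktop/WordCountSample/target/WordCountSample-0.0.1-SNAPSHOT.jar \
   examples/jars/
```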
docker build -t spark-docker:v0.1 -f ./kubernetes/dockerfiles/spark/Dockerfile .
docker push spark-docker:v0.1

Now run spark-submit:
spark-submit --master k8s://https://192.168.99.101:8443 --deploy-mode cluster --name WordCount --class com.sample.WordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=debuggerrr/spark-docker:v0.1 local:///C:/Users/siddh/OneDrive/Desktop/WordCountSample/target/WordCountSample-0.0.1-SNAPSHOT.jar local:///C:/Users/siddh/OneDrive/Desktop/initialData.txt
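One caveat (my assumption, not stated in the answer): in cluster mode, local:// paths are resolved inside the driver container, so the Windows desktop paths above will not exist there. With the jar copied into examples/jars/ before building, the Spark 2.4.4 Dockerfile places it under /opt/spark/examples/jars/ in the image, so the command would more likely need to reference the in-image paths, roughly:

```shell
# Sketch: point local:// at the jar's location inside the image, not on
# the client machine. The data file path is an assumption -- it only
# works if the file was also copied into the image (or is fetched from
# HDFS/HTTP instead).
spark-submit \
  --master k8s://https://192.168.99.101:8443 \
  --deploy-mode cluster \
  --name WordCount \
  --class com.sample.WordCount \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=debuggerrr/spark-docker:v0.1 \
  local:///opt/spark/examples/jars/WordCountSample-0.0.1-SNAPSHOT.jar \
  /opt/spark/examples/initialData.txt
```

Note that the final argument is an ordinary application argument passed to the main class, so it is written as a plain container path rather than with the local:// scheme.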