
spark-submit Kubernetes exit code

Stack Overflow user
Asked 2022-03-25 19:36:01
2 answers · 331 views · 0 followers · 0 votes

How can I programmatically check whether a Spark job succeeded or failed when running it via spark-submit on Kubernetes? Normally I would use the Unix exit code.

 phase: Failed
 container status:
     container name: spark-kubernetes-driver
     container image: <registry>/spark-py:spark3.2.1
     container state: terminated
     container started at: 2022-03-25T19:10:51Z
     container finished at: 2022-03-25T19:10:57Z
     exit code: 1
     termination reason: Error

2022-03-25 15:10:58,457 INFO submit.LoggingPodStatusWatcherImpl: Application Postgres-Minio-Kubernetes.py with submission ID spark:postgres-minio-kubernetes-py-b70d3f7fc27829ec-driver finished
2022-03-25 15:10:58,465 INFO util.ShutdownHookManager: Shutdown hook called
2022-03-25 15:10:58,466 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-3321e67c-73d5-422d-a26d-642a0235cf23

The process failed, but when I get the Unix exit code via echo $?, it returns zero!

$ echo $?
0

The pod names are also generated randomly. How can this be handled with spark-submit, other than by using the spark-on-k8s-operator?
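The driver's real status does appear in the text that spark-submit prints (the "exit code:" line in the log above), so one workaround is to parse it out of the captured output. A minimal sketch, using a copy of the status block from the question as stand-in input:

```shell
#!/usr/bin/env bash
# Stand-in for the driver-pod status block that spark-submit prints
# (copied from the log above); in practice this would be the captured
# output of the spark-submit command itself.
STATUS='phase: Failed
container status:
    container name: spark-kubernetes-driver
    container state: terminated
    exit code: 1
    termination reason: Error'

# Pull the numeric code out of the "exit code: N" line.
DRIVER_EXIT=$(printf '%s\n' "$STATUS" \
    | sed -n 's/.*exit code: \([0-9][0-9]*\).*/\1/p' \
    | head -n 1)
echo "driver exit code: ${DRIVER_EXIT:-unknown}"
```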


2 Answers

Stack Overflow user

Answered 2022-03-26 08:54:21

If you are using bash, one way is to grep the output. Depending on where the log output is sent, you may have to grep stderr or stdout.

Something like this:

OUTPUT=`spark-submit ...`
if echo "$OUTPUT" | grep -q "exit code: 1"; then
    exit 1
fi
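Since spark-submit writes its pod-status watcher log to stderr, the capture may also need a 2>&1. A sketch of the same check with stderr merged in (fake_spark_submit is a stand-in for the real spark-submit invocation):

```shell
#!/usr/bin/env bash
# Stand-in for spark-submit: emits the status lines on stderr,
# the way the real LoggingPodStatusWatcherImpl output appears.
fake_spark_submit() {
    echo "phase: Failed"    >&2
    echo "    exit code: 1" >&2
}

# 2>&1 folds stderr into the captured output so grep can see it.
OUTPUT=$(fake_spark_submit 2>&1)
if echo "$OUTPUT" | grep -q "exit code: 1"; then
    echo "job failed"
    JOB_STATUS=1   # a real wrapper would 'exit 1' here
else
    echo "job succeeded"
    JOB_STATUS=0
fi
```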
Votes: 1

Stack Overflow user

Answered 2022-10-08 19:53:22

In addition to what @Rico mentioned, I handled both the cluster and client deploy modes by modifying the spark-submit shell script in the $SPARK_HOME/bin directory, as follows.

#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

if [ -z "${SPARK_HOME}" ]; then
  source "$(dirname "$0")"/find-spark-home
fi

# disable randomized hash for string in Python 3.3+
export PYTHONHASHSEED=0



# check deployment mode (either flag form selects cluster mode).
if echo "$@" | grep -q "\-\-deploy-mode cluster" \
   || echo "$@" | grep -q "\-\-conf spark.submit.deployMode=cluster";
then
    echo "cluster mode..";
    # temp log file for spark job.
    export TMP_LOG="/tmp/spark-job-log-$(date '+%Y-%m-%d-%H-%M-%S').log";
    # no 'exec' here: the script must keep running to inspect the log.
    "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@" |& tee ${TMP_LOG};
    # when "exit code: 1" is contained in the spark log, return exit 1.
    if grep -q "exit code: 1" ${TMP_LOG};
    then
        echo "exit code: 1";
        rm -rf ${TMP_LOG};
        exit 1;
    else
        echo "job succeeded.";
        rm -rf ${TMP_LOG};
        exit 0;
    fi
else
    echo "client mode..";
    exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@"
fi
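The core of the wrapper is the tee-to-temp-log plus grep check. A self-contained sketch of just that pattern, with fake_submit standing in for the spark-class ... SparkSubmit call:

```shell
#!/usr/bin/env bash
# fake_submit stands in for: "${SPARK_HOME}"/bin/spark-class ... SparkSubmit "$@"
fake_submit() { echo "    exit code: 1"; }

TMP_LOG="$(mktemp)"
# 2>&1 | tee captures both streams into the log (same effect as bash's |&),
# while still letting the output pass through.
fake_submit 2>&1 | tee "${TMP_LOG}" > /dev/null

if grep -q "exit code: 1" "${TMP_LOG}"; then
    RESULT="exit code: 1"    # a real wrapper would 'exit 1' here
else
    RESULT="job succeeded."
fi
rm -f "${TMP_LOG}"
echo "${RESULT}"
```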

Then I built and pushed my Spark Docker image.

See the following link for details:

Votes: 0
Original content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://stackoverflow.com/questions/71622255
