
Spark execution fails with error: java.lang.NoClassDefFoundError: org.codehaus.janino.InternalCompilerException

Stack Overflow user
Asked on 2021-10-07 09:34:03
1 answer · 1.2K views · 0 following · Score: 1

Running a Spark program in Java fails as soon as the first action is invoked on a dataset, with the exception below. I have already tried every suggestion from "Spark SQL fails with java.lang.NoClassDefFoundError: org/codehaus/commons/compiler/UncheckedCompileException", and none of them work. Upgrading the Spark version also produced the same error.

Note that the program is not run with spark-submit but via java -jar <app-name>. This is the Spark dependency configuration:

    compile group: 'org.apache.spark', name: 'spark-sql_2.12', version: '2.4.3'
    implementation 'org.codehaus.janino:commons-compiler:3.0.16'
    implementation 'org.codehaus.janino:janino:3.0.16'

The exclusion configuration below was also tried; the same error persists:

    implementation('org.apache.spark:spark-sql_2.12:2.4.3') {
        exclude group: 'org.codehaus.janino', module: 'janino'
        exclude group: 'org.codehaus.janino', module: 'commons-compiler'
    }
    compile "org.codehaus.janino:commons-compiler:3.0.16"
    compile "org.codehaus.janino:janino:3.0.16"

Exception stack trace:

java.lang.NoClassDefFoundError: org.codehaus.janino.InternalCompilerException
    at org.apache.spark.sql.catalyst.expressions.codegen.JavaCode$.variable(javaCode.scala:63)
    at org.apache.spark.sql.catalyst.expressions.codegen.JavaCode$.isNullVariable(javaCode.scala:76)
    at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:109)
    at org.apache.spark.sql.catalyst.expressions.Expression$$Lambda$2984/0x00000000bfa2b020.apply(Unknown Source)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:105)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.$anonfun$create$1(GenerateSafeProjection.scala:155)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$$$Lambda$2982/0x00000000ef9c1620.apply(Unknown Source)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.TraversableLike$$Lambda$1442/0x00000000ff849e20.apply(Unknown Source)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.immutable.List.map(List.scala:298)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:152)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:38)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1193)
    at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3382)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsList$1(Dataset.scala:2794)
    at org.apache.spark.sql.Dataset$$Lambda$2827/0x00000000af983620.apply(Unknown Source)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3364)
    at org.apache.spark.sql.Dataset$$Lambda$2919/0x00000000cf9f4220.apply(Unknown Source)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$$$Lambda$2920/0x00000000cfa4f020.apply(Unknown Source)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
    at org.apache.spark.sql.Dataset.collectAsList(Dataset.scala:2793)
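A `NoClassDefFoundError` for `org.codehaus.janino.InternalCompilerException` usually means the janino jar actually resolved onto the runtime classpath differs from the one Spark was compiled against (in janino 3.1.x the class reportedly moved out of the `org.codehaus.janino` package). One way to see which jar, if any, supplies a class at runtime is to resolve its `.class` resource; this is a minimal sketch, and `ClasspathCheck` is a hypothetical helper name, not part of the original program:

```java
// Minimal sketch: find which classpath entry (jar) provides a given class.
public class ClasspathCheck {

    // Returns the URL of the .class resource (typically a jar: URL naming
    // the providing jar), or null if the class is absent from the classpath.
    static java.net.URL locate(String className) {
        return ClasspathCheck.class.getClassLoader()
                .getResource(className.replace('.', '/') + ".class");
    }

    public static void main(String[] args) {
        // In the failing app, check the class named in the error:
        // null confirms the class is missing; a jar URL reveals which
        // janino version is actually being loaded.
        System.out.println(locate("org.codehaus.janino.InternalCompilerException"));
    }
}
```

Running this inside the failing application (not standalone) shows whether the exclusion/pinning attempts above actually took effect.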

The code:

SparkConf sparkConf = new SparkConf().setAppName("App demo").setMaster("local[*]");

try (SparkSession sparkSession = createSparkSession(sparkConf)) {
    Dataset<Row> df = sparkSession.read().json("/Users/shubhampr/Documents/spark/examples/src/main/resources/people.json");
    df.show();
} catch (Exception e) {
    log.error("Error in processing file: {}", e.getMessage());
    return;
}

SparkSession createSparkSession(SparkConf sparkConf) {
    return SparkSession.builder()
            .sparkContext(new JavaSparkContext(sparkConf).sc())
            .getOrCreate();
}

1 Answer

Stack Overflow user

Answered on 2022-05-28 03:44:19

The janino and commons-compiler versions need to be strictly pinned to 3.0.16; the fix below does exactly that:

    // Spark lib
    compile "org.apache.spark:spark-core_2.12:2.4.3"
    compile "org.apache.spark:spark-sql_2.12:2.4.3"
    implementation('org.codehaus.janino:commons-compiler') {
        version {
            strictly '3.0.16'
        }
    }
    implementation('org.codehaus.janino:janino') {
        version {
            strictly '3.0.16'
        }
    }
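An alternative not mentioned in the answer, sketched here under the assumption of the Gradle Groovy DSL, is forcing the version via `resolutionStrategy`, which pins janino across every configuration; `gradle dependencies --configuration runtimeClasspath` can then confirm what actually resolves:

```groovy
// Equivalent pin via resolutionStrategy (assumed alternative, not the
// original answer's fix): force both janino artifacts to 3.0.16.
configurations.all {
    resolutionStrategy {
        force 'org.codehaus.janino:janino:3.0.16',
              'org.codehaus.janino:commons-compiler:3.0.16'
    }
}
```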
Score: 1
Original content provided by Stack Overflow; the page was machine-translated by Tencent Cloud's engine.
Original link:

https://stackoverflow.com/questions/69478786
