
线程"main“中的Kafka :=异常--没有为连接找到条目
EN

Stack Overflow用户
提问于 2019-04-23 10:07:49
回答 1查看 1.3K关注 0票数 0

When running the Kafka code, I get the following errors:

1) ERROR StreamExecution: Query [id = c6426655-446f-4306-91ba-d78e68e05c15, runId = 420382c1-8558-45a1-b26d-f6299044fa04] terminated with error java.lang.ExceptionInInitializerError
2) Exception in thread "stream execution thread for [id = c6426655-446f-4306-91ba-d78e68e05c15, runId = 420382c1-8558-45a1-b26d-f6299044fa04]" java.lang.ExceptionInInitializerError
3) Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: null

sbt dependencies:

// https://mvnrepository.com/artifact/org.apache.spark/spark-core
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.3"

// https://mvnrepository.com/artifact/org.apache.spark/spark-sql
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.3"

// https://mvnrepository.com/artifact/org.apache.spark/spark-streaming
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.2.3" % "provided"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka
libraryDependencies += "org.apache.kafka" %% "kafka" % "2.1.1"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.1.1"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams
libraryDependencies += "org.apache.kafka" % "kafka-streams" % "2.1.1"

// https://mvnrepository.com/artifact/org.apache.spark/spark-sql-kafka-0-10
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.2.3"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams-scala
libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "2.1.1"
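Note that this mix pairs spark-sql-kafka-0-10 2.2.3, which is built against the 0.10.x kafka-clients line, with kafka-clients 2.1.1; that kind of version clash is a common trigger for ExceptionInInitializerError when the query starts. A minimal build.sbt sketch with consistent versions (an illustration, assuming Scala 2.11 and Spark 2.2.3; it lets spark-sql-kafka-0-10 pull in its own matching kafka-clients instead of pinning a newer one):

// build.sbt -- a minimal sketch, assuming Scala 2.11 and Spark 2.2.3.
// The point is to avoid forcing a 2.x kafka-clients onto the classpath;
// spark-sql-kafka-0-10 brings in the client version it was built against.
scalaVersion := "2.11.12"

val sparkVersion = "2.2.3"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"           % sparkVersion,
  "org.apache.spark" %% "spark-sql"            % sparkVersion,
  "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
)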

Code (Scala):
import java.sql.Timestamp

import org.apache.spark.sql.SparkSession


object demo1 {

  def main(args: Array[String]): Unit = {

    System.setProperty("hadoop.home.dir","c:\\hadoop\\")

    val spark: SparkSession = SparkSession.builder
      .appName("My Spark Application")
      .master("local[*]")
      .config("spark.sql.warehouse.dir", "file:///C:/temp") // Necessary to work around a Windows bug in Spark 2.0.0; omit if you're not on Windows.
      .config("spark.sql.streaming.checkpointLocation", "file:///C:/checkpoint")
      .getOrCreate

    spark.sparkContext.setLogLevel("ERROR")

    spark.conf.set("spark,sqlshuffle.partations","2")

    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "162.244.80.189:9092")
      .option("startingOffsets", "earliest")
      .option("group.id","test1")
      .option("subscribe", "demo11")
      .load()

    import spark.implicits._


    val dsStruc = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp").as[(String, String, Timestamp)]


    // Write the parsed stream (dsStruc) to the console. Note: show() cannot
    // be called on a streaming DataFrame, so the trailing df.show() is removed.
    val query = dsStruc.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
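For what it's worth, an ExceptionInInitializerError raised when the stream starts usually points at the classpath rather than at this code. A small diagnostic sketch (the object name KafkaClasspathCheck is hypothetical, just for illustration) that prints which kafka-clients jar the JVM actually loaded:

import org.apache.kafka.clients.consumer.KafkaConsumer

// Minimal diagnostic: print the jar that provides KafkaConsumer at runtime,
// to confirm which kafka-clients version won on the classpath.
object KafkaClasspathCheck {
  def main(args: Array[String]): Unit = {
    val location = classOf[KafkaConsumer[_, _]]
      .getProtectionDomain.getCodeSource.getLocation
    println(s"kafka-clients loaded from: $location")
  }
}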

Answer 1

Posted on 2019-06-27 08:51:14 by a Stack Overflow user

I had the same problem: I was using the wrong spark-sql library version (2.2.0 instead of 2.3.0). The configuration that worked for me was:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.3.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.0</version>
</dependency>
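Since the question uses sbt rather than Maven, the same coordinates could be written roughly as follows (a sketch, assuming Scala 2.11; %% appends the _2.11 suffix automatically):

// build.sbt equivalent of the Maven coordinates above (sketch, Scala 2.11)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"           % "2.3.0" % "provided",
  "org.apache.spark" %% "spark-sql"            % "2.3.0",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.0",
  "org.apache.kafka" %  "kafka-clients"        % "0.10.1.0"
)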

Hope this helps. I was inspired by this article:

https://community.hortonworks.com/content/supportkb/222428/error-microbatchexecution-query-id-567e4e77-9457-4.html

Votes: 0
Original content from Stack Overflow: https://stackoverflow.com/questions/55808828