
Using Spark SQL and spark-csv in Spark Jobserver

Stack Overflow user
Asked 2016-05-26 08:56:26
2 answers · 455 views · 0 followers · 1 vote

I am trying to write a simple Scala application that uses spark-csv and Spark to build a DataFrame from a CSV file stored in HDFS, and then run a simple query returning the max and min of a specific column in that CSV file.

I get an error when I build the JAR with the sbt command. Later I will copy the JAR to the job-server/jars folder and execute it on a remote machine.

Code:

import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.SparkContext._
import org.apache.spark._
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

object sparkSqlCSV extends SparkJob {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[4]").setAppName("sparkSqlCSV")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is " + results)
  }

  override def validate(sc: sqlContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: sqlContext, config: Config): Any = {
    val value = "com.databricks.spark.csv"
    val ControlDF = sqlContext.load(value, Map("path" -> "hdfs://mycluster/user/Test.csv", "header" -> "true"))
    ControlDF.registerTempTable("Control")
    val aggDF = sqlContext.sql("select max(DieX) from Control")
    aggDF.collectAsList()
  }
}

Error:

[hduser@ptfhadoop01v spark-jobserver]$ sbt ashesh-jobs/package
[info] Loading project definition from /usr/local/hadoop/spark-jobserver/project
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray  features depend on this.
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray  features depend on this.
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /home/hduser/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/usr/local/hadoop/spark-jobserver/)
[info] scalastyle using config /usr/local/hadoop/spark-jobserver/scalastyle-config.xml
[info] Processed 2 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 9 ms
[success] created output: /usr/local/hadoop/spark-jobserver/ashesh-jobs/target
[warn] Credentials file /home/hduser/.bintray/.credentials does not exist
[info] Updating {file:/usr/local/hadoop/spark-jobserver/}ashesh-jobs...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] scalastyle using config /usr/local/hadoop/spark-jobserver/scalastyle-config.xml
[info] Processed 5 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 1 ms
[success] created output: /usr/local/hadoop/spark-jobserver/job-server-api/target
[info] Compiling 2 Scala sources and 1 Java source to /usr/local/hadoop/spark-jobserver/ashesh-jobs/target/scala-2.10/classes...
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:8: object sql is not a member of   package org.apache.spark
[error] import org.apache.spark.sql.SQLContext
[error]                         ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:14: object sql is not a member of package org.apache.spark
[error]     val sqlContext = new org.apache.spark.sql.SQLContext(sc)
[error]                                           ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:25: not found: type sqlContext
[error]    override def runJob(sc: sqlContext, config: Config): Any = {
[error]                            ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:21: not found: type sqlContext
[error]     override def validate(sc: sqlContext, config: Config): SparkJobValidation = {
[error]                               ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:27: not found: value sqlContext
[error]    val ControlDF = sqlContext.load(value,Map("path"->"hdfs://mycluster/user/Test.csv","header"->"true"))
[error]                    ^
[error] /usr/local/hadoop/spark-jobserver/ashesh-jobs/src/spark.jobserver/sparkSqlCSV.scala:29: not found: value sqlContext
[error]    val aggDF = sqlContext.sql("select max(DieX) from Control")
[error]                ^
[error] 6 errors found
[error] (ashesh-jobs/compile:compileIncremental) Compilation failed
[error] Total time: 10 s, completed May 26, 2016 4:42:52 PM
[hduser@ptfhadoop01v spark-jobserver]$

I guess the main problem is that the spark-csv and Spark SQL dependencies are missing, but I don't know where to declare the dependencies before compiling the code with sbt.

I issue the following command to package the application; the source code is under the "ashesh_jobs" directory:

[hduser@ptfhadoop01v spark-jobserver]$ sbt ashesh-jobs/package

I hope someone can help me resolve this issue. Could you point me to the file where I should declare the dependencies, and the format for doing so?


2 Answers

Stack Overflow user

Answered 2016-05-26 12:29:13

The following link provides more information about creating other contexts: https://github.com/spark-jobserver/spark-jobserver/blob/master/doc/contexts.md

You will also need job-server-extras.
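
For reference, here is a minimal sketch of what the job might look like when built on job-server-extras. It assumes the spark-jobserver 0.6.x API, where the SparkSqlJob trait hands runJob and validate an SQLContext instead of a SparkContext; the HDFS path and the DieX column are taken from the question:

import com.typesafe.config.Config
import org.apache.spark.sql.SQLContext
import spark.jobserver.{SparkJobValid, SparkJobValidation, SparkSqlJob}

object sparkSqlCSV extends SparkSqlJob {
  // Jobserver constructs and passes in the SQLContext, so no SparkConf,
  // SparkContext, or main method is needed here.
  override def validate(sql: SQLContext, config: Config): SparkJobValidation =
    SparkJobValid

  override def runJob(sql: SQLContext, config: Config): Any = {
    // Load the CSV through the spark-csv data source (Spark 1.x API).
    val controlDF = sql.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .load("hdfs://mycluster/user/Test.csv")
    controlDF.registerTempTable("Control")
    sql.sql("select max(DieX), min(DieX) from Control").collect()
  }
}

The job would then be submitted to a context created with the SQL context factory, which is what the contexts.md document linked above describes.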

Votes: 0

Stack Overflow user

Answered 2016-08-10 18:06:20

Add the library dependency in build.sbt:

libraryDependencies += "org.apache.spark" % "spark" % "1.6.2"
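
The dependency line above appears to have been garbled in translation. A build.sbt sketch that matches the compile errors (the missing org.apache.spark.sql package and the com.databricks.spark.csv data source) would presumably look like the following; the exact versions are assumptions and should match the cluster's Spark and Scala (2.10, per the build log) versions:

// build.sbt for the job sub-project; versions are illustrative
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.2" % "provided",
  "org.apache.spark" %% "spark-sql"  % "1.6.2" % "provided",
  "com.databricks"   %% "spark-csv"  % "1.4.0"
)

Marking the Spark artifacts as provided keeps them out of the packaged JAR, since jobserver already has Spark on its classpath.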

Votes: 0
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/37456076
