
Loading external jars into spark-notebook fails

Stack Overflow user
Asked on 2017-08-23 15:04:32
2 answers · 838 views · 0 followers · 0 votes

I am trying to connect to Redshift from the notebook; here is what I have done so far -

Metadata configured for the notebook:

"customDeps": [
   "com.databricks:spark-redshift_2.10:3.0.0-preview1",
   "com.databricks:spark-avro_2.11:3.2.0",
   "com.databricks:spark-csv_2.11:1.5.0"
]

Checked the browser console to confirm these libraries were loaded after restarting the kernel:

ui-logs-1422> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.m2/repository/com/databricks/spark-avro_2.10/3.0.0/spark-avro_2.10-3.0.0.jar
kernel.js:978 ui-logs-1452> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-redshift_2.10/3.0.0-preview1/spark-redshift_2.10-3.0.0-preview1.jar
kernel.js:978 ui-logs-1509> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.5.0/spark-csv_2.11-1.5.0.jar
kernel.js:978 ui-logs-1526> [Tue Aug 22 2017 09:46:26 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/databricks/spark-avro_2.11/3.2.0/spark-avro_2.11-3.2.0.jar
When I try to load a table, I run into a class-not-found exception:
java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.redshift. Please find packages at http://spark.apache.org/third-party-projects.html
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:594)
  at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
  at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 63 elided
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
  at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
  at scala.util.Try$.apply(Try.scala:192)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
  at scala.util.Try.orElse(Try.scala:84)
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:579)
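For context, the read that triggers this lookup would be a standard DataFrameReader call against the spark-redshift data source; a minimal sketch of that kind of read follows (the connection URL, table name, and tempdir are placeholders, not values from the original post):

// Sketch of a spark-redshift read; all option values are placeholders
val df = spark.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://host:5439/db?user=xxx&password=xxx")
  .option("dbtable", "some_table")
  .option("tempdir", "s3n://some-bucket/tmp")
  .load()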

Has anyone else run into this issue or managed to resolve it?

I have also noticed a similar problem with another dependency. Is something missing from the configuration?

While trying the timeseries sample in notebooks/timeseries/Spark-Timeseries.snb.ipynb, I noticed a custom dependency already present in the metadata -

"customDeps": [
    "com.cloudera.sparkts % sparkts % 0.3.0"
  ]

I quickly verified the availability of this package at https://spark-packages.org/package/sryza/spark-timeseries and updated the metadata to include this line:

"com.cloudera.sparkts:sparkts:0.4.1"

After a restart, verified in the kernel logs that the library was fetched:

ui-logs-337> [Wed Aug 23 2017 09:29:25 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Will fetch these customDeps artifacts:Set(Dependency(com.cloudera.sparkts:sparkts,0.3.0,,Set(),Attributes(,),false,true), Dependency(com.cloudera.sparkts:sparkts,0.4.1,,Set(),Attributes(,),false,true))
kernel.js:978 ui-logs-347> [Wed Aug 23 2017 09:29:37 GMT+0530 (IST)] [notebook.util.CoursierDeps$] Fetched artifact to:/Users/xxxx/.coursier/cache/v1/http/repo1.maven.org/maven2/com/cloudera/sparkts/sparkts/0.4.1/sparkts-0.4.1.jar
Error message -

<console>:69: error: object cloudera is not a member of package com
       import com.cloudera.sparkts._
                  ^
<console>:70: error: object cloudera is not a member of package com
       import com.cloudera.sparkts.stats.TimeSeriesStatisticalTests

2 Answers

Stack Overflow user

Answered on 2017-08-29 18:00:41

I downloaded a different build of spark-notebook (not the one from the master branch):

spark-notebook-0.7.0-scala-2.11.8-spark-2.1.1-hadoop-2.7.2
(instead of)
spark-notebook-0.9.0-SNAPSHOT-scala-2.11.8-spark-2.1.1-hadoop-2.7.2

I also had to make sure the Scala, Spark, and Hadoop versions stayed consistent across the dependencies I configured (see the sketch after the export below). In this particular case I additionally had to supply the jar for the Amazon Redshift JDBC driver from the command line, since it is not available in the Maven repository:

export EXTRA_CLASSPATH=RedshiftJDBC4-1.2.7.1003.jar
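On the version-consistency point: the question's metadata mixes _2.10 and _2.11 artifacts. A sketch of what a consistent Scala 2.11 configuration might look like is below (this assumes these _2.11 artifact versions exist on Maven Central, which the original post does not confirm):

"customDeps": [
   "com.databricks:spark-redshift_2.11:3.0.0-preview1",
   "com.databricks:spark-avro_2.11:3.2.0",
   "com.databricks:spark-csv_2.11:1.5.0"
]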

Hope this helps someone else.

Votes: 0

Stack Overflow user

Answered on 2019-03-17 01:39:04

If needed, you can add jars to the "env" section of the kernel spec (EXTRA_CLASSPATH) as follows:

cat /usr/local/share/jupyter/kernels/apache_toree_scala/kernel.json
{
  "argv": [
    "/usr/local/share/jupyter/kernels/apache_toree_scala/bin/run.sh",
    "--profile",
    "{connection_file}"
  ],
  "interrupt_mode": "signal",
  "env": {
    "__TOREE_SPARK_OPTS__": "",
    "PYTHONPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/python:/opt/cloudera/parcels/SPARK2/lib/spark2/python/lib/py4j-0.10.7-src.zip",
    "__TOREE_OPTS__": "",
    "PYTHON_EXEC": "python",
    "SPARK_HOME": "/opt/cloudera/parcels/SPARK2/lib/spark2",
    "DEFAULT_INTERPRETER": "Scala",
    "JAVA_HOME": "/usr/java/latest",
    "EXTRA_CLASSPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/jars/mysql-connector-java-5.1.15.jar"
  },
  "metadata": {},
  "display_name": "SPARK2/Scala",
  "language": "scala"
}
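As a usage note: after restarting the kernel with this environment, the jar on EXTRA_CLASSPATH should be visible to the Toree session. A minimal sketch of exercising the bundled MySQL driver through Spark's generic JDBC source is below (host, database, table, and credentials are placeholders, not values from the original post):

// Sketch: reading through the JDBC driver supplied on EXTRA_CLASSPATH;
// all connection values are placeholders
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://dbhost:3306/mydb")
  .option("driver", "com.mysql.jdbc.Driver")  // class name for mysql-connector-java 5.1.x
  .option("dbtable", "some_table")
  .option("user", "user")
  .option("password", "password")
  .load()
df.show()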
Votes: 0
Original question: https://stackoverflow.com/questions/45832827
