I am trying to set up Databricks Connect to use a remote Databricks cluster that is already running in a workspace on Azure. When I run the command `databricks-connect test`, it never finishes.
I followed the official documentation.
I installed the latest Anaconda (with Python 3.7) and created a local environment:

conda create --name dbconnect python=3.5

I installed databricks-connect version 5.1, which matches the configuration of my cluster on Azure Databricks:

pip install -U databricks-connect==5.1.*

I then set up the databricks-connect configuration as follows:
(base) C:\>databricks-connect configure
The current configuration is:
* Databricks Host: ******.azuredatabricks.net
* Databricks Token: ************************************
* Cluster ID: ****-******-*******
* Org ID: ****************
* Port: 8787

After completing the steps above, I ran the test command for Databricks Connect:

databricks-connect test

The process starts and then stops after a warning about MetricsSystem, as shown below:
(dbconnect) C:\>databricks-connect test
* PySpark is installed at c:\users\miltad\appdata\local\continuum\anaconda3\envs\dbconnect\lib\site-packages\pyspark
* Checking java version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
* Testing scala command
19/05/31 08:14:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/05/31 08:14:34 WARN MetricsSystem: Using default name SparkStatusTracker for source because neither spark.metrics.namespace nor spark.app.id is set.

I expected the process to continue to the next step, as shown in the official documentation:
* Testing scala command
18/12/10 16:38:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/12/10 16:38:50 WARN MetricsSystem: Using default name SparkStatusTracker for source because neither spark.metrics.namespace nor spark.app.id is set.
18/12/10 16:39:53 WARN SparkServiceRPCClient: Now tracking server state for 5abb7c7e-df8e-4290-947c-c9a38601024e, invalidating prev state
18/12/10 16:39:59 WARN SparkServiceRPCClient: Syncing 129 files (176036 bytes) took 3003 ms
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0-SNAPSHOT
/_/
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

So my process stops right after `WARN MetricsSystem: Using default name SparkStatusTracker`.
What am I doing wrong? Should I be configuring something else?
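Before digging into the cluster side, one local sanity check is to confirm that the configuration file written by `databricks-connect configure` is complete. The following is a sketch only: the file location (`~/.databricks-connect`) and the JSON key names are assumptions based on the configure prompts shown above, not something confirmed in this thread.

```python
# Sanity-check the databricks-connect config file.
# ASSUMPTION: `databricks-connect configure` writes a JSON file at
# ~/.databricks-connect with the keys listed below.
import json
import os

REQUIRED_KEYS = ("host", "token", "cluster_id", "org_id", "port")

def load_dbconnect_config(path=None):
    """Load the databricks-connect JSON config and fail loudly if a value is missing."""
    path = path or os.path.expanduser("~/.databricks-connect")
    with open(path) as f:
        cfg = json.load(f)
    missing = [k for k in REQUIRED_KEYS if not cfg.get(k)]
    if missing:
        raise ValueError("incomplete databricks-connect config, missing: %s" % missing)
    return cfg
```

If this raises, re-run `databricks-connect configure` before troubleshooting the test command itself.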
Posted on 2019-06-06 19:40:03
Many people seem to hit this issue with the test command on Windows. However, if you try actually using Databricks Connect, it works fine, so it seems safe to ignore.
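A minimal sketch of "actually using it", assuming the dbconnect conda environment is active and the configured cluster is running (the helper name and the trivial count round-trip are illustrative, not from the thread):

```python
# Skip the hanging `databricks-connect test` and round-trip a trivial
# job through the remote cluster instead.
def connection_ok(spark, n=100):
    """Return True if a trivial range/count job makes it to the cluster and back."""
    return spark.range(n).count() == n

if __name__ == "__main__":
    # Imported here so the module stays importable without pyspark installed.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.getOrCreate()  # picks up the databricks-connect config
    print("connection ok:", connection_ok(spark))
```

If this prints `connection ok: True`, the connection works regardless of what the test command reports.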
Posted on 2020-02-12 21:51:31
It looks like this feature is not officially supported on runtime 5.3 or below. If you are constrained from updating the runtime, I would make sure the Spark conf is set as follows:

spark.databricks.service.server.enabled true

However, things may still be unstable on older runtimes. I would recommend using runtime 5.5 or 6.1 (or later) for this.
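For reference, that setting goes in the cluster's Spark config (Clusters → Advanced Options → Spark in the workspace UI). A sketch of the relevant lines; the `spark.databricks.service.port` entry is an assumption based on the port 8787 shown in the configure output above, not something stated in this answer:

```
spark.databricks.service.server.enabled true
spark.databricks.service.port 8787
```

After editing the Spark config, the cluster has to be restarted for the change to take effect.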
https://stackoverflow.com/questions/56389816