Currently, I am connecting my local VS Code to Databricks through databricks-connect. However, all of my subtasks fail with a module-not-found error, which means the code in my other Python files is not found. I have tried the approach below.
Does anyone have experience with this, or know a better way to interact with Databricks for a Python project?
It seems that my plain Python code is executed in my local Python env, and only the Spark-related code is executed on the cluster; the cluster has not loaded all of my Python files, and that raises the error.
I have a folder with a class Foo in lib222.py.
The main code is:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext
#sc.setLogLevel("INFO")
print("Testing addPyFile isolation")
sc.addPyFile("lib222.py")
from lib222 import Foo
print(sc.parallelize(range(10)).map(lambda i: Foo(2)).collect())
But I get a ModuleNotFoundError: lib222 is not found.
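One workaround I have seen suggested (a sketch only; os.path.abspath and the make_foo helper are my additions, and I have not verified this on databricks-connect with DBR 6.6) is to pass an absolute local path to addPyFile and defer the import into the mapped function, so the executor only resolves lib222 after the file has been shipped:

```python
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# With databricks-connect the driver code runs locally, so resolve the path
# on the local machine before shipping the file to the cluster.
sc.addPyFile(os.path.abspath("lib222.py"))

def make_foo(i):
    # Importing here instead of at module level means the pickled function
    # carries no top-level reference that a worker must resolve before the
    # shipped lib222.py is on its sys.path.
    from lib222 import Foo
    return Foo(2)

print(sc.parallelize(range(10)).map(make_foo).collect())
```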
Also, when I print the Python version and some sys info, it looks as if the Python code is executed on my local machine rather than on the remote driver. My Databricks Runtime version is 6.6. Detailed error:
Exception has occurred: Py4JJavaError
An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 10.139.64.8, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 182, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 695, in loads
return pickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'lib222'
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/databricks/spark/python/pyspark/worker.py", line 462, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/databricks/spark/python/pyspark/worker.py", line 71, in read_command
command = serializer._read_with_length(file)
File "/databricks/spark/python/pyspark/serializers.py", line 185, in _read_with_length
raise SerializationError("Caused by " + traceback.format_exc())
pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 182, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 695, in loads
return pickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'lib222'
I am using Databricks on AWS, and the best practices I follow are:
- Using conda, remove the existing installation from your local environment and create a fresh environment: conda create -n ENV_NAME python==PYTHON_VERSION.
- The minor version of your client Python installation must be the same as the minor Python version of your Databricks cluster (3.5, 3.6, or 3.7): Databricks Runtime 5.x has Python 3.5, Databricks Runtime 5.x ML has Python 3.6, and Databricks Runtime 6.1 and above has Python 3.7. A quick way to confirm that both sides match is sketched below.
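A minimal sketch of such a check (the executor_python_version helper is my own; it assumes a working databricks-connect session): print the local interpreter version and ask an executor for its version.

```python
import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Version of the interpreter running this script on the local machine.
print("local python:", sys.version)

def executor_python_version(_):
    # This runs on a cluster executor, so it reports the cluster-side version.
    import sys as executor_sys
    return executor_sys.version

print("cluster python:", sc.parallelize([0], 1).map(executor_python_version).collect()[0])
```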
Note: always install PySpark with pip, as pip points to the official releases. Avoid installing PySpark with conda or conda-forge.
Package versions: pandas 0.23.2, NumPy 1.7, pyarrow 0.15.1, Py4J 0.10.9.
Source: https://stackoverflow.com/questions/62843519
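For completeness, a small sketch (assuming the packages are installed under these distribution names and that setuptools/pkg_resources is available) to compare the locally installed versions against the list above:

```python
import pkg_resources  # ships with setuptools

for pkg in ("pandas", "numpy", "pyarrow", "py4j"):
    # Prints the locally installed version of each distribution.
    print(pkg, pkg_resources.get_distribution(pkg).version)
```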