Cannot import from pyspark because it can't find py4j

Stack Overflow user
Asked on 2019-10-16 10:27:45
1 answer · 1.3K views · 0 followers · 2 votes

I have built a docker image containing spark and pipenv. If I run python within the pipenv and try to import pyspark, it fails with "ModuleNotFoundError: No module named 'py4j'":

root@4d0ae585a52a:/tmp# pipenv run python -c "from pyspark.sql import SparkSession"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/spark/python/pyspark/__init__.py", line 46, in <module>
    from pyspark.context import SparkContext
  File "/opt/spark/python/pyspark/context.py", line 29, in <module>
    from py4j.protocol import Py4JError
ModuleNotFoundError: No module named 'py4j'

However, if I run pyspark in the same virtual environment, there is no such problem:

root@4d0ae585a52a:/tmp# pipenv run pyspark
Python 3.7.4 (default, Sep 12 2019, 16:02:06) 
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/10/16 10:18:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/10/16 10:18:33 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.1
      /_/

Using Python version 3.7.4 (default, Sep 12 2019 16:02:06)
SparkSession available as 'spark'.
>>> spark.createDataFrame([('Alice',)], ['name']).collect()
[Row(name='Alice')]

I'll admit I copied large chunks of my Dockerfile from elsewhere, so I don't fully understand how it all hangs together under the covers. I would have expected having py4j on the PYTHONPATH to be sufficient, but apparently it isn't. I can confirm that it is on the PYTHONPATH, and that the file exists:

root@4d0ae585a52a:/tmp# pipenv run python -c "import os;print(os.environ['PYTHONPATH'])"
/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:
root@4d0ae585a52a:/tmp# pipenv run ls /opt/spark/python/lib/py4j*
/opt/spark/python/lib/py4j-0.10.4-src.zip
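
Since PYTHONPATH entries end up on the interpreter's sys.path, another way to see what the pipenv interpreter actually resolves is something along these lines (a minimal diagnostic sketch, assuming the same container and pipenv environment as above):

# Print the interpreter's effective module search path; the py4j zip named
# in PYTHONPATH should appear here if the variable is being honoured.
pipenv run python -c "import sys; print('\n'.join(sys.path))"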

Can anyone suggest how I can make py4j available to the python interpreter in my virtualenv?

Here is the Dockerfile. We pull our artifacts (docker images, apt packages, pypi packages etc.) from a local jfrog artifactory cache, hence all the artifactory references:

FROM images.artifactory.our.org.com/python3-7-pipenv:1.0

WORKDIR /tmp

ENV SPARK_VERSION=2.2.1
ENV HADOOP_VERSION=2.8.4

ARG ARTIFACTORY_USER
ARG ARTIFACTORY_ENCRYPTED_PASSWORD
ARG ARTIFACTORY_PATH=artifactory.our.org.com/artifactory/generic-dev/ceng/external-dependencies
ARG SPARK_BINARY_PATH=https://${ARTIFACTORY_PATH}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
ARG HADOOP_BINARY_PATH=https://${ARTIFACTORY_PATH}/hadoop-${HADOOP_VERSION}.tar.gz


ADD apt-transport-https_1.4.8_amd64.deb /tmp

RUN echo "deb https://username:password@artifactory.our.org.com/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list &&\
    echo "deb https://username:password@artifactory.our.org.com/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list &&\
    echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update &&\
    echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout &&\
    echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout &&\
    dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb &&\
    apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb &&\
    apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"


RUN apt-get update && \
    apt-get -y install default-jdk

# Detect JAVA_HOME and export in bashrc.
# This will result in something like this being added to /etc/bash.bashrc
#   export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN echo export JAVA_HOME="$(readlink -f /usr/bin/java | sed "s:/jre/bin/java::")" >> /etc/bash.bashrc

# Configure Spark-${SPARK_VERSION}
# Not using tar -v because including verbose output causes ci logs to exceed max length
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${SPARK_BINARY_PATH}" -o /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && cd /opt \
    && tar -xzf /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark \
    && sed -i '/log4j.rootCategory=INFO, console/c\log4j.rootCategory=CRITICAL, console' /opt/spark/conf/log4j.properties.template \
    && mv /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties \
    && mkdir /opt/spark-optional-jars/ \
    && mv /opt/spark/conf/spark-defaults.conf.template /opt/spark/conf/spark-defaults.conf \
    && printf "spark.driver.extraClassPath /opt/spark-optional-jars/*\nspark.executor.extraClassPath /opt/spark-optional-jars/*\n">>/opt/spark/conf/spark-defaults.conf \
    && printf "spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby" >> /opt/spark/conf/spark-defaults.conf

# Configure Hadoop-${HADOOP_VERSION}
# Not using tar -v because including verbose output causes ci logs to exceed max length
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${HADOOP_BINARY_PATH}" -o /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && cd /opt \
    && tar -xzf /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && rm /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && ln -s hadoop-${HADOOP_VERSION} hadoop

# Set Environment Variables.
ENV SPARK_HOME="/opt/spark" \
    HADOOP_HOME="/opt/hadoop" \
    PYSPARK_SUBMIT_ARGS="--master=local[*] pyspark-shell --executor-memory 1g --driver-memory 1g --conf spark.ui.enabled=false spark.executor.extrajavaoptions=-Xmx=1024m" \
    PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH" \
    PATH="$PATH:/opt/spark/bin:/opt/hadoop/bin" \
    PYSPARK_DRIVER_PYTHON="/usr/local/bin/python" \
    PYSPARK_PYTHON="/usr/local/bin/python"

# Upgrade pip and setuptools
RUN pip install --index-url https://username:password@artifactory.our.org.com/artifactory/api/pypi/pypi-virtual-all/simple --upgrade pip setuptools

1 Answer

Stack Overflow user

Accepted answer

Posted on 2019-10-16 13:02:23

I think I've just solved this by installing py4j as a standalone package:

$ docker run --rm -it images.artifactory.our.org.com/myimage:mytag bash
root@1d6a0ec725f0:/tmp# pipenv install py4j
Installing py4j…
✔ Installation Succeeded 
Pipfile.lock (49f1d8) out of date, updating to (dfdbd6)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success! 
Updated Pipfile.lock (49f1d8)!
Installing dependencies from Pipfile.lock (49f1d8)…
     ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 42/42 — 00:00:06
To activate this projects virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
root@1d6a0ec725f0:/tmp# pipenv run python -c "from pyspark.sql import SparkSession;spark = SparkSession.builder.master('local').enableHiveSupport().getOrCreate();print(spark.createDataFrame([('Alice',)], ['name']).collect())"
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/10/16 13:05:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/10/16 13:05:48 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
[Row(name='Alice')]
root@1d6a0ec725f0:/tmp#

I'm not really sure why I have to install py4j separately given that it is already on the PYTHONPATH, but so far everything seems fine, so I'm happy. If anyone can explain why it doesn't work without explicitly installing py4j, I'd love to know. I can only assume that this line in my Dockerfile:

PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH"

isn't successfully making py4j known to the interpreter.
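
That line is relying on CPython's ability to import packages directly from a zip archive placed on sys.path (the standard zipimport mechanism). A minimal sketch of what I expected to happen, assuming the zip named in PYTHONPATH is the one that actually exists on disk:

# Put the py4j source zip on sys.path by hand and try the import; if the
# archive exists at that path and contains the py4j package, this succeeds.
pipenv run python -c "import sys; sys.path.insert(0, '/opt/spark/python/lib/py4j-0.10.7-src.zip'); import py4j; print(py4j.__file__)"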

Just to confirm (in case it helps), here is where pip thinks py4j and pyspark are installed:

root@1d6a0ec725f0:/tmp# pipenv run pip show pyspark
Name: pyspark
Version: 2.2.1
Summary: Apache Spark Python API
Home-page: https://github.com/apache/spark/tree/master/python
Author: Spark Developers
Author-email: dev@spark.apache.org
License: http://www.apache.org/licenses/LICENSE-2.0
Location: /opt/spark-2.2.1-bin-hadoop2.7/python
Requires: py4j
Required-by: 
root@1d6a0ec725f0:/tmp# pipenv run pip show py4j
Name: py4j
Version: 0.10.8.1
Summary: Enables Python programs to dynamically access arbitrary Java objects
Home-page: https://www.py4j.org/
Author: Barthelemy Dagenais
Author-email: barthelemy@infobart.com
License: BSD License
Location: /root/.local/share/virtualenvs/tmp-XVr6zr33/lib/python3.7/site-packages
Requires: 
Required-by: pyspark
root@1d6a0ec725f0:/tmp#

An alternative solution is to unzip the py4j zip file as part of the Dockerfile stage that installs spark, and then set PYTHONPATH accordingly:

unzip spark/python/lib/py4j-*-src.zip -d spark/python/lib/
...
...
PYTHONPATH="/opt/spark/python:/opt/spark/python/lib:$PYTHONPATH"

Actually, this turned out to be the better solution. Here is the new Dockerfile:

FROM images.artifactory.our.org.com/python3-7-pipenv:1.0

WORKDIR /tmp

ENV SPARK_VERSION=2.2.1
ENV HADOOP_VERSION=2.8.4

ARG ARTIFACTORY_USER
ARG ARTIFACTORY_ENCRYPTED_PASSWORD
ARG ARTIFACTORY_PATH=artifactory.our.org.com/artifactory/generic-dev/ceng/external-dependencies
ARG SPARK_BINARY_PATH=https://${ARTIFACTORY_PATH}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
ARG HADOOP_BINARY_PATH=https://${ARTIFACTORY_PATH}/hadoop-${HADOOP_VERSION}.tar.gz


ADD apt-transport-https_1.4.8_amd64.deb /tmp

RUN echo "deb https://username:password@artifactory.our.org.com/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list &&\
    echo "deb https://username:password@artifactory.our.org.com/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list &&\
    echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update &&\
    echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout &&\
    echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout &&\
    dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb &&\
    apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb &&\
    apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"


RUN apt-get update && \
    apt-get -y install default-jdk

# Detect JAVA_HOME and export in bashrc.
# This will result in something like this being added to /etc/bash.bashrc
#   export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN echo export JAVA_HOME="$(readlink -f /usr/bin/java | sed "s:/jre/bin/java::")" >> /etc/bash.bashrc

# Configure Spark-${SPARK_VERSION}
# Not using tar -v because including verbose output causes ci logs to exceed max length
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${SPARK_BINARY_PATH}" -o /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && cd /opt \
    && tar -xzf /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
    && ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark \
    && unzip spark/python/lib/py4j-*-src.zip -d spark/python/lib/ \
    && sed -i '/log4j.rootCategory=INFO, console/c\log4j.rootCategory=CRITICAL, console' /opt/spark/conf/log4j.properties.template \
    && mv /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties \
    && mkdir /opt/spark-optional-jars/ \
    && mv /opt/spark/conf/spark-defaults.conf.template /opt/spark/conf/spark-defaults.conf \
    && printf "spark.driver.extraClassPath /opt/spark-optional-jars/*\nspark.executor.extraClassPath /opt/spark-optional-jars/*\n">>/opt/spark/conf/spark-defaults.conf \
    && printf "spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby" >> /opt/spark/conf/spark-defaults.conf

# Configure Hadoop-${HADOOP_VERSION}
# Not using tar -v because including verbose output causes ci logs to exceed max length
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${HADOOP_BINARY_PATH}" -o /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && cd /opt \
    && tar -xzf /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && rm /opt/hadoop-${HADOOP_VERSION}.tar.gz \
    && ln -s hadoop-${HADOOP_VERSION} hadoop

# Set Environment Variables.
ENV SPARK_HOME="/opt/spark" \
    HADOOP_HOME="/opt/hadoop" \
    PYSPARK_SUBMIT_ARGS="--master=local[*] pyspark-shell --executor-memory 1g --driver-memory 1g --conf spark.ui.enabled=false spark.executor.extrajavaoptions=-Xmx=1024m" \
    PYTHONPATH="/opt/spark/python:/opt/spark/python/lib:$PYTHONPATH" \
    PATH="$PATH:/opt/spark/bin:/opt/hadoop/bin" \
    PYSPARK_DRIVER_PYTHON="/usr/local/bin/python" \
    PYSPARK_PYTHON="/usr/local/bin/python"

# Upgrade pip and setuptools
RUN pip install --index-url https://username:password@artifactory.our.org.com/artifactory/api/pypi/pypi-virtual-all/simple --upgrade pip setuptools

So, apparently, I can't just put a zip file on the PYTHONPATH and have the contents of that zip file available to the python interpreter. As I said above, I copied this code from elsewhere, so why it works for other people and not for me... I don't know. Oh well, it all works now.

Here is a handy one-liner for checking that it all works:

docker run --rm -it myimage:mytag pipenv run python -c "from pyspark.sql import SparkSession;spark = SparkSession.builder.master('local').enableHiveSupport().getOrCreate();print(spark.createDataFrame([('Alice',)], ['name']).collect())"

And here is the output of running that command:

$ docker run --rm -it myimage:mytag pipenv run python -c "from pyspark.sql import SparkSession;spark = SparkSession.builder.master('local').enableHiveSupport().getOrCreate();print(spark.createDataFrame([('Alice',)], ['name']).collect())"
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/10/16 15:53:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/10/16 15:53:55 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
19/10/16 15:53:55 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
19/10/16 15:53:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
[Row(name='Alice')]
2 votes
The original page content is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/58411154
