Is there a way in PySpark to read a .tsv.gz file directly from a URL?
from pyspark.sql import SparkSession
def create_spark_session():
    return SparkSession.builder.appName("wikipediaClickstream").getOrCreate()
spark = create_spark_session()
url = "https://dumps.wikimedia.org/other/clickstream/2017-11/clickstream-jawiki-2017-11.tsv.gz"
# df = spark.read.csv(url, sep="\t") # doesn't work
df = spark.read.option("sep", "\t").csv(url) # doesn't work either
df.show(10)

This produces the following error:
Py4JJavaError: An error occurred while calling o65.csv.
: java.lang.UnsupportedOperationException
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
/var/folders/sn/4dk4tbz9735crf4npgcnlt8r0000gn/T/ipykernel_1443/4137722240.py in <module>
1 url = "https://dumps.wikimedia.org/other/clickstream/2017-11/clickstream-jawiki-2017-11.tsv.gz"
2 # df = spark.read.csv(url, sep="\t")
----> 3 df = spark.read.option("sep", "\t").csv(url)
      4 df.show(10)

spark.version is 3.1.2.
Posted on 2021-09-26 12:27:57
Before reading the file, you can download it to every node with SparkContext.addFile, like this:
from pyspark import SparkFiles
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("test").getOrCreate()
url = "https://dumps.wikimedia.org/other/clickstream/2017-11/clickstream-jawiki-2017-11.tsv.gz"
spark.sparkContext.addFile(url)
df = spark.read.option("sep", "\t").csv("file://" + SparkFiles.get("clickstream-jawiki-2017-11.tsv.gz"))
df.show(10)

Posted on 2021-09-25 23:57:32
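One detail of the addFile approach above is that SparkFiles.get must be called with the file's basename, which is easy to mistype for a long URL. A small helper can derive it from the URL instead (this helper and its name are my own sketch, not part of the Spark API):

```python
import os
from urllib.parse import urlparse

def spark_files_name(url):
    """Basename under which SparkContext.addFile registers a downloaded URL.

    Assumption: addFile stores the file under the last path segment of
    the URL, which is what SparkFiles.get expects.
    """
    return os.path.basename(urlparse(url).path)

# Usage with the answer's code:
# spark.sparkContext.addFile(url)
# df = spark.read.option("sep", "\t").csv("file://" + SparkFiles.get(spark_files_name(url)))
```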
Your problem is most likely that .csv() does not expect a URL. At the very least, you would first need to download the file and decompress it (.gz is a compressed-file extension). It looks like you already know how to handle tab-separated files, as the .tsv extension suggests.
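The "download and decompress" step this answer describes can be sketched with the standard library alone (urllib to fetch, gzip to decompress); the /tmp paths below are made-up examples:

```python
import gzip
import shutil
import urllib.request

def download(url, dest):
    """Fetch a remote file and write it to a local path."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)

def gunzip(src, dest):
    """Decompress a .gz file into a plain file."""
    with gzip.open(src, "rb") as fin, open(dest, "wb") as fout:
        shutil.copyfileobj(fin, fout)

# download(url, "/tmp/clickstream.tsv.gz")
# gunzip("/tmp/clickstream.tsv.gz", "/tmp/clickstream.tsv")
# df = spark.read.option("sep", "\t").csv("file:///tmp/clickstream.tsv")
```

Note that the decompression step is optional once the file is local: Spark's CSV reader can read gzip-compressed files directly from a supported filesystem path.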
Posted on 2021-09-26 01:57:18
You need to download the file to a local location (or, if you are running on a cluster, to a shared location such as HDFS) and then have Spark read it from there:
import wget
url = "https://dumps.wikimedia.org/other/clickstream/2017-11/clickstream-jawiki-2017-11.tsv.gz"
local_path = '/tmp/wikipediadata/clickstream-jawiki-2017-11.tsv.gz'
wget.download(url, local_path)
df = spark.read.option("sep", "\t").csv('file://'+local_path)
df.show(10)

Source: https://stackoverflow.com/questions/69330177
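For a file of this size, another option worth mentioning (not from the answers above) is to let pandas do the fetch: pandas.read_csv accepts a URL and infers gzip compression from the extension, and the resulting frame can then be handed to Spark with spark.createDataFrame. The function name below is my own:

```python
import pandas as pd

def read_tsv_gz(path_or_url):
    """Read a tab-separated, gzip-compressed file from a URL or local path
    into a pandas DataFrame (no header row in the clickstream dumps)."""
    return pd.read_csv(path_or_url, sep="\t", compression="gzip", header=None)

# pdf = read_tsv_gz("https://dumps.wikimedia.org/other/clickstream/2017-11/clickstream-jawiki-2017-11.tsv.gz")
# df = spark.createDataFrame(pdf)  # requires an active SparkSession
# df.show(10)
```

This pulls the whole file onto the driver, so it is only suitable for datasets that fit in driver memory.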