Apache Spark supposedly supports Facebook's Zstandard compression algorithm as of Spark 2.3.0 (https://issues.apache.org/jira/browse/SPARK-19112), but I am unable to actually read a Zstandard-compressed file:
$ spark-shell
...
// Short name throws an exception
scala> val events = spark.read.option("compression", "zstd").json("data.zst")
java.lang.IllegalArgumentException: Codec [zstd] is not available. Known codecs are bzip2, deflate, uncompressed, lz4, gzip, snappy, none.
// Codec class can be imported
scala> import org.apache.spark.io.ZStdCompressionCodec
import org.apache.spark.io.ZStdCompressionCodec
// Fully-qualified codec class bypasses the error, but results in corrupt records
scala> spark.read.option("compression", "org.apache.spark.io.ZStdCompressionCodec").json("data.zst")
res4: org.apache.spark.sql.DataFrame = [_corrupt_record: string]

What do I need to do to read a file like this?
The environment is AWS EMR 5.14.0.
Posted on 2018-06-15 18:02:30
Per that statement, Zstandard support in Spark 2.3.0 covers only internal use and shuffle outputs.
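That limited support is exposed through Spark's own spark.io.compression.codec setting, not through the file readers. A minimal sketch of what does work in Spark 2.3.0, namely compressing internal data such as shuffle blocks with zstd:

// Spark 2.3.0 accepts zstd for internal/shuffle compression only;
// this does not enable reading .zst input files:
$ spark-shell --conf spark.io.compression.codec=zstd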
Reading or writing Zstandard files relies on Hadoop's org.apache.hadoop.io.compress.ZStandardCodec, which was introduced in Hadoop 2.9.0 (EMR 5.14.0 ships Hadoop 2.8.3).
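As a sketch of what should work once Hadoop 2.9.0 or later is available (for example on a newer EMR release), assuming the Hadoop native libraries were built with zstd support, which ZStandardCodec requires. The explicit io.compression.codecs registration is a precaution; the codec may also be discovered automatically:

// Check which Hadoop version Spark is linked against:
scala> org.apache.hadoop.util.VersionInfo.getVersion

// Register Hadoop's zstd codec explicitly in the Hadoop configuration:
scala> spark.sparkContext.hadoopConfiguration.set(
     |   "io.compression.codecs",
     |   "org.apache.hadoop.io.compress.ZStandardCodec")

// With the codec available, Hadoop resolves it from the .zst file
// extension, so no "compression" option is needed on read:
scala> val events = spark.read.json("data.zst")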
https://stackoverflow.com/questions/50868307