I am working with the publicly available MovieLens CSV dataset, and I have created a partitioned dataset for ratings.csv:
kite-dataset create ratings --schema rating.avsc --partition-by year-month.json --format parquet

Here is my year-month.json:
[ {
"name" : "year",
"source" : "timestamp",
"type" : "year"
}, {
"name" : "month",
"source" : "timestamp",
"type" : "month"
} ]

Here is my CSV import command:
kite-dataset csv-import ratings.csv ratings

After the import finished, I ran this command to check whether the year and month partitions had actually been created:
hadoop fs -ls /user/hive/warehouse/ratings/

What I noticed is that only a single year partition was created, and within it only a single month partition:
[cloudera@quickstart ml-20m]$ hadoop fs -ls /user/hive/warehouse/ratings/
Found 3 items
drwxr-xr-x - cloudera supergroup 0 2016-06-12 18:49 /user/hive/warehouse/ratings/.metadata
drwxr-xr-x - cloudera supergroup 0 2016-06-12 18:59 /user/hive/warehouse/ratings/.signals
drwxrwxrwx - cloudera supergroup 0 2016-06-12 18:59 /user/hive/warehouse/ratings/year=1970
[cloudera@quickstart ml-20m]$ hadoop fs -ls /user/hive/warehouse/ratings/year=1970/
Found 1 items
drwxrwxrwx - cloudera supergroup 0 2016-06-12 18:59 /user/hive/warehouse/ratings/year=1970/month=01

What is the correct way to perform this partitioned import so that partitions for all years and all months get created?
Posted on 2016-10-27 11:50:39
Append three zeros to the end of each timestamp.
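A note on why this works (my understanding, not stated in the original answer): Kite's year and month partitioners read the source field as milliseconds since the Unix epoch, while MovieLens timestamps are in seconds, so every value collapses into January 1970. A quick check with GNU date on an illustrative timestamp shows the effect:

```shell
# A MovieLens-style timestamp, in seconds since the Unix epoch
ts=1112486027
# Interpreted as seconds (correct):
date -u -d "@${ts}" +%Y-%m
# -> 2005-04
# Interpreted as milliseconds, as the partitioner does
# (i.e. the effective seconds value is ts/1000):
date -u -d "@$((ts / 1000))" +%Y-%m
# -> 1970-01
```

Appending three zeros multiplies each value by 1000, turning seconds into the milliseconds the partitioner expects.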
You can do that with the shell script below:
#!/bin/bash
# Copy the CSV header into both output files
head -n 1 ratings.csv > ratings_1.csv
head -n 1 ratings.csv > ratings_2.csv

# Write the first 10,000,000 data rows to ratings_1.csv;
# head includes the header, and tail -n +2 strips it again.
# awk appends three zeros to the timestamp (4th column).
head -n 10000001 ratings.csv | tail -n +2 \
  | awk -F',' 'BEGIN{OFS=","}{$4=$4"000"; print}' >> ratings_1.csv

# Write the rest of the file to ratings_2.csv,
# starting at the line after ratings_1.csv stopped
tail -n +10000002 ratings.csv \
  | awk -F',' 'BEGIN{OFS=","}{$4=$4"000"; print}' >> ratings_2.csv

I had this problem too, and appending the three zeros fixed it.
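To sanity-check the transform on a single row before running it over the full 20-million-line file (the sample row is illustrative; the MovieLens ratings.csv columns are userId,movieId,rating,timestamp):

```shell
# Append "000" to the 4th column (the timestamp) of one sample row
echo "1,122,3.5,1112486027" \
  | awk -F',' 'BEGIN{OFS=","}{$4=$4"000"; print}'
# -> 1,122,3.5,1112486027000
```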
https://stackoverflow.com/questions/37778161