I am working in Cloudera and have just started learning. I have been trying to implement the well-known Twitter example with Flume. After some effort I am now able to stream data from Twitter, and it is being saved to a file. Now that I have this data, I want to analyze it. The problem is that I cannot get the Twitter data into a table. I have successfully created the `tweets` table, but I cannot load any data into it. Below I give the Twitter.conf file, the external table creation query, the data load query, the error message, and a sample of the data. Please point out where I went wrong. Note that I have been writing the queries in the Hive editor.
Twitter.conf file
# Naming the components on the current agent.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
# Describing/Configuring the source
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = 95y0IPClnNPUTJ1AHSfvBLWes
TwitterAgent.sources.Twitter.consumerSecret = UmlNcFwiBIQIvuHF9J3M3xUv6UmJlQI3RZWT8ybF2KaKcDcAw5
TwitterAgent.sources.Twitter.accessToken = 994845066882699264-Yk0DNFQ4VJec9AaCQ7QTBlHldK5BSK1
TwitterAgent.sources.Twitter.accessTokenSecret = q1Am5G3QW4Ic7VBx6qJg0Iv7QXfk0rlDSrJi1qDjmY3mW
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientiest, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
# Describing/Configuring the channel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
# Binding the source and sink to the channel
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel
# Describing/Configuring the sink
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = /user/cloudera/latestdata/
TwitterAgent.sinks.flumeHDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
External table creation query and data load query
CREATE External TABLE tweets (
id BIGINT,
created_at STRING,
source STRING,
favorited BOOLEAN,
retweet_count INT,
retweeted_status STRUCT<
text:STRING,
user:STRUCT<screen_name:STRING,name:STRING>>,
entities STRUCT<
urls:ARRAY<STRUCT<expanded_url:STRING>>,
user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
hashtags:ARRAY<STRUCT<text:STRING>>>,
text STRING,
user STRUCT<
screen_name:STRING,
name:STRING,
friends_count:INT,
followers_count:INT,
statuses_count:INT,
verified:BOOLEAN,
utc_offset:INT,
time_zone:STRING>,
in_reply_to_screen_name STRING
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/user/cloudera/tweets';
LOAD DATA INPATH '/user/cloudera/latestdata/FlumeData.1540555155464'
INTO TABLE `default.tweets`
PARTITION (datehour='2013022516');
Error when I try to load the data into the table:
Error while processing statement: FAILED: Execution Error, return code 20013 from org.apache.hadoop.hive.ql.exec.MoveTask. Wrong file format. Please check the file's format.
The Twitter data file I got:
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.Text������R�LX�}H�f�>(�H�Objavro.schema�{"type":"record","name":"Doc","doc":"adoc","fields":[{"name":"id","type":"string"},{"name":"user_friends_count","type":["int","null"]},{"name":"user_location","type":["string","null"]},{"name":"user_description","type":["string","null"]},{"name":"user_statuses_count","type":["int","null"]},{"name":"user_followers_count","type":["int","null"]},{"name":"user_name","type":["string","null"]},{"name":"user_screen_name","type":["string","null"]},{"name":"created_at","type":["string","null"]},{"name":"text","type":["string","null"]},{"name":"retweet_count","type":["long","null"]},{"name":"retweeted","type":["boolean","null"]},{"name":"in_reply_to_user_id","type":["long","null"]},{"name":"source","type":["string","null"]},{"name":"in_reply_to_status_id","type":["long","null"]},{"name":"media_url_https","type":["string","null"]},{"name":"expanded_url",
It has been a week now and I cannot find a solution. If more information is needed, please tell me and I will provide it here.
Posted on 2018-10-28 17:19:41
Flume is not writing JSON, so the JsonSerDe is not what you want.
You need to adjust these lines:
TwitterAgent.sinks.flumeHDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
Flume is currently writing a SequenceFile containing Avro:
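One likely fix (a sketch, assuming the goal is plain-text output from this same agent): the `fileType` property in the question's config is set on a sink named `flumeHDFS`, but the only sink declared is named `HDFS`, so that property is silently ignored and the HDFS sink falls back to its default SequenceFile output. Aligning the sink name should make the `DataStream` setting take effect:

```properties
# Sketch of a corrected sink block -- the sink name now matches
# "TwitterAgent.sinks = HDFS" everywhere, so fileType is no longer ignored.
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = /user/cloudera/latestdata/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
```

After restarting the agent, newly rolled files (not the already-written `FlumeData.*` SequenceFiles) would be in the stream format.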
SEQ!org.apache.hadoop.io.LongWritableorg.apache.hadoop.io.Text� �����R�LX� }H�f�>(�H�Objavro.schema�
Hive can read Avro as-is, so it is not clear why you would use the JsonSerDe.
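As a rough illustration of that route, an Avro-backed external table could look like the sketch below. The table name, location, and the field list are assumptions for illustration; the schema would have to match the Avro schema ("Doc") that the Flume TwitterSource embeds in its output, and this only works once the sink writes plain Avro container files (DataStream) rather than SequenceFiles:

```sql
-- Sketch only: a subset of the "Doc" fields visible in the question's file dump.
CREATE EXTERNAL TABLE tweets_avro
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/cloudera/latestdata'
TBLPROPERTIES ('avro.schema.literal' = '{
  "type": "record", "name": "Doc", "doc": "adoc",
  "fields": [
    {"name": "id",               "type": "string"},
    {"name": "created_at",       "type": ["string", "null"]},
    {"name": "user_screen_name", "type": ["string", "null"]},
    {"name": "text",             "type": ["string", "null"]},
    {"name": "retweet_count",    "type": ["long", "null"]}
  ]
}');
```

With the schema supplied in `avro.schema.literal`, no LOAD DATA step is needed: the table reads whatever Avro files land in the location directory.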
https://stackoverflow.com/questions/53028772