(Edit: lightly edited to better reflect the intent, though edited substantially given the progress that was made.)
The topic "t_raw" is fed messages of several types, all of which contain a common "type" key:
{"type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}
{"type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}

Ultimately I need to break this out into other streams, where the messages will be chopped up / aggregated / processed. I'd like to be able to use a STRUCT for all of it, but my current effort does this:
create stream raw (type varchar, data varchar) \
with (kafka_topic='t_raw', value_format='JSON');

Then, for the first level:
create stream key1 with (TIMESTAMP='ts', timestamp_format='yyyy-MM-dd HH:mm:ss.S') as \
select \
extractjsonfield(data, '$.ts') as ts, \
extractjsonfield(data, '$.a') as a, extractjsonfield(data, '$.b') as b \
from raw where type='key1';
create stream key2 with (TIMESTAMP='ts', timestamp_format='yyyy-MM-dd HH:mm:ss.S') as \
select \
extractjsonfield(data, '$.ts') as ts, \
extractjsonfield(data, '$.a') as a, extractjsonfield(data, '$.c') as c, \
extractjsonfield(data, '$.d') as d \
from raw where type='key2';

This seems to work, but now that STRUCT has recently been added, is there a way to use it in place of the extractjsonfield calls above?
ksql> select * from key1;
1542741621100 | null | 2018-11-20 19:20:21.1 | 1 | hello
1542741623300 | null | 2018-11-20 19:20:23.3 | 2 | hello2
^CQuery terminated
ksql> select * from key2;
1542741622200 | null | 2018-11-20 19:20:22.2 | 1 | 11 | goodbye
1542741624400 | null | 2018-11-20 19:20:24.4 | 3 | 22 | goodbye2

Without STRUCT, is there a straightforward way to do this with vanilla Kafka Streams (hence the apache-kafka-streams tag alongside ksql)?
Is there a more Kafka-ish / efficient / elegant way to parse this? I can't define it as an empty STRUCT<>:
ksql> CREATE STREAM some_input ( type VARCHAR, data struct<> ) \
WITH (KAFKA_TOPIC='t1', VALUE_FORMAT='JSON');
line 1:52: extraneous input '<>' expecting {',', ')'}

Is there some way to do something like the following, which is what some (not recent) discussions pointed toward?
CREATE STREAM key1 ( a INT, b VARCHAR ) AS \
SELECT data->* from some_input where type = 'key1';

FYI: the solution above does not work in Confluent 5.0.0; a recent patch fixed the extractjsonfield bug and makes it possible.
The real data has several more, similar message types. They all contain "type" and "data" keys (plus other keys at the top level), and almost all of them have a "ts" timestamp equivalent nested inside "data".
Posted 2018-11-23 09:54:49
Yes, you can do this. KSQL doesn't mind columns not existing; you simply get a null value.
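To illustrate that point (a sketch, not from the original answer, assuming the stream T created in the walkthrough below): selecting a field that a given message type never carries still runs, and simply returns null for those rows.

```sql
-- key1 messages carry no "c" field, yet this query runs fine;
-- DATA->C comes back as null for every key1 row.
SELECT TYPE, DATA->C FROM T WHERE TYPE='key1' LIMIT 2;
```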
Test data setup

Populate some test data into the topic:
kafkacat -b kafka:29092 -t t_raw -P <<EOF
{"type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}
{"type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}
EOF

Dump the topic to the KSQL console to inspect it:
ksql> PRINT 't_raw' FROM BEGINNING;
Format:JSON
{"ROWTIME":1542965737436,"ROWKEY":"null","type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}
{"ROWTIME":1542965737436,"ROWKEY":"null","type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}
{"ROWTIME":1542965737436,"ROWKEY":"null","type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}
{"ROWTIME":1542965737437,"ROWKEY":"null","type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}
^CTopic printing ceased
ksql>

Model the source stream

Create a stream over the topic. Note the use of STRUCT, and that every possible column is declared:
CREATE STREAM T (TYPE VARCHAR, \
DATA STRUCT< \
TS VARCHAR, \
A INT, \
B VARCHAR, \
C INT, \
D VARCHAR>) \
WITH (KAFKA_TOPIC='t_raw',\
VALUE_FORMAT='JSON');

Set the offset to earliest so that the whole topic is queried, and then use KSQL to access the full stream:
ksql> SET 'auto.offset.reset' = 'earliest';
Successfully changed local property 'auto.offset.reset' from 'null' to 'earliest'
ksql>
ksql> SELECT * FROM T;
1542965737436 | null | key1 | {TS=2018-11-20 19:20:21.1, A=1, B=hello, C=null, D=null}
1542965737436 | null | key2 | {TS=2018-11-20 19:20:22.2, A=1, B=null, C=11, D=goodbye}
1542965737436 | null | key1 | {TS=2018-11-20 19:20:23.3, A=2, B=hello2, C=null, D=null}
1542965737437 | null | key2 | {TS=2018-11-20 19:20:24.4, A=3, B=null, C=22, D=goodbye2}
^CQuery terminated

Query the types individually, using the -> operator to access the nested elements:
ksql> SELECT DATA->A,DATA->B FROM T WHERE TYPE='key1' LIMIT 2;
1 | hello
2 | hello2
ksql> SELECT DATA->A,DATA->C,DATA->D FROM T WHERE TYPE='key2' LIMIT 2;
1 | 11 | goodbye
3 | 22 | goodbye2

Persist the data to separate Kafka topics

Populate the target topics with the separated data:
ksql> CREATE STREAM TYPE_1 AS SELECT DATA->TS, DATA->A, DATA->B FROM T WHERE TYPE='key1';
Message
----------------------------
Stream created and running
----------------------------
ksql> CREATE STREAM TYPE_2 AS SELECT DATA->TS, DATA->A, DATA->C, DATA->D FROM T WHERE TYPE='key2';
Message
----------------------------
Stream created and running
----------------------------

Schema of the new streams:
ksql> DESCRIBE TYPE_1;
Name : TYPE_1
Field | Type
--------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
DATA__TS | VARCHAR(STRING)
DATA__A | INTEGER
DATA__B | VARCHAR(STRING)
--------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;
ksql> DESCRIBE TYPE_2;
Name : TYPE_2
Field | Type
--------------------------------------
ROWTIME | BIGINT (system)
ROWKEY | VARCHAR(STRING) (system)
DATA__TS | VARCHAR(STRING)
DATA__A | INTEGER
DATA__C | INTEGER
DATA__D | VARCHAR(STRING)
--------------------------------------

Topic backing each KSQL stream:
ksql> LIST TOPICS;
Kafka Topic | Registered | Partitions | Partition Replicas | Consumers | ConsumerGroups
---------------------------------------------------------------------------------------------------------
t_raw | true | 1 | 1 | 2 | 2
TYPE_1 | true | 4 | 1 | 0 | 0
TYPE_2 | true | 4 | 1 | 0 | 0
---------------------------------------------------------------------------------------------------------

https://stackoverflow.com/questions/53438413
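One follow-up worth noting (a sketch, not part of the original answer): the generated column names shown by DESCRIBE (DATA__TS and so on) can be replaced by aliasing each field in the SELECT, and the TIMESTAMP / TIMESTAMP_FORMAT properties from the question's original statements can be reapplied so that the nested ts drives the message timestamp. The stream name TYPE_1_TS is hypothetical:

```sql
-- Variation on TYPE_1: alias each nested field, and use the nested ts
-- as the message timestamp (format carried over from the question).
CREATE STREAM TYPE_1_TS
  WITH (TIMESTAMP='TS', TIMESTAMP_FORMAT='yyyy-MM-dd HH:mm:ss.S') AS
  SELECT DATA->TS AS TS,
         DATA->A  AS A,
         DATA->B  AS B
  FROM T
  WHERE TYPE='key1';
```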